IGF2024
Open Forum #33 Open Consultation Process Meeting for WSIS Forum 2025
Session at a Glance
Summary
This discussion focused on preparations for the WSIS Plus 20 Forum High-Level Event in 2025, which will review 20 years of progress since the World Summit on the Information Society. Participants emphasized the importance of maintaining the multi-stakeholder approach that has been central to WSIS, while also adapting to new technological developments and challenges. Many speakers stressed the need to avoid duplication of efforts, particularly in light of the recently adopted Global Digital Compact (GDC).
There was broad agreement on the continued relevance of the WSIS action lines, though some suggested updates may be needed to address emerging issues. Several participants highlighted the importance of inclusivity, calling for greater involvement of voices from the Global South, rural communities, and youth. The need to address persistent and new forms of the digital divide was a recurring theme.
Speakers from various UN agencies, governments, civil society, and the private sector shared their perspectives on priorities for WSIS beyond 2025. These included focusing on sustainable and ethical technology use, enhancing digital skills, and leveraging ICTs for development. There were calls to better integrate the implementation of the GDC with existing WSIS mechanisms.
The discussion also touched on procedural aspects, such as the open consultation process for shaping the agenda of the 2025 event. Overall, participants expressed a strong commitment to building on WSIS achievements while adapting the process to address current and future digital challenges in an inclusive, rights-based manner.
Key points
Major discussion points:
– The importance of maintaining the multi-stakeholder and inclusive nature of the WSIS process as it moves beyond 2025
– The need to avoid duplication between WSIS and other initiatives like the Global Digital Compact, and instead build on existing frameworks
– Updating the WSIS action lines to address new technologies and challenges while maintaining their technology-neutral approach
– Increasing focus on issues like digital divides, rural connectivity, gender gaps, and youth engagement
– Streamlining implementation efforts and maximizing limited resources
The overall purpose of the discussion was to gather input from various stakeholders on priorities and considerations for the WSIS+20 review process and high-level event in 2025, as well as the future of WSIS beyond 2025.
The tone of the discussion was largely collaborative and forward-looking. Participants expressed enthusiasm for the WSIS process while also highlighting areas for improvement and evolution. There was a sense of urgency about making the most of limited time and resources to address critical digital issues. The tone became more action-oriented towards the end, with specific suggestions for new initiatives and calls to submit formal input.
Speakers
– Gitanjali Sah – Organizer of WSIS Forum
– Thomas Schneider – Ambassador of Switzerland
– Torbjörn Fredriksson – UNCTAD representative
– Cynthia Lesufi – South Africa representative, Council Working Group on WSIS and SDGs
– Anriette Esterhuysen – Association for Progressive Communications
– Meni Anastasiadou – International Chamber of Commerce
– Mina Seonmin Jun – South Korea representative, Council Working Group on WSIS and SDGs
– Felix Nyström – Swedish Ministry of Foreign Affairs
– Craig Stanley-Adamson – Head of Internet Governance, UK Department for Science, Innovation and Technology
– Wallace S. Cheng – Global Ethics representative
– Halima Ismaeel – Secretary General’s Youth Advisory Board member
Additional speakers:
– Cedric – UNESCO representative
– Yuping – UNDP representative
– Delfina – United Nations University representative
– Moaz – Saudi Arabia representative
– Mike Walton – UNHCR representative
– Rian – Brazilian Association of Internet Service Providers
– Paola Galvez – Youth representative from Peru
– Fawad Bajwa – Digital Dera representative from Pakistan
Full session report
Expanded Summary of WSIS Plus 20 Forum High-Level Event Discussion
Introduction
This discussion focused on preparations for the WSIS Plus 20 Forum High-Level Event, scheduled for July 7-11, 2025, at Palexpo in Geneva. The event will review 20 years of progress since the World Summit on the Information Society. The dialogue brought together representatives from various UN agencies, governments, civil society organisations, and the private sector to share perspectives on priorities for WSIS beyond 2025 and shape the agenda for the upcoming event.
Key Themes and Agreements
1. Multi-stakeholder Collaboration and Inclusivity
There was broad consensus on the importance of maintaining and enhancing the multi-stakeholder approach that has been central to WSIS. Speakers such as Thomas Schneider, Ambassador of Switzerland, emphasised that multi-stakeholder collaboration is crucial for WSIS implementation. This sentiment was echoed by other participants, including Anriette Esterhuysen from the Association for Progressive Communications, who highlighted the importance of intergenerational and inter-institutional collaboration.
The discussion underscored the need to raise inclusiveness to a new level. Wallace S. Cheng from Global Ethics advocated for supporting local innovation, particularly in the Global South. Halima Ismaeel, a member of the Secretary General’s Youth Advisory Board, stressed the importance of engaging youth and focusing on emerging technologies. Other participants called for including the voices of refugees, displaced persons, and small and medium enterprises in the WSIS process. Fawad Bajwa suggested highlighting rural communities’ struggles to connect to the internet at the WSIS Forum.
2. WSIS Action Lines and Their Continued Relevance
Several speakers emphasized the ongoing importance of the WSIS action lines. While there was agreement on their continued relevance, discussions touched on potential updates to address new technologies and challenges. Gitanjali Sah noted that UN agencies are updating WSIS action lines based on their expertise. Some participants, like Renata, argued for the need to update the action lines, while others cautioned against duplicating efforts with other initiatives.
3. Global Digital Compact (GDC) and WSIS
The relationship between the recently adopted Global Digital Compact (GDC) and the WSIS process was a significant topic of discussion. Moaz from Saudi Arabia emphasised the importance of avoiding duplication with GDC implementation. Other speakers suggested using the WSIS framework to implement the GDC, with Torbjörn Fredriksson from UNCTAD advocating for leveraging existing UN mechanisms for this purpose.
4. Digital Rights and Equality
Several speakers emphasised the need to focus on digital justice and human rights in the digital context. Anriette Esterhuysen highlighted the importance of addressing the gender digital gap, a point that resonated with other participants. Paola Galvez, a youth representative from Peru, proposed creating a specific action line on addressing the gender digital gap. Felix Nyström stressed the importance of human rights mainstreaming in new initiatives.
5. Implementation and Resource Allocation
The discussion touched on practical aspects of WSIS implementation, with several speakers emphasising the need to streamline efforts and maximise limited resources. There were calls to focus on financing mechanisms for digital development and to set concrete targets for financing digital infrastructure in the developing world. Mr. Hossain highlighted the role of the Islamic Development Bank in supporting ICT development.
6. Role of UN Agencies and International Cooperation
Torbjörn Fredriksson emphasised the importance of UN agency collaboration in WSIS implementation and highlighted the role of the Commission on Science and Technology for Development (CSTD) in the WSIS follow-up. Mina Seonmin Jun, the South Korea representative, highlighted the role of regional cooperation in advancing WSIS outcomes.
Open Consultation Process and Future Directions
Gitanjali Sah, the organiser of the WSIS Forum, explained the open consultation process for shaping the WSIS Forum agenda. She encouraged participants to submit proposals and suggestions for the WSIS+20 Forum agenda through this process by March 14. Sah also mentioned plans for a hackathon and the need for incubators for young innovators.
Thought-Provoking Comments
Several comments stood out for their potential to shape future discussions:
1. Thomas Schneider’s call to concentrate resources and build on the WSIS process in an inclusive and efficient manner.
2. Nandini from IT for Change's emphasis on governing the internet as a global public good.
3. Anriette Esterhuysen’s reminder of the unique collaborative potential of WSIS across diverse stakeholders and geographies, and her suggestion to include the Tunis Agenda outcomes, particularly financing mechanisms, in the WSIS Forum structure.
4. Paola Galvez’s specific proposal for an action line on the gender gap.
Conclusion and Next Steps
The discussion highlighted the ongoing relevance of the WSIS process while also emphasising the need for evolution and adaptation. Key takeaways include the importance of maintaining multi-stakeholder collaboration, avoiding duplication with other initiatives like the GDC, enhancing inclusivity, and updating WSIS action lines to address new challenges.
Moving forward, participants suggested several action items:
1. Organising consultative sessions and brainstorming workshops at the WSIS Forum to gather stakeholder input on future directions.
2. Submitting proposals and suggestions for the WSIS+20 Forum agenda through the open consultation process by March 14.
3. Considering the inclusion of a specific focus on the gender digital gap in WSIS action lines or declarations.
4. Exploring ways to better include rural community perspectives in the WSIS process.
5. Integrating discussions on emerging technologies and their impact on digital development.
The discussion set the stage for continued dialogue and negotiation in the WSIS+20 review process, with a clear emphasis on collaboration, efficiency, and inclusivity in shaping the future of global digital cooperation and development. The upcoming WSIS Forum, which will also celebrate ITU’s 160th anniversary, promises to be a pivotal event in advancing these goals.
Session Transcript
Gitanjali Sah: and also to all the virtual participants who have joined us today. We do know that many of our colleagues who couldn’t be here are with us virtually. So this is an open forum. If you notice on the website, we’ve listed quite a few speakers, but those are all of those who wish to make interventions. We already listed them as speakers online. So please do feel free to make your intervention once the co-organizers and co-hosts have spoken. So the idea of the meeting is to provide you with an update of our preparatory process for the WSIS Plus 20 Forum High-Level Event. It’s co-hosted with Switzerland, so we are very happy to have Ambassador Schneider with us, and it’s co-organized by ITU, UNESCO, UNDP, and UNCTAD in collaboration with more than 40 UN agencies who are working closely with us, and I can see many of you in the room. So thank you for being here with us. Before I begin to provide updates, I’d like to invite Ambassador Schneider, our co-host, to please welcome us and to give us some more information about what Switzerland is preparing for the event. Over to you, Ambassador.
Thomas Schneider: See how this works, yes. So good morning, everyone. As we all know, it is just a little over 20 years ago, at the first summit of the Information Society in December 2003, that not just the representatives of all nations, but also lots of other stakeholders gathered together in Geneva and agreed on a shared vision for an inclusive, people-centered, and development-oriented information and digital society. In addition, we formulated 11 action lines and some other concrete targets that should help us to work toward this vision. And we have created various processes for dialogue, cooperation and partnership among all stakeholders in their respective roles. And this has allowed us to learn from each other. Today, however, we are not at the end of our work and cooperation, but still at the beginning, because new tools like digital platforms, social media, and new technologies, including AI and the use of data, have emerged and bring new promises, but also new pitfalls. So this means for us a call for even greater and more inclusive cooperation, and for improving and strengthening existing mechanisms. In 2024, we are at an important point in time in setting the course for the near future. We had the NETmundial+10 meeting, which adopted the São Paulo guidelines, and the WSIS plus 20 forum high-level event this May in Geneva, which took stock of the implementation of the WSIS action lines. But then we also have, as we all know now, the agreement on the Global Digital Compact, which should outline shared principles for an open, free and secure digital future for all. But this is again not the end. It is rather the beginning of a process that should lead us to some bigger decisions, even bigger decisions at the end of next year, because next year we'll have the WSIS plus 20 overall review by the UN General Assembly at the end of the year. But before that, we'll have a number of events and processes that run up to this. 
There's an important meeting of the CSTD, the UN Commission on Science and Technology for Development, which has a key role as focal point for the UN system-wide follow-up to the WSIS. And of course, we do have important events facilitated by the ITU, UNESCO and other specialized agencies, together with other partners, which should all contribute to an inclusive WSIS plus 20 review process where voices from all regions of the world are heard. We as Swiss have supported WSIS since 2003 and will continue to support all actors and processes that cooperate constructively to develop and implement the WSIS goals. We are looking forward to this year's WSIS plus 20 forum high-level event, taking place from 7 to 11 July in Geneva, again, like last year, in parallel with the AI for Good summit. Both events will be co-hosted by Switzerland and the ITU and its partners. Because there were so many people wanting to participate at these events, this year we had a queue that went from the conference center to the Place des Nations, which is something I have never seen before in Geneva. The whole thing will be moved to Palexpo, a venue 5 or 10 minutes away by bus, or 5 minutes by train, near the airport, where we have much more space to accommodate all the people wanting to participate physically, and of course it is also possible to participate online. I think the most important criterion for all the processes and events leading up to the WSIS plus 20 review is, for us, really inclusivity. We do not want to hear just a few voices from a few powerful stakeholders. For us it is important to give room so that as many stakeholders as possible can express their views and their needs, in particular the ones that may not have the resources to follow everything. We really have to be mindful to support those stakeholders that are less resourced, that are struggling to keep up with everything that's going on. 
And this again is why we strongly support the São Paulo guidelines, to support stakeholders that are normally not sitting at a table, that are normally not heard. For us, this is the measurement for all the processes that we are seeing: that it's not just something that we call a multi-stakeholder approach, but something that is actually a true, inclusive and meaningful multi-stakeholder approach, involving everybody in their respective roles in these processes. Because we believe this is the only way to find and identify solutions that actually create win-win situations for us all, and not just for a few. Thank you very much.
Gitanjali Sah: Thank you very much, Ambassador Schneider, and also for highlighting the real multi-stakeholder philosophy of the WSIS process. It's really in the DNA of the WSIS process, and we must continue to strengthen the WSIS Forum as a platform for this kind of engagement. And for those of you who haven't read the chair's summary from the high-level event this year, I'd like to encourage you to read it. It's a really beautifully written document, which also talks about WSIS+20. So please do have a look at it and do refer to it in your considerations of WSIS beyond 2025. I'd now like to invite Cedric from UNESCO to give us a little bit of UNESCO's perspective on the whole review process and some of the important timelines that we must consider.
Cedric (UNESCO): Thank you so much. Yes, now you can hear me. So let me start by thanking the co-hosts, Switzerland, but also ITU, for preparing the last high-level event, but also the next one, which we will be co-organizing together with UNDP and UNCTAD, and for your remarkable leadership and commitment to this work. I have here a full speech prepared for the ADG, he has speech writers, but I'm asked to be short, and some of the elements Thomas covered already. For us, the multi-stakeholder nature of WSIS, I need to emphasize that, is really important, and I'm very happy to see such a full room here, and Torbjörn connected online from Geneva too, as one of our partners. What I found an interesting fact in the speech is that since 2009, over 120,000 participants from 160 countries have taken part, which is really impressive. And of course we now have the task of the WSIS review, but also of implementing the GDC, building on the existing WSIS mechanisms, which will be helpful for this kind of translation of the GDC into reality too. So UNESCO is fully committed to supporting this process. We are also working with all stakeholders to prepare for a conference, which was initially announced for mid-February but has now moved to the 4th and 5th of June. I wanted to share that with you. I have a few flyers here on AI and digital transformation in the public sector, which is part of the WSIS plus 20. And I am pleased to listen to you mainly now and hand the mic back over to Gitanjali and ITU.
Gitanjali Sah: Thank you so much, Cedric, and thank you for being brief because the whole purpose of this meeting is to listen from stakeholders who are present here. Yuping, I’d like to invite you to say a few words from UNDP.
Yuping (UNDP): Thank you so much, Gitanjali, and recognizing the need to be brief, I'm just going to say that, again, we're so honored as UNDP to be here. As Ambassador Schneider said, we have new challenges, and there's the Global Digital Compact, but what is really very important and critical to the conversation is the fact that the WSIS outcomes and the action lines are so much more relevant today than ever before. And so we're looking forward to the WSIS Plus 20 review as a chance to reflect on all these new developments while reaffirming the importance of what has brought us all together here today: a deep commitment to multi-stakeholderism, being open and inclusive, focusing on those who are most in danger of being left behind, and overall, in essence, the enduring importance of the WSIS process and the IGF itself. This is all in line with how UNDP sees a rights-based, inclusive approach to digital transformation, working together with everyone to ensure that digital is an empowering force for people and the planet. It's an honor to be here with our fellow UNGIS and WSIS co-chairs, with Switzerland, and the organizers. We look forward to having you all as part of this process, and we stand ready to fully support it.
Gitanjali Sah: Thank you, Yuping. Torbjörn couldn't join us here physically, but he's with us virtually for all our WSIS sessions. Thank you so much, Torbjörn. Can I please invite you to say a few words as UNCTAD? Over to you.
Torbjörn Fredriksson: Thank you, Gitanjali. Hello, participants, colleagues and friends. Good morning to you from Geneva. I also wish I could be with you in Riyadh during this important week. Let me start by thanking ITU for all its preparations for the next WSIS Forum 2025 and for rebranding it as the WSIS plus 20 high-level event. I think it's going to be the 16th time that we hold this together in Geneva. Many thanks also to Switzerland for co-hosting the event. From the perspective of UN Trade and Development, this is a very valuable opportunity for achieving an in-depth, multi-stakeholder and constructive dialogue on every relevant aspect of the 20-year review of the WSIS. The recent adoption of the Pact for the Future and the GDC adds food for thought in this context. It is essential that the commitments made by member states in the GDC can feed into the discussion on the WSIS plus 20 review and any decision, at the end of 2025, on what will come after. At the same time, as we consider how best to support the implementation of the GDC, we need to fully harness the existing mechanisms that we have already set up as a result of the WSIS. This includes the very good division of labor established in the WSIS outcome documents, the Partnership on Measuring ICT for Development, the UN Group on the Information Society, the ITU stock-taking database and various other initiatives. And in the case of UNCTAD, we can also leverage the eTrade for All initiative as well as eTrade for Women in this context. The Commission on Science and Technology for Development, which is responsible for the overall follow-up on the WSIS implementation, was also given a new role in the GDC. I would urge all stakeholders this year to focus attention on what should be done to make sure that we achieve effective results from these various mechanisms and forums. 
Our collective work towards building, as we said, a people-centered, inclusive and development-oriented information society is far from complete. Digital and e-commerce divides, for example, remain very wide, and some are actually growing. In the coming years, we need to give added attention to how to make the digital economy and society more inclusive and more sustainable. The WSIS plus 20 high-level event offers a very good platform to explore the ways we should go to meet this challenge. We look forward to co-organizing it together with ITU, UNESCO and UNDP, co-hosted with Switzerland. Thank you very much.
Gitanjali Sah: Thank you, Torbjörn. I'd now like to move on to my presentation. If we could put the presentation in full screen, please. Okay. So, thank you very much, and the clicker doesn't seem to work. Okay. So, colleagues, just a gentle reminder, for those of you in the room who are very familiar with the WSIS process, of the important milestones that have brought us here. In 1998, at ITU's Plenipotentiary Conference in Minneapolis, Tunisia proposed this framework of the WSIS process. In 2001, there was a UN General Assembly resolution that requested that WSIS be organized in two phases: the first one in Geneva, which came up with the Geneva Plan of Action, and the second one in Tunis in 2005. We started with the cluster of WSIS-related events, which was rebranded as the WSIS Forum. We had the UNGA overall review in 2015, where our mandate was extended until 2025. In 2015, of course, we also had the SDGs, so we started aligning the WSIS goals with them. In 2025, we will have the WSIS high-level event and the UNGA review. The resolution says that there should be a high-level event, and of course there will be a resolution adopted in 2025, which will then set out the future of the WSIS process beyond 2025. As for the governance structure of WSIS, you often hear this myth in New York that WSIS doesn't have a governance structure. Of course it does, and it starts in New York. We have the UNGA resolutions, the ECOSOC resolutions, the Chief Executives Board through the UN Group on the Information Society, the CSTD, the annual WSIS Forum, the annual IGF, the UN agencies mandated to implement the WSIS action lines, the UN regional commissions on the ground, the WSIS prizes, the WSIS stock-taking database, the Partnership on Measuring ICT for Development, which is so crucial, with UN DESA, the UNESCO Institute for Statistics, ITU, and so on and so forth, and the WSIS special initiatives. 
We are guided by our WSIS action lines. And if you look at the WSIS action lines, it's a beautiful framework that covers the entire gamut of information and communication technologies, right from ICT infrastructure to cybersecurity to e-governance, and a beautiful arrangement in which each UN agency, based on its respective mandate, leads the implementation of particular action lines. For example, ITU leads ICT infrastructure and cybersecurity, and capacity building with UNDP, and e-business is led by UNCTAD, along with UPU and ITC. Of course, UNESCO has several action lines that cover the whole knowledge societies part. So it's a beautiful framework of the UN in action, and a lot of work is being done there. There are more than 30,000 projects aligned with the Sustainable Development Goals and the action lines in the WSIS stock-taking database. Now, as all the panelists before me said, multi-stakeholderism is the key principle of the WSIS framework, the UN framework called WSIS. If you look at it, we have our UN colleagues, we have the countries, we have young people who are following it. We have representatives from the technical community, civil society, and the private sector, and all have played a very passionate role in the implementation of the WSIS action lines since its inception. So we want to keep the spirit and the momentum alive. This is our wheel of implementation, which I already touched upon: the WSIS action lines, the WSIS Forum, stock-taking, UNGIS, a great implementation action wheel which is already working very well, with all stakeholders working together to implement it. Now, we heard about the GDC, and we are very happy that it has been endorsed and is already in action, and we have a lot of similarities with the WSIS process, you know: capacity building, protection of human rights, innovation, knowledge sharing, ethical use of technology, AI, inclusivity, bridging the digital divide, promotion of international cooperation. 
So you can clearly see that, of course, there was always an alignment, and the UN group has actually worked on a matrix, which is available online, to map the WSIS process and the 2030 Agenda against the GDC principles and the existing framework that is already available to implement the GDC. The UNGIS group, for those of you who have not heard of it, is a group of UN agencies working through the Chief Executives Board to ensure that digital always remains an important mandate in all of our agendas. So what have we achieved since 2003? You know WSIS has targets. If you go to the WSIS outcome documents, we have very basic targets, such as connecting schools. If you look at the Giga maps, 280,000-plus schools are connected, but we really need to do a lot more work. 65% of women are using the internet. 79% of youth are using the internet. We are working with WHO to collect the data on how many hospitals are connected, and 5.5 billion people are online. So these are great achievements, but we need to address the gaps, you know, and also the new gaps that have been coming up. WSIS has continuously evolved with emerging technologies; the action lines have provided a very, very sound framework to include and adapt to emerging trends in technology. So save the date. Ambassador Schneider already informed us we are moving to Palexpo, Geneva, 7 to 11 July. Please do book already if you can, because you can cancel later in case your plans change, but do book, because last year some of you did have problems when you waited till the last minute. Do register soon. Registration will be open, and as WSIS is a UN process, we will have accreditation and visa support from Switzerland. We are working on all of that closely with them, and it will all be available very soon. So our objectives: this year, we used the WSIS plus 20 high-level event to look at what we've achieved in 20 years, but next year, we should definitely look beyond. 
We really want more consultative sessions, more outcomes, and really concrete actions and suggestions from all stakeholder communities, the private sector, civil society, the technical community, governments, the UN system: what are we doing beyond 2025? So we really need to start thinking and engaging. Our agenda is built through an open consultative process. This is one of the venues where you can provide your suggestions, but there is also an online form that you should be filling in and submitting your suggestions through until the 14th of March. Based on this, we will be building our agenda and program. As for the open consultation process, the next session would be on the 11th of February at the ITU headquarters. After that, one in April, for which we still don't know the date, and then on the 10th of June we will be having the final brief. The draft agenda, this is how it kind of looks, still very much a draft. We are still waiting for the inputs through the open consultation process, but this is the skeleton, and we will upload this presentation on the website so you can take it from there as well. It consists of high-level tracks, interactive sessions, the WSIS prizes, hackathons, ambassadors' and regulators' roundtables, the UN in Action exhibitions, and of course the 20-year celebration. We must celebrate what we've achieved. Key topics that you will hear of: the Youth Day, and Halima, our youth envoy, will tell us more about it; the WSIS plus 20 review; business and academia roundtables; the gender track, extremely important for all of us here; the UN focus on the implementation of the action lines; and civil society roundtables. For the high-level track, as every year, we are also looking at the participation of heads of state this year. We have received several expressions of interest, I can say, because we will also be celebrating ITU's 160th anniversary during the AI for Good and the WSIS high-level event. 
So of course you heard about the database. This is a beautiful database by the people and for the people; we are just providing a platform. It's searchable by all these categories, and it's really a wealth of information about what's going on on the ground. The WSIS prizes: I can already see so many of you who have been champions and winners sitting in this room. Please do not forget to submit before the deadline, which is the 10th of February. Very interesting: we want to capture your WSIS story. So please participate in this campaign. It's a social media campaign; join us and share what WSIS has done in 20 years to impact your life or your organization. So we did have this story from Marcus, but it doesn't work. Can we try it once, please? Colleagues, could we try to see if you could click on play and if people can hear it? Can you hear it? Okay, so I'll leave this presentation online. We had Marcus's story, and we have Wendy's story from the Dominican Republic about how WSIS has impacted their lives and their organizations, of course. So, colleagues, WSIS is extra-budgetary at the ITU. We always look for partnerships, and in terms of the partnerships, we do give visibility at the WSIS Forum. So we are very thankful to those of you who have already confirmed your sponsorships for WSIS. I don't think I can take your names yet because the agreements are being signed, but thank you so much; you know who I'm talking about. Some of them are still open, so please do get in touch with me in case you would like to be visible at the high-level event. Again, a quick reminder of the calls in the open consultation process: if you want workshops or exhibition spaces, or if your high-level representatives are coming, please do inform us. The WSIS stock-taking database: register your new projects there. 
The WSIS prizes: the deadline is coming up. The photo contest, which I didn't talk about: we have a really wonderful repository of photos of people implementing WSIS on the ground, so if you'd like to use them for your presentations or your reports, please feel free. The WSIS special prizes; the hackathon: we are planning a hackathon on gender-disaggregated data on health, and if any of you are interested in it or have other suggestions, please contact us. So I'll stop here now. Well, there are some slides on the review, which I can probably go through very quickly, but I'll leave this presentation online. Some of the important timelines were identified during our weekly meetings; we have a joint preparatory process of the ITU, UNESCO, UNCTAD, UNDP, and UNDESA. The milestones, of course, started with SDG Digital in 2023, and it goes on. The next one will be the IGF in Norway, then the CSTD review, then the Paris phase on the 4th to 5th of June. Cedric spoke about the Geneva WSIS Plus 20 high-level event, culminating in New York. All of us are already doing a lot of work: the ITU already has the Secretary-General's roadmap for WSIS Plus 20, UNESCO has a report of the DG, and the CSTD will come up with the WSIS Plus 20 report very soon. Maybe I can then very smoothly pass on to Cynthia, if our chair would like to explain.
Cynthia Lesufi: Thank you, and good morning to everyone. Yes, colleagues, you will recall that at the 2024 council, the council members adopted a resolution, and in that resolution we included an invitation to stakeholders to contribute their views on the work of the ITU in the WSIS Plus 20 process, relating to the review of the WSIS action lines. In that resolution we also approved a timeline, and in terms of that timeline, the ITU was to issue an online form, which was indeed launched in August 2024, for all stakeholders to respond to the call; the deadline for that is the 31st of January 2025. Following that, we had the first physical meeting of the Council Working Group on WSIS and SDGs, which took place in October 2024. The next milestone will be in February 2025, when the Council Working Group on WSIS and SDGs holds its second physical meeting, and then we move to the 7th to 11th of July 2025, when we will be having a side event during the WSIS Plus 20 high-level event in 2025. Thank you.
Gitanjali Sah: Thank you, Cynthia and South Africa, for being such a close partner of the process. Also, just to remind all of you, the UN has been working on updating the action lines in terms of content. Each UN agency, based on its expertise, has already updated the action lines to reflect the evolution of the context and the 20 years of achievements. I would really like to encourage you to visit the WSIS Plus 20 page on the WSIS Forum website and have a look at this wonderful work that our colleagues have done. We are now opening the floor, of course, and passing on to our vice chair from the Council Working Group on WSIS and SDGs, South Korea. Mina, the floor is yours.
Mina Seonmin Jun: Thank you, Gitanjali. I was going to share a bit of the updates from our region with you. As you know, the Asia-Pacific region is large and diverse, both geographically and culturally. It includes advanced economies and less developed countries, as well as nations with unique geographical challenges: the small island developing states and the landlocked developing countries. This diversity creates a unique digital landscape, offering both significant opportunities and challenges, so we all put our work together. Over the past 20 years, this region has made significant progress in advancing the WSIS outcomes, thanks to the collective efforts of member states, the ITU, the other WSIS co-facilitators, and many other stakeholders. This collaboration has led to remarkable achievements in ICT infrastructure development, digital inclusion, and the adoption of emerging technologies. I will continue to explain more at our next session, but I just wanted to briefly share this with you. Thank you.
Gitanjali Sah: Thank you so much, Mina. We also have Anriette from APC, which has been leading the work on the ground: so much research, so much implementation on digital for development, ICTs for development.
Anriette Esterhuysen: Thank you very much, Gitanjali. The Association for Progressive Communications is an international network of member organizations from around the world. We work in around 70 different countries, and we have been part of the WSIS process from the outset. We are a co-facilitator, which I think is quite a unique little bit of history that we sometimes forget: a non-state actor and a non-UN agency serving as a co-facilitator together with the ITU. I just think that for civil society, particularly those from the global south, this is a very good opportunity. Can you hear me? To refocus, can you hear me now, on what is so unique about the WSIS vision: that it integrates a human rights-based approach to development, to social and economic justice. It is about a people-centered information society, not a digitally-centered information society. Already, APC, along with IT for Change, has initiated a new forum called the Global Digital Justice Forum, which is bringing together primarily, but not exclusively, civil society from the global south to use this opportunity to launch a revived digital justice agenda. I think WSIS is also for us, as a multi-stakeholder community, an opportunity to find the kind of community and collaboration, the North-South, business, tech, civil society, UN, government solidarity, which I think is important. It's not just about multi-stakeholder collaboration; it's also about international collaboration and international solidarity. Just to say, one proposal I would make: I think it's incredible that UNGIS has already done this analysis that combines the GDC objectives with the WSIS action lines. And one suggestion, which we'll submit, but I want to make it now already.
I think we should use the WSIS Forum, with its high-level component, which is so useful for bringing governments to the WSIS Forum, but take an event-design approach where you actually work. Rather than having lots of workshops, bring the community together to validate the work that the action line facilitators have done in updating the action lines, assess how that relates to the GDC objectives, and have maybe an open space or a thematic track where, instead of people all presenting their own workshops, they actually assess and analyze: what have we achieved, and where are the gaps? Then look at those action line updates that the UN agencies have prepared and comment critically and creatively on them, so that we leave the WSIS Forum not just with a set of discrete workshops, but with active output that can inform both GDC implementation and the WSIS Plus 20 review.
Gitanjali Sah: Thank you very much, Anriette. Indeed, we were thinking of doing that: some knowledge cafes and brainstorming sessions with the action line facilitators and this community to look at all this work that has been done. So thank you, but please do submit it through the form as well. Is Cheryl here? I do not see her here, but Cheryl from USCIB had asked for the floor. Could you please check if she's online? In the meanwhile, we'll move on. I can already see Meni here. Meni represents the ICC, the International Chamber of Commerce. This has been the private sector arm of the WSIS process right since its inception. So Meni, what are your plans, and what is the vision of the ICC for WSIS beyond 2025? Over to you.
Meni Anastasiadou: Thank you, Gitanjali. And yes, as you just said, I'm from the International Chamber of Commerce. I'm a digital policy manager representing the ICC, and we are the international business organization: we represent over 45 million businesses across 170 countries. This has been a very long process that we have been part of since the outset, and we value it extremely from a private sector perspective. We will be at the WSIS Forum next year. Of course, it's a crucial year as we prepare ourselves ahead of the WSIS Plus 20 review, and we're really looking forward to making sure that we bring in the private sector and all the people, and that we ensure a strong preparation ahead of the review. There are a few considerations that we consider crucial ahead of the WSIS Plus 20 review: the importance of information technologies and the Internet, and how those hold enormous potential for social and economic growth; and of course that this very potential can only be unlocked when there is multi-stakeholder collaboration across governments, civil society, business, and the technical and academic communities. All of this has held true through WSIS and the WSIS Forum, which tracks the progress made over the years since WSIS first took place in 2003 and then in 2005. We really do see the WSIS Forum as a valuable component in the process of tracking the progress made and making sure that we are headed in the right direction. So again, as business, we are taking our role seriously in advancing the WSIS action lines, and we will make sure to be present at the WSIS Forum to work with governments and all the stakeholders on the ground, to inform the process and partner together to ensure outcomes that serve everyone's interests.
So just to reiterate our support for the WSIS Forum next year: we will be there, and we're looking forward to the discussions that will take place. Thank you, Gitanjali.
Gitanjali Sah: Thank you so much, Meni. We look forward to the private sector brainstormings and consultative sessions. Cheryl is from the United States Council for International Business. She was supposed to be here, but she's not. They have already informed us that they will be doing some consultative sessions with you, bringing the business perspective to WSIS beyond 2025. So thank you so much for being here with us. I also see Renata from Brazil. Renata is also our vice chair, representing her region. So Renata, some perspectives from your region, please.
Speaker 3: Hi, good morning to all. Actually, can you hear me? Yes? Sorry, because my channel just changed here, sorry. From our perspective, we are also aligned on the need to update the action lines for this moment, as there are new and emerging technologies that need to be considered, and also sustainable and green technologies. All of these, I think, we have to consider to make a good WSIS Plus 20. On collaboration, I also believe that the multistakeholder environment is important, and that collaboration between government, the private sector, and civil society is essential to continue to have a unique, open, free internet. That's the goal we believe in. So I think that is it. Thank you very much.
Gitanjali Sah: Thank you, Renata. We have a remote participant who wants to say something: Nandini. If you could please bring her in.
Nandini: Hi, are you able to hear me?
Gitanjali Sah: I can hear you.
Nandini: Yeah, I am Nandini from IT for Change. Thank you for the opportunity to contribute to this session. We just wanted to say that, in the context of the GDC adoption, as we go into the WSIS Plus 20 review, there is a set of issues which we feel the review process needs to look at. To begin with, I think it's important to again put on the table the foundational principle of governing the internet as a global public good and not as a utility. This issue should be brought up again. The second issue is that when we look at the question of the digital divide, we see that the digital divide continues to yawn wide, and today there are not just divides in connectivity but also new divides in data capabilities. The opportunity to design concrete targets for financing-for-development mechanisms to build digital infrastructure in the majority world must be put back on the table.
When we look at the action line on ethics in the information society, we need to reinvigorate it to look at how we can effectively protect and promote human rights in the digital context, in the face of market concentration, and at corporate accountability for human rights violations in digital value chains. Finally, we urgently need data and AI constitutionalism at the global level; the issue of digital gene sequences and digital biopiracy, and how this is changing the implementation of the biodiversity convention today, is a good exemplar of this. How do you govern cross-border data flows on a development-justice basis, and how do you reconcile the political quest for digital sovereignty as infrastructural autonomy with data and AI standards development processes? These issues become important. Considering that many of these issues are now channeled through enhanced cooperation processes, for which the Global Digital Compact has paved the way, I think it is absolutely essential, given its digital multi-stakeholderism principle, to look at how we can expand the role of the UN Internet Governance Forum, and how the IGF can fulfill the mandate given in paragraph 72 of the Tunis Agenda in this new context. This might become a critical issue for us to discuss towards the WSIS review. Thank you so much.
Gitanjali Sah: Thank you. I would now like to invite Mr. Hossain from the Islamic Development Bank. Over to you, sir.
Speaker 4: thank you, first of all, for inviting. the Islamic development to this open concentration process and as one of the MDBs we are so concerned about the ICT and digital sector and the bank has drafted its ICT sector policies which have four main pillars including infrastructure and regulations and mainstreaming ICT in all development sector and this is the pillars of the of the policy is fully aligned with the forces action plan and if you see the listed action lines in the first presentation and our main focus areas is fully aligned with this action lines of forces we see that the partnership is a key success because as an MDB we found that there is huge requirements in terms of finance and private sector involvement is a must and because of that especially at the south-south institutions we are having 57 member countries most of the member countries in the south so we see that capacity building and development regulations for promoting private investment is a key and because of that we have many programs in the capacity development and in development the capacity and the regulations of the least developed countries to promote the private investment in the sector and also we have some modality for the partnership and sharing of expertise from the countries in the north or even in the south to some of the least developed countries in this sector So, with that, we see that private investment and development of regulations in addition to building some of infrastructure is the key for development of this. Thank you so much. Thank you.
Gitanjali Sah: Thank you, sir. And we look forward to seeing you at the forum. Sweden has the floor. Felix, over to you. Thank you. It's working. Hi, everyone.
Felix Nyström: My name is Felix, from the Swedish Ministry of Foreign Affairs, based at, you can't hear me? Like this? That's better. Good. Thank you. Based at our mission in Geneva. From our side, obviously, we fully support the multi-stakeholder model embodied in the WSIS process and the IGF, and we are excited about the important WSIS events and milestones of 2025. There is a lot of momentum in this process now since the adoption of the GDC, which Sweden was proud to co-facilitate together with our friends from Zambia. As you may know, the modalities of the new Office for Digital and Emerging Technologies, the follow-up to what was called OSET, are now being discussed and debated in New York. From our perspective, it's important to avoid duplication, as we've heard from others here in the room. There is already a lot of important work being done by various UN agencies, and, as I said, we want to avoid duplication in that regard. Additionally, another priority for us is ensuring human rights mainstreaming across these new initiatives. We're excited about the new AI dialogue that was proposed in the GDC, the AI panel, and, importantly, also the new AI advisory board envisioned to be organized by the OHCHR. So that's it for me. Thank you.
Gitanjali Sah: Thank you very much, Felix. I think the UK has the floor. Craig. Thank you, Gitanjali. Good morning, everyone.
Craig Stanley-Adamson: My name is Craig Stanley-Adamson. I am the Head of Internet Governance at the UK Department for Science, Innovation and Technology. My comments are largely around the process, particularly in the run-up to the high-level events next year, but I would like to touch upon a couple of the points that have been made today. First of all, just to say that I fully agree with my Swedish colleague and many others around the room about avoiding duplication of the GDC, but of course we accept that WSIS will play a strong role in implementing that process, as well as delivering its other processes such as the SDGs. I want to touch upon some of the comments made on the action lines. Whilst these are obviously crucial, and of course they were made 20 years ago, we should be a little bit cautious about how we approach them. For example, they were designed to be deliberately agile and tech-neutral at the time, which means that they can stay current to this present day and beyond. So instead, we need to look at how we frame the WSIS outcome document around the impacts of these technologies in a particularly future-focused and action-oriented way, which can also ensure that these action lines remain current beyond 2025. Touching upon the process that we were talking about earlier, and thank you, Gitanjali, for your presentation, the UK hugely welcomes the fact that the UN family is fully joined up during this process, which is a great start. We obviously fully support the multi-stakeholder process and participation behind this, and I think for us the crucial element, alongside all of these events that are going to take place over 2025, is the role of the co-facilitators. Of course, they have not been appointed yet, and we don't know who they will be just yet, but whoever they are will play a key role in this process.
So one thing that I think could be really important for the high-level event in July next year is for them to be present at this event, alongside all the other events that are taking place, including the IGF. They need to make sure that they get a true audience with the multi-stakeholder community. And I think one thing that would be really crucial as well, and it would be interesting to get views from the UN agencies on this, is whether they can work together with the co-facilitators to ensure that some form of issues paper is presented at either the UN IGF or the high-level event in July, so that not just countries but all stakeholders have an opportunity to comment ahead of the process that will take place later in the year. So yes, thank you very much.
Gitanjali Sah: That's a wonderful idea, Craig, and I've put up the timeline slide; I missed that part, so thanks for bringing it up. The appointment of the co-facilitators should have been done yesterday, perhaps, so it's really urgent, and that would be a really nice direction for the WSIS process to take. We're all waiting for that, actually. I'd now like to invite Global Ethics, Mr. Wallace. Global Ethics is a baby of the WSIS, right? Tell us more about it.
Wallace S. Cheng: Thank you, thank you very much, Gitanjali. For some of you who are not familiar with Global Ethics: Global Ethics is a foundation based in Geneva with a global presence. We started as one of the outcomes of WSIS, with the objective of promoting access to knowledge and the ethics of ICT. I just want to make one point to complement what we have heard: inclusiveness was highlighted at the beginning of the process, and 20 years later, I think inclusiveness needs to be raised to a new level. We should go beyond workshops and dialogues and move to encouraging and supporting local innovation. For example, we have thought about, as someone already mentioned, how we can identify the gaps in local innovation in the Global South, and how to build matchmaking platforms between the Global South and North, between business and civil society, and between local innovators and philanthropy, governments, and multilateral banks. Just to conclude, I think there's an opportunity for all of us to work together to raise inclusiveness to a new level. Thank you.
Gitanjali Sah: Oh, Wallace, if you could just pass the mic to Samia, World Bank, she's just behind you. Samia? Okay, Samia.
Speaker 5: Oh, it's for me? Yes, but I'm not World Bank. Is Samia in the room? Apologies, too many names. In any case, in just one minute: I'm from UNU, not from the World Bank, but yes, we fully support this. We have been cooperating strongly in all these processes, and we will keep doing the same. We are living through a crucial moment. As a lot of colleagues have already mentioned in the room, we need to create some organization and some clarity about how all these instruments can be used to achieve what we want. I think this is one of the strongest messages that I have been hearing since the beginning of this event. Thank you very much.
Gitanjali Sah: Thank you very much, Delfina, from the United Nations University. Apologies for that. We also have Halima, our Secretary-General's Youth Advisory Board member. So Halima, what's the youth perspective you want to bring in, and how can we make our Youth Day at the WSIS Forum more active and action-oriented?
Halima Ismaeel: Thank you for the invitation to this room, and I appreciate the Youth Day that will be held during the WSIS next year. I have a few concerns to raise. The first is that digital devices rely on scarce resources, contributing to a digital e-waste crisis with inadequate recycling efforts, so I think WSIS must recognize the role of reusable global goods in achieving these goals, and the WSIS process should focus on this. The second thing I want to mention is that the WSIS process is always focusing on AI and IoT, but there are other emerging technologies to focus on, like digital twin technology and biofiber technology. For example, when I'm working on submarine cables, if any cable needs maintenance, there is a technology to auto-repair the damage to the cable. I think we should focus on this. The third technology I want to mention is subsea spectrum management; I think this is a modern topic that we should focus on. So, I think engaging youth in these dialogues is a good thing, and thank you.
Gitanjali Sah: Thank you so much, Halima. You brought so many important points into the conversation, so please submit your inputs through the open consultation process, because that's how the agenda is built; it has a bottom-up approach. And, oh, hello, Maud, I just noticed that you were here. Thank you for joining us. As the host, and always one of the strategic partners of the WSIS Forum, Saudi Arabia, would you like to please say a few words?
Speaker 6: Thank you so much, Gitanjali, and good morning, everyone. It's good to be here with you, and I see many familiar faces here in Riyadh. I welcome all of you again, and I hope you are all having a good time. Of course, the GDC and the WSIS are a very critical subject at the moment. As we heard already, the GDC was adopted with very ambitious targets and objectives, and we weren't expecting such a successful outcome from the United Nations. I'm sorry, I'm not feeling comfortable talking from that point. The challenge, in my view, is the implementation part of the Global Digital Compact. If we look at that section of the GDC, it's not clear who exactly will implement and follow up that process. We have the WSIS process, namely the IGF and the WSIS Forum; we have the CSTD; we have the Tech Envoy's office in New York; and of course we have the many United Nations agencies. All of them will contribute to the implementation of the GDC. But here we have risks: the risk of duplication, and the risk of wasting financial and human resources on all sides, from the United Nations and from the whole stakeholder community, the private sector and civil society. So it will be quite difficult for all of us to streamline and follow up the implementation of the GDC. Now we have the WSIS Plus 20 review. I think it's crucial for all of us, all the stakeholders, to make sure the main objectives of the GDC are implemented somehow within the WSIS review process. This is good for all of us; it makes our life much easier to have a single window to follow up, to participate, to contribute. And of course, the many agencies and the Tech Envoy's office could be a part of this. So as you say, Gitanjali, there will be an assignment of the co-facilitators, and from my point of view, one of the good aspects to focus on is how to streamline and maximize the collective efforts around the GDC.
I'm sorry, Gitanjali, dear colleague, I wasn't prepared to take the floor, but this was a valuable chance to contribute. Thank you so much.
Gitanjali Sah: Thank you very much, Moaz. Saudi Arabia has had a very crucial role in the WSIS process and in the success of the 20 years of implementation, so thank you very much for that. Ambassador Schneider, you wanted the floor?
Thomas Schneider: Yes, thank you. I think it is very obvious, listening to this discussion, that expectations are very high and also diverse, and that we have a limited amount of time and a limited amount of resources to actually accomplish what we are hoping or trying to achieve. So, just to follow up on what our Saudi colleague has said, we need to concentrate and focus resources, and we need to try and bring the different threads together. Of course, there's no one-size-fits-all, but I think it is crucial that we build on the existing structure, on the expertise of the UN specialized agencies in their fields, with their partners in all the countries, and on the network that has grown, facilitated by a number of partners over the past 20 years. As for the political vision of the GDC: if we create a new structure to try and implement it, it will take us another 10 years just to create the structure. So we have no choice but to build on the WSIS process in a way that is both inclusive and efficient; those are sometimes not easy to combine, but I think we have to find a way. We have 20 years of experience with the WSIS framework; we know more or less what works and what is more difficult, and we really have to concentrate on using this framework. In terms of substance, we will not have too much time to discuss which action lines we need to transform or renew, but we may have to have some discussions on this so that the framework is up to date. I agree with many that it is actually fairly technologically neutral, so we may not have to reinvent the wheel, but there are some new issues identified in the GDC, and these are already being mainstreamed into the work of the ITU, UNESCO, UNDP, and other institutions. We just need to bring this together in an open and transparent way. It's not going to be easy, but I think together we can do it; we have to do it. Thank you very much.
Gitanjali Sah: Thank you, Ambassador Schneider, and for the passion. This is the reason why the WSIS process is so effective: the passion of all the stakeholders working for more than 20 years. And we were reminded in the previous session that we started from community radios, the open source movement, narrowcasting, basic technologies, and we moved to AI, Meta. The world has really evolved, because of the passion and hard work of the community here. Sir, you had asked for the floor. Please introduce yourself.
Audience: Qusayr Shati from Kuwait. First, I would like to commend my dear colleague, Thomas Schneider. He has been my colleague for 20 years, and yet I forgot his name; we're getting older, Thomas. And of course, let me second the comments made by our dear brothers from Saudi Arabia on focusing and concentrating the efforts, comments also supported by my dear colleague Thomas. In that regard, I just need a clarification. You have mentioned that there will be open consultations on the 11th of February, and that consultation will be during the meeting of the ITU Council Working Group on WSIS and SDGs. Usually the working group meeting is only for member states and sector members. Will the open consultation on the 11th of February in Geneva be multi-stakeholder, or will it be restricted? Just a clarification.
Gitanjali Sah: Thank you, sir. So basically, we use the opportunity of all of you being in Geneva, and we will have about one hour outside the council working group, open to the wider community. So it will not be part of the Council Working Group on WSIS and SDGs; you can just come for the meeting. I had seen a hand here, and then I'll move to Anriette. Did you raise your hand, sir, or was it, okay, Giacomo. Thank you.
Audience: Giacomo Mazzone, I'm co-chair of the Policy Network for Meaningful Access at the IGF. I want to elaborate on what has been said by previous speakers about the future of the WSIS Plus 20 process. Within the WSIS process, you are doing the same reflection that we are doing within the IGF. In the IGF, we are discussing not only the renewal of the mandate but a change of the mandate, and I think that a change of the mandate could be an opportunity to reinforce the strength of the relationship between the WSIS and the IGF. There is a similar process going on within UNGIS on that.
Audience: Good morning, everyone. Mike Colton from UNHCR, the UN Refugee Agency. Just to say that we welcome this process and are really looking forward to supporting it as much as we can, with UNHCR's digital strategy, its digital protection, and its digital inclusion; and information integrity is becoming more and more important. With 120 million displaced people, and many stateless as well, we'd love to work with you to see how we can build those voices into the conversation, both at the event and in the lead-up to it. So happy to help, and happy to build that in to get a truly multi-stakeholder approach here. Thank you.
Gitanjali Sah: Thank you, Mike. We really hope to see more of this work at the WSIS Forum. We have been connecting refugee centers to the WSIS Forum in the past, and that's been really successful, so it would be great if we could continue that and strengthen it. Thank you so much. Okay, Anriette. And then we'll take one last speaker, please.
Anriette Esterhuysen: Thanks very much for letting me speak again, Gitanjali. I was inspired by Halima, and Mohad, and Thomas: it is about not duplicating and not making poor use of resources, but it's also about connecting people. It's people, and the institutions they are in, that implement. I just look at this room: I see Tijani, I see Desiree, I see Fouad, you all know who you are, Raquel, there are many of you, Avis, who have been with this process since 2003, 2005. Are you not hearing me? Sorry. We're hearing you twice. Yes, we are hearing you twice. Is there another mic around? You know what, this is amazing. Sorry, apologies. I think the power of WSIS is that you have this continuity: you have older people, and they are in this room because they are working, they are implementing on the ground. And then there are the newer voices. We work a lot on spectrum and dynamic spectrum management with community networks, and when I hear Halima talk about the power of undersea spectrum, I think that's incredible. Small island developing states, what is there? There's potential. I think we do need to bring new voices in, and we do need to innovate, but our real power is this intergenerational, inter-institutional, and I suppose also inter-tech character: the fact that WSIS represents people who are still working on basic digital equality and access, but who are also engaging with new and emerging challenges. But then I just wanted to make one other reflection, which is more procedural. The WSIS has two outcome documents: the Geneva documents, including the plan of action, which has the action lines; and the Tunis Agenda, which includes financing mechanisms and, of course, internet governance. And I'm just wondering if we can't, this year in the WSIS Forum, include those outcome areas, those "action lines" in quotation marks, they're not called that, in how we design and structure the event.
Because I think there is a concern about financing mechanisms; it’s in the Pact for the Future, and it’s mentioned in the Global Digital Compact as well.
Gitanjali Sah: Thank you, Anriette. All these comments are very well taken, but please submit them through the open consultation process. There’s a form. This is absolutely transparent, because once we receive your request, it is put into the agenda of the event, and we also reflect all these inputs online. So we really want this to be a transparent, bottom-up approach. So submit all these inputs into the open consultation process. Thank you, Anriette. So, any last speaker? Okay, yes, sir. Thank you.
Audience: To drive home the new-faces point: we started participating in the WSIS only a few years ago. My name is Rian. We are from the Brazilian Association of Internet Service Providers, so small operators. And we really would like to participate in the consultations and make sure we’re also bringing the small and medium enterprises approach and view to the WSIS, because, at least in Brazil, we play a very important part, not only in ICT but in infrastructure as well.
Gitanjali Sah: Thank you so much. This has been an important track at the WSIS Forum, small and medium enterprises, so you’re most welcome. We also do a hackathon every year, like I mentioned, so we have many innovations coming up, and we are looking for incubators as well. If any of you are interested in incubating these really young, interesting talents that come out of the hackathons, we’ll be very, very happy to connect all of you. I will maybe close with the young lady I met during breakfast; she’s doing such amazing work mentoring young women and girls in Peru and in Paris. Over to you.
Audience: Thanks so much. Good morning to everyone. My name is Paola Galvez. I’m Peruvian, and I’m still young under the UN definition. I’ve been participating in the Youth Observatory of ISOC, and I was a Youth Ambassador in 2019. I want to bring the perspective of South America, as a Peruvian now living in Paris but contributing from the Latino perspective. There is still a lot to do for our digital future. I truly believe that the future of AI is in our hands. I think the Council of Europe AI Convention is a huge milestone and a first step towards global standards on AI regulation, but there is a lot to do. And when I met Gitanjali during breakfast, she gave me a light: Paola, we need to think beyond 2025, right? With that being said, the gender digital gap is a topic that was raised by Ms. Doreen and by our host, the minister, who gave an amazing presentation at the Opening Ceremony. The numbers are there; I will not repeat them. But what can we do about it? My small piece is, yes, mentoring girls, because I do think we need to change the mindset so that they can leave school thinking: yes, I can be part of the STEM sector. But if it’s possible, I would like to raise my voice and ask all the ambassadors and policymakers here whether we can have an action line on the gender gap. I think it’s necessary. I have not seen it in WSIS; if it’s not in the declaration, maybe have it as an annex, or whatever the exact policy instrument would be, but let’s think about the gender gap as a very important topic, because only three out of ten professionals working in AI are women, and the data being used to train AI systems is not representative. And we don’t need to be ambassadors or representatives to make a change; in our respective roles we can do something, like mentoring kids or sending proposals to consultations.
I just finished the UNESCO AI RAM consultation process in Peru, so there is something we can do. Let’s do it, because those small steps make a change.
Gitanjali Sah: Thank you so much for this space. Thank you very much, we’ve actually run out of time
Audience: But very short, very short. Good morning, this is Fawad Bajwa, I’m from Pakistan. I head an organization called Digital Dera, which connects rural communities to the internet and helps them capitalize on the power the internet has brought. I’ve been with this community for over 20 years now, as old as the WSIS process. Over the years there have been so many developments and so many representatives, all my friends over here, including Thomas, but one community which still feels left out is our rural communities, and in my part of the world the rural community is 65% of the population. A few years back, when we went down to connecting these villages to the internet, we found that there was so much talent and so many perspectives that had never been heard. The review process also encourages us to look at what has been left out, neglected, or not adequately included, and that is our rural communities. I would request the host country, as well as the international stakeholders, to have a specific village or activity there which highlights the rural struggle to connect to the internet. Despite our 4G, 5G, 6G efforts, rural communities still, to date, remain disconnected, and they require extraordinary attention, because they are the other billions we are trying to work for and connect. So their representation is a must. The second thing I would like to offer is that I’ve been part of the National Incubation Centers in Pakistan, and of United Nations Population Fund and ITU mentoring for startups. So I’m connected to that network, and I can volunteer and offer you that possibility for mentoring the kids, the youth, to actually build initiatives for the next 5-10 years that would have a huge impact on the future of WSIS. Thank you.
Gitanjali Sah: Thank you very much. We’d like to end our session here, but in the same room at 11:30 we have an ITU session led by South Africa and our vice chairs. So please do be there, in the same room, at 11:30. Thank you so much. Bye-bye. Sorry to interrupt you, did you see I sent you a message? Yeah, I’m just reading it now. I haven’t seen it in so long. I mean, why are the IGF and things not mentioned? Why is IGF not here for this? What do you mean by that? Oh, the meaning of that is that the IGF, we’ve also been saying that the IGF needs to connect with business plus trade. But the IGF plays a very passive role in this. It plays a very passive role. Yeah.
Thomas Schneider
Speech speed
151 words per minute
Speech length
1158 words
Speech time
458 seconds
Multi-stakeholder collaboration is crucial for WSIS implementation
Explanation
Schneider emphasizes the importance of inclusive cooperation among all stakeholders in their respective roles for implementing the WSIS vision. He states that this collaboration allows stakeholders to learn from each other and work towards shared goals.
Evidence
Mentions the 2003 Geneva summit where stakeholders agreed on a shared vision for an inclusive, people-centered information society
Major Discussion Point
WSIS+20 Review Process and Future Direction
Agreed with
Renata
Anriette Esterhuysen
Agreed on
Importance of multi-stakeholder collaboration in WSIS implementation
Importance of building on existing WSIS structure and expertise
Explanation
Schneider argues that creating new structures to implement the Global Digital Compact would be time-consuming and inefficient. He suggests building on the existing WSIS framework and expertise of UN specialized agencies to move forward efficiently.
Evidence
References 20 years of experience with the WSIS framework and knowledge of what works
Major Discussion Point
Implementation and Resource Allocation
Renata
Need to update WSIS action lines to address new technologies and challenges
Explanation
Renata emphasizes the importance of updating the WSIS action lines to reflect current technological advancements and challenges. She suggests including considerations for new and emerging technologies, as well as sustainable and green technologies.
Major Discussion Point
WSIS+20 Review Process and Future Direction
Agreed with
Thomas Schneider
Anriette Esterhuysen
Agreed on
Importance of multi-stakeholder collaboration in WSIS implementation
Differed with
Craig Stanley-Adamson
Differed on
Approach to updating WSIS action lines
Craig Stanley-Adamson
Speech speed
197 words per minute
Speech length
466 words
Speech time
141 seconds
Importance of avoiding duplication with Global Digital Compact implementation
Explanation
Stanley-Adamson stresses the need to avoid duplicating efforts between the WSIS process and the Global Digital Compact implementation. He suggests that WSIS should play a role in implementing the GDC while also delivering on its other processes like the SDGs.
Major Discussion Point
WSIS+20 Review Process and Future Direction
Agreed with
Speaker 6
Torbjörn Fredriksson
Agreed on
Need to avoid duplication and streamline efforts in implementing Global Digital Compact
Differed with
Renata
Differed on
Approach to updating WSIS action lines
Speaker 6
Speech speed
143 words per minute
Speech length
394 words
Speech time
164 seconds
WSIS framework should be used to implement Global Digital Compact
Explanation
The speaker argues that the WSIS framework should be utilized to implement the Global Digital Compact. This approach would help streamline efforts and avoid duplication of resources across different UN processes.
Evidence
Mentions the risk of wasting financial and human resources if implementation efforts are not coordinated
Major Discussion Point
WSIS+20 Review Process and Future Direction
Agreed with
Craig Stanley-Adamson
Torbjörn Fredriksson
Agreed on
Need to avoid duplication and streamline efforts in implementing Global Digital Compact
Need to streamline efforts and avoid wasting resources in GDC implementation
Explanation
The speaker emphasizes the importance of coordinating efforts to implement the Global Digital Compact to avoid wasting resources. They suggest using the WSIS review process as a means to streamline implementation efforts.
Evidence
Mentions the risk of duplication and wasting financial and human resources from all sides, including the UN, private sector, and civil society
Major Discussion Point
Implementation and Resource Allocation
Anriette Esterhuysen
Speech speed
131 words per minute
Speech length
814 words
Speech time
371 seconds
Need to focus on digital justice and human rights in digital context
Explanation
Esterhuysen emphasizes the importance of integrating a human rights-based approach to development and social and economic justice in the digital context. She suggests reinvigorating the action line on ethics in the information society to address human rights protection in the digital realm.
Evidence
Mentions the need to protect and promote human rights in the context of market concentration and corporate accountability for human rights violations in digital value chains
Major Discussion Point
WSIS+20 Review Process and Future Direction
Importance of intergenerational and inter-institutional collaboration
Explanation
Esterhuysen highlights the value of the WSIS process in bringing together people who have been involved since its inception with new voices and perspectives. She argues that this intergenerational and inter-institutional collaboration is a key strength of the WSIS process.
Evidence
References the presence of long-time participants in the room who are still actively implementing WSIS goals on the ground
Major Discussion Point
Inclusivity and Stakeholder Engagement
Agreed with
Thomas Schneider
Renata
Agreed on
Importance of multi-stakeholder collaboration in WSIS implementation
Need to focus on financing mechanisms for digital development
Explanation
Esterhuysen suggests incorporating financing mechanisms as a key area of focus in the WSIS Forum. She argues that this is an important aspect of the Tunis Agenda and is also mentioned in recent global agreements.
Evidence
References the Tunis Agenda, the Pact for the Future, and the Global Digital Compact as documents that mention financing mechanisms
Major Discussion Point
Implementation and Resource Allocation
Importance of international solidarity in digital cooperation
Explanation
Esterhuysen emphasizes the need for international solidarity in digital cooperation. She suggests that the WSIS process should focus on bringing together diverse stakeholders to work collaboratively on digital challenges.
Major Discussion Point
Role of UN Agencies and International Cooperation
Wallace S. Cheng
Speech speed
136 words per minute
Speech length
185 words
Speech time
81 seconds
Need to raise inclusiveness to new level by supporting local innovation
Explanation
Cheng argues for elevating inclusiveness beyond workshops and dialogues to actively supporting local innovation. He suggests creating platforms to match innovators from the Global South with potential partners and funders from various sectors.
Evidence
Proposes building matchmaking platforms between Global South and North, business and civil society, local innovators and philanthropy, government, and multilateral banks
Major Discussion Point
Inclusivity and Stakeholder Engagement
Agreed with
Halima Ismaeel
Unknown speaker
Unknown speaker
Agreed on
Importance of inclusivity in the WSIS process
Halima Ismaeel
Speech speed
113 words per minute
Speech length
188 words
Speech time
99 seconds
Importance of engaging youth and focusing on emerging technologies
Explanation
Ismaeel emphasizes the need to engage youth in the WSIS process and focus on emerging technologies. She suggests expanding the scope of technologies discussed to include digital twin technology, biofiber technology, and subspeed spectrum management.
Evidence
Mentions specific emerging technologies like digital twin technology for auto-repairing submarine cables
Major Discussion Point
Inclusivity and Stakeholder Engagement
Agreed with
Wallace S. Cheng
Unknown speaker
Unknown speaker
Agreed on
Importance of inclusivity in the WSIS process
Unknown speaker
Need to include voices of refugees and displaced persons
Explanation
The speaker from UNHCR emphasizes the importance of including the voices of refugees and displaced persons in the WSIS process. They argue that this inclusion is crucial for a truly multi-stakeholder approach.
Evidence
Mentions 120 million displaced and stateless people whose voices should be included
Major Discussion Point
Inclusivity and Stakeholder Engagement
Agreed with
Wallace S. Cheng
Halima Ismaeel
Agreed on
Importance of inclusivity in the WSIS process
Need to include small and medium enterprises perspective
Explanation
The speaker from the Brazilian Association of Internet Service Providers emphasizes the importance of including the perspective of small and medium enterprises in the WSIS process. They argue that these businesses play a crucial role in ICT and infrastructure development in many countries.
Evidence
Mentions the significant role of small operators in Brazil’s ICT and infrastructure sectors
Major Discussion Point
Inclusivity and Stakeholder Engagement
Agreed with
Wallace S. Cheng
Halima Ismaeel
Agreed on
Importance of inclusivity in the WSIS process
Importance of addressing gender digital gap
Explanation
The speaker emphasizes the need to address the gender digital gap in the WSIS process. They suggest creating a specific action line focused on the gender gap to highlight its importance and drive concrete actions.
Evidence
Cites statistic that only three out of ten professionals working in AI are women
Major Discussion Point
Inclusivity and Stakeholder Engagement
Importance of including rural communities in WSIS process
Explanation
The speaker emphasizes the need to include rural communities in the WSIS process, noting that they often feel left out despite making up a significant portion of the population in many countries. They argue for specific activities at the WSIS Forum to highlight the rural struggle to connect to the internet.
Evidence
Mentions that 65% of the population in the speaker’s part of the world is rural, and many remain disconnected despite advancements in mobile technology
Major Discussion Point
WSIS+20 Review Process and Future Direction
Speaker 4
Speech speed
120 words per minute
Speech length
269 words
Speech time
133 seconds
Importance of private investment and capacity building for ICT development
Explanation
The speaker from the Islamic Development Bank emphasizes the crucial role of private investment and capacity building in ICT development. They argue that these elements are key to addressing the significant financial requirements in the sector.
Evidence
Mentions the bank’s ICT sector policies focusing on infrastructure, regulations, and mainstreaming ICT in development sectors
Major Discussion Point
Implementation and Resource Allocation
Speaker 3
Speech speed
148 words per minute
Speech length
553 words
Speech time
223 seconds
Need for concrete targets for financing digital infrastructure in developing world
Explanation
The speaker argues for the establishment of specific targets for financing digital infrastructure development in the majority world. They emphasize that this is crucial to address the widening digital divide, particularly in data capabilities.
Major Discussion Point
Implementation and Resource Allocation
Gitanjali Sah
Speech speed
129 words per minute
Speech length
3664 words
Speech time
1693 seconds
UN agencies updating WSIS action lines based on expertise
Explanation
Sah explains that UN agencies are updating the WSIS action lines based on their respective areas of expertise. This process aims to reflect the evolution of technology and the achievements of the past 20 years in the WSIS framework.
Evidence
Mentions that updated action lines are available on the WSIS Forum website
Major Discussion Point
Role of UN Agencies and International Cooperation
Torbjörn Fredriksson
Speech speed
152 words per minute
Speech length
427 words
Speech time
167 seconds
Importance of UN agency collaboration in WSIS implementation
Explanation
Fredriksson emphasizes the importance of collaboration among UN agencies in implementing the WSIS outcomes. He highlights the value of the existing division of labor established in the WSIS outcome documents.
Evidence
Mentions specific mechanisms like the Partnership on Measuring ICT for Development and the UN Group on the Information Society
Major Discussion Point
Role of UN Agencies and International Cooperation
Need to leverage existing UN mechanisms for GDC implementation
Explanation
Fredriksson argues for leveraging existing UN mechanisms established through WSIS to support the implementation of the Global Digital Compact. He suggests that this approach would be more efficient than creating new structures.
Evidence
Mentions specific initiatives like E-Trade for All and E-Trade for Women that could be leveraged
Major Discussion Point
Role of UN Agencies and International Cooperation
Agreed with
Craig Stanley-Adamson
Speaker 6
Agreed on
Need to avoid duplication and streamline efforts in implementing Global Digital Compact
Mina Seonmin Jun
Speech speed
134 words per minute
Speech length
169 words
Speech time
75 seconds
Role of regional cooperation in advancing WSIS outcomes
Explanation
Jun highlights the importance of regional cooperation in advancing WSIS outcomes, particularly in the diverse Asia-Pacific region. She emphasizes the progress made through collective efforts of member states, ITU, and other stakeholders in the region.
Evidence
Mentions achievements in ICT infrastructure development, digital inclusion, and adoption of emerging technologies in the Asia-Pacific region
Major Discussion Point
Role of UN Agencies and International Cooperation
Agreements
Agreement Points
Importance of multi-stakeholder collaboration in WSIS implementation
Thomas Schneider
Renata
Anriette Esterhuysen
Multi-stakeholder collaboration is crucial for WSIS implementation
Need to update WSIS action lines to address new technologies and challenges
Importance of intergenerational and inter-institutional collaboration
Speakers emphasized the critical role of inclusive, multi-stakeholder collaboration in implementing WSIS goals and addressing new challenges.
Need to avoid duplication and streamline efforts in implementing Global Digital Compact
Craig Stanley-Adamson
Speaker 6
Torbjörn Fredriksson
Importance of avoiding duplication with Global Digital Compact implementation
WSIS framework should be used to implement Global Digital Compact
Need to leverage existing UN mechanisms for GDC implementation
Speakers agreed on the importance of using existing WSIS frameworks and mechanisms to implement the Global Digital Compact, avoiding duplication of efforts.
Importance of inclusivity in the WSIS process
Wallace S. Cheng
Halima Ismaeel
Unknown speaker
Unknown speaker
Need to raise inclusiveness to new level by supporting local innovation
Importance of engaging youth and focusing on emerging technologies
Need to include voices of refugees and displaced persons
Need to include small and medium enterprises perspective
Multiple speakers emphasized the need for greater inclusivity in the WSIS process, including youth, refugees, SMEs, and local innovators.
Similar Viewpoints
Both speakers argue for leveraging the existing WSIS framework and expertise rather than creating new structures for implementing digital initiatives.
Thomas Schneider
Speaker 6
Importance of building on existing WSIS structure and expertise
WSIS framework should be used to implement Global Digital Compact
These speakers emphasize the importance of addressing human rights and equality issues, particularly gender equality, in the digital context.
Anriette Esterhuysen
Unknown speaker
Need to focus on digital justice and human rights in digital context
Importance of addressing gender digital gap
Unexpected Consensus
Focus on rural communities in WSIS process
Unknown speaker
Wallace S. Cheng
Importance of including rural communities in WSIS process
Need to raise inclusiveness to new level by supporting local innovation
While most discussions focused on global or urban perspectives, there was an unexpected emphasis on including rural communities and supporting local innovation, highlighting a shift towards more inclusive development strategies.
Overall Assessment
Summary
The main areas of agreement include the importance of multi-stakeholder collaboration, leveraging existing WSIS frameworks for new initiatives like the Global Digital Compact, and enhancing inclusivity in the WSIS process.
Consensus level
There is a moderate to high level of consensus among speakers on these key issues. This consensus suggests a shared vision for the future of WSIS, emphasizing collaboration, efficiency, and inclusivity. However, there are still diverse perspectives on specific implementation strategies and priority areas, indicating the need for continued dialogue and negotiation in the WSIS+20 review process.
Differences
Different Viewpoints
Approach to updating WSIS action lines
Renata
Craig Stanley-Adamson
Need to update WSIS action lines to address new technologies and challenges
Importance of avoiding duplication with Global Digital Compact implementation
While Renata emphasizes the need to update WSIS action lines to reflect current technological advancements, Craig Stanley-Adamson cautions against duplicating efforts with the Global Digital Compact implementation.
Unexpected Differences
Focus on specific technologies
Halima Ismaeel
Anriette Esterhuysen
Importance of engaging youth and focusing on emerging technologies
Importance of intergenerational and inter-institutional collaboration
While both speakers emphasize inclusivity, Halima Ismaeel unexpectedly focuses on specific emerging technologies, while Anriette Esterhuysen emphasizes broader intergenerational collaboration.
Overall Assessment
Summary
The main areas of disagreement revolve around the approach to updating WSIS action lines, the integration of the Global Digital Compact, and the focus on specific technologies versus broader collaboration.
Difference level
The level of disagreement is moderate, with most speakers agreeing on the overall goals but differing in their approaches. This suggests a need for further discussion and coordination to align strategies for WSIS implementation and review.
Partial Agreements
All speakers agree on leveraging existing WSIS structures for future implementation, but differ in their emphasis on how to integrate the Global Digital Compact with existing mechanisms.
Thomas Schneider
Speaker 6
Torbjörn Fredriksson
Importance of building on existing WSIS structure and expertise
WSIS framework should be used to implement Global Digital Compact
Need to leverage existing UN mechanisms for GDC implementation
Takeaways
Key Takeaways
The WSIS+20 review process is seen as crucial for shaping the future direction of digital cooperation and development
There is strong support for maintaining the multi-stakeholder approach of WSIS
Participants emphasized the need to avoid duplication between WSIS and the Global Digital Compact implementation
Inclusivity remains a key priority, with calls to better engage youth, rural communities, and underrepresented groups
There is a need to update WSIS action lines to address new technologies and challenges while building on existing frameworks
UN agency collaboration and leveraging existing mechanisms are seen as critical for effective implementation
Resolutions and Action Items
Organize consultative sessions and brainstorming workshops at the WSIS Forum to gather stakeholder input on future directions
Submit proposals and suggestions for the WSIS+20 Forum agenda through the open consultation process by March 14
Appoint co-facilitators for the WSIS+20 review process as soon as possible
Consider including a specific focus on the gender digital gap in WSIS action lines or declarations
Explore ways to better include rural community perspectives in the WSIS process
Unresolved Issues
Specific mechanisms for implementing the Global Digital Compact through existing WSIS frameworks
How to effectively update WSIS action lines without a complete overhaul
Balancing inclusivity with efficiency in the multi-stakeholder process
Addressing financing mechanisms for digital development in the Global South
Role of the Internet Governance Forum (IGF) in relation to the WSIS process moving forward
Suggested Compromises
Use existing WSIS structures and expertise to implement the Global Digital Compact rather than creating new mechanisms
Focus on updating the framing and impact assessment of WSIS action lines rather than completely rewriting them
Balance high-level government participation with increased engagement of grassroots and underrepresented stakeholders
Leverage both UN agency resources and private sector investments to address digital development financing gaps
Thought Provoking Comments
We need to concentrate and focus resources, we need to try and bring the different threads together. … we have no choice but to build on the WSIS process in an inclusive way; inclusive and efficient are sometimes not easy to combine, but I think we have to find a way.
speaker
Thomas Schneider
reason
This comment synthesized many of the previous points about avoiding duplication and leveraging existing structures, while acknowledging the challenges of balancing inclusivity and efficiency.
impact
It helped refocus the discussion on practical next steps and the need to build on existing WSIS processes rather than creating new structures.
To begin with, I think it’s important to again put on the table the foundational principle of governing the internet as a global public good and not as a utility. And this issue should be brought up again.
speaker
Nandini from IT4Change
reason
This comment reframed the discussion around fundamental principles, challenging participants to consider the core purpose of internet governance.
impact
It broadened the scope of the conversation beyond practical implementation to include philosophical considerations about the nature of the internet.
I think WSIS is also for us, as a multi-stakeholder community, an opportunity to find the kind of community and collaboration, the North-South, business, tech, civil society, UN, government solidarity, which I think is important.
speaker
Anriette Esterhuysen
reason
This comment highlighted the unique collaborative potential of WSIS across diverse stakeholders and geographies.
impact
It reinforced the importance of inclusivity and multi-stakeholder approaches, setting the tone for subsequent comments about representation and participation.
The WSIS has two outcome documents. It has the Geneva documents, the Plan of Action, which has the action lines. It also has the Tunis Agenda, which includes financing mechanisms. And I’m just wondering if we can’t this year in the WSIS Forum include those “action lines” in quotation marks (they’re not called that), those outcome areas, and gender, in how we design and structure the event.
speaker
Anriette Esterhuysen
reason
This comment brought attention to often-overlooked aspects of the WSIS process and suggested concrete ways to incorporate them into future events.
impact
It prompted consideration of how to more holistically represent WSIS outcomes in future forums and potentially influenced the structure of upcoming events.
But if it’s possible, I would like to raise my voice and ask all the ambassadors that are here and policymakers so that we can have an action line on gender gap.
speaker
Paola Galvez
reason
This comment made a specific, actionable proposal to address a critical issue (gender gap) within the WSIS framework.
impact
It focused attention on the gender digital divide and potentially catalyzed support for concrete action on this issue in future WSIS processes.
Overall Assessment
These key comments shaped the discussion by repeatedly emphasizing the need for inclusivity and multi-stakeholder approaches while also pushing for concrete actions and structural changes. They helped evolve the conversation from general updates and reflections to more specific proposals for the future of WSIS, particularly in areas like gender equality, rural connectivity, and leveraging existing frameworks. The comments also highlighted tensions between efficiency and inclusivity, and between practical implementation and broader philosophical principles, encouraging a more nuanced and comprehensive approach to WSIS’s future.
Follow-up Questions
How can we ensure effective implementation of the Global Digital Compact (GDC) while avoiding duplication of efforts and wasting resources?
speaker
Moaz (Saudi Arabia)
explanation
This is crucial for streamlining efforts and maximizing the collective impact of various initiatives related to digital development.
How can we update the WSIS action lines to address new and emerging technologies while maintaining their tech-neutral nature?
speaker
Craig Stanley-Adamson (UK)
explanation
This is important to ensure the WSIS framework remains relevant and adaptable to future technological developments.
How can we incorporate the Global Digital Compact objectives into the WSIS review process?
speaker
Moaz (Saudi Arabia)
explanation
This integration is essential for creating a cohesive approach to global digital development.
How can we expand the role of the UN Internet Governance Forum (IGF) to fulfill the mandate given in paragraph 72 of the Tunis Agenda?
speaker
Nandini (IT4Change)
explanation
This is important for strengthening the IGF’s role in the evolving digital governance landscape.
How can we address new digital divides in data capabilities and design concrete targets for financing digital infrastructure in the majority world?
speaker
Nandini (IT4Change)
explanation
This is crucial for ensuring equitable digital development globally.
How can we create a data and AI constitutionalism at the global level?
speaker
Nandini (IT4Change)
explanation
This is important for establishing global norms and standards for data and AI governance.
How can we govern cross-border data flows from a development justice perspective?
speaker
Nandini (IT4Change)
explanation
This is crucial for balancing data flows with development needs and digital sovereignty.
How can we raise inclusiveness to a new level by encouraging and supporting local innovation in the Global South?
speaker
Wallace S. Cheng (Global Ethics)
explanation
This is important for fostering innovation and development in underrepresented regions.
How can we address the digital e-waste crisis and promote the role of reusable global goods in achieving WSIS goals?
speaker
Halima Ismaeel (Youth Advisory Board)
explanation
This is crucial for ensuring sustainable digital development.
How can we incorporate emerging technologies like digital twin technology, biofiber technology, and subspeed spectrum management into the WSIS process?
speaker
Halima Ismaeel (Youth Advisory Board)
explanation
This is important for keeping the WSIS process up-to-date with technological advancements.
How can we include the voices of displaced and stateless people in the WSIS process?
speaker
Mike Walton (UNHCR)
explanation
This is crucial for ensuring truly inclusive digital development that considers vulnerable populations.
Can we create a specific action line on addressing the gender digital gap?
speaker
Paola Galvez
explanation
This is important for focusing efforts on closing the persistent gender gap in digital access and skills.
How can we better include rural communities in the WSIS process and highlight their struggle to connect to the internet?
speaker
Fawad Bajwa
explanation
This is crucial for addressing the digital divide between urban and rural areas.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Open Forum #54 Closing the gender divide for inclusive economic growth
Session at a Glance
Summary
This panel discussion focused on closing the gender digital divide and promoting women’s inclusion in technology and digital spaces. Participants from various sectors, including government, international organizations, and civil society, shared insights and experiences.
Key barriers to closing the gender digital divide were identified, including lack of access to digital skills training, cultural and social norms discouraging women from tech fields, and lack of trust in women’s capabilities in the tech sector. The panel emphasized the need for multifaceted approaches, including regulatory frameworks, financial mechanisms, and community-driven initiatives to address these challenges.
Several concrete initiatives were highlighted, such as Namibia’s efforts to use existing infrastructure like libraries and post offices for digital skills training, and Estonia’s early digitalization journey that included creating computer classes in schools. The importance of role models and mentorship for women in tech was stressed, as well as the need to encourage women’s entrepreneurship in the sector.
The discussion also touched on the role of international frameworks like the EU Gender Action Plan and the Global Digital Compact in promoting gender equality in digital spaces. Panelists called for collaboration between public and private sectors, as well as civil society, to create inclusive digital policies and initiatives.
The panel concluded with calls to action, including encouraging girls to pursue STEM fields, supporting women-led tech initiatives, and ensuring online spaces are safe and respectful of human rights. Overall, the discussion emphasized the critical importance of closing the gender digital divide for inclusive economic growth and social progress.
Keypoints
Major discussion points:
– The persistent gender digital divide and barriers to women’s participation in technology
– The need for multi-stakeholder collaboration between government, private sector, civil society and communities to address the digital gender gap
– The importance of role models, education and skills training to empower women and girls in tech
– Regulatory frameworks and policies to promote gender inclusivity in the digital space
– Community-driven and bottom-up approaches to digital development that center women’s needs
Overall purpose:
The purpose of this panel discussion was to examine the challenges of the gender digital divide and explore solutions and best practices for promoting women’s inclusion and empowerment in the digital economy and technology sector.
Tone:
The overall tone was serious but optimistic. Speakers acknowledged the significant challenges but shared inspiring examples of progress and expressed determination to continue working towards gender equality in tech. There was a sense of urgency but also hope that through collaborative efforts, the digital gender gap can be closed.
Speakers
– Anda Bologa: Moderator, Center for European Policy Analysis
– Christophe Farnaud: EU Ambassador to Saudi Arabia
– H.E Emma Inamutila Theofelus: Minister for Information and Communication Technology, Namibia
– Roy Eriksson: Global Gateway Ambassador, Finland
– Kedi Välba: Chair for Europe of the D4D Hub Private Sector Advisory Group and CEO of Aktors
– Radka Sibille: Digital Counsellor at the EU Delegation to the UN, Geneva
– Ravin Rizgar: Founder and Director of Suli Innovation House
– Valeria Betancourt: Programs Manager, Association for Progressive Communication
Additional speakers:
– Peter Zanga Jackson Jr.: From Liberia
– Damilare Oydele: Works with Library Aid Africa
– Catherine Mumma: Senator from Kenya
Full session report
Closing the Gender Digital Divide: A Multifaceted Approach
This panel discussion, moderated by Anda Bologa from the Center for European Policy Analysis, brought together experts from government, international organisations, and civil society to address the persistent gender digital divide and explore strategies for promoting women’s inclusion in technology and digital spaces.
The Scope of the Problem
EU Ambassador to Saudi Arabia, Christophe Farnaud, set the stage with sobering statistics, noting that globally, there are 244 million more men than women using the Internet. Moreover, women comprise only about 25% of the tech workforce, with an even starker disparity in leadership positions where women hold only 11% of executive roles. These figures underscored the urgency of the discussion and the need for concerted action across multiple sectors.
Key Barriers to Closing the Gender Digital Divide
Participants identified several significant barriers hindering women’s participation in the digital economy:
1. Limited Access to Digital Skills Training: H.E Emma Inamutila Theofelus, Namibian Minister for ICT, emphasised that the lack of opportunities for girls and women to learn essential digital skills is a primary impediment to closing the gender gap.
2. Societal and Cultural Barriers: Minister Theofelus also highlighted the religious and cultural obstacles that discourage girls from pursuing technical fields, noting that such career choices are often viewed as anomalies.
3. Lack of Trust in Women’s Capabilities: Ravin Rizgar, Founder and Director of Suli Innovation House, pointed out the pervasive lack of confidence in women’s abilities within the tech sector.
4. Underrepresentation in Leadership Roles: Radka Sibille, Digital Affairs Advisor at the EU Delegation to the UN in Geneva, stressed the shortage of women in tech leadership and decision-making positions.
5. Structural Discrimination: Valeria Betancourt, Programs Manager at the Association for Progressive Communication, highlighted the persistent structural discrimination and exclusion faced by women in the tech industry.
Strategies and Stakeholder Roles for Promoting Women’s Inclusion in Tech
The panel discussed various approaches to address these challenges and highlighted the crucial role of different stakeholders:
1. Targeted Education and Training: Minister Theofelus advocated for providing coding camps and digital literacy programmes specifically for girls, emphasising the importance of encouraging STEM education at every opportunity, including informal settings.
2. Leveraging Existing Infrastructure: Several speakers suggested utilising existing facilities like libraries, schools, and post offices to deliver digital skills training and provide internet access points, with Estonia cited as a successful example.
3. Creating Supportive Ecosystems: Ravin Rizgar emphasised the importance of establishing women-focused innovation hubs and support networks, as well as building trust between the private sector and women in tech.
4. Promoting Women’s Entrepreneurship: Kedi Välba, Chair for Europe of the D4D Hub Private Sector Advisory Group and CEO of Aktors, stressed the need to boost women’s entrepreneurship in the tech sector.
5. Implementing Inclusive Regulatory Frameworks: Valeria Betancourt called for gender-inclusive regulatory frameworks to enable diverse models of connectivity provision, including community-owned infrastructure.
6. Government Initiatives: Minister Theofelus shared Namibia’s efforts to provide digital infrastructure and training.
7. Private Sector Partnerships: Roy Eriksson, Global Gateway Ambassador from Finland, emphasised the importance of private sector involvement in funding and supporting women in tech.
8. Community-Driven Approaches: Betancourt advocated for bottom-up, community-driven approaches to technology adoption that centre women’s needs and incorporate feminist principles in tech development.
9. Multi-Stakeholder Collaboration: Välba highlighted platforms like the D4D Hub that facilitate cooperation between different sectors to promote gender equality in digital development.
10. International Frameworks: Sibille discussed the role of initiatives like the EU Gender Action Plan in promoting gender equality in digital spaces.
The Global Digital Compact and Gender Equality
Radka Sibille highlighted the significance of the Global Digital Compact, recently adopted at the UN level, in addressing gender equality in tech. The compact outlines shared principles for an open, free, and secure digital future for all, with a strong emphasis on bridging the gender digital divide.
Importance of Women’s Empowerment in the Digital Economy
Speakers unanimously agreed on the critical importance of women’s digital inclusion:
1. Economic Benefits: Christophe Farnaud emphasised the potential economic gains from increased women’s participation in the digital economy.
2. Role Models: Roy Eriksson stressed the need for more women role models in tech to inspire the next generation.
3. Online Safety: Radka Sibille highlighted the importance of addressing online safety and human rights issues for women in digital spaces.
4. Improving Women’s Lives: Valeria Betancourt discussed the potential of digital technologies to enhance various aspects of women’s lives when developed with feminist principles in mind.
5. STEM Education: Minister Theofelus emphasised the importance of encouraging girls to pursue STEM education and careers at every opportunity.
Conclusions and Call to Action
The panel concluded with a strong consensus on the need for multifaceted, collaborative approaches to close the gender digital divide. Key takeaways included:
1. The necessity of multi-stakeholder collaboration between government, private sector, civil society, and communities.
2. The importance of community-driven, bottom-up approaches that put women at the centre of digital inclusion efforts.
3. The need to address deep-rooted cultural and societal barriers discouraging women from tech fields.
4. The critical role of targeted initiatives to support women’s digital skills development and entrepreneurship.
The discussion ended with calls to action from each panelist, urging all stakeholders to work together in implementing concrete measures to promote women’s inclusion in the digital economy. These ranged from providing targeted skills training and creating more inclusive regulatory frameworks to fostering women’s leadership in the tech sector and ensuring that digital technologies are developed with women’s needs and perspectives in mind.
Session Transcript
Anda Bologa: Good morning everyone once again. I’m Anda Bologa. I’m with the Center for European Policy Analysis, moderating this panel today. Thank you so much for joining. You made the right decision despite the fierce competition today. Our goal is to have a serious policy discussion on a very important topic, but at the same time to kickstart this day with the right energy. Thank you so much for joining again. I will now introduce our distinguished panel and the keynote speaker. We have with us today Christophe Farnaud, the EU Ambassador to Saudi Arabia. He will set the stage for our conversation today. He has seen a lot in his long career as a diplomat, and he knows very well that diplomacy, much like technology, is about bringing people together, and this is what we’re trying to do today with this panel. Next we have Minister Emma Theofelus. She’s the Namibian Minister for ICT and she is a bit of a rock star of this conference; everyone that I spoke with mentioned her. It’s extremely impressive. She’s one of the youngest ministers in the world, and she also shows with her work that leadership is about determination and vision. Thank you so much for joining us today. Next we have Ambassador Roy Eriksson. He’s the Global Gateway Ambassador from Finland. We talk a lot about building bridges at this conference, and we use it as a metaphor, but Ambassador Eriksson is here to remind you that the Global Gateway Initiative is doing it quite literally. He will tell us more about it and how those projects are also open to everyone, including women. Thank you so much for joining us. Next we have Kedi Välba, Chair for Europe of the D4D Hub Private Sector Advisory Group and CEO of Aktors. She comes from Estonia, and as you probably all know, digitalization in Estonia is more than a buzzword. It’s a way of life, and she’s the one to tell us about how the private sector and public sector come together and how ideas come into action.
Thank you so much for joining us. Next, we have Radka Sibille. She’s the Digital Affairs Advisor at the EU Delegation to the UN in Geneva. I think each and every one of you that has interacted with the UN knows how overwhelming this interaction can be with such a massive organization. Luckily, we have Radka here, and she can make complex things seem simple and clear. Thank you so much for joining us; she will tell us more about how the EU there tries to turn commitments into real-world change for women. Next, we have Ravin Rizgar. She’s the Founder and Director of Suli Innovation House. She is not only talking about innovation, she’s building it. She’s extremely impressive, and she will talk to us about the initiatives that bring technology to women in the MENA region. Next, we have Valeria Betancourt. She’s the Programs Manager at the Association for Progressive Communication. She will tell us how the magic happens in community-led initiatives and innovation. Very much looking forward to hearing about that. Our session will deal with data, women and inclusivity. We hear a lot about the digital divide at this conference and at other conferences. It’s important to remember that it’s not just about who gets online, but also who gets seen and who gets opportunities. Algorithms trained on biased data, for instance, show higher-paying job ads to men rather than to women. Flawed facial recognition systems disproportionately fail women of color. These are just a few examples, but I’m sure we will kick-start a very robust and interesting conversation today. These aren’t just glitches; they’re symptoms of deeper inequalities. That being said, I’ll leave it to the panel. First and foremost, I welcome to the stage our keynote speaker. Ambassador, the floor is yours.
Christophe Farnaud: Thank you very much. Rockstar Minister, dear colleague Ambassador, excellencies in the room, distinguished guests, ladies and gentlemen, good morning. It’s a real pleasure to open this session today with you. Gender equality, as you know, is a priority for the European Union. And as we all know, it is a multidimensional issue, from education to work, from health to decision-making, and of course, digitalization. And the thing is that the gender digital divide remains a significant obstacle to achieving inclusive economic growth and equality. To launch the discussion, let me start with statistics. According to the ITU, worldwide in 2022, 69% of men were Internet users, compared with 63% of women. This means that globally, there are 244 million more men than women using the Internet. Moreover, recent statistics show that women comprise only about 25% of the tech workforce. This disparity is even bigger in leadership positions, where women hold only 11% of executive roles. In a nutshell, the digital gender gap, despite progress, persists, and closing it by investing in women is not only fair, it’s also the smart thing to do. Again, according to UN estimates, if women’s exclusion from the digital sphere was ended, some $1 trillion could be added to the GDP of low- and middle-income countries. However, women still face challenges in accessing the Internet, developing digital skills, and therefore using digital tools. This is also why gender equality is a core ambition for the European Union. Women are agents of development and change. First, the EU has adopted a Gender Action Plan, now in its third edition, to promote gender equality and women’s empowerment. In particular, the plan states that 85% of all new actions in external relations of the EU will contribute to gender equality.
As regards digitalization specifically, it promotes equal access and participation in shaping the digital landscape, from policy frameworks to infrastructure, from development of skills to financial access. Second, there is the EU Global Gateway Strategy, which aims to accelerate the twin green and digital transitions with partner countries. It promotes a human-centric and inclusive model of digitalization. Its ambitious targets can only be met by acting together in the Team Europe approach, with the Member States of the European Union and other stakeholders, in the spirit of this conference here at the IGF. Third, of course, gender is well mainstreamed in the EU Digital4Development Hub, a strategic multi-stakeholder platform that fosters digital cooperation between Team Europe and its partners. Amongst others, the D4D Hub aims to ensure women and girls are a privileged target of our digital development cooperation. In the framework of this working group, we, together with Member States, focus our attention on the skills women need to take leadership positions in digital, and on fighting cyber violence, among other things. Today, we have put together an inspiring panel, representing all sectors, public, private, and civil society, in which men are, obviously, a visible minority, and whose members come from around the globe. They will share with us best practices, lessons learned, and success stories. We definitely want to work together on recommendations to move forward. So without much further ado, I will give you back the floor and wish you all success in your exchanges. Thank you very much.
Anda Bologa: Thank you so much, Ambassador. Thank you to the fantastic panel, and before we kickstart the conversation, thank you to the European Commission, the D4D Hub, and all our partners in this room today and in cyberspace for championing this cause and for making this discussion possible. In order to kickstart the conversation in a more energetic way, I will give each speaker 30 seconds for a first question, and I want to hear one thing from you, your elevator pitch: what is the biggest barrier to closing the gender digital divide? Just 30 seconds, and then we will deep dive into a broader conversation. Minister, can we start with you? Thank you.
H.E Emma Inamutila Theofelus: Thank you very much and a very good morning to everybody. Very happy to be here. Well, it’s very difficult to condense it in 30 seconds but I’ll try. I think the biggest barrier to ensuring that we close the digital gender gap would be the opportunity for girls and women to actually learn the skills they need. There are so many impediments to them being able to access the material they need to ensure that they get online and basic skills to actually be familiar with the Internet and we experience this a lot in Namibia in the Ministry of ICT that there is a fear of the unknown. Somebody who is not exposed to a smartphone or online spaces, they immediately say, I have lived without it for so long, why do I need it now? But not understanding the opportunities that come with it I think is the biggest barrier because once they understand the opportunities, there becomes a demand. They want to know more, they want to learn more and they demand more from the state, from civil actors to actually make it possible that the right infrastructure is in place. That smartphones are cheaper, that data is cheaper and that they get to learn the right skills to be online actors. Thank you.
Anda Bologa: Thank you so much, very comprehensive. Ambassador Eriksson?
Roy Eriksson: Well, I would like to echo what the Minister said. I think one big stumbling block is also the mental model that the Internet and digitalization concern only men, so that we do not see what possibilities women have. That’s why I’m very grateful to be part of the Global Gateway and different projects where we take for granted that women are also included. We listen to what their needs are and also try to promote that, yes, we’re building connectivity, but it is not connectivity per se; it is so that you can use it. So you can, for example, start a micro company. We have good examples in Africa of women who, once they have the possibility to connect with the world, run small companies making clothes or home cooking or something similar. So it’s empowering women if they just get over the first threshold and get online.
Anda Bologa: Thank you so much. Can we hear from Kedi Välba?
Kedi Välba: Yes, thank you. I will try not to repeat what the others have already mentioned, but one of the things I see as a big barrier is access to services, which we see is a problem in many countries, and also that we don’t teach young kids or girls ICT skills. This is what we are trying to solve in Estonia with different educational programs for girls to be included in the ICT world, to teach them technological knowledge and so on, so that they would have more interest in the field in the future as well. Thank you.
Anda Bologa: Thank you so much. Can we hear from Radka Sibille? 30 seconds.
Radka Sibille: Is it now on? Okay, thank you. Thank you so much. Good morning. I think one of the issues, I totally agree with all that has been said, and one of the issues that I would add is the lack of women at the decision-making and policy-making tables. I mean, here we have a fantastic panel, which is very pro-women, but this is still very much an exception and not a rule. And especially in the tech force, the situation looks a whole lot different when you talk about technologies. And so I think we need more women at the table when these decisions are made so that they already contain from the beginning the gender lens and the considerations of how they will impact women and all genders. Thank you.
Anda Bologa: Thank you so much. Can we hear from Ravin Rizgar? The floor is yours.
Ravin Rizgar: Good morning, everyone. Good to be here with you. In terms of working closely with women, I would say it’s about trust, because the tech workforce is quite dominated by men. When it comes to women trying to find a place in the digital economy and trying to find employment opportunities, there’s no trust from the private sector or the companies in the capacity of women in that area. That’s why, despite upskilling in that sector, they would still have a lower chance of finding employment. That’s why they think, okay, why would I learn this if I’m not going to find a job with it, if I’m not going to make it to the market? So we need to push and build that trust among the private sector companies, so that women can also be in positions where they can actually succeed in the digital economy.
Anda Bologa: Thank you so much. And to close this round of the first question, can we hear from Valeria Betancourt, please?
Valeria Betancourt: Thank you very much. I hope you can hear me. Obviously, I agree with everything that has been said, but in the context of Latin America and the Caribbean, which is still the most unequal region on the planet, I think the main barrier has to do with a real recognition that the gender digital divide is just an expression of persistent structural oppression, exclusion and discrimination, and with the fact that this is not taken into account in the development of digital policies… Thank you so much.
Anda Bologa: Thank you.
H.E Emma Inamutila Theofelus: Thank you so much. I think the point I’m trying to make is that in the informal settlements, people live on the margins of cities and towns but do not necessarily have access to their services. Before we can advance to the high-level skills needed to actually run a small business online, people still need the basics: how to navigate a smartphone, set up an email address, and use social media platforms, not only to access the market but also to do business on Facebook and Twitter. People need to move from one stage to the next to see results, and we do that in three ways. One, it starts with digital literacy training for young women, which we initially ran in six regions of the country. We have since adopted the program internally and expanded it across the whole country, and in doing so we have created a national digital literacy framework to see where the country stands and to help us enhance skills. It comes back to one thing: helping people with training. Reaching everyone directly is practically impossible, so we go back to basics: in the short program we trained 100 girls and 100 women, created one framework, and then train people where they are, so that whoever knows the basics teaches the next person, and that person teaches the next. That’s one way. The second way is coding camps, because we’re trying to introduce coding as a subject in our schools.
As the Ministry of ICT, we have now started coding camps for girls only. The first one we did with the United Nations Economic Commission for Africa, and it was such a huge success. Teaching coding only to girls is an intentional position: we want to start with girls, continue teaching girls coding, and probably expand in the future. You cannot run one program that addresses everybody; it has to be multifaceted, and you have to constantly change the program as you go to different areas, adapting it for each one. In that way, we try to reduce the time and skills gap for women and girls.
Anda Bologa: I would like to move to Ambassador Eriksson. Could you give us the elevator pitch of what the Global Gateway is, and then address in more depth how it builds infrastructure that is gender inclusive? And maybe you can give us an example of a concrete project in that sense. Thank you.
Roy Eriksson: Thank you. First the pitch on Global Gateway, and then I will attempt to also talk about how we can close the gender divide for inclusive economic growth, and from there come back to Global Gateway. Global Gateway is an initiative launched by the European Union and the Commission at the end of 2021, and the aim is to mobilize 300 billion euros for infrastructure projects in emerging markets. I have to underline that it says mobilize 300 billion euros. It doesn’t mean that there is 300 billion somewhere in a magical purse; rather, we are trying to mobilize the private sector with the help of public money to take down the risks of projects, so that we enable projects that would not materialize without this initial small help. Global Gateway is considering investments in five different sectors: digital being one of them, then education, health, climate and renewable energy, and logistics. Finland has chosen to participate in projects mainly in digitalization, but also in education and, to a lesser extent, in climate and renewable energy. But coming back to the topic, first, of course, I’d like to thank the organizers for giving me the opportunity to take the floor, because this is an important forum that tackles very pertinent issues. Technology has emerged as a key question for global development, including inclusive economic growth. The issue of the digital divide is very important, as 2.6 billion people globally still have no access to the Internet. Why is this an issue? It’s because digital data flows in cyberspace, and data is the new oil of the future, if you allow this kind of comparison. Our modern societies, especially urban ones, rely more and more on the use of digital information, and with access to data, people can improve their lives. The digital transformation presents us with a wealth of opportunities that we need to grasp. At the same time, it presents us with challenges that we need to act on, such as the gender divide.
As we move towards a more connected future, closing this gender gap is crucial for ensuring inclusive growth and empowering women and girls to unlock their full potential. With growing threats to peace and security, we need to make sure technology is a force for good and not used as a weapon to amplify conflicts and create further instability and division globally. The UN and its stakeholders will have a key role in making sure we have the tools to manage this process in the years ahead. The Global Digital Compact provides an important framework for enhancing multilateral and multi-stakeholder cooperation for bridging the gender digital divide and enhancing the rights of all women and girls in the digital world. Finland took an active part in the Global Digital Compact negotiations, stressing priorities such as human rights, improving digital connectivity, governing emerging technologies, addressing the gender digital divide, and investing in education and digital skills. Finland underlines a multi-stakeholder approach in the implementation of the compact. My government sees technology as a key issue for our foreign and security policy. We have a strong focus on digital development in our foreign policy, including increased attention to private sector solutions and investments. We are committed to working internationally for digital development that is fair, inclusive, and sustainable, with respect for human rights and gender equality. We are pleased that our priorities, including addressing the gender digital divide, are being extensively covered within the ongoing IGF. Globally, we are far from reaching the target of universal connectivity as set out by the 2030 Agenda. Despite good initiatives and concrete actions, a lot remains to be done in order to tackle this challenge. The Global Digital Compact calls for more cooperation to close the persistent digital divide, especially in remote and underserved areas.
In this regard, I want to highlight the need to mobilize more private investments and capital to meet these needs. Without adequate infrastructure in place, the potential of digitalization remains locked in. And we in Finland stand ready to contribute to this process. If time allows, I would like to share with you some concrete examples of our work towards closing the gender divide. As a co-leader of the UN's Generation Equality Action Coalition on Technology and Innovation for Gender Equality, we emphasize the pivotal role of digital technologies. For the opportunities of technology to be delivered, women and girls must have equal, safe, and quality access to digital technologies, as well as the necessary digital skills. Only by having the whole nation, that is women and girls alongside men, on board digitally can the digital economy reach its full potential. Finland has a decades-long tradition of promoting the rights of women and girls in its foreign and development policy. We believe that multi-stakeholder cooperation is key to achieving this, including engaging the private sector. For instance, through Finnfund, Finland's development financier and impact investor, we invest in technology companies that are committed to advancing gender equality in Africa. Finland also funds various civil society organizations globally. With Finland's support, civil society organizations, for example, enhance the digital safety of women human rights defenders and the digital literacy of women and girls. Through cooperation with UN organizations, Finland supports, for example, the creation of decent work opportunities for women in Tunisia and Morocco that harness technology. We are also partnering with UNICEF to develop virtual safe spaces for women and girls at risk. And then coming to the EU's Digital for Development Hub, or D4D Hub: Finland, alongside other European Union member states and the European Commission, is enhancing digital partnerships globally.
This includes boosting joint investments between the EU and partner countries. The D4D Hub's ambition to contribute to reducing digital divides includes gender digital divides. Finland, for example, takes part in the Team Europe Initiative in Africa, including the Regional Data Governance in Africa initiative. We do not only build physical connectivity, especially underlining the importance of taking connectivity to rural and underserved regions. On top of that, soft infrastructure is just as important: we provide our partners with skills so that they can utilize the new technologies, and in this, particular importance is given to bringing women and girls into the picture as well. In development aid, human rights and gender equality are important cross-cutting themes, and all our projects should have some element of these issues. Lastly, I would like to underline our long-standing support to the IGF, as we are here, which is also doing a lot of work to promote the access of women and girls to the Internet, and in that way promoting our common objectives. In this context, I would like to encourage other partners also to step up their support to the IGF and the multi-stakeholder model of Internet governance, which in essence contributes to closing the gender divide. I look forward to the discussion, and I'm happy to answer any questions you have, especially regarding Global Gateway. Thank you so much.
Anda Bologa: Thank you so much, and thank you for inviting questions on the Global Gateway. I'll happily intervene in the next round and then open it to the public. What we heard from you is the need for the private sector to step in. I have heard this in different forums, but the private sector is often seen stereotypically as a source of funding. I would like to understand better how else it can step in. I'm sure that Kedi Välba is best positioned to tell us how else, besides the funding aspect expected in many projects, the private sector can contribute to closing the gender digital divide. Maybe you can give us more concrete examples from Estonia, including in terms of the leadership of women on boards and panels and so on. The floor is yours. Thank you.
Kedi Välba: Yes. Estonia's history is interesting in the sense that we regained our independence in 1991, and this is when Estonia had to invent what we were going to be, or how we were going to develop. We had very strong leadership, and our prime minister and president were both for the way of digitalization. Our journey began about 30 years ago already, and it was all collaboration between the private sector and the public sector, also including academia. My company is one of the two companies behind the creation of the Estonian national data exchange platform, X-Road, which is also implemented in several other countries across the world, including Finland. This is, for example, one very good example of pure private-public partnership, and also of something that makes sure society is inclusive for all genders and for people with disabilities and so on, because in Estonia we currently like to say that we have 99% of services available online. Actually, it's very close to 100 now, because the last two services, getting married and getting divorced, weren't accessible online yet, but now these services have been digitalized as well. You just need to show up for the final signature, to actually make sure you will meet the person before getting married. And this interoperability platform is a very valuable asset in accessing digital services and in having inclusivity for women and for people in rural areas, because you can have access to healthcare and to financial inclusion, and you can start a business from the convenience of your home within a couple of minutes.
It takes a bit more time to open a bank account, but still you're not dependent on going to some physical office or traveling long distances. Also, at the very beginning of our developments in Estonia, the implementation of the ID card is a very good example, which was also done in collaboration between the private and public sector. The banks and telcos were actually the first private sector institutions to support the government in implementing these solutions, and they were also the first ones to take these services into use. For example, in Estonia the banks were authorized to issue ID cards, because you don't have government offices in all of the rural areas, but we used to have banks there. Now we don't have bank offices anymore either, as they are closing down because everything is moving into the digital world, so you have offices only in the bigger hubs, and if you want, for example, a consultation or to receive a loan, you can do it all digitally. Another example is the certification authority that was providing certificates for digital signatures in Estonia. The digital signing that you could at first do only with the ID card you can now also do with a mobile application solution, Smart-ID, or with Mobile-ID. All of those certificates are examples of what has provided very good access to services. We have also made the ID card mandatory from the age of 16, so all adult citizens in Estonia have it, and it is also a tool which enables you to have access to digital services in the country.
Anda Bologa: Thank you so much. We heard so much about access and skills, which brings us to the first part of the panel, where we discussed the biggest barriers to closing the gender digital divide. And thank you so much for sharing the leadership of Estonia. It's very inspiring. I experienced it myself: I was on a study visit there, and indeed I could see how digitized the medical sector is and how much access that provides to so many categories of people. We talked about bottom-up initiatives. We talked about very practical private sector and public sector initiatives. And now it's time to go back to a very high level, I'm afraid. Radka, I'm looking at you. The EU Gender Action Plan was mentioned at the beginning of this conversation, and I think it's important to also understand the higher and broader frameworks. So could you give us the pitch for the EU Gender Action Plan in an understandable way, easy to grasp, and at the same time tell us how it interacts with the work of the UN and how the EU is working on gender at the UN. Thank you.
Radka Sibille: Thank you so much. Yes, indeed. The EU Gender Action Plan is already in its third edition. It was launched in November 2020, and it's basically a very ambitious framework that binds the EU to mainstream gender and to promote gender equality and women's empowerment in all our actions, including externally in our cooperation with third countries, towards the UN, etc. When it comes to the digitalization sector, for instance, we use the EU Gender Action Plan to focus on the digital skills of women and girls, also in partnership with third countries. As was mentioned, this is important for promoting access to, for instance, financial services, because when you want to launch a startup or any kind of digital platform, you need some kind of capital at the beginning. In that sense, I was just looking at some statistics, and the women-in-tech statistics are saying that, unfortunately, only 2.3 percent of venture capital funding goes to women-led startups. I think that's also related to the trust issues that were raised here before; basically, the prejudice is that women cannot make it in tech. So we're trying to address that also through our projects. And last but not least, we are also trying to promote women's entrepreneurship and women's digital literacy skills. One of the projects, for instance, that we have launched successfully is called Vamos Digital in Mozambique, which promotes digital skills and coding skills in high schools, with a particular focus on women and girls. So again, this is the gender lens that we try to use in all our projects. We do have projects, but we always try to look at how women and girls can participate in those projects and try to bring them to the table.
When it comes to the UN, I'm really grateful that you mentioned it, because the UN just agreed by consensus, at the international level, the Global Digital Compact, which was adopted by the General Assembly in September. It has a very strong human rights-based approach to technology in general, basically saying that all our work on digital cooperation needs to be embedded in international human rights law. In particular, it also mentions gender equality and the empowerment of women as one of its principles. The European Union, when negotiating the Global Digital Compact, was one of the staunchest supporters of that principle, of the human rights-based approach in general, but also of the focus on women and girls, because we really believe it's important to have everybody at the table. And now, as we collectively enter the implementation phase of the Global Digital Compact, we will have to see how we can all work together. The EU is trying to do it as Team Europe, but we will need other partners as well to implement it and close the gender digital divide before the 2030 SDG summit that is already fast approaching. Thank you.
Anda Bologa: This is an incredibly ambitious target to make it by 2030, but as with many other extreme challenges that we're facing, it's important to put our utmost ambition into those goals. We heard about innovation, we heard about startups, and so much about coding and encouraging women to get into tech, not only coding. So maybe we can now hear from Ravin Rizgar about your work on the Suli Innovation House, which is extremely impressive. Maybe you can share with us how you came up with the idea, what challenges you encountered, and what the success story of the Innovation House is at this stage. Thank you.
Ravin Rizgar: Thank you for asking this question, because this all actually starts from my own story, and I really believe that where there is a challenge, where there is a need, that's where innovation comes in. You know, once I graduated, let's say as the top student in my class, studying manufacturing engineering in the tech sector, I tried to find a job in my field, realizing that all my male classmates had found jobs in different factories, including international ones. For me, it was just lots of rejection from lots of companies, including ones that say they are gender inclusive, telling me: the factory is all men, you cannot make it; you have the skills, but we are afraid it would be difficult for you to work in an area where you cannot find other females. It was very disappointing, because I believed in my skills and I was very ambitious, wanting to find a job. And that's where the idea started: looking at all my female friends and all the other females in the community who were all going through the same challenge. So I wanted to build a home where we can share our stories and talk about the challenges we're facing. Even the women who had jobs were being discriminated against on a daily basis, or were not treated the same as the guys where they worked in the private sector, facing many different problems, or were given jobs not matching their qualifications just because they were female and, wanting to get some income, were forced to do that job. So that's where the idea started.
And I am really thankful to all the companies that rejected me, because they are the reason I am where I am now, you know, founding the Suli Innovation House and running it for around three years now, and doing capacity building with another 600 women, of whom I see right now that more than 300 have found jobs, and some of them have their own startups. So I'm very thankful to them. And at the same time, those companies are now sponsoring the programs that we are running, which is very impressive: having them come and support programs that support other women. So that's the story. And in terms of numbers, at the Suli Innovation House, through one of our women-only programs, called Leading Women, we have upskilled 600 women in digital skills, tech skills, and also soft skills, of whom 53% found jobs. We're very happy with the result, and we're planning more to come for 2025. Thank you.
Anda Bologa: Thank you so much. I think the numbers speak for themselves. And thank you so much for sharing your story, which I think is extremely inspiring for other people who hear it. I encourage you all to reach out and seek partnerships in this sense. And speaking about community-driven approaches, I'm moving now to Valeria Betancourt. I would like to hear more about your work and how you shape inclusive policies in the Latin American region. What's the role of communities, and how do you form a community in this space in the first place? Thank you.
Valeria Betancourt: Okay, thank you very much. First, I will obviously be expressing the work that my colleagues and our network do. I think, first of all, it is very important to situate the conversation about inclusion and overcoming the gender digital gap in the moment that we are in. The Global Digital Compact has been mentioned; it provides a very important updated framework for the challenges that we face. But I would also like to bring attention to another process that in our view also represents the perspective and the challenges of the Global South and of countries in the region, which is the plus-20 review of the World Summit on the Information Society. And why do I mention this? Because the emphasis is on people. The vision of the WSIS was precisely on people rather than on the digital, and I think it's time to look again at that vision and to put people and the planet at the center. That vision will also have the opportunity to build on community-based, community-driven approaches which, in tandem with a gender justice lens, open up many more possibilities for digital policies to recognize, as I was saying, that the problem is structural: that technologies are enablers of rights, of development, of inclusion, but are also the tools and the spaces where rights are set back. It will also help us to overcome the assumption that economic growth equates to or equals development and inclusion, and the assumption that solutions should come from a single stakeholder. What we have seen in practice is that only through meaningful, effective collaboration, alliances, and partnerships between the public sector, the private sector, academia, civil society organizations, and the communities as central actors can we actually respond to the particular way in which communities, and in which women, girls, and non-conforming people, experience technology.
So I really want to highlight the fact that these bottom-up approaches, with the communities at the center, are also the way in which these alliances can respond to particular challenges. There are several examples of how we can place this gender justice and community-driven approach at the core of interventions oriented to bridging the digital gap. For instance, there are several experiences with women's circles that are part of decision-making bodies, deciding not only on the governance of the technologies but also on how the technologies are developed and designed in the first place. We need sound and solid methodologies to really incorporate those approaches from the beginning, because that's the only way in which bottom-up approaches can respond to specific realities and needs, if we want to narrow the digital divide. Those approaches will also help to counteract the pinkwashing that has happened through several interventions which, even though they have good intentions, are not able to really help women overcome the barriers they face, because of their distance from the real needs and the particularities of those barriers. So I think that's a good way to do it, and the different stakeholders working together can put the communities at the center and design solutions that meet particular needs and contexts.
Anda Bologa: Thank you so much for sharing that. This panel is in itself an illustration that multi-stakeholderism is not just a buzzword. We hear from governments and the actions they take, and we hear how this resonates with the private sector, which is equally keen to invest, for instance, if we take the coding camps. The government is interested in that; the private sector is interested because inevitably it's going to gain a great workforce. We have innovation houses that help and empower women and give them the space and the trust to take part in those activities. We have the international institutions that come with broader frameworks. We have the communities that are there. So that's a very beautiful illustration of how this multi-stakeholderism works in practice and is more tangible than the words in high-level documents often make it seem. That being said, thank you so much to our public; you've been very patient and very interested. Now maybe I'll give the floor to Ambassador Eriksson for a moment, and then I'll open it to the audience in the room and online. Thank you.
Roy Eriksson: Thank you. Just a quick comment after listening to this very fine panel. One thing that I think is important is for women to have role models, because I know a person, a woman in Finland with a similar background to what you mentioned. She couldn't get a job in the tech sector, and she was frustrated about it, so she started a movement called Chicks Can Code. She organized workshops without knowing whether there was any demand, whether there would be anybody interested. So she started small, and the house was full; then, okay, let's have another one, and again the house was full, and again and again. So she actually started her own business and holds workshops throughout Finland on how women can also get involved in coding, showing that it's not something you need a Y chromosome to do; women can do it as well. And she has been a great success. So in closing the gender gap, we need good role models. Now girls have seen that she's been a success, so coding is no longer something just for boys; girls are interested in it as well. And it has had a beneficial outcome for the whole Finnish economy, because our gaming sector is now really booming and successful, and it couldn't be successful without the women. So that's just what I wanted to say. Thank you.
Anda Bologa: Thank you so much. And if you allow me to add one more thing: you mentioned role models, and at the same time, we need allies. Often gender policies are left to women, and it's often the women who feel the pressure and the responsibility to fight for them. It's extremely important to have allies and have everyone get involved in this, because the economic benefits, and the benefits of this in general, concern everyone. And thank you for being an ally with us today on the panel. And thank you to everyone in the public who is here. That being said, I'm going to open it to everyone; I know we have, among others, people from the IT industry, especially from video games. So, opening it to the public in the room and online, perhaps we can start with the room. Do we have microphones? Thank you.
Audience: Thank you, Excellency. I feel called upon to say a few words, because actually we would also want to continue this amazing discussion, and I have even more inspiration for an upcoming meeting: today at 15:45, as the World Intellectual Property Organization, the UN agency helping innovators and creators, we will continue with Women in Games and Apps: Innovation, Creativity, Intellectual Property, where we will also host role models from the industry, industry leaders. We will as well try to highlight one of the skills that is important in entrepreneurship in games and apps: understanding how to manage your creativity and innovation, which is done through the intellectual property system and the IP tools that we are also trying to highlight, raise awareness of, and educate about. So, of course, congratulations on this event, and I feel further inspired by all the words mentioned here today.
Anda Bologa: Thank you so much. We see synergies not only between the speakers on the panel but also between the events today, so that's amazing.
Anda Bologa: Thank you so much, sir. The floor is yours.
Peter Zanga Jackson Jr.: Good morning, everybody. My name is Peter Zanga Jackson Jr. I'm from Liberia, and I firstly want to thank our panelists for all that they have said. But during the intro, one of them said that the tech force is dominated by men. The question is: why is the tech force dominated by men? Are men stopping women from getting involved? Is it a behavior of women not to get involved? And so, what can we do to get women involved in tech? Because once you use the word tech, they say, oh, it's for men. Even in schools, when you talk about the sciences, the physics, the mathematics, they say, oh, it's for men, and so you have limited women's participation in technology. Now the challenge is back to them, and they need to work on that behavior of not getting involved. I believe the more women are involved in technology, the more they will be heard; they will not be marginalized. Things will go the way we all expect them to go. And so I put it to them: what can they do to work on the minds of our girls to get them involved? What can they do? This is my question.
Anda Bologa: Thank you so much. Perhaps we'll take another intervention, and then I'll bring the question back to the panel.
Damilare Oyedele: Thank you so much. I have a question and a comment together. My name is Damilare Oyedele. I work with Library Aid Africa. My comment and question are about empowering women and young girls with digital skills and literacy skills to explore the economic benefits of the digital economy, right? And there's a huge potential here to leverage libraries as key partners and access points for digital literacy skills and empowerment, because libraries already have existing infrastructure and space to deliver these trainings. And this is something we have done before. In particular, we did this in Namibia through one of our projects, where one of our fellows trained young women in the community in digital literacy skills, and not just that, we also empowered librarians and libraries with the skills and infrastructure to deliver this training. Now that we're trying to scale up this intervention and engagement to build more collaborations with libraries, I'm curious to hear from the panelists: what are the pathways to engage libraries in your plans and collaborative engagements to scale digital skills development and empowerment for women and young girls? Thank you.
Anda Bologa: Thank you so much. An excellent question. Another intervention, please. And then we move to the panel.
Catherine Mumma: Thank you. My name is Catherine Mumma. I'm a senator from Kenya. I just want to commend the panel; I think I enjoyed every presentation. We must acknowledge that, the world over, we women are the majority, but we are not as present as we ought to be, and the tech world has run way ahead and left women behind. We need to acknowledge that it has helped to make visible the actual patriarchal concerns that exist in societies across the world. I found the solutions you're giving very, very practical. I think I'll speak to you, Honorable Minister from Namibia: what you're doing can happen in Kenya, can happen in Uganda, the many proposals you're suggesting. By the way, my daughter is an engineer, and she's just frustrated in some company, going through exactly what you're saying. She's there, she has capability, but the men won't trust her to do this. I'll be happy to engage with you and, back in Kenya, I will take it to the Kenyan women's movement to see how we can engage further and help get more women on board. Thank you very much.
Anda Bologa: Thank you so much for this round. We'll bring it back to the panel, and then we'll open it again to the floor. On top of everything that was said, I also encourage you to approach Ambassador Eriksson and talk about Global Gateway, the types of projects that can be financed, and the types of projects that can be created under the Global Gateway umbrella. But bringing us back to the first question about how women can get involved and what women must do, perhaps I'll give the floor to Valeria Betancourt. Since you have the vision at the community level, maybe you can give us a few clear examples of how to approach and perhaps frame this question. Thank you.
Valeria Betancourt: Absolutely. Thank you very much. First, I think we have to understand, as my colleagues on the panel have been saying, that the interventions have to be multidimensional and multifaceted. First, it is very important to enable regulatory frameworks that allow for the coexistence of different models and different ways of providing connectivity, because that will create the conditions, through the regulatory and legal frameworks, to deploy, for instance, community networks and infrastructure that is owned by the community in a way that ensures financial sustainability. So the regulatory framework is one level: allowing, as I am saying, the existence of complementary models for bridging the digital gap. Second, there are the financial mechanisms. As part of the implementation of the Global Digital Compact, but also of the review of the last 20 years of the World Summit on the Information Society, we are witnessing a revamping of the multilateral agenda, including a restructuring of the global financial architecture, which also has to be mirrored in the financial mechanisms at the national level. For the Global South, and I guess in many other regions as well, but in Latin America in particular, there is the role that universal service funds can play in ensuring the financial sustainability of women-led initiatives and women-led enterprises based on digital technologies. I think that's very important: ensuring this kind of deep, focused investment in the financial sustainability of community-driven initiatives.
Also, there are very important communities with a very strong feminist imprint in Latin America. That's the case of community networks in Brazil, in Argentina, in Colombia, in Mexico, which have been able, with the community working together but also in partnerships, as I was saying, to put into practice the feminist principles of the Internet, for instance, and also some other guidelines that different groups have developed in order to make sure that digital policies are truly inclusive and gender sensitive. I would like to mention in particular two initiatives that are happening in Latin America. First is a mapping of best practices that the Inter-American Telecommunication Commission has started in order to shed some light on what the best practices are for ensuring that digital policies are gender inclusive and gender sensitive. Member states have been invited to provide information, and with that information we can understand what is happening at the level of community networks and systematize that knowledge and experience. Second, there is a very concrete and very recent example in Colombia: a partnership between the European Union and Colnodo around a project oriented precisely towards connecting the unconnected in rural areas of the country. It's interesting to see how that partnership is allowing intervention at the different levels I have mentioned: encouraging and promoting a policy dialogue in the country that will help to work on the regulatory framework and on digital policies that are sustainable over time and developed closely with the communities, but also working at the community level to deploy not only the infrastructure but also capacity-building programs for women, girls, and non-conforming people so that they can make the most of the use of technologies.
So those examples are there, and they are very well documented. I think we can build on and learn a lot from those experiences in different countries, and also from regulatory developments in Mexico, for instance, allowing the existence and deployment of indigenous community networks, where women have a very central role not only in the technical running of the community network but also in the governance of those initiatives.
Anda Bologa: Thank you so much. And before opening it to the panel, maybe I can add something on women getting involved and on regulation. It sounds very abstract, but in reality, the buzzword of this conference is artificial intelligence, and one way in which artificial intelligence is being used, if there is no regulatory framework to prohibit it, is for employment decisions. The way artificial intelligence works is as an algorithm trained on data, and if, historically, these types of jobs, particularly jobs in tech and higher-paid jobs, were given only to men, then the algorithm, when it looks through the CVs, will just pick out the best CVs of men, and the women will not receive an invitation to the interview. So this is a very practical way in which a regulatory intervention, under which companies are not allowed to do that without human oversight, clearly opens a path for women to be invited to the interview. And then it's up to them, of course, to be as competitive as the male candidates for those positions. But in a very pragmatic way, that's how, through regulatory frameworks, you can control the use of these technologies; that's a practical example from the regulatory framework of the EU. But going back to the panel, we heard two questions. If anyone would like to address the first one and speak about libraries, please.
H.E Emma Inamutila Theofelus: Thank you. I also wanted to add something to the first question, and I know it will prolong it because she has really adequately handled it. But the reality is that there are religious and cultural barriers to girls even thinking of going into a technical field, even just in the discussion in the house. You almost become an anomaly if you want to go outside of the traditional types of employment. When I was in high school, between the last two grades, grade 11 and 12, you're allowed to choose subjects that you want to do, at least three. I didn't want to do home economics, where you learn how to knit, how to keep a home. I didn't want to learn that in school; I learned that at home already with my parents and my guardians. I wanted to do computer studies, and in that class of 35 students, we were only two females. Nobody stopped the other girls from choosing computer studies, but already there's a lot of social engineering of the girl child. Just the conversations about what is possible and what is not possible. Nobody said, no, no, don't go do that, but it becomes subconscious, without you as the person knowing, and even without the people around you knowing, that they're socially engineering you not to take some of these career paths that are not primarily geared towards girls. So I think it's a serious conversation we need to have, because before a child decides at university, I'm going to choose this particular career, somebody should have been in their ear for years and years on end, telling them what is possible for them and what is not. So it's silent. It's not out there, but it's what is impacting girls' entry into science and tech, and even into the workplace.
There's social engineering, a lot of intimidation. You know, being a young junior officer and wanting to be an executive, and already the environment is telling you that you might not succeed, or even make it to the interview. And I think for women in politics, women in business, it's the same thing, just in different sectors: patriarchy showing up in various ways. So I just wanted to add onto that response to the question. On the libraries question, I think in our model in Namibia, we've realized that every standing asset is an opportunity for us to do the training. We use libraries as a place where we can give computer literacy training. The basics: how to switch on a computer, how to type a CV, how to print. For all of us who might have advanced skills, it might seem like, I mean, how can somebody not know that? But there are a lot of people who do not know that, and that's why we use the libraries. Secondly, we use what we call ICT centers that we share between the Ministry of ICT and the Ministry of Youth. These are targeted at young people, both male and female, around training. It's already standing infrastructure; the government invested in it 15, 20 years ago. We just help refurbish it, and putting in one, two, three, sometimes even five laptops makes such a big difference in ensuring that that space becomes an ICT hub for somebody who wants to learn computer and digital literacy. Then the third place we are now exploring is post offices. We have post offices almost everywhere in the country, even in the most rural parts. People get mail because that's the infrastructure that was invested in a few years back. We're now trying to see how we can ensure that at least every post office has a computer or laptop for the community to learn how to use online spaces and to impart digital literacy skills.
We almost anticipate that we'll have an even bigger impact, training more women and girls in using computers and smartphones as a way to ensure that they get the literacy skills. Every standing infrastructure is an opportunity to train, to impart skills. It's just how policymakers, community actors, and the private sector come together to use that space to impart those necessary skills. And I've challenged our private sector to say, by the year 2030, no person, young or old, should be without basic digital literacy skills in the Namibian nation. We're only 3 million people. So I can almost believe that by 2030, every citizen must at least know how to use a laptop, must at least know how to use a smartphone to run something, to have an email, to be able to put their business online, how to navigate websites, how to apply for a job online. The basic skills to get somebody from level zero to level one or two or three. And we don't want to take that for granted. We don't want that divide to grow bigger. We want to close the gap around digital literacy skills, whether from a gender lens or any other lens possible in the country. Thank you.
Anda Bologa: Thank you so much for your answer. Please.
Kedi Välba: Thank you. When I was going to university, my first degree was business administration and languages. And when my uncle heard that I was going to attend university, he said, nonsense, women and business. That's nonsense. You should find a rich husband, stay at home, have children, and not go to university. So this is what we're talking about. It's happening everywhere all over the world, and this is the environment we come from. But I wanted to give an example, also about libraries, from Estonia. This is how we actually started our digitalisation journey: we created internet access points. We also used the libraries, and we created computer classes, one computer class in each school all over Estonia, and provided computer lessons, not only to the general public but also, when we're talking about gender equality, to the elderly, because we also need to talk about the inclusivity of the elderly: they also need to access the services. So this is what helped us a lot. And this was also something done in collaboration between the private and public sectors, largely funded by the private sector, because the computers for the schools were coming from the private sector. And this is where I'm also happy to represent the Digital for Development Hub here, which is an initiative, a strategic platform, created by the European Union and member states to boost digital transformation and the investments around it. The focus is very human-centric, and gender is a cross-cutting topic in all of the initiatives. And I'm very happy to see that. Well, I'm here because I'm representing the private sector advisory group. We also have the civil society advisory group. And I think this is a good example of a platform for collaboration. We need more of these neutral platforms where we can talk together, the private sector and the public sector, because innovation usually happens in the private sector.
But somehow we are afraid of the private sector, we don't trust the public sector. We need those collaboration platforms where we can actually discuss with each other, and then we will have better ideas and inclusivity. And one of the thoughts I wanted to add here is that we have heard about women having difficulties in entering the job market and the tech field and so on. So what we need to do is boost and encourage women's entrepreneurship, to have more women founders, women CEOs, and innovation hubs like the one we have, because women are more eager to take women on board. And this is also what I have seen, having mentored young girls a lot: when they end up in a very male-focused company, they struggle a lot and have a lot of issues. Quite often they're seen as an assistant, just bringing coffee. So we need to boost women's entrepreneurship, and that will also be something that makes a change. Thank you.
Anda Bologa: Thank you so much. Thank you again for all your interventions. Thank you for your patience and for being here. We're running out of time, so unfortunately we have just one last intervention from each panelist. I'll go in the reverse of the order I started with, and I will ask you to think of one key insight that you got from this panel and try to perhaps frame it as a call to action. I will start with Valeria Betancourt, please.
Valeria Betancourt: A call to action. Sorry, can you hear me? Yes, a call to action.
Anda Bologa: 30 seconds, please.
Valeria Betancourt: Yes, my call to action would be precisely to take the responsibility that every stakeholder has for ensuring that we can bridge the digital gap. As I was saying, it is not a matter that one single actor can provide the responses to, and I think bottom-up approaches have historically shown to be the best possible way, putting people and communities at the center to provide solutions that are meaningful to people, so digital technologies can help improve lives and open opportunities to access not only economic justice, but also social justice and environmental justice, which for communities is essential in order to ensure the integrity, safety, and well-being of themselves and the planet.
Anda Bologa: Thank you so much. Maybe we can move to Kedi Välba in the interest of passing the microphone around. Thank you.
Kedi Välba: Yes, thank you. We have heard a lot of good thoughts here today. And I would just like the conversation to continue, in the sense that it would not only remain a conversation, but would be followed by very concrete actions. And under the D4D Hub initiative, this is something we're looking into very seriously, as gender equality is one of the aspects included in every project.
Anda Bologa: Thank you so much. Ambassador Ericsson, 30 seconds.
Roy Eriksson: 30 seconds. Yes: take action, not just talk, about bridging the digital divide and empowering women. When we are doing a project, we really need to make sure that it is also benefiting women, that everybody is on board.
Anda Bologa: Thank you. Minister?
H.E Emma Inamutila Theofelus: If all of us are at the IGF, it means that we have a role to play in our communities. We are respectable people. There are people who value us. So everywhere you go, be it a church in your community, a family gathering, a wedding, encourage the young girls to take up STEM fields, encourage them to go above and beyond the norm, encourage them to explore job and study careers in the tech field, because your word could actually be the encouragement for that young girl or that young woman to consider something like that. So let's not take words for granted. They go a long way in encouraging people to take up careers that otherwise go against the norm. Thank you.
Anda Bologa: Thank you so much. Radka Sibille, 30 seconds.
Radka Sibille: Thank you so much. My call to action would be to make the online space safe and respectful of human rights, just as they should be respected offline. Because if it's not safe, even if you then bring online and connect all those who are unconnected today, including women, they will be dissuaded from using the online space if it's full of harassment, hate speech, misogyny. So human rights online should be protected as they are offline. Thank you.
Anda Bologa: Thank you so much. Ravin Rizgar.
Ravin Rizgar: Seeing other stories like mine happening in Finland and other areas of the world really pushes us all to work on empowerment. So we have to empower women and continue this in every part of the world. And please, if you have a friend or a family member who is female and trying to start something, whether a startup or a community initiative, just show support. Even if it's just a few words of encouragement, it would be really appreciated. So just keep supporting. Thank you.
Anda Bologa: Thank you so much. Thank you to the wonderful public today and to the panelists. And I encourage you to go to them, speak up, and look for partnerships and multi-stakeholder perspectives. Thank you. Transcribed by https://otter.ai
H.E Emma Inamutila Theofelus
Speech speed
151 words per minute
Speech length
1745 words
Speech time
690 seconds
Limited access to digital skills training for women and girls
Explanation
The Minister highlights that a significant barrier to closing the gender digital gap is the lack of opportunities for girls and women to learn necessary digital skills. This includes basic familiarity with the internet and online spaces.
Evidence
Experience from Namibia’s Ministry of ICT shows many women have a fear of the unknown when it comes to technology, often saying they’ve lived without it for so long and don’t see why they need it now.
Major Discussion Point
Barriers to closing the gender digital divide
Agreed with
Ravin Rizgar
Kedi Välba
Agreed on
Lack of access to digital skills training for women and girls
Differed with
Ravin Rizgar
Radka Sibille
Valeria Betancourt
Differed on
Primary barrier to closing the gender digital divide
Societal and cultural barriers discouraging girls from tech fields
Explanation
The Minister points out that religious and cultural barriers often prevent girls from even considering technical fields. There is often social engineering that steers girls away from subjects like computer studies.
Evidence
Personal experience of being one of only two females in a computer studies class of 35 students in high school.
Major Discussion Point
Barriers to closing the gender digital divide
Agreed with
Ravin Rizgar
Kedi Välba
Agreed on
Societal and cultural barriers hindering women’s participation in tech
Providing coding camps and digital literacy programs for girls
Explanation
The Minister describes initiatives in Namibia to promote digital skills among girls. This includes coding camps specifically for girls and efforts to introduce coding as a subject in schools.
Evidence
Namibia has started coding camps for girls only, with the first one done in collaboration with the United Nations Economic Commission for Africa.
Major Discussion Point
Strategies to promote women’s inclusion in tech
Agreed with
Ravin Rizgar
Kedi Välba
Agreed on
Lack of access to digital skills training for women and girls
Leveraging existing infrastructure like libraries for digital skills training
Explanation
The Minister explains how Namibia is using existing infrastructure like libraries, ICT centers, and post offices to provide digital literacy training. This approach maximizes the use of available resources to reach more people.
Evidence
Examples include using libraries for basic computer literacy training, ICT centers for youth training, and plans to equip post offices with computers for community use.
Major Discussion Point
Strategies to promote women’s inclusion in tech
Encouraging girls to pursue STEM education and careers
Explanation
The Minister emphasizes the importance of encouraging young girls to enter STEM fields and explore careers in technology. She calls on everyone to use their influence to promote these opportunities for girls.
Major Discussion Point
Importance of women’s empowerment in the digital economy
Ravin Rizgar
Speech speed
157 words per minute
Speech length
821 words
Speech time
313 seconds
Lack of trust in women’s capabilities in tech sectors
Explanation
Rizgar points out that there is a lack of trust in the private sector regarding women’s capabilities in the digital economy. This leads to fewer employment opportunities for women in tech, even when they have the necessary skills.
Evidence
Personal experience of being rejected for jobs in the tech sector despite being a top student in manufacturing engineering.
Major Discussion Point
Barriers to closing the gender digital divide
Agreed with
H.E Emma Inamutila Theofelus
Kedi Välba
Agreed on
Societal and cultural barriers hindering women’s participation in tech
Differed with
H.E Emma Inamutila Theofelus
Radka Sibille
Valeria Betancourt
Differed on
Primary barrier to closing the gender digital divide
Creating women-focused innovation hubs and support networks
Explanation
Rizgar describes her initiative, Sully Innovation House, which provides a space for women to share their stories and challenges in the tech sector. This hub offers capacity building and support for women in technology.
Evidence
Sully Innovation House has trained over 600 women in digital skills, with more than 300 finding jobs and some starting their own businesses.
Major Discussion Point
Strategies to promote women’s inclusion in tech
Agreed with
H.E Emma Inamutila Theofelus
Kedi Välba
Agreed on
Lack of access to digital skills training for women and girls
Radka Sibille
Speech speed
168 words per minute
Speech length
756 words
Speech time
269 seconds
Lack of women in tech leadership and decision-making roles
Explanation
Sibille highlights the underrepresentation of women in decision-making and policy-making roles in the tech sector. She argues that having more women at these tables would ensure that gender considerations are included from the beginning in technology decisions.
Major Discussion Point
Barriers to closing the gender digital divide
Differed with
H.E Emma Inamutila Theofelus
Ravin Rizgar
Valeria Betancourt
Differed on
Primary barrier to closing the gender digital divide
Addressing online safety and human rights issues for women
Explanation
Sibille emphasizes the importance of making the online space safe and respectful of human rights for women. She argues that if the online environment is not safe, women will be dissuaded from using it, even if they have access.
Major Discussion Point
Importance of women’s empowerment in the digital economy
Valeria Betancourt
Speech speed
135 words per minute
Speech length
1368 words
Speech time
607 seconds
Persistent structural discrimination and exclusion
Explanation
Betancourt argues that the gender-digital divide is an expression of persistent structural oppression, exclusion, and discrimination. She emphasizes that this structural issue is not adequately addressed in the development of digital policies.
Major Discussion Point
Barriers to closing the gender digital divide
Differed with
H.E Emma Inamutila Theofelus
Ravin Rizgar
Radka Sibille
Differed on
Primary barrier to closing the gender digital divide
Implementing gender-inclusive regulatory frameworks
Explanation
Betancourt stresses the importance of enabling regulatory frameworks that allow for different models of connectivity and digital inclusion. She argues that these frameworks should ensure financial sustainability for community-driven initiatives.
Evidence
Examples of regulatory developments in Mexico allowing for the existence and deployment of indigenous community networks, where women play a central role in governance.
Major Discussion Point
Strategies to promote women’s inclusion in tech
Community-driven approaches to technology adoption
Explanation
Betancourt advocates for bottom-up, community-driven approaches to bridging the digital divide. She argues that putting communities at the center leads to solutions that are more meaningful and responsive to people’s needs.
Major Discussion Point
Role of different stakeholders in bridging the gender digital divide
Potential of digital technologies to improve women’s lives
Explanation
Betancourt emphasizes that digital technologies can help improve lives and open opportunities for women. She argues that these technologies can contribute not only to economic justice but also to social and environmental justice.
Major Discussion Point
Importance of women’s empowerment in the digital economy
Kedi Välba
Speech speed
125 words per minute
Speech length
1253 words
Speech time
597 seconds
Promoting women’s entrepreneurship in tech
Explanation
Välba emphasizes the importance of encouraging women’s entrepreneurship in the tech sector. She argues that having more women founders and CEOs can help create more inclusive work environments for women in tech.
Evidence
Personal observation that women-led companies are more likely to hire and support other women in tech roles.
Major Discussion Point
Strategies to promote women’s inclusion in tech
Agreed with
H.E Emma Inamutila Theofelus
Ravin Rizgar
Agreed on
Societal and cultural barriers hindering women’s participation in tech
Multi-stakeholder collaboration platforms like D4D Hub
Explanation
Välba highlights the importance of neutral platforms like the D4D Hub that bring together private sector, public sector, and civil society. She argues that such collaboration is crucial for developing inclusive digital solutions.
Evidence
The D4D Hub initiative includes gender equality as a cross-cutting topic in all its projects.
Major Discussion Point
Role of different stakeholders in bridging the gender digital divide
Roy Eriksson
Speech speed
124 words per minute
Speech length
1633 words
Speech time
784 seconds
Private sector partnerships to fund and support women in tech
Explanation
Eriksson discusses the Global Gateway initiative, which aims to mobilize 300 billion euros for infrastructure projects in emerging markets. He emphasizes the importance of private sector involvement in these projects.
Evidence
Global Gateway focuses on investments in five sectors, including digital, education, health, climate and renewable energy, and logistics.
Major Discussion Point
Role of different stakeholders in bridging the gender digital divide
Need for women role models in tech
Explanation
Eriksson emphasizes the importance of having women role models in the tech sector. He argues that seeing successful women in tech can inspire and encourage more girls to enter the field.
Evidence
Example of a woman in Finland who started a movement called ‘Chicks Can Code’ after facing discrimination in the tech sector.
Major Discussion Point
Importance of women’s empowerment in the digital economy
Christophe Farnaud
Speech speed
125 words per minute
Speech length
541 words
Speech time
258 seconds
Economic benefits of women’s digital inclusion
Explanation
Farnaud highlights the significant economic potential of closing the gender digital divide. He argues that investing in women’s digital inclusion is not only fair but also economically beneficial.
Evidence
UN estimates suggest that ending women’s exclusion from the digital sphere could add $1 trillion to the GDP of low- and middle-income countries.
Major Discussion Point
Importance of women’s empowerment in the digital economy
Agreements
Agreement Points
Lack of access to digital skills training for women and girls
H.E Emma Inamutila Theofelus
Ravin Rizgar
Kedi Välba
Limited access to digital skills training for women and girls
Creating women-focused innovation hubs and support networks
Providing coding camps and digital literacy programs for girls
Multiple speakers emphasized the importance of providing targeted digital skills training and support networks for women and girls to bridge the gender digital divide.
Societal and cultural barriers hindering women’s participation in tech
H.E Emma Inamutila Theofelus
Ravin Rizgar
Kedi Välba
Societal and cultural barriers discouraging girls from tech fields
Lack of trust in women’s capabilities in tech sectors
Promoting women’s entrepreneurship in tech
Speakers agreed that societal and cultural barriers, including lack of trust in women’s capabilities, discourage women from entering and succeeding in tech fields.
Similar Viewpoints
Both speakers highlighted the systemic nature of gender discrimination in the tech sector, emphasizing the need for structural changes and increased representation of women in leadership roles.
Valeria Betancourt
Radka Sibille
Persistent structural discrimination and exclusion
Lack of women in tech leadership and decision-making roles
Both speakers emphasized the importance of multi-stakeholder collaboration, particularly involving the private sector, in addressing the gender digital divide.
Roy Eriksson
Kedi Välba
Private sector partnerships to fund and support women in tech
Multi-stakeholder collaboration platforms like D4D Hub
Unexpected Consensus
Importance of community-driven approaches
Valeria Betancourt
H.E Emma Inamutila Theofelus
Community-driven approaches to technology adoption
Leveraging existing infrastructure like libraries for digital skills training
Despite coming from different backgrounds (civil society and government), both speakers emphasized the importance of leveraging community resources and bottom-up approaches in bridging the digital divide.
Overall Assessment
Summary
The speakers generally agreed on the need for targeted interventions to support women in tech, the importance of addressing societal and cultural barriers, and the value of multi-stakeholder collaboration. There was also consensus on the economic benefits of closing the gender digital divide.
Consensus level
High level of consensus among speakers, with complementary perspectives from different sectors (government, civil society, private sector) reinforcing the urgency and multifaceted nature of addressing the gender digital divide. This consensus suggests a strong foundation for collaborative efforts to promote women’s inclusion in the digital economy.
Differences
Different Viewpoints
Primary barrier to closing the gender digital divide
H.E Emma Inamutila Theofelus
Ravin Rizgar
Radka Sibille
Valeria Betancourt
Limited access to digital skills training for women and girls
Lack of trust in women’s capabilities in tech sectors
Lack of women in tech leadership and decision-making roles
Persistent structural discrimination and exclusion
While all speakers agreed that barriers exist, they emphasized different primary factors contributing to the gender digital divide.
Unexpected Differences
Overall Assessment
Summary
The main areas of disagreement centered around identifying the primary barriers to closing the gender digital divide and the most effective strategies to address these barriers.
Difference Level
The level of disagreement among speakers was relatively low. Most speakers presented complementary rather than conflicting viewpoints, focusing on different aspects of the same overarching issue. This suggests a multifaceted approach may be necessary to address the gender digital divide effectively.
Partial Agreements
All speakers agreed on the need for targeted initiatives to support women in tech, but proposed different approaches ranging from education programs to entrepreneurship support.
H.E Emma Inamutila Theofelus
Ravin Rizgar
Kedi Välba
Providing coding camps and digital literacy programs for girls
Creating women-focused innovation hubs and support networks
Promoting women’s entrepreneurship in tech
Takeaways
Key Takeaways
The gender digital divide remains a significant obstacle to achieving economic growth and equality globally
Barriers to women’s digital inclusion include lack of access to skills training, cultural/societal barriers, lack of trust in women’s tech capabilities, and underrepresentation in leadership roles
Multi-stakeholder collaboration between government, private sector, civil society and communities is crucial for bridging the gender digital divide
Empowering women in the digital economy has significant economic and social benefits
Community-driven, bottom-up approaches that put women at the center are most effective for meaningful digital inclusion
Resolutions and Action Items
Use existing infrastructure like libraries and post offices to provide digital skills training to women and girls
Implement coding camps and digital literacy programs specifically for girls and women
Promote and support women’s entrepreneurship in the tech sector
Ensure gender inclusivity is incorporated into all digital development projects and policies
Encourage girls to pursue STEM education and careers through mentorship and positive messaging
Unresolved Issues
How to effectively address deep-rooted cultural and societal barriers discouraging women from tech fields
Ways to increase women’s representation in tech leadership and decision-making roles
Methods to build trust in women’s capabilities within male-dominated tech sectors
Strategies to make online spaces safer and more respectful for women
Suggested Compromises
Balancing the need for rapid digital development with ensuring inclusivity and gender sensitivity in all initiatives
Finding ways for public and private sectors to collaborate effectively despite potential mistrust
Thought Provoking Comments
According to ITU, worldwide, in 2022, 69% of men were Internet users, compared with 63% of women. This means that globally, there are 244 million more men than women using the Internet. Moreover, recent statistics show that women comprise only about 25% of the tech workforce. This disparity is even bigger in leadership positions, where women hold only 11% of executive roles.
speaker
Christophe Farnaud
reason
This comment provides concrete statistics that highlight the scale of the gender digital divide, setting the stage for the urgency of the discussion.
impact
It framed the subsequent conversation by emphasizing the magnitude of the problem and the need for action across multiple sectors.
I think the biggest barrier to ensuring that we close the digital gender gap would be the opportunity for girls and women to actually learn the skills they need. There are so many impediments to them being able to access the material they need to ensure that they get online and basic skills to actually be familiar with the Internet
speaker
H.E Emma Inamutila Theofelus
reason
This insight identifies a key root cause of the digital gender divide – lack of access to skills and learning opportunities.
impact
It shifted the discussion towards practical solutions and the importance of education and skill-building initiatives.
Estonia’s history is interesting in that sense that we regained our independence in 1991 and this is when Estonia had to kind of invent what we’re going to be or how we’re going to develop. We had very strong leadership and our prime minister and president both were for the way of digitalization. Our journey began about 30 years ago already and this was all collaboration between the private sector, public sector and also including the academia.
speaker
Kedi Välba
reason
This comment provides a concrete example of how a country successfully implemented digital transformation through multi-stakeholder collaboration.
impact
It introduced the importance of political will and cross-sector partnerships in driving digital inclusion, influencing subsequent discussions on policy approaches.
I think first, it is very important to enable regulatory frameworks that allow for the coexistence of different models and different ways to provide connectivity, for instance, because that will also create the conditions through the regulatory and legal frameworks be able to precisely, for instance, deploy community networks and infrastructure that is owned by the community in a way that will have and ensure the financial sustainability
speaker
Valeria Betancourt
reason
This insight highlights the importance of flexible regulatory frameworks in enabling diverse approaches to digital inclusion.
impact
It broadened the discussion to include policy and regulatory considerations, emphasizing the need for adaptable approaches that can accommodate community-driven solutions.
There are religious, cultural barriers to girls even thinking of going into a technical field, even just the discussion in the house. You almost become an anomaly if you want to go outside of the traditional types of employment that you want to have.
speaker
H.E. Emma Inamutila Theofelus
reason
This comment brings attention to the deep-rooted societal and cultural barriers that contribute to the gender digital divide.
impact
It deepened the conversation by highlighting the need to address underlying social norms and expectations, not just technical or educational barriers.
Overall Assessment
These key comments shaped the discussion by progressively broadening its scope from identifying the problem (using statistics) to exploring its root causes (lack of skills, cultural barriers) and potential solutions (multi-stakeholder collaboration, flexible regulatory frameworks). The conversation evolved from a high-level overview to a nuanced exploration of practical, policy-oriented, and societal approaches to bridging the gender digital divide. This progression allowed for a comprehensive examination of the issue, incorporating perspectives from government, private sector, and civil society representatives.
Follow-up Questions
How can libraries be leveraged as key partners and access points for digital literacy skills and empowerment for women and young girls?
speaker
Damilare Oyedele
explanation
Libraries have existing infrastructure and space to deliver digital skills training. Understanding how to engage libraries in plans and collaborative engagements could help scale digital skills development for women and girls.
What can be done to work on the mindset of girls to get them more involved in technology?
speaker
Peter Zanga Jackson Jr.
explanation
There is limited women’s participation in technology fields. Understanding how to change perceptions and encourage more women to enter tech is important for increasing diversity.
How can regulatory frameworks be developed to allow for complementary models of connectivity provision, including community-owned infrastructure?
speaker
Valeria Betancourt
explanation
Enabling diverse connectivity models through regulation could help bridge the digital divide, especially for women and underserved communities.
How can financial mechanisms like universal service funds be leveraged to ensure sustainability of women-led digital initiatives?
speaker
Valeria Betancourt
explanation
Targeted financial support is needed to sustain community-driven and women-led digital projects over time.
How can partnerships between government, private sector, and communities be formed to utilize existing infrastructure (like post offices) for digital skills training?
speaker
H.E. Emma Inamutila Theofelus
explanation
Leveraging existing infrastructure through multi-stakeholder partnerships could expand access to digital skills training, especially for women.
How can women’s entrepreneurship in the tech sector be boosted to create more opportunities for other women?
speaker
Kedi Välba
explanation
Increasing the number of women founders and CEOs in tech could help create more inclusive work environments and opportunities for women in the sector.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Main Session | Best Practice Forum on Cybersecurity
Session at a Glance
Summary
This discussion focused on cybersecurity capacity building and the challenges of effectively sharing and utilizing existing resources and information. Participants explored the problem of abundant but often overlapping or inaccessible cybersecurity capacity building resources. They emphasized the need for tailored, context-specific approaches rather than one-size-fits-all solutions.
Key points included the importance of collaboration between stakeholders, including governments, private sector, and civil society. Participants stressed the need for bespoke capacity building programs designed in consultation with recipient countries. The discussion highlighted challenges such as language barriers, limited access to technology, and budget constraints in implementing effective cybersecurity initiatives.
Speakers emphasized the importance of follow-up and impact assessment in capacity building efforts. They suggested that qualitative measurements, such as scenario-based exercises, could be more effective than quantitative metrics in evaluating cybersecurity preparedness. The need for a whole-of-nation approach to cybersecurity, involving various sectors of society, was underscored.
The discussion also touched on the importance of making cybersecurity information accessible and relatable to different audiences, including youth and healthcare workers. Participants agreed on the need to demystify cybersecurity and make it more approachable through popular culture and practical, hands-on experiences.
In conclusion, the discussion highlighted the complex nature of cybersecurity capacity building and the need for coordinated, inclusive, and context-aware approaches to effectively address global cybersecurity challenges.
Keypoints
Major discussion points:
– The problem of overlapping and fragmented cybersecurity capacity building resources and initiatives
– The need for better coordination, collaboration and information sharing among stakeholders
– The importance of tailoring capacity building efforts to local contexts and needs
– Challenges in measuring impact and success of cybersecurity initiatives
– The value of practical exercises and simulations to test preparedness
The overall purpose of the discussion was to explore strategies for improving cybersecurity capacity building efforts globally, with a focus on addressing gaps, avoiding duplication, and ensuring resources reach intended audiences effectively.
The tone of the discussion was constructive and collaborative throughout. Participants shared insights from their diverse perspectives in a spirit of mutual learning. There was general agreement on the key challenges and a shared commitment to finding solutions. The tone became more action-oriented towards the end as participants discussed concrete ways to measure impact and move forward.
Speakers
– Carina Birarda: Introduced the session
– Wim Degezelle: Consultant with the IGF Secretariat, supporting the Best Practice Forum
– Josephine Miliza: MAG member and co-facilitator for the BPF cybersecurity
– João Moreno Falcão: Lead facilitator for the youth standing group, cryptography researcher
– Yao Amevi Sossou: Coordinator of the Youth IGF in Benin, part of the dynamic coalition on data-driven health technology
– Tereza Horejsova: Senior outreach manager for the GFCE (Global Forum on Cyber Expertise), former MAG member
– Dino Cataldo: Chief Information Officer of the UN Pension Fund, IGF MAG member representing intergovernmental organizations, co-facilitator of BPF on cybersecurity, co-lead of dynamic coalition on blockchain assurance and standardization
– Mevish P Vaishnav: President of Academy of Digital Health Sciences, represents the dynamic coalition on digital health
– Brendan Dowling: Australian ambassador for cyber affairs and critical technology
– Oktavía Hrund Jóns: MAG member, co-chair of BPF on Cybersecurity
– Hariniombonana Andriamampionoma: Co-facilitator of BPF on cybersecurity, manager for Elio Star Wars
Full session report
Cybersecurity Capacity Building: Challenges and Strategies for Effective Implementation
This report summarizes a discussion on cybersecurity capacity building, focusing on the challenges of effectively sharing and utilizing existing resources and information. The session, introduced by Carina Birarda, brought together experts from various sectors to explore strategies for improving global cybersecurity capacity building efforts.
Problem Statement and Context
The discussion began with a clear problem statement: despite an abundance of cybersecurity capacity building resources, there is a significant issue of information overload and lack of targeted resources. Brendan Dowling, the Australian ambassador for cyber affairs and critical technology, highlighted that many existing capacity building programmes have been untargeted and inappropriate for specific contexts. He emphasized the need for tailoring programs to individual country needs, considering factors such as existing capabilities, cultural context, and specific requirements.
Yao Amevi Sossou, Coordinator of U5GF in Benin, underscored the importance of accessibility and localization, particularly in the African context. Sossou pointed out that the most common tool for internet access in Africa is the mobile phone, suggesting that capacity building efforts should consider this reality. He also emphasized the critical role of language and cultural context in developing effective cybersecurity initiatives, highlighting the need for resources in local languages and consideration of cultural nuances.
Mevish P Vaishnav, representing the dynamic coalition in digital health, emphasized the critical nature of cybersecurity in healthcare data protection, illustrating the need for sector-specific approaches in capacity building efforts.
Improving Access and Coordination
A significant portion of the discussion centered on strategies to improve access to resources and enhance coordination among stakeholders. Brendan Dowling expressed a commitment to using existing mechanisms and processes, such as the Global Forum on Cyber Expertise (GFCE) and Partners in the Blue Pacific, to avoid duplication of efforts.
Tereza Horejsova, senior outreach manager for the GFCE, stressed the importance of information sharing and coordinating efforts. She highlighted the GFCE’s Cybil Portal as a crucial tool for sharing project information and avoiding duplication. Horejsova shared insights from recipient countries, noting that they often feel overwhelmed by multiple, uncoordinated capacity building initiatives, emphasizing the need for better organization among donors and implementers.
João Moreno Falcão, lead facilitator for the youth standing group, highlighted the need for basic resources such as computers and internet access for effective cybersecurity learning, particularly for youth engagement.
Follow-up and Impact Assessment
The discussion emphasized the importance of follow-up and impact assessment after capacity building initiatives. Yao Amevi Sossou stressed the need for ongoing evaluation beyond initial project timelines and budgets. Brendan Dowling advocated for qualitative measurement through exercises and simulations, arguing that this approach is crucial for testing preparedness and capacity effectively.
Tereza Horejsova suggested tracking project growth and coverage in databases as a quantitative approach to impact assessment. Mevish P Vaishnav recommended regular auditing and sharing of best practices across countries, highlighting the value of continuous learning and improvement in cybersecurity practices.
Holistic Approach to Cybersecurity
The discussion underscored the necessity of a holistic approach to cybersecurity. Brendan Dowling advocated for a whole-of-nation approach involving multiple stakeholders, including government, industry, and community sectors. This perspective was complemented by Tereza Horejsova’s emphasis on trust-building and cross-sector collaboration.
Oktavía Hrund Jóns, MAG member and co-chair of BPF on Cybersecurity, stressed the importance of focusing on both reactive and proactive approaches to cybersecurity. She emphasized the value of practice and continuous learning in building robust cybersecurity capabilities, and the need for consistent, inclusive approaches to capacity building.
Key Takeaways and Action Items
1. Tailor capacity building programs to specific country needs, considering cultural context and existing capabilities.
2. Utilize existing coordination mechanisms like the GFCE and Partners in the Blue Pacific to avoid duplication of efforts.
3. Encourage stakeholders to share project information on platforms like the Cybil Portal for better coordination and resource allocation.
4. Develop more inclusive and accessible capacity building programmes, considering language barriers and cultural contexts.
5. Implement regular follow-up, auditing, and impact assessment of cybersecurity initiatives through qualitative and quantitative methods.
6. Foster greater collaboration between different sectors and stakeholders in cybersecurity initiatives.
7. Adopt a holistic, whole-of-nation approach to cybersecurity capacity building, involving multiple stakeholders.
The discussion concluded with a recognition of the complex nature of cybersecurity capacity building and the need for coordinated, inclusive, and context-aware approaches to effectively address global cybersecurity challenges. Participants emphasized the importance of continued dialogue and collaboration to refine strategies and improve the effectiveness of cybersecurity capacity building efforts worldwide.
Session Transcript
Carina Birarda: Hello everyone. Ladies and gentlemen, esteemed colleagues and participants. It’s a pleasure to welcome you to the Best Practices Forum on Cybersecurity Capacity Building, part of the Internet Governance Forum 2024 here in Riyadh. Thank you, host country. Our goal today is clear: to explore, develop, and share strategies to strengthen global cybersecurity capacity. Session overview: introductions, past achievements, and the 2024 discussion context. Problem statement: define key challenges in capacity building. Expert panel: insights, experiences, and contributions from the room. Objectives: refine the problem statement, identify best practices and actionable solutions, and define next steps to move from dialogue to action. Thank you for joining us; your work and experience matter. Thank you.
Wim Degezelle: Thank you, Carina, for this introduction. And welcome all to this session of the Best Practice Forum on Cybersecurity Capacity Building. Can you move to the next slide, please? Or do I have a remote? Okay. These are the session outline and objectives Carina discussed with you already, so I don’t think it’s necessary to go through them again. For my part as well, I hope that we’ll have a very interactive and very interesting session. But first of all, let me present myself. My name is Wim Degezelle. I’m a consultant with the IGF Secretariat, supporting this Best Practice Forum. That’s why I was asked to give the introduction, and afterwards I’ll leave the floor to the colleagues, the co-facilitators of the Best Practice Forum, and the distinguished panelists we invited for this meeting. Next slide, please. First of all, what is a Best Practice Forum? You might have seen in the agenda or on the IGF website that there is something called intersessional activities. These are a number of activities that kick off at the beginning of the year and hold discussions during the year in function of the IGF meeting that comes at the end of the year. This allows a little bit more preparation than a normal workshop. It also allows the collection of information and different views from stakeholders, which are then combined and published in a report after the meeting and sent out for further work. This is not the first Best Practice Forum on cybersecurity. As you can see on the screen, there have been BPFs on cybersecurity for almost the last seven years, between 2018 and 2023. Before that, the Best Practice Forums on cybersecurity had a different focus, but between 2018 and 2023, the IGF Best Practice Forum focused on norms and norms agreements in cybersecurity.
I’m not going to dive into detail, but I really would like to list what we did in those years, because those reports are also based on discussions with the different stakeholders and with the communities, and they are still available on the IGF website. They looked into norms and norms agreements from different aspects, amongst others, how norms are developed and how norms are put into practice. One year, there was a very interesting question that was dealt with: if you look back at specific cybersecurity events that happened in the past, before a specific norm was voted or agreed, would it have made a difference? That was a very interesting story. Another interesting discussion or piece of research we did in the past couple of years was looking outside the realm, or outside the sector, of cybersecurity, at other fields where there also are norms and norms agreements, to see if lessons could be learned for cybersecurity norms. But this is the past work. I really invite you, if you’re interested, to look at these outputs, because they’re very interesting and a very good read, especially as background. They’re still available on the website. Now, today, the next slide, please. After all those years talking about norms agreements, there was a feeling that maybe we had said enough or finished that topic. At the beginning of this year, the IGF, as always, sent out a request for topics that should be on the agenda. That request for topics informs the general agenda for the meeting, and it had also informed the choice of the different main themes for this year. In this call for input, cybersecurity and trust came out as one of the paramount concerns in the community, which was a clear indication that the IGF, in its program, should pay attention to it. Of course, cybersecurity and trust is an enormously broad topic.
Therefore, the people proposing this best practice forum said it might be interesting to look into capacity building: capacity building that helps to build and enhance cybersecurity and trust online. So the proposal for this best practice forum was submitted and agreed at the beginning of the year in February, when the IGF Multistakeholder Advisory Group met here in Riyadh. After that, one of the first things the BPF did was to organize a meeting to discuss its own work plan, to discuss the idea behind this BPF. I mention it here because that was a very interesting and important step this year. The next slide, please. The fact that the BPF took its initial plan and used it to organize its first meeting was very important to get input from the community on what it was planning to do. And it dramatically changed, well, dramatically might sound a bit too dramatic, but it really changed the course of what was planned. The initial idea was for the best practice forum to look into cybersecurity capacity building, at what is available in terms of specific training and specific offers, and to do a kind of general mapping: a mapping of training, a mapping of resources available, so that it would be possible to look for gaps and opportunities, and then provide that to the community. But very early, in one of our first calls, one of our first meetings, we got some pushback from community participants saying, well, this is already being done. There is already a huge amount of information out there. There are mappings of cybersecurity capacity initiatives, there are inventories, there are organizations providing this type of work already, to the extent that it might be difficult to find the specific information you need.
It might be difficult for a government, an organization, or a person that says, well, I need to build some capacity in my organization on cybersecurity, but I don’t really know where to go because there’s too much information. And this was the start of a completely different discussion within the best practice forum, the discussion that led to the session today: how do we deal with this exact situation? It led to the formulation of the problem statement that you see on the screen, which will be the main topic, or the start, of the discussion today. I will read it out. The discussion we had on the program for the BPF this year led to the problem statement saying that while various mappings, inventories, and initiatives provide a wealth of information on cybersecurity capacity building, the different offerings may overlap, gaps in information may exist, and therefore the information may not reach its target audience effectively. With this I want to leave it there. Go to the next slide, please. The nice thing about giving an introduction is that you can come up with a problem statement and then hand over to the panel to discuss it and come up with conclusions and the question of how to solve it. For that, I give the floor to the two moderators, who are also co-facilitators, and the panel. I think the easiest is that everyone introduces himself or herself. So thank you from me, and I’m looking forward to a very interesting discussion.
Josephine Miliza: Thank you, Wim, for that great introduction to our discussion today. My name is Josephine Miliza. I am a MAG member and also a co-facilitator for the BPF cybersecurity. Really happy to be joined today by a great panel, and we’ll be going into the discussion shortly. But before that, I’d like to welcome all the panelists and my co-moderator to introduce themselves, starting from my far left.
João Moreno Falcão: Hello everybody. My name is João. I’m the lead facilitator for the youth standing group, and I’m also a cryptography researcher.
Yao Amevi Sossou: Hello, my name is Yao. I’m from Benin. I’m the current coordinator of the Youth IGF in Benin and also part of the DC on data-driven health technology. I have also been working for some time in the BPF. Nice to be here.
Tereza Horejsova: Good afternoon. My name is Tereza Horejsova, senior outreach manager for the GFCE, the Global Forum on Cyber Expertise, and also a former MAG member.
Dino Cataldo: Good afternoon. My name is Dino Dell’Accio. I’m the chief information officer of the United Nations Pension Fund, and within the IGF I play several roles: I represent the intergovernmental organizations in the Multistakeholder Advisory Group, I’m a co-facilitator of the best practice forum on cybersecurity, and I’m co-lead of the dynamic coalition on blockchain assurance and standardization. Happy to be here.
Mevish P Vaishnav: Good evening everyone. I’m Mevish P Vaishnav from India, President of the Academy of Digital Health Sciences, and I represent the dynamic coalition on digital health. Pleasure to be with you.
Brendan Dowling: Hello, I’m Brendan Dowling, the Australian ambassador for cyber affairs and critical technology.
Josephine Miliza: Thank you. And I’d also like my colleagues who are joining us online, Oktavía and Hariniombonana, to introduce yourselves, please.
Hariniombonana Andriamampionoma: Good evening, everyone. My name is Hariniombonana. I’m from Madagascar. This is my fourth year co-facilitating the BPF on cybersecurity. I’m the manager for Elio Star Wars, and I’m really happy to moderate this session online. I would like to thank our panelists and welcome everyone who’s joining this session. Thank you.
Oktavía Hrund Jóns: And I’m Oktavía Hrund Jóns, calling in from Iceland. I sit on the MAG as well; this is my first year on the MAG, although I’m a long-term IGF participant. I’ve had the absolute pleasure of also being a co-chair of this Best Practice Forum on Cybersecurity, and I’m very excited and happy to spend the next hour with you all.
Josephine Miliza: Thank you so much. And yes, getting into the conversation today, our first question is: how does the problem statement resonate with your own experiences or perspectives? Do you find that it resonates with your context? Is there something missing, or something that we overlooked as we were coming up with it? I’ll start with Brendan, please.
Brendan Dowling: Thank you. I think the problem statement is valid. It captures that there is a huge proliferation of information about cybersecurity capacity building, but it’s often not bespoke, it’s often not targeted to recipients, and in our experience that is the most important element. We have seen cyber attacks and cyber crime worsen substantially in our region. We’ve seen Australia and New Zealand, Southeast Asian nations, and Pacific Island nations hit by really disruptive cyber attacks. So there is a huge amount of interest and drive to raise cybersecurity and to raise cyber resilience. We’ve implemented some very substantial capacity building programs in recent years, but we’ve often found that they can be untargeted and inappropriate, and we’ve committed to doing better with our partners, working with them in dialogue to figure out what the right approach for that country, for that context, for that situation is. What that means is that our capacity building work in the Pacific is very bespoke to each country. It can involve incident response work when there’s a major disruptive cyber attack; it can involve upgrading hardware and software to ensure that pirated software or out-of-date servers are not in use; it can involve developing legal frameworks or national strategies, or training to develop computer emergency response teams. So every capacity building program that we roll out is designed in consultation with the recipient country and is shaped according to their needs, interests, and situation. I think it’s a really positive thing that we have a much more substantial effort in cybersecurity capacity building out there. For me, the most important consideration is adjusting and tailoring to the particular circumstances of the country, organization, or partner that you’re working with.
Moderator: Fantastic, thank you so much.
Mevish P Vaishnav: I think healthcare data is a very important part of individuals’ lives, and it is personal information, so we need to take care of it. And the best practice is one where we can collaborate and work together on the cybersecurity part.
Moderator: Thank you. And to Tereza: I know that when we started to have this conversation, you at the GFCE had done amazing work, and you’re actually one of the people who pointed out to us that yes, these resources exist, but we need to redefine what the problem statement really is. So what are your reflections, based on the work that you have done over this year and also looking into next year?
Tereza Horejsova: Yeah, thank you very much, Josephine. Two things resonated with me when we started having this conversation a few months ago. First of all, I think the IGF is a natural space to have discussions on capacity building, and I feel that this space hasn’t been used as much as it could have been, so the fact that the best practice forum on cybersecurity decided to focus on capacity building is, I think, something to really applaud. Second, from my organizational point of view, from the Global Forum on Cyber Expertise, we try to make the overview of what’s available, of what capacity building projects are happening, as easy to find and as easy to grasp as possible, so that we can serve as a resource for donors and implementers when they are planning their projects, to build on what has been done already, to eliminate duplication of efforts, and to simply use resources as efficiently as possible. We also try to do it sensitively, tailoring to each region, as the ambassador has already stressed, through our regional hubs, including in the Pacific, where we really try to use the knowledge from the regions themselves to provide an even better overview of all the capacity building projects and activities. Now, how we do it might not be perfect; that’s why having a discussion on what is most useful and most efficient, and comparing with other resources that are available, is extremely useful for us. The primary resource that the GFCE uses is the so-called Cybil Portal, which is available for free online at www.cybilportal.org. There we really try to engage the various actors to help us find and provide the information that we can put up on the portal in a very simple overview that anybody can use as a go-to resource. And we hope that the discussion we are having tonight, a discussion we’ve been having with you over the past months, will help us fine-tune it and make it even more useful. Thank you.
Moderator: Fantastic. And Yao?
Yao Amevi Sossou: Thank you very much for the floor. I concur a lot with what the previous speakers have discussed already about the importance of the topic. At the Dynamic Coalition on Data-Driven Health, we are moving towards more data-driven health, especially when it comes to health care facilities and health care access; we are moving toward more e-driven health access around the globe. But we need to make sure that health care practitioners also have capacity-building opportunities to strengthen their knowledge of how to use the data and how to make it more accessible. The other aspect we’ve been stressing is the accessibility of the available capacity building, especially when it comes to young people and underserved communities, especially in the African region. In Africa, we have thousands of languages. The most commonly used tool to access the internet in Africa is the mobile phone, and our populations are not really educated on how to prevent breaches or how to protect their information, so one best practice, I think, is to use the mobile phone. We also need to make sure that we are addressing those issues in the languages people understand, the native languages: what the word capacity-building means, for example, would not resonate with someone in Swahili if they are not addressed in their mother language. So we need to make sure that the capacity-building opportunities are accessible to young people.
João Moreno Falcão: One of the practices we really need, to find these gaps, is to use the structures we have and attract the people who are on our front line of education. I see here a lot of representatives of strong organizations, but we lack the capacity to reach the people who are on our front line of education. So we need to give these contents to these persons, and we need to coordinate with them, because cybersecurity is a very extensive area, and we really need to show people that this is possible. We need to show also that we have a multitude of tasks inside cybersecurity, and we can accommodate and train the workforce to work with this.
Josephine Miliza: Great, and, yes, I think we have a consensus in terms of the problem and the gaps in terms of the target audience, and now we are getting into discussing how we fix these issues. I'll hand over to my co-moderator, Dino, to take us through the next round of questions.
Dino Cataldo: Thank you very much, Josephine. So, as you alluded to, we have looked at the problem, and now we would like to solicit from our colleagues, our partners, our stakeholders, how we can fix it, how we can address this issue. So maybe I can start with you, Tereza, given your point of view from an institution that conducts regular research and consolidates best practices: what can be done to avoid duplication, and, at the same time, how can we make sure that the best practices that exist in these resources actually reach those who need them?
Tereza Horejsova: Thank you, Dino. To have a conversation, yes, that's definitely the starting point. We have already acknowledged that there are several resources and portals that map things that might be similar, but not exactly. One best practice we can share from our experience is the cooperation, and almost integration, if you wish, between the Cybil portal and the UNIDIR Cyber Policy Portal. What's that about? I've already mentioned that at Cybil we basically try to map resources, tools, and especially projects that have been implemented or are currently being implemented in the field of cyber capacity building. UNIDIR, in turn, has a very useful resource, the UNIDIR Cyber Policy Portal, which basically gives, as a one-stop shop, a good overview of the cybersecurity situation in various UN member states. Now, we would agree that it's very useful when you look at projects: at Cybil you might find a lot of data and information about the situation of cybercrime in Cambodia, for example. Wouldn't it be helpful to get, at the same time, inputs on the cybersecurity situation in that given country, which the UNIDIR portal maps? This is common sense, and that's why we went ahead. Obviously, there was also the issue of the technical limitations of the portals, but that can be fixed; that's maybe the simplest part of the puzzle, I would say. It was, however, preceded by months of conversations and exchanges on complementarity and how one portal can benefit from another, so that the end user has the best experience possible.
Dino Cataldo: Thank you very much. I would like to ask Ambassador Dowling: we saw the point of view of those working on creating the knowledge and facilitating it. Maybe from your point of view as a potential user, how do you find, if I may qualify the question, the ability to share information? Do you find that you use these mechanisms? Do you find that they can still be improved?
Brendan Dowling: I think it's about committing to use those mechanisms. We find the GFCE to be a very useful coordinating body that makes information available through the regional hubs. So I think that is useful. However, from the government perspective, I think it's very important for us to actually share that information. We saw a country, and I will not name them, prepare a capacity building project, decide the terms of that project, decide when and how it would be implemented, and present it without talking to the recipient country. Had they looked at the GFCE resources, they would have seen whether it was duplicative. So there has to be a commitment to use the mechanisms. In the Pacific, we have set up the program known as Partners in the Blue Pacific, which is expressly about donors coming together, talking to recipient countries, and doing that deconfliction. Annually, we hold the Pacific Cyber Capacity Building Open Conversation, including with the UN bodies and with the private sector. So, for me, I don't think we need more mechanisms. We don't need more processes. We need to commit to using the existing processes and to saying, when we find those points of misalignment or duplication, that we will adjust our programming accordingly. Sometimes, donor countries can be focused on their own internal processes for budgeting and programming and not allow the flexibility to adjust as needed. So I think it's really incumbent on us to be willing to listen and to be flexible when we hear the response.
Dino Cataldo: Perfect. Thank you very much for that feedback. Mevish, maybe I can also ask for your feedback vis-à-vis your specific domain or industry. You work in digital health, with data related to health. How do you find this sharing of information to be working, and especially the identification of potential gaps in capabilities?
Mevish P Vaishnav: I would say if there’s no capacity, there’s no security. So we need to get trained, upskill ourselves in capacity building. And it is very crucial because the health care workers need to know how to protect data. And that’s why at Academy of Digital Health Sciences, we are providing trainings to nurses, the pharmacists, the health care frontline workers, so that they are aware how to protect the data. And sharing information, we need to be careful. Misinformation should not go out. That’s very crucial.
Dino Cataldo: Thank you. Misinformation, definitely a hot topic. Maybe I can pass to our next speaker. Yao, you were already talking before about issues related to languages. What else can you add from your perspective?
Yao Amevi Sossou: From my perspective, what I could add, apart from the language issue, is that most of the capacity building initiatives I have seen face budget constraints; they simply have a limited budget and are limited in time. And after the capacity building, what is the next step? So I think, in that direction, we should find a way to follow up on those different initiatives so that, in the mapping process, we find what should not be replicated and what should be strengthened. From lessons learned, we could be better equipped, both those building capacity and those acquiring it. This applies also to young people, who are the most vulnerable, specifically young women. Those are the critical masses whose needs really have to be addressed, because they are more vulnerable, I think, and trainings and best practices should be refined so that they specifically target their needs. I want to commend the work the GFCE is doing in that direction, and also the work ISOC Benin is doing in the IGF in that direction, with online capacity building and awareness raising on cybersecurity threats, educating young people, especially young girls, on how they can secure the data on their phones, for example. And yes, one key element: we need to prevent misinformation. How do we combat that? We need collective efforts. We need to build trust in what is shared, and we need mechanisms to prove that the information out there is reliable and does not pose a threat to anybody. Thank you.
Dino Cataldo: Thank you very much. So we already see complementary elements, from misinformation to lessons learned. Maybe, Joao, you can also give us your perspective from the youth?
João Moreno Falcão: OK. Sure, thank you. I would like to bring up something that really got me into cybersecurity, which is popular culture. We have these kinds of projects that try to bring people in and build their capacity in cybersecurity, and we can use the popular culture that solidified in our minds what cybersecurity is to invite these people to participate. At the same time that it created a dream for a lot of people, it also created a barrier: people say, OK, this is a movie thing, so I will not be able to be this person. But we could work on demystifying that and really help these people become part of the cybersecurity ecosystem. The other thing we need to acknowledge is that to learn cybersecurity, you need basic means to learn. Most of the people who are now in the field were self-taught, but we have several projects that try to bring people in, and what they need is access to this content, like computers or internet, and sometimes also physical access to devices. In my case, the only thing that got me into cybersecurity was that I went to an event where they had an industrial device. It was the first time I could try to interact with one, and I learned my way into hacking it. It was a wonderful experience, and I couldn't have had the opportunity if I hadn't been there. So we need to think about this: the requirements are not as complex as in other areas of knowledge, but we need to acknowledge the importance of offering this structure to the people learning.
Dino Cataldo: Thank you very much, Joao. You actually anticipated elements of my next question. Again, restarting with Tereza: what can be done to ensure that the message and the resources are getting to the intended audience, especially in those situations and environments where there are fewer possibilities, a less mature infrastructure, and more limitations in accessing the internet and the necessary resources?
Tereza Horejsova: Yeah, thank you. Maybe on the first part of the question: something we discovered when collecting information and trying to provide those resources online is that we have sometimes faced a bit of reluctance to actually have information shared. It is maybe a natural instinct that everybody would like to receive information, but not necessarily to see the benefit in providing information. For instance, natural instinct could say: I shouldn't share too much about what I'm planning to do, because maybe it will cost me a project I could otherwise get; or I shouldn't share that much about what I'm planning to do as a donor, let's say, in the next three years, because somebody else might do it. But I think we need to change the narrative a little: by sharing information to the extent possible, and I am of course aware that it's not always realistic, everybody wins. And who wins ultimately are those that we are trying to assist, the recipients. Because it's not fair towards them if the efforts are uncoordinated, or if an implementer comes in unaware that the same project was implemented by somebody else two years ago. Or, as the Ambassador's example showed, some projects are basically designed without consulting those they should benefit. This is wrong, and it's not fair. Then we are just doing things to tick the box: I've implemented the project and used the resources that were made available; but what was the impact? Could the impact have been bigger? So I would just like to challenge ourselves to really think: OK, if I share, I'm not going to lose. And that goes for all stakeholders involved.
Maybe another note that I can add: sometimes, in conversations with recipient countries, they were really voicing concerns, like: please organize yourselves; we cannot handle it, our capacity is already limited, and if we have everybody coming to us separately, trying to do their project, we are overwhelmed as well. It would help us tremendously if there was a bit more coordination. And one simple step closer to better coordination is exactly to have these resources available so that anybody can consult them.
Dino Cataldo: Thank you, Tereza. So maybe to jump immediately to Yao: what is your experience with this access to resources and the ability to coordinate?
Yao Amevi Sossou: I think, regarding access to those resources, the first key challenge, as João mentioned, is basic hardware access. People have difficulty getting access to the hardware that would bring them into contact with the information that should be shared. Another element, and I keep stressing it, is how we convey the information to the recipients: are the capacity-building programmes developed inclusively enough, and are they done in a collaborative way? For example, each country may have its own capacity-building programmes, and, as I mentioned earlier, in most cases they are budget-constrained and have a limited amount of time. How we follow up is really key, and I am stressing that again. We need to assess and follow up on the impact of those capacity-building efforts so that we know, from each project, the different gaps that need to be addressed in the next round of trainings. From there, we ourselves become more resilient, and people are better equipped and ready to face the challenges out there. The internet is free for everybody, but it also has challenges that not everybody can face alone.
Dino Cataldo: Thank you. I very much like the term that you used. It's not so much about quantity; sometimes these initiatives are measured between input and output, but you talked about impact, measuring the impact, and that will be a segue to the last question. Thank you very much for participating in this conversation. Maybe I can go back to Ambassador Dowling and hear from your perspective as a government representative: what are the critical success factors that, maybe in your country, have worked in reaching the intended audience?
Brendan Dowling : I think we have a very substantial experience in Australia. We have, for many years, prioritised cyber resilience as a core part of our economic agenda, as a core part of our national security agenda, so we have many lessons that we have learned in what we have adopted in the capabilities that we’ve built, which we do try to offer as experience for the positives, for the negatives, to share with countries, particularly in our region. I think we have found the most important lesson is that this has to be a whole-of-nation approach. Talking about building cyber resilience, talking about building cyber capacity, has to involve industry, it has to involve the community, it has to be something that is bought into, rather than just as a government program. Most infrastructure, most businesses, most community capabilities are operated by non-government actors, so engaging a whole-of-nation response is, I think, the crucial lesson. That means when we engage in capacity building work, we talk not just to government players, we talk to private sector operators who run civilian infrastructure, we talk to educational institutions, schools, universities, we try to engage across the breadth of society. Cyber is not a technical issue, it is not a government issue, it is a whole-of-nation issue. I think our lesson and experience is the criticality of engaging a broad range of actors when we try to build that cyber resilience.
Dino Cataldo: Thank you very much. Very well noted, the emphasis on partnership and public-private collaboration. Thank you so much for emphasising that. So Mevish, your experience from the digital health sector?
Mevish P Vaishnav: In digital health, if you see, every country is trying to secure their data, but there are challenges that are coming up. So we need to be prepared through upskilling ourselves, and that is why we are developing courses in it.
Dino Cataldo: Thank you. Very good to know you're already working on it in a specific sector. So, a last question for each of the participants, and, looking at the time, please be a little more brief if possible. Tereza, starting with you: we have already alluded to what can be done, what should be done, what has been done. How can we measure it? What kind of indicators can be used to measure the impact of cybersecurity capacity-building projects, initiatives and programmes?
Tereza Horejsova: Yeah, for instance, for Cybil it's very simple: we can measure how many projects we have there, how it's growing, what the trend is. And of course, the more comprehensive the coverage, the more thorough the picture that can be provided to anybody who uses the portal. So I would also use this opportunity to encourage everybody here: if you're working on a cyber capacity building project, check whether it's on Cybil. If it's not, we do a lot of desk research and try to identify the missing projects, but we also rely, in particular, on the implementers, donors and others to share the information with us so that this puzzle gets a bit bigger. If we can internalise that, when you're working on something, you just quickly check if it's there and drop us an email, that would be good. Then we will get from the over 900 projects we have at this moment to a much more interesting number.
Dino Cataldo: Thank you very much. So important, getting feedback. João, what about your experience in measuring the impact?
João Moreno Falcão: Yeah, well, cybersecurity is an odd field, because unlike other knowledge fields, where we can teach and then see how much was learned, when we teach something there is someone trying to overcome what we teach. This makes our lives much harder, because we can teach a technique and then, the next day, someone will create another one that overcomes what we taught. Seeing this, I believe we can adopt a strategy of understanding the necessities and needs of a specific community and then checking whether what we taught them really made a difference: whether what was established at the start was accomplished later.
Dino Cataldo: Thank you very much. Yao, would you like to add your experience?
Yao Amevi Sossou: I think, in this regard, what I could suggest is that combining efforts is key. We need to combine different efforts and different experiences: what one organisation struggles with when building capacity in a certain community might offer lessons to be learned by another organisation in another part of the world. We need to find a way to collaborate so that we have a bigger impact, and then it becomes easier to assess that impact, which is what I have been stressing so far.
Dino Cataldo: Thank you. So I start to see the picture and the life cycle: lessons learned, impact, consolidation of best practices and databases. Maybe, Ambassador Dowling, you can also share with us your experience in measuring impact.
Brendan Dowling : Sure. It’s very difficult. I think we all struggle to measure what success in cyber security looks like. How do we know our programs are working? We know that cyber incidents are getting worse. We know that in spite of all our efforts and doing the right thing, we will see more incidents because threat actors are getting more capable, more sophisticated. So measurement can’t be about fewer cyber incidents. I think in cyber security, qualitative measurement is really crucial. For me, testing through exercises is one of the most effective ways to qualitatively test whether your arrangements, your capacity, your preparedness have improved. We’re big advocates for getting everyone in a room, government, private sector, civil society running exercises, testing what your response mechanisms look like, how they operate. It’s better to learn where your failings are in an exercise than in a real incident. We consider ourselves a relatively mature cyber-capable nation, and yet when we run exercises on our electricity sector, our airports, on our government systems, we always find a range of ways where we still have gaps, where we still have shortcomings. I think that qualitative approach in a partnership, open, transparent way to test out what your responses are like, rather than tell yourself, we’re good, we’re prepared, actually road test it and say, here’s where our gaps are, here’s what we need to address. For me, that’s the most important thing.
Dino Cataldo: Thank you. Another critical element of the life cycle: simulation, exercising and testing. Thank you so much for that. So Mevish, last but not least, what about your experience in the health industry? What can be used as an indicator that these initiatives are indeed meeting expectations?
Mevish P Vaishnav: So I think auditing should be done every six months. And training in cybersecurity is something that will help us learn from other countries; best practices from other countries can be shared, and that is how collaboration is important, the best practices of every country. If you have faced an issue, maybe I can learn from it and not face the same issue. That is how we should work and collaborate, and IGF is a platform where we can collaborate, with so many countries coming together. If you see, the hackers are more organized than us, so we need to be careful of that. If quantum computing gets democratized, we will be vulnerable, so we need to take care of that.
Dino Cataldo: Thank you. We could open a completely new session on quantum computing debates. So thank you very much, and thank you for the reference to auditing; as a former auditor, I really appreciate that acknowledgment. With that, I would like to pass the floor to Oktavía, who is going to provide a summary and conclusion of this very interesting, engaging session. Oktavía?
Oktavía Hrund G Jóns: Thank you so much, Dino. Let me see, am I audible? You can hear me? Yes, you are. Fantastic. What an amazing group of people and interesting discussion. I want to go over some of the things that came up and also look at what we, in the best practice forum, could be looking at after this IGF and going forward. In terms of the statement and how our experts looked at it, I think one of the things that really stood out is context and experience; they really are important. Most of our experts did agree with the problem statement and the necessity of it. However, a red thread throughout is that we have many platforms and many places; a lot of coordination and collaboration already exists, and we have to commit to using it. One of the points that came up is that the IGF could, or actually should, be used more as a venue for this capacity building. We have to work together. We have to trust and share, which is extremely important in many respects, but also across sectors. Holistic approaches to security, particularly cybersecurity: it is not a problem for just companies, the private sector, governments, or individuals. We have to look at both reactive and proactive approaches on multiple levels. I thought it was so interesting to get the health sector perspective; critical infrastructure, too, is made up of individuals. So that is one of the things that came out of the comments we had on the problem statement. Accessibility and, dare I say, localization is, of course, key. We know this, and somehow we have to hold ourselves to a slightly higher standard than we do right now. Accessibility comes in many forms, as our experts on stage mentioned, because it is not just about giving access or resources to youth. One of the things that came out of the fixes that I really appreciated is that accessibility does not have to be a threshold or a high level.
It can simply be access to information, knowledge, even a device that allows you to become curious and understand your context and your role in a much larger picture. So the consistency in the manner in which we treat not just programming, but activities and projects in cybersecurity, is what makes it successful. That is one of the key elements of the fix, if you like. All of these are guided by conversations, and these conversations need to be conducted from a point of trust, across sectors, and they need to include as many stakeholders as possible. Even some of the low-hanging fruit, such as interoperability, which was mentioned as fairly low-hanging fruit, is not possible without multiple conversations where we come together and understand the importance of hearing most, or at least as many, voices as possible. Interesting for me as a cybersecurity professional is to hear that a lot of the things we are talking about relate to ecosystems and PDCA, for those of you who know the Plan, Do, Check, Act cycle. It is not enough to come in and do one thing. It is not enough to have a training, as our colleagues mentioned. We have to do training, and we have to do follow-up. We have to ensure that the ecosystem of knowledge continues regardless of one person, one community, or even one specific government in place. So we have to commit to the use of the mechanisms that are in place already. We have to, as implementers that come into a context, also understand that we are depleting valuable and necessary resources by constantly going in and reinventing the wheel when the wheel is probably already there, perhaps even a really good bicycle or a Ferrari. So understanding that it is not us first, it is us, the community, first, is one of the things that felt very fundamental to our conversations on the fix. A couple of last things: low-resource environments. It is difficult to prioritize. So how do we do that?
Some of the things that came up were not just coordination, but inclusivity and collaboration in a way that is consistent. That means localization, not just of materials; it means accessibility understood at multiple levels. And that has to do with equity, with gender, with age, with all of the other elements that we know so well from a lot of the work that we do and a lot of the things that guide us to spaces like the Internet Governance Forum: stakeholders, participatory approaches. I would like to end with something that I thought was very important when we talked about indicators, because it is not just numbers; we know that. The thing that we have to emphasize continuously is that it is a practice: whether that is allowing more funding or resources for building scenarios and training, or allowing for follow-up, or allowing for more flexible ways in which we teach people all around the world, in very different environments, how they can understand their role. If you are a healthcare worker, it is not enough to get a regulation or a don't-do list. It needs to be relatable to your role, and it needs to be understood from the position of where you are and what you have the agency to affect, to allow us to be slightly more secure, or at least resilient, as we together tackle these huge but very important foundational things that allow us all to be safer, not just online but in reality. I hope that captured it; there are so many more things I would like to mention and so many good points, including the demystification of cybersecurity. So, on that note, I would like to thank you all for allowing me to summarize these points, and give it back to my colleagues on stage.
Dino Cataldo: Thank you very much, Oktavía, for this real-time summarization, very comprehensive, very detailed. Thank you so much. Thank you to all the distinguished speakers who have shared with us their knowledge, their experience, their wisdom. Thank you to my colleagues and co-facilitators, Josephine and Jon Bonanna. And of course, thank you to Wim Degezelle, the coordinator and subject matter expert for this event. I don't know, Wim, if you would like to give some closing remarks.
Wim Degezelle: No, no closing remarks. I just want to make sure that we don't forget to thank you, Dino, for moderating part of the session as well. So thank you all.
Brendan Dowling
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Information overload and lack of targeted resources
Explanation
There is a proliferation of information about cybersecurity capacity building, but it’s often not bespoke or targeted to recipients. This makes it difficult for countries and organizations to find the specific information they need.
Evidence
Example of cyber attacks worsening substantially in the Asia-Pacific region, affecting Australia, New Zealand, Southeast Asian nations, and Pacific Island nations.
Major Discussion Point
Problem Statement and Context
Need for tailored capacity building approaches
Explanation
Capacity building programs should be designed in consultation with recipient countries and shaped according to their specific needs, interests, and situations. This bespoke approach is more effective than untargeted or inappropriate initiatives.
Evidence
Australia’s experience in implementing substantial capacity building programs in the Pacific, involving incident response work, hardware and software upgrades, and developing legal frameworks or national strategies.
Major Discussion Point
Problem Statement and Context
Agreed with
Yao Amevi Sossou
Tereza Horejsova
Agreed on
Need for tailored and context-specific capacity building approaches
Commitment to using existing mechanisms and processes
Explanation
There is a need to commit to using existing coordination mechanisms and processes rather than creating new ones. This involves being willing to listen, be flexible, and adjust programming based on feedback and coordination efforts.
Evidence
Example of the Partners in the Blue Pacific program and the annual Pacific Cyber Capacity Building Open Conversation.
Major Discussion Point
Improving Access and Coordination
Agreed with
Tereza Horejsova
Yao Amevi Sossou
Agreed on
Importance of collaboration and information sharing
Whole-of-nation approach involving multiple stakeholders
Explanation
Building cyber resilience requires a whole-of-nation approach that involves industry, community, and government actors. This comprehensive engagement is crucial for effective cybersecurity capacity building.
Evidence
Australia’s experience in prioritizing cyber resilience as part of their economic and national security agenda, involving engagement with private sector operators, educational institutions, and a broad range of societal actors.
Major Discussion Point
Holistic Approach to Cybersecurity
Qualitative measurement through exercises and simulations
Explanation
Measuring success in cybersecurity is challenging and cannot be based solely on reducing cyber incidents. Qualitative measurement through exercises and simulations is crucial for testing preparedness and identifying gaps in response mechanisms.
Evidence
Australia’s experience in running exercises on their electricity sector, airports, and government systems, which consistently reveal gaps and shortcomings even in a relatively mature cyber-capable nation.
Major Discussion Point
Measuring Impact and Success
Agreed with
João Moreno Falcão
Tereza Horejsova
Agreed on
Challenges in measuring impact of cybersecurity initiatives
Differed with
Tereza Horejsova
João Moreno Falcão
Differed on
Approach to measuring impact of cybersecurity initiatives
Yao Amevi Sossou
Speech speed
144 words per minute
Speech length
912 words
Speech time
377 seconds
Importance of accessibility and localization
Explanation
Capacity building opportunities need to be made more accessible to young people and underserved communities, especially in Africa. This includes addressing language barriers and making information available in native languages.
Evidence
Example of the need to use mobile phones for cybersecurity education in Africa, as it is the most common tool for internet access.
Major Discussion Point
Problem Statement and Context
Agreed with
Brendan Dowling
Tereza Horejsova
Agreed on
Need for tailored and context-specific capacity building approaches
Need for follow-up and impact assessment of initiatives
Explanation
Many capacity building initiatives are budget-constrained and time-limited. There is a need for follow-up on these initiatives to assess their impact and identify gaps that need to be addressed in future trainings.
Major Discussion Point
Improving Access and Coordination
Addressing language barriers and cultural contexts
Explanation
Capacity building efforts should be refined to target specific needs, especially for vulnerable groups like young women. This includes addressing language barriers and cultural contexts to make the information more accessible and relevant.
Evidence
Example of ISOC Benin’s work in raising awareness on cybersecurity threats and educating young people, especially young girls, on how to secure their data on phones.
Major Discussion Point
Improving Access and Coordination
Combining efforts for broader impact assessment
Explanation
There is a need to combine efforts and experiences from different organizations and parts of the world to have a bigger impact. This collaboration can help in assessing the impact of capacity building initiatives more effectively.
Major Discussion Point
Measuring Impact and Success
Agreed with
Brendan Dowling
Tereza Horejsova
Agreed on
Importance of collaboration and information sharing
Mevish P Vaishnav
Speech speed
138 words per minute
Speech length
339 words
Speech time
147 seconds
Relevance of cybersecurity in healthcare data protection
Explanation
Healthcare data is a major part of individuals’ personal information and needs to be protected. Best practices in cybersecurity are crucial for working together to protect this sensitive data.
Major Discussion Point
Problem Statement and Context
Addressing cybersecurity challenges in healthcare
Explanation
Every country is trying to secure their healthcare data, but challenges are emerging. There is a need to be prepared through upskilling and developing courses to address these challenges.
Major Discussion Point
Holistic Approach to Cybersecurity
Regular auditing and sharing best practices across countries
Explanation
Regular auditing should be conducted every six months to assess cybersecurity measures. Sharing best practices from other countries can help in learning from their experiences and avoiding similar issues.
Major Discussion Point
Measuring Impact and Success
Tereza Horejsova
Speech speed
152 words per minute
Speech length
1291 words
Speech time
507 seconds
Value of IGF as a platform for capacity building discussions
Explanation
The Internet Governance Forum (IGF) is a natural space for discussions on capacity building, but it hasn’t been used as much as it could have. The focus of the Best Practice Forum on cybersecurity capacity building is applauded.
Major Discussion Point
Problem Statement and Context
Importance of sharing information and coordinating efforts
Explanation
There is a need to change the narrative around sharing information about capacity building projects. By sharing information to the extent possible, everyone wins, especially the recipients of these efforts.
Evidence
Example of recipient countries voicing concerns about being overwhelmed by uncoordinated capacity building efforts from multiple actors.
Major Discussion Point
Improving Access and Coordination
Agreed with
Brendan Dowling
Yao Amevi Sossou
Agreed on
Importance of collaboration and information sharing
Tracking project growth and coverage in databases
Explanation
Measuring the impact of cybersecurity capacity building efforts can be done by tracking the growth and coverage of projects in databases. This provides a comprehensive picture for anyone using these resources.
Evidence
Example of the Cybil Portal, which aims to provide an overview of capacity building projects and activities.
Major Discussion Point
Measuring Impact and Success
Agreed with
Brendan Dowling
João Moreno Falcão
Agreed on
Challenges in measuring impact of cybersecurity initiatives
Differed with
Brendan Dowling
João Moreno Falcão
Differed on
Approach to measuring impact of cybersecurity initiatives
Importance of trust-building and cross-sector collaboration
Explanation
Building trust and collaborating across sectors is crucial for effective cybersecurity capacity building. This involves sharing information and coordinating efforts among various stakeholders.
Major Discussion Point
Holistic Approach to Cybersecurity
João Moreno Falcão
Speech speed
125 words per minute
Speech length
599 words
Speech time
285 seconds
Leveraging popular culture to attract youth to cybersecurity
Explanation
Popular culture has played a role in solidifying the concept of cybersecurity in people’s minds. This can be used to invite people to participate in cybersecurity, while also working to demystify it and make it more approachable.
Evidence
Personal experience of being introduced to cybersecurity through access to an industrial device at an event.
Major Discussion Point
Improving Access and Coordination
Assessing community needs and outcomes
Explanation
Measuring the impact of cybersecurity education is challenging due to the evolving nature of threats. A strategy to understand the specific needs of a community and assess if the teaching made a difference is crucial.
Major Discussion Point
Measuring Impact and Success
Agreed with
Brendan Dowling
Tereza Horejsova
Agreed on
Challenges in measuring impact of cybersecurity initiatives
Differed with
Brendan Dowling
Tereza Horejsova
Differed on
Approach to measuring impact of cybersecurity initiatives
Oktavía Hrund Jóns
Speech speed
145 words per minute
Speech length
1080 words
Speech time
445 seconds
Focusing on both reactive and proactive approaches
Explanation
A holistic approach to cybersecurity requires both reactive and proactive approaches on multiple levels. This includes addressing issues across various sectors and stakeholders.
Major Discussion Point
Holistic Approach to Cybersecurity
Emphasizing practice and continuous learning
Explanation
Cybersecurity is a practice that requires continuous learning and adaptation. This involves allowing for more flexible ways of teaching people in different environments how to understand their role in cybersecurity.
Major Discussion Point
Holistic Approach to Cybersecurity
Agreements
Agreement Points
Need for tailored and context-specific capacity building approaches
Brendan Dowling
Yao Amevi Sossou
Tereza Horejsova
Need for tailored capacity building approaches
Importance of accessibility and localization
Importance of sharing information and coordinating efforts
Speakers agreed on the importance of designing capacity building programs that are tailored to the specific needs, contexts, and languages of recipient countries or communities.
Importance of collaboration and information sharing
Brendan Dowling
Tereza Horejsova
Yao Amevi Sossou
Commitment to using existing mechanisms and processes
Importance of sharing information and coordinating efforts
Combining efforts for broader impact assessment
Speakers emphasized the need for better collaboration, information sharing, and coordination among various stakeholders to improve the effectiveness of cybersecurity capacity building efforts.
Challenges in measuring impact of cybersecurity initiatives
Brendan Dowling
João Moreno Falcão
Tereza Horejsova
Qualitative measurement through exercises and simulations
Assessing community needs and outcomes
Tracking project growth and coverage in databases
Speakers acknowledged the difficulties in measuring the impact of cybersecurity initiatives and suggested various approaches to assess effectiveness, including qualitative measurements and tracking project growth.
Similar Viewpoints
Both speakers emphasized the importance of involving multiple stakeholders and conducting follow-up assessments to ensure the effectiveness of cybersecurity capacity building efforts.
Brendan Dowling
Yao Amevi Sossou
Whole-of-nation approach involving multiple stakeholders
Need for follow-up and impact assessment of initiatives
Both speakers highlighted the importance of cross-sector collaboration and trust-building in addressing cybersecurity challenges, particularly in sensitive areas like healthcare data protection.
Mevish P Vaishnav
Tereza Horejsova
Relevance of cybersecurity in healthcare data protection
Importance of trust-building and cross-sector collaboration
Unexpected Consensus
Role of popular culture in cybersecurity education
João Moreno Falcão
Oktavía Hrund Jóns
Leveraging popular culture to attract youth to cybersecurity
Emphasizing practice and continuous learning
There was an unexpected consensus on the potential role of popular culture in attracting youth to cybersecurity and the importance of practical, continuous learning approaches. This highlights a novel approach to cybersecurity education that combines cultural relevance with hands-on experience.
Overall Assessment
Summary
The main areas of agreement included the need for tailored capacity building approaches, improved collaboration and information sharing, and the challenges in measuring the impact of cybersecurity initiatives. There was also consensus on the importance of involving multiple stakeholders and addressing specific needs of different sectors and communities.
Consensus level
The level of consensus among the speakers was moderately high, with most agreeing on the fundamental challenges and necessary approaches to cybersecurity capacity building. This consensus implies a shared understanding of the complexities involved in cybersecurity education and the need for diverse, collaborative strategies to address these challenges effectively. However, there were some variations in the specific focus areas and proposed solutions, reflecting the multifaceted nature of the topic and the diverse backgrounds of the speakers.
Differences
Different Viewpoints
Approach to measuring impact of cybersecurity initiatives
Brendan Dowling
Tereza Horejsova
João Moreno Falcão
Qualitative measurement through exercises and simulations
Tracking project growth and coverage in databases
Assessing community needs and outcomes
Speakers proposed different methods for measuring the impact of cybersecurity initiatives, ranging from qualitative exercises to quantitative tracking of projects and community-focused assessments.
Unexpected Differences
Focus on specific sectors in cybersecurity capacity building
Mevish P Vaishnav
Brendan Dowling
Relevance of cybersecurity in healthcare data protection
Whole-of-nation approach involving multiple stakeholders
While most speakers discussed general cybersecurity capacity building, Mevish P Vaishnav unexpectedly focused specifically on healthcare data protection, contrasting with Brendan Dowling’s emphasis on a broader, whole-of-nation approach.
Overall Assessment
Summary
The main areas of disagreement centered around methods for measuring impact, approaches to coordination, and the specificity of focus in cybersecurity capacity building.
Difference level
The level of disagreement among speakers was moderate. While there were differing perspectives on specific approaches and focus areas, there was a general consensus on the importance of cybersecurity capacity building and the need for improved coordination and information sharing. These differences in approach could lead to varied strategies in implementing cybersecurity initiatives, potentially resulting in a diverse range of solutions but also possible challenges in creating a unified global approach to cybersecurity capacity building.
Partial Agreements
All speakers agreed on the importance of coordination and information sharing, but differed in their emphasis on using existing mechanisms versus creating new follow-up processes.
Brendan Dowling
Tereza Horejsova
Yao Amevi Sossou
Commitment to using existing mechanisms and processes
Importance of sharing information and coordinating efforts
Need for follow-up and impact assessment of initiatives
Takeaways
Key Takeaways
There is a wealth of cybersecurity capacity building information available, but it often lacks proper targeting and coordination
Tailored, context-specific approaches are crucial for effective cybersecurity capacity building
Accessibility and localization of resources are key challenges, especially for underserved communities
Cross-sector collaboration and a whole-of-nation approach are essential for building cyber resilience
Measuring the impact of cybersecurity initiatives requires both quantitative and qualitative methods
Regular follow-up, assessment, and knowledge sharing are necessary for continuous improvement
Resolutions and Action Items
Commit to using existing coordination mechanisms like the GFCE and Partners in the Blue Pacific
Encourage stakeholders to share project information on platforms like the Cybil Portal
Develop more inclusive and accessible capacity building programs, considering language and cultural contexts
Implement regular auditing and testing of cybersecurity measures through exercises and simulations
Foster greater collaboration between different sectors and stakeholders in cybersecurity initiatives
Unresolved Issues
How to effectively measure the long-term impact of cybersecurity capacity building initiatives
Addressing the challenge of limited resources and prioritization in low-resource environments
Finding ways to sustain capacity building efforts beyond initial project timelines and budgets
Balancing the need for information sharing with potential security concerns or competitive interests
Suggested Compromises
Combining efforts and resources from multiple organizations to achieve broader impact and more comprehensive assessment
Balancing standardized approaches with localized, context-specific implementations of cybersecurity capacity building
Using both technical and non-technical methods to engage diverse audiences in cybersecurity education and awareness
Thought Provoking Comments
We’ve implemented some very substantial capacity building programs in recent years, but we’ve often found that they can be untargeted, inappropriate. And we’ve committed to doing better with our partners, working with them in dialogue to figure out what the right approach for that country, for that context, for that situation is.
speaker
Brendan Dowling
reason
This comment highlights the importance of tailoring cybersecurity capacity building efforts to specific contexts rather than using a one-size-fits-all approach. It demonstrates a shift in thinking towards more collaborative and customized solutions.
impact
This comment set the tone for much of the subsequent discussion, emphasizing the need for bespoke, context-specific approaches to cybersecurity capacity building. It led to further exploration of localization and accessibility issues.
We need to make sure that we have the capacity-building opportunities to make it more accessible to young people. And the most commonly used tool to access the internet in Africa is a mobile phone.
speaker
Yao Amevi Sossou
reason
This comment brings attention to the specific needs of young people and the importance of mobile technology in Africa, highlighting the need for targeted and accessible capacity building approaches.
impact
It shifted the conversation towards considering specific regional and demographic needs, leading to discussions about language accessibility and the use of popular culture in cybersecurity education.
Sometimes when we have had conversations with recipient countries, they were really voicing concerns like: please organize yourselves. We cannot handle it; our capacity is already limited. And if we have everybody coming to us separately, trying to do their project, we are overwhelmed as well.
speaker
Tereza Horejsova
reason
This comment provides a crucial perspective from recipient countries, highlighting the challenges they face in coordinating multiple capacity building efforts. It underscores the need for better coordination among donors and implementers.
impact
This insight led to a deeper discussion about the importance of coordination and information sharing among different stakeholders involved in cybersecurity capacity building.
I think in cyber security, qualitative measurement is really crucial. For me, testing through exercises is one of the most effective ways to qualitatively test whether your arrangements, your capacity, your preparedness have improved.
speaker
Brendan Dowling
reason
This comment introduces a practical approach to measuring the impact of cybersecurity capacity building efforts, moving beyond quantitative metrics to emphasize the importance of qualitative assessment through exercises and simulations.
impact
It shifted the discussion towards more concrete ways of evaluating the effectiveness of cybersecurity initiatives, leading to a broader conversation about impact measurement and continuous improvement.
Overall Assessment
These key comments shaped the discussion by moving it from general observations about cybersecurity capacity building to more nuanced considerations of context-specific approaches, accessibility, coordination challenges, and practical impact measurement. They helped to highlight the complexity of the issue and the need for multifaceted, collaborative solutions that take into account the perspectives and needs of all stakeholders involved. The discussion evolved from identifying problems to exploring concrete strategies for improvement, emphasizing the importance of tailored approaches, better coordination, and ongoing assessment in cybersecurity capacity building efforts.
Follow-up Questions
How can we improve the accessibility and localization of cybersecurity capacity building resources?
speaker
Yao Amevi Sossou
explanation
Addressing language barriers and making resources accessible to underserved communities, especially in Africa, is crucial for effective capacity building.
How can we better engage and train frontline educators in cybersecurity?
speaker
João Moreno Falcão
explanation
Involving educators is essential to reach a wider audience and make cybersecurity education more effective.
What mechanisms can be developed to ensure follow-up and long-term impact of capacity building initiatives?
speaker
Yao Amevi Sossou
explanation
Many initiatives are budget-constrained and time-limited, so ensuring continued impact is important for sustainable capacity building.
How can we leverage popular culture to demystify cybersecurity and attract more people to the field?
speaker
João Moreno Falcão
explanation
Using popular culture references could help make cybersecurity more approachable and inspire more people to enter the field.
What are effective ways to measure the impact of cybersecurity capacity building initiatives?
speaker
Dino Cataldo
explanation
Developing appropriate indicators to assess the effectiveness of capacity building programs is crucial for improvement and justification of resources.
How can we improve coordination and information sharing among donors and implementers of cybersecurity capacity building projects?
speaker
Tereza Horejsova
explanation
Better coordination could reduce duplication of efforts and improve the overall impact of capacity building initiatives.
What strategies can be employed to ensure cybersecurity capacity building reaches and engages the whole of society?
speaker
Brendan Dowling
explanation
A whole-of-nation approach involving government, industry, and community is necessary for effective cyber resilience.
How can we prepare for the potential vulnerabilities that may arise if quantum computing becomes democratized?
speaker
Mevish P Vaishnav
explanation
Anticipating future technological developments and their impact on cybersecurity is important for long-term resilience.
Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.
WS #82 A Global South perspective on AI governance
Session at a Glance
Summary
This panel discussion focused on global approaches to AI governance, with particular emphasis on perspectives from the Global South. Experts from various regions discussed the challenges and opportunities in developing AI regulatory frameworks that respect human rights and cultural diversity.
The discussion highlighted the African Union’s recent adoption of a regional AI strategy, which aims to promote ethical AI use aligned with the continent’s development goals. Challenges faced by African countries in implementing AI governance include infrastructure limitations, skills shortages, and the need to address unique socio-economic issues.
In Asia, a more cautious “wait-and-see” approach was noted, with most countries opting for soft law and best practices rather than hard regulations. The importance of addressing existing digital divides and human rights concerns before fully embracing AI was emphasized.
The European Union’s AI Act was presented as a pioneering regulatory framework, balancing innovation with fundamental rights protection. Its potential influence on global AI governance through the “Brussels effect” was discussed, along with the challenges of exporting such a politically rooted framework to other regions.
The geopolitical aspects of AI governance were explored, including the role of organizations like G20 and BRICS in shaping policies. The need for collaboration between Global North and South was stressed, with human rights proposed as a universal language for developing inclusive AI governance frameworks.
The discussion concluded by addressing the challenges faced by Global South countries in regulating technologies primarily developed outside their jurisdictions. Participants emphasized the importance of leveraging existing legal frameworks and regional collaboration to effectively participate in global AI governance discussions.
Keypoints
Major discussion points:
– Different regional approaches to AI governance, including the EU AI Act, African Union strategy, and developments in Asia
– Challenges faced by Global South countries in developing AI governance frameworks, including lack of infrastructure and technical capacity
– The need to center human rights in AI governance approaches rather than just focusing on risk
– Geopolitical tensions shaping AI governance and the potential for fragmentation of approaches
– The role of the private sector and need for corporate accountability in AI governance
Overall purpose:
The purpose of this panel discussion was to examine global approaches to AI governance, with a focus on perspectives from the Global South. The panelists aimed to highlight emerging approaches from different regions and discuss challenges and opportunities for more inclusive AI governance frameworks.
Tone:
The overall tone was academic and policy-oriented, with panelists providing expert insights from their regional perspectives. There was a collaborative spirit, with panelists building on each other’s points. Towards the end, the tone became more urgent in discussing the need for Global South countries to move beyond being consumers of AI and become active contributors to global governance frameworks.
Speakers
– Melody Musoni: Digital Policy Officer at the European Centre for Development Policy Management, moderator of the panel
– Jenny Domino: Non-resident fellow at the Atlantic Council’s Digital Forensic Research Lab, Senior Case and Policy Officer at the Oversight Board at META, PhD candidate at Harvard Law School on global governance of technology
– Lufuno T Tshikalange: Expert in cyber law, founder of Orizo Consulting specializing in data privacy, cyber security governance, and ICT contract management in South Africa
– Gianclaudio Malgieri: Associate Professor of Law and Technology and board member of the eLaw Center for Law and Digital Technology at the University of Leiden Law School, Co-Director of the Brussels Privacy Hub, Managing Director of Computer Law and Security Review
Additional speakers:
– Dr. Sabine Witting: German lawyer and Assistant Professor for Law and Digital Technologies at Leiden University, co-founder of TechLegality consultancy firm
Full session report
Global Approaches to AI Governance: Perspectives from the Global South
This panel discussion brought together experts from various regions to examine global approaches to AI governance, with a particular focus on perspectives from the Global South. The panellists explored the challenges and opportunities in developing AI regulatory frameworks that respect human rights and cultural diversity, highlighting the need for inclusive and collaborative approaches to global AI governance.
Regional Approaches to AI Governance
The discussion revealed significant variations in regional approaches to AI governance:
1. African Union: Lufuno T Tshikalange highlighted the African Union’s recent adoption of a regional AI strategy in August 2024, aligned with the continent’s Agenda 2063 and digital transformation strategy. This strategy aims to promote ethical AI use in line with Africa’s development goals while addressing the triple challenges of inequality, poverty, and unemployment.
2. Asia: Jenny Domino described a more cautious “wait-and-see” approach in Asia, with most countries opting for soft law and best practices rather than hard regulations. This approach reflects a desire to observe the outcomes of regulatory efforts in other regions before implementing comprehensive frameworks.
3. European Union: Gianclaudio Malgieri presented the EU AI Act as a pioneering regulatory framework, which has evolved from a product safety tool to a fundamental rights tool. The Act attempts to balance innovation with fundamental rights protection and includes a list of prohibited practices. The potential influence of the EU AI Act on global AI governance through the “Brussels effect” was discussed, along with the challenges of exporting such a politically rooted framework to other regions.
4. Council of Europe: The panel discussed the Council of Europe AI Convention, which focuses on public sector AI and its potential relationship with the EU AI Act.
Challenges in Developing AI Governance Frameworks
The panellists identified several key challenges faced by Global South countries in developing and implementing AI governance frameworks:
1. Infrastructure limitations: Lufuno T Tshikalange emphasised the lack of necessary infrastructure in many developing countries, including inadequate internet access and insufficient data for training models, which hinders their ability to fully engage with AI technologies and governance.
2. Skills shortages: The need for technical expertise and capacity building in AI-related fields was highlighted as a significant challenge for many Global South countries.
3. Existing digital divides: Jenny Domino stressed the importance of addressing existing digital divides and human rights concerns before fully embracing AI governance.
4. Measuring risks to fundamental rights: Gianclaudio Malgieri pointed out the difficulty in quantifying and measuring risks to fundamental rights when developing risk-based approaches to AI governance.
5. Limited leverage over tech companies: An audience member raised the issue of Global South countries having limited influence over tech companies based in other jurisdictions, making it challenging to enforce regulations effectively.
Human Rights-Centred Approach
A recurring theme throughout the discussion was the importance of centring human rights in AI governance approaches:
1. Universal framework: Jenny Domino proposed human rights as a potential universal language for developing inclusive AI governance frameworks that could be understood across different regions and political contexts.
2. Balancing risks and rights: While the EU AI Act was praised for attempting to balance risks and rights, there was debate about whether a purely risk-based approach was sufficient to protect fundamental rights.
3. Corporate accountability: The discussion touched on the need for mechanisms to hold companies accountable for adverse human rights impacts caused by AI systems, drawing parallels with the UN Guiding Principles on Business and Human Rights.
Geopolitical Aspects of AI Governance
The panel explored the significant role of geopolitics in shaping AI governance approaches:
1. G20 and BRICS: Melody Musoni highlighted the potential for organisations like G20 and BRICS to influence global AI governance discussions and policies. South Africa’s G20 presidency was mentioned as an opportunity for collaboration in AI governance.
2. Political standpoints: Differing political views on issues such as social scoring were noted as factors shaping diverse governance approaches across regions.
3. Fragmentation concerns: Jenny Domino emphasised the need to consider geopolitical tensions and the potential fragmentation of internet governance when developing AI governance frameworks.
Collaboration and Inclusivity
The panellists stressed the importance of collaboration between the Global North and South in developing effective AI governance frameworks:
1. Moving beyond consumer status: Lufuno T Tshikalange urged Global South countries to leverage their skills and case studies to contribute actively to global AI governance discussions, rather than being mere consumers of technology and regulation.
2. Regional collaboration: The potential for regional bodies like the African Union to increase leverage in regulating multinational tech companies was discussed.
3. Inclusive dialogue: The need for diverse voices in shaping AI governance was emphasised, with calls for the Global South to be seen as contributors rather than subjects of discussion.
Practical Approaches and Future Directions
The discussion highlighted several practical approaches and unresolved issues that require further attention:
1. Building on existing frameworks: Panellists suggested focusing on existing laws (e.g., data protection, criminal law) as a starting point for regulating AI in countries with limited capacity.
2. Influence of EU AI Act: The potential impact of the EU AI Act on other regions was noted, with the Brazilian AI Act mentioned as an example of its influence.
3. Achieving a universal approach to AI governance while respecting regional differences
4. Effectively regulating AI in developing countries with limited leverage over tech companies
5. Balancing innovation with protection of fundamental rights in AI governance
6. Measuring and quantifying risks to fundamental rights in AI systems
The panel concluded by emphasising the need for continued dialogue and collaboration to address these challenges and develop more inclusive and effective global AI governance frameworks.
Session Transcript
Melody Musoni: Good morning, everyone. Can you hear me? Welcome to our panel discussion, where we are going to be talking about a global South perspective on AI governance. My name is Melody Musoni, I'm a Digital Policy Officer at the European Centre for Development Policy Management, and I am going to be moderating this session together with my co-moderator, Dr. Sabine Witting. Dr. Witting is a German lawyer and an Assistant Professor for Law and Digital Technologies at Leiden University in the Netherlands. She's also the co-founder of TechLegality, a consultancy firm specializing in human rights and digital technologies. The purpose of our panel discussion is basically to have a conversation around global approaches to AI governance, and especially the emerging approaches that we see from the global South. And I think this conversation is quite critical, as we are at a point where we are setting international standards on AI governance, and these standards tend to be set by global North countries. But of late we have seen global South countries pushing back and demanding fair and equal representation of their cultural and social norms and values when it comes to developing AI frameworks. So there is also a question of whether it is even possible to have an international framework on AI governance that is representative and reflective of all the diverse cultures that we have, and of the different political, economic, and social contexts of different countries. And I guess one of the outcomes that came from the Summit of the Future, being the Global Digital Compact, is that more and more countries are making that commitment to enhance international governance of AI for the benefit of humanity.
To quote from the Global Digital Compact, it calls for "a balanced, inclusive, and risk-based approach to the governance of AI, with the full and equal representation of all countries, especially developing countries, and the meaningful participation of all stakeholders." So I think that the future of AI governance is also being shaped by geopolitics and geopolitical competition, especially between the superpowers that we have, the U.S. and, for example, China. And, of course, because of that AI race, we need to start thinking about the implications for how global South countries actually approach the question of AI governance. And, of course, the governance of AI will likely be shaped by aspirations of AI sovereignty or tech sovereignty, and very little by the promotion and protection of human rights. So in this panel discussion, we are going to try to answer three policy questions. The first one is, what regulatory approaches to AI are being adopted by the global South, and do these approaches advance the protection of human rights? The second policy question is, what challenges are being faced by the global South in developing their AI governance frameworks? And the last question is, what are the implications of different regulatory approaches on AI development and deployment in the global South? That being said, I am joined on this panel by a panel of brilliant experts who will be representing different regions: we have colleagues talking about Africa, Latin America, Asia, and Europe. So maybe I'll start with the introductions. Sitting with us today, we have Jenny Domino. She's a non-resident fellow at the Atlantic Council's Digital Forensic Research Lab and a Senior Case and Policy Officer at the Oversight Board at META, where she oversees content policy development concerning elections, protest, and democratic processes.
She’s also completing her PhD at Harvard Law School on the global governance of technology. So welcome, Jenny.
Jenny Domino: Thank you.
Melody Musoni: And then online, we are joined by advocate Lufuno Tshikalange, who is a distinguished expert in cyber law and the founder of Orizo Consulting, a firm specializing in data privacy, cyber security governance, and ICT contract management in South Africa. She has over 10 years of multidisciplinary experience in the ICT sector and is recognized as one of Africa's top 50 women in cyber security. So welcome, advocate. And we are also joined by Gian Claudio Malgieri. He's an Associate Professor of Law and Technology and a board member of the eLaw Center for Law and Digital Technology at the University of Leiden Law School. He's also the Co-Director of the Brussels Privacy Hub and the Managing Director of Computer Law and Security Review. So welcome, panelists. So I guess I'm just going to jump right into our discussion, and I'm going to start with advocate Lufuno to shed more light on the development of AI governance, especially on the African continent. What do you see as some of the challenges that African countries are experiencing as they develop their AI frameworks? You can go ahead.
Lufuno T Tshikalange: Thank you, Dr. Melody, and thank you for having us here today. In Africa, we now have a regional artificial intelligence strategy, which was developed this year and adopted by the African Union Executive Council in August 2024. And currently we are participating, within the auspices of the African Union Commission, in contributing to the strategy implementation plan for the next five years, 2025 to 2030. This strategy seeks to ensure that we have the appropriate establishment of AI governance and regulation, and to accelerate AI adoption in key sectors like agriculture, education, and health. And this is aligned with Agenda 2063 and the digital transformation strategy that we had adopted earlier. It also promotes the creation of an enabling environment for an AI startup ecosystem, and deals with issues of skills and talent development, because it is not just an AI skills shortage; across ICT as a whole, we have a shortage of skills. And it also promotes grassroots research and innovation partnerships in AI, which is important for us if we are to participate meaningfully in the digital economy and, more importantly, in the AI economy. The important part is that the implementation of this strategy is going to be based on ethical principles which respect human rights and diversity, and on ensuring that we have appropriate technical standards for AI safety and security. So the approach that this AI strategy has taken is a multi-pronged approach, emphasizing that the regulatory framework be flexible, agile, adaptable, context-specific, and risk-based, because it is important that we do not cut and paste. The development of this strategy did learn from the EU, the OECD, and other regions, but the intention is to make sure that the strategy responds to our own unique challenges, importantly the triple challenge: inequality, poverty, and unemployment.
It is also emphasizing that the approach be multi-tiered and collaborative. The strategy's goal is to promote a human-centric approach, which is very important so that the technology is developed as a tool to assist our developmental goals, not to exploit people or violate people's human rights. At the same time, it is also very strong on security by design, and on ensuring that there is accountability and transparency in both the design and deployment of AI systems. The strategy highlights the need to make sure that mechanisms throughout the AI life cycle mitigate potential harms from AI technologies while we foster responsible development and use of AI across Africa. It is important to note, as I've mentioned earlier, that the AI strategy is not a standalone, but is part of the broader digital transformation strategy. So even the priorities that are in the strategy are derived from the digital transformation strategy, which is in turn informed by Agenda 2063. So the intention is not to have an AI strategy for the sake of it, but to have an AI strategy that will respond to our unique challenges, stimulating our economy and promoting the integration that is a target of Agenda 2063, ensuring that we generate inclusive economic growth and stimulate job creation, and that there is a holistic approach to the digital revolution for socioeconomic development that meets the needs of Africans. Other important baseline policy frameworks include the AU Convention on Cybersecurity and Personal Data Protection, commonly known as the Malabo Convention. The current challenge that we have is that the Malabo Convention was adopted in 2014 and came into effect in June last year, but is yet to be implemented. We also have the AU Data Policy Framework, which has an expanded framework in terms of how effective data governance can be achieved even as we implement emerging technologies like AI.
So we have our priorities and guiding principles that are provided by the strategy, which are local first, people-centered, and human rights and dignity. This is everywhere in the strategy, because it is important that as we advance technology, we are not violating human rights like the right to privacy, as enshrined in the AU charter and the different constitutions of African states. Peace and prosperity is also one of the principles that will govern this strategy, as are inclusion and diversity. As we said, we are struggling with triple challenges, one of which is inequality, exacerbated by a number of divides: the digital divide, the information divide, and the infrastructure divide, which hold back technological development even within Africa, and which are even worse if we compare ourselves with other regions. The strategy also promotes ethics and transparency as one of its principles, and cooperation and integration, because AI or any other emerging technology cannot be implemented in isolation; we have to do it in collaboration with each other. So we say that the AI governance conversation is important to us because of the seven aspirations that we need to achieve according to Agenda 2063, and the SDGs that correspond with the same. Our key concerns, however, are the misuse of intellectual property, the potential for misinformation threatening our societal cohesion and democracy, and the lack of infrastructure. And we are also considering barriers that may make AI adoption a challenge, including inadequate internet access, insufficient data for training models or unstructured data, limited computing resources, and our not having sufficient skills.
So the risks that we have identified, and that will be addressed fully in the implementation plan, include environmental risks; system-level risks like the bias that may come with AI systems; structural risks like automation, which might increase the disparities and inequalities that we have; and cultural and societal risks concerning heritage and culture as we advance into the world of technology. So we are working on strategies that will help us ensure that, given our triple challenge and the other challenges identified in Agenda 2063, we are not now creating more challenges for ourselves. We have examples that we will be looking at in terms of case studies that can help us accelerate the implementation and adoption of AI within the African region, as Algeria, Egypt, Benin, Mauritius, Rwanda, and Senegal are already leading in the space, having their own strategies to deal with the identified objectives within their own countries.
Melody Musoni: So, should we continue? Okay, it seems like we have lost advocate Lufuno, but from what she was sharing, she was giving an overview of what is happening in Africa, particularly with the continental AI strategy and how it connects with the broader vision of Agenda 2063 and the digital transformation strategy, as well as drawing examples of some of the African countries that are actually developing their own national AI frameworks. So, I guess I'm going to come to you, Jenny, to talk more about the case of Asia. I remember last year at the IGF, we were in Kyoto, and Japan was sharing its role in the shaping of international frameworks on AI through the Hiroshima AI process and how it shaped the G7 AI principles. And now we are here in Saudi Arabia, and they have also developed the Riyadh charter on AI principles for the Islamic world, which is aimed at aligning AI frameworks with Islamic values and principles. And as someone who has been doing work on AI governance, do you see an emergence of different approaches to AI governance, and do you see these approaches promoting or upholding human rights, from your perspective?
Jenny Domino: Can you hear me? Yeah? First of all, good morning, everyone. I'm happy to be here. Just to qualify, I'm speaking in my personal capacity, and my views don't necessarily reflect the views of the organizations I represent. To answer your question, I echo what Advocate Lufuno already mentioned and discussed. In Asia, by and large, we see more of a wait-and-see approach. So we don't have anything that comes close to the EU AI Act in terms of hard law, in terms of regulation. There are draft legislative initiatives, and there's the ASEAN guidance; so what we see so far in Asia is more of a best-practice or soft-law approach rather than hard law, with draft laws in Japan, Korea, Thailand, and other countries. And what I see as a common thread among all these various initiatives is that they have overlaps with the EU AI Act, for example in terms of risk tiering, right? With minimal risk, limited risk, up to high risk. There is a lack of centering of human rights, and many human rights groups have actually called this out, including in the EU AI Act: the focus on risk tends to eclipse the focus on rights. And I think there's something here that we can learn from platform governance, from governing social media platforms. We've seen this with emerging technology: when I speak about social media more than a decade ago, there was a lot of excitement, there was a lot of hype, and the lack of centering of human rights actually led to a lot of human rights violations. The UN fact-finding mission on Myanmar, back in 2018, identified Facebook being used by ultranationalist monks and military officials in Myanmar to incite violence against an ethnic minority. And this is just to give you an example of how, if we don't center human rights when we talk about AI regulation, whether legislative or policy, it could be misused. And we already see this happening, for example, in the use of generative AI in conflict settings.
And as you said, Melody, there are a lot of geopolitical tensions happening around the world, a lot of armed conflict situations happening around the world. One of the things that we need to think about also is how AI technology can be used in those settings. Another thing that I want to emphasize is that when we speak about AI, AI encompasses a broad range of uses and purposes. There are more traditional AI uses, for example in content moderation on social media platforms, where automated technology is used to enforce the platforms' rules, taking down things like misinformation or hate speech. So there is that, but then there is also more emerging technology such as generative AI, like ChatGPT, and many of the issues that involve these technologies Advocate Lufuno already mentioned, so I won't repeat them here, except to emphasize that there is a divide in terms of basic infrastructure on the one hand, and inequality on the other. One of the things that I'm worried about is that the hype surrounding AI tends to overlook many infrastructure problems that we still see in developing countries in Asia. We can't really think about AI uptake in these countries until we solve basic issues like access to education and digital education. And then, on the other hand, we also have existing human rights issues in many countries in the world, including in Asia. We see internet shutdowns, website shutdowns. If these things are not even addressed, we can't really talk about AI: it's a shiny new thing, but we haven't really addressed existing issues concerning the internet. And so that's something that I want to highlight in this conversation. And as Advocate Lufuno also mentioned, we need to center stakeholders; but actually, in the human rights framework, it's rights holders. Who are the communities affected by this technology, and who are we leaving out?
Who are we including in the conversations that must happen? I’ll stop here.
Melody Musoni: Thank you so much for your contribution. I guess you have actually raised something that I was talking about earlier coming here, that now we are just talking about AI, AI, but we are forgetting about the important issues, the foundation of AI, issues on digital infrastructure, for example, accessibility to the internet and things like that. So AI is coming in and creating further divide and our focus is now on AI governance, overlooking some of the existential problems that we already face. I’m going to come to Gian Claudio to talk more about the developments that are happening in Europe. I’m sure everyone has heard about the EU AI Act, but what exactly is this Act about? And perhaps you can also touch more on the Council of Europe and the developments that are happening within the Council of Europe in developing another framework on AI and making that distinction because we tend to be confused, like what is the EU AI Act and what is the Council of Europe framework on AI governance as well. So over to you, Gian.
Gian Claudio: Thank you very much. I hope you can hear me well. Yeah, good. So yeah, indeed, the AI Act has been a bit everywhere in the discussion, because all other countries might refer to the AI Act as a model, or as a non-model. So I would like to start very briefly from the legislative history of the AI Act; then we will see the mission of the law and its structure in comparison with the other laws that we have in Europe also regulating AI-related aspects; and maybe, indeed, a focus on the AI Convention and the interplay between the AI Act and the AI Convention. Now, doing everything in seven to ten minutes is impossible, but we will try to do some highlights. Okay, so the AI Act, indeed, is the first AI regulation we have in the world. Of course, we already had some AI-related rules in many countries, including in Europe, but the AI Act has the ambition to regulate artificial intelligence as a whole. It took more than three years from the initial text of the European Commission to the final approval, from April 2021 to August 2024. The first of August 2024 is the entry into force, but then we have different timings from entry into force to application for different kinds of entities. At the same time, I was hearing that, for example, the Asian approach is wait and see, and in Europe we don't have that approach. Why so? I think the main point is that sometimes wait and see is too late. There is this narrative, also for big tech, et cetera, that it is better to first let innovation go, and then we can regulate. But the European Union took a different position, also because sometimes, if we let innovation go, if we let companies go, et cetera, then the consequence might be that the harm to human rights and fundamental freedoms has already been produced.
And sometimes these technologies create dependency in society, so society and individuals might come to depend on them. So it might be the case that it is too late, or the company is too big to sanction, too big to fail. This was happening, for example, with generative AI systems like OpenAI's, but also with some chatbots. But zooming in a bit more on the structure of the AI Act: I would like to say that the AI Act has this difficult balancing between risks and rights. It also had a particular development, because it was initially conceived as a product safety tool, so something more related to consumer protection or product safety, which also touches consumer protection. It is also interesting to notice that the European Commission has different DGs, different Directorates-General, and this was not proposed by DG Justice, which is usually the part of the European Commission responsible for rights, fundamental rights, liberties, et cetera; it was proposed by DG Connect. So it was developed really as a technological safety tool. Then, during the negotiations at both the European Parliament and the Council, during these three years, the AI Act became something else. It became a fundamental rights tool. So we still see the structure and the DNA of a product safety regulation, but with a lot of fundamental rights in it. It is not a tool for individual rights; it is a tool for protecting fundamental rights through risks, and this is important. And I think this is one of the main merits of the AI Act, because the AI Act takes some choices. It is not leaving everything to the decisions of companies or the decisions of member states. There are some political choices about risks to fundamental rights that the AI Act takes.
And it is the first time, and maybe the only case, in which we have a list of prohibited practices; now we see that other laws, like the Brazilian AI Act, follow it, but I don't want to talk about other regions. This brave choice was a political choice, and it was part of the most problematic section of the negotiations. So what was the choice in terms of prohibited practices of AI? For example, social scoring with AI is prohibited. Manipulation, or let's say the exploitation of special vulnerabilities through manipulation, is prohibited. We also have special prohibitions on emotion recognition for workers and students, in school and in the workplace, as well as special prohibitions related to law enforcement and the use of biometrics. This is a long list with a lot of exceptions, and I cannot go through all of it. But what is important to know is that in addition to this unacceptable-risk list, we also have high risk. For high risk, the approach is allowance: these systems are allowed, but a lot of governance duties are imposed. One of the main governance duties is that these high-risk systems need to have a data governance plan and need to check their bias, so as to have a system to limit or delete biases. Of course, you cannot delete bias; you can mitigate certain kinds of biases. Then there are other rules, like human oversight, interpretability, and so on. Then there is an area for limited risk in the AI Act, and the final area of protection is about general-purpose AI. So there is a parallel regulation within the law; the whole law is very big, 113 articles, it is huge. But just to say, there is a parallel chapter about general-purpose AI that also aims to regulate generative AI, and these systems are divided into those that produce systemic risk and those that do not.
Just to say, I think the balancing with innovation was also really in the mind of the legislators, in particular the Council, and in particular some parties in the European Parliament. There is a whole chapter on how AI innovation should be considered. I think one of the main aspects of the innovation part is that there are some rules for SMEs, small and medium-sized enterprises, but there are also some specific protections for the training of AI, with exceptions to the data protection rules; these are the regulatory sandboxes, if it is for public purposes. Okay, of course, I cannot say more on the AI Act; it is huge, and there are so many things I did not mention. But a couple more minutes to contextualize the AI Act in the broader EU picture, and then the AI Convention of the Council of Europe. Just to say that the AI Act is not the first piece of legislation to regulate AI-related elements or AI-related societal aspects in Europe and in the European Union. Already the GDPR, the first data protection rules we had from a regional perspective in the world, was regulating several aspects of AI. For example, Article 22 is about automated decision-making, but there is also the definition of profiling, and there are many transparency rules. Connected to the GDPR, we have the Digital Services Act, which regulates digital services for online platforms, and also the use of AI: recommender systems and online behavioral advertising using profiling. So there is a lot related to AI in the Digital Services Act as well, which was approved in 2022. The Digital Markets Act also regulates big platforms in the digital market to some extent. Then we have, of course, the Data Act, the DGA, et cetera; just to say, the AI Act does not sit in a silo.
And then we have the AI Liability Directive, which has been proposed but will probably never be adopted because, as you know, the European Union legislative process is a long one, and we already know that the Council, in particular France, is against this Liability Directive; there is now this wave of "we regulated too much, we need more innovation". France, in particular with Macron, was very concerned about not adding new rules. But actually the AI Liability Directive is just the final element of the AI Act; it is connected, just providing some exemptions and rules for liability when the AI Act is violated. Okay. But just to say, this is a very, very broad picture. Then thirty seconds on the AI Convention and I will stop. So the Council of Europe this year, actually a few weeks ago, finally approved the AI Convention. It is interesting to notice that this is not just a regional act, because many countries that are not part of Europe, not part of, let's say, political and geographic Europe, are also signatories. For example, the United States is a signatory of the AI Convention, but we also have other countries, in the Caucasus, et cetera; just to mention one, Georgia is a signatory. So the AI Convention is similar to the AI Act in terms of the definition of AI; they both take the OECD definition of AI. But it is different in terms of scope. The Council of Europe AI Convention regulates only public sector AI, the use of AI by the public sector, though member states can also opt in to cover private actors providing and deploying AI. Then another important thing is that the AI Act regulates deployers and providers of AI, while the AI Convention regulates the whole life cycle of AI.
So from the very first phase of elaboration to the final elements of the commercialization of AI. So there is this difference. I think these are some of the main differences between the two. Also, the AI Convention does not have a list of prohibitions. Then maybe the open question, and I will leave you with this question, is whether the AI Act can be a European Union implementation of the AI Convention, because we know that the AI Convention is an international treaty, so it should be implemented by signatories, implemented by Member States, while the AI Act is a law of a legal system, which is the European Union. So even though the AI Act was approved before the AI Convention, the AI Act might be an implementation of the AI Convention in Europe. And this is an open question that we are still working on. Thank you.
Melody Musoni: Thank you so much, Gian Claudio, for your contribution. And I guess the points you raise, especially on the roadmap and the discussions leading to the EU AI Act, are something that I find relevant, especially in the African context, where there is a push to regulate without taking our time to actually have these negotiations and to actually understand what our approach is. And I always give the EU processes as an example: it took over three years for you to be where you are with the EU AI Act. And I guess the point you also raise, that it is not just the EU AI Act regulating AI, since we have the Digital Services Act, the Digital Markets Act, and the GDPR, which also in a way regulate AI, is also important to our approaches to understanding AI governance and regulation. And of course, the distinctions between the Council of Europe framework and the EU AI Act are also quite important for me, as someone who is learning more about the EU processes and what is happening with the Council of Europe. So I'm going to change things a bit, because one of our speakers was not able to join. Before we go to a second round of questions for the panelists, I wanted to find out from the audience, online and here, if you have any questions for our panelists based on what they have already contributed, before we move on to the next segment of our discussion. Online?
AUDIENCE: Melody, if I may, just maybe one of the things that I always ask myself, and I know I should know the answer, but I don't, and it might also be interesting for others: this risk-based approach, Gian Claudio, that you were talking about, risk to what? Because I think the EU AI Act, as you have said, started as a product liability framework, and then later on we infused it with fundamental rights left, right and center, or at least we tried. So the EU AI Act looks at risk to what, and who is determining that? What was the discussion around these different risk categories? How was that assessed? Maybe you can tell us a little bit more about it. Because Jenny also said that Asian countries are also looking at this kind of tiered approach to risk, and maybe we can then hear from Jenny a little bit more about what the discussions are on how to determine risk and how to think about these different categories. So maybe Gian Claudio, and then Jenny, you could follow. And then Advocate Lufuno, maybe also from the AU perspective.
Gian Claudio: Yeah, sure. Thank you. Great question. Of course, this is a sore spot of technology legislation in Europe, I would say, because we already had this problem in the GDPR. The GDPR was one of the first instruments, I think, to introduce the concept of risk to fundamental rights. What is a risk to a right? Risk comes from business management doctrine, let's say, while fundamental rights come from human rights and fundamental rights analysis, and they are based on very, very different mental frameworks, intellectual frameworks. Risks can be measured and controlled. Fundamental rights are moral boundaries that are difficult to conceive of as a measurable element. They are usually not quantitatively measurable; qualitatively measurable, if we can say that, but not really quantitatively. So this was a big problem we already had in 2016 with the General Data Protection Regulation. And then it went on, because the Digital Services Act has similar problems for risk assessment, and now the AI Act. So the short answer is that the idea is risk to fundamental rights; but then, how can we measure it, how can we analyze it? I think one of the ambiguities of the law is that Article 1 says that the AI Act aims to protect fundamental rights, health, and safety. And I am always a bit skeptical, I have to say, a bit critical, when you put health and safety as something different from fundamental rights, rather than as part of a bigger understanding of fundamental rights. And then Article 1 goes on and says fundamental rights, especially democracy. But democracy is not really a part of the Charter of Fundamental Rights; if you Ctrl+F for "democracy", you will not find much. So it is a bit of a paradox that health and safety are considered as something other than fundamental rights, while democracy is considered part of them.
Of course, we know that they’re all connected, right? They’re all under the same approach. So just to say, there are, of course, political choices to be made, because when you go from fundamental rights to applications and prohibitions, we cannot leave everything to just private accountability. So I think the important thing is that the regulator said, for me, I don’t know, mental integrity means this. And so they prohibited manipulation and expectation of vulnerability. So just to make an example. And yeah, I think, and I will conclude with this. I, with a colleague in Ultrex, Cristiano Santos, we did some research on how we can really measure risk to fundamental rights, in particular, the severity element. And now we are working with an NGO, ECNL, to try to understand how to really do that. So we published an article this year, to try to understand how the subjective use of marginalized groups, for example, can participate, can inform the discussion of how to measure fundamental rights severity. So the risk severity, okay. And how other elements like adverse effects, but also violation of laws, because this, we have usually these big discussions in Europe. Some people say that risk to fundamental rights is just a violation of the law. But this is a bit limited, because it’s just yes or no. The question is black or white. It’s difficult to measure. Other people say that we should measure the adverse effects. But to measure the adverse effects, you would need something quantifiable like property or health, you know, and then you reduce all violation of fundamental rights. of privacy, et cetera, it’s reduced to what the psychologist or a psychiatrist or an economist would measure, not what a real human rights expert measures. So this is a bit the open issue, I guess, but this is bigger than just AI.
Jenny Domino: Yes, thank you. So in the Asia-Pacific region, what I see in terms of risk is sort of similar to the EU AI Act. There are prohibited uses that are enumerated, for example, in the ASEAN Guide on AI Governance and Ethics. There is not a lot of articulation of risks to whom, to answer your question, Sabine. And when it comes to human rights, there's not a lot of mention of it, if at all, and the same goes for stakeholder engagement or rights-holder engagement. Before I delve further into this, I want to make a note about the earlier panelists' framing of the race to regulate. What's so interesting to me about this framing is: in this race, who is competing against whom, and who is left out? If we're thinking about the global South, about developing countries, I agree that we need to hold tech companies accountable. At the same time, we also need to think about the geopolitical tensions. How are we looking at all these competing regulatory initiatives, and who is left out of this race? Why is it even framed as a race? Because a race means there's going to be a winner, or a leader, in terms of regulation. But we have to be thinking about this from a global perspective, because the internet is global. There will be overlaps. We don't want fragmentation. And my worry is that if we don't even have a unifying framework, and if we're thinking about this as a race to regulate, then there will be people at the bottom and people on top. So I just wanted to push back a little bit on that framing, because it's interesting to me as somebody coming from the Philippines. And so, back to the human rights discussion and corporate accountability: I do agree that we need to hold corporations, tech companies, accountable. But I see the discussion here as mirroring the history of the UN Guiding Principles on Business and Human Rights.
That's the whole reason why the UNGPs were formed: many multinational companies decades ago were operating in developing countries, and what do you do when the human rights violators are sometimes the government actors themselves? That's why I really do believe that government regulation is warranted. It provides the minimum baseline. But over and beyond that, there are many, many issues that we wouldn't want governments to regulate under human rights law. Just to give you an example, and I'm using examples from my own experience in content moderation and platform governance because that's what I'm most familiar with: Article 19 of the International Covenant on Civil and Political Rights guarantees freedom of expression, and under Article 19 of the ICCPR there are grounds on which you can limit freedom of expression. But this whole treaty, this Article 19, was designed to hold state actors accountable. All human rights treaties are state-based; they are built on the idea that we need to guarantee the fundamental rights of persons against their government. So Article 19 provides the minimum exceptions. But in content moderation there are so many issues that we call "lawful but awful." Take misinformation: do we really want governments to legislate false content away, to prohibit it? There were so many human rights groups criticizing the fake news laws that were coming out in different countries around the world several years ago. Or hate speech: some countries have hate speech laws, other countries have none at all. What I'm saying is that there are issues, and that's what I was talking about earlier when I said AI is all-encompassing. It affects so many different human rights. So when we're thinking about what we want to regulate, what do we want our governments to regulate?
What do we want corporate actors to do above and beyond government regulation? Why? Because under human rights law there are so many areas that we wouldn't want governments to regulate, and one example of that is speech. So how do you then regulate, say, generative AI companies? You also wouldn't want to curb expression used for satire or for condemning human rights abuses. So we want companies to go beyond what regulation requires, and that's why I see these as complementary. For example, in the platform governance space, that's how I see the Oversight Board doing its job. It's not meant to replace government regulation, and it shouldn't. But at the same time, there are many areas, in countries around the world, where we need companies to step up, because we can't just rely on governments to do their job. So that's how I want to nuance this discussion a little bit, about regulation and the race framing.
Melody Musoni: I’m not sure, advocate Lufuno, do you want to chip in?
Lufuno T Tshikalange: Can you hear me?
Melody Musoni: Yes. And then we have two more questions in the audience. You can go ahead.
Lufuno T Tshikalange: Okay, thank you. Hopefully I will finish what I'm saying this time around. In terms of risk identification, from how I've understood the approach of the African Union Commission in developing the artificial intelligence strategy, and how we are doing our own policy framework in South Africa, it's more human-centric. "Technology for the people, not people for technology" is the mantra that I believe comes through a lot, to ensure that human rights are respected. Reading through the Continental Artificial Intelligence Strategy, you see many references to human-centric development and to the respect and protection of human rights, which is also one of the eight principles of the strategy. I believe this approach was taken because the strategy is based on the AU Human Rights Charter and on a number of constitutions around the African region. So the risk is to the people. If we do not have ethical behavior, we end up with people for technology instead of technology for the people. Technology is meant to be a tool for advancing our lives and helping us move out of poverty, inequality, and unemployment, not something that turns us into a commodity. That is what the strategy was, by all means, trying to avoid: the technology is here to enhance, not to abuse or take advantage of anything. So risks to privacy, environmental rights, and issues of climate change were identified because they impact human rights at the individual level. And though there were other issues related to the economy, the important part was that the risk falls on the consumers of these technologies, and they need to be protected from any risk or harm that may come out of it.
So I believe that was the approach: to make sure that the strategy is human rights-centric, and that whatever comes out of the technology advances human rights rather than violating them. I do hope that makes sense. And there is also Agenda 2063, which says that our development must be human-centric. So everything that we have from a policy perspective puts human rights at the center of everything. Thank you.
AUDIENCE: [start of intervention inaudible] …rely on ISO 31000, which is what they see as the framework for risk assessments. And in a lot of tech companies there is a compliance team, which is different from the human rights team, and the compliance team is thinking about business risks. So I think what we need to do is map the risks to the business of having an adverse impact on human rights. Is there an accountability mechanism? Will there be some kind of legislative or regulatory risk? Is there some consequence? Or is there going to be a requirement in legislation or regulation to provide a remedy to victims of adverse human rights impacts? Because without that, there may be a nice human rights department that produces human rights impact assessments, but there's not really any real consequence. And unless there is a real consequence to the business, and it's seen as a business risk, I don't think we're going to see a lot of change.

AUDIENCE: Thanks very much, first of all, for the insights from all the regions about AI regulation in the different jurisdictions. I would like to draw attention a little bit more to some technical requirements. If we look at the technical realities, we currently see around 140 large language models on the globe. More than 120 of them come from the US, then you have around 20 from China, and the rest is shared among a tiny number of jurisdictions. In practice, and beyond any regulatory approach, that means a huge number of countries, including the European Union, are dependent on large language models from other regions, from the US and from China. And if we speak about fundamental rights, and how we can guarantee access to this technology and fundamental rights when using AI, I think one important aspect is digital sovereignty: to what extent are countries in a position to train AI models, the basis of every AI system, themselves?
And the fact is that even in Europe we do not have the data center and computing power capacity to train European AI models, which means we are dependent on AI models, large language models, from other regions. I think this is a very important factor, also in light of access to AI in developing countries, because for them it might be even more challenging to build up the data center and computing infrastructure needed to be sovereign and to train AI models themselves. A unified AI regulatory framework would be an important step, but it won't work if we do not have the same conditions for training AI models in all regions of the globe. Thank you.
Melody Musoni: Questions?
AUDIENCE: Yes, so I have a question here from Advocate Nusipro: how do we ensure, within the AI paradigm, that the same infrastructure service providers that operate transnationally sustain quality provisions across all jurisdictions, in line with human rights protocols? So I guess it goes back to the question around accountability across country borders. I think, Jenny, you spoke about that a little bit. Is there anything you want to add?
Jenny Domino: Yeah, of course. Thank you. Maybe I'll just quickly comment on all the questions and comments. On remedy, I completely agree, and I'm afraid my answer is more of a question: what would constitute sufficient remedy? If there is an adverse human rights impact and an affected community, what would remedy look like? Again, the UN Guiding Principles were constructed at a time when a different kind of industry was being contemplated, not a cross-border tech company that may not be within the regulatory jurisdiction of the country concerned. So it's just an addition: what would good remedy be? With respect to platform governance in Myanmar many years ago, when the UN identified Facebook as inciting violence, there were some groups in the refugee camps in Bangladesh that wrote to Facebook saying: give us money, help us build a life here, because your platform was partly used to cause what is happening to us. And Facebook said no, saying that it is not a charity organization. I think that raises, at the very least, interesting questions about what would constitute remedy. And from a legal standpoint, how do you attribute causation? Because again, we're talking about liability, and that's a different framework altogether from the "cause, contribute to, or directly linked to" framework under the UNGPs. So I think all of this is really interesting as a matter of law, and I don't have easy answers either, but just to complicate the discussion further. On the second comment, I completely agree. What I want to add there is the training, the labeling, the labor aspect of this. I think this is something we haven't discussed yet: how are developing countries involved in the training of this data, in the labeling?
Because cheap labor, again, is in the developing countries. We see this in content moderation, and we see this in AI as well. And the question there is: what are governments in developing countries doing to regulate that, to regulate labor, when there are not even adequate labor protections in many parts of the world? And the third question, on quality assurance across jurisdictions: I think that's why it's still very relevant to try to articulate more detailed guidance from the UN Guiding Principles on Business and Human Rights, and I know the UN B-Tech project has been leading this initiative. So, no easy answers, but I think the relevant organizations at the UN, and also civil society groups, are doing what they can to help articulate more concrete guidance to ensure quality.
Melody Musoni: Thank you so much, Jenny. For the sake of time, we need to go to the second round of questions. I see there are still more questions online; we'll try to attend to them at the end of our session. The next question I'll direct to Gian Claudio. One of the policy questions that has been coming up, especially within the African context after the EU adopted the AI Act, is the externalization of that framework. It comes from the experience that, when the GDPR was adopted, most African countries felt they were being pushed to adopt a similar framework in their regions, so as to comply with the EU and be able to do business with the EU. I was wondering: in the context of the EU AI Act and the provisions that we have, does it have the same extraterritorial effect as the GDPR? And do you see many countries emulating the EU AI Act and adopting similar frameworks in their own national legislative processes? Or do you see it playing out very differently from the GDPR?
Gian Claudio: Yeah, sure. Great question. And I think it's connected to some of the points and questions that were raised in the first round, so it's also an opportunity for me to connect to those. There are two separate aspects to consider. One is the actual scope, the potential extraterritorial scope, of EU law, in particular the AI Act. The second is the possible Brussels effect, to cite the book by Anu Bradford: whether the Brussels effect we saw with the GDPR, the emulation by other countries and systems, might also happen with the AI Act. These are two separate but connected questions, also connected to some of the questions raised before, for example: how can we have a homogeneous regulation of AI when some countries cannot host data centers or build general-purpose AI? This is connected because, if we regulate generative AI, we know that generative AI is produced and developed mostly in the US, so we need rules for the extraterritorial application of the law. The AI Act follows that approach: if you have produced somewhere else but you commercialize in Europe, then you have to follow the rules of the AI Act. The GDPR went even broader, because even if you just monitor the behavior of people in the European Union, or offer services, even for free, to people who are in the European Union, the GDPR was applicable. And this is where the two parts are connected: the extraterritoriality of the law and the Brussels effect go together, because the more extraterritoriality you have in the law, the more you push other countries to adopt similar systems in order to be adequate.

"Adequate" is the term used by the European Commission to analyse the compatibility of other legal systems with the European data protection system. So, just to say, the scope of the AI Act tries to go beyond the borders of the European Union, but of course this is not easy. As for the Brussels effect, this is mostly a political analysis. Do we expect other countries to follow the AI Act as happened with the GDPR? This is also connected to the earlier comment about a race between the first and the last. Of course it's not a race to be the best country or the best region, and I think it is very important to recognize the real risk of legal colonialism, of Europe going out to impose its laws on other countries. This is not something we want; I don't think we should push for Europe to dictate the legal agenda to all other countries. At the same time, if these laws are pro-human rights and pro-fundamental rights, then it is also, I think, a good thing if the Brussels effect takes hold. If other countries copy the European Union in prohibiting the exploitation of migrants, or in rejecting a full laissez-faire approach to bias and discrimination online, or in requiring human oversight, then it's good if they copy the European Union in that. And we already see that happening a bit. I was mentioning the Brazilian AI bill, approved by the first chamber two weeks ago. It really reflects the European Union AI Act in its structure, its design duties, its governance, its prohibitions. It reflects a lot, and it recalls how the LGPD, the general data protection law in Brazil, reflects the GDPR. So I'm just saying this is possible, but there is a big caveat, a big difference: the AI Act is difficult to export, from one point of view.
The AI Act is deeply rooted in political considerations that cannot be easily exported. For example, even the prohibition of social scoring was written with examples from China in mind. So there are political judgments about what the risk to human rights is. And the very idea of fundamental rights is not applied the same way all over the world. The GDPR could be "exported", and I'm using that word deliberately, brutal as the image is: it could be reapplied or reproduced in other regions, because it was based on principles that are not so difficult to reproduce elsewhere, for example purpose limitation or storage limitation. These are fairly technical principles. The AI Act, by contrast, is rooted in fundamental rights, and the Charter of Fundamental Rights applies only to the European Union. So this is maybe one of the major obstacles to the Brussels effect. But we can wait and see, because when we were preparing this panel I hadn't yet read the Brazilian law. Now I have read it, and it is very similar to the AI Act. So I can say that even though in principle it's difficult, in practice the Brussels effect may already be taking hold.
Melody Musoni: Thank you for that, Gian Claudio. Earlier we were talking about geopolitics and how it shapes the approaches we adopt to AI governance, and the position the EU AI Act takes on social scoring is just one example. So I'm going to come to Advocate Lufuno again, looking at this whole geopolitics question and how it is shaping our approaches to AI governance. You are sitting in South Africa, and as of this month South Africa has taken over the G20 presidency. There are quite high expectations for the continent, for both South Africa and the African Union as members of the G20, and many see it as an opportunity for Africa and South Africa to promote and advocate for inclusive digital development.
Lufuno T Tshikalange: I can't hear you, Dr. Melody. A technical issue. Okay, you were at…
Melody Musoni: Can you hear me now?
Lufuno T Tshikalange: Yes, I can hear you. You were saying something about the G20 and promoting Africa, and then it cut.
Melody Musoni: Yes, I was saying that there are big expectations that, with South Africa now taking over the G20 presidency and the AU being a member of the G20, we should see more and more conversations and discussions around digital development and AI governance. I also mentioned that South Africa is a member of the BRICS Plus economic bloc, whose members are also shaping AI policies. What I wanted to ask you is: do you think global South countries are able to develop an alternative to what we have in the EU AI Act? Do you see us, global South countries, developing an alternative approach to AI governance? If so, what do you think that would look like?
Lufuno T Tshikalange: Thank you. Yes, we hold the G20 presidency for 2025 and the AU is a member, which is an exciting opportunity for us. The theme is sustainability, equality and, I am missing one word in that theme. I believe we need to take advantage of the G20, which within our consultancy I am personally already doing, looking at the proposal to establish cyber diplomacy for the G20, and obviously to ensure that we are assisted in the sense that, as I said, collaboration is very important. This G20 will help us collaborate better and align ourselves on the principles that we want to inform the conversation on AI governance. I don't necessarily see us copying and pasting the EU AI Act, but I believe the EU has really set an example that we can study to see what lessons can be learned going forward. And if I'm not mistaken, the European Commission is also part of the G20 alongside the African Union. So I see an opportunity where we are not competing over who can do better than whom, but bringing our resources together to make the world a better place for everyone. And I believe that if we form the appropriate collaborative relationships during our presidency, we can then have opportunities such as skills exchange programs, where some African talent can be sent to the global North to learn the skills that we simply haven't managed to develop, as our R&D is not very strong. There are a lot of opportunities to be derived from this. We are not doing well in our cybersecurity space; our cybersecurity posture as Africa is not something to brag about, and some of the strong countries within the G20 are doing well in that space. So I see that, if we really use this opportunity, there are a lot of advantages from where I am seated.
And the expectations are certainly big, because we also did a panel at the South African Science Forum, which took place in the first week of December, and there is a lot that the President and the Minister of the Department of Science, Technology and Innovation set out for us to achieve while we hold the G20 presidency. So I hope I'm answering your question. Despite the many challenges we have, if we can master the collaborative approach, I believe this one year of presidency will leave us having achieved more. Thank you.
Melody Musoni: Okay, thank you, Advocate. I'm going to come to you, Jenny, coming back to the discussion on AI governance. You were asking: why are we competing? And if we are competing, there will be winners and there will be losers. Technical team, can you help us with the audio for the online participants? Okay. The way I see it, maybe because I'm moving from law into policy and public policy, there is a lot of influence from different political actors on how we approach AI governance. Take the example Gian Claudio gave with the EU AI Act and social scoring: it is a prohibited practice, definitely not allowed under the regulation in Europe, and yet we see other regions, like China, where social scoring still exists. Those different political standpoints are definitely going to shape the future of AI governance. In September there was the China-Africa summit, and one of the declarations and commitments that China and Africa made was to support each other in discussions on AI governance, for example in global AI governance discussions. Already that is an indication that we are heading to a place where we will not have one uniform regulatory framework for AI governance. My question to you is: how do you think we can achieve a more universal approach to AI governance, or to AI regulatory frameworks? Where do you actually see us moving from here? What should we start prioritizing?
What kind of discussions do we need to start having so that we can say: this is our baseline, these are the minimum standards we expect to see in different frameworks, whether adopted by the African Union or in Latin America? What kind of conversations should we start having, and what kind of principles should we see in these frameworks and in these discussions on AI governance?
Jenny Domino: Such a big question. I feel like if I knew the answer to this, we could all go home and there would be no IGF next year, because we wouldn't need one. So my short answer is: I think human rights really provide a common language. The UN Special Rapporteur on Freedom of Expression, referring at least to platform governance, has described human rights as offering a universal language that everybody can understand: regardless of where you are in the world, regardless of the political situation in a country, the people in that country understand a rights framework. And that's why I think human rights groups have been criticizing the more risk-oriented approach to AI governance, as opposed to a rights-centric framework. I know that answer can be perceived by some as naive or too idealistic, but I actually think it is pragmatic, first of all because it is understandable to everyone. Civil society and underrepresented regions can use that kind of language when talking about the risks and harms posed by AI technology. So I think that's something we should aim for: to bring human rights back front and center in AI governance.
Melody Musoni: Okay, thank you so much. So we need to prioritize human rights protections in our frameworks on AI. I see we have four minutes left. I wonder, do we have questions from the audience online and here?
AUDIENCE: Thank you for the wonderful, thought-provoking conversation. I only attended half of the session, so if this has already been covered, you don't need to answer it. I want to understand: when the global South regulates or writes policies for the internet and digital technologies, it is writing policies to govern companies and organizations that, most of the time, exist outside its jurisdiction. What we see in the EU, and generally in China and the US, is that they write policies and regulations around their own leverage points, their own companies, and their own markets. When the global South comes and writes its own policies, even in Africa, which has a huge, young population consuming a lot of these technologies, it doesn't have as many leverage points. And perhaps the IGF is a great space for us to collaborate and prioritize what the global South can add. So what are the advantage points that the global South needs to capitalize on and bring to the conversation, so that we can contribute alongside the EU experts? I think one of the things that makes the EU a stronger market is that it has a lot of very capable experts, with the organizations and the funding that enable them to make huge leaps, in terms of knowing exactly what can serve political organizations and also civil society organizations and so on. So what do we need to do in the global South? What do we need to leverage, what should we prioritize, and what should we focus on to have a productive conversation with our global North counterparts? Thanks.

Melody Musoni: I'll take this one. You raise an important question. It's one of the questions we always raise when we talk about regulation of AI, especially in Africa: what are we regulating?
And do we even have the institutional capacity to go after big tech? In some of these conversations the question is: what are we regulating, when these companies don't have any legal presence in a lot of countries? For example, Meta has offices in South Africa, Kenya, and Nigeria. So if there is unlawful processing of data in another country that doesn't have data protection laws, people there are not protected. And just to give an example on data protection, one approach was that perhaps it makes sense, from a regional perspective, to collaborate as a continent and have one regulatory body that represents everyone's interests. I remember that with the Cambridge Analytica crisis, South Africa was able to take further action against Meta, Facebook at the time, because it already had a regulator, and I think there were similar discussions in Kenya as well. So we can learn from that example, from countries that already have this institutional capacity.
Lufuno T Tshikalange: Thank you, Melody, I’ll take it.
Melody Musoni: You can. Yes, at the moment with AI it's very difficult. We always have these conversations, and my position is rather to start with the low-hanging fruit: the data protection laws that we have, and trying to see which laws we can extend. For example, from a criminal law perspective, we're talking about misinformation and hate speech, and we already have laws that to some extent cover issues where AI is being used for misinformation. So from a criminal law perspective, you extend the law; from a delictual perspective, you extend the law. So there are different ways of regulating AI without having a specific law, because at the moment we definitely don't have the capacity in the framework to go after big tech if they are not in our countries. I don't know, Jenny or Advocate Lufuno, if you want to contribute more on the Global South. One minute each. I'll start with Jenny, and then Lufuno, you can go immediately after. Do we have questions online as well? Advocate Lufuno, can you hear us?
Lufuno T Tshikalange: Yes, I can hear you.
Melody Musoni: Thank you. You can go ahead.
Lufuno T Tshikalange: Oh, I thought you said Jenny was going first.
Melody Musoni: Thank you.
Lufuno T Tshikalange: Yeah. I believe that from the global South perspective, just to remind you, one minute. Yes. Wrap up in one minute. From the Global South perspective, I believe that we do have enough skills and case studies that we can collaboratively use to combine our resources and come out of the consumer status that we have been for so long. We need to work towards coming into the table not as a subject matter of discussion but as a contributor. Thank you.
Melody Musoni: Okay. Thank you so much. I see we have run out of time, but I would like to take this opportunity to thank our speakers both online and here with us, as well as thanking you, the audience, for participating in our discussion. Feel free to come and engage with the speakers who are here if you have additional questions. So let’s give a round of applause to our brilliant speakers. Thank you.
Lufuno T Tshikalange
Speech speed
110 words per minute
Speech length
2051 words
Speech time
1113 seconds
African Union developing continental AI strategy aligned with Agenda 2063
Explanation
The African Union adopted a continental AI strategy in 2024. The strategy aims to establish AI governance and regulation while accelerating AI adoption in key sectors such as agriculture, education, and health.
Evidence
The strategy is aligned with Agenda 2063 and the digital transformation strategy. It promotes the creation of an enabling environment for an AI startup ecosystem and addresses skills shortages.
Major Discussion Point
Global approaches to AI governance
Agreed with
Jenny Domino
Gianclaudio Malgieri
Agreed on
Need for human rights-centered approach in AI governance
Lack of infrastructure and skills in developing countries
Explanation
Lufuno highlights the challenges faced by African countries in developing AI frameworks, including inadequate internet access, insufficient data for training models, limited computing resources, and skills shortages.
Major Discussion Point
Challenges in developing AI governance frameworks
Agreed with
Jenny Domino
Agreed on
Challenges in infrastructure and skills for developing countries
Need for collaboration rather than competition in governance approaches
Explanation
Lufuno emphasizes the importance of collaboration in AI governance, particularly in the context of South Africa’s G20 presidency. She sees this as an opportunity for skills exchange and addressing common challenges.
Evidence
Reference to the G20 presidency and the potential for collaborative relationships to address issues like cybersecurity.
Major Discussion Point
Implications of different regulatory approaches
Importance of Global South contributing expertise, not just being subject of discussion
Explanation
Lufuno argues that the Global South needs to move beyond being merely a subject of discussion in AI governance. She emphasizes the need for collaborative use of skills and case studies to contribute meaningfully to global discussions.
Major Discussion Point
Role of geopolitics in shaping AI governance
Jenny Domino
Speech speed
141 words per minute
Speech length
2257 words
Speech time
953 seconds
Asia taking a “wait-and-see” approach with soft law and draft initiatives
Explanation
In Asia, there is a more cautious approach to AI regulation compared to the EU. Countries are focusing on soft law approaches and draft legislative initiatives rather than comprehensive hard law regulations.
Evidence
Examples include the ASEAN guidance and draft laws in Japan, Korea, and Thailand.
Major Discussion Point
Global approaches to AI governance
Differed with
Gianclaudio Malgieri
Differed on
Approach to AI regulation
Existing human rights issues and digital divides need to be addressed
Explanation
Jenny emphasizes that basic infrastructure problems and existing human rights issues in many countries need to be addressed before focusing on AI uptake. These include gaps in access to education and digital education, as well as internet shutdowns.
Evidence
Examples of internet shutdowns and website shutdowns in some Asian countries are mentioned.
Major Discussion Point
Challenges in developing AI governance frameworks
Agreed with
Lufuno T Tshikalange
Agreed on
Challenges in infrastructure and skills for developing countries
Human rights as potential universal framework for AI governance
Explanation
Jenny suggests that human rights provide a common language that everyone can understand regardless of their location or political situation. She argues for bringing human rights back to the center of AI governance discussions.
Evidence
Reference to the UN Special Rapporteur on Freedom of Expression describing human rights as offering a universal language in the context of platform governance.
Major Discussion Point
Implications of different regulatory approaches
Need to consider geopolitical tensions and fragmentation of internet governance
Explanation
Jenny emphasizes the importance of considering geopolitical tensions in AI governance discussions. She warns against framing AI regulation as a race, as it may lead to winners and losers, potentially excluding some countries.
Major Discussion Point
Role of geopolitics in shaping AI governance
Need for human-centric approach centered on human rights
Explanation
Jenny advocates for a human-centric approach to AI governance that prioritizes human rights. She criticizes the risk-oriented approach for potentially eclipsing the focus on rights.
Evidence
Reference to human rights groups criticizing the risk-based approach in AI governance frameworks like the EU AI Act.
Major Discussion Point
Global approaches to AI governance
Agreed with
Lufuno T Tshikalange
Gianclaudio Malgieri
Agreed on
Need for human rights-centered approach in AI governance
Differed with
Gianclaudio Malgieri
Differed on
Focus on risks vs. rights in AI governance
Gianclaudio Malgieri
Speech speed
138 words per minute
Speech length
3415 words
Speech time
1482 seconds
EU AI Act as first comprehensive AI regulation, balancing risks and rights
Explanation
Gianclaudio Malgieri explains that the EU AI Act is the first comprehensive AI regulation in the world. It aims to balance risks and rights, with a structure based on product safety but incorporating fundamental rights considerations.
Evidence
The AI Act includes a list of prohibited practices, high-risk applications, and rules for general purpose AI.
Major Discussion Point
Global approaches to AI governance
Agreed with
Jenny Domino
Lufuno T Tshikalange
Agreed on
Need for human rights-centered approach in AI governance
Differed with
Jenny Domino
Differed on
Focus on risks vs. rights in AI governance
Difficulty in measuring and quantifying risks to fundamental rights
Explanation
Gianclaudio Malgieri highlights the challenge of measuring and quantifying risks to fundamental rights in the context of AI regulation. He points out that fundamental rights are qualitative and not easily measurable, unlike business risks.
Evidence
Reference to ongoing research on how to measure risk to fundamental rights, particularly the severity element.
Major Discussion Point
Challenges in developing AI governance frameworks
Potential for “Brussels effect” with other countries emulating EU approach
Explanation
Gianclaudio Malgieri discusses the possibility of a “Brussels effect” where other countries might adopt similar approaches to the EU AI Act. He notes that this is already happening to some extent, with the Brazilian AI Act reflecting elements of the EU approach.
Evidence
Example of the Brazilian AI Act recently approved by the first chamber, which reflects the EU AI Act in structure, design duties, and prohibitions.
Major Discussion Point
Implications of different regulatory approaches
Risk of legal colonialism if EU approach imposed on other regions
Explanation
Gianclaudio Malgieri warns against the risk of legal colonialism if the EU approach to AI regulation is imposed on other countries. He emphasizes that Europe should not dictate the legal agenda for all other countries.
Major Discussion Point
Implications of different regulatory approaches
Melody Musoni
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
G20 presidency as opportunity for Africa to shape inclusive digital development
Explanation
Melody highlights the expectations for South Africa and the African Union to promote inclusive digital development during South Africa’s G20 presidency. This is seen as a chance for Africa to influence global discussions on AI governance.
Major Discussion Point
Role of geopolitics in shaping AI governance
Differing political standpoints (e.g. on social scoring) shaping governance approaches
Explanation
Melody points out that different political standpoints, such as views on social scoring, are shaping approaches to AI governance. This leads to divergent regulatory frameworks across regions.
Evidence
Example of social scoring being prohibited in the EU AI Act but still practiced in China.
Major Discussion Point
Role of geopolitics in shaping AI governance
AUDIENCE
Speech speed
156 words per minute
Speech length
1247 words
Speech time
479 seconds
Limited leverage of Global South over tech companies based elsewhere
Explanation
An audience member points out that Global South countries often regulate companies and organizations outside their jurisdiction. This limits their leverage compared to regions like the EU, US, and China that regulate their own markets and companies.
Major Discussion Point
Challenges in developing AI governance frameworks
Agreements
Agreement Points
Need for human rights-centered approach in AI governance
Jenny Domino
Lufuno T Tshikalange
Gianclaudio Malgieri
Need for human-centric approach centered on human rights
African Union developing continental AI strategy aligned with Agenda 2063
EU AI Act as first comprehensive AI regulation, balancing risks and rights
The speakers agree on the importance of prioritizing human rights in AI governance frameworks, emphasizing a human-centric approach that balances risks and rights.
Challenges in infrastructure and skills for developing countries
Jenny Domino
Lufuno T Tshikalange
Existing human rights issues and digital divides need to be addressed
Lack of infrastructure and skills in developing countries
Both speakers highlight the challenges faced by developing countries in terms of infrastructure, skills, and digital divides that need to be addressed alongside AI governance.
Similar Viewpoints
Both speakers emphasize the challenges and importance of incorporating human rights considerations into AI governance frameworks, highlighting the difficulty in quantifying these rights within a risk-based approach.
Gianclaudio Malgieri
Jenny Domino
Difficulty in measuring and quantifying risks to fundamental rights
Need for human-centric approach centered on human rights
Both speakers recognize the significant role of geopolitics and differing political standpoints in shaping AI governance approaches globally.
Melody Musoni
Jenny Domino
Differing political standpoints (e.g. on social scoring) shaping governance approaches
Need to consider geopolitical tensions and fragmentation of internet governance
Unexpected Consensus
Collaboration over competition in AI governance
Lufuno T Tshikalange
Jenny Domino
Need for collaboration rather than competition in governance approaches
Need to consider geopolitical tensions and fragmentation of internet governance
Despite representing different regions, both speakers emphasize the importance of collaboration over competition in AI governance, which is somewhat unexpected given the often competitive nature of international relations and technology development.
Overall Assessment
Summary
The main areas of agreement include the need for a human rights-centered approach in AI governance, addressing infrastructure and skills challenges in developing countries, and recognizing the impact of geopolitics on governance approaches.
Consensus level
There is a moderate level of consensus among the speakers on key issues, particularly on the importance of human rights and the challenges faced by developing countries. This consensus suggests a growing recognition of the need for inclusive and rights-based approaches to AI governance globally, which could potentially influence future policy discussions and international cooperation in this field.
Differences
Different Viewpoints
Approach to AI regulation
Jenny Domino
Gianclaudio Malgieri
Asia taking a “wait-and-see” approach with soft law and draft initiatives
EU AI Act as first comprehensive AI regulation, balancing risks and rights
Jenny Domino describes Asia’s cautious approach with soft law, while Gianclaudio Malgieri highlights the EU’s comprehensive regulation through the AI Act.
Focus on risks vs. rights in AI governance
Jenny Domino
Gianclaudio Malgieri
Need for human-centric approach centered on human rights
EU AI Act as first comprehensive AI regulation, balancing risks and rights
Jenny Domino advocates for a human rights-centered approach, while Gianclaudio describes the EU AI Act’s attempt to balance risks and rights.
Unexpected Differences
Perspective on global collaboration in AI governance
Jenny Domino
Lufuno T Tshikalange
Need to consider geopolitical tensions and fragmentation of internet governance
Need for collaboration rather than competition in governance approaches
While both speakers advocate for collaboration, Jenny unexpectedly emphasizes the need to consider geopolitical tensions, whereas Lufuno focuses more on the benefits of collaboration without explicitly addressing these tensions.
Overall Assessment
summary
The main areas of disagreement revolve around the approach to AI regulation (comprehensive vs. wait-and-see), the focus on risks vs. rights, and the perspective on global collaboration in AI governance.
difference_level
The level of disagreement among the speakers is moderate. While there are clear differences in approaches and focus areas, there is also a shared recognition of the challenges facing AI governance globally. These differences reflect the complexity of developing a unified approach to AI governance across diverse regions and highlight the need for continued dialogue and collaboration to address global challenges while respecting regional contexts.
Partial Agreements
All speakers agree on the need to address fundamental challenges in AI governance, but they focus on different aspects: Jenny on existing human rights issues, Lufuno on infrastructure and skills gaps, and Gianclaudio Malgieri on the difficulty of quantifying risks to rights.
Jenny Domino
Lufuno T Tshikalange
Gianclaudio Malgieri
Existing human rights issues and digital divides need to be addressed
Lack of infrastructure and skills in developing countries
Difficulty in measuring and quantifying risks to fundamental rights
Takeaways
Key Takeaways
Different regions are taking varied approaches to AI governance, from comprehensive regulation (EU) to wait-and-see approaches (Asia)
There is a need to center human rights in AI governance frameworks rather than focusing solely on risk-based approaches
Developing countries face significant challenges in AI governance due to lack of infrastructure, skills, and leverage over tech companies
Geopolitics and differing political standpoints are shaping approaches to AI governance globally
Collaboration rather than competition is seen as key for effective global AI governance
Resolutions and Action Items
None identified
Unresolved Issues
How to achieve a universal approach to AI governance while respecting regional differences
How to effectively regulate AI in developing countries with limited leverage over tech companies
How to balance innovation with protection of fundamental rights in AI governance
How to measure and quantify risks to fundamental rights in AI systems
Suggested Compromises
Using human rights as a universal framework for AI governance that can be understood across different regions and political contexts
Focusing on extending existing laws (e.g. data protection, criminal law) to cover AI issues rather than creating entirely new AI-specific regulations in developing countries
Collaborating at regional levels (e.g. African Union) to increase leverage in regulating multinational tech companies
Thought Provoking Comments
I think this risk-based approach, Gianclaudio, that you were talking about, risk to what? So because I think in the EU-AI Act, as you have said, it started as a product liability framework and then later on we infused it with fundamental rights left, right and center, or we tried at least. But so the EU-AI Act looks at risk to what and who is determining, or what was the discussion around these different risk categories? How was that assessed?
speaker
Audience member
reason
This question challenges the fundamental approach of risk-based AI regulation and prompts deeper consideration of how risks are defined and assessed.
impact
It led to an in-depth discussion on the challenges of measuring risks to fundamental rights and the potential limitations of a risk-based approach to AI governance.
I see the discussion here as sort of mirroring the history of the UN guiding principles on business and human rights. That’s the whole reason why the UNGPs were formed because many multinational companies decades ago were operating in developing countries. And what do you do when the human rights violators sometimes are the government actors, right?
speaker
Jenny Domino
reason
This comment draws an insightful parallel between AI governance and existing frameworks for business and human rights, highlighting the complex dynamics between corporations, governments, and human rights.
impact
It broadened the discussion to consider the roles and responsibilities of different actors in AI governance, particularly in the context of developing countries.
The AI Act follows that approach. So, for example, if you commercialize in Europe, but you have produced somewhere else, but you commercialize in Europe, then you have to follow the rules of the AI Act. The GDPR went even broader because for the GDPR, even if you just monitor behaviors of people in European Union, or if you offer services even for free to people who are in the European Union, the GDPR was applicable.
speaker
Gianclaudio Malgieri
reason
This explanation clarifies the extraterritorial scope of EU regulations and their potential global impact, which is crucial for understanding the implications of EU AI governance on other regions.
impact
It sparked a discussion on the potential for a ‘Brussels effect’ in AI regulation and the challenges of exporting EU-style regulations to other contexts.
I believe that from the global South perspective, just to remind you, one minute. Yes. Wrap up in one minute. From the Global South perspective, I believe that we do have enough skills and case studies that we can collaboratively use to combine our resources and come out of the consumer status that we have been for so long. We need to work towards coming into the table not as a subject matter of discussion but as a contributor.
speaker
Lufuno T Tshikalange
reason
This comment challenges the narrative of the Global South as merely consumers of AI technology and regulation, asserting their potential to contribute meaningfully to global AI governance discussions.
impact
It shifted the conversation towards considering how the Global South can actively shape AI governance rather than simply adopting frameworks from other regions.
Overall Assessment
These key comments shaped the discussion by highlighting the complexities of AI governance across different regions and regulatory frameworks. They prompted deeper consideration of how risks and rights are balanced in AI regulation, the challenges of applying regulations across borders, and the need for inclusive global dialogue that recognizes the potential contributions of the Global South. The discussion evolved from a focus on specific regulatory approaches to a broader consideration of how to achieve a more universal and equitable approach to AI governance that respects human rights and addresses the needs of diverse stakeholders.
Follow-up Questions
What would constitute sufficient remedy for adverse human rights impacts caused by AI systems?
speaker
Jenny Domino
explanation
This is important to determine how to hold companies accountable and provide appropriate compensation to affected communities.
How can developing countries be involved in the training and labeling of AI data in a way that ensures fair labor practices?
speaker
Jenny Domino
explanation
This is crucial to address potential exploitation of workers in developing countries and ensure ethical AI development practices.
How can we ensure quality provisions for AI services across all jurisdictions in line with human rights protocols?
speaker
Audience member (via Advocate Nusipro)
explanation
This is important to maintain consistent standards and protections for users of AI systems globally.
How can countries in the Global South develop the necessary infrastructure and computing power to train their own AI models and achieve digital sovereignty?
speaker
Audience member
explanation
This is crucial for ensuring countries are not dependent on AI models from other regions and can develop AI tailored to their specific needs and contexts.
What should be the baseline or minimum standards expected in AI governance frameworks across different regions?
speaker
Melody Musoni
explanation
This is important for establishing a common ground in global AI governance while allowing for regional variations.
What are the advantage points that the Global South needs to capitalize on and bring to the conversation on AI governance?
speaker
Audience member
explanation
This is crucial for ensuring the Global South has a meaningful voice in shaping global AI governance and addressing their specific needs and concerns.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
WS #211 Disability & Data Protection for Digital Inclusion
WS #211 Disability & Data Protection for Digital Inclusion
Session at a Glance
Summary
This discussion focused on digital inclusion and data protection for persons with disabilities in the context of internet governance. Speakers highlighted the importance of involving persons with disabilities in policy-making and technology development processes. They emphasized the need for a more nuanced understanding of disability that goes beyond medical definitions and considers social barriers.
The conversation addressed several key issues, including the challenges of data collection on disability, the importance of digital accessibility, and the risks posed by automated decision-making systems. Speakers criticized the tendency to infantilize persons with disabilities in data protection laws and stressed the need for more agency and autonomy in digital spaces.
The discussion also explored the potential of local digital inclusion initiatives and the role of assistive technologies. However, concerns were raised about the unchecked optimism surrounding AI-powered assistive technologies and the need for greater accountability in their development and deployment.
Speakers highlighted the lack of representation of persons with disabilities in AI fairness conversations and governance policies. They called for more inclusive approaches to technology design and deployment that consider the diverse needs of persons with disabilities.
The importance of accessible education platforms and inclusive pedagogies was emphasized, with UNESCO’s work on guidelines for ethical AI development in education being highlighted. The discussion concluded with a call for more comprehensive definitions of disability in policy frameworks while also addressing concerns about potential misuse of disability classifications.
Keypoints
Major discussion points:
– The need for more inclusive data protection and privacy frameworks that consider the diverse needs of persons with disabilities
– Challenges in gathering accurate data on disability while maintaining privacy and autonomy
– The importance of involving persons with disabilities directly in technology development and policymaking
– Concerns about AI and automated systems potentially discriminating against or excluding persons with disabilities
– The need for a social model approach to disability rather than a purely medical one
Overall purpose:
The discussion aimed to explore how to make the internet and digital technologies more inclusive for persons with disabilities, particularly in the context of data protection and privacy regulations. It sought to identify gaps in current approaches and suggest ways to center disability perspectives in internet governance.
Tone:
The tone was largely collaborative and solution-oriented, with speakers building on each other’s points. There was a sense of urgency about addressing exclusion and discrimination. The tone became slightly more critical when discussing shortcomings of current policies and AI systems, but remained constructive overall. Speakers emphasized the importance of moving beyond surface-level inclusion to more fundamental changes in approach.
Speakers
– Fawaz Shaheen: Moderator
– Tithi Neogi: Analyst at the Center for Communication Governance
– Angelina Dash: Project officer at CCG, NLU Delhi
– Eleni Boursinou: Consultant with UNESCO’s communication and information sector
– Osama Manzar: Director of the Digital Empowerment Foundation
– Maitreya Shah: Tech policy fellow at UC Berkeley, affiliate at the Berkman Klein Center for Internet and Society, disabled lawyer and researcher from India
Additional speakers:
– Dr. Mohammad Shabbir: Coordinator of Internet Governance Forum’s Dynamic Coalition on Accessibility and Disability, from Pakistan
Full session report
Digital Inclusion and Data Protection for Persons with Disabilities: An IGF Session Summary
This summary provides an overview of an Internet Governance Forum (IGF) session on digital inclusion and data protection for persons with disabilities. The discussion brought together experts from various fields to address key challenges and propose solutions for creating a more inclusive digital landscape.
Introduction
The session began with an acknowledgment of the collaborative document for best practices being compiled throughout the IGF. The moderator, Fawaz Shaheen, emphasized the workshop’s policy of using people-first language and noted the accessibility considerations for the session, including real-time captioning and sign language interpretation.
Digital Accessibility and Inclusion
The discussion emphasized the critical importance of making digital technologies and services accessible and inclusive for persons with disabilities. Tithi Neogi, from the Centre for Communication Governance in India, highlighted the need for accessible consent mechanisms and digital services. Osama Manzar, founder of the Digital Empowerment Foundation, stressed the importance of involving persons with disabilities in technology development and service provision, stating, “Persons with disabilities must be part of the ecosystem in all aspects of research, data collection, and implementation.”
Eleni Boursinou, from UNESCO, addressed challenges in online education accessibility for learners with disabilities. She shared an example from Rwanda where including teachers with disabilities in digital skills development initiatives proved highly effective.
Maitreya Shah, a lawyer and researcher from India, pointed out the lack of representation of persons with disabilities in AI and technology conversations, highlighting a significant gap in current approaches to digital inclusion.
Data Protection and Privacy
The conversation revealed significant concerns about data protection and privacy for persons with disabilities. Angelina Dash, from the Centre for Communication Governance in India, criticized the tendency to treat persons with disabilities like children in data protection laws, arguing for the need to recognize their agency and autonomy. She advocated for the reintroduction of sensitive personal data as a category in India’s data protection law to provide additional safeguards for vulnerable data.
Maitreya Shah raised concerns about privacy risks associated with AI-powered assistive technologies and data collection practices. This point was further emphasized by audience members who expressed worries about private data of persons with disabilities being used to train AI systems without proper consent or oversight.
AI and Automated Decision-Making Systems
The discussion highlighted the potential risks and challenges posed by AI and automated decision-making systems for persons with disabilities. Maitreya Shah pointed out the dangers of bias and discrimination against persons with disabilities in AI systems, noting that his research at Harvard revealed how AI fairness metrics and governance policies often explicitly exclude disability from their scope.
Eleni Boursinou discussed UNESCO’s work on guidelines for ethical AI development in education, emphasizing the importance of considering the needs of persons with disabilities in these frameworks. The speakers agreed on the critical need for disability representation in AI fairness and governance conversations to ensure that these technologies do not perpetuate or exacerbate existing inequalities.
Disability Data and Definitions
A significant portion of the discussion focused on the challenges and limitations of current approaches to disability data collection and definition. Maitreya Shah criticized the medical or impairment-based approaches to disability data collection, arguing for a shift towards a social model of disability aligned with the UN Convention on the Rights of Persons with Disabilities (UNCRPD).
This perspective sparked a debate about the need for more inclusive and comprehensive definitions of disability in policy frameworks. However, audience members raised concerns about potential misuse or impersonation if definitions became too broad, highlighting the complex balance required in this area.
Conclusion and Future Directions
The discussion highlighted the complex challenges in achieving digital inclusion and data protection for persons with disabilities. It emphasized the need for more inclusive approaches to technology design, policy-making, and data collection that center the perspectives and needs of persons with disabilities.
Dr. Mohammad Shabbir, an audience member, contributed to the discussion by emphasizing the importance of considering the diversity of disabilities and the need for tailored approaches in technology development.
The session concluded with a reminder about the collaborative document for best practices and an invitation for participants to contribute their insights. This document aims to compile practical strategies for improving digital inclusion and data protection for persons with disabilities, serving as a valuable resource for future policy and technology development efforts.
Session Transcript
Fawaz Shaheen: Yes, I think it’s working now. Thank you so much. We’ll just start our session now. Welcome to all our on-site and our online participants. For our on-site participants, it is channel number 2. You can get in on channel number 2. And we also have a small, I mean, if you, as the session progresses at some point, you can come and check the workshop policy. And there’s a shared document that we can all be working on. We’ll talk more about it as the session goes, but at the front of the desk is my colleague, Nidhi. She has those QR codes. If anyone wants to scan and get those documents, you can do that. Now, before we start, I would just like to do some housekeeping and check if our online participants are able to speak and come on. So Maitreya, can you unmute? Or Angelina, can you unmute them one by one and see if they’re able to speak, if you’re able to hear them? Sure, I’ll just. Hello, hello. Angelina, you’re audible. You’re audible, thanks. Yes, morning, morning. You’re audible, thank you. Yeah, yeah. Hi, am I audible? Yes, yes, you’re audible. Thank you so much. I hope everyone’s able to hear them. Now, as I said, this is a session where we would like to talk about some of the most invisibilized conversations, some of the conversations that we often miss out on. This is an opportunity to discuss how to make the internet more inclusive in a broader sense, but also more particularly as we all move towards establishing data protection regimes, establishing privacy regimes, including those of us in India. This is a chance to have a conversation about how data protection and privacy regimes can make the internet more inclusive instead of more exclusive. That’s the basic idea, the basic sense with which we’ve started.
For our session today, we have a workshop policy to make sure that people are able to access us, people who are joining us both online and on site with diverse abilities, with different kinds of disabilities, they’re able to experience this session as well as the rest of us. So just a couple of pointers. We are requesting all of our speakers to briefly describe their physical attributes in their own words. For instance, I’m Fawaz, I’m a bearded man, kind of big, and I’m wearing a gray jacket today, I have short hair. Just something brief like that, whenever you’re speaking, it’ll be helpful to bring everybody in to make it a little bit more of an inclusive conversation. So thank you in advance for that. And apart from that, we have, if you want a detailed look at the workshop policy, although I’m sure there’s nothing very difficult, there’s nothing extraordinary that we’re asking. It’s just basic stuff, being respectful, being inclusive, bringing everyone into the conversation. We do have the workshop policy; the QR code is with Nidhi. You can scan that, take a look. You can also use it for your own sessions, customize it, give us feedback. We also have another document during the Q&A session. We’ll show you that document. It is a shared Google document in which we’d like to build some code of best practices for an inclusive internet. So that’s something we’ll invite all of you to participate in. For the online participants, my colleagues, Angelina and Tithi will be sharing them in the chat. For the onsite participants, my colleague Nidhi has a QR code that you can scan and you can get access to. Now, before we begin, before I introduce, this session has been organized and conceptualized by my colleagues at the Centre for Communication Governance, Angelina and Tithi. You can see them both on the screen right now.
And they’ve also recently authored a policy brief on disability and data protection, which looks particularly at the new law in India, and it looks at its interaction with the Persons with Disabilities Act. And it tries to give a sense of where we are standing and an overall position. So before we begin, I would like to request Angelina, Tithi, just take 10 minutes, walk us through your brief and lay out what you envision for this session. And after that, we have an excellent panel of speakers. I’ll be introducing them one by one and we’ll take this forward. Thank you.
Tithi Neogi: Thank you for that introduction. Hi everyone, I’m Tithi Neogi. I’m an analyst at the Centre for Communication Governance. My pronouns are she and her. I wear glasses, I have wavy hair and I’m wearing a red hoodie today. Over to you, Angelina, you can introduce yourself.
Angelina Dash: Hi everyone. My name is Angelina Dash and I’m a project officer at CCG, NLU Delhi. My pronouns are she and her and today I’m wearing a red jacket and I have wavy hair.
Tithi Neogi: So today’s session is based on the policy brief on data protection for persons with disabilities in India that, like Fawaz mentioned, Angelina and I have co-authored and have been working on for a while. So while India has indeed taken a step forward towards inclusion of persons with disabilities in its data protection framework online, we have identified some gaps in its nuances and we have tried to plug in some loopholes to sort of further advance the digital inclusion of persons with disabilities. So some common themes that we have identified in this data protection framework, specifically for persons with disabilities, are the themes of digital access and inclusion, data autonomy and data protection. So I’ll start off with something on what we have found on digital accessibility, digital access and inclusion. Specifically in our research, we have identified that digital accessibility is a precursor to persons with disabilities giving meaningful consent on the internet. While the disability rights framework in India guarantees persons with disabilities the right to accessible information and communication technologies, the data protection law does not really mandate the data fiduciaries to operationalize or implement consent mechanisms that have accessible interfaces and can be used easily by persons with disabilities. Also, another thing that we have identified through our research is that the data protection law allows guardians of persons with disabilities to give consent on their behalf. So that reduces or takes away the onus of giving consent from the person with disability and shifts that to the guardian of the person with disability. And this, in turn, takes away the incentive that data fiduciaries might have had to make their consent mechanisms more accessible to persons with disabilities, because now it’s not the persons with disabilities who are giving consent directly on these consent mechanisms, but it’s their guardians. 
So based on our findings on digital accessibility, we recommend making notice and consent mechanisms compliant with some accessibility standards and compatible with assistive technologies, use of audiovisual formats in these consent mechanisms, electronic braille, et cetera. I’ll hand over to Angelina to continue this discussion on digital accessibility. Over to you.
Angelina Dash: Thanks, Tithi. So I think another concern that we have in terms of access and inclusion is in the context of limited access to digital services without the consent of their lawful guardian. So under the data protection law in India, persons with disabilities require the consent of their lawful guardian in order to access digital services on the internet. This is concerning because of two scenarios that may arise. What do these scenarios look like? In the first case, we have persons with disabilities who cannot access digital services where the support and consent of the lawful guardian may not be required at all. For instance, maybe something like a digital encyclopedia. In the second case, we have persons with disabilities who are hindered from accessing digital services which provide vital disability resources like perhaps online communities for persons with disabilities. Access may also be curbed in cases where a conflict of interest may arise between the lawful guardian and the person with disability. This could include access to digital helplines for physical and sexual abuse, among other digital services. So our recommendation in this context is that persons with disabilities must be allowed to access digital services on the internet without the consent of their lawful guardian in certain cases. Over to you, Tithi.
Tithi Neogi: Thanks, Angelina. I’ll now speak about the second theme that we have identified, and that is data autonomy. So in India, indirect consent through another person is collected from two kinds of data principals, i.e. children and persons with disabilities. So in the Indian data protection law, children and persons with disabilities have been clubbed together and subjected to similar treatment while giving consent, i.e. their parent, in the case of children, or their lawful guardian, in the case of persons with disabilities, gives consent on their behalf. This treatment leads to infantilization of persons with disabilities on the internet, since their guardians are now being elevated to the status of parents, which is not the case, because if we look at the disability rights framework in India, a lawful guardian for a person with disabilities is envisioned to be somebody who aids them or assists them and helps them with the decision-making process in a mutually consultative framework. So in the data protection law that India has right now, lawful guardians of persons with disabilities have the ability to give consent on their behalf completely, without accounting for any mutual decision-making. So in our research, we have recommended decoupling persons with disabilities from children and introducing a separate provision in the data protection law that defines persons with disabilities. This definition for persons with disabilities could take into account temporary disabilities and the various degrees of support that a person with disability might need. We have also suggested that any consent collection processes online be informed by a consultative framework between persons with disabilities and their guardians, which is driven by principles of mutual decision-making. I’ll now hand over to Angelina to speak about the third common theme that we have identified. Over to you.
Angelina Dash: Thanks, Tithi. So I think another gap, as you mentioned previously as well, that we identified was in the context of data protection, specifically with regard to the absence of sensitive personal data. So the data protection law that was introduced in India last year came after a long consultative process, and previous iterations of this law had carved out sensitive personal data, or SPD. This was not included in the final law. Now, firstly, what is SPD? SPD was a distinct category of personal data warranting additional safeguards. And what did these safeguards look like? These include specific grounds for processing, including grounds like explicit consent or specific purposes of state action. Now, it’s important to question at this stage, why does the need for a separate category of SPD arise in the first place? So there are some concerns which scholars have been raising regarding sensitivity of data being used as a basis for categorization. However, we feel that India’s data protection framework is currently at a very nascent stage. And additionally, certain data of persons with disabilities can be more vulnerable, like health data or financial data, and this data is more susceptible to misuse for the purpose of discriminating against persons with disabilities, particularly in terms of employment, health care and social welfare. Therefore, our recommendation in our policy brief is that sensitive personal data should be reintroduced as a category within India’s data protection law. And with this, we’d now like to talk about moving from the policy brief to today’s IGF session, where we’ll be continuing the discourse on centering disability in internet governance more broadly. So through our insights from working on the policy brief and lived experiences from stakeholders who are persons with disabilities, we have gained an understanding of how there are certain gaps in internet governance discourse with regard to disability. 
And I’d like to hand over to Tithi at this point, and she’ll elaborate a bit more on the gaps we identified. Over to you, Tithi.
Tithi Neogi: Thank you, Angelina. So these are some of the loopholes that we have discovered in the discourse on data governance globally as well as in India, and we would really like the participants today to share their experiences and what they feel about this. So the first kind of loophole that we saw was this explicit categorization of persons with disabilities as data principals in the Indian data protection law. This happens to be an anomaly of sorts, because we haven’t really come across this distinct categorization of persons with disabilities as data principals in any other major statute which has been enacted. So this is something that we have been discussing: whether this separate categorization of persons with disabilities was a good measure, whether it is going to benefit persons with disabilities, whether this is the right approach or the wrong approach. We are aware that the GDPR refers to certain vulnerable classes of data principals but does not mention persons with disabilities exclusively. We would like to take this opportunity to get insights from the participants as to whether they feel that the GDPR approach is the way to go, or whether the Indian approach of specifying a certain category of data principals as persons with disabilities, and having specific measures with respect to consent collection from them, is the approach to follow. We are also aware of discussions on the Brussels effect on data governance, and whether GDPR would be a good influence, or whether the novel approach that the Indian law is taking is the way forward. We would really like to hear some discussions on this from our speakers as well as from the participants. I will now hand over to Angelina to discuss some other loopholes. Over to you, Angelina.
Angelina Dash: Thanks, Tithi. I think another loophole that we identified was the lack of a global south or global majority perspective in discourse regarding disability in internet governance. Persons with disabilities are not a homogenous group. Members of this community, especially those from global south or global majority jurisdictions like India, can often face unique and diverse challenges in terms of internet accessibility and data autonomy. This may arise from intersecting marginalization in terms of gender, caste, poverty and illiteracy, as well as broader infrastructural concerns and the digital divide. Internet governance discourse currently does not adequately account for these challenges, and that’s where we come in, and we aim to highlight some of these issues through the discussions in the workshop today. And with this context and background that we’ve just provided and the gaps that we have identified, we intend to build upon the work in our policy brief and sort of extend the conversation to also address disability and data protection in the context of AI and automated decision-making systems. We hope to use this session as a forum to facilitate multi-stakeholder conversations and collaboration. This will enable the co-creation of best practices towards digital inclusion for persons with disabilities. With this, I would like to conclude our presentation and I now open the floor for any questions. Fawaz will be assisting us in this Q&A round.
Fawaz Shaheen: Thank you. Sorry, I think we’re doing a slight change in the format because we’re running a little short of time. So, what we’ll do is we’ll take all questions. We’ll have two rounds of questions. So, the presentation questions also we’ll take after the first round. We just had to change it on the fly, sorry. But now, without further ado, let’s move towards the conversation. Thank you for starting the conversation. We’ll now move towards the discussion. And first off, I would like to invite our on-site speaker, Dr. Osama Manzar, to please join me on the stage. We also have two excellent speakers joining us online. We have Eleni Boursinou, who’s a consultant with UNESCO. And we also have Maitreya Shah, a lawyer and researcher, currently a tech policy fellow at UC Berkeley. I’ll be introducing all three of our speakers in more detail as I ask them questions. But also, just to remind all of you, the discussion, we’d like to have it as conversational as possible, as open-ended as possible. So, we’ll be doing a first round of questions. We’ll have four to five minutes each for each of the speakers. Then we’ll open it for questions. I encourage all our on-site participants to please ask all our speakers, as well as Tithi and Angelina, about this topic, about this issue. And also, all our online participants, please feel free to put your questions in the chat. Tithi, who’s our moderator online, will be taking those questions and relaying them to us for our speakers. So, after the first round of speakers and questions, we’ll have another round, and we’ll end with a round of interventions. So we have about one hour left. 60 minutes is a good time to do this, I feel. And now to begin the conversation, I think I’ll first invite Eleni, who’s joining us. Eleni Boursinou, who’s a consultant who works with UNESCO’s communication and information sector, especially on universal access to information. And Eleni, if you could unmute yourself. 
We would like to begin this conversation by asking you, what role do you think digital accessibility plays in furthering sustainable development goals? Especially working from your own experience, what would you describe as the role of digital accessibility and inclusive design in enhancing digital autonomy? Not just access, but also digital autonomy for persons with disabilities. But Eleni, over to you. And thank you for joining us. Thank you. Thank you for having me and for the invitation. I’m very honored to be in this panel today.
Eleni Boursinou: My name is Eleni. My pronouns are she, her. Today, I’m wearing a very Indian shirt, and I have wavy hair. You can, I mean, I don’t know if my camera can be switched on. But if you want, you can switch on. I think you can switch on the camera. Tithi, can you just check? So thank you very, very much for this, and for the very meaningful question. So, digital accessibility and the SDGs: it plays a very critical role by fostering inclusion and equity, particularly for what you call persons with disabilities, but in general, any marginalized groups. So by removing usability barriers, it bridges the digital divide, enabling participation in the digital economy and what we call in the UNESCO communication and information sector, the knowledge societies. So this supports the IGF and the global digital compact agenda of leaving no one behind and aligns with the UN Convention on the Rights of Persons with Disabilities, promoting social justice and equity. On the other hand, there are also accessible tools such as OER and UDL that play a significant role in empowering education, that is SDG 4, reducing dropout rates from school and enhancing educational outcomes and promoting lifelong learning. Additionally, accessible digital solutions can address challenges related to gender and disability, that is SDG 5, and can empower women and girls with disabilities to access education, employment and leadership opportunities. And finally, for SDG 17, that is international collaboration, accessibility standards can provide open solutions that contribute to global cooperation, fostering access to information and promoting partnerships. So in the context of data autonomy, digital accessibility and inclusive design play a huge role in enabling individuals with disabilities to engage with and control their data. 
Accessible data systems and platforms allow users to interpret and manage data independently, ensuring that everyone, regardless of ability, can participate in data-driven decision-making. What we call in UNESCO open solutions, that is, solutions that are cost-effective with open licensing, can be free and open source software or open educational content, and OER platforms provide resources that empower individuals to control their data and engage in lifelong learning. Accessible digital formats and assistive technologies enhance understanding and trust in data-driven systems, while universal and inclusive design mitigate biases in automated decision-making, ensuring fairness and safeguarding marginalized communities from discrimination. So the key takeaway is that embedding digital accessibility and UDL principles in policies and practices can ensure equitable participation in the digital economy and knowledge society. And first, by improving data collection and analysis, AI can support more inclusive and equitable decision-making, ensuring that marginalized communities, including those with disabilities, are considered in policies aimed at achieving the SDGs. And second, a human-centric approach to AI and digital tools is essential for ensuring that the benefits of AI are distributed equitably and contribute to the sustainable development goals, including promoting data autonomy and accessibility. So that’s all from me for now, but I’ll be happy to answer any questions you might have.
Fawaz Shaheen: Thank you, Eleni. We’ll come back to you. And now I’d like to go to our on-site speaker, Osama Manzar. Osama, as we know, is director of the Digital Empowerment Foundation, and he works with a large community network of digital fellows who are not just working on improving access to the internet, but who also have a mandate effectively to train people, to work with people, to make the internet a safer and more inclusive space. And today, particularly, we are very happy to have Osama with us, because recently, DEF has also come out with a report that is looking at ICT for empowerment, inclusivity, and access. And it’s a report in which they’ve spoken to more than 250 persons with disabilities and mapped the various challenges, also the opportunities. And so I would like to invite Osama to share some of those things. But also, to start off, I would like you to talk a little bit about doing this kind of work when you look at the challenges that are associated with gathering data on disability, talking to persons with disability, while also maintaining autonomy, maintaining anonymity where it’s required, doing that and balancing it with the need for having good data to work on disability. This becomes an even more urgent question when it comes to issues of census. One thing that your report is talking about is the need for a new disability census in India. So some of those things, I know it’s a very broad question, but I’d like to start with that and we can move on from there.
Osama Manzar: Yeah, thank you, Fawaz. I will focus my discussion more on the ground realities and also with the entire community of PWDs or whoever we call it. We have only four or five minutes, so I’ll say that there are three things very important. One is that we treat our people with disability as subjects, you know, as, or I don’t want to say object, I would have said object, but more like somebody about whom we need to do something where there is no role play from themselves, right? Yeah, so exactly. Like we can, you know, exclude it and all that. So that’s the behavior of the doers, whether it’s corporate sector, government sector, or able people, all philanthropists, anybody. You know, everybody thinks that there’s something that needs to be done. It has been done for so long that if you talk to the disabled people or people with disability, they feel like somebody will do something someday, you know, and we are just waiting, you know? So the whole ability to come out, to assert themselves, demand, or, you know, ask for accountability is almost negligible. You know, almost negligible. I would say that they are not treated better than any other poor people in remote areas, or the people who are caste-wise or class-wise treated absolutely downtrodden. So I’m coming also from the perspective of large scale. We have about, what, 50 million population? Even more, about 5% of the population of India is people with disability. It’s as equal as the indigenous community. They are also about 5%. You know, that’s a huge, huge number in India. With that one, and then when we started seeing that in the last 20 years, digital development was becoming more like an enabler, and an automated enabler for people with disability. And let me also say, disability also, see, like, now, if I don’t have access to information, I am more able with a digital access device, right? If I don’t want to talk to people, then I have something to talk online, and there is this. 
So it enables me, my confidence, my requirement. and so on and so forth. At Digital Empowerment Foundation, we realized this by going on the ground, that even though when we were going on the ground to provide digital access or infrastructure, we were seeing that disability was almost like invisible. You know, why were they invisible? They were not coming forward. We are not including them. We are not even thinking about them. All the government entitlements that we ought to deliver for them, they are not even coming to take it. We don’t know how to talk to them. And then we also realized there is a lot of dogmatic, you know, traditional look down upon kind of behavior in the community. We look at them with a lot of distance, you know. We don’t want to be in close conversation with them or touch them or feel them. They are considered as curse on the society, you know. That’s the last thing that one can imagine if you want to do a multi-stakeholder way of growing things, you know. They are not even part of the stakeholdership. So then we, I’ll just, next one minute I’ll, what we did is that we started talking to them and including them in doing, rather than talking about what’s your problem and what’s my problem and all that. We thought that, okay, this person is doing connectivity. You can also do connectivity. If this person is running a access point or a public access point, you can also run an access point. If this person is accessing computer and finding a payment for somebody, you can also do that. So everything that we thought anybody can, like anybody, if disabled person can do, we started having, working with them. So they become part of the working ecosystem and suddenly they became social entrepreneur or entrepreneur or a provider. So earlier their life was seeker. Now they are more like provider. And we just created not one, but 300, for example, now 500 people who run digital centers, digital access points in village level. 
and they provide service to all the able people, actually. And then what we learned from them is that now you can talk about disability, you can talk about their miseries, you can talk about their requirements, and so on and so forth. And that actually brought us at par, that now we are on an equal footing to discuss, right? And then we thought, can we replicate this model? So what I am now coming to is the last part, how if you, now they are able people for providing service, for talking, for infrastructure development, for digital access, everything, right? They also have more knowledge than others because they also know extra about the special ability to serve the disabled people. And then we did this entire research and found out that, oh my God, government has not done the census for people with disability for so long. Why don’t we have that? Now we are asking them, you demand it rather than we demand for you. Because we are also going to demand, we are going to demand the census for the whole country, right? Which is also not done. But can we have a special one only for that? Yeah, it’s about 15, 20 years actually. Two decades. Yeah, yeah. One decade is lost and another decade has not come. So how can they want? The second thing, they started telling that all the government facilities are actually not implemented on the ground. Like government says that ICT enablement for access of people with disability, the whole digital centers at a public access point are not even enabled for that. It’s not wheelchair friendly. The point is that actually digital inclusion in our country and many other countries makes even the normal people disabled, rather than the disabled people. They can’t type, they can’t go to the place. You have to do extra. You have to become absolutely audiovisual. And those kinds of things have started coming in between. Third is that being people with disability, they have more network on data about their own community, right? 
So they can become a secondary source of information about the spatial lag or spatial abilities of the people with disability. And that we must take advantage of this is one of the recommendations that we have made. And the last part is that data in any case for anybody is very important, right? About the protection. Why a more protection-oriented approach is required for people with disability is because of the extraordinary deprivation in the way of looking at them. So therefore you have to be more protective about them. You have to, you know, make sure that their participation, their sense is required. And the last part is, do not treat people with disability with, you know, let’s say mental disability, you know, like our researchers clearly said that government has created a legislation where you are infantilizing their ability. I mean, they are very much able, but why should somebody else take a decision on their behalf, you know? Just because they cannot walk or they cannot do by hand? No, that’s not fair. You know, from that point of view, the whole illiterate population in our country, about, you know, 40% of the population who cannot access the internet, is disabled for accessing the internet, just because they are not digitally literate. I mean, how can you do that? So that’s very, very important. But my last, you know, point in this one is that we must do conversation, action, intervention with people with disability in everything, in everything. Whether you are doing research or data collection or doing something on work, they must be part of the ecosystem. Then only it becomes complete, you know, rather than we say that only conversation needs to change, only something needs to change. It’s something like we say, don’t do a manel, always do a panel, which must have a woman. Similarly, can we do that? 
Can there always be a panel with at least one person with disability? Can you always have your discussion with one person with disability being part of it, so that we normalize the participation of disabled people in normal conversation? That is what I would like to say.
Fawaz Shaheen: No, I think that's an excellent point. And it's important also to highlight how even sometimes very well-intentioned interventions by civil society, by human rights actors, can also be very infantilizing, very patronizing. I think that's the point you're making, and the same point that Titi and Angelina were also making. And on that note, I would like to move to our next speaker, Maitreya Shah. Maitreya is currently a tech policy fellow at UC Berkeley and an affiliate at the Berkman Klein Center for Internet and Society. He's a disabled lawyer and researcher from India, and he has unique insight into the challenges that persons with disabilities face with regard to digital access and digital protection. I would also encourage you to go and read some of his writings. You can find a lot of them on his page at the Berkman Klein Center, as well as on the UC Berkeley website. Maitreya, I would like to thank you for joining us. I'd like to start by asking you: considering the increasing digitization of essential services, what are the gaps in legislation in India regarding the rights of persons with disabilities in the context of data protection? And how can the policies we adopt encourage user-centric approaches in the development of technology that is accessible to persons with disabilities? Again, a broad question, but we would like you to come in on that and take us through some of these perspectives with your insight. Maitreya, if you're able to introduce yourself. Can you hear me?
Maitreya Shah: Yeah. Hi. Yes, we can. Thank you so much for inviting me. My pronouns are he/him. I'm wearing a button-down shirt with a large pair of headphones. I just started my camera; hopefully you can see me. I am an Indian with curly hair. Yes, we can see you now. Thank you. So I think this is a great question, and I'm wondering what sort of legislation I can talk about. As I said, the issues with the current legislative frameworks in India have already been covered, so I think I'll speak about digital accessibility and data protection a little broadly in the Indian context. To start with, India has always had this issue of privacy for people with disabilities when it comes to digital technologies, or even other forms of emerging technologies. A lot of my recent work has been on how Aadhaar, the biometrics-based identity system of India, has not adequately considered the privacy and accessibility implications for people with disabilities. The iris scans and fingerprints that Aadhaar collects often exclude people with disabilities, because the algorithms and the infrastructure that they use, quote unquote, treat people with disabilities as outliers or as non-normative. So in India this issue has been long-standing. The Aadhaar project started way back in 2009, with implementation from 2012. And even as of today, people with disabilities are facing issues in enrolling with the technology and in authenticating their identity, especially for public services such as cash transfers or welfare programs. So this issue is long-standing.
But coming to the more legislative, more legal, more policy issues in this area: as Titi and Angelina rightly said, the data protection law of India treats people with disabilities on par with children. And I think there are several issues with this. To start with, the Rights of Persons with Disabilities Act, when it was enacted, in a way takes precedence over other legislation when it comes to disability matters, especially issues such as guardianship. My question is whether the data protection law even has the prerogative to design a consent framework where guardianship and other very complex socio-legal issues are involved. It is the disability law, I think, that has done this efficiently, and which is the right framework to address these issues. But the data protection law suddenly starts a duplication of these legislative provisions and adds new complexities for people with disabilities. And Titi and Angelina raised this very pertinent question as to whether the GDPR approach is correct or the Indian approach is correct. This is again a very complicated question, because although India borrowed heavily from GDPR for the larger legislation that we enacted, when it came to disability we did a lot of innovation when we brought in this provision. And probably, as I said, we don't need a separate consent mechanism or a separate provision for people with disabilities at all. We might be fine with the GDPR way of doing things, because the idea is to respect the agency of people with disabilities, not to develop new consent mechanisms. Instead, the focus should be on making your technologies privacy-preserving, making your technologies more accessible, and seeing that technologies do not unlawfully access the disability or health information of people with disabilities.
I think the focus is to shift from users to data fiduciaries, the corporations that are building technologies. In India, one of the biggest problems we have had through our legislation is that a lot of the onus is placed on people with disabilities. And with privacy, there has been a lot of scholarly research and a lot of criticism of these individual privacy frameworks: individual privacy might not be the best way forward, especially with AI and emerging technologies, where user agency is usually already curtailed. So I think we need to think about making technologies more privacy-preserving rather than putting additional complexities on people with disabilities, making them coordinate with their guardians and then work through consent to access even the very basic necessities, because digital access and internet access are now a necessity. And I can give you an example of how this is actually playing out on the ground. Recently, I was doing a training session for assistive technology manufacturers in India who are specifically building technologies for people with disabilities. A lot of them told me how the guardianship provision, this complex consent-based provision in the data protection law, is raising many issues for them, because it is not giving them adequate opportunity to provide their technologies to people with disabilities. They also wrote to the government saying that this provision needs an amendment, specifically when it comes to assistive technology. That is one very broad issue. The other broad issue is that there has been an inherent trade-off between accessibility and privacy. So at times people with disabilities are compelled to give away their privacy to access digital technologies.
To give you an example, earlier this year I wrote an article on companies that are developing automated tools, claiming that these tools can fix websites and make them accessible without changing their source code, and on how these practices are inherently deceptive. What such a tool essentially does is, in the garb of making websites accessible, violate user privacy, because these overlay tools, as they are called, can infer the disability status of individuals by collecting data on screen reader usage, the use of magnification devices, and so on. There has been some conversation about this outside India, especially in the United States. In India, this conversation is not happening. And why this is particularly problematic when it comes to privacy is that, whereas the adoption of these technologies is now in a way restricted in the United States, companies and even the government in India are increasingly adopting these problematic technologies. And to give you an example, you can go back and check this: the India
Maitreya Shah: AI portal managed by the Ministry of Electronics and Information Technology, quote unquote "the knowledge platform for India's AI economy", itself uses an accessibility overlay tool. So my point is that we need to think about this comprehensively: digital accessibility is not a tick-box solution, especially when it comes to privacy. And there are additional challenges due to AI and emerging technologies that are posing issues for people with disabilities. Thirdly, I'll very briefly touch on the fact that India's digital accessibility laws are still very nascent. We have not adequately made our websites, whether government or private, accessible. Our frameworks are also quite singular, in the sense that a lot of digital accessibility frameworks in India focus on accessibility for people who are blind but do not adequately cater to people who have other forms of disabilities. So there are challenges of accessibility and privacy even for people who have learning disabilities like dyslexia, who are autistic, or who have other intellectual disabilities, and our laws are still quite nascent on that front; we are not adequately thinking about a cross-disability approach when it comes to accessibility and privacy. So with India there are many issues at a larger policy level: people with disabilities are not adequately represented in the conversation; there is duplication and a lack of harmonization in our regulations; and I think we are not adequately regulating the larger sector when it comes to preserving the privacy of people with disabilities and ensuring their accessibility at the same time. I'm happy to talk more about this in the question and answer session.
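The overlay behaviour described here can be illustrated with a short sketch. All signal names below are hypothetical, not taken from any real overlay product; the point is only that a handful of individually innocuous browsing signals, combined, can be used to infer a user's disability status without consent.

```python
# Minimal sketch (hypothetical signal names) of how an "accessibility
# overlay" script could fingerprint assistive-technology use from
# ordinary browsing signals. Each signal alone is innocuous; combined,
# they can infer a user's likely disability status.

def infer_assistive_tech(signals: dict) -> list:
    indicators = []
    # Keyboard-dominant navigation is typical of screen reader users.
    if signals.get("key_nav_events", 0) > signals.get("mouse_moves", 0):
        indicators.append("screen-reader-like navigation")
    # Sustained high zoom suggests a magnification tool.
    if signals.get("zoom_level", 1.0) >= 2.0:
        indicators.append("magnification")
    # CSS media-query preferences can leak accessibility settings.
    if signals.get("prefers_reduced_motion", False):
        indicators.append("reduced-motion preference")
    return indicators

# A session with keyboard-only navigation, high zoom, and reduced motion:
session = {"key_nav_events": 120, "mouse_moves": 3,
           "zoom_level": 2.5, "prefers_reduced_motion": True}
print(infer_assistive_tech(session))
```

In a browser, these signals are observable from ordinary event listeners and media queries, which is why the practice is hard for users to detect or refuse.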
Fawaz Shaheen: Thank you, Maitreya. I think that's a good point at which to pause and take our first round of questions and observations. I know some of the conversation has been very India-centric, but these are problems which are quite universal, and we are fortunate to have a lot of on-site participants from different areas and different countries, so any questions, observations, or interventions from your perspective are welcome. Tithi, please also feel free to pass on any questions from the chat from our online participants. But does anyone here have a question or an observation? Yes.
Audience: Thank you for this opportunity. My question is for our in-person speaker, since he tried to be practical. I am coordinating the implementation of the African Union Data Policy Framework. As part of that, we are running several stakeholder sessions to support some African countries in developing their national data policies. Our project wants to be participatory and inclusive, but there is a challenge when engaging persons with disability. So I want to know from your work: what is the best approach when it comes to engaging persons with disability? Are there any recommendations that we can also try? Thank you.
Speaker 1: OK, a very practical question. And thank you, because I have a good answer. So I'll give you one typical example. Which country did you say you are from? I am from Ghana, but my work covers several countries. OK, I've been to Accra, so I can picture some of the villages in Ghana and many other African countries. So listen, imagine a village where, generally, people are not connected. People do not know computers, and they still need to educate themselves on computers. There are still people there who want their work done, or to withdraw their money, or any of the hundreds of services that are now delivered digitally, right? That's the scenario everywhere you can see. Now, when you go there, you try to see how you can provide all these services to the people, right? It's very simple: either you go to a school and establish a lab, or you create a public access point where people can come and withdraw money or file an application or whatever. What we did, to do that, was find out: is there any disabled person there, preferably a woman, who is ready to learn? Educated or not is not the basic qualification. Are you ready to take a chance? Are you ready to sit at a computer? Are you ready to handle computers? And all these services that we are talking about, you provide through this centre, this facility. So suddenly, your job was actually just to find a person with disability with an intention to serve people, right? And that itself was such an empowering thing, because now the disabled person, who can't even move, is sitting and giving you education on computers, switching computers on and off, and letting you know: this is the way you can work, this is the way you can fill in a form, this is the way you can access YouTube, this is the way you can do something, right?
So digital became ammunition for that person to provide a service. Earlier, he or she was sitting at home, waiting for somebody to do them a favor. Suddenly that changed. And you do this from your home, so you don't have to go to a shop. So everybody is coming there to get those services. For the same services, people used to go to faraway places, two, five, or ten kilometers away, which is very expensive. And also, if you are very highly educated, then you are also very arrogant, you know, and this person is very humble. And then most people start coming there, because they think: oh my God, this person is doing so many things, why don't we go there and get it done by this person? And that one example became a viral thing for the rest of the people; the imagination changed. And we did this not in one place: now, out of 2,000 locations, 500 are run by disabled people. So the whole narrative changed, the area changed. And then there are those questions, which I'm not going to go into now; maybe if I get a chance I will share the statistics and numbers with you, because 85% of the people have a mobility disability. So you don't have to wait for the other 15% to be solved while you can do 85% of the things. Why don't you just pick the low-hanging fruit first? Just the people with mobility disabilities: reach them. They are the visible representatives of people with disability. So those are the things that we did, and we can share with you the whole research that we have done with all these 500 people, and how it can become a replicable initiative in many other countries.
Fawaz Shaheen: Thank you. We would encourage more participants to come up with their questions and interventions. Before we move to the next round of discussion, any questions from the online chat? Otherwise we can move to the next round of discussion and come back. There are no questions in the online chat, but there is one comment from Eleni on Osama Manzar's point about involving persons with disabilities in the process. We could ask Eleni to come in and… Is it better now? No, there's still an echo. Can you ask Eleni to come in? Hello? Yeah, I think we are better now. Sorry for that, a bit of a disruption. But Eleni, could we ask you to come in? I think there was a comment that you had. Oh, I'm on mute. Thank you. Yes. So, I was saying that it is very interesting. I think in the meanwhile we can continue the conversation here, especially because we already asked the question about the work that you're doing at the digital centers. The next round of discussion we're having is around automated decision-making. But before we go there, there is a question we wanted to ask Maitreya also, and to start with you, Mr. Manzar, maybe. Is it? Yeah, I think there's some problem. Should we wait a couple of minutes or can we continue? Okay. Yes. What I wanted to ask you was if we could understand a little more about setting up a local digital inclusion initiative, because one aspect of what you've been talking about is empowering people, making them providers and enablers instead of recipients. But there is also the question that the data around disability is extremely skewed. We don't have data that adequately represents how persons with disabilities are also a diverse group. So the picture of who a disabled person is, is a very settled, very stereotypical picture, if I may say.
But through your experience, through your work, including the interventions in the field as well as the research work, you have of course a much better view. At a government level, or when we are trying to design interventions at a larger level, how do you think this diversity among persons with disabilities can be approached? How can it be accounted for?
Speaker 1: So I would say that it is actually about innovation in data collection and in data contextualization as well. For example, let's say we have data which shows the diversity of various kinds of disability. Right now the tick boxes ask what the disabilities are, but there is no tick box on what the abilities are, right? Because you have to ride on the abilities of the same person who has the disability. The example I'm giving is: let's say my hands are not working; that's one disability. But should we only work on making that hand work, or should we work on the fact that your mouth is working, your voice is working, your eyes are working, your mind is working, other parts of your body are working, and on how enabling devices can make you not even think about the hand? It's like how we used to use remote controls operated by hand, but now all the remote controls are voice-enabled, right? So I am controlling all the remote controls by voice; I don't need my hand for managing them. I'm just giving one example. So rather than thinking that I will create something non-handheld, you think about which things can be voice-enabled, eyes-enabled, visually enabled. That is where the real innovation and contextualization is very important. While I'm not denying that our census or data collection on this diverse population must be done, alongside that we should put equal effort into asking which already available ability-oriented things can be contextualized. For example, can we have a list of everything that a mobile phone does which is directly relevant to people with disability?
Because the mobile device is the most able device among all the other devices in terms of making you able to work: it can transliterate, and you don't have to see, because it will speak out your messages. There are many other things that the mobile gives you, in a very secure manner, keeping your data very secure. The mobile does this, but I don't know about it, and many people with disability do not know. Especially if I, who am a source of enablement in some communities, don't know, then it's a pity. Can there be a whole checklist of everything, all the things that we must apply? There are all the enabling things that, let's say, a digital access public centre at village level should have. It should have a ramp, but it should have a hundred other things, which is not done. For example, none of our websites are voice-enabled, none of the websites are hearing-enabled or visually enabled. That is the first thing that we should do. To do that, you just have to make them mobile-enabled. If you make them mobile-enabled, then automatically your content can also be read out, rather than relying on a typical or traditional web version. So my last sentence is that we must do data collection and data solutions with a very high level of contextualization, cross-pollinating all the ability-oriented facilities which can enable, rather than only focusing on disability.
Maitreya Shah: Thank you so much. I'd like to add something. I'd like to differ with my friend a little bit here, because I think the problem with the entire data collection effort is not the focus on disability or ability. In fact, I feel that the very distinction between ability and disability is a very medicalized conception of disability, because we only focus on particular organs, senses, or impairments. And I think we have come a long way, after a lot of battle and advocacy, toward doing away with that in society, and especially with governments, who have increasingly been relying on impairment-based metrics to collect data about people or to categorize people with disabilities. In India this is specifically evident: the 2016 Act only recognizes 21 categories of disabilities, which is a very narrow way of defining disability. If you look at the UN Convention on the Rights of Persons with Disabilities, it takes a much broader approach. It talks about social barriers, about attitudinal barriers, and disability is defined from more of a social model perspective, where you think about how society poses barriers for people with disabilities, rather than about the ability or disability of a particular individual. And I think that is why these data collection efforts have been failing. The official data says that people with disabilities are 2 to 5 percent of the Indian population, but according to the World Health Organization, 15 percent of the world's population lives with some form of disability. If you ask me, this number is only going to be higher in the Indian context, because we also have high numbers of people who are poor and people who face other forms of marginalization, including lack of access to healthcare.
So the number of people with disabilities is going to be higher. But our data collection efforts have been failing because we don't want to count those people. We don't want to count all those who might be eligible to identify as a person with disability. So I think this ability/disability distinction is very medicalized, and we need to take the UNCRPD approach, take a more social-model approach, and think about larger barriers and larger universal access issues when we think about disability and data collection. Thank you.
Fawaz Shaheen: All right. Thank you, Maitreya. I think that's well taken. I could see you nodding through most of it. We can continue this exchange as well, if you want to.
Speaker 1: We can have that. I don't have any disagreement with what you're saying, because when you narrate anything, there can always be a different articulation of different things. The people who deal with data deal with laws and orders; I don't even know how to do that. But I totally agree that, yes, there is a perception of looking at disability from a very medical perspective. And that is the reason I mentioned that most of our activities are about not looking at it that way, but looking at it alongside several other abilities, and taking a positive stance on that to make it work better.
Fawaz Shaheen: Sure. I think this is an interesting conversation. We are down to the last 20 minutes, so we'll have one more question each for Maitreya and Eleni, and then we'll have some time for taking questions and observations from our participants. Before moving forward, I would just like to remind all the participants who just joined us that we have a document with a suggested code of best practices for an inclusive internet. This is a collaborative document. We request that before leaving you get access to the document from my colleague Nidhi, who is standing here. We will continue to take suggestions till the end of the day, and we'll try to get some of these suggestions included in the IGF call to action. So now, without further ado, Maitreya, I would like to put the next question to you, especially because you've been working on this area. Fairness in AI is a conversation that needs to include persons with disabilities much more, and in your last answer you were talking about how most of the automated accessibility plugins that we see are inherently deceptive. Some of that, we know, is also because of the corruption of the data sets themselves, the fact that they don't account for the diversity that exists among persons with disabilities. But my question to you right now is: given this context, how do you think we can begin to ask for accountability from automated decision-making systems, especially from the perspective of persons with disabilities?
Maitreya Shah: Thank you so much. That's a great question, and I'd like to break my answer down based on the two kinds of broader technologies that I see in the market. One is, quote unquote, technologies that are specifically designed for people with disabilities, and the other is mainstream technologies that people with disabilities also use as users. So the first category is automated and AI-based technologies, assistive technologies particularly, that are designed specifically for people with disabilities. When we think about accountability here, my first argument is that there is a lot of pseudoscience and a lot of broader, unchecked optimism around AI-powered assistive technologies. Everyone these days, I feel, is coming out with an AI-powered assistive technology in the market without understanding what its implications would be for people with disabilities. To give you a small example, especially in the care space: for people with disabilities who require caregiving support, people with, say, muscular dystrophy or cerebral palsy or other disabilities who, based on the severity of their disability, might require caregiving support, a lot of AI-powered smart robots are being adopted across the world on the claim that these technologies will be transformative for people with disabilities, without realizing that these technologies might violate the privacy of people with disabilities. Do you even need these technologies at all? Why would you need to replace humans with a robot when it comes to disability caregiving? There are many other issues too: there are assistive technologies that claim to fix certain disabilities, like autism.
And I think that is deeply problematic, because disability is not an ailment or disease that needs to be fixed, or an abnormality that needs to be fixed. And there are many AI-powered technologies currently in the market that claim to, quote unquote, cure such disabilities. So for me, the fairness conversation starts here: by asking if we need a certain technology for people with disabilities at all. And if we need it, who decides what is good for people with disabilities? Should it be a certain corporation sitting somewhere in the US, run by non-disabled people, deciding whether a technology would be good or bad for a person with disability? Or should it be a person with disability, or a group of people with disabilities, thinking about these technologies themselves? The second broader point is around mainstream technologies. There is so much AI and so much automation around us, and that's where people with disabilities are very marginalized in the conversation today. A lot of the research that has happened has focused on racial minorities or on gender, but there is very little research on how AI technologies impact people with disabilities or how fairness metrics can cater to disability. A lot of my recent research at Harvard has been on how AI fairness metrics and AI governance policies both explicitly exclude disability from their ambit. To give you an example, a lot of LLMs these days, like ChatGPT, are trained not to discriminate against people of color or people belonging to racial minorities, but they are not trained to avoid discrimination against people with disabilities. In my work, I've illustrated how discrimination actually manifests with ChatGPT and other generative AI tools like Gemini for people with disabilities. And there are several issues behind this. There are issues with the data, because this is a long-standing problem.
We don't have enough data on people with disabilities, and the data that we have is often very biased, because there has been so much societal stigma against people with disabilities in almost every jurisdiction, every country. So there is this problem of data. The other problem is that people with disabilities are usually never considered, never thought about, when a technology is designed or deployed. People with disabilities are usually not at the table; workforce participation of people with disabilities, especially in the technology sector, is very low. So representation is an issue. And the third is that governance policies also do not adequately consider disability. When you think about how a technology should be free of biases, or when it poses risks, and so on, disability is not considered. You can see this in other jurisdictions: the European AI Act, for example, does not consider how a biometric technology might impact people with disabilities, and it puts biometric technologies in a lower risk category, whereas to me, and to a lot of other researchers, biometric technologies pose significant risks for people with disabilities. So there are many issues with the larger AI fairness conversations, the larger AI bias conversations, when it comes to disability. And to end, I'd like to put this into broad themes: one, the lack of representation of people with disabilities in these conversations; two, a lot of false optimism, a lot of hype, around AI-powered assistive technologies without understanding whether they are even effective at what they are seeking to do, or what the intent of the companies manufacturing such technologies is.
And the third is the larger marginalization of disability from the different stages of a technology's life cycle, from design to deployment and then to governance. So that is broadly my understanding of the larger landscape of automated systems right now.
Fawaz Shaheen: Thank you, Maitreya. That's a very detailed answer, and I know you were trying to fit a lot into very little time, so thank you for doing that. We have 10 minutes now. Very quickly, Eleni, if we could go to you for three or four minutes: UNESCO has done so much work on open learning and distance learning. From your own experience, could you talk a little bit about the challenges of automated decision-making, especially in a field like education, and about some guidelines, ethical safeguards, or principles that we should keep in mind when approaching AI systems that impact persons with disabilities? And of course, when I say AI systems, I mean AI systems that are absolutely necessary, as Maitreya would say, not systems built just because we want to build them, but AI systems that are necessary or that are going to impact persons with disabilities. Eleni, if you could respond to that in three or four minutes.
Eleni Boursinou: Thank you, thank you, Fawaz. The work UNESCO has been doing in education is about… I mean, we see that there are really a lot of challenges faced by learners with disabilities in online education. The accessibility gaps in content and in platforms have to be recognized. There are tools that remain inaccessible, often incompatible with assistive technologies, that leave learners with disabilities excluded. These barriers are also accentuated by digital skill gaps and limited access to resources. We have a lack of inclusive pedagogies and of capacity building and training for educators in universal design for learning that further exacerbates these issues. Automated decision-making systems in education pose even more risks for persons with disabilities, because biases in data and algorithms often result in discriminatory outcomes, such as biased admissions decisions, assessments, or resource allocation. Transparency and accountability remain significant concerns, and ADM systems often make unjust decisions and fail to consider the diverse needs of persons with disabilities. What we are trying to do, by addressing all these challenges, is to work with member states on guidelines and implementation policies for ethical AI development in education, and also guidelines that include policies for not only persons with disabilities but, as Mr. Shah said in his comment, all marginalized communities and vulnerable groups, including people with autism, dyslexia, and learning difficulties. So, what we think is… I’m going to share in the chat all the links to some documents that UNESCO has been working on. And I also want to end with a field example that we had with the Rwanda Education Board. We worked with the Rwanda Education Board and the Light for the World NGO to foster digital skills development with teachers that have disabilities.
And we really saw that it is very, very important to have teachers with disabilities be represented within the communities and the whole education ecosystem. And we had a very, very interesting case study based on that. And they provided constructive feedback on the guidelines. And we actually enriched the guidelines for the governments based on their feedback. So, that’s all for me. I will share all the links in the chat.
Fawaz Shaheen: Thank you, Eleni. And we have some observations from participants. I think we can… I’ll… Yeah. We’ll just get the mic to you, sir. Thank you.
Audience: To introduce myself, I am Dr. Mohammad Shabbir from Pakistan, and I am the coordinator of the Internet Governance Forum’s Dynamic Coalition on Accessibility and Disability. I am really impressed by the discussion. I am sorry that I joined a little bit late, due to another session that I was speaking at. As the coordinator of the Dynamic Coalition on Accessibility and Disability, I would just want to flag that we have accessibility guidelines. The Dynamic Coalition on Accessibility and Disability is one of the old setups within the IGF system, and it has been advising the IGF on organizing accessible meetings for people with disabilities. We have revised guidelines out there at the DC booths, which in-person participants can go and get from there, and if someone needs a braille copy, that is also available there. Secondly, I would agree with the speaker from Harvard, I think it was Maitreya, pardon if I’m pronouncing the name wrong. There are deceptive practices within the data systems, particularly when we talk about AI and algorithm-based systems. Many of the things have already been said, but one concern that I have as a person with disability: we all use a number of AI-based systems, and we all know that data, as the oil of AI, is being used to train these systems. As persons with disabilities, we sometimes use these applications and systems for many of our personal documents, personal affairs, some work, et cetera. So there is a lot of private data of persons with disabilities being fed into these applications and systems. The same is the case for people who are hard of hearing, who use these technologies for interpretation and translation, be it sign language interpretation or otherwise in their conversations. So a lot of data is going into these applications.
We don’t know, because there are privacy policies, but to my understanding, except for Be My Eyes or some other applications, many of the applications have not changed their privacy policies. So these also need to be looked at from a disability-friendly perspective. One last point, and it does not need a response, but I want to leave it with you as an afterthought. There is a disability definition in the CRPD which states that people have impairments, and disability occurs when those impairments interact with societal barriers. If those barriers are removed, disabilities are removed. Broadening this definition, and this includes people with learning disabilities, autism, and so on, but my friends from India would understand that there are a lot of efforts to abuse these definitions of disability, with impersonators posing as persons with disabilities to get advantages. So on the one hand, while I agree that we need to encompass all disabilities in the definitions, we also need to have certain policies so that impersonators can be kept out of the facilities where people with disabilities are supported, rather than coming in and getting advantages in the name of persons with disabilities. Thank you so much.
Fawaz Shaheen: Thank you so much, Dr. Shabbir. I know that there is more scope for conversation right now, but unfortunately we are very much out of time. Thank you so much for joining us, even if it was for a little bit. And we are still here. If anyone wants to chat, have a conversation, the document is still up, you can comment on it, leave your suggestions. And I would like to particularly thank our on-site speaker, Osama Manzar, for joining us, for sharing your learnings, your experiences. I’d like to thank Maitreya for joining us from such a different time zone and bringing your insights into this. I’m sure we have a lot to learn from you. And I encourage everyone, actually, to check out Maitreya Shah online. Check out his page on the UC Berkeley website, some very interesting articles. I was recently reading one on how the AI conversation needs to include voices with disabilities. Do go and check those out. Also, thank you so much, Eleni, for joining us, for sharing your very valuable insights. And thank you, everyone, for taking out the time. We’ll keep this conversation going. See you around. Thank you. Thanks, everyone. Bye.
Tithi Neogi
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Need for accessible consent mechanisms and digital services
Explanation
Digital accessibility is crucial for persons with disabilities to give meaningful consent online. Current data protection laws do not mandate accessible consent mechanisms for persons with disabilities.
Evidence
The Indian data protection law allows guardians to give consent on behalf of persons with disabilities, reducing incentives to make consent mechanisms accessible.
Major Discussion Point
Digital Accessibility and Inclusion for Persons with Disabilities
Agreed with
Osama Manzar
Eleni Boursinou
Maitreya Shah
Agreed on
Need for digital accessibility and inclusion for persons with disabilities
Osama Manzar
Speech speed
160 words per minute
Speech length
2804 words
Speech time
1050 seconds
Importance of involving persons with disabilities in technology development and service provision
Explanation
Persons with disabilities should be involved in providing digital services rather than being treated as subjects. This empowers them and changes the narrative around disability.
Evidence
Example of setting up digital centers run by persons with disabilities in villages, providing services to the community.
Major Discussion Point
Digital Accessibility and Inclusion for Persons with Disabilities
Agreed with
Tithi Neogi
Eleni Boursinou
Maitreya Shah
Agreed on
Need for digital accessibility and inclusion for persons with disabilities
Differed with
Maitreya Shah
Differed on
Approach to disability data collection
Eleni Boursinou
Speech speed
104 words per minute
Speech length
881 words
Speech time
505 seconds
Challenges in online education accessibility for learners with disabilities
Explanation
Learners with disabilities face significant barriers in online education due to inaccessible content and platforms. There is a lack of inclusive pedagogies and capacity building for educators in universal design for learning.
Evidence
UNESCO’s work with the Rwanda Education Board and Light for the World NGO to foster digital skills development for teachers with disabilities.
Major Discussion Point
Digital Accessibility and Inclusion for Persons with Disabilities
Agreed with
Tithi Neogi
Osama Manzar
Maitreya Shah
Agreed on
Need for digital accessibility and inclusion for persons with disabilities
Maitreya Shah
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Lack of representation of persons with disabilities in AI and technology conversations
Explanation
People with disabilities are often not considered when technologies are designed or deployed. There is low workforce participation of people with disabilities in the technology sector.
Evidence
Example of how AI fairness metrics and governance policies often explicitly exclude disability from their ambit.
Major Discussion Point
Digital Accessibility and Inclusion for Persons with Disabilities
Agreed with
Tithi Neogi
Osama Manzar
Eleni Boursinou
Agreed on
Need for digital accessibility and inclusion for persons with disabilities
Privacy risks from AI-powered assistive technologies and data collection
Explanation
There is unchecked optimism about AI-powered assistive technologies without understanding their implications on people with disabilities. These technologies may violate privacy and promote problematic ideas about ‘fixing’ disabilities.
Evidence
Example of AI-powered smart robots being adopted for caregiving without considering privacy implications.
Major Discussion Point
Data Protection and Privacy for Persons with Disabilities
Agreed with
Angelina Dash
Audience
Agreed on
Concerns about data protection and privacy for persons with disabilities
Risks of bias and discrimination against persons with disabilities in AI systems
Explanation
AI systems and large language models are often not trained to avoid discrimination against people with disabilities. This leads to biased outcomes in various applications of AI.
Evidence
Research showing how discrimination manifests in ChatGPT and other generative AI tools for people with disabilities.
Major Discussion Point
AI and Automated Decision-Making Systems
Importance of disability representation in AI fairness and governance conversations
Explanation
People with disabilities are marginalized in conversations about AI fairness and governance. Disability is often not considered in policies about technology risks and biases.
Evidence
Example of the European AI Act not considering how biometric technologies might impact people with disabilities.
Major Discussion Point
AI and Automated Decision-Making Systems
Limitations of medical/impairment-based approaches to disability data collection
Explanation
Current data collection efforts often fail because they rely on a narrow, medicalized conception of disability. This approach excludes many people who might identify as disabled under a broader definition.
Evidence
Contrast between official Indian data showing 2-5% of the population as disabled and WHO estimates of 15% globally.
Major Discussion Point
Disability Data and Definitions
Differed with
Osama Manzar
Differed on
Approach to disability data collection
Need for social model and broader definitions of disability aligned with UNCRPD
Explanation
Major Discussion Point
Disability Data and Definitions
Angelina Dash
Speech speed
150 words per minute
Speech length
820 words
Speech time
326 seconds
Issues with treating persons with disabilities like children in data protection laws
Explanation
The Indian data protection law treats persons with disabilities similarly to children, requiring consent from guardians. This approach infantilizes persons with disabilities and doesn’t account for their autonomy.
Evidence
Comparison of treatment of children and persons with disabilities in Indian data protection law.
Major Discussion Point
Data Protection and Privacy for Persons with Disabilities
Agreed with
Maitreya Shah
Audience
Agreed on
Concerns about data protection and privacy for persons with disabilities
Need for sensitive personal data category in data protection laws
Explanation
The absence of a sensitive personal data category in Indian data protection law is problematic. Certain data of persons with disabilities can be more vulnerable and susceptible to misuse for discrimination.
Evidence
Examples of health and financial data of persons with disabilities being more vulnerable to misuse.
Major Discussion Point
Data Protection and Privacy for Persons with Disabilities
Agreed with
Maitreya Shah
Audience
Agreed on
Concerns about data protection and privacy for persons with disabilities
Audience
Speech speed
124 words per minute
Speech length
620 words
Speech time
298 seconds
Concerns about private data of persons with disabilities being used to train AI systems
Explanation
Persons with disabilities often use AI-based systems for personal tasks, inputting private data. There are concerns about how this data is being used to train AI systems and whether privacy policies adequately protect this sensitive information.
Evidence
Examples of applications used by people who are hard of hearing for interpretation and translation.
Major Discussion Point
Data Protection and Privacy for Persons with Disabilities
Agreed with
Angelina Dash
Maitreya Shah
Agreed on
Concerns about data protection and privacy for persons with disabilities
Challenges in balancing inclusive definitions with preventing misuse/impersonation
Explanation
While broader definitions of disability are needed, there are concerns about potential abuse and impersonation. Policies are needed to prevent impersonators from taking advantage of facilities meant for persons with disabilities.
Major Discussion Point
Disability Data and Definitions
Agreements
Agreement Points
Need for digital accessibility and inclusion for persons with disabilities
Tithi Neogi
Osama Manzar
Eleni Boursinou
Maitreya Shah
Need for accessible consent mechanisms and digital services
Importance of involving persons with disabilities in technology development and service provision
Challenges in online education accessibility for learners with disabilities
Lack of representation of persons with disabilities in AI and technology conversations
All speakers emphasized the importance of making digital technologies and services accessible and inclusive for persons with disabilities, highlighting various aspects such as consent mechanisms, education, and technology development.
Concerns about data protection and privacy for persons with disabilities
Angelina Dash
Maitreya Shah
Audience
Issues with treating persons with disabilities like children in data protection laws
Need for sensitive personal data category in data protection laws
Privacy risks from AI-powered assistive technologies and data collection
Concerns about private data of persons with disabilities being used to train AI systems
Multiple speakers raised concerns about the inadequate protection of personal data of persons with disabilities, highlighting issues in current laws and risks associated with AI technologies.
Similar Viewpoints
Both speakers advocate for broader, more inclusive definitions of disability that go beyond medical models, while also acknowledging the need to prevent misuse of such definitions.
Maitreya Shah
Audience
Limitations of medical/impairment-based approaches to disability data collection
Need for social model and broader definitions of disability aligned with UNCRPD
Challenges in balancing inclusive definitions with preventing misuse/impersonation
Unexpected Consensus
Importance of involving persons with disabilities in technology development and service provision
Osama Manzar
Maitreya Shah
Importance of involving persons with disabilities in technology development and service provision
Lack of representation of persons with disabilities in AI and technology conversations
Despite coming from different backgrounds (grassroots implementation vs. academic research), both speakers strongly emphasized the need for direct involvement of persons with disabilities in technology development and deployment.
Overall Assessment
Summary
The speakers generally agreed on the need for greater digital accessibility, inclusion, and data protection for persons with disabilities. They also emphasized the importance of involving persons with disabilities in technology development and policy-making processes.
Consensus level
There was a high level of consensus on the main issues, with speakers complementing each other’s perspectives from different angles (policy, grassroots implementation, research). This consensus suggests a strong foundation for developing more inclusive policies and practices in digital accessibility and data protection for persons with disabilities.
Differences
Different Viewpoints
Approach to disability data collection
Osama Manzar
Maitreya Shah
Importance of involving persons with disabilities in technology development and service provision
Limitations of medical/impairment-based approaches to disability data collection
Osama Manzar emphasizes focusing on abilities and involving persons with disabilities in service provision, while Maitreya Shah argues for moving away from medicalized conceptions of disability towards a social model approach in data collection.
Unexpected Differences
Optimism about AI-powered assistive technologies
Osama Manzar
Maitreya Shah
Importance of involving persons with disabilities in technology development and service provision
Privacy risks from AI-powered assistive technologies and data collection
While Osama Manzar appears optimistic about the potential of digital technologies to empower persons with disabilities, Maitreya Shah unexpectedly raises concerns about unchecked optimism regarding AI-powered assistive technologies, highlighting potential privacy risks and problematic assumptions about ‘fixing’ disabilities.
Overall Assessment
Summary
The main areas of disagreement revolve around approaches to disability data collection, the role of AI-powered assistive technologies, and the extent of representation needed for persons with disabilities in technology development and policy discussions.
Difference level
The level of disagreement is moderate. While speakers generally agree on the importance of inclusion and representation for persons with disabilities, they differ in their specific approaches and areas of concern. These differences highlight the complexity of addressing disability issues in the context of digital technologies and data protection, suggesting the need for continued dialogue and diverse perspectives in policy-making.
Partial Agreements
Both speakers agree on the importance of involving persons with disabilities in technology development and conversations. However, they differ in their approach, with Osama Manzar focusing on practical involvement in service provision, while Maitreya Shah emphasizes representation in broader AI and technology policy discussions.
Osama Manzar
Maitreya Shah
Importance of involving persons with disabilities in technology development and service provision
Lack of representation of persons with disabilities in AI and technology conversations
Takeaways
Key Takeaways
Resolutions and Action Items
Unresolved Issues
Suggested Compromises
Thought Provoking Comments
We must do conversation, action, intervention with people with disability in everything, in everything, whether you are doing research or data collection or doing something on work, they must be part of the ecosystem.
speaker
Osama Manzar
reason
This comment emphasizes the critical importance of including people with disabilities in all aspects of research, policy-making, and implementation related to disability issues. It challenges the common practice of making decisions for people with disabilities without their input.
impact
This comment shifted the discussion towards the importance of representation and inclusion of people with disabilities in decision-making processes. It led to further exploration of how to meaningfully involve people with disabilities in various contexts.
The problem with the entire data collection effort is not the focus on disability or ability. In fact, I feel that the very distinction between ability and disability, I think, is a very medicalized conception of disability.
speaker
Maitreya Shah
reason
This comment challenges the traditional medical model of disability and introduces the social model perspective. It highlights how current data collection methods may be fundamentally flawed due to their underlying assumptions about disability.
impact
This comment deepened the discussion by introducing a more nuanced understanding of disability. It led to a conversation about the limitations of current data collection methods and the need for a more holistic approach to understanding disability.
A lot of my recent research at Harvard has been on how AI fairness metrics and AI governance policies both explicitly exclude disability from their ambit.
speaker
Maitreya Shah
reason
This comment highlights a significant gap in current AI ethics and governance frameworks, pointing out how disability is often overlooked in discussions of AI fairness.
impact
This comment shifted the discussion towards the intersection of disability and AI, leading to a more in-depth exploration of the challenges and risks posed by AI systems for people with disabilities.
We worked with the Rwanda Education Board and the Light for the World NGO to foster digital skills development with teachers that have disabilities. And we really saw that it is very, very important to have teachers with disabilities be represented within the communities and the whole education ecosystem.
speaker
Eleni Boursinou
reason
This comment provides a concrete example of how including people with disabilities in educational initiatives can lead to more effective and inclusive outcomes.
impact
This comment grounded the discussion in practical examples, demonstrating the real-world impact of inclusive practices. It led to a discussion of best practices and successful case studies in disability inclusion.
Overall Assessment
These key comments shaped the discussion by shifting it from a theoretical understanding of disability issues to a more nuanced, practical, and inclusive approach. They challenged traditional perspectives on disability, highlighted the importance of representation and inclusion, and brought attention to emerging challenges in areas like AI and data governance. The discussion evolved from focusing on barriers and problems to exploring solutions and best practices for meaningful inclusion of people with disabilities in various contexts.
Follow-up Questions
How can we make consent mechanisms more accessible for persons with disabilities?
speaker
Tithi Neogi
explanation
This is important to ensure persons with disabilities can give meaningful consent online and access digital services independently.
Should sensitive personal data be reintroduced as a category in India’s data protection law?
speaker
Angelina Dash
explanation
This is crucial for providing additional safeguards for vulnerable data of persons with disabilities, which could be misused for discrimination.
How can we address the lack of a global south perspective in disability and internet governance discourse?
speaker
Angelina Dash
explanation
This is important to account for unique challenges faced by persons with disabilities in global south countries, including intersecting marginalization.
What is the best approach for engaging persons with disabilities in policy development processes?
speaker
Audience member from Ghana
explanation
This is crucial for ensuring policies are inclusive and address the actual needs of persons with disabilities.
How can we improve data collection efforts to better represent the diversity among persons with disabilities?
speaker
Fawaz Shaheen
explanation
This is important for developing more accurate and inclusive policies and interventions for persons with disabilities.
How can we ensure accountability from automated decision-making systems, particularly from the perspective of persons with disabilities?
speaker
Fawaz Shaheen
explanation
This is crucial to prevent discrimination and ensure fairness in AI systems that impact persons with disabilities.
How can we address the privacy concerns related to AI-based assistive technologies collecting personal data from persons with disabilities?
speaker
Dr. Mohammad Shabbir
explanation
This is important to protect the privacy and data rights of persons with disabilities who rely on these technologies.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
WS #235 Judges on Human Rights Online
Session at a Glance
Summary
This discussion focused on the challenges and opportunities of integrating digital technologies and artificial intelligence into judicial systems. The panel, which included judges, lawyers, and technology experts, emphasized the importance of engaging the judiciary in internet governance forums to address emerging digital rights issues.
Key points included the need for judges to adapt to rapidly evolving technologies, with examples given of how AI is already being used in courts for tasks like scheduling, case assignment, and legal research. Panelists stressed the importance of developing comprehensive legal frameworks to safeguard digital rights, privacy, and transparency in the digital age. They also highlighted challenges such as cross-border jurisdiction issues in cybercrime cases and the need for better training and resources for judges in developing countries.
The discussion touched on the potential benefits of AI in improving judicial efficiency, while also cautioning about risks like AI hallucinations and the need for human oversight. Panelists agreed on the importance of including marginalized groups in digital justice initiatives and called for more inclusive policies.
The session concluded with calls for greater collaboration between the judiciary, technology experts, and policymakers. Participants emphasized the need for ongoing education and capacity building for judges and lawyers to keep pace with technological advancements. Overall, the discussion underscored the critical role of the judiciary in shaping the future of digital rights and internet governance.
Keypoints
Major discussion points:
– The importance of including judges and the judiciary in Internet governance discussions and forums
– Challenges judges face in handling digital evidence and cybercrime cases
– The use of AI and technology in judicial processes and decision-making
– The need for legal frameworks and capacity building to address digital rights issues
– Ensuring access to digital justice for marginalized groups
Overall purpose/goal:
The main goal of this discussion was to explore ways to engage and empower judges to address digital rights issues and adapt legal systems to the challenges of the digital age. The session aimed to highlight the importance of judicial involvement in Internet governance.
Tone:
The overall tone was informative and collaborative. Speakers shared insights from their experiences in different countries and contexts. There was a sense of enthusiasm about bringing judges into Internet governance discussions for the first time. The tone became more urgent when discussing challenges, but remained optimistic about finding solutions through cooperation and capacity building.
Speakers
– Nazarius Kirama: Moderator, Tanzania Internet Governance Forum
– Umar Khan Utmanzai: Advocate, practicing law at Peshawar High Court, Pakistan
– Martin Koyabe: Cybersecurity expert, Global Forum on Cyber Expertise
– Eliamani Isaya Laltaika: Judge, High Court of Tanzania
– Rachel Magege: Lawyer specializing in data protection and governance, Tanzania
Additional speakers:
– AUDIENCE: Various audience members who asked questions
Full session report
Revised Summary of Judicial Engagement in Internet Governance Discussion
This comprehensive discussion focused on the challenges and opportunities of integrating digital technologies and artificial intelligence (AI) into judicial systems. The panel, comprising judges, lawyers, and technology experts, emphasised the critical importance of engaging the judiciary in internet governance forums to address emerging digital rights issues.
Key Themes and Discussion Points:
1. Importance of Judiciary Engagement in Internet Governance
The panellists unanimously agreed on the necessity of including judges and the judiciary in Internet governance discussions and forums. Judge Eliamani Isaya Laltaika stressed that judges need to understand digital issues to properly adjudicate cases, while moderator Nazarius Kirama pointed out that the judiciary has been notably absent from Internet Governance Forum discussions. Advocate Umar Khan Utmanzai highlighted the need for legal frameworks to be updated to address digital rights effectively.
There was a strong consensus that judges should embrace AI and other technologies to improve court processes. However, this point also revealed some differences in approach. While Judge Laltaika advocated for enthusiastically embracing AI, Utmanzai cautioned about the challenges judges face due to lack of training and understanding of technology.
2. Challenges in Applying Law to Digital Spaces
The discussion highlighted several key challenges that judges and legal systems face in the digital age:
a) Complexity of Digital Evidence: Martin Koyabe, a cybersecurity expert, emphasised that digital evidence is complex and requires new skills from judges to interpret and use effectively in court proceedings.
b) Cross-border Jurisdiction: Utmanzai pointed out that the cross-border nature of the internet creates significant jurisdictional issues for courts, particularly in cybercrime cases. He elaborated on the difficulties judges face in determining jurisdiction, collecting evidence, and enforcing judgments across borders.
c) AI Hallucinations: Judge Laltaika raised concerns about AI systems potentially “hallucinating” and presenting inaccurate information, which could have serious implications in legal proceedings.
d) Lack of Precedent: Utmanzai noted that the lack of precedent in cyber cases creates difficulties for judges in making consistent and informed decisions.
3. Strategies for Improving Digital Rights Protection
The panel and audience members proposed several strategies to address these challenges:
a) Comprehensive Legal Frameworks: There was a call for the development of comprehensive legal frameworks specifically designed to address digital rights issues. Martin Koyabe emphasised the need for robust digital frameworks in countries.
b) Cross-border Collaboration: Audience members suggested strengthening cross-border judicial collaboration to tackle jurisdictional challenges in cyber cases.
c) Judicial Training: Utmanzai emphasised the need for increased judicial training on technology issues to bridge the knowledge gap. This includes incorporating cyber law into law school curricula.
d) Showcasing AI Benefits: Rachel Magege, a lawyer specialising in data protection, suggested demonstrating the benefits of AI to increase its acceptance within the legal community.
e) Judiciary Global School on Internet Governance: Judge Laltaika announced the launch of this new initiative to train judges on internet governance issues.
4. AI Integration in Judicial Systems
Judge Laltaika shared insights on the use of AI in Tanzanian courts for various purposes:
– Scheduling court sessions
– Case assignment to judges
– Language translation
– Legal research assistance
He also mentioned an upcoming session on AI ethics for judges, highlighting the proactive approach to addressing AI-related challenges in the judiciary.
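Of the uses listed above, automated case assignment is typically implemented as a randomised draw among the least-loaded judges, so workload stays balanced while no party can predict which judge receives a case. The sketch below is a hypothetical illustration of that pattern, not a description of the Tanzanian system:

```python
import random
from collections import defaultdict

def assign_case(case_id, judges, caseload, registry, rng=random):
    """Assign a case to one of the least-loaded judges, chosen at random.

    Randomising among the least-loaded judges balances workload while
    keeping the outcome unpredictable, which preserves impartiality.
    """
    lightest = min(caseload[j] for j in judges)
    candidates = [j for j in judges if caseload[j] == lightest]
    judge = rng.choice(candidates)
    caseload[judge] += 1
    registry[case_id] = judge  # record the assignment for the court record
    return judge

judges = ["Judge A", "Judge B", "Judge C"]
caseload = defaultdict(int)
registry = {}
for n in range(9):
    assign_case(f"HC-{n:03d}", judges, caseload, registry)

# After nine assignments the workload is perfectly balanced.
assert sorted(caseload.values()) == [3, 3, 3]
```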
5. Inclusivity in Digital Rights
The discussion touched on the importance of ensuring inclusivity in digital rights:
a) Nazarius Kirama stressed the need for policies to prevent digital exclusion of marginalised groups.
b) Rachel Magege highlighted how gender-based violence can be exacerbated online and how the digital divide affects access to justice.
c) Martin Koyabe emphasised that frameworks should embed human rights protections to ensure inclusivity.
Thought-Provoking Insights:
1. Judge Laltaika shared an anecdote about a high court judge in East Africa who was summoned by a disciplinary committee for allegedly using ChatGPT in writing part of a judgment, highlighting real-world challenges of AI use in judicial processes.
2. Judge Laltaika provided a positive example from Tanzania, where a strategic five-year plan was developed to identify judiciary needs and secure executive support for technological advancements.
3. Martin Koyabe praised Tanzania’s approach to developing its cybersecurity strategy, which embedded fundamental tools and instruments within the strategy.
4. Umar Khan Utmanzai described the situation in Pakistan, where many high court judges lack basic knowledge about the internet and AI due to their isolation from public interaction and outdated legal education.
5. Judge Laltaika used a metaphor of rebuilding a house to illustrate the need for adapting the judiciary to accommodate the digital world.
Resolutions and Action Items:
1. Launch of the Judiciary Global School on Internet Governance to train judges on IG issues.
2. Plan to include more judges and legal practitioners in future IGF meetings, including a parliamentary track room session on this topic.
3. A Tanzanian judge’s commitment to request government sponsorship for lawyers to attend the next IGF.
4. Tanzania Internet Governance Forum’s initiatives to engage judges in IG discussions.
5. Recognition of UNESCO’s guidelines for AI use by judiciaries.
Unresolved Issues and Future Considerations:
1. Balancing judicial independence with the need for technology adoption.
2. The extent to which court processes should be digitised (e.g., online marriages).
3. Addressing AI hallucination and ensuring the accuracy of AI-generated legal information.
4. Funding and resource allocation for judiciary digitisation in developing countries.
In conclusion, this discussion underscored the critical role of the judiciary in shaping the future of digital rights and internet governance. It highlighted the urgent need for judges to adapt to rapidly evolving technologies while emphasising the importance of developing comprehensive legal frameworks to safeguard digital rights, privacy, and transparency in the digital age. The session called for greater collaboration between the judiciary, technology experts, and policymakers, emphasising the need for ongoing education and capacity building for judges and lawyers to keep pace with technological advancements.
Session Transcript
Nazarius Kirama: We are on Channel 5 for the session. So we wait for like two minutes before we start. I am Dr. Nazarius Kirama and I'll be moderating the session; my fellow Daniel Turan will be moderating online, and Atanas will be our reporter. I would like to take this opportunity to welcome all of you to this very important session, which is happening for the first time in the lifetime of the Internet Governance Forum. It is the first time that we're going to have this, and we hope that next year will be bigger, and the years after that. If you can put on the presentation, please. Just a moment. So like I said at the beginning, my name is Nazarius Kirama, a.k.a. Nazare Nikola — that is my digital identity name — and today I'm going to be your pilot, with my co-pilot, Mr. Turan from Italy. Today we're going to have a session on challenges to human rights online. By way of overview: at this time, when we live in the age of artificial intelligence, the digital age presents challenges that require safeguards for human rights online. Privacy, freedom, and inclusion are some of the key issues that need to be taken into account, so our session will focus on ways in which we can empower judges to address digital rights, and we will also explore legal frameworks for inclusion. As I said at the beginning, since the formation of the Internet Governance Forum in 2005, the judiciary has not been engaged in this space — perhaps because of the notion of the independence of the judiciary. But the judiciary, as one of the three branches of government, needs to be included in this space so that it can not only learn, but also engage properly in the debates on how the Internet should be governed. 
Our attempt as Tanzania Internet Governance Forum is to engage judges, and we have had two initiatives aiming for that, so you can see that in the IGF multi-stakeholder model we are bringing in the judiciary. Can you hear me now? Okay. The aim of this session is to bring together as many stakeholders as possible for collaboration to protect digital rights online, and the objectives are to adapt legal systems to safeguard privacy and freedom of expression, address cross-border enforcement challenges, foster inclusive policy for marginalized communities, and build judicial capacity in Internet governance. These are the kinds of objectives that we are going to address through our esteemed panel. As for expected outcomes, we believe the speakers will be able to address how legal systems can better safeguard privacy and freedom of expression. Judicial collaboration is one of the expected outcomes, in terms of a roadmap for cross-border cooperation in enforcing digital rights consistently and fairly; inclusivity in digital policies is another, in terms of recommendations for inclusive laws addressing marginalized and disabled communities to prevent digital exclusion; and the final expected outcome is the identification of tools and strategies to enhance judicial engagement in Internet governance. I would like to take this opportunity to thank all of our session makers, including the Honorable Judge Dr. Eliamani Isaya Laltaika for his input; myself; Rachel Magege, who is joining us online; Daniel Tura from Italy, who is our online moderator; Atanas Bazihire, our on-site rapporteur, who is around here; Dr. Martin Koyabe, who is our cybersecurity expert — sorry for the typo; and Pamela Chogo, who is a lecturer and online rapporteur. 
We are also very glad today to introduce to you the new baby in town, the Judiciary Global School on Internet Governance, which has been registered with the Dynamic Coalition of Schools on Internet Governance. This is basically a platform for judges to learn about Internet governance, and it is an initiative of Tanzania IGF, the ISOC Tanzania Chapter, and the Organization for Digital Africa. We thank Dr. Eliamani, Honorable Judge of the High Court of Tanzania, for his enormous input and counsel on this initiative. In the years ahead, we expect to continue to train judges from various jurisdictions around the world, and we look forward to engaging this critical branch of government in the IG space. We thank all of them for the opportunity we have created to engage judges in this space. Now I would like to take this opportunity to introduce our speakers. Each speaker will have about one minute to introduce themselves — your name, your expertise, and the organization you come from — and then we will continue. Welcome to the session, ladies and gentlemen, and I look forward to your interaction. We'll have a Q&A session, and we look forward to receiving your questions; we believe the speakers in front of you are very capable of answering and engaging with them. Thank you. We start with Advocate Umar from Pakistan.
Umar Khan Utmanzai: Hello. Thank you so much, moderator of this session. This is Advocate Umar. I am based in Peshawar, Pakistan, practicing law at the Peshawar High Court, and I run a civil society organization called the Citizen Rights and Advocacy Forum (CROP), which works on awareness and sensitization around citizen rights. Digital rights and privacy protection are among our objectives: sensitizing the local community that access to a free and fair internet is their right, and showing how we can protect them, particularly from cyberbullying — especially women, who are very vulnerable in Pakistan. Alongside that, I take on cyber cases in Pakistan. So this is what we are doing in Pakistan. Thank you.
Nazarius Kirama: Dr. Koyabe.
Martin Koyabe: Yeah, thank you very much, Naz. First of all, let me take this opportunity, on behalf of the Global Forum on Cyber Expertise, where I work and consult, to really thank the organizers and the judge for pushing this idea — I remember we talked about it in Kigali at some point, and I'm really glad to see it here. My name is Dr. Martin Koyabe, and my main area of work is cybersecurity. I've worked extensively in a number of African countries, specifically on strategy development and, more importantly, on building the ICT sector within the continent. We'll talk more within this session, and I'm really pleased to see some of you here today. Thank you.
Eliamani Isaya Laltaika: Thank you very much, facilitator. My name is Eliamani Isaya Laltaika. I'm a judge of the High Court of Tanzania and an adjunct faculty member of the Nelson Mandela African Institution of Science and Technology, where for at least 10 years I taught cybersecurity law, bioethics, and other law-related courses to scientists. As we will discuss, I very much believe that without engaging judges while building the digital economy, we will be shooting ourselves in the foot: if you have very good policies and very good laws, and you bring a case before a judge who cannot tell a mouse from another device, then you are doing nothing. So I believe that judges are the fulcrum of any attempt to protect rights, be they digital or physical. It is therefore my firm belief that in the next few years we will have a lot of judges here, and that is better for humanity. I have a dream.
Nazarius Kirama: Thank you so much, Judge, for always being there for us. I know we have had a lot of work to do to realize the dream Dr. Koyabe was talking about — in Kigali it was just a dream, but now it is a reality. We hope to continue to borrow your wisdom in engaging judges in the internet governance space. Now I will put questions to our able speakers, starting with you, Honorable Eliamani. Could you tell us why you have been advocating for the inclusion of the judiciary in the IGF? I mean, why is it important?
Eliamani Isaya Laltaika: Yeah. Thank you very much. Hello? Yeah, I can hear you. Thank you very much. Like I said, this is something very close to my heart, and it has a personal story. After my PhD, I was employed by the Nelson Mandela African Institution of Science and Technology. It is one of a network across the continent. Our elder Mandela had a dream of making Africa a hub of skills, knowledge, and capabilities in STEM — science, technology, engineering, and mathematics. So he engaged with funders and donors from across the world, and through the IMF and World Bank they wrote a concept for establishing the "MIT of Africa." They said: we really cannot have one single MIT for Africa because of language barriers — some countries are Arabic-speaking, others French, others English, others Portuguese — and we also cannot have one MIT for the whole continent because education systems differ. In the end, they decided to establish four institutions, and I'm very glad that through the diplomacy and efforts of the government of Tanzania at the time, under President Jakaya Mrisho Kikwete — who before that was in foreign affairs — one of those prestigious institutions was established in my hometown of Arusha. As soon as I finished my PhD, I joined this institution and was tasked with preparing curricula and course outlines to teach cybersecurity law, intellectual property, bioethics, and every other area of law required by scientists. It was while reading the materials available in the cases that I discovered there was a huge knowledge gap between judges and lawyers on the one hand and scientists on the other. From there, I became a link to try to bridge the gap. I conducted many seminars with the law society and with judges to try to alert them to fundamental issues related to ICT. I didn't know then that I would become a judge, but five or six years later, in 2021, I was appointed by the President as a judge of the High Court. 
My friend Dr. Nazar Kirama here used to work with me at Tanzania IGF before I became a judge. One day, he came very humbly and asked whether, after becoming a judge, I would still come to the IGF. He was almost trembling, so I just gave him a hug and said: please sit down, it's okay, I'm still the same. We traveled together to Kigali, where Dr. Koyabe is based. Throughout that trip, I kept saying that I need to have judges included, because I'm the only one getting this knowledge. Luckily, we then traveled to Kyoto. During the closing ceremony of last year's IGF, I was given the platform; after Prime Minister Suzuki, I took the stage and made a joke. I said: there are about 9,000 people here, but I'm the only judge — what happened? I really pray that next year we have more judges. I'm very glad that the IGF Secretariat took this very seriously. They wrote me an email back in Tanzania and said: we have started; your colleague Dr. Nazar Kirama has brought a proposition. Today at 12, you will see a session established by the IGF on the ethics of judges and the use of artificial intelligence — because some judges are afraid of coming here, thinking artificial intelligence is eroding their ethical values. So we have a session today to answer: does the use of AI make you a better judge or a worse judge? We'll have a very open discussion to make sure that judges all over the world gain the confidence to come with us. I will stop there for now, because I have so many stories to tell.
Nazarius Kirama: Thank you so much, Judge Laltaika. I think you're one of the judges I could dare say is truly down to earth. We have Advocate Rachel Magege trying to join us online; as soon as she joins us, we will have her introduce herself. We continue with Advocate Umar from Pakistan. Advocate Umar, I know you have been in the space of lawyering for several years. Do you think judges need to learn new tools of statutory interpretation to accommodate human rights in the digital space?
Umar Khan Utmanzai: The digital world is growing rapidly, and not just law — every field is going to adopt it. Having been in the legal fraternity for the last three years, I believe it is essential for judges to adopt new tools in order to try and deal with cases related to cybercrime and other digital issues, because in the traditional system of law you do not have the means to try cases related to hate speech and freedom of expression online. We have seen the Charter of Human Rights and Principles for the Internet, derived from the UDHR of 1948, which is important: the rights available to citizens, to all humans, under the UDHR have now been carried into the Charter of Human Rights and Principles for the Internet. A judge works with laws that have not developed the way the internet has — and beyond the internet, AI will bring even more in the coming years. So I believe it is very important for judges to adapt themselves to the new tools needed to interpret the laws. Judges, who deliver justice, are among the most important stakeholders, as the learned judge of the High Court of Tanzania has explained so brilliantly in describing why he has always had an interest in working on digital rights. The traditional, old laws do not deal with, do not answer, and do not address the issues that now arise in the digital world.
Nazarius Kirama: Thank you, Advocate Umar. It is very important for judges to keep abreast of emerging technologies and the tools they need to deliver justice on time — we know justice delayed is justice denied, and technology tools have the capacity to shorten the time in which justice can be delivered. Now, we have Madam Advocate Rachel Magege. Can you hear us?
Rachel Magege: Yes, yes, good morning. I can hear you loud and clear. Can you hear me?
Nazarius Kirama: Yes, we can hear you, Rachel. Thank you so much for joining us online. Please take one minute to introduce yourself and your institution, and then answer this question: Advocate, can gender-based violence be exacerbated online? That will be your question after your introduction. Welcome, yes.
Rachel Magege: Thank you, thank you so much, and good morning to everyone. I am Rachel Magege. I am a Tanzanian citizen and a lawyer; my areas of practice are data protection and data governance more generally. I am very, very honored to be a part of this panel and this conversation. I sit on the board of directors of the Tanzania Privacy Professionals Association, an association of privacy and data protection experts in the country, so I'm really glad to be having this conversation. Mr. Nazarius, on your question about gender-based violence and digital and online platforms: absolutely, gender-based violence can be carried over and amplified, and can become even greater, because of all of these different digital and online platforms. It is very important for people to know that, many times, what happens in the physical space is what will move and happen in the online and digital spaces as well. Likewise, what happens in online spaces — where people who do not even know each other perpetuate harassment and bullying against specific genders — can also move into the physical. I am usually very sensitive about this topic with the many clients and people we talk to, but also in the judiciary sector: if harassment is already happening in the physical space, shaped by different backgrounds, relationships, and structures, and you have an institution that wants to introduce artificial intelligence or other technologies, that same mindset is still going to appear on the online and digital platforms. So it is very important that, as much as people work to understand and learn these emerging technologies, they also understand and learn more about themselves and the biases they carry, which can transfer into the digital and online space. 
That’s a very brief response I can give for now. I’m happy to answer more questions as they come. Thank you.
Nazarius Kirama: Thank you very much, Advocate Rachel. Indeed, I was tickled by what the Honorable Judge said at the beginning: it is very important for the judiciary to be engaged on all these issues, because cases about the internet or internet-related matters will always end up in the courts, so the judiciary will always need to be the one that knows the issues. This is what we are trying to achieve through the engagement of the judiciary in the internet governance space. Honorable Judge, how can governments make sure that privacy laws keep up with fast-changing technology? Now we have artificial intelligence, we have all these blockchains, and there is resistance to all of these. How can governments across various jurisdictions keep up with the fast-changing technology?
Eliamani Isaya Laltaika: Thank you, facilitator. One of the important things judiciaries do is breathe life into legislation. When a law is enacted, be it at the international or national level, it is a document that is more or less formless: the definitions are vague, the rights are not easily understood. If it is an international instrument, it can sit in the cupboards for many years. But as soon as that law comes before a court, the court gives an interpretation — it breathes life into it, making it something people can relate to. That is what we are currently trying to do with data protection and privacy laws. For example, the first thing many lay people, and even many of our technocrats, do not distinguish is that data protection and privacy are different things. Many people think they are the same; they are not. A judge knows that privacy is part of human rights, while data protection — the protection of one's information — is a distinct fundamental right. If you look at the EU Charter of Fundamental Rights, these are Articles 7 and 8, and they are different. In our country, Tanzania, all data protection cases start with the commission, because it is a right protected administratively; for a human rights issue, you go directly to the High Court — you don't even go through administrative procedures, and if a lower court recognizes that a case touches on fundamental human rights, it immediately refers you to the High Court. So at the moment, what I'm seeing across jurisdictions is that judges are defining concepts, clarifying concepts, and laying down the foundations for integrating privacy and data protection laws into the larger legal fabric. Breaches are committed by the private sector — it is the private sector that manufactures these gadgets you are seeing. 
But it is the role of the government to regulate them. The court stands there as an umpire to say, OK, the manufacturer is responsible for protecting privacy. When you release a headset or a computer or whatever gadget, it is upon the regulator to make sure.
Martin Koyabe: It is stored in digital form, in zeros and ones. What that means is that there is a huge fundamental challenge in how we handle digital evidence. The second issue is that it is very volatile: if you change the time at which it is stored, or move it from one point to another, you might actually alter the evidence attributed to it. There are specific areas that need to be considered. It also traverses boundaries: you can carry digital evidence on a USB drive, on a memory stick, or on any other device, and move it from one jurisdiction to another — and that brings a huge number of challenges when it comes to judgment and how we have to handle the issue. Therefore, the processing of evidence has to be based on principles agreed across jurisdictions. There is also the challenge of digital evidence's dependency on technology: you need a device, or at least something that will interpret what is referred to as digital evidence into a form that is admissible. The authenticity the judge talked about — of the device being used to interpret that digital evidence — therefore becomes fundamental, and it is important that we understand where we are going when it comes to evidence that is admissible in court. The other thing we have to be cognizant of is that, when considering digital evidence and how to have it admitted in court, the preservation of that evidence becomes fundamental. All of this requires what we call expertise — expertise in processing, digital evidence officers, and so forth. 
The last point I wanted to make is that those of us who work with government have a duty to ensure that the experts who provide forensic expertise within the judicial system are compensated well enough that we can preserve that knowledge within the institution. That is how I view the whole concept of digital evidence and how fundamental it is to court cases. Let me give you an example, just before I end, from a country I know of, where there was evidence of a crime that had occurred in a specific room, and this evidence was on the computer screens. But because the collection was so rudimentary, the police and other officers who went to collect the evidence simply unplugged the cable, and what was on the screen — which was critical for judgment in the courts — disappeared. There is also the issue around issuing court orders: in some cases, court orders are supposed to be issued in written form, and some laws are so arcane that they cannot accept any other form of court order unless it has been written and delivered to the individual who is supposed to appear. So we have fundamental differences in how we need to update our courts, our judges, and also our legal frameworks. But we'll come back to that issue at some point. Thank you.
Nazarius Kirama: Thank you, Dr. Koyabe. We are now almost finished with this first segment of presentations from the speakers. Before we take questions, I would like to ask Rachel: what steps can policymakers take to ensure people with disabilities and marginalized groups have equal access to digital tools and services?
Rachel Magege: All right. Thank you so much for that question. Here's what I'm going to say. A lot of frameworks, laws, regulations, and subsidiary legislation have already been put in place. Whereas it is good to have a specific law that mentions a specific person or group of people, what I see as more beneficial is for policymakers to continuously remind, and maybe even educate, the implementers and regulators to be very creative and deliberate in how they carry out the provisions of the law. Yes, it is good to say we want a specific law on digital platforms and the digital innovation space that says this and that about people with disabilities or the elderly. But if you already have a law — for example, the Persons with Disabilities Act — and it's very general, as with many…
Nazarius Kirama: Rachel, I think we lost you. Hello? Anyway, I think we can proceed. Did we lose her? Anyway, I think I will continue with Advocate Umar on… Is Rachel online? No. Can you hear me, sir? Yes, we can hear you.
Rachel Magege: My sincere apologies — the Zoom link just threw me out, but I'm back now. Let me quickly wrap up my submission. What I was saying was that, as long as you have laws already in place, the implementers and regulators have to be very creative. For example, say I have a provision that says everyone needs to have access to clean and safe water. You, as the implementer or regulator, are going to go to a place where you have little children, the elderly, people with physical disabilities, and perhaps people with visual or hearing impairments. You have to be very creative in looking at all of these different groups and seeing that, in order for all of them to get clean water, this person is going to need this, that person will need that, another will need extra assistance — but at the end of the day, all of them get water. It is the very same with digital platforms and the digital space. And we are already seeing a lot of this come up, because at the end of the day the policymakers get to say: we have created this law; and when it is executed and implemented, young students, young girls, and people with disabilities all have their needs met, each in the different way their needs require. That's the biggest thing I would say, Mr. Nazarius. For the benefit of the members of the audience: Tanzania has already been working on a number of pieces of legislation, and currently the ICT ministry is working on a national artificial intelligence framework and strategy. 
I do remember sitting down with them and with some development partners, where we advised them that, as much as the ministries are now conducting different assessments — needs assessments and impact assessments in the country — you have to make sure that the law and framework you are creating, first of all, aligns with all the other ICT frameworks in the country, and especially with the Data Protection Act of Tanzania, but also caters to every different group of people. That is the feedback I can give right now. And I know that once a regulator or implementer — the commission, a regional commissioner, a district commissioner — is creative in how they carry out the law, it is actually also going to reduce the number of complaints and lawsuits that Dr. Laltaika receives in court. It will reduce that a lot, because you will see that this implementer, this government official, has actually taken the time to understand what the policies provide, what different groups of people need, and how those different needs can be met. Thank you very much.
Nazarius Kirama: Thank you, Advocate, for bringing in the perspective of marginalized sections of society. Now we are going to go to the audience, but before we go to the audience in the room, we’d like to have a question from Zoom. There is a person on Zoom with a question for the court, for the judge. So if…
AUDIENCE: Can you hear me?
Nazarius Kirama: Yes.
AUDIENCE: We have a question for our honorable judge Eliamani. Nana is asking: what perspectives are there for the use of AI to support, refine, clarify, enhance and influence decisions for judges? This is a question from an operational point of view.
Nazarius Kirama: Judge, that comes to you.
Eliamani Isaya Laltaika: Thank you very much for that brilliant question from our Zoom attendee. It is now an open secret that AI is used in the courtroom, including by judges and their assistants. However, there are no guidelines or regulations, and there are very different perspectives on how jurisdictions are embracing artificial intelligence. Within the East African Community, and I’m not going to mention countries here, in one country a high court judge was summoned by the disciplinary committee because part of his judgment was allegedly written by ChatGPT. I think there was a “4.0” that looked like ChatGPT-4, and some sentences which were not very legal, and he was called to answer, and many lawyers have been reprimanded, and they got scared. Within the same East African Community, just across the border, a chief justice is saying: please embrace generative AI. Make use of it to clarify presentations by lawyers; at the end of the day you are responsible for what you say, but use any tools. Now, we are not sure what happens, because these are countries sharing a border, sharing a history of law developed from the British. However, luckily, we now have UNESCO. We talked yesterday, when we were launching the guidelines for the use of AI by judiciaries that are being pioneered by UNESCO. I hope that in the next few months, or two years down the line, we will have clear guidelines. To answer specifically, from my jurisdiction, we are using artificial intelligence in the courts in Tanzania in at least four ways. One, scheduling. We no longer put our schedules together manually. It is automated. There is an electronic case management system, so I know the cases that will come to my chamber in two weeks or next month, because they are simply automated. Secondly, assigning cases. The judge in charge is no longer the only person who assigns cases.
In the past, people would say, this boss is giving me cases with advocates who are sole complainants. Nowadays, it is the artificial intelligence which just says: judge one, judge two, judge three. Cases are filed, and the artificial intelligence says: Laltaika, so-and-so, so-and-so. So you are sure that there is no bias, and you handle your file very well. Thirdly, we use artificial intelligence in language translation. Our country uses Kiswahili, which is the national language, but the language of the court is English. So if someone speaks, we have deployed transcription and translation systems where you can transcribe what a person is saying and translate it back. Fourthly, we use it intensively for research. And I said yesterday that when ChatGPT started in 2022, it was very inaccurate. You would get fake cases. It would come up with cases which are not anywhere in the law reports. That has changed dramatically. Now you can be quite sure that it provides you with cases that have been decided by the Court of Appeal of Tanzania. It is only left for the judge to say: this is relevant here, this is not. And finally, we are actually using AI to produce simplified-language versions of texts, including legislation.
Nazarius Kirama: Thank you, Judge, for your critical intervention. And now I would like to open the Q&A to the audience in the room. If you have any question for any of the panelists, you are welcome to raise your hands. Please, if you can get the microphone.
AUDIENCE: You hear me?
Nazarius Kirama: Yeah, okay, here you go.
AUDIENCE: Thank you so much. Really appreciate it. This has been a very interesting conversation. I also attended the other session on AI in the judiciary with the Honorable Judge, and I personally learned a lot; thank you for the other panel. I wanted to comment on the fact that there are a lot of people in the room who are not familiar with AI. And I think it’s very important to comment, because we tend to talk a lot about a specific context, but other countries have a different context, for example in terms of their maturity in using technology. I’m from Iraq, and I wanted to talk first about the political will in the country to use technology. The judiciary, I know, is independent, but it is always somehow affiliated with, and in line with, the political will. Before getting to the question, I wanted to echo what the Honorable Judge said: it’s unfortunate that we don’t have many judges, or lawyers, or legal practitioners in the room; we are very few. So hopefully, next year, we will have more judges, lawyers and legal practitioners in the room to share their experiences and views. My question is for the Honorable Judge, because we are advocating to bring in all the stakeholders, in my country for example, including the judiciary. For a judge who has been heavily dependent on paper, not using technology, as you mentioned at the beginning of your speech, how would you convince someone who needs to change all this, who needs to spend more time learning technology, to hire experts to preserve evidence, analyze evidence, and things like that? What are the main three arguments, let’s say, that you would use when you advocate for that change?
And the other question: do you think that at the beginning, judges will need, in addition to investigators, someone who is tech-savvy to help with all this kind of evidence? Thank you so much.
Nazarius Kirama: A very loaded question, I might say. Should we answer questions as they come, or take all the questions first and then answer? Which one should we take? As they come? Is that a consensus? I am a democratic moderator.
Rachel Magege: It’s okay with me. Thank you.
Nazarius Kirama: Okay. Thank you so much. Judge, if you can intervene, please.
Eliamani Isaya Laltaika: Okay. Very quickly. Thank you very, very much for that question. It is true that we are not in isolation; the old-fashioned, strict way of talking about separation of powers does not hold. I am from the judiciary; my colleague is a minister; we work together. So I meet my minister and say, okay, look here, this is the law you are proposing; it doesn’t work our way, so do this. That’s what is happening in the U.K. That’s what is happening in the U.S. But if you follow a very strict kind of separation of powers, you will be left behind, so everyone has to start somewhere. Even judges who are so tied to paper and writing in Dar es Salaam can be encouraged to start small. In our country, it is mandatory for every judge to use a computer. You cannot avoid this. The former professor of law at the University of Dar es Salaam is at least ten times more tech-savvy than me. You could mistake him for a computer scientist, because he talks about data protection and everything, and he’s the one who has brought Tanzania to this level of use of AI. It is true, on question two, that we first got assistants who are young lawyers. Every judge has one legal research assistant who knows computers and has been encouraging judges. So, you are welcome to come to Tanzania to learn. We welcome many countries; at least ten countries visit the judiciary of Tanzania in half a year. So you are welcome from Iraq to come, and we will discuss how we can transfer the knowledge gained so far.
Nazarius Kirama: Thank you, judge.
Rachel Magege: Mr. Nazarious?
Nazarius Kirama: Yes.
Rachel Magege: If I may, I had written on the chat section to add just a very quick response after the honorable judge, if that’s okay.
Nazarius Kirama: Go ahead.
Rachel Magege: Yes. Honorable judge, thank you so much, as always, for your endless wisdom. To answer this question from the gentleman who asked about political will and the acceptance of these technologies, I do want to say a little something as well, if you allow me, of course, in the context of Tanzania and different parts of Africa. I’ll be very honest: there is already a fear of technology, and I’m looking at diverse groups of people, yes? We are here sitting in this room and virtually; we are lawyers, I’m a lawyer, we have judges, we have different technology experts. But outside of us, there are so many people out there; we’re looking at that digital divide. There are people in the rural areas, there are people with different levels of education and access to education. But these are the same people who, in one way or another, may find themselves in courts for different matters and different reasons. And here you have the judiciary of Tanzania, for example, already using artificial intelligence. So how do you bridge that gap? One of the things that I think really helps is showing people the benefits and the good of AI. It has to be more of a narrative than what they are seeing online and on the internet, or hearing from their neighbors, yes? Because if you’re here telling people, or telling me, that because of artificial intelligence an honorable judge like Laltaika can now read large volumes of evidence in a shorter amount of time, and that it can help him as he writes his judgments, that is already a good thing. In Tanzania, for example, some of the big benefits of AI came in a number of projects; one of them was from the Sokoine University of Agriculture, where a lecturer has been using artificial intelligence to quickly detect diseases in cash crops, in maize and things like that.
So if there is a language to use to start communicating to lawyers, to court clerks, to judges, to many different members of society, that artificial intelligence can help you predict floods that are coming in your country, can help you do this and that, I think that might be a good avenue to use to start creating that political will, for people to see and understand that artificial intelligence can be used to make life easier, and therefore for frameworks to be established and so on. So thank you so much. That was my additional contribution.
Nazarius Kirama: Thank you, Advocate. I saw a question there and over here. So we start with him and then go to the gentleman over there.
AUDIENCE: Thank you. Thank you, moderator. Sorry to put you on the spot, but I enjoyed your response to that question. Maybe drawing from your experience in Tanzania: it is true that from when ChatGPT started to now, there has been some improvement in accuracy, like you said. But even in 2024 there are still instances, drawing from experience, where we find that AI systems hallucinate and tend to present what is not there. So my first question is, how does the judiciary in Tanzania take care of instances where facts are presented, or authorities are presented, that do not really exist? Just as a little background, AI hallucination has given rise to conversations around ensuring the human in and on the loop in the system. So what measures are there in the judiciary to ensure that materials presented by the AI systems are actually accurate? And the second question, which is going to be very short: the focus seems to be on judges, from what you just said. We also find that lawyers coming before the courts use ChatGPT for their briefs. What measures are in place in the Tanzanian case to check accuracy in the briefs filed by lawyers in the court? Thank you.
Nazarius Kirama: Thank you. I will also take the gentleman’s question, so both can be answered together.
AUDIENCE: Okay. Thank you. My name is Doron from the UNEGOV. Thank you for this very productive session and for opening the call to all these relevant questions about the judiciary. I will just quickly say, because we mentioned the three branches of government, executive, legislative and judicial: there is a common sense that the executive goes at full speed with these digitalization efforts. But among judiciaries, there are some countries that keep the pace, and there are plenty of countries where the judiciary is left behind. And this is where independence comes into play, because the judiciary is not strong enough, not financially independent enough, to take all the benefits from digitalization. I will just briefly mention an example. We were making courses for public prosecutors in one country, and at the end the comment was: okay, experts, we know this is all good, data protection works, we know the rules. But the problem is that we are ten prosecutors with only three computers. So we wait for one prosecutor to go to court for the others to work and write some of their things, which means that we can have access to his evidence and each other’s, and so much for privacy. My first question is: how can we push the executive power, the government, to understand that the modernization, the capacity building and the skills of judges must be supported by the government, mostly financially? And my second one is more provocative. How far do you think we can go with digitalization in the judiciary? I’m saying this on the service-supply side. For example, can we allow people to make their last will using an online service? Can they conclude marriages online? Where do we set the red line? Because there is a reason why you go and make your last will in front of a judge and two witnesses: to witness that it was not made under pressure.
So how far do you think we can go with this digitalization?
Nazarius Kirama: Thank you so much. If we can have the answers, then we’ll go to the last segment of our session. We only have about 15 minutes left, so let us keep our contributions and interventions short as well. Thank you so much.
Martin Koyabe: Okay. Let me try and answer part of that question; the rest I’ll leave to the judge. The fundamental issue here is the starting point for many countries. And what we are seeing from Iraq and others is the need for what we call robust digital frameworks in the country. So, for example, when you look at the case of Tanzania, when they were developing their cyber security strategy, they were very keen that they must have those fundamental tools and instruments embedded within their strategy. So for Iraq, for example, I would urge that in your strategy and framework you include specific areas: prescribing the types of cases that could arise, being proportionate in terms of the punishment that is allocated, and having those fundamentals within it. The second thing is to have what we call the human rights component in your framework: things like freedom of speech and freedom of expression, embedded within the framework and the structures of the country. And then lastly, there is the area of capacity building. And capacity building, we cannot argue, is one of the key areas that we need to really look at. So what happens is you’ve got to have what we call the soft approach. Take the judges: they don’t like going to workshops; they want to go on retreats. So take them where they want to go, so as to have an impact on the judges in a gradual manner. And take the judiciary, take the legislators, take the executive, and train them within that concept, so that each of them can make an equitable contribution towards how things are supposed to function. In terms of the budget question, which I heard here, there has to be a concerted political effort to support the judiciary in automating its systems. There are so many countries that have broken the backlog of cases; they are really efficient and bring real benefit to the citizens.
Because their cases can be heard quicker. There are also cases where platforms have moved online; we now have cases being heard online. So the idea of budgetary allocation is critical. But let me also come to the technical people who describe these things to the executives. I think we, as technical people in the room, have a duty in how we describe and explain problems. If you go to a politician, the politician doesn’t want to hear technical detail in the middle of an attack; what matters to them is winning elections and money. So there’s a way of interpreting these things to the people who make decisions, as in your case, if you’re trying to convince other people. Thank you.
Nazarius Kirama: Thank you. Judge, if you have anything to add?
Eliamani Isaya Laltaika: Thank you very much. I really do. But I will be brief. The first question is from my brother. Hallucination is a new concept in generative AI, where the system simply fails to understand what you’re asking. You ask it what the factual issues related to economic development in Tanzania are, and it gives you things from Mozambique. You have to prompt it several times for it to come back. Hallucination is here to stay. But I want to be positive: even lawyers addressing judges can experience hallucination. Counsel, did you really mean this? Anyway, on a serious note, I want to dwell on the legal point that generative AI depends on large data sets and is trained on them to become accurate. When it comes to court cases, they are massive, in the millions. In Tanzania, every single judgment must be uploaded online. If my principal judge gets a report that Judge Laltaika has decided 20 cases today, tomorrow he must see them online, with the signature and stamp of the court. So if you go to TanzLII, T-A-N-Z-L-I-I, you will see every case I have decided for the past four years. And AI is feeding on this to produce almost exactly the case authorities I need. And that is what is happening in other countries, because judgments are copyright-free. The law in Tanzania says that if you write anything as a judge, you cannot copyright it; it belongs to the public. That’s the case in Pakistan, in Kenya, everywhere. So more than in other fields, law and AI go together very well. In the next ten years, it will be very easy to just generate something and get almost what you would have written as a judge. However, on the second question, it is upon you to be sure. In Tanzania, we have 15,000 advocates. These are the guardians of the law. If they see anything unusual written by a judge, they screenshot it and it is sent all over: look at this judge, what is he writing? So we are fearful. We are very afraid.
So if I generate something from the internet, I will still read it very carefully to make sure that it goes out only after it has gone through a fact-checking process. In Tanzania, we got out of this budget problem because we started with a strategic plan. We sat down and made a five-year strategic plan. We identified what we need and handed it over to the executive. The executive said, okay, now we know your needs. In the past, they didn’t know what we needed; they would just give money this month, next month, but now they have a clear picture of what the judiciary needs in the next 10 or 15 years. Lastly, this is a very difficult question to answer, from my brother: how far should we go with digitization? If it were up to me, I would go digital for everything. For example, during COVID, many people were waiting to come and conduct their marriage before a judge, and the judges were afraid. I could just say: okay, raise your hand where you are. Say this solemnly. I declare you married. Then you go, and you will send me a signed copy with your signature. Why should I see you? There is a book by a professor of law from New Zealand, I don’t know, Australia, who is a futurist of virtual courts. He has written about how courts will be in the next 50 years. Everything will be online. You will not need to have a lawyer come before you so you can inspect them as if you were a police officer conducting an inspection. You just need the material. We will be working towards that. That’s how the world is moving. Thank you.
Nazarius Kirama: Thank you. Thank you, Judge. Now we go back to the last segment of our session. Advocate Umar, I don’t want to shortchange you in terms of questions. I know you have another question here: what challenges do judges face when handling cybercrime cases, and how can they be addressed?
Umar Khan Utmanzai: Microphone. So it’s a very important thing: rapidly growing technology. Judges also have certain issues with technology. In cases dealing with cyber security or cybercrime, judges face the complexity of the technology, because judges are not well trained in these things. The judge sitting over here is lucky to be in a setting like this; in Pakistan, a judge cannot even sit in public or meet people in public. And those who are judges in the high courts graduated before 2000, in 1998, 1999 and 2000, so they do not even know about the internet or about AI. So there is the complexity of the technology. Along with that, there are jurisdictional issues: if a person sitting in Pakistan has been harassed from somewhere outside Pakistan and the case comes before a judge, how will he try that case? That is one of the biggest issues in cyber cases. Along with that, there is a lack of precedent. We in Pakistan have the Prevention of Electronic Crimes Act, PECA, which was passed by the Parliament of Pakistan in 2016, but there are not many decided cases, so judges lack precedents. Along with that, there are evidential challenges: evidence can be altered, can be tampered with. So how will a judge identify tampered and altered evidence, the forensics? In so many countries, as the brother mentioned in Iraq, in developing states, in Global South countries, there is an issue of technology along with forensics. Then there is the speed of proceedings. In my country, in a whole district there is only one judge dealing with cybercrime cases. So if a person is involved in a case, it will take years. So it is also a very important factor to speed things up, to appoint the maximum number of judges for cybercrime cases.
Along with that, there is awareness regarding emerging threats; the law and the technology are changing. Five years ago, there was no AI of this kind; if the laws were made according to that situation, now we have AI. After five years, what will the next technology be? So the emerging threats keep changing for the judges. I believe in training for judges, a curriculum for judges on cyber issues, and, along with that, which is very important, students of law should be taught technology, should be taught cyber law. In Pakistan, I graduated from a very renowned law college, but I was never taught cyber law, either as a minor subject or as a major subject. So when I came into the field, this is what I learned by myself. So I believe that cyber law and things related to cyber, or to technology, should be included in the curriculum. That is my intervention from my side. Thank you.
Nazarius Kirama: And I think, as Dr. Martin Koyabe alluded to, we need frameworks. Is there a question from online? And then we’ll go to that lady. Panelists, please prepare your final two-minute contributions, because the session is about to wind up. Yes, there is a lady. If you can take the microphone. Take the microphone to the lady, please. Yes, go ahead.
AUDIENCE: Hello, thank you so much for all the informative presentations. My name is Hassar Taibi. I’m from the Mawadda Association for Family Stability in Riyadh. I actually have not a question but an input, if you allow it, please. Despite rapid advancement in digital laws and regulations addressing online rights, judicial systems face significant challenges in adapting to fast-evolving technologies. Gaps in current legal frameworks hinder the protection of digital freedom of expression, especially in light of cross-border risks, like data breaches and discriminatory AI applications. Marginalized groups face heightened barriers to accessing digital justice due to the lack of…
Nazarius Kirama: Is that a question?
AUDIENCE: No, no, no, this is just an input. No, this is just an input, if it’s okay.
Nazarius Kirama: Yeah, if you can keep it short because of time. Yeah, yeah.
AUDIENCE: Yes. Okay, sure. So we call to develop comprehensive legal frameworks, draft laws aligned with emerging challenges to safeguard digital rights, privacy and transparency; strengthen cross-border judicial collaboration; establish mechanisms to coordinate judicial decisions across countries for effective handling of cross-border digital issues; and engage multi-stakeholder…
Nazarius Kirama: Thank you so much. Thank you for your intervention, and I think we will share the copy of that extensively with the participants. Okay, sure. Because of time, to go back to the panel: if you can make your parting remarks a short, two-minute summary of today’s session, starting with Umar, please. Thank you.
Umar Khan Utmanzai: First, I’m just loving it that for the first time somebody in the legal fraternity has taken the responsibility to come. It’s my fourth IGF, but for the first time I’m having a session on something from the judiciary. And it should not include only the judges; it includes the lawyers, it includes the prosecutors, it includes all. So I think the beauty of this panel is that we have it from academia, we have the judges, we have the lawyers, we have civil society. So I believe that this should continue, and I’m hopeful to collaborate with the judges and you for the next IGF. We might have judges from Pakistan. The Honorable Judge has talked about the visit to Tanzania; I would love to connect him to some judges who are here in Pakistan. So I believe that this should continue and the judicial system should be empowered. Thank you.
Martin Koyabe: Thank you very much for this session. Let me just make three very important points here. The first one is that this initiative, which started in Kigali, and I’m very glad we’re here, has actually got a lot of potential to make sure that we can, as my colleague Umar said, have inclusivity from different facets, whether it’s technology, the judiciary or other areas. So that’s something we need to build on from this particular conversation going forward. The other thing is the frameworks that exist within member countries. Let’s try to embed into our frameworks some of the conventions, like the Budapest Convention and others that are very, very straightforward, the Malabo Convention for Africa and so forth, that can enhance the way we look at the judiciary and the laws. And then, technology is here to stay. Let’s remember one thing: what technology gives, it can also take away in equal measure. So as we move towards embedding technology, let us also put mechanisms in place for when things go wrong. Because there are adversaries out there; they can attack our systems, they can take over some of the decisions, and that can be very disastrous. So let’s make sure that at every point we do what we call privacy first, security first, as we implement these particular issues. Thank you.
Nazarius Kirama: Judge.
Eliamani Isaya Laltaika: Thank you very much. Three issues. First, what the lady was reading is really fundamental, and I invite you and everyone else: just after this, at 12, we are in the parliamentary track room for another session, where the IGF is now planning to have judges included. You will hear how this dream is taking shape. So you can very much still share that, and if you don’t get an opportunity, please give a copy to me; I know how to work with it. What you are saying is very important. Secondly, let us all imagine we are like a person who has bought a house and moved in. They realize that the door doesn’t work, the window is too old, the bed is too small. That is what has happened a quarter of the way into the 21st century: we have to demolish a few walls and rebuild them to accommodate the digital world. I am meeting my minister for communication from Tanzania, and we will be discussing a few things. I will ask him to sponsor a few lawyers next year, if they are not sponsored by the IGF. And, Advocate, you can be sure that next year you’ll see your fellow advocates here, because I’m meeting the minister, and the first thing I’ll tell him is: please tell the TCRA to sponsor lawyers to come here. Because there is a joke they tell, and I will finish with this. It’s a joke, don’t be offended, please. Are you allowing me to say this joke? Yes. They say: if you are doing anything serious and you are not including a lawyer, there are only two things involved. Either you are not serious, or what you are doing is not serious. Thank you.
Nazarius Kirama: Thank you so much. Thank you so much for attending our session, and be sure that we will continue to interact in the future. For those who would like to follow us on the Global Judicial Network on Internet Governance and the School on Internet Governance for Judges, I think we can share contacts later as we finish. Thank you for joining us, and thank you for your contributions. I know the time was not sufficient for everybody to contribute as much as he or she would have liked. I think we’ll take your paper and make it part of our conversation. Thank you so much, and we look forward to collaborating with you. Can we have a group photo? Yes, yes, we should have a group photo. Thank you. It’s getting more crowded. Yes, yes.
Eliamani Isaya Laltaika
Speech speed
127 words per minute
Speech length
2902 words
Speech time
1360 seconds
Judges need to understand digital issues to properly adjudicate cases
Explanation
Judge Laltaika emphasizes the importance of judges understanding digital technologies to effectively handle cases in the modern era. He argues that without this knowledge, judges cannot properly protect rights in the digital realm.
Evidence
He gives an example of a judge who cannot tell what a mouse is from another device, illustrating the need for technological literacy among judges.
Major Discussion Point
Importance of Judiciary Engagement in Internet Governance
Agreed with
Nazarius Kirama
Umar Khan Utmanzai
Agreed on
Importance of judiciary engagement in Internet Governance
Judges should embrace AI and other technologies to improve court processes
Explanation
Judge Laltaika advocates for the use of AI and other technologies in the courtroom to enhance judicial processes. He argues that these tools can help judges in various aspects of their work, from scheduling to research.
Evidence
He mentions that in Tanzania, AI is used for scheduling cases, assigning cases to judges, language translation, and legal research.
Major Discussion Point
Importance of Judiciary Engagement in Internet Governance
Agreed with
Umar Khan Utmanzai
Martin Koyabe
Agreed on
Need for updated legal frameworks and training
Differed with
Umar Khan Utmanzai
Differed on
Approach to AI adoption in judiciary
AI systems can “hallucinate” and present inaccurate information
Explanation
Judge Laltaika acknowledges the issue of AI hallucination, where AI systems can generate inaccurate or irrelevant information. He emphasizes the need for judges to verify information generated by AI systems.
Evidence
He gives an example of asking AI about economic development in Tanzania and receiving information about Mozambique instead.
Major Discussion Point
Challenges in Applying Law to Digital Spaces
Nazarius Kirama
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 second
Judiciary has been absent from Internet Governance Forum discussions
Explanation
Kirama points out that the judiciary has not been engaged in the Internet Governance Forum since its formation in 2005. He argues that this absence needs to be addressed to ensure proper engagement in debates about internet governance.
Evidence
He mentions that this session is happening for the first time during the lifetime of the Internet Governance Forum.
Major Discussion Point
Importance of Judiciary Engagement in Internet Governance
Agreed with
Eliamani Isaya Laltaika
Umar Khan Utmanzai
Agreed on
Importance of judiciary engagement in Internet Governance
Policies needed to prevent digital exclusion of marginalized groups
Explanation
Kirama emphasizes the need for inclusive digital policies that address the needs of marginalized and disabled communities. He argues that these policies are necessary to prevent digital exclusion.
Major Discussion Point
Inclusivity in Digital Rights
Umar Khan Utmanzai
Speech speed
166 words per minute
Speech length
1162 words
Speech time
419 seconds
Legal frameworks need to be updated to address digital rights
Explanation
Utmanzai argues that existing legal frameworks are inadequate to address the challenges posed by digital technologies. He emphasizes the need for laws that can effectively protect digital rights and handle cyber-related cases.
Evidence
He mentions that in Pakistan, the Prevention of Electronic Crimes Act was only passed in 2016, indicating the recent nature of digital rights legislation.
Major Discussion Point
Importance of Judiciary Engagement in Internet Governance
Agreed with
Eliamani Isaya Laltaika
Nazarius Kirama
Agreed on
Importance of judiciary engagement in Internet Governance
Cross-border nature of internet creates jurisdictional issues
Explanation
Utmanzai highlights the challenges posed by the global nature of the internet in legal proceedings. He points out that judges often struggle with determining jurisdiction in cases involving cross-border cyber activities.
Evidence
He gives an example of a person in Pakistan being harassed by someone outside the country, questioning how a judge would handle such a case.
Major Discussion Point
Challenges in Applying Law to Digital Spaces
Lack of precedent in cyber cases creates difficulties
Explanation
Utmanzai points out that the relative newness of cyber laws means there is a lack of legal precedent for judges to rely on. This absence of established case law makes it challenging for judges to make consistent rulings in cyber-related cases.
Evidence
He mentions that in Pakistan, the Prevention of Electronic Crimes Act was passed in 2016, indicating the recent nature of such laws and the consequent lack of precedents.
Major Discussion Point
Challenges in Applying Law to Digital Spaces
Increase judicial training on technology issues
Explanation
Utmanzai advocates for enhanced training for judges on technology and cyber-related issues. He argues that this is necessary to equip judges with the knowledge needed to handle digital cases effectively.
Evidence
He mentions that in Pakistan, many judges who graduated before 2000 lack knowledge about the internet and AI, highlighting the need for training.
Major Discussion Point
Strategies for Improving Digital Rights Protection
Agreed with
Eliamani Isaya Laltaika
Martin Koyabe
Agreed on
Need for updated legal frameworks and training
Martin Koyabe
Speech speed
151 words per minute
Speech length
1416 words
Speech time
560 seconds
Digital evidence is complex and requires new skills from judges
Explanation
Koyabe highlights the challenges posed by digital evidence in legal proceedings. He argues that the volatile and easily alterable nature of digital evidence requires judges to have specific skills and understanding to handle it properly.
Evidence
He gives an example of a case where critical evidence was lost because police officers unplugged a computer, causing the evidence on the screen to disappear.
Major Discussion Point
Challenges in Applying Law to Digital Spaces
Agreed with
Eliamani Isaya Laltaika
Umar Khan Utmanzai
Agreed on
Need for updated legal frameworks and training
Frameworks should embed human rights protections
Explanation
Koyabe emphasizes the importance of incorporating human rights protections into digital frameworks. He argues that elements like freedom of speech and expression should be embedded within a country’s digital strategy and structures.
Major Discussion Point
Inclusivity in Digital Rights
Rachel Magege
Speech speed
155 words per minute
Speech length
1506 words
Speech time
579 seconds
Gender-based violence can be exacerbated online
Explanation
Magege points out that digital platforms can amplify and exacerbate gender-based violence. She argues that what happens in physical spaces can be mirrored and intensified in online environments.
Major Discussion Point
Inclusivity in Digital Rights
Digital divide affects access to justice
Explanation
Magege highlights the issue of the digital divide and its impact on access to justice. She argues that disparities in technology access and literacy can create barriers to justice in an increasingly digital legal system.
Evidence
She mentions diverse groups including people in rural areas and those with different levels of education who may struggle with access to digital legal services.
Major Discussion Point
Inclusivity in Digital Rights
Show benefits of AI to increase acceptance
Explanation
Magege suggests that demonstrating the positive aspects and benefits of AI can help increase its acceptance in the legal system. She argues that this approach can help bridge the gap between technology and those who fear or resist it.
Evidence
She gives examples of AI being used to detect crop diseases and predict floods, showing its potential benefits beyond the legal system.
Major Discussion Point
Strategies for Improving Digital Rights Protection
AUDIENCE
Speech speed
136 words per minute
Speech length
1148 words
Speech time
505 seconds
Develop comprehensive legal frameworks for digital rights
Explanation
An audience member emphasizes the need for comprehensive legal frameworks to address digital rights. They argue that these frameworks should be aligned with emerging challenges to effectively safeguard digital rights, privacy, and transparency.
Major Discussion Point
Strategies for Improving Digital Rights Protection
Strengthen cross-border judicial collaboration
Explanation
The audience member advocates for enhanced collaboration between judiciaries across different countries. They argue that this is necessary for effectively handling cross-border digital issues.
Major Discussion Point
Strategies for Improving Digital Rights Protection
Agreements
Agreement Points
Importance of judiciary engagement in Internet Governance
speakers
Eliamani Isaya Laltaika
Nazarius Kirama
Umar Khan Utmanzai
arguments
Judges need to understand digital issues to properly adjudicate cases
Judiciary has been absent from Internet Governance Forum discussions
Legal frameworks need to be updated to address digital rights
summary
The speakers agree that the judiciary needs to be more involved in Internet Governance discussions and that judges require a better understanding of digital issues to effectively handle related cases.
Need for updated legal frameworks and training
speakers
Eliamani Isaya Laltaika
Umar Khan Utmanzai
Martin Koyabe
arguments
Judges should embrace AI and other technologies to improve court processes
Increase judicial training on technology issues
Digital evidence is complex and requires new skills from judges
summary
The speakers concur that legal frameworks need to be updated to address digital challenges, and that judges require specialized training to handle technology-related cases effectively.
Similar Viewpoints
Both speakers highlight the challenges posed by digital evidence and AI in legal proceedings, emphasizing the need for judges to have specific skills to handle these technologies effectively.
speakers
Eliamani Isaya Laltaika
Martin Koyabe
arguments
AI systems can “hallucinate” and present inaccurate information
Digital evidence is complex and requires new skills from judges
Both speakers emphasize the importance of addressing the digital divide and ensuring that marginalized groups have access to digital services and justice.
speakers
Nazarius Kirama
Rachel Magege
arguments
Policies needed to prevent digital exclusion of marginalized groups
Digital divide affects access to justice
Unexpected Consensus
Positive aspects of AI in the legal system
speakers
Eliamani Isaya Laltaika
Rachel Magege
arguments
Judges should embrace AI and other technologies to improve court processes
Show benefits of AI to increase acceptance
explanation
Despite concerns about AI hallucination, both speakers unexpectedly advocate for showcasing the benefits of AI in the legal system to increase its acceptance and improve processes.
Overall Assessment
Summary
The main areas of agreement include the need for judiciary engagement in Internet Governance, updating legal frameworks to address digital challenges, providing specialized training for judges, and addressing the digital divide to ensure inclusive access to justice.
Consensus level
There is a high level of consensus among the speakers on the importance of integrating the judiciary into Internet Governance discussions and the need for capacity building. This consensus implies a strong recognition of the challenges posed by digital technologies in the legal realm and a shared commitment to addressing these challenges through education, training, and policy updates.
Differences
Different Viewpoints
Approach to AI adoption in judiciary
speakers
Eliamani Isaya Laltaika
Umar Khan Utmanzai
arguments
Judges should embrace AI and other technologies to improve court processes
Judges face the complexity of technology because they are not well trained in it
summary
Judge Laltaika advocates for embracing AI in judicial processes, while Utmanzai highlights the challenges judges face due to lack of training and understanding of technology.
Unexpected Differences
Overall Assessment
summary
The main areas of disagreement revolve around the approach to integrating technology in the judiciary and the extent of training required for judges.
difference_level
The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but have slightly different perspectives on implementation. This suggests a general consensus on the importance of addressing digital rights in the judiciary, which is positive for advancing the topic.
Partial Agreements
Partial Agreements
All speakers agree on the need for judges to understand digital technologies, but they differ in their approaches. Laltaika emphasizes general understanding, Utmanzai focuses on specific training, and Koyabe highlights the need for skills in handling digital evidence.
speakers
Eliamani Isaya Laltaika
Umar Khan Utmanzai
Martin Koyabe
arguments
Judges need to understand digital issues to properly adjudicate cases
Increase judicial training on technology issues
Digital evidence is complex and requires new skills from judges
Takeaways
Key Takeaways
The judiciary needs to be more engaged in Internet Governance discussions and processes
Judges require training and tools to properly handle digital evidence and cyber cases
Legal frameworks need to be updated to address emerging digital rights issues
Cross-border collaboration is needed to address jurisdictional challenges in cyber cases
AI and other technologies can improve court processes but also present new challenges
Inclusivity and protection of marginalized groups must be considered in digital rights policies
Resolutions and Action Items
Launch of the Judiciary Global School on Internet Governance to train judges on IG issues
Plan to include more judges and legal practitioners in future IGF meetings
Tanzanian judge to request government sponsorship for lawyers to attend next IGF
Unresolved Issues
How to balance judicial independence with the need for technology adoption
Extent to which court processes should be digitized (e.g. online marriages)
How to address AI hallucination and ensure accuracy of AI-generated legal information
Funding and resource allocation for judiciary digitization in developing countries
Suggested Compromises
Gradual introduction of technology in courts, starting with scheduling and case assignment
Use of legal research assistants to help judges navigate new technologies
Framing digitization benefits in terms of efficiency and citizen service to gain political support
Thought Provoking Comments
Within the East African Community, and I’m not going to mention countries here, there is, in one country, a high court judge who was summoned by the disciplinary committee because a part of his judgment was allegedly written by ChatGPT.
speaker
Judge Eliamani Isaya Laltaika
reason
This comment highlights the real-world challenges and controversies surrounding the use of AI in judicial processes, illustrating the tension between technological advancement and traditional legal practices.
impact
It sparked a deeper discussion about the ethical implications and practical challenges of integrating AI into judicial systems, leading to considerations of guidelines and regulations for AI use in courts.
In Tanzania, we got out of this because we started with a strategic plan. We sat down and made a five-year strategic plan. We identified what we need. We handed it over to the executive. The executive said, okay, now we know your needs.
speaker
Judge Eliamani Isaya Laltaika
reason
This comment provides a concrete example of how to effectively implement technological changes in the judiciary, emphasizing the importance of strategic planning and collaboration with the executive branch.
impact
It shifted the conversation towards practical solutions and strategies for digital transformation in the judiciary, encouraging other participants to consider similar approaches in their own contexts.
So for example, when you look at the case of Tanzania, for example, when they were developing their cyber security strategy, they were very keen that they must have those fundamental tools and instruments embedded within their strategy.
speaker
Martin Koyabe
reason
This comment emphasizes the importance of integrating cybersecurity considerations into national strategies, highlighting a proactive approach to addressing digital challenges.
impact
It broadened the discussion to include the importance of comprehensive national digital strategies, encouraging participants to consider how legal frameworks, cybersecurity, and digital rights can be integrated at a policy level.
In Pakistan, a judge cannot even sit in public; he does not meet the people in public. So those who are judges in the high court are judges who graduated in or before 2000: 1998, 1999, and 2000. So they do not even know about the internet, they do not know about AI.
speaker
Umar Khan Utmanzai
reason
This comment provides a stark contrast to the more technologically advanced judicial systems discussed earlier, highlighting the significant disparities in digital literacy and access among judges in different countries.
impact
It brought attention to the global inequalities in judicial digital literacy, prompting a discussion on the need for targeted training and capacity building for judges in less technologically advanced jurisdictions.
Overall Assessment
These key comments shaped the discussion by highlighting the complex challenges of integrating technology into judicial systems across different contexts. They moved the conversation from theoretical discussions about digital rights to practical considerations of implementation, training, and policy development. The comments also underscored the global disparities in judicial digital literacy and access to technology, emphasizing the need for tailored approaches in different countries. Overall, these insights deepened the conversation, making it more nuanced and action-oriented, while also broadening its scope to consider diverse global perspectives on judicial engagement with digital technologies.
Follow-up Questions
How can frameworks be developed to guide the use of AI in courtrooms?
speaker
Eliamani Isaya Laltaika
explanation
There is a lack of guidelines or regulations for AI use in courtrooms, leading to inconsistent approaches across jurisdictions.
What measures can be implemented to check the accuracy of AI-generated content in legal briefs?
speaker
Audience member
explanation
There is concern about lawyers using AI tools like ChatGPT for their briefs without verifying the accuracy of the generated content.
How can the executive branch be encouraged to financially support the modernization and capacity building of the judiciary?
speaker
Audience member (Doron from UNEGOV)
explanation
Many judiciaries lack the financial independence to fully benefit from digitalization efforts.
What are the appropriate limits for digitalization in the judiciary, particularly for sensitive legal processes?
speaker
Audience member (Doron from UNEGOV)
explanation
There is a need to determine which judicial processes can be safely digitalized and which require in-person interactions.
How can legal education be updated to include more technology and cyber law components?
speaker
Umar Khan Utmanzai
explanation
Many law graduates lack formal education in cyber law and technology, which is increasingly important in legal practice.
What strategies can be employed to enhance cross-border judicial collaboration on digital rights issues?
speaker
Audience member (Hassar Taibi)
explanation
There is a need for better coordination of judicial decisions across countries to effectively handle cross-border digital issues.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
WS #283 Breaking the Internet Monopoly through Interoperability
Session at a Glance
Summary
This discussion focused on the monopolistic practices of big tech companies and the concept of interoperability as a potential solution. The speaker, Batool Almarzouq, began by highlighting the immense wealth and power of major tech corporations like Apple, Microsoft, and Facebook. She explained how these companies maintain their dominance through strategic acquisitions, leveraging legal power, and creating barriers to competition.
Almarzouq emphasized the importance of interoperability in fostering innovation and breaking monopolies. She provided historical examples, such as IBM’s decline in the PC market and Apple’s challenge to Microsoft’s office suite dominance, to illustrate how interoperability can disrupt monopolistic control. The speaker outlined three types of interoperability: cooperative, indifferent, and adversarial, with a focus on the challenges of implementing adversarial interoperability.
The discussion touched on the implications for the Global South, where countries face the dual burden of adopting tech-friendly laws favoring US corporations and using technology ill-suited to local conditions. Almarzouq stressed the need for legal changes to support interoperability and protect users’ rights to modify and control their technology.
In the group discussions that followed, participants explored various aspects of interoperability. One group focused on the implementation of the EU’s Digital Markets Act and its potential impact on messaging services. Another group discussed the need for regulatory approaches to address data privacy violations and the importance of transparency from big tech companies. The discussions highlighted the complex challenges in implementing interoperability and the need for localized approaches to tech regulation.
Keypoints
Major discussion points:
– The monopolistic power of big tech companies and how they maintain dominance
– The importance of interoperability in technology and how it can break monopolies
– Legal and business tactics used by big tech to prevent interoperability
– Implications for the Global South in terms of technology access and adaptation
– Potential solutions like new laws and rebuilding internet infrastructure
Overall purpose:
The goal was to examine how big tech companies have gained and maintained monopoly power, discuss the importance of interoperability, and explore potential solutions to create a more open and equitable technological landscape, especially for the Global South.
Tone:
The tone was primarily informative and analytical, with the speaker presenting research and concepts in an academic style. There was an underlying tone of concern about the current state of tech monopolies and their global impact. The tone shifted to be more collaborative and reflective during the group discussion portion at the end.
Speakers
– Batool Almarzouq: Computational biologist, open science advocate, working at the Alan Turing Institute
– Ghaidaa Alshanqiti: Online facilitator for the session, Civil Society, Western European and Others Group (WEOG)
– Ian Brown: Discussed interoperability in the European Union
– Panelist: Unnamed panelist who summarized discussion from a group including colleagues from Russia and Kenya
Additional speakers:
– Anne Steele: Mentioned as one of the organizers of the session
– Abdul Latif Talwan: Mentioned as one of the organizers of the session
– Cory Doctorow: Author of the book “The Internet Con”, referenced in the discussion
– Jason Hickel: Anthropologist quoted in the discussion
Full session report
Expanded Summary of Discussion on Tech Monopolies and Interoperability
Introduction:
This discussion focused on the monopolistic practices of big tech companies and the concept of interoperability as a potential solution. The primary speaker, Batool Almarzouq, a computational biologist and open science advocate from the Alan Turing Institute, presented research and concepts in an academic style. The event was organized by Batool Almarzouq, Anne Steele, Abdul Latif Talwan, and Ghaidaa Alshanqiti.
1. The Rise and Dominance of Big Tech Companies:
Batool Almarzouq began by highlighting the immense wealth and power of major tech corporations. She noted that “the top 1% owns 43% of all the global financial assets… while two corporations actually control 40% of the global seed market itself.” Apple’s market value, for instance, exceeds the GDP of many countries.
Almarzouq outlined several ways these companies maintain their dominance:
a) Strategic acquisitions: Rather than competing through innovation, big tech companies often buy out potential competitors.
b) Leveraging legal power: They use various legal strategies to maintain market dominance, including the DMCA Section 1201 and trademark laws.
c) Creating barriers to competition: This includes tactics like non-compete clauses.
d) Network effects and switching costs: These make it difficult for users to switch platforms.
She also pointed out that antitrust enforcement changed significantly in the 1970s and 1980s, allowing these monopolies to form and grow.
2. The Importance of Interoperability:
Almarzouq emphasised the role of interoperability in fostering innovation and breaking monopolies. She provided historical examples:
a) IBM’s decline in the PC market: This occurred when IBM PC clones entered the market, increasing interoperability.
b) Apple’s challenge to Microsoft’s office suite dominance: This was made possible through Apple’s iWork suite and interoperability.
The speaker outlined three types of interoperability:
a) Cooperative: Where companies willingly work together (e.g., email protocols).
b) Indifferent: Where one company doesn’t actively help others.
c) Adversarial: Which was the focus of the discussion.
3. Legal and Business Tactics Preventing Interoperability:
Almarzouq explained that big tech companies now use various legal and technical barriers to prevent interoperability, especially adversarial interoperability. This has made it increasingly difficult for new competitors to enter the market or for users to switch between platforms.
4. Implications for the Global South:
The discussion briefly touched on the significant challenges faced by countries in the Global South. Almarzouq highlighted a “double burden” these countries face:
a) They are forced to adopt tech-friendly laws that primarily benefit US corporations.
b) They must use technology ill-suited to local conditions, as it’s often designed for different contexts.
This situation has led to local developers creating better-adapted versions of apps, highlighting the need for technology that considers local needs and contexts.
5. Potential Solutions and Regulatory Approaches:
Several potential solutions were discussed:
a) New laws supporting interoperability: The EU’s Digital Markets Act was cited as an example, mandating interoperability for messaging apps and potentially extending to social networking services.
b) Rebuilding internet infrastructure: This was suggested as a long-term solution to create a more open internet.
c) Collective advocacy: Almarzouq emphasised the need for users to collectively push for better interoperability.
Ian Brown discussed interoperability in the European Union, noting the high resource requirements for implementing interoperability, especially for smaller organisations.
6. Group Discussions and Additional Perspectives:
Following the main presentation, participants broke into groups to explore various aspects of interoperability. They were asked to discuss challenges, opportunities, and potential solutions related to interoperability. Key points from these discussions included:
a) Implementation challenges of the EU’s Digital Markets Act, particularly for messaging services.
b) The need for regulatory approaches to address data privacy violations by big tech companies.
c) The importance of transparency from big tech companies regarding their protocols and algorithms.
d) The need for localisation of terms of service for different markets and cultures.
e) Challenges faced by organizations like ICRC in using platforms like Teams while maintaining interoperability.
A panelist summarising discussions from a group including colleagues from Russia and Kenya emphasised data privacy violations and financial statement violations by big tech companies.
Jason Hickel’s quote about the economy and relationships based on extraction, exploitation, or care was highlighted as a key insight from the discussions.
Conclusion:
The discussion highlighted the complex challenges in implementing interoperability and the need for localised approaches to tech regulation. It emphasised the importance of collective action, regulatory intervention, and innovative solutions to address the dominance of big tech companies and create a more open, equitable technological landscape, especially for the Global South.
The organizers plan to produce a report about the session and share it with participants, encouraging further exploration of the topics discussed.
Session Transcript
Batool Almarzouq: I apologize for the delay, and thank you so much, really, for joining at this very early hour, and thank you so much for coming all the way to Saudi. For people who are joining online, thank you for making the time to join. I want to acknowledge that this session is organized not just by me, you can just see me here in the room, but it’s organized by four people, so it’s organized by the wonderful Anne Steele, Abdul Latif Talwan, and also Ghida, who is actually our online facilitator. This is not going to be a panel. It’s not similar to the other sessions I’ve probably been to in the conference so far. So the plan: I’ll be speaking for 25 minutes at the very beginning to give some sort of introduction and speak more about the topic, then we’re going to move into table discussions for about ten minutes. So people who are in the room, I’m going to gather you in groups, probably, and I’ve got notebooks and pens, and for people who are online, we’re going to have a Google Doc that you’re going to use to share your reflections on some questions that I’ll be presenting. Again, this is the last day of the conference, so we want everyone to connect, to make a friend, to network with the people next to you. And the last thing, also, that I want to say: this talk is a bit heavy with legal concepts that I’m going to try to simplify. But also, I want to make sure the sound is good. So OK. Hopefully, the sound now is way better. So as I said, the talk is going to be a bit heavy with legal concepts. But I’m going to try to make it very clear. I’m not a lawyer. And none of us are lawyers. So any information or data shared during the talk is cited. The slides are also going to be available online today. So please go back to this information and all of these citations. I’m a computational biologist by training. And I’m an open science advocate. I’m Saudi, but I’m based in the UK.
And I’m currently working at the Alan Turing Institute, which is the UK National Institute for Data Science and AI. And I come to this topic from an academic point of view. So what I’ll be trying to do is encourage us to reflect on a question about how we encourage growth, innovation, and tech without harm, and what decolonization actually looks like for the Global South. And my intention, really, is not to focus on the legal concepts or technical ones, but more on the implications and what it means for us. So the first question: why bother, actually, with the internet monopoly? Why are we speaking about this in the first place? We are all aware of the disparity between the Global South and the Global North when it comes to resources, wealth, domination, extraction. To get more of a sense of what monopoly looks like, let’s start with this stat. According to Oxfam, the top 1% actually owns 43% of all the global financial assets, while two corporations actually control 40% of the global seed market itself. This is how serious it is. Looking at other stats within tech itself: again, all the figures that I’m going to be presenting about the big tech companies were calculated in 2021 and cited with the citation here, so some might have changed, but I don’t expect these changes to be huge or very far from these figures. So let’s start with Apple. Apple’s market cap actually exceeds the GDP of many countries like Italy, Brazil, Canada, Russia. In fact there are only seven countries in the world with a higher GDP than Apple’s 2.2 trillion, meaning that this tech giant is richer than a lot of the world’s nations. If Apple was actually a country, it might take the place of the eighth richest country in the world. That’s how serious it is. Let’s look at another example, Microsoft. Microsoft is not different.
It has a $1.8 trillion market cap, making it really the third largest company on our list, with the equivalent of a GDP on par with Canada, and richer than many developed economies like Australia, Spain, Indonesia. Actually, if you take Apple and Microsoft and combine them, the total is even higher than the GDP of Central Europe put together. Amazon is also, without doubt, one of the big tech giants worldwide and saw very fast growth in its sales during COVID-19; its market capitalization of $1.6 trillion makes that company richer than 90% of the countries in the globe. And you can see it also in this map. A last example, Facebook. It’s not in the top 10 by value, but it has a market cap of $763 billion, and it has become this very, very huge player on the world stage. If Facebook were a country, it would be wealthier than Switzerland, Sweden, and the UAE combined, in terms of GDP. That’s how huge it is. So, the reason why I’m showing this sort of stat: in the past, there was this dynamic and chaotic wave of startups. They rose to the peak, becoming household names within just a few years, and then they disappeared just as they reached the peak. And that was a very, very normal cycle. However, this cycle was interrupted when the new giants did not just get big, they stayed big. And then they got bigger, and bigger, and bigger. That’s the interruption that is happening, and that’s what we’re trying to understand. So probably the question is, how did these big tech companies become big, and how did they stay big without going through the same cycle? I think this is a very tricky question, and I’m sure some of you in the audience have different answers.
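To make the comparisons above concrete, here is a minimal sketch that works only with the 2021 market-cap figures quoted in the talk (in trillions of US dollars); the variable names and the script itself are illustrative, not part of the cited sources.

```python
# Market-cap figures quoted in the talk (2021, trillions of USD).
market_caps = {
    "Apple": 2.2,
    "Microsoft": 1.8,
    "Amazon": 1.6,
    "Facebook": 0.763,
}

# Apple and Microsoft combined, the figure compared to Central Europe's GDP.
combined = market_caps["Apple"] + market_caps["Microsoft"]
print(f"Apple + Microsoft combined: ${combined:.1f} trillion")

# Ranking the four companies by the quoted valuations.
ranked = sorted(market_caps, key=market_caps.get, reverse=True)
print("Largest to smallest:", ranked)
```

The point is not the arithmetic itself but the scale: each of these single private valuations sits in the range of national economies.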
There is no very clear answer, so I’m going to be mainly focusing on some analysis presented by Cory Doctorow in his latest book, The Internet Con. But again, this is something that is open for debate. I just want to highlight a few things about these big tech companies. When I speak about them, this is not to criticize anyone who is affiliated with these companies; it’s more about trying to reflect on a lot of these practices and how we can understand their implications for our society. So, one of the things big tech is doing: the big five tech companies made hundreds and hundreds of acquisitions. We don’t have an exact number; some say it’s over 600 from 2010 to 2019. These were not tiny purchases. Each was worth over $1 million, and that shows how aggressively they were buying up other companies. When they buy these companies, the founders of the startups have to sign an agreement. We call this a non-compete clause, and it prevents them from creating competing products. So effectively, they try to stop these innovators from starting a new rival company, and that kept talent and ideas locked within the big tech companies. Many of the acquired companies were actually startups, under five years old, and that suggests that big tech was buying potential competitors before they could grow. That pattern of eliminating competition early, before it becomes serious, is one of the factors that contributed to these big techs staying big. For example, we all remember Facebook’s acquisition of Instagram and WhatsApp, starting back in 2012. Instagram had become super popular, especially because they figured out how to make social media work great on phones, something that Facebook was struggling with a bit. So rather than trying to fix that problem, Facebook actually bought Instagram for $1 billion.
At that time, people thought that was crazy money for an app that had no income, but Facebook knew what they were doing. Then in 2014, they saw WhatsApp becoming huge in messaging, so they went even bigger, spending, I believe, $20 billion to buy WhatsApp. Facebook’s own internal documentation showed that they were worried about people actually leaving their platform, and they knew it was very hard for a user to switch because all of their friends, their whole network, were already on Facebook. Fast forward to today, this strategy has worked incredibly well for Facebook, now Meta. They control the social media of about 4 billion people worldwide, making Meta basically the official ruler of half of the world’s digital social life. So what I’m trying to say is, instead of winning by making better products, they win by buying up anyone who might compete with them. And because regulators, I mean antitrust enforcement, which I’m going to speak about in a bit, did not stop these purchases, Facebook has been able to keep doing this, leaving users with fewer choices and less innovation in social media. The other factor that contributed to big tech staying big, or becoming giant in the first place, is the death of antitrust enforcement, and here I’m going to focus on the American economy. Back in 1890, America created strong laws, known as the Sherman Act, to prevent any company from getting too, too powerful. And the thinking was pretty straightforward: just like you don’t want one person having too much political power, you also don’t want one company controlling too much of the market. So this is something that the American legal system thought about from the very beginning, since the 1890s. But everything changed in the 1970s, when a guy named Robert Bork came along with a new idea.
He basically said monopolies were fine as long as they did not raise prices for consumers. This was a huge change from the original law, which was meant to stop any company from getting too big. The idea caught fire at the University of Chicago, where economists created these fancy models to prove that monopolies were actually good for everyone. These models were so complicated that only their creators could understand them, which was perfect for big companies, who would hire those same economists as consultants to justify their mergers. Things really took off in the 1980s, when President Reagan and the US government basically stopped enforcing these antitrust laws. That thinking spread globally through leaders like Margaret Thatcher in the UK. And the result is what we see today: monopolies everywhere, from airlines to eyeglasses to candy companies. This is why Facebook could just buy Instagram and WhatsApp instead of competing with them. For example, it took 69 years of constant effort to break up the old telephone monopoly, AT&T, which people in the United States are familiar with. But today, similar monopolies face practically no threat of being broken up. They don’t even have to prove they’re not monopolies anymore; they just hire economists to argue why their monopoly is actually good for everyone. This change in how we handle monopolies has completely transformed the American economy. Instead of fighting big companies getting bigger, we now celebrate it, and that’s a big reason why tech companies have been able to grow so incredibly powerful. The last thing that helped big tech is how they leverage legal power. Think of big tech companies as players who know not just the tech game, but also the legal game. They have different legal protections, and with enough money for lawyers, they can patch up any weak spot by combining different laws in very clever ways.
One of their favorite laws is something called the DMCA, especially Section 1201, which is pretty intense. That section says that if you break any digital lock, even for a perfectly legal reason, you face five years in prison and a $500,000 fine. So companies can add these digital locks to their products and then go after anyone who tries to modify or work with them. They basically make it a crime to mess with their business model. Take Apple, for example. They put this tiny Apple logo on iPhone parts inside the iPhone itself, and then they claim it’s a trademark violation if you use or change some of these parts for repair. Between these two tools, they control who gets to repair their devices and how. They also use complicated contracts, those terms of service that nobody reads, plus agreements that stop employees from working for competitors or talking about company secrets. So it’s very powerful when they combine all these legal tools in a way that stops any sort of competitor. These companies are not just building teams of engineers anymore; they’re also building armies of lawyers. If anyone tries to compete with them, they don’t just need to make their product better; they also have to fight through the legal system until the other side actually gives up. And this is not just in America; through trade agreements, these rules spread and run worldwide. I’m going to speak about their implications for the global south in particular. Some might feel this is very normal, that this is how competition should be, but my takeaway is that technology has changed very dramatically, from something anyone could tinker with or improve into something controlled by giant companies. These days, having great technical skills is not enough; you also need deep pockets for legal battles if you want to innovate in tech.
So this takes me to how tech used to grow in the past: through the ability of computer systems, tech, or software to exchange and make use of information. We call that an interoperable system. As Cory Doctorow put it, the technological world that we inhabit today was profoundly shaped by the ability of newcomers to hack interoperable add-ons and features into the technology that came before them. This is how all tech used to develop and work. This is how Microsoft got big; this is how all of these big giants actually got big in the first place. So let’s take two examples of what I mean by interoperability, one around hardware, another around software. Let’s speak about breaking a hardware monopoly with IBM. People who are older than me remember that in the 1970s and 80s, IBM was the king of computing. They were so powerful that people called them Big Blue. But IBM had a problem back then: they had been fighting an antitrust case with the Department of Justice for around 12 years. During that battle, IBM had to review every memo and attend countless meetings with lawyers who were paranoid about antitrust violations. This scrutiny made IBM extremely cautious about anything that might look monopolistic. So to avoid trouble, IBM made two key decisions to avoid further antitrust scrutiny. One thing they did: they used commodity parts, common parts that any manufacturer could buy on the open market. They also decided not to make their own PC operating system; instead, they got it from Microsoft. And just to give some context, Microsoft back then was not this big tech giant; it was a really, really young company. So IBM voluntarily embraced interoperable components and an off-the-shelf operating system. And then something very interesting happened.
A small company called Phoenix Technologies figured out how to reverse engineer the one part that IBM still kept unique and manufactured themselves, the ROM chip that made the computer work. That meant other companies could now make complete IBM-compatible computers. So companies like Dell and Gateway started making these PC clones, which could run all the same programs as an IBM computer, but cost less money and had better features. Because of this interoperability, IBM’s monopoly crumbled, and they eventually got out of the PC business entirely. The interesting thing is that this decision helped break IBM’s monopoly even more effectively than the antitrust case itself, which they had been fighting for 12 years. So that’s an example of how you break a big giant with an interoperable system. Another example that many people here are probably familiar with is the Apple versus Microsoft battle over the office suite. In the early 2000s, Microsoft had become the new tech monopoly, controlling 95% of desktop computing through Windows. Actually, the first computer I had back then ran Windows and had Office, which I used for everything. Microsoft used this power to dominate office software with Microsoft Office, and that created a big problem for Apple, because Microsoft Office did not work on Macs, or worked extremely terribly, making it hard for Mac users to share files with Windows users. Let’s say I made a presentation on Windows; if I gave it to a colleague who was using a Mac back then, they could not open it or make it work. Most people used Windows back then, which meant buying a Mac was very risky: you might not be able to work with your colleagues’ files, the ones they created in Microsoft Office, like Word documents. So Apple found a very clever way around this.
Instead of begging Microsoft to make better Mac software, they worked to reverse engineer the Microsoft file formats. They created something called iWork, their own office suite, which I believe many of you are familiar with, which could read and write Microsoft Office files. Suddenly, Mac users could freely exchange files with Windows users, and even better, Windows users could switch to Mac without losing access to their old files. This helped save Apple and eventually forced Microsoft to make their file formats a public standard that anyone could use. So what I would like to highlight from these stories is how interoperability can break monopolies in two ways: letting competitors make compatible hardware, like the IBM PC clones, or compatible software, like Apple’s iWork. In both cases, the monopoly company could not maintain its control once users had a real choice that wouldn’t cut them off from their files or programs. This is why tech giants today fight so hard to prevent interoperability. They’ve learned from history that one of the biggest threats to their domination is their hardware or software becoming interoperable. So there are three distinct types of interoperability. There is cooperative interoperability, where companies willingly work together and follow agreed-on standards, like email protocols. For example, Gmail can send to Apple Mail because they share a standard based on mutual agreement and cooperation. There is indifferent interoperability, where one company doesn’t actively help or hinder others. But the one I would like to focus on is adversarial interoperability: creating compatible alternatives against the original company’s wishes. This often involves reverse engineering, which is very challenging, because the original company actively tries to prevent it. That’s the example I gave with Apple’s iWork and Microsoft Office.
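The cooperative case above, two vendors exchanging messages because they follow a shared, documented format, can be sketched in a few lines. This is a toy illustration only; the function names are hypothetical and plain JSON stands in for a real open standard like the email protocols mentioned in the talk.

```python
import json

def vendor_a_send(to, body):
    """One vendor's client serializes a message in the agreed-on open format."""
    return json.dumps({"to": to, "body": body})

def vendor_b_receive(wire_message):
    """A competing vendor's client can parse it, with no special deal needed,
    because the format is a published standard rather than a trade secret."""
    msg = json.loads(wire_message)
    return msg["body"]

wire = vendor_a_send("friend@example.com", "hello across vendors")
print(vendor_b_receive(wire))
```

Adversarial interoperability is the same picture with one difference: the format is undocumented, so the second vendor must reconstruct the serialization by reverse engineering, which is exactly what the talk says companies now try to block with legal tools rather than technical ones.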
So the question of what new challenges we face when we try to do this adversarial interoperability brings us back to what I mentioned previously: big companies use strategies to stay at the very top, legal tools and business tricks. As I mentioned, DMCA Section 1201 represents one of the most powerful tools against interoperable systems. A company can add a simple digital lock to its product and then use the DMCA to legally attack anyone who tries to modify or interoperate with it. They also use trademarks, and there are business tricks that keep users stuck within their service. The simplest approach is buying any company that might become a competitor. The clever part is how they combine all these strategies: legal, business, and technical tactics. So it’s not enough for a competitor to get past one barrier; they have to break down multiple layers of protection. Even if someone is very smart and figures out how to technically connect with a big tech service and reverse engineer everything, they face a lawsuit, and there are still other barriers to overcome. To break both types of barriers, we really need new laws that make it legal to connect with and modify existing services, and we need rules that make it easier for users to leave services without losing everything they’ve built there. Going back to the network effects we see in social media: big tech gets big through what we call the network effect. You join Facebook because that’s where everyone shares photos and updates, then your friends join because you are there, and their friends join because they are there, and the cycle continues. That is really what we call the network effect, and it is how tech companies grow massive user bases very quickly. But getting big is not enough.
They need to keep people from leaving. This is where switching costs come in. Imagine trying to leave Facebook. You would lose all of your photos, your messages, your event invitations, your business contacts, and even your connections to friends and family who only use Facebook. The cost of switching to another service is so high that people stay, even if they are not happy with the service itself, because if you move, you really lose all of your data, your photos, your friends, your network. It’s like being locked in. And here’s what gets very interesting. Technically speaking, there is no reason why you couldn’t have a different social media service that could still talk to Facebook. All computers can run any program; that’s just how computers work in practice, like how you can use Gmail to email someone using Yahoo Mail. You could theoretically have a different service that lets you keep in touch with your Facebook friends without actually using Facebook. But big companies don’t want that to happen, and since they can’t make it technically impossible, because of how computers work, they use legal barriers instead: laws, contracts, and digital locks that make it very complicated. This is why fixing the problem requires changing the law, not just building better technology. The technology to break free from big tech control already exists, but the legal barriers are what keep us locked in. Again, how do we support interoperability? Think of today’s internet like a bunch of separate castles, like Facebook, Google, Apple, et cetera, keeping everyone trapped inside. The immediate solution is about creating doors in those walls. One of the very encouraging things that has been happening is that the European Union has already passed the Digital Markets Act, which basically tells big companies: you must let other companies connect to your service.
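The network-effect and switching-cost mechanism described above can be made concrete with a back-of-the-envelope model. The Metcalfe-style formula (a network's value grows roughly with the number of possible pairwise connections) and all the user counts here are illustrative assumptions, not figures from the session.

```python
def network_value(users):
    """Metcalfe-style value: proportional to the number of possible
    pairwise links between users, users * (users - 1) / 2."""
    return users * (users - 1) / 2

incumbent_users = 1000  # assumed size of the dominant platform
rival_users = 50        # assumed size of a new competitor

# Without interoperability, a switcher trades the incumbent's whole
# network for the rival's much smaller one: this gap is the switching cost.
switching_loss = network_value(incumbent_users) - network_value(rival_users)
print(f"Value lost by switching alone: {switching_loss:,.0f} link-units")

# With mandated interoperability, rival users can still reach the
# incumbent's users, so everyone effectively shares one larger network.
interoperable_value = network_value(incumbent_users + rival_users)
print(f"Value if networks interoperate: {interoperable_value:,.0f} link-units")
```

Even in this crude model, the asymmetry is stark: a lone switcher gives up almost the entire network's value, while interoperability removes that penalty entirely, which is why the talk argues mandated interoperability shifts power toward users.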
It’s like forcing Facebook to let you talk to your Facebook friends even if you don’t have a Facebook account, or even if you use a very different social network. Another quick fix is allowing what we call competitive compatibility. That means letting engineers figure out how a big tech service works and create compatible alternatives, like how Apple reverse engineered Microsoft Office files so their iWork software could read them. Right now, this kind of reverse engineering is often illegal, and making it legal would immediately give users more choices. The longer term solution we can speak about is rebuilding the internet to be more open from the ground up. This starts with fixing our antitrust laws, which currently let very big companies buy up all their competitors, like Facebook buying Instagram and WhatsApp. We also need technical standards that make different services work together by design. And most importantly, I would say we need to protect people’s right to modify and control their own technology. Right now, if you buy an iPhone, Apple controls what apps you get to install, through the App Store. The long-term fix is giving this control back to the users. So we need both kinds of solution: quick fixes to give people more choices right now, and long-term changes to make sure that big tech cannot lock everything down again in the future. And this really matters, because if we can’t fix big tech’s control over our digital lives, we’re going to struggle to fix many other problems in our modern world. We need open, fair technology to organize, communicate, and work together on all of our challenges. I want to touch very lightly on the implications for the global south, which are extremely huge, and that’s what is actually of interest to us. The global south faces a double burden. First, it’s forced into passing tech laws friendly only to US corporations.
And second, they have to use technology designed for other countries that often does not work well in local conditions. Imagine trying to download a huge software update when your internet is very slow or expensive. These apps assume you always have power and internet, when in practice you don’t. Again, people in these countries find very clever ways around this problem. For example, in Syria during the civil war, someone created a version of WhatsApp that worked better for local needs. And in Indonesia, drivers modified ride-sharing apps to make them work better in their cities, by reverse engineering in both cases. These solutions work because local developers understand local needs better than Silicon Valley companies. So making technology more interoperable, meaning anyone can modify it to work better for their needs, would let countries in the global south adapt technology to their local conditions instead of being forced to use solutions designed in Europe or the US. And this matters, because technology today is like electricity: you need it to participate in the modern world. But right now, people in the global south are forced to use technology that is not built for them, while being legally prevented from adapting or reverse engineering that technology for their own needs. These days, I find myself reflecting a lot on many of the problems we have in our technology and our politics, which always takes me back to the economy and to growth. So, this is the last slide, and I want to leave you with this quote from Jason Hickel, who is an amazing anthropologist and has written a lot about degrowth economics, about climate change, about technology.
He said that the economy is really this material relationship with what surrounds us, and we have to decide whether we want that relationship to be based on extraction and exploitation, or on care, which is something we leave out when we think about the long-term impact of what we do. It is probably a good reminder that everything we do comes back in a circle to the same people who started it. So, I will move to questions and reflection, which we will do as a group. I’m having a problem with changing the slide. Would you be able to change it from your side? Or is that the last one that you have? Can you change it as well? Okay, thank you. So, we have, I believe, around 15 minutes for questions and reflection. I want to try to do something new: some sort of group work, because I also want to collect this feedback and reflection into some sort of report about the session itself. We’re going to do the same with the people online. So I’m going to gather people into different groups for around seven to ten minutes. Each group is going to introduce themselves and get to know each other. This is the day before the last of the conference, so make friends, get to know the people around you. Then I do have a few questions that we’re going to reflect on. I’m also not very fond of these headphones, actually. I don’t like speaking to people through headphones when they are right in front of you; it’s a different vibe for interacting and hearing people directly. So I’m going to give you some notebooks that we have in the back, and for people who are online, we have a Google Doc provided, and we’re probably going to do a breakout room to collect all of their thoughts.
And in this discussion, we’re going to report back all of these thoughts and reflections on the mic so we can have them in the transcript itself. Would you be able to change the slide once more? Okay. So that’s the last slide. The way I want to start is, again, each person introduces themselves to the group, and then after the introductions, choose one of these questions that resonates most with you or that you have thoughts around. You don’t have to answer all of them, just one. The first question: have you ever felt locked into a technology service despite wanting to leave, and what actually kept you there? The second question: if we make interoperability mandatory, how is that going to change the balance of power between users and tech companies, and how might that affect innovation in technology? The third question: what challenges do you foresee in implementing interoperability as compulsory? The last question: how could users in your country, because we’re coming from different places, collectively advocate for better interoperability? We’re going to do that in the last five minutes. Would that be OK? Let me just take this away. I’m going to check with the online people if they are OK. Yeah, right now, would you be able to open a breakout room and add people there, and then give them the Google Doc, please? I’m going to add the Google Doc again, just in case, for people who did not see it from the very beginning.
Ghaidaa Alshanqiti: Thank you, everyone. I see the breakout room and the online, they also sort of finished. So what we’re going to do now is just that we’re going to have someone to report from each group. I’m very conscious of the time. We have another session immediately after us. So what I’m going to do, I’m going to hand it to group one and then group two. And I will see if we can do the online. If we couldn’t, we’re probably going to just catch up with the Google Doc. So I’m going to send it to everyone later on.
Ian Brown: Hello everyone, I’m Ian Brown. We were talking about interoperability in the European Union. First of all, the Digital Markets Act has been in force for over a year, and therefore messaging services, particularly WhatsApp, have to enable interoperability with other messaging services. What we’ve seen so far is that it’s taking some time for other organisations to develop the technology. Multiple companies are working on it, although we don’t know who they all are yet because they haven’t said so publicly. One challenge has been that one open source organisation has been trying to build WhatsApp interoperability into their service based on Matrix, which is an open source protocol, but they found that the resource requirements are currently too much for them without a specific customer needing it and being willing to finance it. So that hasn’t happened yet. The EU is also considering extending this legal requirement to social networking services like Facebook. That was debated when the law was passed; the European Parliament wanted it to happen, but the EU governments would not allow it. But after a three-year review, that may happen next time. The other point we discussed was with our colleague from the ICRC. They are using Teams globally for meetings, with 20,000 people, you said, a lot of people, but they do talk to organisations like the UN, for example, via Zoom sometimes, the European Commission, and occasionally via WhatsApp. So interoperability could be a useful tool here to let different organisations using different tools bridge without having to install all the different software in all the different places. The one key point, of course, is that the security policy of the ICRC would have to also be met by these other organisations. Thank you.
Batool Almarzouq: Thank you so much for that. Thank you.
Panelist: So thank you very much. I’ll give an overview. Our group was three colleagues, one from Russia and two of us from Kenya, and we looked at several issues with respect to interoperability. We are seeing that big tech companies are engaged in data privacy violations and financial statement violations. So we had a discussion and were of the opinion that we need regulatory approaches to intervene in such violations. There is a need for transparency from the big tech companies in terms of explaining more about the protocols they’re using and issues of compatibility; there’s a need for transparency in how the technology is used. We discussed and agreed that much more neutrality is required to ensure that these platforms are interoperable. We’re seeing issues of big data analytics and the use of algorithms to mine data from users to try to influence our purchasing choices. There’s also a need for the big tech companies to localize their terms of use for the different markets in which they operate, because many times you find that they apply their own local laws in different jurisdictions, and that causes conflict, because different parts of the world have different cultures, and content might be offensive in one part of the world and not in another. Thank you.
Ghaidaa Alshanqiti: Thank you so much for sharing such interesting reflections and thoughts. I also wanted to share the notes from the online group, but I’m very conscious of the time. So what we’re going to do is this: anyone who has written notes, if you can give them to me, or even your contact details, we will send all of these reflections to everyone who joined the discussion, online or in person. We’re going to produce some sort of report about the session itself and share it with everyone, including everyone’s names and contacts if they are comfortable sharing them. I’m really thankful to everyone who joined, especially those online, and thank you also for facilitating the online discussion. And with that, we’re going to close the session so we allow the next session to prepare. Thank you.
Batool Almarzouq
Speech speed
148 words per minute
Speech length
5372 words
Speech time
2166 seconds
Market capitalization of big tech exceeds GDP of many countries
Explanation
Big tech companies like Apple, Microsoft, and Amazon have market capitalizations that exceed the GDP of many countries. This demonstrates the immense economic power these companies wield on a global scale.
Evidence
Apple’s market cap exceeds the GDP of countries like Italy, Brazil, Canada, and Russia. If Apple were a country, it would be the 8th richest in the world.
Major Discussion Point
The rise and dominance of big tech companies
Agreed with
Panelist
Agreed on
Big tech companies have excessive market power
Acquisitions and non-compete clauses eliminate competition
Explanation
Big tech companies aggressively acquire potential competitors and use non-compete clauses to prevent founders from creating rival products. This strategy helps them maintain market dominance by eliminating competition early.
Evidence
Facebook’s acquisition of Instagram and WhatsApp for $1 billion and $20 billion respectively.
Major Discussion Point
The rise and dominance of big tech companies
Weakening of antitrust enforcement allowed monopolies to form
Explanation
Changes in antitrust enforcement since the 1970s, particularly in the US, have allowed big tech companies to form monopolies. This shift in policy has transformed the economy and allowed tech giants to grow incredibly powerful.
Evidence
Robert Bork’s influence on antitrust thinking, Reagan administration’s reduced enforcement of antitrust laws.
Major Discussion Point
The rise and dominance of big tech companies
Agreed with
Panelist
Agreed on
Need for regulatory intervention
Legal strategies used to maintain market dominance
Explanation
Big tech companies use various legal tools and strategies to maintain their market dominance. These include digital locks protected by laws like DMCA, trademark claims, and complex contracts.
Evidence
DMCA Section 1201, Apple’s use of trademarks on internal iPhone parts to control repairs.
Major Discussion Point
The rise and dominance of big tech companies
Interoperability historically helped break up monopolies like IBM
Explanation
In the past, interoperability played a crucial role in breaking up tech monopolies. IBM’s monopoly was effectively challenged when other companies could create compatible hardware and software.
Evidence
IBM’s decision to use commodity parts and third-party operating systems, Phoenix Computers’ reverse engineering of IBM’s ROM chip.
Major Discussion Point
Interoperability and its impact on tech monopolies
Agreed with
Ian Brown
Agreed on
Importance of interoperability
Adversarial interoperability challenged Microsoft’s dominance
Explanation
Adversarial interoperability, where companies create compatible alternatives against the wishes of the original company, has been effective in challenging tech monopolies. Apple’s creation of iWork to compete with Microsoft Office is an example of this strategy.
Evidence
Apple’s development of iWork to read and write Microsoft Office files, breaking Microsoft’s monopoly on office software.
Major Discussion Point
Interoperability and its impact on tech monopolies
Legal and technical barriers now prevent interoperability
Explanation
Today, big tech companies use a combination of legal tools and technical barriers to prevent interoperability. This makes it difficult for competitors to create compatible alternatives or for users to switch services easily.
Evidence
Use of DMCA Section 1201, trademark laws, and complex contracts to prevent interoperability.
Major Discussion Point
Interoperability and its impact on tech monopolies
Global South forced to use tech not designed for local needs
Explanation
Countries in the Global South are often forced to use technology designed for different contexts, which may not work well in local conditions. This creates challenges for users in these countries and limits their ability to adapt technology to their needs.
Evidence
Difficulties in downloading large software updates with slow or expensive internet connections.
Major Discussion Point
Implications for the Global South
Local developers create better-adapted versions of apps
Explanation
In some cases, local developers in the Global South have created modified versions of popular apps to better suit local needs. This demonstrates the potential for locally-adapted technology solutions.
Evidence
Modified version of WhatsApp created in Syria during civil war, Indonesian ride-sharing app modifications.
Major Discussion Point
Implications for the Global South
Collective advocacy for better interoperability
Explanation
Users in different countries could collectively advocate for better interoperability in technology. This could help address some of the challenges faced by users, particularly in the Global South.
Major Discussion Point
Challenges and solutions for interoperability
Differed with
Ian Brown
Differed on
Approach to interoperability implementation
Ian Brown
Speech speed
117 words per minute
Speech length
291 words
Speech time
148 seconds
EU Digital Markets Act mandates interoperability for messaging apps
Explanation
The European Union’s Digital Markets Act requires messaging services like WhatsApp to enable interoperability with other messaging services. This legislation aims to increase competition and user choice in the messaging market.
Evidence
WhatsApp is required to enable interoperability with other messaging services under the Digital Markets Act.
Major Discussion Point
Interoperability and its impact on tech monopolies
Agreed with
Batool Almarzouq
Agreed on
Importance of interoperability
Resource requirements for implementing interoperability are high
Explanation
Implementing interoperability can be resource-intensive, particularly for smaller organizations or open-source projects. This can create challenges in realizing the goals of interoperability mandates.
Evidence
An open source organization found the resource requirements too high to implement WhatsApp interoperability without specific customer financing.
Major Discussion Point
Challenges and solutions for interoperability
Differed with
Batool Almarzouq
Differed on
Approach to interoperability implementation
Panelist
Speech speed
137 words per minute
Speech length
226 words
Speech time
98 seconds
Need for regulatory approaches to address privacy violations
Explanation
Big tech companies are engaged in data privacy violations and financial statement violations. There is a need for regulatory approaches to intervene and address these violations.
Major Discussion Point
Challenges and solutions for interoperability
Agreed with
Batool Almarzouq
Agreed on
Need for regulatory intervention
More transparency needed from big tech companies
Explanation
There is a need for greater transparency from big tech companies regarding their protocols, compatibility issues, and use of technology. This transparency is crucial for ensuring fair practices and interoperability.
Major Discussion Point
Challenges and solutions for interoperability
Need for localization of terms of service for different markets
Explanation
Big tech companies often apply their own local laws in different jurisdictions, causing conflicts due to cultural differences. There is a need for localization of terms of service to better suit different markets and cultures.
Evidence
Different parts of the world have different cultures, and content might be offensive in one part of the world and not in another.
Major Discussion Point
Implications for the Global South
Agreements
Agreement Points
Big tech companies have excessive market power
Batool Almarzouq
Panelist
Market capitalization of big tech exceeds GDP of many countries
Big tech companies are engaged in data privacy violations and financial statement violations
Both speakers highlight the immense economic power and influence of big tech companies, which extends beyond national boundaries and leads to various violations.
Need for regulatory intervention
Batool Almarzouq
Panelist
Weakening of antitrust enforcement allowed monopolies to form
Need for regulatory approaches to address privacy violations
The speakers agree that there is a need for stronger regulatory approaches to address the dominance of big tech companies and their violations.
Importance of interoperability
Batool Almarzouq
Ian Brown
Interoperability historically helped break up monopolies like IBM
EU Digital Markets Act mandates interoperability for messaging apps
Both speakers emphasize the importance of interoperability in challenging tech monopolies, citing historical examples and current legislation.
Similar Viewpoints
Both speakers highlight the need for increased transparency and scrutiny of big tech companies’ practices to ensure fair competition and protect user rights.
Batool Almarzouq
Panelist
Legal strategies used to maintain market dominance
Need for transparency from big tech companies
The speakers agree that technology and policies need to be adapted to local contexts, particularly in the Global South, to better serve users in different regions.
Batool Almarzouq
Panelist
Global South forced to use tech not designed for local needs
Need for localization of terms of service for different markets
Unexpected Consensus
Challenges in implementing interoperability
Batool Almarzouq
Ian Brown
Legal and technical barriers now prevent interoperability
Resource requirements for implementing interoperability are high
While both speakers advocate for interoperability, they unexpectedly agree on the significant challenges in implementing it, including legal, technical, and resource barriers.
Overall Assessment
Summary
The speakers generally agree on the excessive power of big tech companies, the need for regulatory intervention, the importance of interoperability, and the necessity of adapting technology to local contexts.
Consensus level
There is a high level of consensus among the speakers on the main issues discussed. This consensus suggests a shared understanding of the challenges posed by big tech dominance and the potential solutions, which could provide a strong foundation for developing policy recommendations and regulatory frameworks to address these issues.
Differences
Different Viewpoints
Approach to interoperability implementation
Batool Almarzouq
Ian Brown
Collective advocacy for better interoperability
Resource requirements for implementing interoperability are high
While Batool Almarzouq suggests collective advocacy for better interoperability, Ian Brown highlights the high resource requirements as a challenge for implementation, especially for smaller organizations.
Unexpected Differences
Overall Assessment
Summary
The main areas of disagreement revolve around the implementation of interoperability and the focus of regulatory approaches.
Difference level
The level of disagreement among the speakers is relatively low. The speakers generally agree on the need for interoperability and regulatory intervention, but they emphasize different aspects and challenges. This suggests a nuanced understanding of the complex issues surrounding big tech dominance and interoperability.
Partial Agreements
Both speakers agree on the need for regulatory intervention, but they focus on different aspects. Batool Almarzouq emphasizes the legal strategies used by big tech to maintain dominance, while the Panelist focuses on addressing privacy violations.
Batool Almarzouq
Panelist
Legal strategies used to maintain market dominance
Need for regulatory approaches to address privacy violations
Takeaways
Key Takeaways
Big tech companies have achieved and maintained dominance through acquisitions, legal strategies, and weakened antitrust enforcement
Interoperability historically helped break up tech monopolies, but is now prevented by legal and technical barriers
The EU Digital Markets Act mandates interoperability for messaging apps, which could change the balance of power between users and tech companies
Global South countries face challenges in using technology not designed for their local needs
There is a need for more transparency and localization from big tech companies
Resolutions and Action Items
Produce and share a report summarizing the session’s reflections and discussions with all participants
Unresolved Issues
How to effectively implement interoperability given high resource requirements
How to address data privacy violations by big tech companies
How to ensure neutrality and compatibility across different platforms
How to balance localization of tech services with global operations
Suggested Compromises
Implementing both quick fixes (like the EU’s Digital Markets Act) and long-term solutions to address tech monopolies
Allowing competitive compatibility to give users more choices while working on rebuilding a more open internet
Thought Provoking Comments
According to Oxfam, the top 1% owns 43% of all the global financial assets, while two corporations control 40% of the global seed market itself. This is how serious it is.
speaker
Batool Almarzouq
reason
This statistic provides a striking illustration of the extreme concentration of wealth and market power, setting the stage for the discussion on tech monopolies.
impact
It framed the subsequent discussion by emphasizing the scale and seriousness of monopolistic control in the global economy.
Instead of winning by making a better product, they win by buying up anyone who might compete with them.
speaker
Batool Almarzouq
reason
This succinctly captures a key strategy used by big tech companies to maintain dominance, challenging the notion that their success is purely based on innovation.
impact
It shifted the conversation to examine the business practices of tech giants, rather than just their products or services.
There are three distinct types of interoperability. As I mentioned, there is cooperative interoperability, where companies willingly work together and follow agreed-on standards, like the email protocols: for example, Gmail can send to Apple Mail because they share a standard based on mutual agreement and cooperation. There is indifferent interoperability, where one company doesn’t actively help others. But the one that I would like to focus on more is adversarial interoperability.
speaker
Batool Almarzouq
reason
This breakdown of different types of interoperability provides a nuanced framework for understanding how tech systems can interact, highlighting the complexities involved.
impact
It deepened the technical understanding of interoperability and set up the subsequent discussion on the challenges and potential solutions in this area.
The Global South faces a double burden. It is forced into passing tech laws friendly only to US corporations, and it has to use technology designed for other countries that often does not work well in local conditions.
speaker
Batool Almarzouq
reason
This comment brings attention to the often-overlooked challenges faced by developing countries in the global tech landscape, highlighting issues of technological colonialism.
impact
It broadened the scope of the discussion to include global inequities in tech development and regulation, leading to reflections on how interoperability could benefit the Global South.
The EU is also considering extending this legal requirement to social networking services like Facebook. That was debated when the law was passed. The European Parliament wanted that to happen, but the EU governments would not allow it to happen. But after a three year review, that may happen next time.
speaker
Ian Brown
reason
This comment provides insight into ongoing regulatory efforts in the EU, showing how interoperability requirements are being considered for expansion.
impact
It grounded the theoretical discussion in current policy developments, offering a concrete example of how interoperability might be mandated in practice.
Overall Assessment
These key comments shaped the discussion by providing a comprehensive overview of the issues surrounding tech monopolies and interoperability. They moved the conversation from broad economic concerns to specific tech industry practices, then to technical aspects of interoperability, global implications, and finally to current regulatory efforts. This progression allowed for a multi-faceted exploration of the topic, touching on economic, technical, social, and policy dimensions.
Follow-up Questions
How can users in different countries collectively advocate for better interoperability?
speaker
Batool Almarzouq
explanation
This was posed as a discussion question to explore country-specific strategies for promoting interoperability
What challenges exist in implementing mandatory interoperability?
speaker
Batool Almarzouq
explanation
Understanding potential obstacles is crucial for effectively implementing interoperability policies
How might mandatory interoperability change the balance of power between users and tech companies, and how might it affect innovation?
speaker
Batool Almarzouq
explanation
Exploring the potential impacts of interoperability on power dynamics and innovation in the tech industry
How can the resource requirements for implementing interoperability (e.g. for WhatsApp) be addressed for smaller organizations?
speaker
Ian Brown
explanation
This highlights a practical challenge in implementing interoperability that needs further investigation
How can security policies of different organizations be aligned to enable interoperability between communication tools?
speaker
Ian Brown
explanation
This is important for enabling secure interoperability between organizations using different communication platforms
What regulatory approaches are needed to intervene in data privacy and financial statement violations by big tech companies?
speaker
Panelist from group 2
explanation
This area requires further research to develop effective regulatory strategies
How can transparency be improved regarding the protocols and algorithms used by big tech companies?
speaker
Panelist from group 2
explanation
Greater transparency is needed to understand and address issues related to data use and user influence
How can big tech companies better localize their terms of use for different markets and cultures?
speaker
Panelist from group 2
explanation
This is important for addressing conflicts arising from applying uniform policies globally
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Main Session | Dynamic Coalitions
Session at a Glance
Summary
This discussion focused on the contributions of Dynamic Coalitions to the implementation of the Global Digital Compact’s (GDC) five key objectives. Representatives from various Dynamic Coalitions presented how their work aligns with and supports these objectives, which include bridging digital divides, expanding digital economy inclusion, fostering safe and inclusive digital spaces, advancing responsible data governance, and enhancing global AI governance.
The speakers highlighted the diverse range of issues addressed by Dynamic Coalitions, from internet rights and children’s online safety to data governance and AI ethics. They emphasized the importance of multistakeholder engagement and the need to include marginalized voices in shaping internet governance policies. Several coalitions are working on issues such as digital inclusion, accessibility for persons with disabilities, and environmental sustainability in the digital realm.
Participants discussed the potential for Dynamic Coalitions to play a significant role in the implementation of the GDC and the broader sustainable development agenda. They stressed the importance of collaboration between coalitions and the need to focus on tangible outcomes and impacts. The discussion also touched on the importance of data-driven policymaking and the need to prioritize data integrity and integration.
The session concluded with a call for Dynamic Coalitions to be proactive in setting the agenda for the next Internet Governance Forum and to continue their work in supporting the objectives of the GDC. Participants emphasized the open nature of Dynamic Coalitions and encouraged wider participation in their activities to help shape the future of internet governance.
Keypoints
Major discussion points:
– Dynamic coalitions are working to implement the Global Digital Compact (GDC) objectives and Sustainable Development Goals
– Dynamic coalitions address issues like bridging digital divides, expanding digital inclusion, fostering safe online spaces, data governance, and AI governance
– Dynamic coalitions provide a platform for multistakeholder engagement and year-round IGF activities
– There are opportunities for dynamic coalitions to collaborate and set the agenda for GDC implementation
– Dynamic coalitions can contribute concrete outputs and recommendations to inform policymaking
Overall purpose:
The purpose of this discussion was to highlight how IGF dynamic coalitions are contributing to the objectives of the Global Digital Compact and sustainable development, and to explore how they can further engage in GDC implementation.
Tone:
The tone was informative and collaborative, with speakers providing overviews of dynamic coalition work and participants offering suggestions for future engagement. There was an enthusiastic and optimistic tone about the potential for dynamic coalitions to contribute meaningfully to internet governance processes. The tone became more action-oriented towards the end, with calls to set concrete agendas and outputs for upcoming IGF meetings.
Speakers
– Jutta Croll and Irina Soeffky: Moderators of the session
– Mark Carvell: Co-moderator of the session, former UK government digital policy official, senior policy advisor with Dynamic Coalition on Internet Standards, Security, and Safety (IS3C)
– Muhammad Shabbir: Member of Internet Society’s Accessibility Special Interest Group, member of Pakistan ISOC chapter, member of Dynamic Coalition on Accessibility and Disability (DCAD)
– Olivier Crepin-Leblond: Founder and investor of WAF lifestyle app, chair of Dynamic Coalition on Core Internet Values
– Tatevik Grigoryan: Co-chair of Dynamic Coalition on Interoperability, Equitable and Interoperable Data Governance and Internet Universality Indicators
– Yao Amevi Amnessinou Sossou: Project Leader and Research Associate, the Institute of International Management at FH JOANNEUM Graz
Full session report
Dynamic Coalitions and the Global Digital Compact: Aligning Efforts for Digital Progress
This session explored how Dynamic Coalitions (DCs) contribute to the implementation of the Global Digital Compact’s (GDC) five key objectives. Representatives from various DCs presented their work in relation to these objectives, demonstrating the breadth and depth of their impact on internet governance.
Introduction and Session Purpose
The moderator, Jutta Croll, introduced the session by explaining its aim: to showcase how DCs’ work aligns with and supports the GDC objectives. She emphasized that DCs are already actively working towards these goals and invited anyone interested to join their efforts.
Objective 1: Bridging Digital Divides
The challenges faced by Small Island Developing States (SIDS) in digital development were also highlighted. Participants emphasized the need for DCs to address the increasing polarization and marginalization in internet policy and strategy, particularly for SIDS and other underrepresented regions.
Objective 2: Expanding Digital Economy Benefits
Muhammad Shabbir discussed how various DCs contribute to this objective:
– DC on Financial Inclusion: Promoting access to financial services
– DC on Open Education: Enhancing educational opportunities
– DC on Accessibility: Ensuring digital inclusivity for persons with disabilities
– DC on Environment: Addressing environmental impacts of digitalization
Objective 3: Fostering Safe and Inclusive Digital Spaces
Olivier Crepin-Leblond outlined the work of several DCs in this area:
– DC on Internet Rights and Principles: Promoting human rights online
– DC on Child Online Safety: Protecting children’s rights in the digital space
– DC on Youth: Engaging young people in internet governance
– DC on Internet Standards, Security and Safety: Enhancing online security
He emphasized that at least one-third of internet users are under 18, highlighting the importance of the UN Convention on the Rights of the Child in the digital context.
Objective 4: Advancing Responsible Data Governance
Tatevik Grigoryan discussed the contributions of DCs working on:
– Data governance frameworks
– Artificial Intelligence ethics and regulation
– Internet universality indicators
She highlighted the potential for these coalitions to contribute to data-driven policymaking and national assessments.
Objective 5: Enhancing Global AI Governance
Yao Amevi Amnessinou Sossou presented the work of DCs related to AI governance:
– DC on Gender and Internet Governance: Addressing gender biases in AI
– DC on Internet of Things: Exploring AI applications in connected devices
– DC on Blockchain: Investigating AI’s role in distributed ledger technologies
– DC on Digital Health: Examining AI’s impact on healthcare
Discussion and Audience Participation
The session included valuable contributions from audience members:
– Dr. Rajendra Pratap Gupta emphasized the need to focus on job creation and connecting the unconnected. He also noted the absence of big tech companies at this IGF.
– An audience member stressed the importance of data integrity and integration.
– Dr. Gupta highlighted the potential of gaming and gamification for social good across various sectors.
– A representative from the Creators Union of Arab announced an intellectual property verification platform.
WSIS+20 Review and Dynamic Coalitions
Mark Carvell mentioned the upcoming WSIS+20 Review, emphasizing its relevance to the work of Dynamic Coalitions and their potential contributions to this process.
How to Join Dynamic Coalitions
Jutta Croll provided information on how interested individuals can join Dynamic Coalitions, emphasizing their open nature and encouraging wider participation.
Conclusion
The session demonstrated the crucial role Dynamic Coalitions play in addressing the GDC objectives and broader internet governance issues. By fostering multistakeholder engagement and tackling a wide range of topics, DCs are well-positioned to shape the future of internet governance. The discussion highlighted the need for continued collaboration, focus on tangible outcomes, and inclusion of diverse voices in the process.
Key takeaways:
1. Dynamic Coalitions are actively contributing to all five GDC objectives.
2. There is a need for greater inclusion of underrepresented regions, particularly SIDS.
3. DCs provide a platform for year-round engagement on internet governance issues.
4. The upcoming WSIS+20 Review presents an opportunity for DCs to contribute their expertise.
5. Wider participation in DCs is encouraged to enhance their impact and representation.
Session Transcript
Irina Soeffky: This session builds really well on last year’s Dynamic Coalition main session. It was called The Internet We Want: Human Rights in the Digital Space to Accelerate the SDGs, and it was really an ideal platform for sustainable digital development discussions. It also fits the topic of this IGF very well, which has one sub-theme that is particularly fitting for what we intend to do today: improving digital governance for the Internet we want. And with that broad picture, I hand it over to my colleague Mark, who will tell you a bit more about what is about to happen here on stage.
Mark Carvell: Okay. Thank you very much, Irina. And good morning, everybody. Thank you very much for coming at this early hour during a busy week here in Riyadh at the IGF. It’s much appreciated. And welcome to everybody who’s following us online today; it’s very good to have you with us. As Irina said, I’m sharing the moderation with her for this session. My background is in the UK government on digital policy, going all the way back to 2005 and the Tunis summit of the World Summit on the Information Society, so I’ve been to most IGFs. Since leaving the UK government, I’ve continued to be engaged with the IGF community as a member of a dynamic coalition, IS3C, which is the Dynamic Coalition on Internet Standards, Security, and Safety; I’m a senior policy advisor with that dynamic coalition, and we’re one of the coalitions that we’ll cover this morning. I’m also working with EURODIG, the European regional IGF, on the Global Digital Compact: I’ve been chairing the consultations undertaken by EURODIG on that and inputting into the co-facilitators’ consultations with stakeholders on behalf of EURODIG. So it’s great to co-moderate this session, which I think will highlight, in many ways, how dynamic coalitions have a major potential role in the Global Digital Compact and its follow-up, following its signature in New York and now entering the implementation phase with an endorsement process and so on. The dynamic coalitions are standing ready, really, to assist with all that. What we’re doing here today is providing them with the opportunity to explain how they connect with the Global Digital Compact process and the sustainable development agenda as well. There are currently 31 dynamic coalitions, covering a diverse range of technology, governance, sectoral and public policy issues, opportunities and challenges. They are very focused, year-round IGF activities, staffed by volunteers.
And we would encourage people who are interested to follow up, potentially as a member of a coalition. It’s very easy to do: you just sign up, basically. It will be a great opportunity for stakeholders who want to get involved through the dynamic coalitions; we’ll talk about that right at the end. But as I say, the coalitions cover a wide range of issues, and 21 of the 31 coalitions immediately stepped forward when the coordination group of the dynamic coalitions said we were going to have a main session focused on the Global Digital Compact and sustainable development. They stepped forward and said, look, we are doing work which is highly relevant to the scope of the compact. So what we’re doing here today is going through the objectives, which Irina recounted for us at the beginning, one by one, with representatives of the clusters of dynamic coalitions that have said a particular objective is the one they are most engaged in, giving you a quick explanation of what those coalitions are for each objective. Then we’ll have a little bit of discussion in the panel for each objective, and after that we will really open it out to everybody who’s taking part in an interactive discussion. So save your questions, comments, reactions and anything else you want to raise with any of the speakers or any of the representatives of the dynamic coalitions until we reach that part of today’s session. We want to hear from you; we want to know what you think the coalitions can do, perhaps more, and areas where they can collaborate amongst themselves. That’s very important to bear in mind: there are coalitions that work in similar sectoral areas, and they can usefully collaborate in this whole environment of the Global Digital Compact. So that’s what we’ll do. I think I’ve probably said enough.
We will want to try and define some messages and potential outcomes from this session and Jutta Kroll, who helped to organise this session, is ready with us to come and support Irina and me in wrapping up with some potential ways forward, recommendations, what we might do in terms of engaging as a coalition community with the whole GDC process. We’ll do that at the end, in about 10 minutes. So the emphasis is on participation, hearing from you. We have a bit of a download of information, we’ll go through that as succinctly as possible, but that’s a preface really for hearing from you. I think I’ll stop there with the explanation. I hope that’s all clear. Back to you, Irina, for kicking off with our objectives.
Irina Soeffky: It's a pleasure to first introduce June Paris, who unfortunately cannot be here today, but luckily she's with us online. And it's really hard to introduce a person like June in just 30 seconds; it's almost impossible. She's not only an experienced nurse, but she's also engaged in groundbreaking research. Lately she has worked on nutrition in pregnant women. She has extensive experience in business development, research, and startup involvement, as well as business support and volunteer work. I could go on forever now, but I'm glad to hand it over to her, and she will talk about the possible contributions of dynamic coalitions to the Global Digital Compact's objective of bridging digital divides. Over to you, June. Can you hear us? We can't hear you yet. We can't hear you yet, so maybe there is some technical person who can help us to fix the Internet and make it open, accessible, and everything else. If not, we could also wait while you try to fix the technical problems, and we start with our second speaker. Perfect. Then I hand it over to you, Mark.
Mark Carvell: Okay, back to me then, Irina. Thank you, yes. Okay, let's hopefully come back to June and Objective 1 a little later, when the technical issue is resolved. So let's go to Objective 2, expanding the digital economy, inclusion, and benefits for all. Our speaker to describe the coalitions in this cluster is Dr. Muhammad Shabbir, who is a member of the Internet Society's Accessibility Special Interest Group. He is also a member of the Pakistan ISOC chapter, and a member of the Dynamic Coalition on Accessibility and Disability, DCAD. So Dr. Shabbir, can I hand over to you to describe the cluster of dynamic coalitions who have signaled that they are most interested in Objective 2 and in contributing to that objective.
Muhammad Shabbir: Thank you. Yes, thank you very much and good morning to everyone. It is a privilege to address this session, where we gather as representatives of the IGF's diverse and dynamic community to align our collective contributions with the Global Digital Compact. As part of this significant milestone in global digital governance, our work resonates and aligns with the Global Digital Compact and advances inclusion, accessibility, and sustainability in the digital economy. The specific objective I am addressing today, expanding digital economy inclusion and benefits for all, is foundational to achieving this vision. The GDC demands that all stakeholders work collaboratively to dismantle barriers, promote digital equity, and harness technology to empower individuals and communities. I am honored to represent four of the dynamic coalitions in our community, as displayed, and these dynamic coalitions collectively show that if we want, we can achieve anything in the digital arena. Together these coalitions exemplify how collaboration and shared experience strengthen the IGF ecosystem and drive the implementation of the GDC. Each coalition operates with a shared ethos of inclusivity, equity and sustainability, building bridges between stakeholders and addressing the critical challenges in the way of digital development. As we all know, access to digital financial services is a cornerstone of economic inclusivity. We have identified key barriers, such as affordability issues, restrictive policies and gaps in digital and financial literacy, that hinder access to equitable, affordable and effective digital financial services. Knowledge is a public good, and the coalition working in this area is committed to making it universal and accessible. By promoting openly licensed content as a foundational cornerstone of digital inclusion, it contributes to advancing the goals of the GDC and its principles of inclusivity, accessibility, and innovation. 
Open licenses, in the educational context, give people of different linguistic and other cultural backgrounds access to information. And by equipping individuals with the skills to thrive in the digital economy, the DC is transforming education into a catalyst for sustainable development. For any digital inclusion service, it is required that persons with disabilities can also fully participate. The coalition has been at the forefront of advocating for accessibility in digital platforms and policies. The vision of the coalition is clear: each and every person, regardless of ability, has access to information and digital content. This year the coalition is revising its accessibility guidelines for IGF meetings, which can also be used by other organizations. We also have four fellows, persons with lived experience of disability, in these corridors participating inclusively. On environmental sustainability: sustainability is not just an environmental imperative, it is an economic opportunity, and the coalition advocates for the integration of green policies into digital governance. The work of these coalitions may seem as if they are operating in silos, but when their work is seen collectively as a comprehensive, cohesive unit, it shows that the economy is driven by education, includes persons with disabilities, and requires green policies. And collectively, collaboratively, we can achieve the goals of the Global Digital Compact and the Vision 2030. Thank you very much.
Mark Carvell: Thank you very much. Yes, a round of applause for Dr. Shabbir. Thank you very much. All four coalitions are doing incredibly important work, really addressing some of the crucial challenges as well as opportunities for communities which may be marginalized or disadvantaged. And I wonder, just one question: no doubt those four coalitions are identifying common barriers that prevent marginalized communities from participating in the digital economy. Do you think there is scope, particularly perhaps through collaboration amongst those coalitions, to address and resolve those barriers? What do you think? Thank you.
Muhammad Shabbir: Yes, thank you very much. And that's a really crucial question: where the work of one DC enters the domain of another DC, how do they collaborate with one another? To give just one example of how these four coalitions have been collaborating: the DC on Accessibility and Disability, which works for the rights of persons with disabilities, has conducted webinars and sessions with the DC on Financial Inclusion to make banking systems and financial systems accessible for people with disabilities. Similarly, today after this session, two coalitions are joining together in a workshop where we will be discussing how digital accessibility could be ensured for everyone in education, and how this affects persons with and without disabilities. Similarly, with the coalition on environment: we cannot forget the environment, it is something we all work towards and live in, and we will be impacted if we don't take care of it. So while we try to work in our individual domains, we try to incorporate steps so that policies and achievements are sustainable and green.
Mark Carvell: Thank you, Dr. Shabbir. That's a great example of working together on common objectives that are directly relevant to the compact. So I'm sure this may well come up again in our discussion with people taking part today. Okay, with that I'll hand back to you, Irina, for Objective 1, maybe, fingers crossed.
Irina Soeffky: Yeah, I think so too, and I’m checking with June Paris whether she’s hearing us.
June Paris: Can you hear me? Yes. Okay. Please, go ahead, we're looking forward to hearing you talking about bridging digital divides. Yeah, I am June Paris, I am in Barbados at the moment, and it's 2.26 in the morning. So anyway, I will speak quickly and I will read the introduction for SIDS. I will start straight away. While many in the global internet community, especially those interested in issues surrounding internet governance, are fully engaged with and attuned to the developments surrounding WCIT, WTSA, and ICANN, and the challenges and opportunities brought about by emerging issues such as cloud computing, social media, and mobile technology, it is becoming increasingly apparent that a greater degree of polarization and marginalization in the area of internet policy and strategy has been slowly occurring. SIDS are found in the Caribbean, Pacific, and AIMS regions, AIMS being Africa, Indian Ocean, Mediterranean, and the South China Sea. Small island developing states, SIDS, which number about 52 at the last count, and which comprise approximately 60 million people, are seeking a greater voice, with a higher level of volume, in the international discourse, especially that relating to information and communication technology and critical resource management. According to various reports and documents published by the United Nations and other international organizations, the SIDS share several common sustainable development challenges: small populations, as low as 2,000 in one state; limited resources; remoteness; susceptibility to natural disasters. They're vulnerable to external economic shocks, with excessive dependence on international trade and extractive industries. Indeed, the internal economies of many states are characterized by state monopolies, effective monopolies by MNCs, and oligopolies, which often lead to price distortions for key goods and services. 
In the ICT sector, especially the telecommunications sector, voice and data operators are most likely to be monopolies or oligopolies, sorry, I'm having trouble with that word, with attendant issues relating to non-competitive pricing, low levels of customer service, aging infrastructure, and a lack of universal accessibility, with digital inclusion and digital divide scenarios often playing out to the disadvantage of one or more sectors of the population. I have experience of that, so I can say, yes, this is what's really happening. Further, faced on a daily basis with severe environmental, energy, and natural resource management challenges, the states are hard-pressed to take full advantage of the potential in-territory benefits and opportunities made available through emerging technology, such as cloud computing and on-demand type ICT services, given the tremendous amount of energy, capital, and natural resources that on-demand facilities of this nature consume. In this regard, and with a view to ensuring that these issues are properly ventilated amidst the debates among OECD, G20, and BRICS countries that relate to internet and ICT policy and strategy, telecommunication standards and tariffs, universal access, and sustainable development funding approaches, it is obvious that the number and volume of SIDS voices must be elevated in the design, planning, and participation of collaborative activities with their larger colleagues, in order to better align and contextualize policies, positions, and strategies. If I've got time, I will go on a bit. Although there are shared experiences and multiple synergies among the SIDS, it is not by any means an easy task to simply organize and facilitate this intention to raise the volume. 
Logistically, it is near impossible to deal with the needs of 52 countries and 60 million voices, spanning thousands of miles of oceans across the globe, through a single or even a series of position papers or a solitary conference session. The needs and requirements of the SIDS deserve more: a forum through which the international community can hear their concerns and challenges; a forum through which SIDS can sit together and collaborate, to themselves define and offer their own possible solutions to their own problems; a forum in which exchanges of opinions, views, and possible solutions can be achieved on a level and equitable playing field. It is, therefore, incumbent upon the Internet Governance Forum and, indeed, the wider WSIS process to provide a dedicated forum space for the SIDS to dialogue, firstly amongst themselves, and then with the wider global community, on a broad range of issues relating to and affecting Internet policy: modernization of critical Internet infrastructure resources, the economics of telecommunications service provision, telecommunications service pricing and its relationship to sustainable development and development funding, quality of service, and quality of customer service practices, all of which take full consideration of the unique vulnerabilities and environmental sensitivities of these small island nations. A dedicated and ongoing Internet governance space for SIDS cuts across all the world's major geographical regions and will provide a useful example of not only multistakeholderism, but also South-South multilateralization and, indeed, cooperation. 
As small island developing states face the greatest risks and challenges due to the global economic downturn, double-dip recession, and the Eurozone crisis, there's no better time than now to forge and harden this relationship, this partnership, and for the United Nations, the Internet Society, the International Telecommunication Union, and the other organizations to recognize and support this quantum leap forward. So I will end there for now. If there are any questions, I will go on to those questions when the other speakers have spoken.
Irina Soeffky: Thank you so much, June, for this broad and very rich presentation. As we are already running a little bit late, I will keep my questions for the exchange session later, and then we can get into discussions. I'm sure there will be contributions on that, because it's such a fundamental topic that you've been talking about. But to really get us moving on, I'll go to my next presenter, which is Olivier Crepin-Leblond. It's a pleasure to have you here. You are a co-founder of and investor in the WAF lifestyle app. You describe yourself as a connector, bringing people together to achieve great things. And I find it particularly impressive that you have been dealing with all things internet since 1988. So you've been around forever, so to speak, and experienced it all. Therefore, we are very much looking forward to you talking today about how dynamic coalitions can contribute to the GDC objective of fostering a safe, secure, and inclusive digital space that upholds human rights. Olivier, over to you.
Olivier Crepin-Leblond: Thank you very much, Irina. And you know, it feels like yesterday, 1988, the start of this whole craziness of the internet and so many people coming online and so on. But anyway, I have about four minutes to talk to you about five dynamic coalitions. So I hope that you're holding onto your seats, because this is going to be rather quick. The first one is the Internet Rights and Principles Coalition. That's been around for quite some time, actually, and it works to uphold human rights in the online environment and to root internet governance processes and systems in human rights standards. It does work like raising awareness of fundamental human rights on the internet; establishing global public policy principles for an open internet with stakeholder involvement; encouraging stakeholders to address human and civil rights in policymaking; applying human rights to the internet and ICTs; assessing existing structures and guidelines; and protecting and enforcing human rights online, which is quite a task by itself. Also promoting a people-centric, that's important, people-centric public interest in internet governance, and defining the duties and responsibilities of internet users and stakeholders to preserve the public interest online. It produced a Charter of Human Rights and Principles for the Internet back in 2011, with 21 articles and ten principles, and it's really recommended reading. You can find it online. Very, very good work. And its contributions to the SDGs have really all come from that charter. The next one is the Dynamic Coalition on Children's Rights in the Digital Environment. At least one-third of Internet users are under 18. I wasn't aware of that. It's quite amazing. 
And their rights must be protected as outlined in the UN Convention on the Rights of the Child. With the advent of AI and virtual reality, policies must prioritize children's involvement in shaping these issues. The Global Digital Compact calls for national child safety priorities by 2030, and child rights impact assessments should guide legislative policy and make sure that children are protected from the threat of cyberattacks. The next one, the Youth Coalition on Internet Governance, has really worked on that. They're bringing young people, young professionals in internet governance, together, particularly at the annual IGF, and you will have seen quite a number of people from this coalition at this IGF. It serves as a natural space for youth engagement on the issues that we've already spoken about. Their work focuses on amplifying youth voices in digital governance, which, as we've heard also with the previous DC, is probably not strong enough; empowering youth through digital literacy; closing the digital divide, because it's not only geographical, there's also a digital divide as far as age is concerned; prioritizing youth safety online, a huge issue as we are all aware; leveraging technologies for social good; and holding platforms accountable, which I think is quite a task for them. All of these efforts aim to foster a more inclusive and equitable digital landscape for young people. Moving on, on safety, there is the Internet Standards, Security and Safety Coalition, the IS3C, very active as well. Their mission, aligning with Objective 3 of the Global Digital Compact, is to enhance online security through effective deployment of security standards and best practices. Good practice, I think some would call it. 
After extensive research and analysis, the coalition has developed policy recommendations, guidelines, and toolkits for global dissemination, and identified best practices for capacity building in three key areas. The first is the evolution of secure-by-design technologies. The second is addressing cybersecurity skills gaps in educational curricula, a huge topic as well. And the third is strengthening public sector procurement practices as a driver for security standards implementation. Security is a huge topic, as you know, on the Internet. The IS3C is considering a new work stream on consumer protection and digital trust for 2025, and to this end, consumer organizations are being consulted on a project proposal to assess whether the tech industry effectively addresses the security concerns of personal and business Internet users in digital product and service design. The proposal was actually introduced at this IGF, so if you've missed it, you probably have to watch the recording. I would recommend you watch the recording. And I'll finally finish with the last one, which is the DC that I chair, the Dynamic Coalition on Core Internet Values. It promotes Internet governance principles, engaging with diverse stakeholders, and tries to ensure the Internet remains a global public resource through policy recommendations, collaborations and fora, all advocating for a free, secure, and resilient Internet for all. Now, you might wonder, what are those core Internet values? I'll just list them, because we could probably talk about those for another hour or so. Global, the Internet is global, obviously; interoperable; open; decentralized; end-to-end, people from one end to the other; user-centric, I think we've mentioned it with another DC 
earlier on, it's really important; robust and reliable, yeah, the Internet is pretty darn robust; and finally, secure, which is one that we had to add, that originally wasn't really thought of, because back then everyone on the Internet knew each other, but these days you have to make sure that it is secure. So together, the work of all of these dynamic coalitions fulfills Objective 3 of the GDC to the letter. And do you remember the objective? Okay, I'll remind you: fostering a safe, secure, and inclusive space that upholds human rights. And that's what they all work on.
Irina Soeffky: Yes, thank you for your applause. And thank you, Olivier, very impressive work there as well. As we are still running a little late, I will not use my prerogative here to ask questions first, but save my questions for later on, when we will get into a discussion. And with that, hand it over to Marc for our next guest.
Mark Carvell: Thank you, Irina. Yes, let's go straight on to Objective 4, advancing responsible, equitable and interoperable data governance. We are pleased to be joined by Tatevik Grigoryan, who will present the cluster of coalitions relating to this objective, including the Dynamic Coalition on Internet Universality Indicators, of which she is a member. So we have, I think, on the screen, yes, there are three coalitions in this cluster. So over to you, Tatevik, please.
Tatevik Grigoryan: I would like to start by saying that we have a number of dynamic coalitions in this cluster, each of which contributes to this objective through a different type of work and focus. Objective 4 advocates for robust data governance frameworks that are transparent, accountable, and inclusive, involving multiple stakeholders such as governments, private sector entities, and civil society. The first dynamic coalition focuses on the economic dimension of Objective 4, highlighting how equitable access to data and digital tools can reduce inequalities and enable sustainable growth. For example, through initiatives like the project CREATE, the coalition illustrates how equitable data governance can address the challenges of digitalization and advance inclusion and economic empowerment. The next dynamic coalition contributing to this objective is the DC on Data and Artificial Intelligence Governance, which focuses on fostering critical discussions on data and AI governance, and on ensuring that the voices of underrepresented populations, particularly from the Global South, are integrated into governance frameworks. The coalition promotes collective studies and multistakeholder engagement to evaluate evidence and propose policy updates that reflect the realities and aspirations of stakeholders globally. 
The next dynamic coalition is the DC on Internet Universality Indicators, which is a coalition of international partners. It contributes not only to Objective 4, but also to Objective 1 and to other aspects of the GDC, through the Internet Universality Indicators, a comprehensive tool for addressing emerging digital issues. This tool advocates for an Internet that is universal, based on principles of human rights, openness and accessibility, and governed by multistakeholder participation, and it covers human security online, the environmental aspects of the Internet, and AI, among others. So this is a framework which, as Objective 4 underscores data-driven policymaking, provides a very important opportunity to ensure and support diverse stakeholders, from government to civil society and the private sector, in contributing to policymaking that is based on evidence. I'll perhaps stop here.
Mark Carvell: Thank you very much. And I think it’s a very good overview of the data governance agenda. And congratulations on the updating and publication of the toolkit, the indicators toolkit, and the cross-cutting issues and, as you say, how it relates specifically to responsible data governance. It’s very impressive. Maybe there will be questions, comments about that from our participants here today.
Irina Soeffky: Thank you very much for coming. Our next speaker is a research fellow for innovation and entrepreneurship. He's a real computer scientist, which I always find very impressive; I'm a lawyer myself, so I know nothing about all that, in theory or in practice. He also has a master's in management and interaction design, something that I find particularly interesting, but he will not talk about that, unfortunately, today. He'll concentrate on the contributions of dynamic coalitions to a topic that really everybody seems to be talking about at the moment, and this is enhancing global AI governance for humanity's benefit.
Yao Amevi Amnessinou Sossou: Thank you, Irina, for the introduction. Yes, I will try to be as short as possible, because other colleagues from the DCs are also in the room, and they will contribute and expand more on what I will be explaining about these four DCs. So, like the previous speakers, I am in charge of expanding on the work of four dynamic coalitions, each of which has been contributing its unique perspective and approach to addressing key issues in the Internet governance space. First, regarding the Dynamic Coalition on Gender and Internet Governance: we know that there has been progress on the voices of women in this space, but the voices of women, gender minorities and social minorities, especially in the Global South, are still underrepresented. The work of the DC on Gender is to change this, by bringing feminist and intersectional perspectives into discussions on privacy, AI, access, and freedom of expression for these underrepresented groups. Their contribution highlights the importance of bringing women into the discussion and meaningfully impacting it. The next coalition, on the Internet of Things, works to nurture the responsible development of IoT devices, prioritizing safety and security from the design stage and throughout the life cycle of these devices. 
And their work in general highlights the importance of responsible governance in shaping the future and the benefits of those devices that we are using, so that we all benefit more efficiently from these devices. They are doing very incredible work in this space as well. Moving to the work of the DSTC, I will say that they work on the intersection of AI and health, addressing an industry like the hospital and health industry. There is an emphasis on the need for robust regulations ensuring that AI in healthcare is ethical, transparent and impactful. And they are building capacity, as I mentioned, to help stakeholders in this space navigate the complexity of digital health innovations, making sure that these innovations are responsibly managed as well. Last, but not least, is the DC that I'm part of, the Data-Driven Health Technology DC, where we're bringing a bottom-up approach to the discussions shaping the IGF. Our focus, in this bottom-up approach, is to bring the perspective of individuals into the discussion. We believe that by giving the public space to voice their concerns, we are shaping and improving contributions in the space of health and technology. By bringing this up front, the DC on data-driven health technology is actually contributing to the SDGs in healthcare and wellness. Not spending more time on detail, because of the time we have left, I will just summarize: you will have noticed that these different DCs focus on different key areas in the global internet governance space, but their work, taken together, actually contributes across the whole spectrum to tackling the issues of AI and governance in general, if I can say so. So I will give the floor back to you, Irina.
Irina Soeffky: Thank you very much. Perfect. Thank you so much, and indeed I couldn't agree more. Thank you for your applause. Wonderful. With that, I think we're at the end of, as you named it, the download session, which I think was incredibly rich and incredibly interesting, and I'm really so impressed by the diverse work that is ongoing. But with that we are at the core of this session, which is interaction and discussion, so get ready, everybody here in the room and online. We hope for your questions, comments, thoughts, ideas, whatever; it's open for everything. But while you get ready to prepare your contributions, I'll turn to Mark: do you have anything to add at the moment, or should we hand it over to our participants here and online?
Mark Carvell: I think we should go straight to a discussion involving everybody here today and online. But just a word to say thanks very much to our colleagues for summarizing a vast amount of work in such an effective and comprehensive way. Many thanks indeed. Okay, so there are also representatives of dynamic coalitions in the room. So if anybody has a really specific question relating to a dynamic coalition that's not represented by any of our colleague speakers here, there will be somebody in the room, possibly, who can pick it up, or at least we can take it away. And if you want to put a question now in the room, it's a matter of going up to the podium over there with the microphone, I understand is the arrangement. There is a core majority of dynamic coalitions ready to engage with the GDC process, and also potentially with the WSIS plus 20 review; we haven't touched on that, as our focus really is on the GDC and sustainable development. So what do you think? Are you assured that dynamic coalitions do have an important role to play within the IGF ecosystem, contributing not only to advancing the IGF's role in the follow-up to the compact, but also to the wider diversity of specific issues covered by all these five objectives? So who wants to take the first question? I see somebody coming up to the podium. Please introduce yourself briefly and then put your question or comment, suggestion or whatever it is. Okay, thank you.
Audience: Good morning, this is Maarten Botterman. I'm also representing the Dynamic Coalition on the Internet of Things. What strikes me is that from the outset, this dynamic coalition approach has been one of pulling stakeholders together to discuss issues throughout the year and coming together, particularly at the IGF, to demonstrate what they think should be the way forward, taking into account what is happening today and what is needed in the future. So from that perspective, I think as dynamic coalitions we could contribute best by formulating our view, from each coalition's focus perspective, on what global good practice looks like, taking into account where we are, what our values are and how we go forward. So I look forward to hearing how others feel about this concept, and maybe even to promoting this as one of the default products, the outcomes, of dynamic coalitions each year.
Mark Carvell: Thank you, Maarten. Who wants to react or comment or add to that very valuable intervention? Thank you very much, Maarten. I see, I think it's Wout de Natris walking up to the mic. And then I'll check with Jutta if there's somebody online. Okay, after Wout, and then to you, Jutta. Thank you.
Audience: My name is Wout de Natris. In the past days, I was present at two sessions around the GDC, with people from New York working on the GDC. We have a gigantic opportunity as dynamic coalitions to be proactive. We can set the agenda, perhaps a little bit, and if we wait, someone else will set the agenda for us. So I would suggest that we set the agenda: decide what we think the implications are and what is to be delivered in Oslo in June 2025. If we have that, we can communicate it to the people in New York working on the Global Digital Compact, saying this is what we are going to contribute. And if we do that, we set the agenda and we set the topics, at least in part, and from there we do our work and make sure that we have output in June. I promise you, as IS3C, we will have a report on quantum cryptography, for example: what's the state in the world at this moment. We're going to research that in the coming six months and report on it.
Mark Carvell: Thank you for that. I think it's an opportunity for all of us, and that's a very practical proposal: to look ahead to the next IGF in Lillestrøm, in Norway, and work towards that, taking into account all the relevant work we've been describing here that's undertaken by the Dynamic Coalitions. So shall I turn to Xiao now to relay any comments online in relation to our discussion? Xiao, over to you. Thank you.
Online moderator: Yeah, thank you. I hope we can hear you well, and greetings from Portugal. And we already have a couple of questions here in the chat, but I’ll start with one that’s from Siva Subramanian. I hope I pronounced it correctly. And it’s directed to Tatevik, but I guess that all speakers can intervene on this question, which was a specific question on the data-driven governance model that was mentioned specifically in the context of AI. I believe the audience wanted to hear a bit more about it and about the project that was presented by Tatevik. So if you can provide more insights, I guess that the audience would be happy to hear more about this contribution to the GDC from this particular Dynamic Coalition.
Mark Carvell: Tatevik, did you want to pick that up, or did you want to go back to the question? There was a bit of an echo around here, so it’s a bit hard to… Ah, you couldn’t quite hear.
Online moderator: Xiao, could you just repeat it? Sure. So the question is quite broad, so I’m sure that others can also pick it up, but I’ll read it as it is. So what is the data-driven governance that the speaker talked about, specifically in the context of AI?
Mark Carvell: Okay. Tatevik, can you pick that up? The relation to AI, I think, is the key element, yeah? If anybody from the other Dynamic Coalitions would like to intervene.
Tatevik Grigoryan: So, I would like to talk a little bit about how the framework I mentioned supports data-driven policy-making in the context of AI. I work on the Internet Universality Indicators, and the IUI ROAM-X framework does currently include indicators on AI, as well as on related issues such as the right to privacy, and data governance is one of its themes. The approach of the ROAM-X indicators is to collect data in the national context, help governments formulate policy recommendations, and inform policies through the national data that is collected and processed. It is also a multistakeholder approach: each assessment is guided by a multistakeholder advisory board composed of government, civil society organizations, the private sector, academia and diverse stakeholders, to ensure that the data we collect is supported and at a later stage validated by representatives of the stakeholder groups, and that the resulting policy recommendations are grounded in accountability and relevance to the national context. This is how the framework, and hence our dynamic coalition, supports data-driven policy-making, alongside other dynamic coalitions.
Mark Carvell: Thank you. Thank you, Tatevik. That's very helpful. Shall we just go quickly back to Xiao for any second point online, and then we'll come back to the participants here in the room. Sorry, Olivier, you wanted to comment.
Olivier Crepin-Leblond: Just to jump in, I think this question was specifically regarding the Dynamic Coalition on Data and Artificial Intelligence Governance. Luca Belli is the chair of that coalition, and he is present at the IGF, so if anybody is interested in those issues, I would suggest you go and find him and ask him the question. They have elaborated and published a report in 2024, I can see here, with 44 authors and 25 analyses of this specific topic. That's plenty of reading.
Mark Carvell: Okay, Xiao, do we have a second point online, bearing in mind the time we've got?
Online moderator: I’m happy to read one directed at the whole panel. It’s from Luis Martinez, asking what is the view of the panel regarding how the GDC is going to benefit the work of the Dynamic Coalitions?
Mark Carvell: How is the GDC going to benefit the work of the coalitions? Who would like to take that on? I mean, it’ll certainly enhance the profile of Dynamic Coalitions in the IGF ecosystem, and the model that the coalitions provide for year-round activity of the IGF. Olivier, do you want to come in?
Olivier Crepin-Leblond: Yeah, thank you, Mark. I was going to say something along the lines of: ask not what the GDC can do for you, ask what you can do for the GDC. And in effect, I think that’s what the coalitions are working on.
Mark Carvell: Very succinct, great. Okay, well, does anybody in the room want to raise a new angle here? Yep, do you want to head over to the mic? That one over there, maybe it'll work. Yes, thank you. And say who you are briefly.
Audience: Thank you. I am Dr. Rajendra Pratap Gupta and I lead three dynamic coalitions: Digital Economy, Digital Health and Environment. And I congratulate all my co-chairs and co-leads of dynamic coalitions for the wonderful work they do. I want to again emphasize that there are three things the world has to look at when it comes to internet technologies: creating jobs, livelihood for all and internet for all. There are still 2.6 billion people not connected to the internet. As dynamic coalitions we have to decide, and I think we have indicators, so we should first look at indicators: how much did we impact in terms of livelihood creation? How far did we go in connecting the unconnected? And I think a very important question was raised on data and AI. Data has two big issues: integrity of data and integration of data. And we should not prioritize intelligence over them, so data integrity and integration should come before artificial intelligence. And last but not least, a very positive change: this time I don't see big tech in this. Something that I've always emphasized: as the IGF we are the most impactful forum for internet technologies. We should aim for a large number of small companies rather than a small number of large companies, to create the jobs we want and to shape the internet we want, which is democratic. Thank you so much.
Mark Carvell: Thank you very much. We’re getting close to the end.
Audience: So, I’m Reyansh Gupta, and I’m leading the Dynamic Coalition on Gaming for Purpose. I just want to add something to the discussions we’re having. Dynamic coalitions are doing great work across sectors, but gaming acts as a universal language, and I think gaming can be integrated everywhere. There are multiple sectors and multiple areas where gamification and gaming can contribute to the social good itself. That’s just what I wanted to add. Thank you.
Mark Carvell: Thanks very much. One more. Yes.
Audience: Hi. Good morning everyone. Thank you for your efforts for this great platform of the IGF. I have made this intervention in previous sessions, and I would like to emphasize the importance of taking intellectual property rights into consideration in the digital environment. I am pleased, on behalf of my organization, the Creators Union of Arab, to take on this part of the coalitions' work, and we take this opportunity at the IGF this year to announce the launch of a platform for protecting intellectual property in the digital arena, called Intellectual Property Verification. So it's our pleasure to contribute with you on this part, intellectual property rights. Thank you.
Mark Carvell: Thanks very much, and best of luck with the launch of that platform. That's great news, I'm sure, and a great contribution, potentially. Okay, we'd better wrap up. So I'm going to look to Jutta, and maybe Markus, who is in the room. Do you want to say anything? No? Shaking your head? Okay. All right. Markus is our coordinator. Anyway, Jutta, do you want to help us wrap up today? Thank you.
Jutta Croll: Does it work? Okay. I tried to take notes of all the important content and comments that we've heard. Bringing the GDC to life is what we have to do now, and we've learned today that the 31 dynamic coalitions are already working to implement the GDC and its five main objectives. Let me just refer to some of the very important things we've heard. We were talking about a shared responsibility for marginalized groups, for disadvantaged groups, and also for those whose voices have not been heard enough so far. We've been talking about overcoming barriers, which is also done through collaboration between dynamic coalitions. We've heard that the Internet is the young people's network, and that the GDC asks for child rights as a priority, something that is very present in the work of the dynamic coalitions. We have heard about the importance of assessments in all the work that we are doing, because children are the inhabitants of the Internet world, of the digital environment, and therefore it is very important to follow up on that within the work of the dynamic coalitions. We have also heard about the importance of data in the work that the dynamic coalitions do; it gives us orientation on where we should be going. We've heard a lot about safety and security, and about the standards we need for safety and security to make our work effective. Eventually, we've also heard about dynamic coalitions as a platform for digital services; I have had the opportunity to take that chance, and what they are doing there is very data-driven.
I would also like to refer the Dynamic Coalition on Data-Driven Health Technologies and the one on Digital Health to our Saudi Arabian colleagues, and I take this opportunity to thank them. We have dynamic coalitions dating back to the very beginning, 19 years ago, to the first Internet Governance Forum: many dynamic coalitions started their work in 2006 and 2007. And it's an open concept, so it's bottom-up. Everybody who wants to join the work of the dynamic coalitions: just go to the website, it's open, and the mailing lists are open. You can go there and join us in our work to implement the Global Digital Compact. Thank you so much.
Irina Soeffky: Thank you, Jutta. This is indeed also my personal takeaway: that we need more work and more impact from the Dynamic Coalitions, be it in the implementation of the GDC or in the future work of the IGF. This has been incredibly inspiring today, so I have lots of ideas to bring back home, to think more about and to discuss in more detail. Thank you, everybody. And with that, over to you, Mark.
Mark Carvell: Great. Well, what can I add to that? I think it's been a great session. I hope people who perhaps weren't aware of Dynamic Coalitions have learned a lot today about the scope and range of their activities and their direct relevance to the objectives of the Compact and to the Sustainable Development Agenda and its goals, the SDGs. And as I said earlier when I touched on the WSIS Plus 20 Review, I have a shout-out for the Dynamic Coalition on Environment: we heard in one of the sessions today that environmental issues will be a major component of the WSIS Plus 20 Review, and we have a Dynamic Coalition comprising experts in that very field. So that's the core message today. If you go to the IGF website, you will see the list of Dynamic Coalitions. Just go to the menu, scroll down, and you can click on every coalition, maybe the ones you're particularly interested in, and find out a whole lot more about them: what their missions are, their action plans, what they are delivering in terms of tangible outcomes, and how they are engaging in wider processes, such as those of the UN, that are now providing the context for all our work over the year ahead. So let's all work together on that and ensure that dynamic coalitions fulfill that potential. Okay, I'd better stop there, because we are over time, and I won't ramble on anymore. I'll conclude with thanks to everybody who contributed: all the planners of the session, the technical support team here in Riyadh, the hosts who have done such a fantastic job this morning, and our panellists for their diligent work, under pressure of time, to capture key elements of all the dynamic coalitions that have stepped forward in the context of the GDC. So many thanks again to all of you. Okay, I'll stop there. Thank you.
Online participant
Speech speed
118 words per minute
Speech length
892 words
Speech time
452 seconds
Small Island Developing States face unique challenges in digital development
Explanation
Small Island Developing States (SIDS) encounter specific obstacles in their digital development due to their unique characteristics. These challenges include limited resources, remoteness, and vulnerability to external economic shocks and natural disasters.
Evidence
The speaker mentions that SIDS have small populations, limited resources, and are susceptible to natural disasters and external economic shocks.
Major Discussion Point
Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives
Muhammad Shabbir
Speech speed
0 words per minute
Speech length
0 words
Speech time
1 seconds
Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits
Explanation
Various dynamic coalitions work together to promote inclusivity and accessibility in the digital economy. These coalitions focus on different aspects such as financial inclusion, open education, accessibility for persons with disabilities, and environmental sustainability.
Evidence
The speaker mentions specific coalitions: DCDFI (Digital Coalition on Digital Financial Inclusion), DCOER (Dynamic Coalition on Open Educational Resources), DCAD (Dynamic Coalition on Accessibility and Disability), and DCE (Dynamic Coalition on Environment).
Major Discussion Point
Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives
Agreed with
Olivier Crepin Leblond
Tatevik Grigoryan
Yao Amevi Amnessinou Sossou
Jutta Croll
Agreed on
Dynamic coalitions contribute to implementing GDC objectives
There is an opportunity for coalitions to collaborate and address common barriers
Explanation
Dynamic coalitions have the potential to work together on shared challenges and objectives. By collaborating, coalitions can more effectively address common barriers and achieve greater impact in their respective areas of focus.
Evidence
The speaker provides examples of collaboration between DCAD and DCDFI on making banking systems accessible, and between DCAD and DCOER on ensuring digital accessibility in education.
Major Discussion Point
Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation
Olivier Crepin Leblond
Speech speed
155 words per minute
Speech length
1125 words
Speech time
433 seconds
Coalitions on internet rights, children’s rights, youth engagement, and internet standards contribute to fostering safe and inclusive digital spaces
Explanation
Several dynamic coalitions work towards creating a safe and inclusive digital environment. These coalitions focus on various aspects such as internet rights, children’s rights, youth engagement, and internet security standards.
Evidence
The speaker mentions specific coalitions: Internet Rights and Principles Coalition, Dynamic Coalition on Children’s Rights in the Digital Environment, Youth Coalition on Internet Governance, and Internet Standard Security and Safety Coalition (IS3C).
Major Discussion Point
Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives
Agreed with
Muhammad Shabbir
Tatevik Grigoryan
Yao Amevi Amnessinou Sossou
Jutta Croll
Agreed on
Dynamic coalitions contribute to implementing GDC objectives
Tatevik Grigoryan
Speech speed
160 words per minute
Speech length
898 words
Speech time
335 seconds
Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance
Explanation
Dynamic coalitions focusing on data governance, AI, and internet universality indicators play a crucial role in promoting responsible data governance. These coalitions work on developing frameworks and tools for assessing and improving data governance practices.
Evidence
The speaker mentions the Internet Universality Indicators as a comprehensive tool for evaluating internet governance and policy-making.
Major Discussion Point
Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives
Agreed with
Muhammad Shabbir
Olivier Crepin Leblond
Yao Amevi Amnessinou Sossou
Jutta Croll
Agreed on
Dynamic coalitions contribute to implementing GDC objectives
Coalitions can contribute to data-driven policymaking and national assessments
Explanation
Dynamic coalitions play a role in promoting data-driven policy-making and conducting national assessments. Their work helps inform government policies and provides valuable data for decision-making processes.
Evidence
The speaker mentions the Internet Universality Indicators framework as a tool for collecting data and helping governments formulate policy recommendations.
Major Discussion Point
Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation
Yao Amevi Amnessinou Sossou
Speech speed
133 words per minute
Speech length
627 words
Speech time
281 seconds
Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance
Explanation
Various dynamic coalitions work on different aspects that contribute to enhancing AI governance. These coalitions focus on gender issues, Internet of Things, blockchain technology, and digital health, all of which intersect with AI governance.
Evidence
The speaker mentions specific coalitions working on gender, IoT, blockchain, and digital health, highlighting their contributions to AI governance discussions.
Major Discussion Point
Major Discussion Point 1: Contributions of Dynamic Coalitions to the Global Digital Compact (GDC) Objectives
Agreed with
Muhammad Shabbir
Olivier Crepin Leblond
Tatevik Grigoryan
Jutta Croll
Agreed on
Dynamic coalitions contribute to implementing GDC objectives
Audience
Speech speed
164 words per minute
Speech length
1176 words
Speech time
429 seconds
Dynamic coalitions can proactively set the agenda for GDC implementation
Explanation
Dynamic coalitions have the opportunity to take a proactive role in shaping the implementation of the Global Digital Compact. By setting the agenda, coalitions can ensure their expertise and priorities are reflected in the GDC implementation process.
Evidence
The speaker suggests that coalitions should communicate their planned contributions to the people in New York working on the Global Digital Compact.
Major Discussion Point
Major Discussion Point 2: Role and Impact of Dynamic Coalitions
Coalitions should focus on creating jobs, connecting the unconnected, and data integrity
Explanation
Dynamic coalitions should prioritize three key areas in their work: job creation, expanding internet access to the unconnected population, and ensuring data integrity. These focus areas are crucial for addressing global digital challenges.
Evidence
The speaker mentions that 2.6 billion people are still not connected to the internet and emphasizes the importance of data integrity and integration.
Major Discussion Point
Major Discussion Point 2: Role and Impact of Dynamic Coalitions
Gaming and gamification can be integrated across sectors to contribute to social good
Explanation
Gaming and gamification techniques can be applied across various sectors to promote social good. This approach can enhance engagement and effectiveness in addressing social issues through digital means.
Evidence
The speaker mentions the Dynamic Coalition on Gaming for Purpose and suggests that gaming can act as a universal language.
Major Discussion Point
Major Discussion Point 2: Role and Impact of Dynamic Coalitions
Intellectual property rights need consideration in the digital environment
Explanation
The importance of intellectual property rights in the digital realm needs to be addressed. Protecting intellectual property is crucial as digital technologies continue to evolve and impact creative industries.
Evidence
The speaker announces the launch of a platform called Intellectual Property Verification to protect intellectual property in the digital area.
Major Discussion Point
Major Discussion Point 2: Role and Impact of Dynamic Coalitions
Jutta Croll
Speech speed
220 words per minute
Speech length
472 words
Speech time
128 seconds
Dynamic coalitions are already working to implement the GDC objectives
Explanation
The 31 existing dynamic coalitions are actively engaged in work that aligns with and implements the objectives of the Global Digital Compact. Their ongoing efforts contribute directly to the realization of the GDC goals.
Major Discussion Point
Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation
Agreed with
Muhammad Shabbir
Olivier Crepin Leblond
Tatevik Grigoryan
Yao Amevi Amnessinou Sossou
Agreed on
Dynamic coalitions contribute to implementing GDC objectives
Mark Carvell
Speech speed
136 words per minute
Speech length
2242 words
Speech time
988 seconds
Coalitions provide a platform for year-round IGF activity and multistakeholder engagement
Explanation
Dynamic coalitions serve as a mechanism for continuous engagement within the Internet Governance Forum ecosystem. They enable ongoing multistakeholder participation and work throughout the year, beyond just the annual IGF event.
Major Discussion Point
Major Discussion Point 3: Future of Dynamic Coalitions and GDC Implementation
Agreements
Agreement Points
Dynamic coalitions contribute to implementing GDC objectives
Muhammad Shabbir
Olivier Crepin Leblond
Tatevik Grigoryan
Yao Amevi Amnessinou Sossou
Jutta Croll
Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits
Coalitions on internet rights, children’s rights, youth engagement, and internet standards contribute to fostering safe and inclusive digital spaces
Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance
Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance
Dynamic coalitions are already working to implement the GDC objectives
Multiple speakers highlighted how various dynamic coalitions are actively working on different aspects that align with and contribute to the implementation of GDC objectives.
Similar Viewpoints
Both speakers emphasize the importance of dynamic coalitions in addressing economic and developmental aspects of digital inclusion, particularly focusing on job creation and connecting the unconnected.
Muhammad Shabbir
Audience
Dynamic coalitions on financial inclusion, open education, accessibility, and environment contribute to expanding digital economy benefits
Coalitions should focus on creating jobs, connecting the unconnected, and data integrity
Both speakers highlight the importance of data governance and integrity in the work of dynamic coalitions.
Tatevik Grigoryan
Audience
Coalitions on data governance, AI, and internet universality indicators contribute to advancing responsible data governance
Coalitions should focus on creating jobs, connecting the unconnected, and data integrity
Unexpected Consensus
Proactive role of dynamic coalitions in shaping GDC implementation
Audience
Jutta Croll
Mark Carvell
Dynamic coalitions can proactively set the agenda for GDC implementation
Dynamic coalitions are already working to implement the GDC objectives
Coalitions provide a platform for year-round IGF activity and multistakeholder engagement
There was an unexpected consensus on the proactive role dynamic coalitions can and should play in shaping the implementation of the Global Digital Compact, rather than just responding to it. This suggests a shift towards a more active and influential role for these coalitions in global internet governance.
Overall Assessment
Summary
The main areas of agreement centered around the significant contributions of dynamic coalitions to implementing GDC objectives, their potential for collaboration, and their role in proactively shaping internet governance discussions.
Consensus level
There was a high level of consensus among speakers regarding the importance and potential impact of dynamic coalitions in addressing various aspects of internet governance and implementing the Global Digital Compact. This strong consensus implies that dynamic coalitions are likely to play an increasingly central role in future internet governance processes and in the implementation of the GDC.
Differences
Different Viewpoints
Unexpected Differences
Emphasis on specific technologies
Audience
Yao Amevi Amnessinou Sossou
Gaming and gamification can be integrated across sectors to contribute to social good
Coalitions on gender, IoT, blockchain, and digital health contribute to enhancing AI governance
While most speakers focused on broader themes, these speakers unexpectedly emphasized specific technologies (gaming, IoT, blockchain) as key areas for Dynamic Coalitions to address. This highlights a potential difference in approach between technology-specific and theme-based coalition work.
Overall Assessment
Summary
The main areas of disagreement revolve around the prioritization of focus areas for Dynamic Coalitions, the balance between proactive agenda-setting and ongoing work, and the emphasis on specific technologies versus broader themes.
Difference level
The level of disagreement among speakers is relatively low. Most differences appear to be more about emphasis and prioritization rather than fundamental disagreements. This suggests that there is a general consensus on the importance of Dynamic Coalitions in implementing the GDC objectives, but some variation in how different stakeholders believe this should be approached. These differences could potentially lead to a more comprehensive and diverse approach to addressing digital governance challenges, provided that effective coordination and collaboration mechanisms are in place.
Partial Agreements
Speakers agree on the importance of Dynamic Coalitions in implementing the Global Digital Compact (GDC) objectives, but they differ in their emphasis on proactive agenda-setting versus ongoing work and engagement.
Audience
Jutta Croll
Mark Carvell
Dynamic coalitions can proactively set the agenda for GDC implementation
Dynamic coalitions are already working to implement the GDC objectives
Coalitions provide a platform for year-round IGF activity and multistakeholder engagement
Takeaways
Key Takeaways
Dynamic coalitions are actively contributing to the implementation of the Global Digital Compact (GDC) objectives across various domains
Dynamic coalitions provide a platform for year-round IGF activity and multistakeholder engagement on internet governance issues
There is potential for increased collaboration between dynamic coalitions to address common challenges and barriers
Dynamic coalitions can play an important role in data-driven policymaking and national assessments related to internet governance
The work of dynamic coalitions is relevant not only to the GDC but also to the Sustainable Development Goals and WSIS+20 Review
Resolutions and Action Items
Dynamic coalitions to prepare contributions for the next IGF in Lillestrøm, Norway in 2025
Coalitions to focus on creating tangible outcomes and impacts related to GDC objectives
Encourage more stakeholders to join and participate in dynamic coalition activities
Unresolved Issues
Specific mechanisms for dynamic coalitions to formally contribute to GDC implementation process
How to measure and evaluate the impact of dynamic coalitions’ work on GDC objectives
Ways to increase visibility and recognition of dynamic coalitions’ contributions within the broader IGF ecosystem
Suggested Compromises
Balance between coalition-specific work and collaborative efforts across coalitions
Integration of new topics like gaming and intellectual property rights into existing coalition frameworks
Thought Provoking Comments
While many in the global internet community […] are fully engaged with and attuned to the developments surrounding WCIT, WTSA, and ICANN, […] it is becoming increasingly apparent that a greater degree of polarization and marginalization in the area of internet policy and strategy has been slowly occurring.
speaker
June Paris
reason
This comment highlights a critical issue of inequality and marginalization in internet governance that is often overlooked.
impact
It shifted the discussion to focus more on inclusion of underrepresented groups, particularly small island developing states, in internet policy.
At least one-third of Internet users are under 18 […] and their rights must be protected as outlined in the UN Convention on the Rights of the Child.
speaker
Olivier Crepin-Leblond
reason
This statistic provides important context about the demographics of internet users and raises the issue of children’s rights online.
impact
It brought attention to the need for child-centric policies and protections in internet governance, which was further discussed by other speakers.
Your interventions are so tremendously important […] I’m speaking of content, of wanting to change things and help with the implementation. Everybody else is talking about process.
speaker
Wout de Natris
reason
This comment emphasizes the importance of focusing on concrete actions and implementation rather than just process.
impact
It sparked a discussion about how Dynamic Coalitions can be more proactive and action-oriented in contributing to the Global Digital Compact.
We should look for creating jobs, livelihood for all and internet for all. I think there are still 2.6 billion people not connected to the internet.
speaker
Dr. Rajendra Pratap Gupta
reason
This comment refocuses the discussion on core issues of internet access and economic opportunity.
impact
It broadened the conversation to include economic and development aspects of internet governance, beyond just technical and policy considerations.
Dynamic coalitions are doing great work across sectors, but gaming acts as a universal language and I think gaming can be integrated everywhere
speaker
Rayansh Gupta
reason
This comment introduces a novel perspective on how gaming can be leveraged across various sectors for social good.
impact
It opened up a new avenue for discussion about innovative approaches to achieving the goals of the Global Digital Compact.
Overall Assessment
These key comments shaped the discussion by broadening its scope beyond technical aspects of internet governance to include issues of inclusion, children’s rights, economic opportunity, and innovative approaches like gaming. They helped shift the focus from process to concrete actions and implementation, while also highlighting the need to include underrepresented groups in internet policy discussions. The comments collectively emphasized the multifaceted nature of internet governance challenges and the need for diverse, creative solutions.
Follow-up Questions
How can dynamic coalitions collaborate more effectively to address common barriers for marginalized communities?
speaker
Mark Carvell
explanation
This was raised as an important area to explore further collaboration among coalitions working on similar issues.
What specific contributions can dynamic coalitions make to the Global Digital Compact by the 2025 IGF in Norway?
speaker
Wout de Natris
explanation
This was suggested as a way for dynamic coalitions to proactively set the agenda and contribute concrete outputs.
How can the work of dynamic coalitions be measured in terms of impact on livelihood creation and connecting the unconnected?
speaker
Dr. Rajendra Pratap Gupta
explanation
This was proposed as a way to assess the real-world impact of dynamic coalitions’ work.
How can gaming and gamification be integrated into the work of other dynamic coalitions?
speaker
Rayansh Gupta
explanation
This was suggested as a potential area for cross-coalition collaboration and innovation.
How can intellectual property rights be better incorporated into discussions of the digital environment?
speaker
Unnamed participant from Creators Union of Arab
explanation
This was raised as an important area that needs more consideration in digital governance discussions.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.