WS #206 Evolving the IGF: cooperation is the only way

Session at a Glance

Summary

This discussion focused on how the Internet Governance Forum (IGF) could evolve to meet the challenges of the modern digital world. Participants emphasized the need for the IGF to become more focused, empowered, and relevant to decision-makers. Key suggestions included improving the IGF’s ability to produce concrete outcomes and recommendations, enhancing its connection to other UN processes, and better leveraging its vast archive of discussions.

Several speakers highlighted the importance of making the IGF more inclusive, particularly by addressing language barriers and improving hybrid participation options. There was debate about whether to reduce the number of sessions for more in-depth discussions or maintain the current format for diversity. The need for better funding and resources was a recurring theme, with suggestions for broader stakeholder contributions.

Participants discussed applying the NetMundial multi-stakeholder guidelines to IGF processes and potentially focusing on specific themes or issues each year. The role of National and Regional IGF Initiatives (NRIs) was emphasized as a crucial element in the IGF ecosystem, particularly for engaging local communities and addressing region-specific concerns.

Looking ahead to the 2025 IGF in Norway, speakers stressed the importance of making it strategically focused and relevant to the upcoming WSIS+20 review. Suggestions for immediate improvements included creating an AI-powered bot to make IGF archives more accessible, conducting early community consultations, and enhancing the hybrid meeting experience. Overall, the discussion underscored the IGF’s ongoing evolution and the need for creative approaches to increase its impact and relevance in global internet governance.

Key Points

Major discussion points:

– How to evolve the IGF to be more focused, empowered, and impactful

– Improving diversity and inclusivity, especially regarding language barriers and hybrid participation

– Better organizing and utilizing the wealth of information from past IGFs

– Applying the NetMundial multi-stakeholder guidelines to IGF processes

– Making the IGF more attractive and relevant to governments and other stakeholders

The overall purpose of the discussion was to explore ideas for evolving and improving the Internet Governance Forum (IGF) to better meet current challenges and increase its impact and relevance.

The tone of the discussion was constructive and collaborative. Participants shared ideas openly and built on each other’s suggestions. There was a sense of urgency about the need for change, but also optimism about the IGF’s potential. The tone became more focused and solution-oriented towards the end as participants were asked to provide concrete suggestions for the next IGF.

Speakers

– Annaliese Williams – Chair of Australia’s national IGF, works for .au domain administration

– Chris Buckridge – MAG member

– Renata Mielli – Chair of the Brazilian Internet Steering Committee (CGI.br), Special Advisor to the Minister of Science, Technology and Innovation

– Amrita Choudhury – CCAOI, involved in Asia-Pacific regional IGF

– Plantina Tsholofelo Mokone – Internet Governance Coordinator for ZA Domain Name Authority, Secretariat for South African IGF Multi-Stakeholder Committee

– Jorge Cancio – Government representative

– Jordan Carter – Australian Domain Administration

Additional speakers:

– Wout de Natris – Consultant from the Netherlands, representing the Dynamic Coalition on Internet Standards, Security and Safety

– Desiree Miloshevic – Role/expertise not specified

– Masanobu Katoh – IGF Japan

– Baratang Miya – GirlHype

– Anriette Esterhuysen – Association for Progressive Communications

– Galvanian Burke – Civil Society representative

Full session report

The Internet Governance Forum (IGF) Discussion: Evolving to Meet Modern Challenges

This discussion focused on how the Internet Governance Forum (IGF) could evolve to meet the challenges of the modern digital world. Participants, representing a diverse range of stakeholders from various countries and organizations, engaged in a constructive and collaborative dialogue about the future of the IGF.

Key Themes and Discussions

1. Evolution of the IGF

There was broad consensus among speakers that the IGF needs to adapt and evolve to address current global digital challenges more effectively. Amrita Choudhury emphasized the need for the IGF to become more focused and empowered, while Chris Buckridge noted that the IGF has already evolved over time and should continue to do so. Renata Mielli and Plantina Tsholofelo Mokone stressed the importance of producing more concrete outcomes and actionable items at continental and country levels. Jorge Cancio suggested viewing the IGF in the context of the broader World Summit on the Information Society (WSIS) architecture.

2. Improving IGF Programming and Format

Speakers discussed various ways to improve IGF programming and format. Wout de Natris suggested reducing the number of sessions for more focused discussions, while others argued for maintaining diversity while improving programming. Amrita Choudhury highlighted the importance of improving the hybrid format and accessibility. Chris Buckridge proposed considering different formats for different themes. There were also suggestions to organize more lively debates and make the IGF a year-round process rather than just an annual event.

3. Enhancing Multi-stakeholder Participation

Improving multi-stakeholder participation was a recurring theme. Jorge Cancio proposed applying the NetMundial multi-stakeholder guidelines to IGF processes, which sparked a significant discussion about their potential implementation. Renata Mielli emphasized the need to address language barriers to participation, sharing the example of the Lusophone Internet Forum initiative. Annaliese Williams and Masanobu Katoh discussed better coordination between global, regional, and national IGFs to increase participation. Chris Buckridge stressed the importance of maintaining a broad funding base from multiple stakeholders.

4. Improving IGF Outputs and Impact

There was general agreement on the need for the IGF to produce more concrete and impactful outputs. Renata Mielli called for more concrete recommendations and guidelines. Chris Buckridge suggested making IGF archives and data more accessible and usable, with Jorge Cancio proposing the creation of an IGF bot to facilitate this. Amrita Choudhury emphasized focusing on strategic issues related to the WSIS+20 review. Renata Mielli also stressed the importance of demonstrating the IGF’s relevance to shaping digital policies and starting consultations earlier on desired outcomes.

5. Addressing Language and Accessibility Issues

Several speakers highlighted the need to improve language accessibility and the overall user experience of the IGF. Renata Mielli discussed initiatives to address language barriers, while Plantina Tsholofelo Mokone and others emphasized the importance of improving the hybrid format. Galvanian Burke suggested enhancing digital tools and user experience for IGF attendees.

6. Role of National and Regional Initiatives (NRIs)

The importance of National and Regional Initiatives (NRIs) in the IGF ecosystem was emphasized by several speakers. Annaliese Williams mentioned the DNS Research Federation report on the impact of the IGF, which highlighted the role of NRIs. Speakers discussed how better coordination between global, regional, and national IGFs could increase participation and impact.

7. Funding and Sustainability

The discussion touched on funding issues, with Chris Buckridge emphasizing the importance of maintaining a broad funding base. Jordan Carter suggested that non-state stakeholders could contribute to the IGF trust fund to enhance its sustainability and independence.

Looking Ahead: Key Takeaways and Future Considerations

The discussion yielded several key takeaways for the future of the IGF:

1. The need for the IGF to become more focused, empowered, and relevant to decision-makers.

2. Improving IGF programming while maintaining diversity and inclusivity.

3. Enhancing multi-stakeholder participation, especially from governments.

4. Producing more concrete outcomes and actionable recommendations.

5. Improving the hybrid format, accessibility, and language inclusivity of the IGF.

6. Better coordination between global, regional, and national IGFs.

7. Leveraging the IGF community and existing resources more effectively.

8. Exploring the application of NetMundial multi-stakeholder guidelines to IGF processes.

9. Developing mechanisms for year-round engagement and earlier consultations.

10. Addressing funding sustainability through diverse stakeholder contributions.

As the IGF community looks towards the 2025 IGF in Norway and beyond, these discussions provide a foundation for ongoing efforts to evolve and improve the forum. The upcoming IGF holds particular significance in light of the WSIS+20 review, as several speakers noted. The challenge lies in balancing diverse stakeholder interests while enhancing the IGF’s impact and relevance in shaping global internet governance.

Session Transcript

Annaliese Williams: that multi-stakeholder opportunities for discussion on an equal basis are a good thing and a very positive thing to have and to continue. I see our speakers are just getting themselves organized. So we have a bit of a discussion today on how we can evolve the IGF. We recognize that a decision will be made next year as to the further mandate of the IGF, but I think in discussions so far at this year’s IGF there seems to be the overwhelming view that multi-stakeholder discussions are a good thing, and I just wanted to note that the DNS Research Federation did a report on the impact of the IGF, and there’s a quote to the effect of, you know, if the IGF didn’t exist we would have to invent it. So I think, regardless of what is decided next year, there is a need for these international multi-stakeholder discussions to take place, and we hope that the IGF will continue long into the future. So our discussion today is seeking some thoughts about how we can evolve the IGF from where it is today so that it can continue to meet the challenges of a digital world. We have four speakers with us today, and we have an online moderator as well who will be keeping an eye on the online participation and will let us know if anybody has comments to contribute. My colleague Everton will read them into the meeting for us. So I will let our speakers introduce themselves, and perhaps you can just do that briefly as you speak, but just briefly: we have Chris Buckridge, who wears multiple internet governance hats. We have Renata Mielli from .br. We also have another ccTLD represented, Plantina from .za. We have civil society represented with Amrita, and our online moderator today is Everton, also from .br. So we’re going to ask speakers to just reflect on a few questions, and then I’m hoping that there will be an opportunity for speakers to interact with each other, respond to each other’s thoughts or build on each other’s thoughts. We will start with Amrita, I think, and you’ll all have an opportunity to respond to the same question. But Amrita, if you would briefly introduce yourself and then perhaps share your thoughts on how the IGF should evolve to meet the challenges of a modern digital world.

Amrita Choudhury: Thank you so much, Annaliese, and thank you for having me here. I’m Amrita. I am from India. I work for a civil society organization called CCAOI, and I’m involved in the Asia-Pacific regional IGF, apart from other things. To respond to your question on how the IGF should evolve to meet the modern digital world: I think it needs to be more focused. It needs to be more empowered. In fact, in the working group on strategy, where Chris, me, Jorge and many others in this room are involved, we did create a vision document in which certain concrete measures have been drafted on how the IGF could evolve to meet most of the requirements which are being portrayed as gap areas. For example, it could be the place where everyone can come, and it could be a test bed for people. It could be a place where the GDC’s implementation could be tracked. It could also be a place where even governments come and test out what they want to do, et cetera, apart from other things. So the IGF has it in itself, but it needs to be more empowered in terms of people and in terms of money, primarily so that it can do what it has been doing but has not formally been given the mandate for. I would stop at that.

Annaliese Williams: Thanks very much. Thanks, Amrita. And just before I go much further in my enthusiasm to begin the conversation today, I neglected to introduce myself. So my apologies to everybody for that. My name is Annalise Williams. I work for the .au domain administration. I’m part of the technical community and I’m also very involved in Australia’s national IGF and I’ve been the chair of the IGF for the last two years. So my apologies for being so hasty. Chris, perhaps we might go to you. How does the IGF, how should it evolve? And I do want to come back to the point Amrita made, but if any of the other speakers wanted to chime in on the points about empowerment in terms of people and money, how the IGF is going to be funded is a live question. But Chris, would you like to share your thoughts?

Chris Buckridge: Sure. So my name’s Chris Buckridge. I’m currently a MAG member for the next two days at least. and then we see what happens with 2025. Yeah, I have a few other hats that I wear, but for the purposes of this, I’m a long time IGF gadfly, who’s happy to just sort of throw some comments in as to how things might evolve. I think the important thing is the IGF is and has always been a work in progress. It’s never been static in terms of what it is. It probably feels a little, you know, we come back every year and see a lot of the same people and that’s always good and fun, but there has been evolution and well, full disclosure, I was one of the co-authors of that DNS Research Federation paper. But yeah, part of what we found there was, you know, digging back through how it’s evolved, what’s happened, what the sort of results of those processes have been was fascinating and really turned up some very interesting examples, both sort of very practical examples, how it sort of helped to foster an IXP development in Africa and other global South countries, how it fostered the NRI, National and Regional Initiative ecosystem, and how important that has proved to be in terms of developing internet governance discussions. But I mean, also things like in talking to different people, hearing some really different perspectives on the impact that the IGF had. I mean, some people sort of saying, when we talk maybe about the IANA transition, yes, it was a hugely important crucible for discussion and ideas to come together. Other people saying, no, well, it was a bit separate to that. So I think, you know, there’s lots of perspectives, it has changed and grown over time. And I mean, the intersessional activities are an area where we’ve been very clear that there has been growth, there has been change in the last two decades, and they’ve evolved into something probably of the almost most value in this IGF space. And that’s the best practice forums, which. You know, we’ve had a cybersecurity best practice forum in operation for a good number of years now and has produced some really important and insightful work. We’ve had a policy network on fragmentation, which has also been, I think, now in its third year and has done some really insightful study on a very key issue right now in internet governance. We’ve also had one on artificial intelligence, which perhaps got a bit subsumed by some of the larger scale UN discussions, but if you read the report, it actually pre-predicted, I guess, what some of the, say, the Secretary General’s AI panel said in its report about things like regulatory interoperability. So, I mean, those ideas are percolating, really, in a very early stage in the IGF, and the IGF is helping to get them to that next level. I think that evolution needs to continue. I think we’re at a really fascinating point going into next year where we’ll have a quite different context. We’ll have a much shorter timeframe to prepare. We’ll have a different kind of mag. We may see that there’s a need to sort of consolidate a little to sort of bring it in to a bit more tight focus in how it works. There’s obviously going to be an eye to the WSIS plus 20 review, which will happen a few months after the IGF next year. So, I think we want the IGF next year to be at its best, and this, you know, a little pressure can force the change that you want and create a diamond. So, I hope that’s what we’re going to see in the coming six months in terms of evolution. Obviously, funding is a perennial issue. 
I do think it’s important to be thinking about how we maintain the broad base of funding for the IGF. Any multi-stakeholder model captured by a certain stakeholder group or demographic is a concern, and that applies as much to, you know, the UN and member states as it does to any other group. I think part of the strength is having funding come from lots of different sources, so that the decisions about how the IGF evolves, the decisions about where it goes, have to be taken in a multi-stakeholder way rather than the person who’s outlaid the most cash getting to steer the ship. So I’ll stop there. Thanks.

Annaliese Williams: Thanks, Chris. Yeah, important observations on a number of fronts there. I would agree with you about the need for broad-based funding. I think at the Australian government booth out there, they had a little survey asking people to indicate whether they thought their stakeholder group should contribute to funding. So I’d be interested to see what the results of all those surveys were. And just your point on the policy networks and best practice forums, I think that dynamic coalitions, that is also a really important point. You know, the IGF has already demonstrated that it can evolve to meet changing needs and it has demonstrated that it can do this successfully and I’m sure it will continue to do so into the future. I might go to you next, Renata, and then Plantina.

Renata Mielli: Hello. Thank you, Annaliese. Thank you for inviting me to this interesting, important session about how to evolve the IGF. I will agree with Chris, because I agree the IGF is a work in progress. So I want to emphasize that the IGF is a work in progress, and it has been an important platform for the discussions about the internet, its applications, and the impacts of the new economic models and services for users and also for society. And maybe if some governments and other decision-making organizations had been more involved with the IGF, some of the things we are saying now they would have known previously, because we discussed this a lot. So I think we are doing good work, seeking best practices, the new forums. I think all this is important. But in my point of view, it isn’t enough. We are at a moment where we can no longer afford to gather for deep discussions on critical global issues without taking a step forward and using these debates to develop a set of consensual proposals and recommendations to present to multilateral organizations. It’s necessary to improve mechanisms for building consensus and producing guidelines and recommendations in such a way that the community’s voices have an impact on multilateral and other decision-making processes, so that effective solutions to the challenges we face can be found and implemented. We need to demonstrate that there is no contradiction between strengthening multistakeholder spaces and processes and the role of multilateral spaces. We are in this crazy moment where we set one thing against another, and we have to stop this and work together in a complementary way. So in my opinion, that’s why evolving the IGF is a core discussion for us. We need to have the courage to look at what we have achieved until now, together, and together we need to build new ideas. We need to step out of our comfort zone and think about how to make the IGF a space that is seen as relevant for shaping guidelines and digital policies. This is our challenge, in my point of view. To transform the IGF into such a space, we need to deepen transparency and strengthen multi-stakeholder participation mechanisms. And we already have a good starting point, which is the São Paulo NetMundial guidelines. We are beginning the WSIS+20 review process, and in 2025 we will have the IGF in June, some weeks before the high-level meeting in Geneva. So I would like to bring a constructive challenge, nothing new, because Chris put this: how are we going to organize the next IGF in June? Like we did until now, or maybe by looking at this opportunity to think differently and try something new by applying the São Paulo guidelines to build the next IGF? That’s my initial proposal. I believe that with the update proposed by NetMundial, we challenge multiple sectors to jointly think of solutions for the current internet challenges. And maybe we can build an experiment with the next IGF, and maybe this will be important to all of the WSIS process in this regard. Thank you.

Annaliese Williams: Thanks, Renata. I think it’s definitely a time to be creative right now. On what you were saying about setting one against another: I was in one of the sessions yesterday and somebody said multi-stakeholder and multilateral are two sides of the same coin. That was something that resonated with me. I think we certainly need both; it’s not an either/or. And each process benefits: these multi-stakeholder discussions are very much enriched by having government participation on an equal level, and there is a lot of expertise from the technical community and from civil society that can be very useful for multilateral processes. So we’ll go to Plantina, and we’ve had Jorge join us as well since we started. I’ll go to Plantina first, and then Jorge, if you wanted to offer some initial observations, or you can wait till the next round if you like. Go ahead, Plantina.

Plantina Tsholofelo Mokone: Thanks, Annaliese. Thank you, everyone. Let me start by introducing myself. My name is Plantina Mokone. I am the Internet Governance Coordinator for the ZA Domain Name Authority. I also serve as the Secretariat for the South African Internet Governance Forum Multi-Stakeholder Committee, amongst other things that I do in my personal capacity. And just agreeing with Chris, I think we’re all going to agree with Chris, in the sense that the IGF is a work in progress. A lot has been achieved in the time that I’ve been here; I’ve witnessed even the level of participation grow. But, also agreeing with my colleague, there’s a lot more that still needs to be done, and we need funding. However, one of the biggest things that remains an issue is: what happens to the discussions that we have at the IGF? What do our documents, what do the IGF documents, turn out to do? I appreciate the fact that the multi-stakeholder model that’s followed in the IGF is a bottom-up approach. We solicit inputs from different stakeholders, but what happens to those inputs? What do they lead to? What actionable items or actionable documents do they lead to? Because really, if they lead to nothing, all this is is a talk show, and we need more funding for a continuous talk show. And yes, multi-stakeholderism does emphasize that all parties within the multi-stakeholder model are equal, but in reality, government is a decision-maker. We are not all equal. Government makes decisions. So we need to bring them into a discussion, negotiate with them. Maybe that’s another element that we need to bring into the IGF, to negotiate, because the internet as it’s evolving affects all of us in our individual and our business capacities. So my view on how the IGF can evolve is really to turn the discussions that we have within the IGF space into actionable items at the continental level, at the regional level, but also at the country level, and see how that works itself out. The one thing that does bother me with how we’ve been moving for the past couple of years is that we talk. How does that translate into actionable items in individual countries? So maybe just have that as one of the action points moving into next year and the WSIS+20 review.

Annaliese Williams: Thank you. Thanks, Plantina. And we are going to, I hope, get into a bit of discussion. We may not come up with any solutions today, but I’m hoping that we can put forward some ideas and perhaps discuss these again, or come up with a proposal for the Norway IGF and have some further discussion on them. But Jorge, did you want to offer any reflections on how the IGF could evolve, or whether the processes need to change? Did you want to respond to anything that any of the other speakers have said? We’ve had some comments about governments making the decisions. While that’s true, nationally they make laws and internationally they can make treaties, a lot of the infrastructure is owned and operated by parties that aren’t government entities. So, any thoughts you would like to throw into the mix, Jorge? And please introduce yourself as you do.

Jorge Cancio: Thank you so much, Annaliese. Jorge Cancio, Swiss government. So happy to be here, and happy for the invite to share some thoughts. Maybe a thought that is important is that sometimes we love our baby, our IGF baby, so much that we just look at the IGF baby and how it has to walk and talk and start to run and everything, and we forget that it’s part of a larger family. And this WSIS family has more members. Each and every one has his or her role. And really, the WSIS architecture is not just some dead-letter documents of 2003 and 2005 where some older people like myself participated in negotiating. No, it’s really a system that is working, that has been delivering for the last 20 years, where we as the global community, and not just member states but also private stakeholders, civil society, academia, the technical community, have invested millions of hours of work, millions of dollars or any other currency, in making the vision of WSIS a reality. More connectivity, e-health, e-whatever; that was the old terminology, everything had an ‘e’. Also human rights. Many issues were already considered then. So we have to see the IGF in that context, in the wider context where we have the action lines from WSIS giving guidance to the UN agencies and to many other actors to do stuff on the ground, changing the reality, really delivering on the SDGs for the people. We have the WSIS Forum, where we get together each year to hear what has been done to implement the action lines. We have the CSTD, where we discuss what the progress was and what we feed up into the UN system. And there we have the different roles; then it gets very intergovernmental. It goes to ECOSOC, and if somebody doesn’t know what ECOSOC is, it’s the Economic and Social Council. It’s like the non-war, non-peace twin brother of the Security Council, and from there it goes up to the UN General Assembly. Those are all parts of this WSIS family, of this WSIS architecture. And the IGF is like the more innovative kid. It’s the kid where we talk about new things, we invent new things, policy networks, emerging topics, but we still have to deliver. As Plantina was saying before, we have to deliver, and there are things we have had in our mandate for almost 20 years, like delivering recommendations. But it was a problem because, okay, it’s nice to have that in the mandate, but we didn’t know how. How do we do this? And there were also fears: if somebody pops up with a recommendation, how was that recommendation developed, et cetera? So I think this is where the relevance, for instance, of the São Paulo multi-stakeholder guidelines kicks in. Because they tell us, okay, multi-stakeholder is not anything that is labeled as multi-stakeholder. It’s really something that complies with certain guidelines, and those guidelines are, for instance, inclusivity, and not just inclusivity with an open door where only the well-resourced pass through the door. No, it’s inclusivity in a material, substantive sense. That’s in guideline one of the São Paulo multi-stakeholder guidelines. And it’s also process steps. It’s really getting everybody who is relevant together. It’s consulting with the community, and it’s really explaining what has been done with the inputs from the community, avoiding this black-box problem we’ve seen in other processes. It’s really giving the community also a role in being able to adapt the outputs, et cetera.
It’s a lot of guidance that our dynamic coalitions, our BPFs, our policy networks could get inspiration from and start delivering, start delivering those recommendations we have in our mandate. So looking back if we look at the IGF as part of this wider family maybe for instance those recommendations could each and every year be addressed then by the UN agencies when they are updating the working plans in the action lines and then Plantina if I may address you then we would know okay we’ve made a recommendation on data governance and that data governance recommendations goes to action lines XYZ and then later on they can report at the WSIS forum on what they what they did and addressing reflecting on the IGF recommendations if we think it the IGF as part of a system it makes much more sense and then it it is no longer in the perception because it’s for me it’s not a talk show it’s much more than that but also in the perception of everybody it would become much more effective much more impactful if it’s really part of a working WSIS structure and I talk too much sorry.

Annaliese Williams: Thanks for that, Jorge. I just wanted to touch on a couple of things that you said. Firstly, about admiring our IGF baby: sometimes the conversations do get around to focusing on the IGF, and only the IGF, and not the broader system. For me personally, it’s not so much the IGF but the principle of multi-stakeholder conversations and exchanging of views, and doing that globally, hearing from parts of the world that are far away from where you might live or where you might work. The IGF is a good platform for hearing about concerns that might be very different from your own. And I think we do need to really focus on that connection to the SDGs; the WSIS process was all about development, and it’s important that the internet and digital technologies are recognised as an enabler of sustainable development. But I think it was Doreen on the first day who said something like a third of the world’s population still aren’t connected. So we can have one set of conversations in one place, but in other parts of the world they’re having very different conversations. People aren’t connected. I was in a session yesterday where they were talking about not having electricity all of the time, so even if they have the internet, they don’t have enough power for data centres or can’t access the internet all of the time. So I think it is about making sure there is space to consider the issues from everybody’s point of view, and that the needs of those in less connected parts of the world aren’t left out of the conversation. I did also want to ask for views, and I’ll just ask for a volunteer, I guess. There is a real need for meaningful conversations between governments, technical stakeholders and experts, the private sector and civil society, but there is perhaps not always the stakeholder balance at IGF meetings to have those conversations. So I wondered if anybody has any ideas or suggestions for how the IGF could better facilitate conversations between governments and other experts. Does anyone want to address that first?

Amrita Choudhury: I think, as Chris mentioned, the focus of the IGF could be much sharper. You have the parliamentary track, but are you actually discussing things which the parliamentarians want to hear? There could be certain things they may want to discuss, but in a public forum they may be shy to ask or to try to understand. Are we taking those kinds of, I would say, innovative approaches to bring them in, so that they see value in coming here and discussing things, clarifying their doubts, as well as sharing their experiences? I think an innovative approach would be good. It’s not a one-size-fits-all situation, and it’s also about showing value to the people coming. I’m sure the MAG has always been trying to get the right kind of speakers from different stakeholder groups. But sometimes travelling is a challenge; you can’t fund every traveller, and not everyone has deep pockets. Many private companies also may not come to speak because they are afraid of what they would say and how it would be interpreted. So there are multiple issues which would have to be addressed, but I think a more focused approach, innovative ways of having discussions, and fewer sessions, because unfortunately nearly two-thirds of the sessions which happen at the IGF are not in the MAG’s hands. If the MAG were to design it, perhaps it would have been done differently. And others can respond to that.

Annaliese Williams: Thanks, Amrita. Does anybody else want to respond? Go ahead, Chris.

Chris Buckridge: Sorry, I said that without really thinking ahead of what I was going to say. Look, I mean, I think we have good speakers, I think, in a lot of the sessions, and we are trying to be innovative in how we plan sessions. I think, you know, that extends to remote speakers and better integrating them, and I think we need to really focus, sort of, on the IGF as a hybrid event. This needs to be a space where, you know, you don’t need to be there in person to have a meaningful role and take a meaningful part. But, that said, obviously, I mean, we know that people being there in person is a different experience. It provides not just the opportunity for, you know, to look around at your fellow speakers in the table and respond in a more organic way, but also to have conversations in the corridor, to meet people in bilateral meetings. There’s a richness there. So, I mean, we need to still focus on bringing people to the venue and making it an appealing and attractive event to have people at. Now, I think that’s, you know, increasingly a challenge, partly because we do have this proliferation of venues. Next year only serves to highlight that, where we have the WSIS Forum one week, two weeks after the IGF. Two weeks before the IGF, we have an ICANN meeting, which will have a lot of similar stakeholders there. So, I mean, next year is probably unusual, but maybe not that unusual, given the trend and the way we see this developing. I mean, if we look at the last five years, the pace of regulation, of new bodies, of new initiatives at the UN level, at the sort of regional level, at the national level, is remarkable. It really has, I don’t know if anyone’s done a line chart of it, but it would be almost exponential, I’m sure, in the increase. So we need for the IGF to find, to carve its own space there where it’s actually competing for attention against all of these other spaces. And that, I think, leads just back to the first question of how do we evolve it to better meet the needs and wants of people? And there is a hunger for some more link to decisional developments. Not that the IGF can take the role of government, not that it can sort of step in, and it would fail if it tried, I think. But it does need to be producing much more effective interfaces to the governmental processes, to the regulators, to legislators. So yeah, having a parliamentarian track is a very fundamental element of that, and I think a recognition of the need to build that interface, whether it’s perfect or not yet, I don’t know. Probably, yeah, like everything, a work in progress. But that sort of evolution needs to continue, I think. Yeah, I’ll stop there.

Annaliese Williams: Thanks, Chris. And I see Renata wanted to add on to that, and I will come back to this after Renata has spoken. But if anyone wants to think about it: how do we make it more appealing and more attractive, particularly to those governments who might only come once?

Renata Mielli: Okay, first of all, I apologize, I didn’t introduce myself in the first round, so I am doing it now. I’m Renata Mielli, I’m the chair of the Brazilian Internet Steering Committee, CGI.br, and also Special Advisor to the Minister of Science, Technology and Innovation. Well, I think this question is connected to the previous one, because why would a government, a deputy, a parliament, come to the IGF? What can we offer them to make it interesting and important to be here discussing with us? For me, we have marvelous debates, we have interesting workshops, we have maybe the best minds that are thinking about the internet, its applications, its impacts on social and economic levels and everything else. But why would governments come here to talk with us? For me, they will feel the need to be here if we can deliver something concrete that has more impact in terms of discussions. We are not going to be the decision-making process ourselves, but they need to see in us, in the IGF, in the community, a locus, a space relevant enough to inform and collaborate with recommendations and concrete outcomes that can have an impact on decision-making processes. If we don’t do this, they are not going to come, because there are a lot of other spaces to go to that are more relevant for them. I don’t have precisely the magical answer to this question, but I think the starting point is this: how to make the IGF more relevant to the people who have the role of making decisions. For me, this is the starting point.

Annaliese Williams: Thanks, Renata. Plantina, did you want to add something?

Plantina Tsholofelo Mokone: Yes, I did. I don’t know; I think very highly of the Internet Governance Forum, let me just start off by saying that. I think all the discussions I’ve witnessed here in the internet governance space are relevant, they’re well-informed, and they inform government. We provide so much: there’s the parliamentary track, there are best practice sessions where we exchange knowledge and ideas on how to implement certain things, or how to structure certain guidelines and frameworks. Much of the discussion that we are having on internet governance is relevant to them. They make policies that affect our well-being with regard to the evolution of the internet. I think Chris said it: everybody that’s at ICANN, that’s at the ITU, is also here, all following a multi-stakeholder model. So I don’t know if there is anything we can do beyond this to make it more attractive and appealing to them, because really, we are discussing things that affect them, that affect how they regulate us, that have socio-economic development implications. What I was going to say is that the discussions that we have at our annual IGF meetings need to go on beyond MAG structures. We have multiple communities and sessions, and they need to continue, and maybe that sits on us as NRIs, as national initiatives, to take reports back to them and say: we discussed this. But there’s really not much more we can do, because everything we’re discussing is relevant to them, it affects them, it affects their ability to make decisions. I don’t know if we want to dress it up like a Christmas tree next time so that it’s more appealing; there’s no way we can do that. I think highly of the discussions that we have here. I make notes; I have four books full of notes just from the sessions that I’ve been in. There have been bilateral meetings. The IGF allows for a lot of things: if you want to have bilaterals, there’s the parliamentary track, ministers are here, there are networking opportunities. I don’t know how else we need to dress the IGF up like a Christmas tree to make it more appealing. I think it is very important, and the discussions that we have are very relevant to them. Maybe there’s a need to induct them more into the IGF; maybe it’s a lack of awareness of the IGF and its importance. Most countries have just gone through elections, so there’s a new government or a new minister, and there’s a need to maybe induct them into the IGF, assuming the ministry hasn’t already done that, by having the stakeholders that are involved in IGF processes speak to them about those processes. I think that’s the best we can do at this point.

Annaliese Williams: Thanks, Plantina. Chris wants to jump in. I see that. I will let Chris jump in. But I did want to just flag, you said something about the national and regional IGFs, and so part of this discussion was do the IGF processes need to change somehow? So does there need to be some sort of mechanism through which national IGFs and the global IGFs sort of feed into each other?

Plantina Tsholofelo Mokone: So this is my understanding of the NRIs, right? As somebody that coordinates at the South African level, we have our national IGF ahead of the regional IGF, the Africa IGF and the global IGF. We write a report that we submit to those three structures, so it feeds the processes it needs to, going up through our minister’s office. I don’t think that should change, because the reality is that each region has its own unique challenges, so we need to speak to regional issues and then take those regional issues up to the global level. That structure should not change, because SADC issues and Europe issues or Africa issues are totally different. But when we get to the global level, there are best practice platforms where we exchange, on a continental level and on a regional level, how things could potentially be better or how things could work. Maybe I’m just thinking of it from how practical it is for me in coordinating an IGF and what I think I feed into the bigger global picture. But that can’t change, because I also speak to our regional issues, I also speak to our continental issues. My report supports those dialogues that address our issues, and then that report goes up into the global reports. Yeah.

Annaliese Williams: Thanks, Plantina.

Chris Buckridge: Sorry, I know I’m jumping back into here. Well, first actually I want to say to Plantina I think that that’s a really important point. On the flip side, the important, like, sending a report is wonderful. We on this side need to do something with that report. So how, what does that get, how does that get translated? And I think at the moment the IGF does not have a good sort of idea of how to do that. The other thing, the Christmas tree idea, dressing it up as a Christmas tree, my, the thing that has frustrated me for many years about the IGF and that I actually think the more we talk about it the more valuable, there is such a wealth of information in the archives of 20 years of IGF. We have videos, we have, there’s so much there. Most of it’s on YouTube, but it’s not in any usable form and we have tools, we have methods that we could pull out data, pull out summaries, pull out sort of, this is how many discussions there were of GDPR. These were some of the key themes that were talked about in relation to data governance. If there are people with deep pockets out there, listening, hello, I mean, that would be my Christmas wish for the IGF, would be to someone really step, and it has been tried before. There was a Friends of the IGF project, which worked for a while, and I think then kind of floundered and it was a very good step in the right direction. I would love to see that because there is so much information and it would be so valuable in selling the IGF, in bringing people into understanding what the IGF does, in giving them an insight into the different views of stakeholders. There is so much there and when we’re not using it and that’s my concern and my hope.

Annaliese Williams: Thanks, Chris. Jorge, I’ll come to you in a minute, but Everton has indicated that there is a comment online?

Online Moderator: Not online exactly, but I would like to take this opportunity, since Chris was talking about the hybrid IGF, to note that we have an audience out there watching us, and they are invited to send us comments. We do have one comment, one raised hand in the room but through Zoom, which is Jordan.

Annaliese Williams: I’ll go can I go to Jorge first and then I’ll come to Jordan and I saw someone yeah I’ll come to you

Jorge Cancio: Jay I’m so Jorge conscious with government again no it was a very short two fingers I understand that it’s very difficult to navigate all that IEGF information if we take the position of the 2015 technology but nowadays couldn’t we train a bot an IEGF bot and you ask what’s the IEGF ideas information on this it isn’t that difficult I’m in another life I’m also a civil rights activist in my country and we’ve done that with no money with no means and it works perfectly you can train it why don’t we do that

Annaliese Williams: So, Everton, perhaps we can pass the microphone to Jordan, and then... thanks. Oh, that’s weird.

Jordan Carter: Thanks, Annalise. Hi, everyone. My name is Jordan Carter. I’m a colleague of Annalise at the Australian Domain Administration. This is a personal view. Rather than a Christmas tree, I think that there is room for some session types that we don’t necessarily have at the moment. One of them that keeps coming up in my mind is being able to engage people on a draft piece of legislation or something. It’s almost like a legislative or a regulatory workshop where some group, it doesn’t have to be a best practice forum. It could simply be a workshop proposal or some legislative testing category of session we don’t have yet. Just give someone, it might be a country, it might be a group of activists, a way to bring a legislative possibility to the IGF community for input. And then to take on board all the input they get here and then to share it out afterwards. That’s some session type innovation. Another might be, how many of us have sat in IGF sessions and really hoped there would be an argument and there wasn’t because everyone agreed on everything? Or there’s the start of a really interesting argument that only emerges in the last 10 minutes of a two-hour panel because the people on the panel didn’t spend enough time prepping to know that they disagreed with each other. So I think even within the current framework there’s the chance for more effort and more organizing to be done. To do that, one of the IGF reforms that needs to happen is that this insane process where no one is in charge of the program needs to finish. I’ve done one year on the MAG, I’ve had my thank you and goodbye letter, who knows what’s happening next time. But the MAG does about a third of the program. And so two-thirds is by either a national government or by the Secretariat. There are multiple sessions on the same topics with almost the same angles and 400 sessions. What if there was 100 sessions covering pretty much the same topics but with four times the amount of brain power that was going into them to actually generate something savvy and interesting? And then the third point I guess I’d make is echoing the point about resources. This is, governments say, the premier space for multi-stakeholder engagement and dialogue, and then governments provide enough resources for five staff members. It doesn’t add up. Like, you know, with a bit more resource to do analysis and communicate the resources that are generated through the IGF process, there could be so much more value arising from the community effort that comes here. And governments are primarily responsible for the overall digital policy architecture. It would be nice if they put a tiny little bit more money where their mouths were on that front. Hopefully none of them was too offended by any of that.

Annaliese Williams: I don’t know that they were. I would just ask, should it only be governments that contribute to the funding of the IGF? That’s something to think about. We will go…

Wout de Natris: Thank you. My name is Wout de Natris. I’m a consultant in the Netherlands, but I’m also representing the Dynamic Coalition on Internet Standards, Security and Safety here in Riyadh. I think part of what I wanted to say was just covered by Jordan, so thank you, Jordan. I have heard a lot of things being said in the past half hour, and the only thing I can point to is that in 2017 and 2018 I wrote two reports that were presented to the MAG, with all sorts of recommendations on how the IGF could be strengthened. And we’re talking, six or seven years later, about the same ideas. Read the reports, they’re on the IGF website, and see what we can do with them. One of the examples is exactly this: we have ten sessions on AI, on human rights, on women’s rights. All these brilliant people sit in individual sessions. Why not put them together in a room for a day and say: you’re going to come out with recommendations, a toolkit and guidelines, and you’re going to present them a day later. Instead of having ten sessions with perhaps five times the same people talking on the same topic, where’s the added value? Because we have brilliant people. To what Jorge said on the dynamic coalitions and output: we as an IGF do need to start organizing around that output. I’m starting to sound like a broken record, I know, but this output is there. If you were at the main session this morning, you heard what these dynamic coalitions are doing. We are delivering the output, but we’re not doing anything with it. I could not even present my report, and it could not go on the IGF website because the website was broken, they said. So where do you go when you work for a whole year and (a) are not able to present it because you don’t get the time, and (b) are not able to put it on the website? What are we doing this effort for? That’s where things are broken, and that needs to be changed. And on the funding: I think a lot of governments should start stepping up, but we have been saying that for years as well. I think you’re going to pass it on to that side, so there’s a rare thing: you’re here, and then they don’t decide.

Desiree Miloshevic: Yeah, Desiree. I’ll switch. I’ll be very brief, and also agree on some innovative ways of making the IGF work more inclusively with the national IGFs and regional IGFs. I don’t think we have seen a collective output of what they need, or been able to transfer that even to the MAG, let alone the Secretariat, whether it’s a commitment to work on the IGF for the next 10 years or some kind of joint output. But also, in terms of these innovative ways, governments do get a lot of benefit out of the IGF, I believe. They might have a separate governmental track, but they also have a lot of bilateral meetings that we other stakeholders are not part of, and I think there’s a lot of value in gathering. Lastly, something that’s been mentioned is this discussion of legislative proposals from a region. What I’m witnessing here is that the Arab region has worked really in sync to look at the issues in their region. So maybe, wherever the IGF takes place, that regional community should come together and have this focus, maybe workshops like what was just suggested, working in a less workshoppy way, but for longer hours. Anyway, I’ll stop here for the sake of time. Thank you.

Annaliese Williams: Thanks, Desiree. We’ll go here, then here, and then I did want to turn the conversation around to the NetMundial guidelines. You may go ahead.

Masanobu Katoh: Masanobu Katoh from IGF Japan. I’d like to share some of the experiences we had last year. You remember that we had the IGF in Kyoto, which was very popular and drew big crowds. The Prime Minister spoke, and three ministers came and talked about their policy on the digital society. Very interestingly, major newspapers like the Nikkei reported that the Prime Minister had attended an international AI conference, or something like that. I’m not specifically criticizing any particular press; that was the general impression people had. It means that the IGF is not very well known to the public, and this is very important. People here know what the IGF is, and we are talking about improving the IGF, but the other angle is to look at the people who do not know it and do not know its real value. We have to be very active in getting more interest from those people. One other example, which I found very interesting: I met an AI expert in Japan who came to the Kyoto conference, and I asked him why. He said his counterpart in the UK or Europe invited him. So that means there is a vast majority of experts who do not know the IGF, but who have international collaborations and meetings, and they say, oh, the IGF is not a place to make any decisions; they don’t know how to deal there with the very specific issues they are working on. That’s not the case. Bring them in, in some way. And one suggestion is, like Desiree said, to focus more on the NRIs. Within the NRI in Japan, we started to invite more experts, having periodic meetings, discussing specific issues. If we invite government, for instance, to such places, we can probably get their interest more, and they may even think, well, let’s see what’s going on in the world, what’s going on at the IGF meeting. That’s the only way I think we can get more interest from other people. Thank you.

Baratang Miya: My name is Baratang Miya from GirlHype Women Who Code South Africa, and I have organized a women in IGF summit every year for the past three years. I want to talk about my experience, because I listened to the earlier comments, and maybe my experience will clarify why some things took a different turn. I was invited by the Secretary-General of the UN in 2019; there was a group of us, I think five women invited from across the world. That was because, prior to that, women had been saying there’s no women’s voice in the IGF, there isn’t meaningful contribution of women’s voices, their workshops are being declined, and they do not know how to fill in the forms, and something had to be done, hence we were brought in. That was in Germany, thanks to the German government, because they made sure we were there. After that, I had learned so much about policies and how to contribute to the space, and I never stopped. I kept coming, and I learned how to write workshop proposals. That has become a main thing that we discuss in the African NRIs. So if you say there are 400 sessions with lots of voices saying the same thing, it’s purely because they’re facing a challenge of whom to decline; if they decline based only on what they consider to be the good sessions, they might find that they have declined lots of Africans, lots of women, and for diversity and equity they need to find out how to balance it and how to get the right perspectives. Maybe your suggestion of putting all of them in one room to come up with a proper solution might be the best solution, but to reduce the number of sessions is going to take the IGF backwards. We’re going to lose what the IGF has worked so hard to bring into the space, which is youth and women. Thank you.

Annaliese Williams: Sorry, I'm just writing that down so I don't forget. So I have heard several times that a deeper focus on the issues, rather than multiple surface-level discussions, might be useful, and we had the observation that under the current program design there isn't a single source in charge, so perhaps that's something we do need to think about: how we design the programs in the future, and how we create the space for actually solving problems, or at least coming to a genuine understanding of where all the differences of opinion are, instead of just talking about the issues at a surface level. And in terms of outputs of multi-stakeholder meetings, Renata, earlier this year we had NetMundial, which did come up with a multi-stakeholder output in the guidelines that could be useful for other processes. So I just wondered if we could perhaps start with you on whether or how the NetMundial guidelines could be applied in the IGF context, or in other contexts as well.

Renata Mielli: Thanks, Annaliese. Just before answering your question: I think this is a very good debate about the programming, because, yes, reducing the number of sessions may have an impact on diversity, but on the other hand it is so frustrating and exhausting trying to follow all the panels; we go a bit crazy, to be very honest. So this is a very good point. The new MAG, because we are going to have a new MAG, maybe needs to go deeper into this discussion about how to improve the programming, maybe build some consultations with the community. I don't know. I don't have the answer, but I have the question. About the NetMundial+10 guidelines: I think we need to start from these guidelines, but not just look at them and try to fit all of those points onto the IGF or the regional IGFs. We need to discuss how, and with which guidelines, to start, because there are a lot of good ideas and propositions in those guidelines about how we can guarantee more diversity and real participation of all the stakeholders, but we state the idea without saying how to do it, and there are a lot of things we have to think about. For example, a very simple one: to come to the IGF, you need to speak English, because there is no other language in which to participate, and for a lot of countries this is a very, very restrictive point. In Brazil we don't have a lot of people who speak English in civil society, or even in academia or the private sector; it is not natural. So that's one point. We say the internet needs to be multilingual, needs to have language diversity. How can we do this at the IGF? That's something that occurs to me now, because to guarantee diversity we have to think about which gaps we need to fill. This is one. The problem of funding, the cost of travelling and staying, is another. Maybe the hybrid format is an answer, but it's only part of an answer, not the whole of it. I also think we need to do some work on understanding what kind of outcome the IGF can deliver. Because we say we need to have something, but what is this something? We didn't do this work at NetMundial+10, for example. We say we need an outcome, something more concrete, but what is this more concrete thing that the IGF should produce as an outcome that has some impact? Are we going to choose one issue for each IGF, maybe? I don't know. Let's talk about artificial intelligence and produce some outcome, or let's talk about, I don't know, data governance. How can we build that? Are we going to hold prior consultations with the community about some of these ideas? I don't know. So, to apply the NetMundial guidelines to the IGF, I think we need to do some homework, and this is a new challenge for all of us. That's what I see for now.

Annaliese Williams: Thanks, Renata. Just listening to you speak about the language challenges and thinking about the programming issues, perhaps there is something there about better coordinating the program of the global IGF with the national and regional ones, so that at least on some issues everybody can be having the same conversation in their region, in their language. And maybe there is something about putting forward, for global discussion, the positions and thoughts from those regions on a particular issue to be discussed here. I think we had a question or a comment at the back.

Anriette Esterhuysen: Just in response to Renata, I'm Anriette Esterhuysen from APC. Just very quickly, we had a similar session earlier today that also looked at NetMundial, the Global Digital Compact and the IGF. And someone from the Swiss government, who's not called Jorge, made a very good suggestion about applying the NetMundial guidelines on how you scope an issue and then identify who is affected by that issue, who the stakeholders are. There are other guidelines as well within NetMundial, but she suggested that we look at the IGF messages and how they are produced and then distributed, using the relevant bits of the NetMundial guidelines. And I thought that was such a good, practical, concrete suggestion.

Annaliese Williams: Thanks, Anriette. I think we have another comment, was it online or did somebody else want to speak? Okay, go ahead. Oh, hello. Thank you.

Galvanian Burke: My name is Galvanian Burke, civil society. I also have a practical suggestion. As an attendee, the experience using the digital tools is quite difficult. I believe you could focus more on the hybrid format, because many people cannot afford to travel, but provide them a great experience. Creating an account is difficult, browsing the schedule is difficult, finding the speakers is difficult, finding the Zoom link is difficult. We did an event a couple of weeks ago on Zoom Events, events.zoom.us, and it was extremely user-friendly. Maybe using that platform as a test could prove useful, and you might have many more people joining, because the IGF is an incredible and unique forum. The first time I came, I didn't believe my eyes, it was so good. But now I also feel frustrated, because finding content and resources is extremely difficult. And maybe you do not have the money to do what you want, but you've got a tremendous wealth of knowledge, of people and of resources. So leverage your community, see what you do have in terms of people and what they can do. And maybe we can create a platform all together and apply the principles that we see are good for the future of the internet. Thank you.

Annaliese Williams: Thank you. Chris, did you want to speak?

Chris Buckridge: It was a very brief comment, I think. Kind of drawing on what Anriette said and on Jordan's comment before about evolving and innovating in formats: we have an IGF which usually has several sub-themes. It could be that we look at each of those sub-themes, in a sense, as a distinct conference with its own modalities. So if you had, for instance, a data governance sub-theme in a year where there was a need or a desire to produce some sort of output, like the equivalent of the São Paulo guidelines, that could be the focus of that theme, whereas another theme, like AI or whatever, could be more traditional IGF, a bit freeform. So, yeah, I think there are possibilities there to think in terms of different formats.

Annaliese Williams: Thanks, Chris. And just in terms of the language issue, Amrita, do you have any views? You’re very involved in the Asia-Pacific IGF. Anything you’d like to share?

Amrita Choudhury: So I think language is an issue, as Renata mentioned, and it is at the regional level also. In Asia-Pacific it's more complicated because you have so many languages, not just one or two; even within a single country you may have 22 languages. So it's difficult, and resources are a challenge when documents are produced and then have to be translated. But with technology improving, translation may become cheaper; even if not absolutely accurate, at least people can get the essence. So I think that can help, and it's important. Another thing, Chris: while I do get your point about focusing on one topic, a topic that is important to many parts of the world may not matter to some people, so obviously a nice balance would be needed. The other point I wanted to make, because we were speaking about the NRIs, is that they are a huge achievement of the IGF which we tend to forget. Even least developed countries now have NRI initiatives. Speaking from Asia-Pacific: the Pacific islands are doing it, small and landlocked countries like Nepal are doing it, and even Afghanistan, where you cannot hold it in person, recently held the Afghan IGF and NOG in hybrid mode. So it also gives empowerment in places like Afghanistan where you otherwise can't do things; we sometimes lose sight of what holding an IGF triggers. They're talking about the SDGs, they're talking about their national goals, which we forget. And obviously they feed into the IGF, and whatever they hear here, they take back home. Then, getting back to how we can improve: we can improve our hybrid meetings. Sorry to say, this is not really a hybrid meeting we are having at this point in time; we need to be better at our hybrid modes. Because if you follow a session in hybrid mode nowadays, many times you can get the text and also translate it into at least the major languages, and that also helps on YouTube, etc. We love and hate big tech, but they do help with innovation. Thank you.

Annaliese Williams: Thanks, Amrita. Jorge, did you want to? Do you hear me? Yes.

Jorge Cancio: Jorge Cancio, Swiss government again. Just two short points. On languages, and as we are talking about the São Paulo multistakeholder guidelines: I don't think you or Everton have mentioned it, but it's important to note that they are available in about nine or ten languages, and surely more will be forthcoming, among them the official UN languages: Arabic, Russian, Chinese, English, French, Spanish. We also have them in Portuguese, German, Italian and Japanese, and there is already talk of translating them into some of the languages of Nigeria. That's very useful, I think. My Brazilian friends are too modest to say it loudly, but we should, because the guidelines are also very useful at the local level, at the regional level, at many levels. In fact, the IGF could invite the NRIs to consider them, to see where they can be applied or where inspiration can be drawn from them. And of course it would be a bit weird to invite others but not walk the talk ourselves. Many BPFs, policy networks and dynamic coalitions are already producing outputs, already striving to develop recommendations or best practice examples. So maybe I'm very naive, but I think it would be easy to take a look at what they are doing and how they are doing it, and compare that with the São Paulo multistakeholder guidelines; perhaps there are one or two things they can improve, or maybe they already do everything perfectly, that's possible. But at least I think we are going to have a conversation, also in some national and regional NRIs, about how we can look into that, and whether we, for instance, run our call for issues in a way that is consistent with them. Thank you.

Annaliese Williams: Thanks, Jorge. Renata, you wanted to respond? I think we’re close to…

Renata Mielli: Just to share another experience, another initiative that we have. At CGI we started organizing the Fórum Lusófono da Internet; "lusófono" means Portuguese-speaking, so it is a forum on internet governance for the Portuguese-speaking countries. It's interesting, because we are bringing together all the countries that speak Portuguese to discuss internet governance. We held the first edition in São Paulo in 2023, this year's edition in Cabo Verde in August or September, and the next one will be in Mozambique. So this is another kind of forum: there is no regional approach, but a linguistic approach, where we discuss how to improve the presence of the Portuguese language on the internet. This is a work in progress too, because we are inventing new things, and it is important for achieving this goal of bringing more people into the governance space to debate these issues. So let's be creative.

Annaliese Williams: Thanks, Renata. Everton, was there a comment online? Yeah. Thank you, Annaliese.

Audience: We have three comments, or maybe some more. One from Mike Nelson, that at IGF USA they have organized some very exciting and lively debates over the years. Thank you, Mike. One from Jordan Carter, that non-state stakeholders do sometimes fund the IGF trust fund and maybe more can be done. And another one from Avri Doria, that the suggestion to decrease the number of sessions has been made almost every year of the IGF, if she recalls correctly. Many, however, find the many sessions a rich resource that can be used long after the four-day meeting is over. We need to stop thinking of the IGF as a once-a-year event: we have some intersessional work, but the notion of ongoing work is still foreign, so the MAG is still a program committee for a once-a-year conference. There is also one more comment here, that improving hybrid would be a really good idea, from Pedro Lama, and a comment about a guide to hybrid events from Kiki.

Annaliese Williams: Currently there is a guide to hybrid events, so we will take note of that. So we’re almost out of time, but I did want to just quickly ask all of our speakers in 30 seconds or so, having heard the discussion today and having your own ideas, if there was sort of one thing that you could do for the next IGF or for future IGFs, like one concrete idea, what would it be? What do you think needs to happen? Any volunteers? Should we start with you, Jorge, and just go around the table? Or should we start this way? We’ll go with Amrita first.

Amrita Choudhury: Next year is important: a more strategically focused IGF, a more strategically focused program, so that the IGF achieves the end results we want to achieve in WSIS+20.

Annaliese Williams: Plantina?

Plantina Tsholofelo Mokone: After the discussion we just had on language and hybrid participation, I would improve those, to make the IGF more inclusive of participants who are unable to attend in person.

Renata Mielli: Oh my god, it's so difficult to choose one, but, because I agree with Amrita that this is a very strategic IGF, maybe we can start earlier with some kind of consultation on what we want to achieve with WSIS+20. Let's put something on the net and listen to the community before the IGF starts. Maybe it would also be interesting to draw on some of the São Paulo NetMundial guidelines. I don't know.

Chris Buckridge: So I'm going to go back to my earlier point: the one thing I'd love to see is a focus on cataloging and making usable and useful the rich data set that is the whole of the IGF archives.

Jorge Cancio: So I vote for a bot that makes that accessible and apart from the bot, I think we need an IGF in Oslo that is relevant, that shows that this community delivers on the WSIS vision and that we are ready to update it to make it fit for purpose.

Annaliese Williams: So that brings us to the end. I think we do have some good ideas to be thinking about, and although I know nobody knows yet, if there are any people in the room who find themselves on the next MAG, perhaps they can take some of these ideas about the strategic focus for next time into consideration. But please thank all of the speakers, and thank everyone for being part of this conversation. Thanks, everybody online.

Amrita Choudhury

Speech speed: 153 words per minute
Speech length: 905 words
Speech time: 353 seconds

IGF needs to be more focused and empowered

Explanation

Amrita argues that the IGF needs to become more focused and empowered to meet modern challenges. She suggests that the IGF could be a place for testing ideas and tracking implementation of various initiatives.

Evidence

Mentions the working group strategy document with concrete measures for IGF evolution

Major Discussion Point

Evolution of the IGF to meet modern challenges

Agreed with

Chris Buckridge

Renata Mielli

Plantina Tsholofelo Mokone

Jorge Cancio

Agreed on

IGF needs to evolve to meet modern challenges

Improve hybrid format and accessibility

Explanation

Amrita suggests improving the hybrid format of IGF to make it more inclusive and accessible. She emphasizes the need for better technology and translation services to overcome language barriers.

Evidence

Mentions the challenges of language diversity in the Asia-Pacific region

Major Discussion Point

Improving IGF programming and format

Agreed with

Chris Buckridge

Wout de Natris

Baratang Miya

Agreed on

Improve IGF programming and format

Focus on strategic issues related to WSIS+20 review

Explanation

Amrita proposes that the next IGF should be more strategically focused, particularly in relation to the WSIS+20 review. This would help align the IGF’s work with broader global digital governance processes.

Major Discussion Point

Improving IGF outputs and impact

Chris Buckridge

Speech speed: 166 words per minute
Speech length: 1763 words
Speech time: 637 seconds

IGF has evolved over time and should continue to do so

Explanation

Chris emphasizes that the IGF has always been a work in progress and has evolved since its inception. He argues that this evolution needs to continue to meet current challenges and contexts.

Evidence

Cites examples of IGF’s impact, such as fostering IXP development in Africa and the NRI ecosystem

Major Discussion Point

Evolution of the IGF to meet modern challenges

Agreed with

Amrita Choudhury

Renata Mielli

Plantina Tsholofelo Mokone

Jorge Cancio

Agreed on

IGF needs to evolve to meet modern challenges

Differed with

Renata Mielli

Differed on

Focus of IGF discussions

Consider different formats for different themes

Explanation

Chris suggests that the IGF could have different formats for different sub-themes. This could allow for more flexibility in addressing various topics and producing different types of outputs.

Evidence

Gives an example of having a data governance sub-theme with a specific output format, while other themes could have more traditional formats

Major Discussion Point

Improving IGF programming and format

Agreed with

Wout de Natris

Baratang Miya

Amrita Choudhury

Agreed on

Improve IGF programming and format

Make IGF archives and data more accessible and usable

Explanation

Chris proposes focusing on cataloging and making the rich dataset of IGF archives more usable and useful. This would help leverage the wealth of information accumulated over years of IGF discussions.

Major Discussion Point

Improving IGF outputs and impact

Maintain broad funding base from multiple stakeholders

Explanation

Chris emphasizes the importance of maintaining a broad funding base for the IGF from multiple stakeholders. He argues that this helps prevent capture by any single group and ensures multi-stakeholder decision-making.

Major Discussion Point

Enhancing multi-stakeholder participation

Renata Mielli

Speech speed: 105 words per minute
Speech length: 1621 words
Speech time: 922 seconds

IGF discussions need to lead to more concrete outcomes

Explanation

Renata argues that the IGF needs to move beyond just discussions and produce more concrete proposals and recommendations. She emphasizes the need for the community’s voices to have an impact on decision-making processes.

Major Discussion Point

Evolution of the IGF to meet modern challenges

Agreed with

Amrita Choudhury

Chris Buckridge

Plantina Tsholofelo Mokone

Jorge Cancio

Agreed on

IGF needs to evolve to meet modern challenges

Differed with

Chris Buckridge

Differed on

Focus of IGF discussions

Produce more concrete recommendations and guidelines

Explanation

Renata suggests that the IGF should focus on producing more tangible outputs such as recommendations and guidelines. This would make the IGF more relevant in shaping digital policies.

Major Discussion Point

Improving IGF outputs and impact

Address language barriers to participation

Explanation

Renata highlights the issue of language barriers in IGF participation, particularly for non-English speakers. She suggests that this is a significant obstacle to diversity and inclusivity in the IGF process.

Evidence

Mentions the example of Brazil, where many people in civil society, academia, and private sector do not speak English

Major Discussion Point

Enhancing multi-stakeholder participation

Demonstrate IGF’s relevance to shaping digital policies

Explanation

Renata emphasizes the need for the IGF to show its relevance in shaping digital policies. She suggests that this is crucial for attracting more participation from decision-makers and stakeholders.

Major Discussion Point

Improving IGF outputs and impact

Start consultations earlier on desired outcomes

Explanation

Renata proposes starting consultations earlier regarding what the community wants to achieve with the WSIS+20 review. This could help in setting clear goals and expectations for the IGF.

Major Discussion Point

Improving IGF outputs and impact

Plantina Tsholofelo Mokone

Speech speed: 155 words per minute
Speech length: 1205 words
Speech time: 464 seconds

IGF should produce actionable items at continental and country levels

Explanation

Plantina argues that IGF discussions should lead to actionable items at continental and country levels. She emphasizes the need for practical outcomes that can be implemented in individual countries.

Major Discussion Point

Evolution of the IGF to meet modern challenges

Agreed with

Amrita Choudhury

Chris Buckridge

Renata Mielli

Jorge Cancio

Agreed on

IGF needs to evolve to meet modern challenges

Jorge Cancio

Speech speed: 111 words per minute
Speech length: 1240 words
Speech time: 666 seconds

IGF should be seen in context of broader WSIS architecture

Explanation

Jorge emphasizes that the IGF should be viewed as part of the larger WSIS family and architecture. He argues that understanding this context is crucial for the IGF’s evolution and effectiveness.

Evidence

Describes the WSIS architecture, including action lines, WSIS forum, and CSTD

Major Discussion Point

Evolution of the IGF to meet modern challenges

Agreed with

Amrita Choudhury

Chris Buckridge

Renata Mielli

Plantina Tsholofelo Mokone

Agreed on

IGF needs to evolve to meet modern challenges

Apply NetMundial multi-stakeholder guidelines to IGF processes

Explanation

Jorge suggests applying the Sao Paulo NetMundial multi-stakeholder guidelines to IGF processes. He argues that these guidelines could improve the IGF’s inclusivity and effectiveness.

Evidence

Mentions that the guidelines are available in multiple languages and could be applied at local, regional, and global levels

Major Discussion Point

Enhancing multi-stakeholder participation

Jordan Carter

Speech speed: 182 words per minute
Speech length: 492 words
Speech time: 162 seconds

IGF needs to be more relevant to decision-makers

Explanation

Jordan argues that the IGF needs to become more relevant to decision-makers. He suggests introducing new session types, such as legislative workshops, to engage policymakers more effectively.

Evidence

Proposes a ‘legislative testing’ category of session where draft legislation could be brought for community input

Major Discussion Point

Evolution of the IGF to meet modern challenges

Galvanian Burke

Speech speed: 148 words per minute
Speech length: 217 words
Speech time: 87 seconds

IGF should leverage its community and resources better

Explanation

Galvanian suggests that the IGF should better leverage its community and resources. He emphasizes the need to improve the digital tools and user experience for IGF participants.

Evidence

Mentions difficulties in using current digital tools for IGF participation and suggests using platforms like Zoom events

Major Discussion Point

Evolution of the IGF to meet modern challenges

Wout de Natris

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Reduce number of sessions for more focused discussions

Explanation

Wout suggests reducing the number of IGF sessions to allow for more focused and productive discussions. He argues that this could lead to more concrete outputs and recommendations.

Evidence

Proposes putting experts together for a day to come up with recommendations, toolkits, and guidelines

Major Discussion Point

Improving IGF programming and format

Agreed with

Chris Buckridge

Baratang Miya

Amrita Choudhury

Agreed on

Improve IGF programming and format

Differed with

Baratang Miya

Differed on

Number of IGF sessions

Baratang Miya

Speech speed: 170 words per minute
Speech length: 334 words
Speech time: 117 seconds

Maintain diversity while improving programming

Explanation

Baratang emphasizes the importance of maintaining diversity in IGF sessions while improving programming. She argues that reducing the number of sessions could negatively impact the inclusion of voices from underrepresented groups.

Evidence

Shares personal experience of being invited to IGF to address the lack of women’s voices

Major Discussion Point

Improving IGF programming and format

Agreed with

Chris Buckridge

Wout de Natris

Amrita Choudhury

Agreed on

Improve IGF programming and format

Differed with

Wout de Natris

Differed on

Number of IGF sessions

Annaliese Williams

Speech speed: 135 words per minute
Speech length: 2280 words
Speech time: 1012 seconds

Better coordinate global IGF with national/regional IGFs

Explanation

Annaliese suggests better coordination between the global IGF and national/regional IGFs. This could help address language barriers and ensure more diverse participation in global discussions.

Major Discussion Point

Enhancing multi-stakeholder participation

Masanobu Katoh

Speech speed: 130 words per minute
Speech length: 351 words
Speech time: 161 seconds

Leverage national/regional IGFs to increase participation

Explanation

Masanobu proposes leveraging national and regional IGFs to increase participation in the global IGF. He suggests that this could help attract more experts and raise awareness about the IGF among those who are not familiar with it.

Evidence

Shares experience from IGF Japan and interactions with AI experts

Major Discussion Point

Enhancing multi-stakeholder participation

Audience

Speech speed: 118 words per minute
Speech length: 170 words
Speech time: 86 seconds

Organize more lively debates

Explanation

A comment from the audience suggests organizing more exciting and lively debates at the IGF. This could make sessions more engaging and productive.

Evidence

Mentions experience from IGF USA

Major Discussion Point

Improving IGF programming and format

Make IGF a year-round process, not just annual event

Explanation

An audience comment proposes making the IGF a year-round process rather than just an annual event. This could help in producing more substantial outcomes and maintaining ongoing work.

Major Discussion Point

Improving IGF programming and format

Agreements

Agreement Points

IGF needs to evolve to meet modern challenges

Amrita Choudhury

Chris Buckridge

Renata Mielli

Plantina Tsholofelo Mokone

Jorge Cancio

IGF needs to be more focused and empowered

IGF has evolved over time and should continue to do so

IGF discussions need to lead to more concrete outcomes

IGF should produce actionable items at continental and country levels

IGF should be seen in context of broader WSIS architecture

Speakers agree that the IGF needs to adapt and evolve to address current global digital challenges more effectively, with a focus on producing more concrete and actionable outcomes.

Improve IGF programming and format

Chris Buckridge

Wout de Natris

Baratang Miya

Amrita Choudhury

Consider different formats for different themes

Reduce number of sessions for more focused discussions

Maintain diversity while improving programming

Improve hybrid format and accessibility

Speakers agree on the need to improve IGF programming and format, balancing the need for more focused discussions with maintaining diversity and improving accessibility.

Similar Viewpoints

These speakers emphasize the need for the IGF to produce more concrete, actionable outputs that can be applied at various levels of governance.

Renata Mielli

Plantina Tsholofelo Mokone

Jorge Cancio

Produce more concrete recommendations and guidelines

IGF should produce actionable items at continental and country levels

Apply NetMundial multi-stakeholder guidelines to IGF processes

Both speakers highlight the importance of addressing language barriers and improving accessibility to enhance participation in the IGF.

Amrita Choudhury

Renata Mielli

Address language barriers to participation

Improve hybrid format and accessibility

Unexpected Consensus

Leveraging IGF archives and community resources

Chris Buckridge

Galvanian Burke

Make IGF archives and data more accessible and usable

IGF should leverage its community and resources better

Despite coming from different backgrounds, both speakers unexpectedly agree on the need to better utilize existing IGF resources and community knowledge, suggesting a shared recognition of untapped potential in the IGF’s accumulated expertise.

Overall Assessment

Summary

The main areas of agreement include the need for IGF evolution to meet modern challenges, improving IGF programming and format, producing more concrete and actionable outputs, and enhancing accessibility and participation.

Consensus level

There is a moderate to high level of consensus among speakers on the need for IGF reform and evolution. This consensus implies a strong foundation for implementing changes to make the IGF more effective and relevant in addressing global digital governance challenges. However, the specific methods for achieving these goals vary among speakers, suggesting that detailed implementation plans would require further discussion and negotiation.

Differences

Different Viewpoints

Number of IGF sessions

Wout de Natris

Baratang Miya

Reduce number of sessions for more focused discussions

Maintain diversity while improving programming

Wout de Natris suggests reducing the number of IGF sessions for more focused discussions, while Baratang Miya argues that reducing sessions could negatively impact the inclusion of underrepresented voices.

Focus of IGF discussions

Renata Mielli

Chris Buckridge

IGF discussions need to lead to more concrete outcomes

IGF has evolved over time and should continue to do so

Renata Mielli emphasizes the need for more concrete outcomes from IGF discussions, while Chris Buckridge focuses on the ongoing evolution of the IGF process itself.

Unexpected Differences

Approach to IGF programming

Chris Buckridge

Audience

Consider different formats for different themes

Make IGF a year-round process, not just annual event

While both suggestions aim to improve IGF programming, they represent unexpectedly different approaches. Chris suggests varying formats within the annual event, while the audience comment proposes extending the IGF process throughout the year, which could significantly change the nature of the forum.

Overall Assessment

Summary

The main areas of disagreement revolve around the structure of IGF sessions, the focus of discussions, and the nature of IGF outputs. There are also differing views on how to improve accessibility and relevance to decision-makers.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the need for IGF evolution, speakers have varying ideas on how to achieve this. These differences reflect the complex nature of internet governance and the challenges in balancing diverse stakeholder interests. The implications of these disagreements suggest that any changes to the IGF format or focus will require careful consideration and compromise among different stakeholder groups.

Partial Agreements

These speakers agree on the need for more concrete outputs from the IGF, but propose different approaches: Renata suggests focusing on recommendations and guidelines, Jorge proposes applying existing NetMundial guidelines, and Jordan suggests new session types like legislative workshops.

Renata Mielli

Jorge Cancio

Jordan Carter

Produce more concrete recommendations and guidelines

Apply NetMundial multi-stakeholder guidelines to IGF processes

IGF needs to be more relevant to decision-makers

Both speakers agree on the need for more practical outcomes from the IGF, but Amrita focuses on improving the hybrid format for better accessibility, while Plantina emphasizes producing actionable items at different geographical levels.

Amrita Choudhury

Plantina Tsholofelo Mokone

Improve hybrid format and accessibility

IGF should produce actionable items at continental and country levels

Takeaways

Key Takeaways

The IGF needs to evolve to be more focused, empowered, and relevant to decision-makers

There is a need to improve IGF programming by reducing the number of sessions while maintaining diversity

Enhancing multi-stakeholder participation, especially from governments, is crucial

The IGF should produce more concrete outcomes and actionable recommendations

Improving the hybrid format and accessibility of the IGF is important

Better coordination between global, regional, and national IGFs is needed

The IGF should leverage its community and existing resources more effectively

Resolutions and Action Items

Consider applying NetMundial multi-stakeholder guidelines to IGF processes

Focus on making the 2024 IGF in Oslo strategically relevant to the WSIS+20 review

Improve cataloging and accessibility of IGF archives and data

Start earlier consultations on desired outcomes for the next IGF

Explore ways to address language barriers in IGF participation

Unresolved Issues

How to balance reducing the number of sessions with maintaining diversity and inclusivity

Specific mechanisms for producing more concrete outcomes from IGF discussions

How to secure sustainable and diverse funding for the IGF

Ways to make IGF more appealing and relevant to government stakeholders

How to effectively transform IGF into a year-round process rather than just an annual event

Suggested Compromises

Using different formats for different thematic tracks within the IGF to balance focused outcomes with diverse discussions

Leveraging technology like AI bots to make IGF archives more accessible while working on more comprehensive solutions

Coordinating global IGF themes with regional and national IGFs to allow for discussions in local languages while feeding into global conversations

Thought Provoking Comments

IGF could evolve to meet most of the requirements which is being portrayed as gap areas. For example, it could be the place where everyone can come and it could be a test bed for people. It could be a place where the GDC’s implementations could be tracked. It could also be a place where even governments come and test out what they want to do, et cetera, apart from other things.

speaker

Amrita Choudhury

reason

This comment provides concrete suggestions for how the IGF could evolve to become more relevant and impactful.

impact

It set the tone for discussing specific ways the IGF could change and expand its role, leading to further discussion on outputs and government involvement.

We need to demonstrate that there is no contradiction between strengthening multistakeholder spaces and processes and the role of multilateral spaces. We are in this crazy moment that we are something against another. And we have to stop this and work together in a complementary way.

speaker

Renata Mielli

reason

This insight challenges the perceived dichotomy between multistakeholder and multilateral approaches, suggesting a more integrated perspective.

impact

It shifted the conversation towards considering how different governance approaches could work together rather than in opposition, leading to discussion of the IGF’s role in the broader internet governance ecosystem.

What if there was 100 sessions covering pretty much the same topics but with four times the amount of brain power that was going into them to actually generate something savvy and interesting?

speaker

Jordan Carter

reason

This comment proposes a radical restructuring of the IGF format to potentially increase its impact and efficiency.

impact

It sparked a debate about the trade-offs between quantity and quality of sessions, as well as considerations of diversity and inclusivity in programming.

To reduce the number of session, it’s going to take the IGF backwards. We’re going to lose what the IGF has worked so hard to bring into the space, which is youth and women.

speaker

Baratang Miya

reason

This comment provides an important counterpoint to suggestions of reducing sessions, highlighting potential unintended consequences.

impact

It added complexity to the discussion about IGF reform, emphasizing the need to balance efficiency with inclusivity and diversity.

I think we need to do some work in understanding what kind of outcome the IGF can deliver. Because we say we need to have something. But what is this something?

speaker

Renata Mielli

reason

This comment cuts to the heart of the IGF’s purpose and challenges participants to define concrete goals.

impact

It refocused the discussion on the fundamental question of the IGF’s purpose and outputs, leading to more specific suggestions about potential outcomes.

Overall Assessment

These key comments shaped the discussion by moving it from general observations about the IGF’s challenges to more specific proposals for reform. They introduced tension between different priorities (efficiency vs. inclusivity, concrete outputs vs. open dialogue) that reflect the complex nature of the IGF’s mission. The discussion evolved from identifying problems to proposing solutions, while also recognizing the potential trade-offs and unintended consequences of various reform ideas. This led to a more nuanced understanding of the challenges facing the IGF and the careful balance required in any attempts to evolve the forum.

Follow-up Questions

How can the IGF better facilitate conversations between governments and other experts?

speaker

Annaliese Williams

explanation

This is important to address the lack of stakeholder balance at IGF meetings and improve meaningful dialogue between different groups.

How can the IGF be made more appealing and attractive, particularly to governments who might only attend once?

speaker

Annaliese Williams

explanation

Increasing government participation is crucial for the IGF’s relevance and impact on policy-making.

How can the IGF program be better coordinated with national and regional IGFs?

speaker

Annaliese Williams

explanation

This could help address language challenges and ensure more coherent global discussions on key issues.

How can the IGF’s vast archive of information be made more accessible and usable?

speaker

Chris Buckridge

explanation

Utilizing this wealth of information could enhance the IGF’s value and impact beyond the annual event.

How can the IGF improve its hybrid format to provide a better experience for remote participants?

speaker

Galvanian Burke and Amrita Choudhury

explanation

Enhancing the hybrid experience is crucial for increasing participation and inclusivity, especially for those who cannot afford to travel.

How can the IGF address language barriers to increase participation from non-English speaking countries?

speaker

Renata Mielli

explanation

Overcoming language barriers is essential for true global representation and diversity in IGF discussions.

How can the IGF produce more concrete outcomes or recommendations that have an impact on decision-making processes?

speaker

Renata Mielli and Plantina Tsholofelo Mokone

explanation

Creating more tangible outputs could increase the IGF’s relevance and influence on internet governance policies.

How can the IGF program be restructured to have fewer, more focused sessions without compromising diversity?

speaker

Jordan Carter and Baratang Miya

explanation

Balancing the need for more in-depth discussions with maintaining diversity of voices is crucial for the IGF’s effectiveness.

How can the São Paulo NetMundial guidelines be applied to improve the IGF process?

speaker

Renata Mielli and Jorge Cancio

explanation

Implementing these guidelines could enhance the multi-stakeholder nature of the IGF and improve its outcomes.

How can the IGF be more strategically focused in preparation for the WSIS+20 review?

speaker

Amrita Choudhury and Renata Mielli

explanation

A more strategic approach could help the IGF demonstrate its relevance and impact in the context of the upcoming WSIS review.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #255 AI and disinformation: Safeguarding Elections

WS #255 AI and disinformation: Safeguarding Elections

Session at a Glance

Summary

This discussion focused on the impact of artificial intelligence (AI) on elections, disinformation, and democratic processes. Panelists from various countries shared insights on how AI has been used in recent elections worldwide. While fears of widespread AI-generated deepfakes disrupting elections did not fully materialize, AI was utilized in campaign strategies, such as automating responses to voter inquiries and creating personalized content. The discussion highlighted both positive and negative aspects of AI in elections. On one hand, AI tools have empowered smaller candidates to compete more effectively with limited resources. On the other hand, concerns were raised about the potential for AI to be used for voter manipulation and the spread of misinformation.


The panelists emphasized the need for transparency in how social media platforms use algorithms to promote political content. They discussed the challenges of content moderation, particularly in languages with limited online representation. The case of Romania’s recent election cancellation due to foreign interference and algorithmic manipulation was cited as a wake-up call for the potential risks of AI in electoral processes. The discussion also touched on the broader implications for democracy, including the need to update electoral institutions and processes to address technological challenges.


Participants debated the role and accountability of social media platforms in elections, with some arguing for increased regulation and others cautioning against over-reliance on these private entities. The conversation concluded by acknowledging that while technological governance is crucial, addressing underlying social issues like poverty and isolation is equally important in combating the spread of misinformation and preserving democratic integrity in the age of AI.


Keypoints

Major discussion points:


– The impact of AI on elections and disinformation, including both fears and realities


– The use of AI by political campaigns, both for promotion and attacking opponents


– Platform governance and transparency issues related to AI and elections


– The potential for AI to both help and hinder election integrity and democratic processes


– The need for regulatory frameworks and digital literacy to manage AI risks in elections


The overall purpose of the discussion was to examine how AI is affecting elections globally, looking at both positive and negative impacts, and to consider what policy and governance approaches may be needed to address emerging challenges.


The tone of the discussion was largely analytical and cautiously optimistic. While speakers acknowledged serious risks and challenges posed by AI in elections, they also highlighted potential benefits and ways to mitigate negative impacts. The tone became somewhat more concerned when discussing specific cases of election interference, but remained focused on finding constructive solutions.


Speakers

– Tapani Tarvainen: Moderator


– Ayobangira Safari Nshuti: Member of Parliament of Democratic Republic of Congo


– Roxana Radu: Chair of the Global Internet Governance Academic Network, Assistant Professor at Oxford University


– Babu Ram Aryal: Chair of Digital Freedom Coalition from Nepal


– Dennis Redeker: Online moderator


Additional speakers:


– Nana: Works on AI and ethics


Full session report

The Impact of AI on Elections and Democratic Processes


This discussion, moderated by Tapani Tarvainen, brought together experts from various countries to examine the impact of artificial intelligence (AI) on elections, disinformation, and democratic processes. The panel included Ayobangira Safari Nshuti, a Member of Parliament from the Democratic Republic of Congo; Roxana Radu (participating online), Chair of the Global Internet Governance Academic Network and Assistant Professor at Oxford University; and Babu Ram Aryal, Chair of the Digital Freedom Coalition from Nepal.


AI’s Evolving Role in Recent Elections


The panellists observed that AI’s impact on elections in 2023-2024 differed from initial expectations. Ayobangira Safari Nshuti noted that AI was primarily used for self-promotion rather than attacks on opponents, citing examples such as AI-generated speech in Pakistan. Interestingly, AI tools helped smaller candidates compete more effectively with larger ones by providing similar campaign capabilities, potentially levelling the playing field in political contests.


Roxana Radu emphasized that AI has been used for both positive and negative purposes in elections. She highlighted positive examples, such as AI’s use in India to enhance voter outreach and improve campaign efficiency. However, serious concerns were raised about AI’s potential for spreading disinformation and manipulating public opinion. A stark example of AI’s disruptive potential was the cancellation of Romania’s recent election due to foreign interference and algorithmic manipulation, which is currently under investigation by the European Commission.


Platform Governance and Transparency


A major point of discussion was the need for greater transparency from social media platforms regarding their algorithms and content promotion practices. Ayobangira Safari Nshuti stressed the importance of understanding how algorithms treat information from different sources to ensure fairness in election-related content distribution. He also mentioned a collaboration between Meta and the election commission in Congo as an example of platform engagement.


Roxana Radu pointed out that platforms have reduced staff monitoring election content, increasingly relying on AI for content moderation despite its limitations. Babu Ram Aryal highlighted that AI tools are not effective for monitoring content in local languages, creating a significant gap in content moderation capabilities. This issue raised concerns about the potential for unchecked spread of misinformation in languages with limited online representation.


The speakers agreed on the need for platforms to be more accountable, especially during sensitive periods like elections. However, an audience member questioned whether platforms should be trusted at all, given their profit-driven nature, highlighting a more fundamental disagreement about the role of platforms in democratic processes.


Election Integrity and Trust


The discussion emphasized the critical importance of election integrity, particularly in light of the Romanian election cancellation. Roxana Radu argued for the need for safeguards across the entire election process, not just voting itself, and suggested rethinking democratic processes in light of new technologies.


E-voting and AI security concerns were prominent topics. Ayobangira Safari Nshuti and Babu Ram Aryal raised concerns about the vulnerability of e-voting machines to hacking, including through AI-powered attacks. The moderator also noted the potential for AI to be used to undermine trust in elections by simulating attacks.


Addressing Disinformation and Underlying Issues


The speakers agreed that multiple stakeholders have responsibility in combating election disinformation, including platforms, election officials, and voters themselves. Babu Ram Aryal emphasized the need for fact-checkers and digital literacy initiatives to combat disinformation effectively.


While technological solutions were discussed, the conversation also touched on the importance of addressing underlying social issues. An audience member pointed out that factors such as poverty and isolation contribute to the spread of disinformation and need to be addressed alongside technological interventions. This highlighted the need for a comprehensive approach to preserving democratic integrity in the age of AI.


Unresolved Issues and Future Challenges


Several unresolved issues emerged from the discussion, including:


1. How to effectively regulate AI use in elections without infringing on free speech


2. The appropriate level of trust to place in social media platforms during elections


3. Safeguarding the entire election process against AI-enabled interference


4. Balancing the benefits of e-voting with cybersecurity concerns


5. Addressing AI-generated disinformation in languages not well-represented online


The panellists suggested some potential solutions, such as focusing on transparency and labelling of AI-generated content, combining technological solutions with efforts to address underlying social issues, and enhancing digital literacy.


Conclusion


The discussion highlighted the complex and evolving nature of AI’s role in elections. While some feared disruptions did not fully materialize, new challenges emerged, demonstrating AI’s potential to both enhance and undermine democratic processes. The panellists emphasized the need for a multifaceted approach involving technological governance, digital literacy, and addressing broader societal issues to ensure the integrity of elections in the AI era. As the moderator noted in closing, this discussion marks the beginning of an ongoing conversation about AI and elections, recognizing that adaptive strategies will be crucial as AI continues to advance.


Session Transcript

Tapani Tarvainen: Okay, sorry about this little confusion here. So we have three distinguished panellists: Ayobangira Safari Nshuti, Member of Parliament of the Democratic Republic of Congo; Roxana Radu, who is online, Chair of the Global Internet Governance Academic Network and Assistant Professor at Oxford University; and Babu Ram Aryal, Chair of the Digital Freedom Coalition from Nepal. And we are about to talk, as the title says, about elections, AI and disinformation. As I presume most of you have heard, this has been a major election year around the world. I haven't been able to determine the exact number, but some 60 countries have had elections this year, and at least two more are to come: Chad and Croatia will have elections later this month. It was feared in advance that disinformation generated by AI would be a major factor in elections. One question we have to talk about is: did that actually happen, and if so, what should be done about it? Now, you may have noticed that there have been some less than perfectly fair elections even before AI and the Internet; all kinds of election campaign meddling has happened in the past. Governments, the people in power, have, let's say, creatively used their power to influence the outcome of elections. So how big a difference do AI and the Internet make to that? We'll start with that. Did the fears about AI messing up elections come true? Let's start with Mr. Safari, is that okay? You go first.


Ayobangira Safari Nshuti: Okay, it's okay. What I can say is that, as you said, in 2024 there were a lot of elections, and there was a lot of concern about AI being used by some actors to gain an advantage in the elections. But I think we had a lot of fear, and it did not happen the way we feared. Maybe people were also prepared to face AI, to take some measures against it. On our side, as legislators, we did not really do work on that; it was the people themselves, the political actors in the field, who took measures, with their teams, to counter the use of AI by their opponents. And those who were planning to use it perhaps did not use it as much as they would have liked, because the community was already prepared to see AI being used. So I think we had a lot of fear compared with what really happened on the ground.


Tapani Tarvainen: OK, so do you think it was more a scare that has not come true, at least not yet? Maybe I'll try to reach Roxana next. Perhaps you might want to comment on what happened in the Romanian elections, and whether AI had anything to do with that.


Roxana Radu: Yes, absolutely. First of all, apologies for not being able to join you physically this year at the IGF, but thank you for the invitation to join online. So I wanted to bring in the example of Romania. I think for the first part of the year we heard quite a few comments about AI, and as we're approaching the end of the year, people started to feel that AI is just another tool in the toolbox of technologies that we have available around elections. But the case of Romania changes the narrative completely. As you might have seen, about two weeks ago the Constitutional Court of Romania decided to cancel the results of the first round of presidential elections. It's the first time this has happened in Romanian history, and it's also the first time it has happened since we've had AI in the picture. Of course, there are several reasons behind this decision, but it's very clearly linked to electoral interference from foreign states, in particular one, Russia, as revealed by the intelligence reports. It was very clearly linked also to algorithmic treatment, in particular preferential treatment of one of the 13 candidates in the elections. And the decision of the court cited the illegal use of digital technologies, including artificial intelligence. So this is a case that tells us, in a way, it's a wake-up call, right? All of this can be abused massively, and while it hasn't happened in other presidential elections or other parliamentary elections, that doesn't mean it's not something we should have on our radar. I wrote a report earlier this year with a colleague of mine, looking into some positive uses of AI during elections. We took the Indian case and concluded that, in fact, in India we could see some very creative uses of AI, both to motivate people to go out and vote, and to promote campaigns in ways that were fair, and also to promote inclusivity, translating some of the politicians' speeches in real time, some really useful ways to reach out to a larger voter base. This was May, June, right, the Indian elections, and at that point it didn't look like there was a lot to worry about. So by the time we entered the American elections, there was quite a bit of attention paid to the use of AI. And yet it happened in a country that was not in the media spotlight, and I think that's something we should also bring into the discussions. All elections have their own stakes, but I think it's useful to think about this experimental use of AI, both some of the good uses and some of the really bad outcomes. I'll stop here, but happy to jump in later in the conversation.


Tapani Tarvainen: Thank you, Roxana. It is very interesting to observe that AI is a double-edged sword that can be used for good and for bad in election contexts as well. But maybe I'll hand over to Babu now for your view on what happened, what could have happened, and what should have happened.


Babu Ram Aryal: Thank you very much. It's my pleasure to be here and talk about this very interesting topic, and I have the privilege of speaking alongside an honorable parliamentarian who has fought elections and gone through this whole process. Is it scary or is it normal? It has very rightly been said that it has two sides, and it has a bad side as well. The benefit is that it has become very easy for candidates to produce political advertisements and campaign content. But at the same time there is a bigger risk that the opposition or other stakeholders may target their campaign with similar content, which could be damaging to their character. So one major issue is political advertising, and another is the transparency of the campaign. Various platform providers have their own internal rules about what kinds of advertisements are allowed, and they also filter some content using AI. If I recall correctly, several platform providers, including Facebook, Twitter and TikTok, themselves removed a lot of political content from campaigns, and those removals were later contested or challenged by the politicians concerned. So there is a further risk in the platforms' own use of AI to filter content during elections. Another issue, as I mentioned, is content that can be useful or can be detrimental. Some content damages an election campaign immediately, and intervention may come too late: a piece of content can damage a politician in a few seconds or minutes, and even if it is removed within a few hours, that may not be enough to repair the harm to the campaign. That is another issue we see during elections. And beyond the campaign and content perspective, another major issue, as Roxana just mentioned, is foreign influence in the election process or on election day, such as interference with data or with ballot systems and processes. This is a very significant part. Finally, when it comes to remedies, the question is whether we have a sufficient regulatory approach, and whether our courts and election tribunals are equipped to recognize and identify these kinds of content and their effects. These are the issues evolving around elections, disinformation and misinformation. If we have a proper regulatory framework, a shared understanding and digital literacy, we can manage the risks of AI and use it from a positive perspective.


Tapani Tarvainen: Thank you. You made some very keen observations there, notably that in elections timing is everything. If an AI system were to, say, try to remove misinformation and accidentally take down somebody's political advertisement, and it took days before it came back online, they could lose the election because of that. That is also a problem, so it can cut both ways. So the question is…


Roxana Radu: If I can jump in very quickly here. I definitely want to talk a little bit more about the question of transparency, because that has been part of the regulatory agenda for a while. Not necessarily in the context of elections, but platform transparency with regard to algorithmic practices has been on the minds of policymakers for some time, and in the EU we do have a framework for that, the Digital Services Act. Right now the European Commission has decided to open a formal investigation into TikTok with regard to the Romanian elections, since this was the platform scrutinized for the illegal use of AI. It turns out the transparency requirements were not really working in this case. One of the candidates received preferential treatment without ever having their electoral content labeled as such, so it would appear in all sorts of feeds without any mention that it was in fact part of the campaign. This is obviously in breach of the laws in place in Romania, which is why the court had to issue its decision, but we now also see the European Commission looking at this case. Romania is a member of the European Union, there is a framework in place at the EU level, and the Commission has asked for a couple of things. First of all, already on the 5th of December it asked TikTok to retain all the information related to the elections for a particular period of time, I think from the end of November all the way to March 2025. TikTok is now under obligation, as per this EU order, to retain all the information that has to do with any national election, so that will include the upcoming elections in Croatia as well. For the Romanian case, they said it will be a matter of priority, so they will complete the investigation in a speedy manner. They want to look at what content was recommended during the election period and whether there was intentional manipulation of the platform. So there are quite a few aspects that will now come into question with regard to the practices of TikTok. The previous speaker also mentioned different platforms taking action throughout this year, and it's true, we have seen lots of statements from Meta across their different platforms, from Instagram all the way to Facebook, but also from Twitter. I think we've had mixed messages in this period. The truth is also that many of these platforms have actually reduced the number of staff working on these issues, on monitoring electoral content. So at the end of the day, I think we have to put that in balance. On the one hand, they've cut all the funding they had towards proper ways of dealing with this and outsourced a lot to AI, in fact using AI tools to detect some of this content. It turns out it doesn't work all that well. On the other hand, they've made all these statements about proactive attitudes towards preventing electoral interference. I think the truth sits somewhere in the middle, so the picture is a lot more mixed than it seems. And the reality is that the AI tools we have today are getting better and better in particular languages, especially widely used languages, but they are not very good in languages that are not as well represented on the internet. So ultimately, if AI is supposed to be in charge of monitoring how AI is used on platforms, we can't really trust that to be very accurate. Thank you, I'll stop here.


Tapani Tarvainen: Thank you, Roxana. An interesting point here is that, historically, freedom of speech has in practice meant the freedom of newspaper owners to publish whatever they want, and of course a platform on the internet can also have its own political position; it's just that they should be open about it. Think of something like Truth Social, which is explicitly one politician's platform. But pretending to be neutral while not being neutral is definitely bad. Maybe I'll hand over to Safari at this point. How do you see this, especially from the Congolese point of view, if you have some observations there? Is AI a different issue there?


Ayobangira Safari Nshuti: On our side, the concern we had, as she was saying, is that one of the problems with having AI monitor content is language. Much of the communication is done in our local languages first, and even words in English or French may not have the same meaning locally. We use a common English word as a nickname for one part of the political spectrum: we refer to them, roughly, as the "Taliban". When you say Taliban in English you may think of the actual Taliban, but on our side it has another meaning, it refers to members of the majority. AI will not see that context, which is why we really need real people in the background who know the local context. Coming back to the use of AI, what I was saying is not that AI was not used in the elections, but that it was not used in the way people were expecting. Everyone was watching the US election for deepfakes, and as you say it happened in Romania instead, while people were looking at the US. And even in the US, AI was used not mainly to make deepfakes but for self-promotion: people used AI to build chatbots that respond to emails and phone calls automatically. In Pakistan, I heard that one of the candidates, the former prime minister, used AI to deliver speeches: he was in prison but was able to give live speeches by cloning his voice with AI. So AI was used, but because so many people were expecting deepfakes, campaigns shifted from attacking their opponents to promoting themselves. They used AI to reinforce their own campaign teams: in the US, mainly to answer emails, make calls, give speeches, do advertising, and produce polished videos and pictures of themselves. In Congo we had our election just before the end of 2023, so we are not part of 2024, but it was at the very end of 2023, so we were still part of that big wave of elections around 2024. On our side, before the election we had a meeting with a team from META who came to see our election commission, and they agreed to work with us and help put in place a team to monitor all that content. We can say it worked partially: in the 2023 election we did not have as many deepfakes as before, because ahead of the election the META team came into the country, put a strategy in place and worked together with our election commission on how to fight deepfakes and misinformation.


Tapani Tarvainen: Okay, thank you, Safari. It's an interesting observation that AI has been used as a tool for election campaigns. Then the question becomes: does it help more those who have had trouble getting their message out, the underdogs, because they now have the same tools and can multiply their voice, or does it actually just help those who are already powerful even more?


Ayobangira Safari Nshuti: From the reports I saw, it mostly helped the smaller candidates, those who were under the radar, because they were able to put much more effort into AI. In the US, I know there was one small candidate who was able to gain more voters than Joe Biden in his state just by using AI. He didn't have a budget like Joe Biden's, but he put a lot of effort into AI. The same happened with a small candidate in Japan, who also put a lot of effort into AI. So it really helped those who were seen as small candidates; it gave them the same tools as the powerful ones.


Tapani Tarvainen: That's interesting. So it turns out it can be a force for good. But maybe Babu has a point of view here. Maybe things are different in Nepal, or you have other observations.


Babu Ram Aryal: Not really, the context is quite similar in Nepal. We had an election in 2022, just two years ago, and at that time AI tools were not used that much. But nowadays this is a big discussion: in three years we will have a new election, and we are already discussing the potential risks of AI being used, especially to influence the result. In this context, our speaker Roxana also raised some issues of platform governance. Previously we considered platforms to be trusted third parties. Media did not take sides through the content itself; an outlet might endorse a candidate, but not by shaping the content. This time, however, we observed, especially during the US election, that X's owner was posting his own content repeatedly, those posts were pushed into our feeds repeatedly as well, and it is said this significantly influenced the result of the election. My point is that if platform owners are using their platforms for their personally preferred candidates, that is a big risk, and if they use AI-based content and processes for that, it is even more dangerous for the democratic process; that is not the standard we expect in a democracy. So one question is how we make these platforms more accountable. Another significant issue, in Nepal's election context as well, is that platform operators run a business, and if the election commission works with them, or is influenced by them, it becomes even riskier. In 2022 in Nepal, some candidates had problems with the election commission and complained that the commission had asked the platform providers to remove their content. So that is another big risk around platform governance and the institutional mechanisms of elections, including the election commission itself. These are very significant issues, and if they are influenced using AI, the risk is even greater.


Tapani Tarvainen: Thank you. At this point, I understand we have some online questions. Maybe Dennis would like to read out some questions for us.


Dennis Redeker: I'm happy to do so. This is a fantastic discussion and we have some questions in the chat; some were posted publicly and one reached me privately, so I'm going to share those with you. And thank you to all the speakers so far. The first question is from Ahmed, identified here only by a single name, who asks: what is the role of e-voting in this? You might think this is a different conversation, but maybe it isn't, because it is also about trust: e-voting and AI are both matters of trust when it comes to elections, and it would certainly be interesting for some of the speakers to pick up on. So how do these combine: AI-powered disinformation online and, potentially, also casting your vote online? The second question is from Tanka Ayal, who asked about the positive uses of AI in elections. I think that partly meets what Roxana has already presented about positive uses of AI, in the context of India if I recall correctly; perhaps that is something you can go into in more detail, and I saw that you have already posted the link to the report in the chat as well. And the third question is about the risk of elections being cancelled, which we just saw in Romania and which also relates to trust in election integrity: under which conditions could elections be cancelled, and what does it do to us as voters to go into an election not knowing whether it will be fought fairly or whether it will be annulled by a court later on? This is perhaps a question for Roxana, but also for the others: what does it do to a community when you cannot trust that an election will stand, and manipulation might mean having to take back the results of an election?


Tapani Tarvainen: So much from the online moderation team. Okay, thank you for those. Let's see if anybody wants to pick up on the e-voting issue and how much, if at all, it relates to AI. E-voting has been going on in Estonia for a long time, for example, but I don't think we have any Estonians around to talk about that. Has it had anything specifically to do with AI? Anybody want to pick up on that?


Babu Ram Aryal: Can I take this question? As a neighbor of India, Nepal shares a border with India, and India recently had an election in which there were many concerns about voting machines being compromised. Elon Musk also said in one statement that voting machines could be compromised, and that sparked quite a debate in the Indian context. This is obviously a very challenging issue. In Nepal's context we may not, so far, have had foreign influence in the election process. But speaking from the Indian perspective again, India has a huge range of contexts, including rural areas and significant gaps in education, and yet they are still using voting machines, so in that context it is very risky. As I mentioned at the beginning, data systems and voting machines are very vulnerable critical infrastructure from an election perspective, and if they are not securely protected, there is a big chance of compromise. And as Tanka asked about the positive side: as I also mentioned at the beginning, this has given ordinary people the power to participate in the political process. We have seen many examples, even in Nepal, of a single person without any campaign organization, using only platforms, getting elected as mayor or parliamentarian. So yes, it gives significant power; as our honorable MP Safari also mentioned, a previously unknown person can be elected by using this content and participating in the process.


Tapani Tarvainen: Okay, it seems like Safari has something to add to that.


Ayobangira Safari Nshuti: Yes, I would like to say that the link between electronic voting and AI is not a direct one. With electronic voting, the problem is that someone may corrupt the vote, change the vote: you vote for A and the machine counts it for B. That can be done by attacking the machine. But AI also plays a role in cybercrime. Attacking a machine normally requires particular skills, and AI gives those skills to ordinary people: you can attack, you can hack something just by using AI, because it gives you the skills. So voting machines are now vulnerable not only to high-profile hackers but also to ordinary people who are able to hack systems using AI. For now, though, most of the ways AI can interfere in an election are through deepfakes and misinformation that change the perception of the voters themselves, so they can be convinced to vote for someone they would not normally vote for. And in that case I don't know how you would cancel an election, because the voter did vote for someone, even if they were influenced. It's a bit like ordinary advertising on TV: it is not easy to detect or determine what impact the election would have seen if that deepfake had not been there. It's not easy at all. But with tampering with voting machines, it is much easier to see how many votes were changed by the attack, and that is where AI now gives almost anyone the access and the ability to act and change the results on those machines.


Tapani Tarvainen: I'm not sure if I'm reading between the lines correctly, but you seem to imply that AI could actually be useful in e-voting, in that it could also be used to detect certain kinds of tampering; otherwise the link is definitely not direct. Thinking of the third question, though, cancelling an election can itself be a problem. If AI can cause so much distrust in elections that they tend to be cancelled too easily, that could be a problem. Maybe Roxana would like to address that possibility.


Roxana Radu: Yes, thank you very much for this question. I think it's a very important one, and it's definitely on everybody's mind back home in Romania, I can tell you that. With the court decision announced at the beginning of December, we still don't know the dates of the next election, but everybody is asking: can we actually trust the next round of presidential elections if it has been proven post facto, after the fact, that there was so much interference? What are we putting in place to prevent this from happening next time? It's a big question, because we have just had parliamentary elections, and those elections were not challenged from the perspective of the process, but they showed that the vote was very split, so a coalition needs to be agreed before we have the date of the new presidential elections. It's going to take a while, and we'll see what happens in between and whether we have institutional measures to address this. But just on the question of trust: right now there is also an indirect undermining of the democratic process through the cancellation of elections. On the one hand, yes, this was a reaction to what happened; but for many people the decision is perceived as, in some respects, a violation of the democratic process itself, a court decision that comes in and annuls the vote of 52% of eligible voters. So this is something that needs to be addressed in a broader conversation about how democracy itself transforms with the rise of AI and digital technologies more broadly. In a way, the processes we have had in place for so long, including some of the institutions overseeing the democratic process, were created in an era with very little technology around. Right now we are talking about transforming these processes altogether, and we probably have to rethink the relationship between the forms of democracy we have and the technology that is available.
And very briefly, if I may jump in on the question of e-voting: if we look at the data, it is actually very few countries around the world that have opted for e-voting. We obviously have very good examples in that category, Estonia being one of them, and a couple of examples from outside the Western world as well. But altogether, many countries have stayed away from it, because the feeling is that we are not able to prevent the kinds of manipulation that might happen with e-voting. Most democracies, at least in Europe, have had that conversation, and most have decided not to move their voting processes online. Ultimately that may or may not make a big difference, because in the case of Romania we had paper ballots, and yet the integrity of the whole process was compromised. So before you get to that final stage of whether the vote is cast online or on paper, we need to think about the other, intermediary stages: electoral registration, campaigning, the vote counting itself, and verification and reporting. It seems that in the Romanian case there were cyber attacks at the time of the vote counting, when the paper ballots were being entered into the system, as well as during the post-election audit. This is another very important part of the democratic process, and we have to have safeguards in place across the whole cycle of the electoral process, not just at the time of casting or counting the vote.


Tapani Tarvainen: Thank you. It does occur to me that somebody might deliberately stage what looks like an attack on an election in order to get the vote cancelled and so undermine trust in the system. Instead of actually trying to affect the election, they just create the impression of interference so that people no longer trust the system, and AI may make that easier, or it may even be impossible to do effectively without it. Another interesting observation is that in some countries the incumbents hold so much power that they tend to win anyway, so one could argue that foreign interference might even be good for the democratic process there; but that is very difficult to assess in any useful way. Maybe you want to carry on from that, or if not, I might suggest you consider what kind of power AI actually adds to the specific issue of spreading disinformation. There is a question in the room. Okay, hands up, who's first? Sorry for not noticing.


Audience: Okay, my name is Nana. Can you hear me? Okay. I have a question, especially as someone who works specifically on AI and ethics. Considering the very big distinction between algorithms and AI, because they are very different things, there is a lot of conversation around algorithmic discrimination against specific candidates, and from what I hear, there seems to be a lot of responsibility placed on the platforms. Beyond the responsibility, I'm also hearing a lot of trust, because words like "trusted partner" have been used, and I'm wondering: is it not too much? Because in the real-world sense, platforms are like vendors, right? They are businesses set up for profit. They are not NGOs, they are not civil society organizations. It's like expecting a newspaper to publish your views rather than the views of those who pay, or the views of the people who set it up to push their own agenda. I'm wondering if it would not be more beneficial to push for algorithmic transparency, in the sense of publications that would allow people to understand how decisions were made by those algorithms: what did the algorithm consider in pushing this content towards someone's feed, and so on? Because we have received a lot of feedback from very right-wing people about platforms like X, TikTok and Instagram. That feedback says that previously these platforms pushed a very left-wing, very liberal agenda, a "this is how the world should be run" view, run like an alternate universe to actual real life, but that there has been a push or a shift in agenda, and now they feel some sort of balance has been achieved. I disagree with this, but that's a different matter; this is the conversation that is happening. And I'm wondering, in demanding certain things from the platforms, are we not, first, trying to curb free speech because that speech does not look like the speech we are used to or the speech we like? And second, why do we trust these platforms? Why do we expect these platforms to comply with things beyond regulatory requirements? Why do we trust these platforms so much? That's my big question. Thank you.


Tapani Tarvainen: Thank you. And I’ll hand it over to you quickly, but I’ll have to ask you to please be brief. We have only 10 minutes left of the session.


Audience: Okay, I'll be really brief, although there is a story to be told about this. I was thinking along very similar lines to your question, I suppose. I'll start with an example: during the COVID pandemic there were a lot of conspiracy theories. A lot of people felt isolated online and started to believe that there are certain larger agendas in the world to which we are all subject. And what a lot of research found was that these people were generally ostracized in society and left isolated; there is a poverty problem, a socio-economic problem, that left people out. I feel we see this playing out in the election space as well, where people who are isolated come to believe in such disinformation campaigns, deepfakes and similar examples. So when talking about governance, and this is to all the speakers: to what extent, if any, do you think the intervention should be more on the social side rather than on tech governance or platform governance?


Tapani Tarvainen: So two very good interventions and questions there. Who would like to go first? I think Babu looks like he wants to speak, go ahead.


Babu Ram Aryal: I was also going to come to the core topic, disinformation and elections. Who is providing disinformation? Who are the agents of disinformation, especially in the election process? That has to be very clear. We now know there is a real possibility of misinformation and disinformation in the election process and on common communication platforms. So whom do we trust, and do we need to trust the platform providers at all? If we choose to engage on them, then to some extent we have no choice but to trust them; but it is our choice how far we confine our engagement on the platform. If you lock down your privacy settings and limit your engagement, the process becomes more secure. So it's very important that we ourselves decide what level of engagement we have on a platform. And when there is disinformation or misinformation, who is responsible for removing it? Platform providers have their own systems, and there are two models. One is automated, AI-based: millions of pieces of content are moderated by the platform providers according to their own standards. The other is manual moderation: when you complain, the platform responds, the content is evaluated, and if they think it has to be removed, they remove it. There are also other significant actors now, in particular fact-checkers, and the role of responsible fact-checkers is very significant during elections as well as in ordinary times. During an election, the actors in that election have to be very precise about how they fight disinformation. By actors I mean the election commission, law enforcement, the politicians standing as candidates, the voters and civil society. All of them have to be more careful than in ordinary times, because targeted disinformation may be supplied during that period. And it's not only about business. Businesses should also be accountable: accountability comes with starting any kind of business, it is not something separate from doing business. So it's very important that platform providers are more accountable when there is this kind of sensitivity; they have to take more care, because they have more responsibility. That is how we can address the major issues. And from a law and ethics perspective, of course, we need a certain model of governance or regulation, and in that way we can address these things. Thank you very much.


Tapani Tarvainen: I was just reminded that we have only five minutes left of the session, so we'll have to start wrapping up slowly, but let's do one more round of our panelists commenting on that.


Ayobangira Safari Nshuti: Yes, I will be very short. I would say that previously some of these platforms were seen as left-wing, but the perception has changed, and it changed because we think something has changed in the algorithms they use. As a parliament, what we want from those platforms is the one thing you mentioned: transparency. We just want to know what is being run in the background, so we can see whether there is fairness and equity in how they treat information coming from different sources. If we have that transparency, there will be more trust. Thank you.


Roxana Radu: Yes, very briefly on the first question: I agree with Babu, there is a need for more transparency over funding, over the labeling of that content, and also over promotion. These algorithms are not a different species; they should not be completely unaccountable. We need to look into how they promote content and why, whether there is preferential treatment or not, and whether that results in manipulation or not. That's the second part of the question, but there is funding involved, obviously, and that also has to be transparent and placed under scrutiny. Platforms have become the new public sphere; they are not just businesses, they are more than that, because that is where communication actually happens. People might not turn on the TV anymore, but they will receive their news from encrypted groups, from different platforms and so on. So platforms provide a public channel for communication during elections, and most countries have rules in place for how candidates promote themselves during elections. The platforms cannot live in a different universe: they need to abide by those rules, and they are bound to apply national legislation during these electoral cycles. So this is, in part, simply a question of respecting existing legislation. And on the second question, very briefly: should the intervention be broader than just tech governance, should we look at social aspects as well? Absolutely, I agree with you. I think we need to work on multiple levels. So far we have given quite a bit of attention to technology, albeit imperfectly; we have not found the right solution to all of these problems, but we haven't really looked at what could be done on the social level beyond saying we need more digital literacy and better awareness. I think we need to work on issues of poverty and connectivity, and on many other aspects including welfare, to give people equal chances in society, and that is going to make democracy a better place for everybody.


Tapani Tarvainen: My watch says we have 45 seconds to go. I would like to hand over to Dennis if you have a final comment here to make.


Dennis Redeker: Let me just say that this conversation has been thrilling. I really appreciate both the positive and the scary scenarios for the use, and the misuse, of AI in the context of elections. I think this is only the start of a conversation that we will keep having. When we started planning this session, we thought AI and elections in 2024 would be scary; then, mirroring what Roxana said earlier, we had a phase where we thought we would have nothing to talk about in December because nothing was going to happen. And then came the Romanian elections, and there will be more, and more things we will have to deal with. So I think this is the start of a conversation, and also the start of more regulation and more transparency in this field. Thank you everyone on behalf of the Internet Rights and Principles Coalition, thank you to the speakers, and to the moderators for jumping into this frame.


Tapani Tarvainen: Thank you to the panelists, to Dennis, and to the audience as well for the great questions we had. But now we are 30 seconds over time, so let's close it here. Thank you.



Ayobangira Safari Nshuti

Speech speed

129 words per minute

Speech length

1249 words

Speech time

578 seconds

AI use in elections less widespread than feared

Explanation

The speaker suggests that the use of AI in elections was not as extensive as initially anticipated. There was concern about AI being used to influence election results, but it did not materialize to the extent expected.


Evidence

The speaker mentions that people were prepared to face AI and took measures against it.


Major Discussion Point

Impact of AI on Elections


Differed with

Roxana Radu


Differed on

Impact of AI on election outcomes


AI used to promote candidates rather than attack opponents

Explanation

The speaker notes that AI was primarily used by candidates to promote themselves rather than attack opponents. This shift in usage was different from what people initially expected.


Evidence

Examples given include using AI for chatbots to respond to emails and phone calls, and to make speeches.


Major Discussion Point

Impact of AI on Elections


AI helped smaller candidates compete with larger ones

Explanation

The speaker argues that AI tools helped level the playing field for smaller candidates. It allowed them to compete more effectively with larger, better-funded candidates.


Evidence

Examples of small candidates in the US and Japan gaining more votes by using AI effectively.


Major Discussion Point

Impact of AI on Elections


Agreed with

Roxana Radu


Babu Ram Aryal


Agreed on

AI has both positive and negative impacts on elections


E-voting machines vulnerable to hacking, including through AI

Explanation

The speaker points out that e-voting machines are vulnerable to hacking, and AI can potentially make these attacks easier. This vulnerability extends to both sophisticated hackers and ordinary people using AI tools.


Evidence

Mention of AI giving hacking skills to normal people, making voting machines more vulnerable.


Major Discussion Point

Election Integrity and Trust



Roxana Radu

Speech speed

141 words per minute

Speech length

2098 words

Speech time

892 seconds

AI used for both positive and negative purposes in elections

Explanation

The speaker points out that AI has been used for both beneficial and harmful purposes in elections. While there are creative uses to promote inclusivity, there are also cases of electoral interference.


Evidence

Positive example from Indian elections using AI for voter motivation and campaign translation. Negative example from Romanian elections where AI was used for electoral interference.


Major Discussion Point

Impact of AI on Elections


Agreed with

Ayobangira Safari Nshuti


Babu Ram Aryal


Agreed on

AI has both positive and negative impacts on elections


Differed with

Ayobangira Safari Nshuti


Differed on

Impact of AI on election outcomes


Platforms have reduced staff monitoring election content

Explanation

The speaker notes that many social media platforms have reduced the number of staff working on monitoring electoral content. This reduction in human oversight has led to increased reliance on AI tools for content moderation.


Evidence

Mentions of platforms like Meta and Twitter reducing staff working on these issues.


Major Discussion Point

Platform Governance and Transparency


Agreed with

Babu Ram Aryal


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms


Differed with

Babu Ram Aryal


Differed on

Effectiveness of AI in content moderation


Romanian election cancelled due to foreign interference and AI use

Explanation

The speaker discusses the cancellation of the Romanian presidential election due to foreign interference and illegal use of AI. This case is presented as a wake-up call for the potential misuse of AI in elections.


Evidence

Specific mention of the Constitutional Court of Romania’s decision to cancel the election results due to electoral interference and AI use.


Major Discussion Point

Election Integrity and Trust


Need for safeguards across entire election process, not just voting

Explanation

The speaker emphasizes the need for safeguards throughout the entire electoral process, not just during voting. This includes stages such as electoral registration, campaigning, vote counting, and post-election audits.


Evidence

Mention of cyber attacks during vote counting and post-election audit in the Romanian case.


Major Discussion Point

Election Integrity and Trust



Babu Ram Aryal

Speech speed

112 words per minute

Speech length

1580 words

Speech time

844 seconds

AI tools not effective for monitoring content in local languages

Explanation

The speaker highlights that AI tools are not very effective in monitoring content in local languages. This is particularly problematic in countries where multiple languages are used.


Evidence

Example of words having different meanings in local contexts, which AI may not understand correctly.


Major Discussion Point

Impact of AI on Elections


Agreed with

Ayobangira Safari Nshuti


Roxana Radu


Agreed on

AI has both positive and negative impacts on elections


Differed with

Roxana Radu


Differed on

Effectiveness of AI in content moderation


Risk of platform owners using AI to influence elections

Explanation

The speaker expresses concern about platform owners potentially using AI to influence election outcomes. This is seen as a significant risk to the democratic process.


Evidence

Mention of platform owners potentially using their platforms to promote desired candidates.


Major Discussion Point

Platform Governance and Transparency


Multiple actors responsible for fighting disinformation

Explanation

The speaker argues that combating disinformation is a shared responsibility among various actors. This includes election commissions, law enforcement, politicians, voters, and civil society.


Major Discussion Point

Addressing Disinformation


Need for fact-checkers and digital literacy

Explanation

The speaker emphasizes the importance of fact-checkers and digital literacy in combating disinformation. These are seen as crucial tools in maintaining the integrity of the electoral process.


Major Discussion Point

Addressing Disinformation


Agreed with

Roxana Radu


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms


Platforms should be more accountable during sensitive periods

Explanation

The speaker argues that platform providers should be held to a higher standard of accountability during sensitive periods like elections. This increased responsibility is seen as necessary due to the potential impact on democratic processes.


Major Discussion Point

Addressing Disinformation


Agreed with

Roxana Radu


Ayobangira Safari Nshuti


Agreed on

Need for increased transparency and accountability from platforms



Tapani Tarvainen

Speech speed

150 words per minute

Speech length

1136 words

Speech time

451 seconds

Question of how much trust to place in platforms

Explanation

The speaker raises the question of how much trust should be placed in social media platforms during elections. This reflects the ongoing debate about the role and responsibilities of these platforms in democratic processes.


Major Discussion Point

Platform Governance and Transparency



Audience

Speech speed

150 words per minute

Speech length

567 words

Speech time

225 seconds

Question of whether to trust platforms as neutral actors

Explanation

An audience member questions the level of trust placed in platforms, pointing out that they are profit-driven businesses rather than neutral actors. This raises concerns about their role in shaping public discourse during elections.


Evidence

Comparison of platforms to newspapers, which have their own agendas and business interests.


Major Discussion Point

Platform Governance and Transparency


Need to address underlying social issues, not just technology

Explanation

An audience member suggests that addressing disinformation requires looking beyond just technological solutions. They argue for a broader approach that includes addressing social issues such as poverty and isolation.


Evidence

Reference to research findings about conspiracy theories during the COVID pandemic being linked to social isolation and economic issues.


Major Discussion Point

Addressing Disinformation


Agreements

Agreement Points

AI has both positive and negative impacts on elections

speakers

Ayobangira Safari Nshuti


Roxana Radu


Babu Ram Aryal


arguments

AI used for both positive and negative purposes in elections


AI helped smaller candidates compete with larger ones


AI tools not effective for monitoring content in local languages


summary

The speakers agree that AI has dual impacts on elections, offering benefits like leveling the playing field for smaller candidates, but also posing risks such as ineffective content monitoring and potential misuse.


Need for increased transparency and accountability from platforms

speakers

Roxana Radu


Babu Ram Aryal


Ayobangira Safari Nshuti


arguments

Platforms have reduced staff monitoring election content


Platforms should be more accountable during sensitive periods


Need for fact-checkers and digital literacy


summary

The speakers agree on the need for greater transparency and accountability from social media platforms, especially during elections, and emphasize the importance of fact-checking and digital literacy.


Similar Viewpoints

Both speakers emphasize the need for a comprehensive approach to safeguarding elections, involving multiple stakeholders and addressing various stages of the electoral process.

speakers

Roxana Radu


Babu Ram Aryal


arguments

Need for safeguards across entire election process, not just voting


Multiple actors responsible for fighting disinformation


Unexpected Consensus

AI potentially benefiting smaller political candidates

speakers

Ayobangira Safari Nshuti


Babu Ram Aryal


arguments

AI helped smaller candidates compete with larger ones


AI tools not effective for monitoring content in local languages


explanation

While discussing the challenges posed by AI, there was an unexpected consensus on its potential to benefit smaller political candidates, leveling the playing field in elections. This positive aspect of AI in elections was not initially anticipated in the discussion.


Overall Assessment

Summary

The main areas of agreement include the dual nature of AI’s impact on elections, the need for increased platform accountability and transparency, and the importance of a comprehensive approach to election integrity involving multiple stakeholders.


Consensus level

Moderate consensus was observed among the speakers on key issues. While there were differences in specific examples and experiences, there was general agreement on the broader challenges and necessary actions. This level of consensus suggests a shared understanding of the complex relationship between AI and elections, which could facilitate more targeted and collaborative approaches to addressing these challenges in the future.


Differences

Different Viewpoints

Impact of AI on election outcomes

speakers

Ayobangira Safari Nshuti


Roxana Radu


arguments

AI use in elections less widespread than feared


AI used for both positive and negative purposes in elections


summary

While Safari Nshuti suggests AI use was less widespread and impactful than feared, Radu points out significant cases of both positive and negative AI use in elections, including electoral interference.


Effectiveness of AI in content moderation

speakers

Babu Ram Aryal


Roxana Radu


arguments

AI tools not effective for monitoring content in local languages


Platforms have reduced staff monitoring election content


summary

Aryal highlights the ineffectiveness of AI in monitoring local language content, while Radu notes that platforms are increasingly relying on AI for content moderation despite its limitations.


Unexpected Differences

Trust in platforms

speakers

Babu Ram Aryal


Audience member


arguments

Platforms should be more accountable during sensitive periods


Question of whether to trust platforms as neutral actors


explanation

While Aryal suggests increased accountability for platforms during elections, an audience member unexpectedly questions whether platforms should be trusted at all, given their profit-driven nature. This highlights a more fundamental disagreement about the role of platforms in democratic processes.


Overall Assessment

summary

The main areas of disagreement revolve around the extent and impact of AI use in elections, the effectiveness of AI in content moderation, and the level of trust and responsibility that should be placed on platforms.


difference_level

The level of disagreement among speakers is moderate. While there is general agreement on the need for measures to ensure election integrity, speakers differ in their assessment of AI’s impact and the most effective approaches to address challenges. These differences reflect the complex and evolving nature of AI’s role in elections, suggesting that a multifaceted approach may be necessary to address the various concerns raised.


Partial Agreements

Partial Agreements

All speakers agree on the need for measures to ensure election integrity, but they focus on different aspects: Safari Nshuti emphasizes digital literacy, Aryal stresses platform accountability, and Radu advocates for comprehensive safeguards throughout the election process.

speakers

Ayobangira Safari Nshuti


Babu Ram Aryal


Roxana Radu


arguments

Need for fact-checkers and digital literacy


Platforms should be more accountable during sensitive periods


Need for safeguards across entire election process, not just voting


Similar Viewpoints

Both speakers emphasize the need for a comprehensive approach to safeguarding elections, involving multiple stakeholders and addressing various stages of the electoral process.

speakers

Roxana Radu


Babu Ram Aryal


arguments

Need for safeguards across entire election process, not just voting


Multiple actors responsible for fighting disinformation


Takeaways

Key Takeaways

AI’s impact on elections in 2023-2024 was less dramatic than initially feared, with more use for self-promotion than attacks


AI helped smaller candidates compete with larger ones by providing similar campaign tools


There are both positive and negative uses of AI in elections, including for voter outreach and disinformation


Platform governance and algorithmic transparency are major concerns, especially given reduced human content moderation


Election integrity remains a critical issue, as demonstrated by the cancellation of Romania’s election due to foreign interference and AI use


Multiple stakeholders have responsibility in combating election disinformation, including platforms, election officials, and voters


Underlying social issues like poverty and isolation contribute to the spread of disinformation and need to be addressed alongside technological solutions


Resolutions and Action Items

Need for greater transparency from social media platforms about their algorithms and content promotion practices


Platforms should be more accountable and take extra precautions during sensitive periods like elections


More fact-checkers and digital literacy initiatives are needed to combat disinformation


Unresolved Issues

How to effectively regulate AI use in elections without infringing on free speech


The appropriate level of trust to place in social media platforms during elections


How to safeguard the entire election process against AI-enabled interference, not just voting itself


Balancing the benefits of e-voting with cybersecurity concerns


How to address AI-generated disinformation in languages not well-represented online


Suggested Compromises

Focusing on transparency and labeling of AI-generated content rather than outright bans


Combining technological solutions with efforts to address underlying social issues contributing to disinformation spread


Thought Provoking Comments

But the case of Romania changes the narrative completely. As you might have seen, about two weeks ago the Constitutional Court of Romania decided to cancel the results of the first round of presidential elections.

speaker

Roxana Radu


reason

This comment introduced a concrete, recent example of AI interference in elections having major consequences, shifting the discussion from theoretical concerns to real-world impacts.


impact

It changed the tone of the conversation from speculative to more urgent and serious. It led to further discussion about the specific ways AI was used to interfere in the Romanian election and the implications for future elections.


So at the end of the day, I think we have to put that in balance. On the one hand, they’ve cut all the funding they had towards proper ways of dealing with this and outsourced a lot to AI, in fact, using AI tools to detect some of this content. Turns out it doesn’t work all that well.

speaker

Roxana Radu


reason

This insight highlighted the paradox of using AI to police AI-generated content, and the inadequacy of current approaches by platforms.


impact

It deepened the conversation around platform responsibility and the challenges of content moderation, leading to further discussion about the need for human oversight and the limitations of AI in addressing disinformation.


Everyone was watching the US election for deepfakes, and as you say it happened in Romania instead, while people were looking at the US. And even in the US, AI was used not mainly to make deepfakes but for self-promotion: people used AI to build chatbots that respond to emails and phone calls automatically.

speaker

Ayobangira Safari Nshuti


reason

This comment provided a nuanced perspective on how AI was actually being used in elections, contrasting expectations with reality.


impact

It shifted the discussion from focusing solely on negative uses of AI to considering how it was being used as a campaign tool, broadening the scope of the conversation.


Considering the very big distinction between algorithms and AI, because they're very different, there's a lot of conversation around algorithmic discrimination against specific candidates. And from what I hear, there seems to be a lot of responsibility placed on the platforms. Beyond the responsibility, I'm also hearing a lot of trust, because words like "trusted partner" have been used. And I'm wondering, is it not too much?

speaker

Audience member (Nana)


reason

This question challenged the assumption that platforms should be trusted partners in addressing election interference, raising important points about the nature of these companies as profit-driven entities.


impact

It led to a deeper discussion about the role of platforms, the need for transparency, and the balance between regulation and free speech. It also prompted panelists to clarify their positions on platform responsibility.


Overall Assessment

These key comments shaped the discussion by grounding it in concrete examples, challenging assumptions, and broadening the scope of the conversation. They moved the dialogue from theoretical concerns about AI in elections to a more nuanced exploration of real-world impacts, the complexities of platform governance, and the balance between leveraging AI’s benefits and mitigating its risks. The discussion evolved from focusing solely on disinformation to considering both positive and negative uses of AI in elections, as well as the broader societal context in which these technologies operate.


Follow-up Questions

How can we ensure algorithmic transparency in social media platforms during elections?

speaker

Ayobangira Safari Nshuti


explanation

Understanding how algorithms treat information from different sources is crucial for ensuring fairness and equity in election-related content distribution.


What are the most effective ways to combat disinformation in local languages and contexts?

speaker

Ayobangira Safari Nshuti


explanation

AI tools struggle with local languages and context-specific meanings, making it challenging to detect and counter disinformation in diverse linguistic environments.


How can we balance the positive uses of AI in elections (e.g., increasing voter participation) with the risks of manipulation?

speaker

Roxana Radu


explanation

Understanding this balance is crucial for leveraging AI’s benefits while mitigating its potential negative impacts on democratic processes.


What measures can be put in place to prevent the cancellation of elections due to AI-related interference?

speaker

Dennis Redeker


explanation

Addressing this issue is vital for maintaining trust in the democratic process and ensuring the integrity of future elections.


How can we improve the security of e-voting systems against AI-powered cyber attacks?

speaker

Babu Ram Aryal


explanation

As AI enhances the capabilities of potential attackers, ensuring the security of electronic voting systems becomes increasingly important.


What reforms are needed in electoral institutions and processes to adapt to the challenges posed by AI and digital technologies?

speaker

Roxana Radu


explanation

Existing democratic institutions and processes may need to be updated to effectively address the new challenges presented by AI in elections.


How can we address the underlying social and economic factors that make people susceptible to election-related disinformation?

speaker

Audience member


explanation

Tackling root causes like poverty and social isolation may be crucial in combating the spread of disinformation during elections.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #55 Future of Governance in Africa

WS #55 Future of Governance in Africa

Session at a Glance

Summary

This workshop focused on the future of governance in Africa, exploring the intersection of technology and governance. Participants discussed how digital transformation is reshaping governance across the continent, highlighting both opportunities and challenges. Key themes included the need for infrastructure development, capacity building, and inclusive policies to bridge the digital divide.

Speakers emphasized the importance of leveraging technology to enhance democratic processes, improve economic governance, and manage resources more effectively. However, they also noted concerns about cybersecurity, misinformation, and the potential for technology to exacerbate existing inequalities. The role of social media platforms in elections and political discourse was a significant topic, with calls for responsible use and effective regulation.

The discussion highlighted the need for a multi-stakeholder approach to digital governance, involving governments, private sector companies, civil society, and international organizations. Participants stressed the importance of developing context-specific solutions while also engaging in global governance initiatives to address the transboundary nature of digital technologies.

Several speakers emphasized the potential of e-governance and digital public services to improve efficiency, transparency, and accountability in government operations. The importance of data sovereignty and building local capacity in emerging technologies like artificial intelligence was also underscored.

Overall, the workshop concluded that while digital transformation presents significant opportunities for improving governance in Africa, it requires careful management, appropriate legal frameworks, and sustained investment in both infrastructure and human capital to ensure its benefits are equitably distributed across the continent.

Keypoints

Major discussion points:

– The impact of digital technologies and social media on governance, elections, and democratic engagement in Africa

– The need for regulatory frameworks and multi-stakeholder approaches to govern digital spaces and emerging technologies like AI

– Strategies for African governments to leverage digital transformation for economic governance and resource management

– Challenges around digital infrastructure, skills, and inclusion that need to be addressed for Africa to fully benefit from digital transformation

Overall purpose:

The purpose of this discussion was to explore the intersection of governance and technology in Africa, examining both the opportunities and challenges presented by digital transformation for improving governance, economic development, and resource management across the continent.

Overall tone:

The tone was largely optimistic about the potential for digital technologies to enhance governance and development in Africa, while also being realistic about the challenges that need to be overcome. Speakers emphasized the need for African-led solutions and frameworks. The tone remained consistent throughout, balancing enthusiasm for technological possibilities with pragmatism about implementation hurdles.

Speakers

– Moderator: Workshop moderator

– Salah Siddig Hammad: Head of African Governance Architecture Secretariat at the African Union

– Nasir Aminu: Ambassador of the Federal Republic of Nigeria to Ethiopia and Permanent Representative to the African Union

– Christina Duarte: Undersecretary General and Special Advisor on Africa

– Selma Bakhta Mansouri: Representative of the Minister of Foreign Affairs of Algeria and chairperson of the APR Committee of focal points

– Marie-Antoinette Rose Quatre: Chief Executive Officer of the African Peer Review Mechanism

– Vasu Gounden: Civil Society, African group, Republic of South Africa

– Kashifu Inuwa Abdullahi: Director General of the National Information Technology Development Agency, Nigeria

– Jimena Sofía Viveros Álvarez: Managing Director and CEO of Equilibrium AI

– Mercy Ndegwa: Director of Public Policy, East and Horn of Africa for Meta

– Nomalanga Mashinini: Senior lecturer from the University of the Witwatersrand in Johannesburg, South Africa

– Ismaila Ceesay: Minister of Information of the Republic of Gambia

– Adeyinka Adeyemi: Director General of Africa e-Governance Conference Initiative in Rwanda

– Uyuyo Edosio: Principal Innovation and Digital Expert at AFDB in Côte d’Ivoire

Additional speakers:

– Susan Mwape: Founder and Executive Director of Common Cause Zambia, panel moderator

– Desmond Oriakhogba: Panel moderator

Full session report

Summary of Discussion on Digital Governance in Africa

Introduction

This workshop focused on the future of governance in Africa, exploring the intersection of technology and governance. Participants discussed how digital transformation is reshaping governance across the continent, highlighting both opportunities and challenges. The discussion brought together a diverse group of speakers, including representatives from the African Union, national governments, international organisations, tech companies, and academia.

The Role of APRM in Promoting Good Governance and Digital Transformation

A key theme of the discussion was the role of the African Peer Review Mechanism (APRM) in promoting good governance and digital transformation in Africa. Ambassador Salah Siddig Hammad, Head of African Governance Architecture Secretariat at the African Union, emphasized the importance of APRM in advancing good governance and implementing Agenda 2063, Africa’s blueprint for development.

Opportunities and Challenges of Digital Transformation

Speakers highlighted several ways in which digital technologies can enhance governance in Africa:

1. Improving resource management in agriculture and natural resources

2. Digitising government services to improve efficiency and reduce corruption

3. Leveraging digital tools for financial inclusion and economic growth

Dr. Uyuyo Edosio highlighted that Africa generates the least data globally, leading to underrepresentation in AI models and tools. She also noted the lack of local language models for African languages, emphasising the need for increased data generation and representation.

Challenges discussed included:

1. The digital divide between urban and rural areas, males and females, and across generations

2. Lack of digital infrastructure and connectivity

3. Low levels of digital literacy

4. Cybersecurity threats and data protection concerns

5. Potential for technology to exacerbate existing inequalities

Specific Initiatives and Policies

Speakers mentioned several specific initiatives and policies related to digital governance:

1. Nigeria’s signing of the Malabo Convention and ratification of the Protocol on the Rights of Persons with Disabilities

2. The Gambia National Digital Transformation Policy 2021-2025

The Role of Social Media and Tech Companies

The discussion highlighted the complex role of social media and tech companies in African governance. While these platforms offer potential for enhancing democratic engagement, speakers also emphasised the need for responsible use and effective regulation.

Mercy Ndegwa from Meta discussed the implementation of content moderation policies and fact-checking partnerships, while also stressing the importance of self-regulation and community standards for social media users.

Multi-stakeholder Approach

Several speakers emphasized the importance of a multi-stakeholder approach to digital governance, involving governments, tech companies, civil society, and international partners. Dr. Nomalanga Mashinini advocated for participatory and collaborative approaches between government and industry in developing governance frameworks for digital technologies.

Conclusion and Future Directions

The workshop concluded that while digital transformation presents significant opportunities for improving governance in Africa, it requires careful management, appropriate legal frameworks, and sustained investment in both infrastructure and human capital to ensure its benefits are equitably distributed across the continent.

Key takeaways included:

1. The potential of digital technologies to enhance governance, resource management, and economic growth in Africa

2. The need to address challenges around digital infrastructure, literacy, and the digital divide

3. The importance of a multi-stakeholder approach to digital governance

4. The crucial role of APRM in promoting good governance and digital transformation

In his closing remarks, Ambassador Salah Siddig Hammad reiterated the importance of leveraging digital technologies to enhance governance in Africa while addressing the associated challenges. He emphasized the need for continued collaboration and innovation in this rapidly evolving field.

The discussion emphasized the need for African-led solutions and frameworks, while also recognising the importance of engaging in global governance initiatives to address the transboundary nature of digital technologies. Overall, the tone remained optimistic about the potential for digital technologies to enhance governance and development in Africa, while also being realistic about the challenges that need to be overcome.

Session Transcript

Moderator: we will also increase awareness and understanding of how emerging digital technologies impact on governance. So to kick off the workshop we are privileged to have an esteemed panel whose insights will shape today’s discussion. Ladies and gentlemen, it is my great pleasure to invite Ambassador Salah Siddiq Hamad to the podium to moderate the opening session. Our moderator, Ambassador Salah Siddiq Hamad, is the head of the African Governance Architecture Secretariat at the African Union, with extensive experience in governance, human rights and continental policy development. Ambassador Hamad has been at the forefront of promoting democratic principles and good governance across Africa. His leadership at the AGA Secretariat underscores his commitment to fostering cooperation among AU member states to achieve sustainable governance and development. Ambassador Hamad is widely recognized for his strategic vision and dedication to advancing the governance agenda. Ambassador.

Salah Siddiq Hamad: Thank you very much and a very good afternoon to all of you and allow me to stand on the existing protocols since we are running out of time. We have an opening ceremony this afternoon before we kick off our session this afternoon and it seems like it’s a very short session of 20 minutes so hopefully we will be accomplishing our goal of having this session within the next 20 minutes. Excellencies, ladies and gentlemen, as has been mentioned this session is on the future of governance in Africa, exploring the nexus between governance and technology, assessing the impact of rapid technological advancements. on Governance and Fostering the Future of Governance in Africa. Welcome to Riyadh, the capital city of the Kingdom of Saudi Arabia, and welcome to this session organized by APRM, the African Peer Review Mechanism. We will begin the opening ceremony with welcoming remarks from His Excellency Ambassador Nasir Aminu. Ambassador Nasir Aminu is the Ambassador of the Federal Republic of Nigeria to the Democratic Republic of Ethiopia and the Permanent Representative to the African Union based in Addis Ababa. Your Excellency, you have the floor, please. Thank you.

Nasir Aminu: Ambassador Nasir Aminu, The Chief Executive Officer of APRM, Excellencies, Honorable Ministers, Invited Guests, Members of the Press, Distinguished Ladies and Gentlemen, It’s a great privilege and honor to welcome you to this important workshop that seeks to redefine governance in our dear continent, Africa, while leveraging the rapidly changing global technological advancement for improved efficiency and service delivery to our people. Let me seize this opportunity to express our deep appreciation to the government and good people of the Kingdom of Saudi Arabia for the hospitality and excellent conference facilities extended to us since our arrival in this beautiful city of Riyadh. Similarly, I wish to commend the able leadership of the African Peer Review Mechanism, APRM, for the excellent work you have been doing over the years. This workshop is a testament to your hard work and commitment in promoting not only political stability in Africa, but also supporting innovative activities that are essential for Africa’s digital transformation and advancement for our sustainable development. The future of governance in Africa is a topic that demands critical examination, taking into account our diverse cultures, economies and political landscapes, which calls for a collaborative effort to reshape governance models that prioritize inclusivity, transparency and technological advancement towards addressing our economic and socio-political challenges. I wish to highlight Nigeria’s recent milestones, underscoring our commitment to advancing governance, human rights and technological innovation across Africa. Under the leadership of His Excellency President Bola Ahmed Tinubu, Nigeria signed the African Union Convention on Cyber Security and Personal Data Protection. This landmark agreement, known as the Malabo Convention, establishes a critical legal framework for enhancing cyber security, safeguarding personal data, and fostering a secure environment for electronic commerce across the continent. By aligning our national priorities with this continental vision, Nigeria reaffirms its dedication to building robust digital infrastructure founded on transparency, accountability, and inclusivity. Additionally, the Federal Republic of Nigeria has demonstrated its unwavering commitment to inclusion by ratifying the Protocol to the African Charter on Human and People’s Rights on the Rights of Persons with Disabilities. This action reflects our recognition of the transformative role technology can play in ensuring the full participation of persons with disabilities in governance and societal development, paving the way for a more equitable digital future. These actions resonate with the objective of the Program of the Future of Governance in the Digital Era, which seeks to harmonize digital transformation with the imperative of good governance and human dignity. Cybersecurity and data protection are prerequisites for trust in digital governance, and inclusion remains central to building a sustainable governance system. Your Excellencies, as we embrace these initiatives, it is imperative to institutionalize our commitment at the continental level. On this note, I therefore call on the APRM to advocate for the establishment of a continent-wide data protection authority. This body would provide unified oversight, enforce consistent data protection standards, and ensure the ethical use of technology across the continent. 
Such an authority would strengthen trust in digital governance and safeguard the rights of all Africans in the digital age. Excellencies, let me conclude by affirming Nigeria’s readiness to collaborate with relevant partners to make this vision a reality. Together, let us harness technological advancement, prioritize regional cooperation, and grassroots participation. As we navigate the complexities of the 21st century, it’s imperative that we prioritize these strategies to ensure a prosperous and democratic future for all Africans. I thank you for your attention.

Moderator: Thank you very much, Your Excellency, for your remarks. Lately, Excellencies, ladies and gentlemen, the African Union has been advocating for the promotion of the nexus between good governance, peace and security, and development. And seeing the advancement of technological processes within that nexus is quite important for the advancement of human, the promotion of human and people’s rights in Africa. That is, of course, all within the implementation of Agenda 2063. And that is all, of course, within the implementation of Agenda 2063, for an Africa that we deserve, an Africa that we want. The next speaker, Excellencies, ladies and gentlemen, is a statement by the Honorable Baji Lamamly, Chairperson of the Committee on Transport, Industry, Communications, Energy Science and Technology. the government. The honorable is not with us so we will move of course to the keynote address by his excellency Amara Kalloun, the minister of political and public affairs of the Republic of Sierra Leone. I’m not sure if her excellency Selma Bahati-Mansouri is in the room. Her excellency. She’s not. She will be coming later. Okay. Let me now call on Mrs. Christina, Christina Dautry, the undersecretary general and a special advisor on Africa. Good afternoon. Good afternoon. Can you hear me well? Yes, we do hear you loud and clear. Please proceed. Thank you. Greetings.

Speaker 1: Thank you very much for the invitation to my dear sister Marie Antoinette. All protocols observed. Governance and technology. This is the subject of our conversation today. And when preparing myself for this conversation, I decided, before jumping to a technological conversation, technological solutions and even technological challenges such as the digital divide and others, I do believe that we need to understand what is the status and the state of governance in Africa and the reasons behind that. This is the reason that my intervention, and I hope to be in the five minutes, will be designed to complement those that have been more focused on the technological, because I believe the technological is just a tool. We need to understand the phenomenon so we can design more appropriate technical solutions. So my intervention will be touching three essential aspects. The first, historical roots of the African governance challenge. I believe that is important for us Africans to understand. Second, the absence of the state, at the end of the day, a sort of governance threat. And third, breaking the trap, rebooting policymaking through four conceptual pillars. So going to the first, and I’ll be very quick, historical roots of the African governance challenge. So the governance challenges facing Africa today are deeply rooted in history. We cannot just erase that as a black chess board. It’s not possible. The structures and functions of the state were essentially shaped to serve the interests of external powers. And we should say it in a very normal and calm way. So at independence, African nations inherited a fragile and ill-suited state apparatus, which was fundamentally incompatible with aspirations of independence and development. So basically, we Africans, we need to acknowledge that the colonial state that we inherited had essentially two primary functions. First, enforcing the rule of law in order to maintain colonial order, normal, logical. Second, resource extraction to serve, as I said, the economic interests of these external powers. So this extractive and minimalist model of governance lacked mechanisms for inclusive economic growth, social equity, and long-term development. So when African nations gained independence, the institutional capacity of the inherited colonial state was fully inadequate for delivering the ambitious development. So there was a mismatch at that time. The mismatch between development aspirations and institutional capacity made failure almost inevitable. So in the aftermath of early governance failures, everyone knows, in the 60s, but more in the 70s, so in the aftermath of these early governance failures, exacerbated by, essentially, post-independence economic struggles, et cetera, et cetera, we know that the Bretton Woods institutions offered prescriptive solutions based, essentially, on neoliberal theories. Instead of addressing the structural, let’s say, weakness of the inherited state, everybody knows that they blamed more the perceived overreach of the post-independence state. This is the reason that African states were accused of being too big and too interventionist, which supposedly stifled market dynamics and economic efficiency. So the result, of course, was the implementation of the structural adjustment programs that were initiated in the 90s, where, basically, these programs sought to roll back the state by reducing public spending, privatizing state-owned enterprises, liberalizing markets, opening economies to foreign competition. So basically, this approach, 
as a solution to state inefficiency, weakened African states even further. So rather than revitalizing African economies, as everybody knows, the structural adjustment programs deepened their vulnerabilities. Public services deteriorated, state capacity eroded, poverty increased. So the withdrawal of the state from economic governance left markets poorly regulated and economic actors unaccountable. So African states, as I said, became weaker, more fragmented, and less able to deliver basically their development responsibilities. So as a result, many African states today lack the capacity to manage their economy, their financial flows, deliver public goods effectively. In essence, the state has been sidelined in favor of market forces, foreign actors, and I would say global institutions. So over three decades, a weakened state has left African nations unable, first, to control economic and financial flows. Basically, African economies remain highly dependent on external actors, significant capital supply, illicit financial flows. Second, left African nations unable to manage assets for development, natural resources, infrastructure, financial systems. Third, left African nations unable to deliver public, let’s say, services. So the absence of the state has, in a certain way, perpetuated a cycle of dependence, underdevelopment and social instability, locking Africa in a sort of governance trap. And you need to understand that, so we can design, we can conceptualize, in a more efficient way, the technological solutions. So there is a need today in the 21st century, the need to break the trap, rebooting policymaking through four conceptual pillars, and I’ll be just naming them. The first conceptual pillar that we Africans need to understand is that the only way to deliver durable peace is by delivering sustainable development. Short-term solutions are mundane and don’t address the challenge. The second conceptual pillar is to see that sustainable development requires sustainable finance, meaning substantial and long-term. Financing must go beyond short-term aid to support structural transformation, must be nationally owned, and it must be resilient. Financial systems must be able and capable of withstanding economic shocks and global market vulnerability. So to secure sustainable financing, we have reached a point where both African policymakers and African partners must prioritize domestic resource mobilization as a driver of financing for development, shifting the paradigm. Of course, the third conceptual pillar is that if durable peace requires sustainable development, and sustainable development requires sustainable financing, it’s clear that sustainable financing requires control over economic and financial tools, and control over economic and financial tools requires strong and effective state institutions. And this fourth conceptual pillar, in my opinion, should be the driver when talking about the future of governance in Africa, which is today’s, let’s say, workshop. So strong institutions provide the foundation of economic sovereignty, sustainable finance and durable peace. 
So in the 21st century, to deliver control over economic and financial flow, to deliver sustainable finance, to deliver sustainable value, to deliver durable peace, digital transformation anchored on consistent investments of digital public infrastructure is not a policy option, but an imperative in terms of rescue the future of governance in Africa, and of course, to design the technological solutions to address the root causes of, let’s say, of inefficiency of today’s African governance. And I would like to stop here, back to you. Thank you very much for the opportunity.

Moderator: Thank you very much, Honorable, Your Excellency, Under-Secretary-General and Special Advisor on Africa for your statement. We really appreciate it, and I hope you will stay with us a bit longer. Now I have the honor to call on the representative of the Minister of Foreign Affairs of the People’s Democratic Republic of Algeria, and the chairperson of the APR Committee of focal points, to deliver a welcoming remarks. Please, thank you. Thank you. Thank you.

Speaker 2: Thank you. Your Excellency, Ambassador Marie-Antoinette Rose Quatre, CEO of the APRM Continental Secretariat, dear participants, ladies and gentlemen. First of all, allow me to convey to you the warmest greetings and best wishes of success of Her Excellency, the State Secretary for African Affairs in the Ministry of Foreign Affairs of Algeria, Mrs. Selma Bakhta Mansouri, Algeria’s National Focal Point and Chair of the APRM Committee of Focal Points. Her Excellency, the State Secretary, due to her busy schedule, was not able to take part in or to attend this important workshop on governance and technology organized by the APRM. Allow me also to extend my heartfelt gratitude and special thanks to the authorities of the Kingdom of Saudi Arabia for their hospitality and all the facilities they made available to us for the holding of this important workshop. My presence here today testifies and reflects the commitment of my country, Algeria, under the leadership of the President of the Republic, His Excellency Abdelmadjid Tebboune, President of the Forum of Heads of States and Governments of the APRM, to continue its support to the APRM in the accomplishment of its mission in promoting good governance in Africa. The APRM serves as a crucial self-monitoring tool aimed at fostering political stability, sustainable development, and economic growth across the African continent. Today, as we witness rapid technological advancements reshaping global governance frameworks, it is imperative that we come together to navigate the intricate intersection between technology and governance. Africa’s digital transformation which presents both opportunities and challenges should be positioned among the top priorities of African Agenda 2063. As it connects all sectors, it requires a more transversal than vertical approach as well as a more intense intersectoral coordination that will make it possible to achieve the objectives of the Agenda 2063. While technology holds the potential for inclusive development, it also amplifies existing inequalities and poses ethical and legal dilemmas. It is our collective responsibility to bridge the digital divide and ensure that technological advancements are harnessed for the benefit of all, especially the marginalized communities. Ladies and gentlemen, dear participants, let’s recall in this perspective the African Union demonstrated leadership and commitment to Africa’s digital future by adopting the African Digital Compact and the Continental Artificial Intelligence Strategy in July 2024. These are not just policy documents, they represent a unified vision. The African Digital Compact aims to harness the power of digital technologies for economic growth, societal well-being and long-term development across the continent. The Continental Artificial Intelligence Strategy aims to leverage artificial intelligence for sustainable development in Africa, aligned with Agenda 2063 and the Sustainable Development Goals. 
By fostering a shared understanding among policymakers, civil society, academia, and other stakeholders, we aim to navigate the opportunities and risks posed by technological innovation and align them with Africa’s development goals. Together, we should develop strategies to leverage the benefits of frontier technologies while mitigating associated risks. Through a multi-stakeholder approach, we will explore the applications of technology in governance and advocate for ethical, equitable, inclusive, and transparent use of emerging technologies. Let’s work towards establishing a vibrant ecosystem for digital governance in Africa, enhancing sectoral performance, strengthening regional cooperation, and accelerating progress towards our development goals. Together, we can shape a future where technology empowers us to build a more inclusive and sustainable Africa. Ladies and gentlemen, thank you, and let’s make today’s workshop an invaluable opportunity for us to gather insights, share best practices, and identify priorities for advancing digital governance and transformation. Looking forward and to embark on this journey of collaboration, knowledge sharing, and innovation for the betterment of our continent, I thank you for your kind attention. Thank you very much.

Moderator: Thank you very much indeed. This is a very good opportunity and an excellent platform for us to share best practices and information on how to advance technology in Africa. Not only technology, but governance as, of course, the ultimate goal through the advancement of technology. I’m not sure if His Excellency the Minister from Sierra Leone is with us, Gibran. Can we check online to see if he’s available now? His Excellency, the Minister of Public and Political Affairs of Sierra Leone. All right, the time has come for us, Excellencies, ladies and gentlemen, to listen to the opening remarks by Her Excellency, Ambassador. Marie Antoinette Rose Quadri, the Chief Executive Officer of the African Peer Review Mechanism. Your Excellency, you have the floor, please.

Speaker 3: Excellencies, distinguished guests, ladies and gentlemen, all protocols observed. It is a profound honour to join you at this defining moment for our continent. A moment when winds of technological transformation are sweeping across our continent, offering opportunities to reimagine governance and secure a future that is more inclusive, accountable and prosperous. Today we stand at the intersection of innovation and responsibility at a time when the choices we make about governance and technology will echo across generations. Let me begin with a reflection. Governance, at its core, is about people and their hopes, their aspirations and their dreams for a better tomorrow. Technology, meanwhile, is not just a tool. It is a bridge that connects those dreams to reality. It can amplify voices, illuminate truths and inspire innovation. But, as with all powerful tools, it can also divide, distort and exclude. But let’s also be clear. Technology is not a cure-all. Its power is only as good as the principles that guide its use. Without ethics, it can divide us. Without inclusivity, it can deepen inequalities. Without accountability, it can divide us. it can undermine trust. That’s why we’re here, to ensure that Africa leads this digital era with integrity and purpose. Our mission here today is to ensure that technology serves as a force for unity and progress, not division and stagnation. This workshop is not just another event. It’s much more than governance or technology alone. It’s about people. It’s a clarion call. A call for leaders, thinkers and doers from across Africa to chart a course forward. One that ensures technology strengthens governance rather than undermines it and empower citizens rather than marginalizes them. It’s about the young entrepreneur in Lagos coding solutions to connect rural farmers. The student in Nairobi pushing for transparency through digital activism. And the policy leaders across our member states working to build systems that reflect the aspirations of their people. Excellencies, ladies and gentlemen, over the coming hours, we will explore the nexus between governance and technology guided by critical questions. How can technology strengthen democratic processes across the continent? How do we confront cyber threats and misinformation that erode trust in governance? And how do we ensure that Africa’s digital transformation leaves no one behind, whether rural or urban, young or old, rich or poor, men or women? The APRM has long championed the principles of accountability, inclusivity and innovation through our e-governance initiatives and in partnership with the United Nations Office of the Special Advisor on Africa, UNOSA. We are strengthening e-governance in Africa through policy innovation and transformative technologies, as directed by the United Nations General Assembly. Furthermore, our collaboration with the United Nations Department of Economic and Social Affairs, UNDESA, on capacity building has been instrumental in capacitating APRM member states. As we begin, I will ask you to approach today’s discussions with an open mind and collaborative spirit, because the future of governance in Africa will not be defined by any one of us, but it will be shaped by all of us together. I am confident that this room holds the visionaries who will shape it. Let us make this moment where bold ideas meet transformative action, where the vision for a digitally empowered Africa becomes reality. 
It is my great privilege to officially open the workshop on the future of governance in Africa. Thank you, and I look forward to these extraordinary outcomes we will achieve together. I thank you.

Moderator: Your Excellency, thank you very much for your opening remarks. Indeed, while we are proceeding with the implementation of Agenda 2063 and the advancement of technology in Africa, no one should be left behind. No women, no men, no children, no adult, no people in the urban or rural areas of Africa. This brings us to the end of the opening ceremony, Excellencies, Ladies and Gentlemen. And now I would like to democratically hand over the microphone to Professor Desmond, who is an associate professor in the Department of Private Law at UWC, for session number one. Thank you. Thank you very much, Ambassador. Okay. Your Excellencies, ladies and gentlemen, I think for now I’m actually playing the role of a forerunner, and I will simply be introducing the next person, the person who would actually moderate the next session, and she is none other than Susan Mwape, who is the founder and Executive Director of Common Cause Zambia. I’ll be coming after her, like John the Baptist, I’ll be after her, right, like Jesus Christ came, you know, after John the Baptist. Good afternoon, ladies and gentlemen. I hope you’re able to hear me. I can’t hear myself. Good afternoon. All right, so thank you very much. Thank you Desmond for the introduction. I will at this point be inviting somebody who has already been introduced and we will have just a brief fireside chat with Ambassador Salah Hamad. As has already been introduced, Ambassador Salah is the head of the African Governance Architecture Secretariat at the African Union. And Ambassador, you’re very much welcome. I follow your work and you have been doing such great work at the AGA Secretariat. I’ll start by asking you, how has digital transformation reshaped governance in Africa, particularly in enhancing democratic engagements, political pluralism, and of course the big issue, electoral integrity?

Salah Siddiq Hamad: Thank you very much indeed. As he for she, I’m quite honored that you are the one moderating this session. Thank you. I’m not saying anything against Professor Desmond, but it’s an honor to be interviewed by a woman because this is what we believe in in Africa, that Africa cannot really be developed and built without the active participation of women. How digital transformation reshape governance in Africa? I think before we speak about the future, we need to really make reference to the past and to the present. Africa, as we all know, has been going through a lot of challenges, even before independence, slavery, colonialism, and now civil wars in many African countries, apartheid in the Southern African region, and you name it. so many challenges. So therefore, the advancement of good governance is absolutely one of the ultimate objectives that we need to reach before we proceed with the implementation of Agenda 2063, the blueprint for Africa, for building the Africa we want and the Africa we deserve. But where are we now from that objective? I would say that it’s a work in progress, but we need to really do more. Why? Because despite the fact that Africa has advanced a bit in issues related to elections and in particular the electoral processes, we still see some setbacks in some of the African countries. It’s not because the technology is not working, but because the other infrastructures are not available. The technology by itself will not work to advance governance, but we need really the people to know their rights and duties and to understand what is the political processes and the governance advancement is all about. So without orientation, without really raising awareness, I think technology by itself will not really work. In some African countries also, we need to really face the reality that the infrastructure that will be based for the advancement of technology is also lacking. So we need to look into the infrastructure that is needed to make sure that the advancement of technology is helping. Also, I think to focus more on the question, how is it reshaping governance and advancing election in Africa? I think we need to also look into how many Africans do have access to internet, how many Africans do have access to a smartphone that could be used to access internet. how many women versus men? How many young men and women versus old? I think all of these questions need to be addressed. In addition to that, the issue of infrastructure, including, I would say, even power in some African countries where all of this technology needs to be powered. Are we utilizing solar systems to empower our infrastructure so we can have a valid infrastructure and platform for empowering or for advancing technology? All of these questions need to be addressed. And I think most of all, again, we need to really make sure that all Africans are quite informed and aware of their duties and rights as part of the advancement of good governance, role of law and technology, based on technology in Africa. Thank you.

Moderator: Thank you very much for that, Ambassador. And I think you raised a lot of very valuable issues. When we talk about Africa, I think the digital divide remains one of the biggest challenges, issues of infrastructure. We are seeing a lot of rapid progress in terms of what kinds of development is happening across the continent based on international standards and best practices that are being recommended. I think one thing that would come to mind would be the DPI initiatives. And we are still trying to figure out how that will speak to our rural communities, for example, and those that are already in the divide. But then maybe in your view, what would you say is the role of the different digital tools that we have in improving governance? I’ll pass it.

Salah Siddiq Hamad: The different tools that we currently have do have impact on our processes and also on our efforts. to promote good governance and e-governance, in particular in Africa. But of course, I think there’s also, we need to speak about the need for political support from our governance to allow these processes to exist and to proceed. Without political support, it would not be easy really to achieve that goal. Secondly, changes is always looked at as something that could be of disturbing nature. People by nature don’t really accept changes easily. And all of this technological infrastructure that we are talking about and mechanisms and tools, they are to some extent quite new, if not new. And therefore, we need to also accompany the process with, I would say, a stronger orientation processes to make sure that these people, our African people, are looking into the positive side of implementation of all of these tools and processes that we have. I’m saying this because in many cases, during elections in some African countries, the initial response from government will be to block internet. Why? In their view, the internet will be a tool for spreading fake news and bad news and news that will disturb the election processes. How can we prevent all of this without blocking the internet? How can we allow the African people to enjoy and to benefit from the internet while voting? How can the internet be used to ease the access to information and to also ease access to information that will allow them to vote in a way that it will make them benefit from the entire process as African citizens? Again, political support is quite important. but also general orientation is needed to allow the African citizens to know what is going on in that sphere.

Moderator: Okay, I will ask you my last question because that’s all we had time for. We have initiatives such as the African Peer Review Mechanism, which also somehow serves as an early warning too, in addition to the review processes that in the text. But at the same time, the continent also have several other early warning mechanisms. How then would you say technology can play a role in strengthening those kinds of mechanisms, such as the APRM, which is a co-governance tool that Africa is using and all these other mechanisms?

Salah Siddiq Hamad: This is an excellent question. And I think this is also an opportune moment to congratulate APRM for a job well done. APRM since its existence has been an excellent mechanism as part of the African Union family to promote good governance, democracy, rule of law in Africa through different and various mechanisms and tools. And I think one of the greatest tools that has been used in addition to the review and all of these other reviews and processes is the Africa Governance Report, which is currently one of the Africa’s, I would say, reports that speaks to the reality of governance at the national and continental level. On the other hand,

Speaker 4: Moderator, excellencies, ladies and gentlemen, let me say that it is an honor today to address you. And I’m sorry that I cannot be with you in the beautiful kingdom of Saudi Arabia. I am here sitting in Durban, my internet is a little unstable, so I may at some point just have to switch off the camera, and I have a PowerPoint presentation which I see somebody is sharing. But let me quickly go through this presentation in the interest of time, moderator. Let me switch off the camera, otherwise it will switch off. Cyber diplomacy in Africa and the pivotal role it plays, moderator, in securing our continent’s future, is a very, very important topic for us today on the continent. And digital technologies must be one more tool for us to break the chains of colonialism and neocolonialism, and not allow it to be used once again to imprison us. In an increasingly interconnected world, African nations must establish a unified approach to cyber diplomacy. One that balances, as you see there, the national interests with regional security and developmental and governance goals. Let us examine these challenges through five key questions, if we can get to the next slide. How can African countries create a unified framework for cyber diplomacy? The answer, colleagues, begins with cooperation. Regional organizations such as the African Union and its African Peer Review Mechanism are uniquely positioned to drive this process. The AU has already provided foundational initiatives, like the Malabo Convention on Cyber Security and Data Protection. By adopting such agreements… African countries can harmonize national cyber security policies under a collective framework. A regional cyber diplomacy council supported by the AU, and I know the earlier speaker talked about a data protection authority, so either a data protection authority or a regional cyber diplomacy council supported by the African Union to serve as a platform to coordinate interests, resolve conflicts, and promote Africa’s shared goals of security, development, governance, and digital inclusion. Secondly, excellencies, what capacity building initiatives are needed to enhance African diplomacy? Cyber diplomacy requires a unique blend of negotiation, mediation, and technical expertise. Capacity building programs should focus on training African diplomats to navigate cyber-related disputes and threats. Establishing regional cyber academies and centers of excellence in collaboration with international partners will be essential to this effort. Technology transfer agreements can help African states build indigenous skills while fostering partnerships with global cyber security leaders. Thirdly, what mechanisms can ensure transparency, accountability, and trust? Trust is the cornerstone of successful cyber diplomacy. African nations must develop mechanisms for confidence building, such as cyber incident response frameworks, joint exercises, and data sharing agreements. The establishment of a regional cyber dispute resolution body can further provide neutral ground for resolving conflicts related to sovereignty, data protection, and security. 
Africa must productively use cyber diplomacy as a peacebuilding tool. For example, early warning systems and cyber confidence-building measures can prevent misunderstandings between nations. Africa can also learn from the experiences of regions like the European Union and ASEAN, which have successfully implemented cyber dialogue platforms to manage disputes. By fostering regular communication and sharing best practices, we as African nations can prevent cyber threats from becoming destabilizing forces. Fifthly, what role can public-private partnerships and civil society play? The private sector and civil society are key stakeholders in Africa’s cyber diplomacy agenda. Public-private partnerships can drive innovation, enhance infrastructure, and provide expertise in addressing complex cyber challenges. Private companies can also assist in developing standards for cybersecurity and digital trust. Civil society, meanwhile, bring inclusivity and accountability to cyber policymaking. By engaging these stakeholders, African nations can ensure that cyber diplomacy outcomes are equitable, innovative, and resilient. And in conclusion, moderator, let me say that the path towards a unified… framework for African cyber diplomacy is challenging, but achievable. Through regional cooperation under the leadership of the African Union, strategic capacity-building initiatives, robust transparency mechanisms, and active stakeholder engagements, the APRM can shape a cyber diplomacy agenda that promotes security, peace, governance, and development, and use these digital tools to liberate our continent and not imprison us again. I thank you, moderator.

Moderator: Thank you very much, Dr. Vasu. Can we please give him a round of applause? All right, so we go straight to the next panel. It’s going to be a panel discussion, and I will introduce the panelists, and then we get on to the discussion without wasting time. So the first on my list will be Dr. Kashifu Inuwa Abdullahi, who is the Director General of the National Information Technology Development Agency, Nigeria. We also have as a member of the panel Mr. Denise Sousa. Mr. Denise Sousa is the Governance and Public Administration Officer, Division for Public Institutions and Digital Government of the United Nations Department of Economic and Social Affairs. Is he here present? Is he physically present? Mr. Denise, is he here? Okay, I will introduce him when he comes back into the hall. We also have Ms. Jimena Sofia Viveros Alvarez. I hope I got the name correctly. She is the Managing Director and CEO of Equilibrium AI, and I also understand that she is a lawyer from Mexico. All right. We have joining us virtually Dr. Nomalanga Mashinini, who is a senior lecturer from the University of the Witwatersrand in Johannesburg, South Africa. And also we have from Malawi, Dr. Jeanne Filippo, who is the Director General of the Financial Intelligence Authority in Malawi. Is she participating virtually? Okay, so Dr. Jeanne is participating virtually. And we have the last but not the least, Ms. Arusha Goyal, who is the Policy Lead, Middle Eastern Africa Chain Analysis, United Arab Emirates. Okay, we will announce her presence when she joins us. Without wasting time, we’ll go straight to the question, and it is my pleasure to call on, the first question will be actually pinned by Dr. Kashifu. Nigeria recently just left an election period, and for some of us who observed the elections from afar, especially from social media, the engagement was quite charged. You know, lots of misinformation flying around, a lot of fake news, a lot of exaggeration. But again, amidst those, we still had a lot of informed, you know, discussion by the Nigerian citizens. You know, they were very interested in who was going to lead them. So drawing from that experience, from that, you know, the outcome of the elections, and from what your agency, NITDA, is doing in Nigeria, we would need to know what NITDA is putting in place to promote informed and ethical engagement of Nigerian citizens in the democratic process in our future elections.

Speaker 5: Thank you. If you look at the history of the internet and social media, when it started in the early 2000s, we all started or rushed after it without thinking of putting guardrails around it. Like we had the John Perry Barlow declaration saying that the internet is an ungoverned space. We had the big techs at that time saying that the internet and social media are a free space. Nobody can govern it. But from 2016, things started changing after the Cambridge Analytica issues, whereby the big techs started calling for regulation. But the challenge is we don’t know how to regulate the internet or social media because there are no legal books or history books that we can read to understand how to regulate these spaces. Because the generation before us never encountered these kinds of challenges. So it’s something that we need to co-create how to regulate. And countries today are grappling with how to regulate these spaces. And we are looking at it from different perspectives. Then in Nigeria, we had an incident in 2021 when Twitter was banned because of… these kinds of challenges; people were misusing the platform. At that time, there wasn’t any contact between the government and these big techs. And that’s when NITDA moved in to fill the gap. We did that by creating a code of practice, because social media is not something or technology you can just say you regulated this way, because it always changes. And also one thing with the techie guys, they always try to look at how to stretch the law or to hack the law to bypass it. So the best thing to do was to say that anything that is illegal offline is illegal online. So how can we move our law from the physical world we are in to the virtual world we are creating? So we came up with the code of practice to let these big techs, the social media platforms and so on, understand our laws so that they can apply them in their platforms in Nigeria. Because in most developing countries, we don’t have data sovereignty. We don’t have operations sovereignty. Because the big techs will decide how to operate their platforms without consulting us. And we don’t have digital sovereignty. And they don’t listen to us in most cases. So that COP brought them to the table where we sat together to create how we can navigate this platform together or how we can navigate the challenges. So we came up with the COP, the code of practice, whereby they need to respect all Nigerian laws. They need to register 
So we need fact-checkers that understand the local context and also that can be able to translate things before they take decisions. So that really helped us to moderate that space. And this year they filed a report because part of the COP they need to be filing annual report to look at the number of content they take down, the number of content they put back after taking down, because sometimes also there are… is cyberbullying. If someone doesn’t like your content, people can gang around to flag your content. And this platform, they take it down. And you can initiate the process to put it back. Based on the report last year, they removed more than 60 million contents in Nigeria, harmful content. And also, they reinstated many. I don’t have the figures, but we published the report just a few weeks ago. It’s available online. And also, that get them to start filing taxes in Nigeria. Because in the half year this year, in between January to June, they paid more than 2.5 trillion Naira in VAT in Nigeria, which before, they don’t used to pay most of that.

Moderator: Thank you. Thank you very much, Mr. Dr. Kashifu. As you were speaking, one important point I got from your discussion is the fact lack of moderation of engagement on the social media can actually threaten security. And with the emergence of the AI technology, that can actually, that situation can be exacerbated. So it can pose a greater risk to national security, to peace, peaceful coexistence among citizens. And that leads me to the question that I would like Ms. Jimena to address. I would want to know what governance framework you think that countries, especially in the global south, and particularly Africa, can adopt to address the risk that AI poses to peace and security. Well, first of all, thank you for having me here, your excellencies, distinguished panelists.

Speaker 6: So I am Ximena Riveros, and I am Mexican, but I live in Zimbabwe, so that is my link to this conference. I’m also a member of the high-level advisory body on AI that the Secretary General of the UN created. So we have been looking at a whole different set of solutions or recommendations for AI governance at the global level. Our conclusion was that, although obviously there’s a need for a regional kind of context-specific approach, there cannot be a real governance if we don’t talk about global governance, because the technology is completely transboundary, it’s transregional, and if we just have regional or national initiatives, we’re just witnessing like a patchwork of all of these different things that are not in a coherent manner adhered. So that’s what we are aiming for. So what struck us the most was, so we did a survey, and out of all of the 193 countries of the United Nations, 118 are not a part of any of the international governance initiatives worldwide, 118, out of which over 50 of Africa are in those 118. And only 7 are part of all of them, the G7. So that really is very striking, because we need more inclusivity. and we need more access to these conversations and more engagement as well. Because if we don’t do so, there’s going to be just the widening of the digital gap and more inequalities and a bunch of problems that come with the exclusion, especially with the marginalized communities, the lack of data and etc. So the way we need to achieve this is by creating synergies, strategic synergies, strategic partnerships. We discovered that the Global South to Global South kind of cooperation is more efficient and more welcome than Global North to Global South because simply within the Global South we understand the problematics, we understand each other whereas the Global North doesn’t really and their priorities are different. And what we want to avoid is this techno has been addressed or this new techno-colonization whether it’s for resources or for permitting, just the dependency itself because that’s going to bridge us even further from where we need to be in terms of trying to achieve the sustainable development goals by 2030 and even beyond. So we need to have this strategic vision that goes even further and is ahead thinking. So where does this place us in terms of governance? We need governance that is resilient, that is techno-neutral in order for it to be adaptive to the evolution of the technology itself which is extremely fast-paced and that needs to be generalistic because we cannot separate, it’s dual use by nature, right? So the technology, so we cannot separate the military domain, the civilian domain. domain, I call it the peace and security domain, but on purpose, because there are implications that are, you know, intersecting both domains. For example, it is the exact same technology that is being used by the militaries, which is a state actor, but it’s also being used by all other type of state actors, such as law enforcement or border controls, which are civilian by nature. And then we also have non-state actors, which are more relevant for our region than for, say, the global north, which are organized crime, terrorism, mercenaries, and so on. 
And the dire reality is that in our regions, these types of groups might have even more capabilities than the government itself, and the governments themselves don't even have the capacity to respond to attacks by these groups, with or without the technology, but the threat is exacerbated by their access to this technology and by its proliferation, because we don't have a governance regime. So this is critical to address. So what is the landscape with regard to governance, specifically in the military domain? In the HLAB, the High-Level Advisory Body report, there was even a discussion about whether to include it or not. Fortunately, we did, because of these reasons: it's dual use by nature, and we cannot exclude it. So we did have a lot of considerations on the peace and security domains. For those who haven't read our final report, it was submitted in September, right before the Summit of the Future, and then our recommendations were subsequently adopted into the Global Digital Compact and the Pact for the Future. We do include recommendations for peace and security, so that's a starting point: we have the Pact for the Future. Then we also have the GGE on LAWS, the Group of Governmental Experts within the UN Convention on Certain Conventional Weapons, which I think is a little bit ironic, because autonomous weapons are the least conventional of weapons, but anyway, those discussions have been kind of deadlocked for over 10 years now. And this is for AI in general, since this is a multi-stakeholder environment, as the IGF is, and AI should also be covered there: because of the shift of dynamics that is happening all over the world, we need to reimagine how this governance can be achieved. The traditional methods are no longer working, because the conversations and the discussions are deadlocked when, you know, one of the P5 states just decides that it's done. So the whole veto system of the Security Council, and even the structure of the GGEs, has not proven to be successful for our interests; obviously for them it does work out, but anyway. In that sense, we also have the Global Commission for the Responsible Use of AI in the Military Domain, where I'm also a Commissioner, and UNIDIR also has a group of experts. But the greatest achievement, I think, so far is moving the conversation away from the GGE on LAWS to the UN General Assembly, because there we have witnessed some very important resolutions which explicitly include the call for AI regulation in the military domain, which hasn't been seen before. And we also have a very important call for action by the Secretary-General, Guterres, and by the ICRC to have a binding treaty on autonomous weapons by 2026. So we really hope that this can be achieved, because of the dire consequences on our region, as we have seen, for example, in Gaza, and also in Ukraine, but that is just an example of how it's disproportionately affecting the Global South. Because this technology, these weapons, are not going to be deployed in the Global North. We are going to be the recipients, we are the recipients, and they're being field tested as they are deployed. There's no testing in between, there's no accountability, and there are so many problems with their deployment because of bias.
And that also comes back to the point of the missing data, because these models, the weapons that are targeting civilians, are working on the data they have been trained on, and it just depends on who has been training the models and which data they have been trained on. Obviously these are Global North enterprises that are training and creating these models, which are then deployed into the field. So obviously there are racial, gender, and age biases, all types of biases that are imprinted, and that's how they're targeting and attacking civilians.

Moderator: Yeah, thank you very much. I know you've got a lot to say, but for want of time, we'll just proceed. We'll come back to you again to share some thoughts. But as you were speaking, I heard you talk about the multi-stakeholder approach, and that takes me back to Dr. Kashifu's presentation, where he talked about the role that the Facebook Cambridge Analytica case played. And of course, all of this is enhanced by emerging, potentially destructive AI technology. So I would now want to call on Ms. Mercy Ndegwa, who is Director of Public Policy, East and Horn of Africa, for Meta, formerly Facebook, to intervene in the discussion. We want her to tell us how Meta is leveraging its platforms to support democratic engagement in Africa, particularly during elections and periods of political transition. Thank you so much, Professor. I hope you can all hear me okay.

Speaker 7: Yes, we can. I thank you for having me in this session, and I apologize that I'm not able to join you in person in Riyadh, but I'm glad that we have this tech-enabled session where we can still participate even though we're not there in person. So please bear with me in case of any challenges with regards to the connection. I'm grateful for the opportunity to come and join you to have this conversation. And just before I answer your question, I want to draw us back to the context that was really helpful as I was preparing for this session, which was basically looking at what the nexus between technology and digital governance is. One of the things that was brought up very strongly, even in the keynotes presented before my remarks here, is the fact that technology has actually been a very huge enabler in advancing governance across our countries, and it has also enabled us to do so at scale, to empower communities, and to bring a lot of benefits on board. But even as I was thinking about this, I think in the earlier remarks that have been presented here, there is a lot of focus, I'd say, on social media specifically within the digital governance space. But I'd love to draw attention to the broader environment that is the digital space, because social media is only one part of it. We definitely are players in that space. There is e-commerce, there is e-governance, there are e-government services, there are payments themselves, all of which are but a few of the areas that make up the broader ecosystem within which we are speaking about digital governance. And so, again, to the points that were raised before, the question is not so much what a single player or players in a specific area could do, but more so how we can all be involved in ensuring that digital governance is upheld and that we continue to improve on how we promote this. When we think about digital governance at Meta, we're thinking primarily about three or four areas. One, and this is to answer your question, whether it be in our engagements around elections or around mitigation of any integrity risks, as we call them, around privacy and security, around misinformation and disinformation, any cybersecurity risks that may emerge with the use of our platforms. We have heard, in earlier sessions and even a bit within the keynote, about issues around algorithmic bias and building understanding around that, and there is the question of exclusion of people from the digital space. So elements of the digital divide remain a key part of some of the digital governance areas that I think continue to undermine peace, and they could actually be areas of improvement for all of us to work on, to see how we can promote better digital governance and peace and security across the board. And so for us at Meta, like I was saying earlier, we think about this from primarily three to four areas. One is how we can ensure that there is self-monitoring by all of those who are coming onto our platforms. Today, our platforms support over 3.3 billion users who come onto our platforms actively every month across Facebook, Instagram, WhatsApp, and Messenger. And all of these users are interacting for different reasons. Some are coming to connect with one another because they are geographically removed from each other. Others are connecting because they have similar hobbies or interests.
And others are coming to connect because they want to promote business or, let me say, digital-economy-related work streams, so you're looking at the creator ecosystem and others like that. All of these are really fundamental in ensuring that we continue to advance economic development, social impact, and economic growth. And for us, when we think about self-regulation, we want to ensure that users, as they come onto our platforms, are very clear about what rules we have with regards to how they conduct themselves on our platforms. So we have very elaborate community standards, or guidelines, across these platforms that inform our different users about what we allow or do not allow on our platform, that being the most effective place to start. But we recognize that a lot of people may not necessarily look into these community guidelines and rules as much as we would like, even though we take a lot of effort and time in making sure that these policies are regularly reviewed and updated and that we consult extensively with experts, bodies, governments, associations, civil society organizations, and others to draft them. We then have to think about how we, as a responsible organization, can use other strategies within our control to support and uphold integrity and governance across our platforms. And so this is where our second intervention comes in. We think about how we can do this, for example, using machine learning tools or artificial intelligence to address some of these integrity risks at scale. And we have found that this has been extremely effective when we think about, for example, being able to bring down fake accounts, or accounts that are created by folks with the intention of either using them to propagate bad activity online or to target people. And we find that when we've done that, a lot of the content that could have been problematic is addressed and brought down even before that content is published or seen by users. But there are more sophisticated actors who may come onto our platforms with different intentions. I think some of these challenges are part of what we're discussing here, and the question at hand you've mentioned is part of this problem. So, as our third intervention, we are constantly making sure that we are working in partnership with others who are experts in this area. We are a technology company and we have a lot of expertise in-house drawn from different disciplines. But over and above that, we want to ensure that we are working very closely and in tandem with others who also have depth and expertise in those different vertical areas, to get support in making sure that we have the right oversight about how we create our policies, how we develop our products, and how we then execute on programs on the ground, which brings me to that last part, the programs themselves.
Once we have our policies in place and have developed our machine learning and AI tools that help us address some of these concerns at scale, we also then partner on the ground with local partners to make sure that we can address the challenges or gaps we may have, whether from a local language perspective or around certain nuances of culture, religion, and so on that may be very unique to an area or a region. This is to make sure that we have a proper understanding of what risks there may be that could be exacerbated by our platforms, and then we get the right collaboration around those to be able to address them. So I think I will pause there, but I'm happy to expand a little further when I have a chance to speak again. I thank you for the opportunity. Thank you very much.

Moderator: So we take the last intervention in this panel, and we'll be inviting Dr. Nomalanga to tell us, from a legal point of view, what legal frameworks will be useful to ensure that governments and the governed actually use social media platforms responsibly, safely, and ethically, to ensure a smooth governance process in Africa. Thank you, Desmond. I'll be very brief because we're running against time.

Speaker 8: Thank you, Desmond. I think one of the important things that we need to understand is something that Cisco recently reported, which is that cyber attacks, sometimes committed through social media platforms, account for a 10% drop in Africa's GDP as a whole, which in monetary terms amounts to billions of dollars lost from our GDP. And I think there are two main points that I just want to briefly make here regarding legal frameworks for safe and resilient social media use in governance. The first is a participatory approach. I do think that a number of African governments have adopted a mere one-way communicative approach in using social media to reach out to people. We saw this even in the election season. We don't really see a participatory mechanism where conversations flow both ways. And this results in opportunists, particularly fraudsters on social media, taking the opportunity to communicate important but misinformed or disinformed news to the communities that the government itself or other governmental organizations should be communicating with. The other way, in addition to a participatory approach, is a collaborative one between government and industry. For example, in South Africa, we are starting to see some policy and legislative interventions and contributions coming from the Association on Comms and Technology, which is basically made up of different information and communications technology companies such as Cell C, MTN, Rain, etc., coming together with government to create new forms of interventions that actually influence how we should regulate internet access, social media, and the use thereof. So I do think that there is a need for some focus to shift to looking at how we can get citizens involved in the process of lawmaking as well as policymaking in this regard. Is that brief enough, Desmond?

Moderator: Yeah, thank you. Thank you very much. Super, super brief. You know, let's give a round of applause to all the speakers. And you know, one message I take from the whole discussion is that there is actually a need for a multi-stakeholder approach towards developing an e-governance, or if you like, a digital governance framework in Africa. And of course, no agency could better galvanize that process than the APRM. And I'm hoping that the CEO is taking notes for the work that needs to be done in the future. I will hand over the mic, my job is done. I will hand over the platform again to Ms. Susan. Thank you very much. Thanks, everyone. All right. Can we please give another round of applause to the panel? Thank you very much for a well-articulated panel. As they are winding up, I'll be introducing the last panel for this afternoon, and we'll be looking at the impact of digital transformation on the future of economic governance and resource management in Africa. At this point in time, allow me to call upon His Excellency, the Minister of Information of the Republic of The Gambia, who will come to talk to us about leveraging digital transformation for Africa's economic governance and resource management. Please, a round of applause.

Speaker 9: Good evening, everyone. Your Excellency, Ambassador Marie Antoinette Rose Quatre, Chief Executive Officer of the African Peer Review Mechanism, good evening, Excellencies, distinguished personalities present here today. It gives me great pleasure to be here today, and it is a very big privilege to give a speech, a brief one though, on leveraging digital transformation for Africa's economic governance and resource management. As Africa enters the digital age, the continent faces a historic opportunity to transform its economic governance and resource management. Through the strategic adoption of digital technologies, Africa can address many of the challenges it faces, from inefficiency in the public service to public administration, through e-government and various programs, such as the Gambia National Digital Transformation Policy 2021-2025, which aims to enhance digital infrastructure and services across all sectors. In addition, as part of our drive for digital transformation, we have started initiatives to support economic governance. We are making efforts to digitize government through e-government services, and this is growing, including online portals for tax collection, business registration, and public service delivery. For example, the Gambia Revenue Authority, whose mandate is to collect revenue for the government, has introduced digital tax platforms to streamline tax collection, reduce corruption and improve efficiency in government revenue management. That is why, recently, the government has done a lot with our local government funds, exceeding the target that was set for this year. We are also about to launch digital identity systems to improve service delivery and ensure that citizens can access social services and financial resources, especially in remote areas. When it comes to resource management, digital tools offer unparalleled potential. Africa's vast agricultural sector can benefit from technologies like mobile apps, GPS-based farming solutions and drone surveillance. These innovations enable farmers to increase productivity, optimize water usage and adapt to the challenges of climate change. Similarly, digital technologies like Internet of Things sensors and blockchains are helping to monitor and manage Africa's natural resources, ensuring sustainability and curbing illegal activities like poaching and deforestation. One of the most promising areas of digital transformation is financial inclusion, with mobile money platforms proliferating across the continent. Millions of Africans who were once excluded from the financial system now have access to banking, saving and lending services. This promotes economic growth, reduces poverty and helps build more resilient communities. However, for Africa to fully realize the benefits of digital transformation, we must tackle key challenges, and these include but are not limited to improving digital infrastructure, bridging the digital divide between urban and rural areas, between male and female, and across generations, and investing in digital literacy and cybersecurity. These investments will ensure that digital transformation reaches all corners of the continent and benefits every African citizen. Africa's digital future is bright. By leveraging digital transformation, we can revolutionize economic governance, optimize resource management, and create a more inclusive and sustainable future for all Africans. Thank you.

Moderator: Thank you, Honorable Minister, for your remarks, and indeed Africa's future is bright. At this point in time, allow me to call on my panelists. I'd like to call upon Dr. Uyuyo Edosio, who is a Principal Innovation and Digital Expert at the African Development Bank (AfDB) in Côte d'Ivoire. Okay, we may be facing some technical challenges, but I will then move on to Mr. Adeyinka Adeyemi, who is the Director General of the Africa e-Governance Conference Initiative in Rwanda. Please give him a round of applause.

Speaker 10: Thank you very much. As we have run out of time, we only have three minutes, and I hope I can ask you to help us deliver a huge task in those three minutes. My question to you is, in what ways can digital transformation enhance economic governance and accountability within African nations? Right, so we can do this in two minutes. At the Africa e-Governance Conference Initiative, what we do is promote e-governance. And on that question, I'd just like to say that the future of governance in Africa is actually e-governance, which is what everybody has said in different ways. So, to promote e-governance or digital transformation in Africa, some of the values have been mentioned, but for me, a lot would have to do with how it enables decision-making. Obviously, there are data-driven systems and infrastructure that aid decision-making across Africa using digital means. There's also the value creation for sectors like agriculture, for instance. The minister from The Gambia did mention a couple of things. So what we have done across Africa is to work with the agencies that promote digital transformation, because we see a lot of value in the nexus, the relationship, between digital or e-governance and the sectors. If you go to education, if you go to finance, which also has been mentioned, there are just a lot of things happening across Africa. But there's more to come if we have the enabling laws and frameworks. If we have the, I don't know if you can hear me. If we have the enabling laws and frameworks in place, and if a lot of the African countries can ensure that they follow those roadmaps that have been created across Africa, and just make sure that we do what is right, basically.

Moderator: Thank you very much, sir. Please, a round of applause. Dr. Uyuyo is online. Can you hear us? Yes, I can. Apologies, I wasn't granted access to unmute my mic or my video. All right, it's okay. We have come towards the end of our program and you have three minutes to respond to one question. I would like you to share with us what you think are some of the strategies that African governments can use to adopt, I'm so sorry, I'll come back again. What strategies can African governments adopt to harness the benefits of digital transformation while ensuring the resilience of SOEs? Thank you.

Speaker 11: Thank you very much for that question. And really, apologies, I cannot be with you in Riyadh. Back to your question: what can African governments do to really harness the full potential of digitalization? I'll come to the answer in two ways. First, there is a supply side, right? And it's been said across this conference: underlying infrastructure. There is no digital transformation without underlying infrastructure, and the first element of that is good connectivity. Africa has made progress in terms of mobile connections; at least 88% of Africa is covered by 2G. If you look upwards from there, you'll see this coverage in some ways, in some parts. That's great, because with 2G, 3G, you can make good mobile phone connections and calls. But if you're looking at critical connectivity to contribute to the AI age, you need to be backed by fiber backhaul or some sort of very strong connectivity backbone and base so that you can contribute. And many of our African countries are still suffering from a lack of quality infrastructure. Another thing is the affordability of this internet. I know that the price of internet is definitely driven by the forces of demand and supply, but we need to think critically about this and about how we use our universal service funds to really make connectivity affordable, especially for those that need it the most. So when you look at that critical infrastructure in terms of connectivity, you'll see that it's one of the backbones of digital transformation. On the back of that, I will add that governments need to digitize more of their government services. The reason for this is that the more we digitize government services and provide e-government services, such as digital birth certificates, digital IDs, or banking identity, the more you are generating data about your citizens. And this AI generation, this data-driven generation, depends on data. Now what's happening is that Africa generates the least data of all the continents. And the fact remains that if we have little data, then we have little representation. So it's no wonder that when you look at AI models and AI tools, they're not able to recognize some black faces or some languages. We still have a very long way to go on our local language models. I mean, Africa is a unique continent of diverse languages. But if we look at how many recordings we have in local languages alone, you will see that it's minute. Now, if we set that underlying infrastructure in place, the next thing the government would have to do is to build its human capital, because it's one thing to supply infrastructure and to build good government services that people can access, but if people don't have the skills, how would they use them? And not just use them, how would they contribute to this digital age? Because the aim is not just for Africa to be consumers, but to also be… Hello, are you still there? It appears that… But this is one percent, there's so much more that we could leverage, can you hear me? Yes, we can. You have to wind up, because we are really running short of time, and we had lost you a little bit there. You can go. No worries. 
So, on the supply side, I would say the government really needs to focus on providing critical infrastructure, both soft and hard: underlying connectivity and also the service layers that build on top of that to make citizens want to consume. On the demand side, I would say we need to build critical skills, and digital skills range from basic to intermediate and advanced, so that we know that at every sphere we are training people. Because digital literacy is no longer a luxury, it's actually right now a human right, because the future of the world will be those who are digitally literate or not, and those who are digitally connected or not. That's the future divide. In fact, I'm saying future, but it's actually the present divide. So, that's how I'll summarize my intervention.

Moderator: Thank you so much for your intervention. Please, a round of applause, and thank you very much for joining us. I will pass on to you. So, in just one minute, I'd like to say that one of the critical things we need to do across Africa, and this is very clear from our work in Rwanda and in Zambia, is what we call model adaptation, where you see models that are working in some parts of Africa that need to be introduced and adapted in other parts. And I'm talking about e-governance models. I mean, Rwanda has a fantastic model that is supported by GIZ. GIZ is actually working with Irembo in Rwanda, and that works. The question is, why can we not adapt that model in other countries? So governments need to be more open where you have those interventions, especially from development partners, as in the case of Rwanda. There's also what Zambia is doing with the Smart Zambia initiative that can be replicated in other parts of Africa. We don't always need to reinvent, to start from scratch. We don't need to reinvent the wheel across Africa; adapting what already works gives us a perfect solution. Thank you. Thank you very much. Please give my panelists a round of applause. Thank you very, very much. You all have been wonderful. Thank you for staying with us. And we apologize for the short time we had to conclude. At this point, allow me to invite Ambassador Salah again to the podium. Thank you.

Salah Siddiq Hamad: Thank you. Thank you very much. You can hear me? Fantastic. Well, we are coming to the end of this workshop. Before we depart this room, I think it's quite important that we recommit ourselves as Africans to this very important cause. We've been hearing that connectivity and the advancement of technology are no longer a luxury, but in fact a human right that we need to advance as such. And without advancing technology, we would not be able to build the Africa we want and the Africa we deserve: the Africa that is prosperous, united and peaceful. With that, I hope we will all go back and share the information that we gained and learned today within our networks. But of course, none of this happens without the support and the political commitment of our member states, represented here by the able Permanent Representative of the Federal Republic of Nigeria and, of course, also of Algeria, and all other member states that are with us here this afternoon. I hope your political support will always be there to advance these very important processes. Before we officially close the meeting with the announcement by the CEO, I would like to invite you all to come here to the stage for a group photo, and then the meeting will be declared officially closed by Her Excellency, the CEO of the APRM, Madam.

Nasir Aminu

Speech speed

104 words per minute

Speech length

575 words

Speech time

328 seconds

Digital technologies can enhance democratic engagement and electoral integrity

Explanation

Digital technologies have the potential to improve democratic processes and ensure the integrity of elections. This includes using technology for voter registration, election monitoring, and result transmission.

Evidence

Nigeria signed the African Union Convention on Cyber Security and Personal Data Protection (Malabo Convention) to enhance cyber security and safeguard personal data.

Major Discussion Point

The impact of digital technologies on governance in Africa

Need for a continent-wide data protection authority

Explanation

A pan-African data protection authority is necessary to provide unified oversight and enforce consistent data protection standards across the continent. This would strengthen trust in digital governance and safeguard the rights of African citizens in the digital age.

Evidence

The speaker calls on the APRM to advocate for the establishment of a continent-wide data protection authority.

Major Discussion Point

Legal and ethical frameworks for digital governance

Agreed with

Selma Bakhta Mansouri

Nomalanga Mashinini

Agreed on

Importance of legal frameworks and data protection

Salah Siddig Hammad

Speech speed

136 words per minute

Speech length

1372 words

Speech time

602 seconds

Technology alone is not sufficient; infrastructure and awareness are also needed

Explanation

While technology is important for advancing governance, it must be accompanied by proper infrastructure and public awareness. Without these elements, the benefits of technology cannot be fully realized.

Evidence

The speaker mentions the need for infrastructure like power systems and the importance of raising awareness among citizens about their rights and duties.

Major Discussion Point

Challenges and strategies for digital governance in Africa

Agreed with

Ismaila Ceesay

Uyuyo Edosio

Agreed on

Importance of digital infrastructure for governance

Need for political support and orientation to implement technological changes

Explanation

Implementing technological changes in governance requires strong political backing and proper orientation of the public. Without political will and public understanding, technological advancements may face resistance or misuse.

Evidence

The speaker emphasizes the importance of political support and the need for orientation to allow African citizens to understand and benefit from technological processes.

Major Discussion Point

Challenges and strategies for digital governance in Africa

Adeyinka Adeyemi

Speech speed

117 words per minute

Speech length

306 words

Speech time

156 seconds

Digital transformation reshapes governance by enabling data-driven decision making

Explanation

Digital transformation allows for the collection and analysis of data, which can inform and improve decision-making processes in governance. This leads to more efficient and effective governance practices.

Evidence

The speaker mentions data-driven systems and infrastructure that aid decision-making across Africa using digital means.

Major Discussion Point

The impact of digital technologies on governance in Africa

Ismaila Ceesay

Speech speed

112 words per minute

Speech length

486 words

Speech time

259 seconds

Digital tools offer potential for improved resource management in agriculture and natural resources

Explanation

Digital technologies can enhance the management of agricultural and natural resources in Africa. This includes using tools for monitoring, optimizing resource use, and adapting to climate change challenges.

Evidence

The speaker mentions mobile apps, GPS-based farming solutions, drone surveillance, Internet of Things sensors, and blockchains as technologies that can help manage resources more effectively.

Major Discussion Point

The impact of digital technologies on governance in Africa

Agreed with

Salah Siddiq Hamad

Uyuyo Edosio

Agreed on

Importance of digital infrastructure for governance

Importance of addressing the digital divide and investing in digital literacy

Explanation

To fully benefit from digital transformation, Africa must tackle the digital divide between urban and rural areas, genders, and generations. Investing in digital literacy is crucial for ensuring widespread adoption and use of digital technologies.

Evidence

The speaker mentions the need to bridge the digital divide and invest in digital literacy as key challenges to be addressed.

Major Discussion Point

Challenges and strategies for digital governance in Africa

Agreed with

Uyuyo Edosio

Agreed on

Need for digital skills development

Digitization of government services to improve efficiency and reduce corruption

Explanation

Implementing e-government services can streamline public administration, enhance efficiency, and reduce opportunities for corruption. This includes digital platforms for tax collection, business registration, and public service delivery.

Evidence

The speaker cites the example of the Gambia Revenue Authority introducing digital tax platforms to improve efficiency in government revenue management.

Major Discussion Point

Digital transformation for economic governance and resource management

Leveraging digital tools for financial inclusion and economic growth

Explanation

Digital technologies, particularly mobile money platforms, can promote financial inclusion by providing access to banking, saving, and lending services to previously excluded populations. This can drive economic growth and reduce poverty.

Evidence

The speaker mentions the proliferation of mobile money platforms across Africa, enabling millions of Africans to access financial services.

Major Discussion Point

Digital transformation for economic governance and resource management

Differed with

Christina Duarte

Differed on

Focus of digital transformation efforts

Jimena Sofía Viveros Álvarez

Speech speed

129 words per minute

Speech length

1272 words

Speech time

589 seconds

Necessity of a unified global governance approach for AI and emerging technologies

Explanation

A global approach to AI governance is crucial due to the transboundary nature of the technology. Regional or national initiatives alone are insufficient to address the challenges posed by AI and emerging technologies.

Evidence

The speaker cites a survey showing that 118 out of 193 UN countries are not part of any international AI governance initiatives, with over 50 African countries among them.

Major Discussion Point

Challenges and strategies for digital governance in Africa

Differed with

Kashifu Inuwa Abdullahi

Differed on

Approach to AI governance

Requirement for governance frameworks to address AI risks to peace and security

Explanation

Governance frameworks for AI must consider its dual-use nature and potential impacts on peace and security. This includes addressing the use of AI in both military and civilian domains, as well as by non-state actors.

Evidence

The speaker mentions the deployment of AI-powered weapons in conflicts and the disproportionate impact on the Global South, as well as efforts to create binding treaties on autonomous weapons.

Major Discussion Point

Legal and ethical frameworks for digital governance

Christina Duarte

Speech speed

119 words per minute

Speech length

1089 words

Speech time

546 seconds

Requirement for strong state institutions to control economic and financial tools

Explanation

Effective digital governance and economic sovereignty require robust state institutions. These institutions are necessary to manage the economy, financial flows, and deliver public goods effectively.

Evidence

The speaker discusses the historical context of weak state institutions in Africa and the need to strengthen them to control economic and financial tools in the digital age.

Major Discussion Point

Challenges and strategies for digital governance in Africa

Differed with

Ismaila Ceesay

Differed on

Focus of digital transformation efforts

Kashifu Inuwa Abdullahi

Speech speed

120 words per minute

Speech length

814 words

Speech time

405 seconds

Implementation of content moderation policies and fact-checking partnerships

Explanation

Social media platforms need to implement content moderation policies and partner with local fact-checkers to address misinformation and harmful content. This helps maintain the integrity of online discourse, especially during elections.

Evidence

The speaker mentions the development of a code of practice in Nigeria that requires social media platforms to respect local laws, register in the country, and engage local fact-checkers.

Major Discussion Point

Role of social media and tech companies in African governance

Differed with

Jimena Sofía Viveros Álvarez

Differed on

Approach to AI governance

Mercy Ndegwa

Speech speed

167 words per minute

Speech length

1311 words

Speech time

470 seconds

Importance of self-regulation and community standards for social media users

Explanation

Social media platforms should have clear community standards and guidelines for user behavior. Self-regulation by users, guided by these standards, is crucial for maintaining a safe and productive online environment.

Evidence

The speaker mentions Meta’s elaborate community standards and guidelines that inform users about allowed behavior on their platforms.

Major Discussion Point

Role of social media and tech companies in African governance

Nomalanga Mashinini

Speech speed

121 words per minute

Speech length

314 words

Speech time

155 seconds

Need for participatory and collaborative approaches between government and industry

Explanation

Effective digital governance requires collaboration between government and industry stakeholders. This approach ensures that policies and regulations are practical, effective, and aligned with technological realities.

Evidence

The speaker cites the example of South Africa, where the Association on Comms and Technology, comprising various ICT companies, collaborates with the government on policy and legislative interventions.

Major Discussion Point

Role of social media and tech companies in African governance

Need for legal frameworks to ensure responsible use of social media platforms

Explanation

Legal frameworks are necessary to promote safe, responsible, and ethical use of social media platforms in governance processes. These frameworks should balance freedom of expression with the need to prevent misuse and misinformation.

Major Discussion Point

Legal and ethical frameworks for digital governance

Agreed with

Nasir Aminu

Selma Bakhta Mansouri

Agreed on

Importance of legal frameworks and data protection

Selma Bakhta Mansouri

Speech speed

98 words per minute

Speech length

700 words

Speech time

426 seconds

Importance of cybersecurity and data protection for trust in digital governance

Explanation

Strong cybersecurity measures and data protection policies are essential for building trust in digital governance systems. These elements ensure the safety and privacy of citizens’ data, encouraging greater participation in digital governance initiatives.

Evidence

The speaker mentions the adoption of the African Digital Compact and the Continental Artificial Intelligence Strategy as examples of efforts to address cybersecurity and data protection.

Major Discussion Point

Legal and ethical frameworks for digital governance

Agreed with

Nasir Aminu

Nomalanga Mashinini

Agreed on

Importance of legal frameworks and data protection

Moderator

Speech speed

115 words per minute

Speech length

2590 words

Speech time

1350 seconds

Potential for social media to enhance democratic engagement during elections

Explanation

Social media platforms can play a significant role in facilitating democratic engagement during elections. They provide a space for political discourse, information sharing, and citizen participation in the electoral process.

Evidence

The moderator references the recent Nigerian elections, noting the high level of engagement and informed discussion by Nigerian citizens on social media platforms.

Major Discussion Point

Role of social media and tech companies in African governance

Uyuyo Edosio

Speech speed

163 words per minute

Speech length

714 words

Speech time

261 seconds

Importance of underlying infrastructure and connectivity for digital transformation

Explanation

Robust underlying infrastructure, particularly in terms of connectivity, is crucial for digital transformation. High-quality connectivity is necessary for African countries to fully participate in and benefit from the digital age.

Evidence

The speaker mentions that while 88% of Africa is covered by 2G or higher, there is still a need for stronger connectivity, such as fiber backhaul, to support more advanced digital services.

Major Discussion Point

Digital transformation for economic governance and resource management

Agreed with

Salah Siddig Hammad

Ismaila Ceesay

Agreed on

Importance of digital infrastructure for governance

Need for building digital skills across basic, intermediate, and advanced levels

Explanation

To fully leverage digital transformation, African countries must invest in building digital skills at all levels. This includes basic digital literacy as well as more advanced skills needed to contribute to and benefit from the digital economy.

Evidence

The speaker emphasizes that digital literacy is no longer a luxury but a human right, as the future (and present) will be divided between those who are digitally literate and connected and those who are not.

Major Discussion Point

Digital transformation for economic governance and resource management

Agreed with

Ismaila Ceesay

Agreed on

Need for digital skills development

Agreements

Agreement Points

Importance of digital infrastructure for governance

Salah Siddig Hammad

Ismaila Ceesay

Uyuyo Edosio

Technology alone is not sufficient; infrastructure and awareness are also needed

Digital tools offer potential for improved resource management in agriculture and natural resources

Importance of underlying infrastructure and connectivity for digital transformation

Multiple speakers emphasized the critical role of robust digital infrastructure in enabling effective digital governance and resource management in Africa.

Need for digital skills development

Ismaila Ceesay

Uyuyo Edosio

Importance of addressing the digital divide and investing in digital literacy

Need for building digital skills across basic, intermediate, and advanced levels

Speakers agreed on the importance of investing in digital literacy and skills development across all levels to ensure widespread adoption and effective use of digital technologies in governance.

Importance of legal frameworks and data protection

Nasir Aminu

Selma Bakhta Mansouri

Nomalanga Mashinini

Need for a continent-wide data protection authority

Importance of cybersecurity and data protection for trust in digital governance

Need for legal frameworks to ensure responsible use of social media platforms

Multiple speakers highlighted the need for robust legal frameworks and data protection measures to ensure trust, security, and responsible use of digital technologies in governance.

Similar Viewpoints

Both speakers emphasized the importance of content moderation and community standards on social media platforms to maintain the integrity of online discourse, especially during elections.

Kashifu Inuwa Abdullahi

Mercy Ndegwa

Implementation of content moderation policies and fact-checking partnerships

Importance of self-regulation and community standards for social media users

Both speakers advocated for collaborative approaches in developing governance frameworks for digital technologies, emphasizing the need for cooperation between government, industry, and international stakeholders.

Nomalanga Mashinini

Jimena Sofía Viveros Álvarez

Need for participatory and collaborative approaches between government and industry

Necessity of a unified global governance approach for AI and emerging technologies

Unexpected Consensus

Global approach to AI governance

Jimena Sofía Viveros Álvarez

Nasir Aminu

Necessity of a unified global governance approach for AI and emerging technologies

Need for a continent-wide data protection authority

While focusing on different scales (global vs. continental), both speakers unexpectedly agreed on the need for unified governance structures for emerging technologies and data protection, highlighting a shared recognition of the transboundary nature of digital challenges.

Overall Assessment

Summary

The main areas of agreement included the importance of digital infrastructure, the need for digital skills development, and the significance of legal frameworks and data protection in digital governance. There was also consensus on the role of content moderation and collaborative approaches in developing governance frameworks.

Consensus level

There was a moderate to high level of consensus among speakers on key issues related to digital governance in Africa. This consensus suggests a shared understanding of the challenges and opportunities presented by digital transformation in governance, which could facilitate more coordinated efforts in policy development and implementation across the continent.

Differences

Different Viewpoints

Approach to AI governance

Jimena Sofía Viveros Álvarez

Kashifu Inuwa Abdullahi

Necessity of a unified global governance approach for AI and emerging technologies

Implementation of content moderation policies and fact-checking partnerships

Jimena Sofía Viveros Álvarez advocates for a global approach to AI governance, while Kashifu Inuwa Abdullahi focuses on national-level content moderation policies and partnerships.

Focus of digital transformation efforts

Ismaila Ceesay

Christina Duarte

Leveraging digital tools for financial inclusion and economic growth

Requirement for strong state institutions to control economic and financial tools

Ismaila Ceesay emphasizes the use of digital tools for financial inclusion, while Christina Duarte stresses the need for strong state institutions to control economic tools.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to AI governance, the focus of digital transformation efforts, and the methods for regulating social media platforms.

Difference level

The level of disagreement among the speakers is moderate. While there are differences in approaches and focus areas, there is a general consensus on the importance of digital transformation and the need for governance frameworks. These differences reflect the complexity of implementing digital governance in Africa and highlight the need for a multifaceted approach that considers various perspectives and local contexts.

Partial Agreements

All speakers agree on the need for regulation of social media platforms, but differ in their approaches. Kashifu Inuwa Abdullahi focuses on content moderation and fact-checking, Mercy Ndegwa emphasizes self-regulation and community standards, while Nomalanga Mashinini advocates for collaborative approaches between government and industry.

Kashifu Inuwa Abdullahi

Mercy Ndegwa

Nomalanga Mashinini

Implementation of content moderation policies and fact-checking partnerships

Importance of self-regulation and community standards for social media users

Need for participatory and collaborative approaches between government and industry

Takeaways

Key Takeaways

Digital technologies offer significant potential to enhance governance, democratic engagement, and resource management in Africa

Challenges remain around digital infrastructure, literacy, and the digital divide that need to be addressed

A multi-stakeholder approach involving governments, tech companies, civil society and international partners is needed for effective digital governance

Legal and ethical frameworks, including data protection and cybersecurity measures, are crucial for building trust in digital governance

Digital transformation can improve economic governance through e-government services, financial inclusion, and data-driven decision making

Resolutions and Action Items

APRM to advocate for establishment of a continent-wide data protection authority

African governments to invest in digital infrastructure and literacy

Tech companies to implement content moderation policies and fact-checking partnerships

African nations to adopt and implement the African Digital Compact and Continental AI Strategy

Unresolved Issues

How to effectively bridge the digital divide between urban and rural areas

Balancing innovation with regulation of emerging technologies like AI

Addressing potential misuse of digital tools for misinformation or election interference

Ensuring equitable access to digital resources and skills across demographics

Suggested Compromises

Balancing government regulation with industry self-regulation of digital platforms

Adapting successful e-governance models from some African countries to others

Leveraging public-private partnerships to accelerate digital infrastructure development

Thought Provoking Comments

Africa, as we all know, has been going through a lot of challenges, even before independence, slavery, colonialism, and now civil wars in many African countries, apartheid in the Southern African region, and you name it, so many challenges. So therefore, the advancement of good governance is absolutely one of the ultimate objectives that we need to reach before we proceed with the implementation of Agenda 2063, the blueprint for Africa, for building the Africa we want and the Africa we deserve.

speaker

Salah Siddig Hammad

reason

This comment provides important historical context for the governance challenges facing Africa and frames the discussion in terms of long-term development goals.

impact

It set the tone for the discussion by emphasizing the importance of good governance as a prerequisite for Africa’s development agenda. This framing influenced subsequent speakers to consider both historical challenges and future aspirations.

The colonial state that we inherited had essentially two primary functions. First, enforcing the rule of law to maintain colonial order, normal, logical. Second, resource extraction to serve, as I said, the economic interests of these external powers. So this extractive and minimalist model of governance lacks mechanisms for inclusive economic growth, social equity, and long-term development.

speaker

Christina Duarte

reason

This comment provides a critical analysis of the historical roots of governance challenges in Africa, highlighting how colonial structures were not designed for inclusive development.

impact

It deepened the conversation by encouraging participants to consider how historical legacies continue to impact governance in Africa. This historical perspective informed subsequent discussions on the need for transformative approaches to governance.

We need governance that is resilient, that is techno-neutral in order for it to be adaptive to the evolution of the technology itself which is extremely fast-paced and that needs to be generalistic because we cannot separate, it’s dual use by nature, right?

speaker

Jimena Sofía Viveros Álvarez

reason

This comment introduces the important concept of techno-neutral governance, highlighting the need for adaptable and flexible governance frameworks in the face of rapidly evolving technology.

impact

It shifted the discussion towards considering more holistic and forward-looking approaches to governance that can accommodate technological change. This perspective influenced subsequent comments on the need for strategic and adaptive governance models.

Digital literacy is no longer a luxury, it’s actually right now a human right, because the future of the world will be those who are digitally literate or not, and those who are digitally connected or not.

speaker

Uyuyo Edosio

reason

This comment reframes digital literacy as a human right, emphasizing its critical importance in the modern world.

impact

It elevated the urgency of addressing digital literacy and connectivity, influencing the discussion to consider these as fundamental rights rather than optional benefits. This perspective shaped subsequent comments on the need for inclusive digital transformation strategies.

Overall Assessment

These key comments shaped the discussion by providing historical context, highlighting the need for transformative and adaptive governance models, and emphasizing the critical importance of digital literacy and connectivity. They encouraged participants to consider both the historical challenges and future aspirations of African governance, while also emphasizing the urgent need for inclusive and forward-looking digital transformation strategies. The discussion evolved from a focus on past challenges to a more strategic consideration of how to leverage technology for inclusive governance and development in Africa.

Follow-up Questions

How to establish a continent-wide data protection authority for Africa

speaker

Nasir Aminu

explanation

This would provide unified oversight, enforce consistent data protection standards, and ensure ethical use of technology across the continent

How to address the digital divide between urban and rural areas, males and females, and across generations in Africa

speaker

Minister of Information of the Republic of Gambia

explanation

Bridging these divides is crucial for ensuring digital transformation benefits all African citizens

How to increase Africa’s data generation and representation in AI models

speaker

Dr. Uyuyo Edosio

explanation

Africa generates the least data globally, leading to underrepresentation in AI models and tools

How to develop and implement local language models for African languages

speaker

Dr. Uyuyo Edosio

explanation

There is currently very little data and few models for Africa’s diverse languages

How to adapt and replicate successful e-governance models across different African countries

speaker

Adeyinka Adeyemi

explanation

Successful models like Rwanda’s Irembo and Zambia’s Smart Zambia initiative could be adapted for use in other African countries

How to ensure political support and commitment from African member states for advancing technological processes

speaker

Salah Siddig Hammad

explanation

Political support is crucial for implementing and advancing technological initiatives across Africa

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #64 Designing Digital Future for Cyber Peace & Global Prosperity

WS #64 Designing Digital Future for Cyber Peace & Global Prosperity

Session at a Glance

Summary

This discussion focused on strategies for achieving cyber peace and building a resilient digital future in an increasingly interconnected world. Experts from various sectors explored the challenges of cybersecurity and potential solutions. Key points included the need for multi-stakeholder collaboration, the importance of digital literacy and awareness, and the role of emerging technologies like AI in both creating and mitigating cyber threats.


Participants emphasized the critical need for trust-building among nations and stakeholders to facilitate information sharing and collaborative defense against cyber attacks. The discussion highlighted the importance of bridging the digital divide to ensure all nations, regardless of economic status, are adequately protected in cyberspace. Experts also stressed the need for ethical considerations in cybersecurity policies, particularly regarding the balance between security and privacy.


The role of the private sector in combating cybercrime was discussed, with examples of successful collaborations between industry and law enforcement agencies. Participants also addressed the challenges of holding actors accountable for cyber warfare under international law. The discussion touched on the potential of AI and other emerging technologies to both enhance cybersecurity and create new vulnerabilities.


Overall, the panel emphasized the urgent need for action beyond dialogue, calling for concrete steps to stem the rising tide of cybercrime and build a more secure digital ecosystem. The discussion concluded with a proposed framework for cyber peace, emphasizing collaboration, education, and resilience in the face of evolving cyber threats.


Keypoints

Major discussion points:


– The importance of multi-stakeholder collaboration and trust-building to address cybersecurity challenges


– The need for greater digital literacy, awareness, and capacity building globally


– The role of emerging technologies like AI in both creating new cyber threats and potentially mitigating risks


– Balancing innovation, security, and privacy concerns in cybersecurity policies and regulations


– Addressing the digital divide and ensuring inclusive cybersecurity frameworks that protect all nations


The overall purpose of the discussion was to explore strategies for fostering global cyber peace and building a more secure, resilient digital future for all. The panelists aimed to identify key challenges and propose solutions for enhancing international cooperation on cybersecurity issues.


The tone of the discussion was largely constructive and solution-oriented, with panelists offering insights from their diverse backgrounds in government, industry, academia, and civil society. There was a sense of urgency in addressing growing cyber threats, but also optimism about the potential for collaborative approaches. Towards the end, the tone became slightly more pessimistic when discussing the difficulties of holding actors accountable for cyberattacks, but overall remained focused on finding ways forward.


Speakers

– Subi Chaturvedi: Moderator, Global SVP, Chief Corporate Affairs and Public Policy Officer of InMobi


– Kumar Vineet: Founder and CEO of Cyber Peace


– Major General Pawan Anand: Director of the Center for Atmanirbhar Bharat, United Services Institution of India


– Melodina: Professor of Innovation Management at the Mohammed Bin Rashid School of Government in Dubai


– Genie Sugene Gan: Head of Government Affairs and Public Policy, APJ and META regions, for Kaspersky


– Sanjeev Relia: Chief Strategy Officer for Athenian Tech Limited and consultant for Center for Humanitarian Dialogue


Additional speakers:


– Jonah Klivnovic: Managing Partner and CEO of IT and Risk Management


Full session report

Expanded Summary: Strategies for Achieving Cyber Peace and Building a Resilient Digital Future


This discussion brought together experts from various sectors to explore strategies for achieving cyber peace and building a resilient digital future in an increasingly interconnected world. The panel, moderated by Subi Chaturvedi, focused on the challenges of cybersecurity and potential solutions, emphasising the need for multi-stakeholder collaboration, digital literacy, and the role of emerging technologies.


Key Themes and Discussion Points:


1. Multi-stakeholder Collaboration and Trust-building


A central theme throughout the discussion was the critical importance of collaboration between various stakeholders to address cybersecurity challenges effectively. Kumar Vineet emphasised that “governments cannot do it alone” and called for a platform where “industry, academia, civil society, government, netizens, all of us need to come” together. This sentiment was echoed by other speakers, including Major General Pawan Anand and Genie Sugene Gan, who stressed the importance of building trust and awareness among stakeholders and promoting public-private partnerships.


Genie Sugene Gan highlighted the No More Ransom initiative as a successful example of multilateral cooperation. This project, involving law enforcement agencies, cybersecurity companies, and other partners, has helped decrypt devices affected by ransomware and saved victims millions in potential ransom payments.


Sanjeev Relia highlighted the need for establishing communication channels between nations for information exchange, particularly regarding cybercrime, cyberattacks, and zero-day vulnerabilities. He also emphasized the growing threat of cyber espionage, underscoring the importance of international collaboration in addressing these challenges.


2. Digital Literacy, Capacity Building, and Awareness


The panel agreed on the urgent need for greater digital literacy, awareness, and capacity building globally. Kumar Vineet addressed the challenges of language barriers and accessibility issues in cybersecurity awareness, emphasizing the need for interactive and engaging formats to capture people’s attention. Jonah Klivnovic noted that people typically only dedicate about two minutes per year to cybersecurity awareness, highlighting the challenge of effective education.


Vineet provided examples of evolving cybercrime patterns, such as the Jamtara district in India becoming a hub for phishing scams and the rise of Cambodia-based scams, illustrating the need for adaptive awareness programs.


Sanjeev Relia emphasised the importance of enhancing capacity building efforts, especially for developing nations. The discussion highlighted the persistent digital divide between technologically advanced and developing countries, with both Melodina and Sanjeev Relia acknowledging this as a significant obstacle to achieving global cyber peace.


3. Emerging Technologies: Opportunities and Challenges


The role of emerging technologies, particularly artificial intelligence (AI), in both creating new cyber threats and potentially mitigating risks was a key point of discussion. Jonah Klivnovic provided a balanced perspective, noting that AI is “changing the fabric of society” and increasing the velocity and reducing the costs of cyberattacks. However, he also highlighted its potential as a tool for cybersecurity practitioners.


Melodina raised ethical concerns about the securitisation and militarisation of AI, emphasising the need for human-centred approaches and the protection of human rights. She cited the “Lavender Project” as an example of ethical concerns in AI-driven security decisions, stating, “If our mandate is being human-centred, that it is at the end of the day about people and all lives have value… then we have a big challenge on how we’re doing this.”


4. Balancing Security, Privacy, and Innovation


The discussion touched upon the delicate balance between security needs, privacy concerns, and innovation in cybersecurity policies and regulations. Melodina highlighted the ethical challenges of collecting information on individuals for cybersecurity purposes while respecting privacy rights. The rapid pace of technological change outpacing policy frameworks was identified as a significant challenge, with speakers calling for more cohesive global governance frameworks for cybersecurity.


5. International Collaboration and Information Sharing


Enhancing threat intelligence sharing between nations and industries emerged as a crucial strategy for combating cybercrime. Sanjeev Relia stressed the need for information exchange on various cyber threats and attacks. Melodina advocated for developing global standards and alignment on cybersecurity practices, emphasizing the need for political will to achieve this alignment. The audience raised concerns about the lack of accountability in cyberspace.


Unresolved Issues and Future Directions:


Despite the constructive discussion, several issues remained unresolved, including:


1. Ensuring accountability in borderless cyberspace


2. Effectively bridging the digital divide between nations


3. Balancing innovation with regulation in a rapidly evolving technological landscape


4. Combating sophisticated cyber attacks and cyber warfare


5. Creating truly inclusive governance frameworks


The panel proposed several action items and potential compromises, including:


1. Developing more cohesive global governance frameworks for cybersecurity


2. Enhancing threat intelligence sharing between nations and industries


3. Creating cyber rehabilitation programmes for both survivors and criminals, as suggested by Vineet


4. Establishing communication channels between nations for information exchange


5. Focusing on capacity building efforts, especially for developing nations


6. Balancing security needs with privacy concerns through ethical AI frameworks


7. Combining top-down policy approaches with bottom-up education and awareness efforts


Conclusion:


The discussion emphasized the urgent need for action beyond dialogue, as highlighted by Jonah Klivnovic, calling for concrete steps to stem the rising tide of cybercrime and build a more secure digital ecosystem. Dr. Subi Chaturvedi proposed the “SECURED” framework, which encapsulates key points from the panel:


S – Strengthening global governance


E – Empowering communities through digital inclusion


C – Championing ethical technology development


U – Uniting stakeholders for collaborative innovation


R – Resilience in cyber ecosystems


E – Education and awareness


D – Digital transformation for sustainable growth


This framework, along with the upcoming Internet Governance Forum (IGF) in Norway mentioned by Subi Chaturvedi, provides a roadmap for future efforts in achieving cyber peace. The proposed approach emphasizes collaboration, education, and resilience in the face of evolving cyber threats, with a strong focus on multi-stakeholder engagement and international cooperation.


Session Transcript

Kumar Vineet: towards cyber peace and ensuring a resilient digital future for all. As we stand at the crossroads of technological innovation and societal transformation, we are reminded of the immense power and potential that cyberspace holds, not just for a few, but for every citizen, organization, and nation across the interconnected world. Today we gather not just to discuss, but to catalyze action towards a future where digital prosperity is linked with global peace and security. We are joined by our esteemed panel of experts who bring with them a wealth of knowledge, from groundbreaking advancements in technology security to innovative educational frameworks that aim to empower and protect communities at every level. As we navigate through our discussions, I encourage each one of you to think beyond the immediate. We are here to challenge the status quo, to question, and to create a unique opportunity to mold a safer digital landscape. Let us explore bold and transformative ideas and address the root causes of cyber vulnerabilities, bridge the digital divide, and foster an environment where individuals, organizations, and even nation-states can thrive in safety and dignity in cyberspace. Together, let's set the stage for a dialogue that is as profound as it is impactful, ensuring that our collective journey towards cyber peace transcends boundaries and sets new benchmarks for global cooperation. With that, let me introduce our online moderator. I can see Dr. Subhi has joined. Dr. Subhi Chaturvedi. She has done her PhD from IIT Delhi. She currently is the Global SVP, Chief Corporate Affairs and Public Policy Officer of InMobi, India's first unicorn and a global tech MNC across 26 countries. She is a distinguished professor at IIM Jammu, a distinguished professor at the University of Delhi, and a professor of practice at JHU Delhi. She has been a governing board member of the Indian National Science Academy. She also served as chair of Working Group 7, Inclusive Growth, Entrepreneurship, Startups, and MSMEs, of the US-India CEOs Forum. So, a long introduction, Subi, and plenty of achievements. But one thing that I'd like to call out to everyone here in the room is that she was also a former MAG member of the Internet Governance Forum. And she was appointed by the UN Secretary-General. So welcome, Subi. Now, over to you.


Subi Chaturvedi: Thank you, Vineet. It's such a pleasure being back in this room. As you know, I'm an old time MAG member. It is always a joy coming back. And what better session than to look at how we can design cyber peace for a more inclusive world. I'm so glad that you could make it in person, despite the fact that you are unwell. So 100 marks and an A plus for all effort. I think we have a fantastic panel today. And we have a wonderful audience that's joined us online as well. So without further ado, when we are looking at framing the larger questions, I don't think there's any debate about cyber warfare not being a distant threat anymore. It is actually present, and it's a reality, with 13 attacks per second happening on critical infrastructure in 2023 alone. And there's an anticipated surge in cyber crime. The costs are approximately $9.5 trillion by 2024 itself. The urgency for action, of the world coming together with multiple stakeholders, has never been clearer. So I'm truly delighted that we have a true, true multi-stakeholder panel today in every sense of the word. I think the deliberate design of digital technologies takes on heightened significance, offering both opportunities for promoting cyber peace and addressing emerging challenges in the realm of cyber conflicts. Today in this session, where we have about an hour, what we are going to do is first turn to each of our panelists for three minutes each for a first, more detailed intervention. And then we'll come back for round two so that we have at least 10 minutes for questions. The second round will be a two minute intervention by all the panelists where we pose questions so that we can look at actual solutions and not just bring more questions into the room. The panel today will explore the intricate interplay between technological innovation, cyber threats and the pursuit of peace in the online world. And this session will highlight the critical importance of preserving principles such as openness, interoperability, user centricity and the championing of human rights. These are all core values of the internet and we are ever so grateful to the two fathers of the internet. I think understanding cyber conflicts and threats, navigating ethical considerations around AI (probably the most used word in this edition's IGF) and cyber warfare, fostering collaboration for cyber peace, innovating towards cyber resilience and policy implications for cyber peace are major topics that this panel hopes to cover. Some of the key policy questions that our panelists will address include the role of policymakers and how they can balance innovation and security. We have equal gender representation. We are so happy that this is not a manel, and we have equal representation from industry and all other stakeholder groups as well, including intergovernmental members. We are also going to be exploring in this panel the policy implications of emerging cyber threats and global cybersecurity efforts, and what are the peace building initiatives that we can look at. And we are so happy that the next IGF will take place in Norway, which is known for its ability to broker consensus and build peace. So how can international norms and agreements shape governance of cyber peace? What are the strategies that can be employed to foster collaboration among stakeholders, and how can we together enhance cyber defense capabilities and mitigate the risk of cyber warfare? And lastly, but very, very important, how can regulatory frameworks adapt to the rapid pace of technological innovation while upholding principles of human rights? And now for the cherry on the cake, the fantastic panelists that we have brought together in this room include Major General Pawan Anand, he's the Director of the Center for Atmanirbhar Bharat, which stands for a self-reliant India. He leads the United Services Institution of India, and is also the co-champion of the Joint Cyber Peace Center for Building Peace along with USI. Major Vineet Kumar, who you just heard from, my co-organizer and co-moderator for today, he's the founder and CEO of Cyber Peace. And Colonel Sanjeev Relia, he's the Chief Strategy Officer for Athenian Tech Limited and consultant for the Center for Humanitarian Dialogue. We are so delighted to have Melodina Stephens, she's a Professor of Innovation Management at the Mohammed Bin Rashid School of Government in Dubai. We have Jonah Klivnovic, Managing Partner and CEO of IT and Risk Management. And last, but certainly not the least, we are so happy to have Genie Sugene Gan, Head of Government Affairs and Public Policy, APJ and META regions, for Kaspersky. And some of the considerations, before we get into the meat of the matter: the first malicious cyber attack actually came to us in 1988 with the Morris worm, designed by a Cornell University graduate student. It was intended only as an experiment to measure the size of the internet, but it inadvertently caused significant disruption. It infected approximately 6,000 computers, 10% of the internet at the time, causing systems to slow down or crash. This is but a grim reminder of how unintended consequences can turn into national-level catastrophes and disasters. In 2024, we've had several attacks per second. The average cost of a data breach in 2024 is about 4.88 million US dollars, a 10 percent increase over the last year and the highest total ever. Forty percent of all data breaches have involved data stored across multiple environments, and that brings in the question of how global governments should engage with industry, which obviously asks for a balance between innovation and security, when the global cost of cybercrime is projected to reach over $10 trillion in 2024, a 15 percent increase. Lastly, it is human error which has remained a significant factor. How do governments and industries also engage with citizens and consumers? Because they've contributed to 88 percent of cybersecurity breaches. With that, I come to our illustrious panelists for the day. Major General Pawan Anand, if I can please start with you, and we can look at a perspective which you can share with us around resilience against cyber threats. So what would be some of the strategies that, according to you, nations and corporations can implement to build a resilient internet? How can they build resilience against increasingly sophisticated cyber threats, while promoting an open and secure global digital economy? The floor is yours. Can I please request the host to also make him the co-host? I think he's trying to unmute himself.


Major General Pawan Anand: Can you hear me, Subhi?


Subi Chaturvedi: Yes, I can hear you. You’ve been unmuted. We can hear you loud and clear. Over to you, Sam.


Major General Pawan Anand: All right. I don't seem to be able to enable… start video. So maybe I'll just stick with the voice. Thank you so much, Subi, for such a brilliant introduction. Always expected of you. So it's an honor to be on this panel with you. It appears to me that, you know, with such a brilliant set of people that you've already got, there's very little that a person like me would be able to say that others would probably not be saying. But let me start with saying that the main issue that we probably are looking at when we talk about cyber peace is building up an environment of trust. And that environment can only be built up when agencies and people come together to discuss and outline the various guidelines for each other. I think a lot has already been worked out in terms of the internet, but we would be failing in our duties if we did not take it forward. And I think the IGF meet really helps in taking this forward. As far as government strategies are concerned, I think the first thing that most governments would be keen to do is to build up a strategy, build up a national strategy for cybersecurity. More important than building that up is also to bring the digital public infrastructure in place. And I think India leads in that. India has also been offering the DPI to various countries of the global south. And hopefully, after we have more countries coming into the digital world and increasing their digital penetration as much as there has been in India, I think the next stage would be to ensure that everybody is safe. So the first strategy is to bring in digital penetration. And the second would be to ensure that it is absolutely safe. How do you then build up this confidence to keep the trust in the digital world? And the most important in all that would be spreading awareness. When knowledge is spread, when people are aware of the threats that exist, when people know how much cyber hygiene has to be maintained, I think we will be able to achieve our strategies correctly. So I pause over here and let you go on with the others. And maybe when there's something more specific, I'll definitely intervene. Thank you, Subi.


Subi Chaturvedi: Thank you. Thank you so much, Major General Pawan Anand. Great remarks as always. And we’ll come back to you for round two. I now turn to Melodina. And we want your perspective. You have a profound academic journey. It’s such an impressive CV. It was such a pleasure going through it. We would love to hear your perspective on emerging technologies. How do emerging technologies like quantum computing and AI influence the current cyber landscape? What are some of the potential risks and benefits that you are seeing of these technologies in the context of building global cyber peace? And how do you think we can mitigate risk? Please, the floor is yours, Melodina.


Melodina: Thank you so much. So first, I would like to start with what do we mean by cyber peace, right? So I think it is peace in the cyber atmosphere. Or do we also mean it's cyber spilling over to physical? And I think it is both of that. If I think of peace as the absence of war, then what are we having a war with? And I think it's very clear. It's share of heart, share of mind, share of resources. If I think of share of mind, we are already seeing misinformation, disinformation campaigns. And this is profound. Part of it comes from ignorance. We have people who don't understand, maybe don't have the knowledge of general affairs, or even cyber security. So even before I think of cyber, I think general knowledge, there's a missing piece, an education piece that's there. And this makes them a lot more susceptible sometimes to these outside influences. When I think of share of heart, I'm thinking of soft power in some ways. And we do see right now with algorithms this is easy to do. There is an interesting case in Romania right now: the elections have been postponed simply because the candidate got a huge amount of share of likes, for example, on social media. And then they found out this was bot introduced, right? So this is really a challenging issue. And if I think of the last one, share of resources, hardware for cyber is extremely expensive. We're not talking small amounts of money. So this comes along with funding, and funding has strings globally. So this is also something we must talk about. There is a talent shortage. This is something we have to talk about. There is a resource shortage. For example, with the war that took place in Europe, I believe a significant amount of the argon production, which is needed for semiconductors, was wiped out. So we are looking at these kinds of narratives. If we want cyber peace, we have to address all three. Now your question was on quantum computing. This is a challenging one, because the speed at which we compute will increase. So if I take, for example, Google Willow, which was just introduced a few weeks ago, it can do in 10 minutes what a supercomputer would take 10 raised to the power of 25 years to do. There is no way human beings can match this speed. But this also raises challenges, because it can crack codes. Blockchain is no longer secure. So how do we manage in this environment where we're looking at the speed of, I don't know, light, and human beings don't have the capacity? So the question is, do we need the technology for the sake of the technology, or is the technology actually beneficial to people? And I'm not sure we've answered that question. So I'm just gonna stop over there.


Subi Chaturvedi: Thank you. I think it was fantastically articulated, really, really well put, and some pertinent questions brought to the floor. Thank you for sharing with us. I think, Vineet, now it’s time to bring you in. I believe civil society has been at the forefront of pushing for user interest, asking industry as well as governments for greater accountability, ensuring security first practices. So what role do you see organizations like CyberPeace, which is working at the grassroots, which many a times act as a bridge builder, how do you see CyberPeace playing a definitive role at a local level in the global scheme of things? When it comes to establishing peace and brokering peace processes? Over to you, Vineet.


Kumar Vineet: Thanks, Subi, great question. And let me share that CyberPeace, like you rightly mentioned, we are a grassroots NGO and a policy think tank. We are the voice of the people on the ground. The kind of issues and challenges that the netizens face, whether it is child sexual abuse material, ransomware, or the issues that startups, MSMEs, and all different sectors like the critical infrastructure face. So we work closely with them and we kind of identify the issues and challenges and take it to the policy maker. One thing that I keep mentioning everywhere about cyber security and peace: in fact, there's a very interesting paper that we at CyberPeace have written. It's called Crowdsourcing CyberPeace and CyberSecurity. Because we generally believe that governments cannot do it alone. When we talk about peace and responsible online behavior, when we talk about security, all of us need to come to a common ground. All of us need to come on a platform. Industry, academia, civil society, government, netizens, all of us need to come. And that's where CyberPeace, as a nonprofit, as a grassroots NGO, we are kind of working out a model. We are working, in fact, to create a platform where all these individuals can be brought in. Connecting the unconnected people and people at the grassroots, those who do not have the reach, basically, to get their views shared or get their voices to a forum like the IGF or any such major forum. We act as a bridge. We act as a kind of a platform where these voices come. And then we share the voices with policymakers, with industry, academia, civil society, and government. So that's what we are doing. And I generally believe crowdsourcing is the way forward. One of the things that we have tried doing, I mean, the way we try to achieve peace, is by creating people as first responders, making sure that they can counter the emerging crimes, the emerging deepfakes, or AI-generated misinformation that's being distributed in the community. They need to be well aware about it. They need to be aware of how to address the kind of challenges and how to report challenges that are coming up. We are trying to create these first responders across the country in India. In fact, in the next two years, we are going to create around 8 million first responders. And with the help of the Commonwealth Secretariat, we are just trying to work out a program on how these first responders, the Cyber Peace Corps, and the network could be spread in Commonwealth countries and later to other sets of countries. So what I believe is we need to come together. With that, I pass it over to you, Subi, again. Thank you.


Subi Chaturvedi: Thank you, Vineet. It's such a delight. At this point, I want to really call out the IGF Secretariat and the remote hosts who are making participation possible. When we started as part of the MAG, this used to be one of the asks: to connect the unconnected and also give people voice and agency. So it is a great beginning. With that, Jonah, I come to you. You fight the good fight. You've been heading risk. Can you elaborate for us on the importance of threat intelligence collaboration and sharing between nations and industries? How can trust be built amongst various stakeholders to share sensitive information more openly, more promptly, and learn from these models? Over to you, Jonah.


Jonah Klivnovic: Thank you, Dr. Subi, for the great question. And it's a difficult one and one that I've been kind of grappling with my entire professional career. I think there's a big difference, and I think we should start off with that, between intelligence and information. So information is just factual, while intelligence is contextualized information that is timely, actionable, and relevant. And this is where I get to the operationalization aspect of it. As a former practitioner in a large-scale bank and also as a member of an intelligence-sharing community that was EC3 under Europol, I can tell you that, in principle, intelligence sharing works really well. And it's something we definitely should do. But the actual operationalization of it is not well executed in a lot of instances. And what I mean by this is how you build trust between the public and the private sector: the public sector, which has enforcement capability, needs to showcase how the intelligence that was provided by the private sector has actually led to enforcement capabilities being enacted. Because it's tremendously taxing on the private sector. And I know, I was running one of these teams, and you're very short-staffed, and you don't have the talent. And then you have reporting requirements. Every quarter, you need to produce new data, and you need to submit it, and it goes into the ether. And then nothing is heard, and you don't know how this actually influenced the mitigation of the capabilities of wider threat actors. So I think it's enhancing the dialogue to really focus on how we are actually dealing with suppressing these threat actors. And there's a lot of discussion in terms of how shared information and intelligence can contribute to fighting digital threats, and especially cybercrime. And this is true. But also, then, there's a regulatory aspect to it. Because there's a lot of regulatory friction in terms of sharing data, even of fraudsters. And we've encountered this in Europe, where I actually couldn't share the details of a Belgian entity with the details of a French entity due to GDPR considerations. And these were details of fraudsters. And there needs to be some regulatory streamlining, I think, on a global level, where a lot of consideration needs to go into understanding that while the private sector does bear a big brunt of responsibility, its resources are not infinite. And there needs to be kind of a consideration that there cannot be seven or eight reporting protocols that need to be done in a quarter. That there needs to be a streamlining of information flow and an enhanced view on operationalization of what is being done. So I think those are the core key factors. And that is how trust builds.


Subi Chaturvedi: Thank you so much, Jonah, for your passion and the fact that you're still at it. We've all looked at the EU as a great source of inspiration, where you can look at brokering consensus with industry, where industry looks at more creative ways of coming back to regulation, saying what's practical, what's feasible, what's not going to throw the baby out with the bathwater. I'm so glad that we have people like you who are still in the room, who are still at it. Jeannie, with that, I come to you. Kaspersky is no novice to protecting ecosystems, protecting computers, protecting equipment. What role do you believe the industry can play in global cyberspace? And we're having this discussion at the IGF. There is no better platform for a true example of multi-stakeholder interaction. Can you give us examples where interventions by the industry have led to significant advancements in cyber peace? Over to you, Jeannie.


Genie Sugene Gan: Right. Thank you. Thank you for that. Thank you for that question. Yes. Well, I think the private sector has a big… Yes. The speakers who have gone before me have already covered quite a fair bit of the other aspects of the topic. And I really want to focus on something which I think they haven't yet touched on when it comes to cyber crime. And I want to dwell on cyber crime prevention


Subi Chaturvedi: Jeannie, we may want to go audio-only with you so that we might be able to hear you better. Go ahead, go ahead Jeannie. While I think Jeannie reconnects, Colonel Sanjeev Relia… okay, we have Jeannie back with us again. Jeannie, do you want to try without video?


Genie Sugene Gan: Yes.


Subi Chaturvedi: Go ahead. Go ahead. Go ahead. Okay, let me come to you, Sanjeev, we will come back to Jeannie when she has better connectivity. Sanjeev, you’ve got a background in both the Signal Corps, the Indian Army, you spent over two decades, and then you’ve also looked at industry in terms of selling cybersecurity solutions, designing them. And currently, you’re the India representative for the HD Cyber, which is looking at humanitarian dialogues around conflict resolution, peace building. Could you speak to the question of how important is multi-stakeholderism? How is it that, you know, all of us can foster more international dialogues, where we can push a common understanding, where we can look at track two diplomacy engagements, you’ve been fostering some of those, how can we establish more norms, standards, and just look for common interests between nations? Over to you, Sanjeev.


Sanjeev Relia: Well, thank you so much, Subhi, for inviting me. And before I come to the question, I would like to briefly speak on two aspects. One is that when we talk of the critical infrastructure, we only speak of the attacks that are happening, which are primarily aimed towards sabotage. We often forget to talk about the cyber espionage, which itself is a big part of, you know, the threat to the CIA triad. So we must keep in mind that it is not just an attack which is to bring down the critical infrastructure, but there is a huge amount of espionage that is also happening against nation states, against critical infrastructure of nation states. Two, cyber security itself is changing its form. From what used to be a very, very preventive kind of environment, today with the use of artificial intelligence and machine learning, we live in a world where cyber security means not just prevention, but also deception and detection and response. So the entire concept of cyber security is changing. Now, coming back to your question, how can we foster more multilateral and more multi-stakeholder dialogue? Yes, I'm presently involved in a dialogue like this, where we are trying to negotiate cyber peace and build a cyber ecosystem between India and one of our neighbors through dialogue. Right now, it is at track two level. Now, of course, the United Nations is the apex body in this, and it is putting in a lot of effort. But I personally feel that just the effort of the United Nations may not be adequate because we are a big world. We have a large number of users of the cyberspace. And the world economy today depends greatly on the cyberspace. So we need to have much more involvement, way beyond what the UN can do, actually. And there come in maybe bilateral dialogues, maybe multilateral dialogues. And, of course, the best solution is a multi-stakeholder solution, where every stakeholder who has a say in the cyberspace, who's a user, a major user of cyberspace, can be brought in to bring out their issues, to bring out the problems that they face, and then a solution needs to be worked out, especially when nation-states today are involved or non-state actors on behalf of nation-states are involved in carrying out these attacks. So multi-stakeholder is perhaps the only solution which will be able to foster a good cyber ecosystem.


Subi Chaturvedi: Thank you, Sanjeev, very well put. I'm so glad we have more advocates in the room for multi-stakeholderism. Can't emphasize the importance of people-to-people connect enough. We've got Jeannie back in the room. Jeannie, please feel free to unmute yourself. We'd love to hear your perspective.


Genie Sugene Gan: Yes, yes, yes, yes. Sorry about that as I'm on the road to the airport, actually. So I'm trying to do this as well, being on the road. Well, I was actually going to talk a little bit about giving some examples of how the private sector really has a part to play in this entire topic that we're discussing today, because I think we're talking about, and also I think Sanjeev was talking about, the multi-stakeholder cooperation and approach to how we solve these issues. So I think I want to give some examples of how we have been doing this at Kaspersky. So I think from a context setting point, sort of an approach, I want to just say that, you know, I think we all know that there has been a lot of rapid development of network technology. Incidents of cybercrime have also, of course, been rising globally, right? So we're talking about malware attacks, we're talking about phishing, we're talking about distributed denial-of-service (DDoS) attacks, ransomware, all of which, of course, significantly harmed the interests of network users, causing huge losses to society. Just to put things in perspective, according to some industry organizations, losses from all forms of cybercrime about 10 years ago in 2015 amounted to about three trillion US dollars. And in 10 years time, which is by next year, we actually will be tripling, more than tripling that. And we're talking about annual losses of about more than 10 trillion dollars. So, I think it's an obvious conclusion to say that it's imperative to stop cybercrime. And so, Kaspersky, as one of the world's leading cybersecurity solutions providers, we have been extremely committed to providing comprehensive and effective cybersecurity protection to users around the world, obviously. But also, a lot of what we do is to actively cooperate with law enforcement agencies around the world, including Interpol, Afripol, ASEANAPOL, and so on and so forth. So, just, you know, within Asia-Pacific alone, for instance, Kaspersky actually has been given awards also for being part of the Singapore Police Force's Alliance of Public-Private Cybercrime Stakeholders and, of course, also recently appointed to the Hong Kong Police Force's Cybersecurity Action Task Force. What am I really trying to say here? What I'm really trying to say here is that the private sector really has a part to play. And even when we're talking about, you know, for-profit companies, there is a huge amount of expertise that resides within the company, and for companies like Kaspersky, we actually have got a great number of specialists in the cybercrime research area. And we've got the threat intelligence that we provide in our bid to fight cybercrime together. So, in September 2023, for example, during Interpol's second operation to combat the surge in cybercrime in Africa, we actually provided threat intelligence data that enabled investigators to identify compromised infrastructure and arrest suspected cybercrime threat actors across the African region. So, this operation actually resulted in the arrest of 14 perpetrators and the identification of related network infrastructure that resulted in economic losses of more than US$40 million. So, I just really want to pause there. I think these initial remarks are probably enough for context setting. And we can discuss a little bit more in detail if there are more specific questions that come up along the way.


Subi Chaturvedi: Fantastic. I’m so glad that we have you on the panel. These were great examples of some of the things that we’re going to pick up now when we get into specifics of building capacity. With that, General Anand, I want to come to you. This is going to be a shorter round of conversation. We’re keeping about a minute, minute and a half, two minutes each for everyone. In the context of cyber warfare, what are some of the specific strategies that nations can adopt to prevent A, escalation, B, look at de-escalation and also contribute to global peace building when it comes to regional cooperation? You always need to be nice to your neighbor. You don’t choose your neighbor. So go ahead, please take it away.


Major General Pawan Anand: You know, cyberspace today is literally a space of contestation. It's also a space where so much good is happening, but it's also a space where a lot of mala fide actors come into play. I think the major issue that we are looking at over here is how cyberspace is getting exploited by state and non-state actors for waging warfare in a manner where there is complete lack of accountability and lack of identification. That makes it very easy for nation states and non-state actors to actually remain completely anonymous and make sure that they're able to disable and disrupt services that are taking place in a country. The potential for harm in hybrid warfare, I think, is huge. And therefore there is this deep need for at least well-meaning actors to come together and ensure that cyber warfare as we see it today gives way to more of a movement of cyber peace, as we spoke of earlier.


Subi Chaturvedi: Thank you. Thank you so much and safe travels. I want to come to you Jonah again. When you’re looking at emerging technologies like AI, a lot of responses from government sometimes is, hey, where’s the kill switch or how do we regulate? Can you look at positive use cases of how emerging technologies like AI can play an actual role in identifying, sometimes even mitigating risks as well as predicting cybersecurity practices. Over to you.


Jonah Klivnovic: Thank you so much for the question. I think when you asked me that, a case comes to mind and it's a practitioner's dream. The New Zealand government actually launched their own chat bot that was called Rescam. And the main point of this chat bot would be, if you get a phishing email or a scamming message, you could send it to the chat bot and the chat bot would continue the conversation indefinitely with the fraudster to cause resource attrition. And some of the conversations are pretty good. And once you see that as an application for the prevention of cybercrime, that's quite funny to see. But of course, fraudsters use chat bots. So we're probably witnessing some chat bot on chat bot fraud happening right now. But in all seriousness, I mean, AI as a technology is going to change the fabric of society. And it already is. We're seeing the velocity of cybercrime and velocity of cyberattacks significantly increasing. We're seeing the costs going down. But as much as it is a tool for the other side, it is a tool for cybersecurity practitioners. Research analysis is easier now. Code analysis is easier now. So I think there are use cases on both sides because this is just elevating the playing field to a much more high velocity environment. So I think that, you know, in this kind of high velocity environment, something that is, I think, going to be very important, and that we touched on, is this digital literacy and awareness of the general population; that is really going to be a core focus. And I'm going to leave you with the final fact in this section: we did an internal study when I was working for BNP Paribas amongst our clients. And we found that people have two minutes a year to listen about cybersecurity and fraud and cyber crime and stuff like that. So that's the attention span we're dealing with, on average, with average people. And now how do you cram in what is relevant and what is pertinent to the people at the time? So this is where I think from an operational standpoint we need to work better to not just carpet bomb people but actually educate.


Subi Chaturvedi: Thank you. Such a great thought. I think we'll have to look at elevator pitches, on the lines of startup founders. Melodina, I come to you next. We are seeing an increasing dependence on AI and automation. You talked about a crisis of resources and overall our goal is to look at a more inclusive world. What kind of ethical considerations do you think we should be prioritizing when it comes to cybersecurity policies?


Melodina: So first of all, I think when we look at cybersecurity policies, there's a fine line between security and privacy. And that's a challenging one to manage when you look at it from an ethical point of view. So at what point do I justify that the information I've collected on an individual is a security threat or a security deterrent? And again, there are nuances to that all the way to warfare. And you can see that with the Lavender Project; I think Foreign Policy wrote a very nice article on this one, where they collected I think 23,000 identifiers to identify people. It was then used in drone strikes against these people. Now, if I asked you, should we kill X? Yes or no? How much time do you think you would wanna spend on that? And what the research found was that it was 20 seconds. So I think the high velocity part that Jonah mentioned is really, really critical. AI is taking us to a whole other realm where it's not about accountability, but it's that we've become immune to the decisions we take in the security space, and that is scary. And I think more importantly, if our mandate is being human-centered, that it is at the end of the day about people and all lives have value, and that is the fundamental principle of universal human rights, then we have a big challenge on how we're doing this. So if we come really down to the basics, the securitization, the militarization of AI is going to be a big challenge in the future.


Subi Chaturvedi: Thank you. Thank you so much, Melodina, for sharing that. And Jeannie, I come to you next. If you could share with us a successful example or a use case of multilateral cooperation, that would be wonderful in terms of templatizing. It may not be a one-size-fits-all, but a great beginning. Over to you.


Genie Sugene Gan: Thank you. Well, I think one success story, I think from our lived experience would be our No More Ransom initiative, which Kaspersky co-sponsored together with the Royal Netherlands Police and other partners, including other private vendors as well in the cybersecurity space. So together, regardless, just ignoring competition for a moment and everything, we all just came together and we worked on this project that we call No More Ransom, which is really an initiative to fight ransomware attacks. And it actually generated more than 360,000 downloads of our decryption tools over a seven-year period, raising awareness across the globe about the perils of ransomware attacks and prevention. and also providing decryption tools so that people will not have to pay ransom if they were, well God forbid, if they were ever to be attacked by ransomware. So, and this initiative is a prime example of how multilateral stakeholders cooperation can, at all levels actually, in terms of breadth and depth, be able to help to prevent cybercrime. And in the process, we have actually helped nearly 2 million victims decrypt their devices and save them from having to pay a single dollar of ransomware, a single ransomware dollar. So I think this is a good example from my end. I hope that inspires people to sort of come up with initiatives like that and promote more cooperation.


Subi Chaturvedi: Thank you, Jeannie. I still remember back in 2012, Kaspersky, Microsoft, Meta (which was then Facebook), and Google came together in India and they brought together the Stop Think Connect program, which was one of the first interventions that anybody had made across India, working with the youth across colleges. And the most interesting part was the use of street plays, theater, and radio. And there was a cross-fertilization of ideas. So I think Kaspersky has always been a great support and we truly appreciate your initiative. Vineet, I come to you next. You've been working as a volunteer and you've been serving also with the territorial army. You run a center of excellence. And along the way, you've consciously created a space where you've ended up being a bridge with governments as well as different arms of security agencies. How do you think governments can play a more active, let's say a proactive role in ensuring cybersecurity? How do you see them balancing regulation and innovation? Because you work with the industry equally well. I'd love to hear, you know, like a two-minute intervention from you. Go ahead.


Kumar Vineet: Subhi, for this question, I think two minutes are not enough. I probably need ten minutes to address everything. But I'll come to some basic issues. We keep talking about global policy frameworks, the Cyber Crime Treaty, the Cyber Peace Index that CyberPeace is also currently working on. So we are talking about different policies, frameworks, regulations that need to come. But there are some challenges that remain on the ground. And these challenges are some, I would say, unique challenges. And something that Jonah mentioned is awareness. Awareness still remains the number one challenge among the people. Because what we also see is a lot of reinvention happening. People reinvent the same wheel. There are awareness programs, campaigns, but we kind of reinvent the same wheel rather than focusing on the collective impact that can be made, so that more and more people are made aware. Second is the language in which we do these awareness programs. We have to reach the last mile, and we have to communicate these awareness programs in the manner, in the language, that the end users understand. Not everything can be PPT-based or video-based. Some of it could be interactive, like skits or road shows or something like that, something very engaging so that people understand, basically, the issues and problems. This is more on the prevention front. The other is the research front. On the research side, we need to keep in mind, and I, again, keep mentioning it: emerging technologies are coming up, and they are bringing a lot of challenges. While they're bringing challenges, I feel that these technologies also hold the solution. With the right balance and the right human connect, these technologies can come up with a good set of solutions which can impact society at large. So that's where the constant research comes in, because technology is changing. Crime patterns are changing. Cybercrime hotbeds are also changing. Like in India, for those who aren't aware, there is a small district in the state of Jharkhand called Jamtara. There's a Netflix show based on the kind of fraud network that has operated there. Now these kinds of hotbeds are coming up across the country, not just in India but in other countries also. We have seen people getting scammed, Indians getting scammed by people who are residing in Cambodia. In fact there are some Indians who got trafficked to Cambodia and then they were asked to scam people in India. So there are unique sets of problems, unique sets of challenges that are coming up, and it requires, I would say, very strategic thinking and collective thinking, where all of us need to come on board together and try to see how we can solve this problem rather than making efforts which are scattered and which do not have an impact. We need to focus on something very impactful so that we reach the last mile. Otherwise frauds are just increasing. At CyberPeace, every day we are flooded with calls about children getting abused online, issues of child sexual abuse, revenge porn, financial frauds and now cyber-enabled trafficking. In fact, these are the kinds of new challenges where humans are getting trafficked to other countries, and cyberspace has become the medium now. So we need to think about how we can come up with programs like a cyber rehabilitation program, where we need to rehabilitate the survivors and also rehabilitate the cyber criminals. They also need to be rehabilitated so that, overall, we can remove the kind of hotbeds that are coming up in cybercrime. So overall these are the things, and while we keep talking about ethical AI frameworks and MDX and different policies, we need to address the issues on the ground at the grassroots. And overall, that will help us in establishing a peaceful and resilient cyberspace for everyone.


Subi Chaturvedi: Thank you, Vineet. Thank you so much. Sanjeev, I want to come to you quickly. What would be your top two priorities? Let's make it a Twitter intervention. How can international collaboration be enhanced to combat the rise in cybercrime and cyberterrorism? Because we know MLATs exist between countries. But when it comes to practical approaches, what would be your top two points? Over to you. Just a quick one, Sanjeev. We want to keep some time for questions from the floor and online participants as well. Go ahead.


Sanjeev Relia: So two things which will improve the situation from what it is today is one is that building of communication channels between nations, which unfortunately are not there. We need to exchange information, whether it is on cybercrime, whether it is on cyberattacks, whether it is on zero-day attacks, whatever be the case. We need to exchange information. We need to update each other. And we have to have these communication channels. We can’t be working in silos in the cyberspace. And two is capacity building. I personally feel that the globe today lacks capacity to fight the challenges of cyberspace. So hopefully, we need to develop capacities within the nation, with developed nations helping other nations to build up the capacity to fight the menace in the cyberspace.


Subi Chaturvedi: Thank you, Sanjeev. And a plus one to you for sticking to time. I think it's been a fantastic conversation. Before we open the floor for questions, I was listening actively to all the participants. We've come up with somewhat of a high-level framework based on the deliberations. This is Dr. Subi for the record again. I think we can look at something like SECURED, which is a framework that we are proposing. And we can look at collaboration across all stakeholder groups. So the S here stands for strengthening global governance. The E stands for empowering communities through digital inclusion. The C stands for championing ethical technology development. U calls out for uniting stakeholders for collaborative innovation. R is calling out for resilience in cyber ecosystems. E is greater education and awareness. Thank you, Jonah and Melodina, both, for emphasizing that point, and Vineet for doing the heavy lifting. And D here is digital transformation for sustainable growth. The floor is now open for questions and my heart is filled to the brim. Thank you for being such fantastic panelists, both people in the room as well as people who've joined us online despite their commitments. Please feel free to ask any questions, direct them to the panelists. You can even make an observation or a comment. Do we have any questions online? We don't seem to have any questions from the room. We do, Vineet. We've just received a question. This is a question from Nabil Syed. All the panelists can actually respond to it. Great question. What steps can be taken to create a more cohesive global governance framework for cybersecurity, ensuring that all nations, regardless of their economic or technological standing, are adequately protected? We did speak briefly to that point: who are the people who are not in the room today? How do we look at giving voice to the voiceless, giving more agency despite the fact that this could be an emerging country, it could be a developing economy, it could be a small island nation? Would any of the panelists like to go ahead and take a stab at it?

Melodina: So I think if you really want to do it well, you need both a bottom-up approach and a top-down approach. If you start from the bottom, it’s education. We spoke about that very strongly. So it’s a very strong literacy on AI, some of its vulnerabilities, and how to safeguard and protect yourself, right? Then at the second level, you’ve got to go back to people who are designing these AI systems. They have to understand it needs to be inclusive. So the moment you’re not inclusive, people are going to be left behind, and they’re going to be vulnerable. So designing systems, taking into consideration people’s limitations, whether it’s technology, language, access, will actually help them. Third, you need to go to the industry and set standards. We do have some standards, but they’re not yet very global, and I don’t think we have much alignment on them. So this is important, having global alignment on some standards. And last, of course, you’ve got to have political will. Because the world is so fragmented at this point in time, and I think it will continue over the next year or so, we need to have political will on what does cyber peace mean or what does peace mean, and what are we willing to pay to get that?


Sanjeev Relia: Can I add to that, Subi?


Subi Chaturvedi: Absolutely, Sanjeev. Over to you.


Sanjeev Relia: Thank you. So my personal view is that a digital divide will remain forever. Like in the physical space, there are powerful nations, and there are nations which are not so powerful. There are rich nations, and there are nations which are not so rich. Similarly, in cyberspace, we will have nations which will always be way ahead in technology, and there will be nations who are trying to catch up. Now, how can we bridge this digital divide so that we ensure that we are all equally safe in cyberspace? First, we need to prioritize inclusivity and equality. We have to bring everyone on board. Second, we need to leverage technology; I personally feel that emerging technologies like AI have tremendous scope to bridge this gap, so we need to do that. And one last thing is that we need to foster trust amongst nations. I personally feel that right now this trust is missing as far as cyberspace is concerned. So there have to be confidence-building measures put into place. We need to encourage dialogues between nations which have to be multi-stakeholder. That is the only way we will be able to bridge the digital divide and bring in more security.


Subi Chaturvedi: Thank you, Sanjeev. Great thought. CBMs are wonderful. We have another question from an online participant. Pragya Tantia is asking, as cyber attacks become more sophisticated, how can the international community recognize and respond to cyber warfare in the same way as traditional warfare, ensuring accountability and protection under international law? A very, very interesting question. Jonah, do you want to come in? I would love to hear your thoughts or Jeannie or Major General Pawan Anand.


Audience: I can give my opinion. I might be a bit cynical, because I don’t think the international community is doing a great job at holding people accountable in traditional warfare, let alone in cyber warfare. So I don’t think it’s easy, because our societies are split by borders and by nations and by different cultures that govern us and guide us. And unfortunately, cyberspace isn’t. Accountability is always going to be an issue in cyberspace, and I don’t see a clear-cut solution. I think multilateralism and the building of this coherent collective trust is one baby step on the way forward. This also pertains to the previous question. We need to build a governance framework that caters to the lowest common denominator. And this is extremely difficult. But more importantly, we need to stop spending time only in dialogue. Things need to start happening. Because dialogue and discussions and everything are great, but from an operational perspective, the more time we waste, the worse things will get. If you look at the projections in cybercrime losses, in infrastructure damage and so on, it’s only getting worse. When is the tide going to be stemmed? So this is a bit of a pessimistic operational view, a spanner in the works.


Subi Chaturvedi: Thank you so much, Jonah. And that’s all that we have time for. I want to thank all our 23 participants who’ve dedicatedly stayed online. The two, Nabeel and Pragya, thank you for your questions from India. Vineet, thank you for being a fantastic co-host and moderator despite your ill health. I hope you recover soon. To all the panelists, you’ve been absolutely outstanding. I couldn’t have asked for a better panel. General Pawan Anand, Jonah, Melodina, Jeannie, Vineet, Sanjeev, thank you very much. And also to the team and our wonderful hosts in Saudi Arabia, thank you for giving us a fantastic IGF. We look forward to the next edition. Vineet, I hand over the mic to you to close the proceedings. Thank you.


Kumar Vineet: Thank you, Subi, for being an excellent moderator. You actually stretched the session very well, bringing out all the best pointers and suggestions that it could offer. So thank you. And thank you to all the speakers, the panelists, and the audience present here. Overall, we wish to work together for a peaceful and resilient cyberspace for everyone, where we protect communities at large and society at large, and it starts here. So with that, we’d like to end the session. Thank you, everyone, once again. Thank you.



Major General Pawan Anand

Speech speed: 135 words per minute
Speech length: 561 words
Speech time: 248 seconds

Building trust and awareness among stakeholders

Explanation: Major General Pawan Anand emphasizes the importance of building trust and spreading awareness among stakeholders in cyberspace. He suggests that this can be achieved through knowledge sharing and maintaining proper cyber hygiene.

Evidence: The speaker mentions the need for digital penetration followed by ensuring safety in the digital world.

Major Discussion Point: Strategies for Cyber Peace and Security

Agreed with: Kumar Vineet, Sanjeev Relia

Agreed on: Need for capacity building and awareness



Kumar Vineet

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Crowdsourcing cybersecurity efforts and creating first responders

Explanation: Kumar Vineet advocates for a crowdsourcing approach to cybersecurity, involving all stakeholders. He emphasizes the importance of creating first responders who can address emerging cyber threats at the grassroots level.

Evidence: CyberPeace’s plan to create 8 million first responders in India over the next two years, and efforts to expand this network to Commonwealth countries.

Major Discussion Point: Strategies for Cyber Peace and Security

Agreed with: Major General Pawan Anand, Genie Sugene Gan, Sanjeev Relia

Agreed on: Importance of multi-stakeholder collaboration

Differed with: Audience

Differed on: Approach to addressing cybersecurity challenges

Language barriers and accessibility issues in cybersecurity awareness

Explanation: Kumar Vineet highlights the challenge of language barriers and accessibility in cybersecurity awareness programs. He emphasizes the need for communicating in a language and manner that end-users can understand.

Evidence: Suggestion to use interactive methods like skits or road shows for engaging awareness programs.

Major Discussion Point: Challenges in Achieving Global Cyber Peace

Agreed with: Major General Pawan Anand, Sanjeev Relia

Agreed on: Need for capacity building and awareness



Audience

Speech speed: 167 words per minute
Speech length: 1054 words
Speech time: 377 seconds

Enhancing threat intelligence sharing between nations and industries

Explanation: The speaker emphasizes the importance of sharing threat intelligence between nations and industries. They highlight the need for operationalizing intelligence sharing and building trust between public and private sectors.

Evidence: Example of regulatory friction in sharing data of fraudsters between European entities due to GDPR considerations.

Major Discussion Point: Strategies for Cyber Peace and Security

Differed with: Kumar Vineet

Differed on: Approach to addressing cybersecurity challenges

Lack of accountability in cyberspace

Explanation: The speaker points out the difficulty in holding actors accountable in cyberspace compared to traditional warfare. They express skepticism about the international community’s ability to enforce accountability in cyber warfare.

Major Discussion Point: Challenges in Achieving Global Cyber Peace

Creating inclusive governance frameworks that cater to all nations

Explanation: The speaker emphasizes the need for a governance framework that caters to the lowest common denominator. They stress the importance of moving beyond dialogue to taking concrete actions.

Evidence: Reference to projections of increasing cybercrime losses and infrastructure damage.

Major Discussion Point: Building International Collaboration for Cybersecurity



Melodina

Speech speed: 172 words per minute
Speech length: 984 words
Speech time: 342 seconds

Leveraging AI and emerging technologies for cybersecurity

Explanation: Melodina discusses the potential of AI and emerging technologies in cybersecurity. She highlights both the challenges and opportunities presented by these technologies in the context of cyber peace.

Evidence: Example of Google Willow’s computing capabilities compared to supercomputers.

Major Discussion Point: Strategies for Cyber Peace and Security

Rapid pace of technological change outpacing policy frameworks

Explanation: Melodina points out that the speed of technological advancement, particularly in AI, is outpacing our ability to create appropriate policy frameworks. This creates challenges in managing the ethical implications of these technologies.

Evidence: Reference to the high-velocity environment created by AI and its impact on decision-making in security spaces.

Major Discussion Point: Challenges in Achieving Global Cyber Peace

Balancing security needs with privacy concerns

Explanation: Melodina highlights the ethical dilemma of balancing security needs with privacy concerns in cybersecurity policies. She emphasizes the challenge of justifying information collection for security purposes while respecting individual privacy.

Evidence: Reference to the Lavender Project and its use of identifiers in drone strikes.

Major Discussion Point: Challenges in Achieving Global Cyber Peace

Developing global standards and alignment on cybersecurity practices

Explanation: Melodina emphasizes the need for global alignment on cybersecurity standards. She suggests that this is crucial for creating a more cohesive global governance framework for cybersecurity.

Major Discussion Point: Building International Collaboration for Cybersecurity



Genie Sugene Gan

Speech speed: 156 words per minute
Speech length: 876 words
Speech time: 335 seconds

Promoting multi-stakeholder cooperation and public-private partnerships

Explanation: Genie Sugene Gan emphasizes the importance of multi-stakeholder cooperation and public-private partnerships in combating cybercrime. She highlights how private sector expertise can contribute to global cybersecurity efforts.

Evidence: Example of Kaspersky’s cooperation with law enforcement agencies like Interpol, Afropol, and ASEANAPOL, and their involvement in Singapore Police Force’s Alliance of Public-Private Cybercrime Stakeholders.

Major Discussion Point: Strategies for Cyber Peace and Security

Agreed with: Major General Pawan Anand, Kumar Vineet, Sanjeev Relia

Agreed on: Importance of multi-stakeholder collaboration



Sanjeev Relia

Speech speed: 155 words per minute
Speech length: 773 words
Speech time: 297 seconds

Digital divide and unequal technological capabilities between nations

Explanation: Sanjeev Relia acknowledges the persistent digital divide between nations in cyberspace. He suggests that this divide will continue to exist, similar to power disparities in the physical world.

Major Discussion Point: Challenges in Achieving Global Cyber Peace

Establishing communication channels between nations for information exchange

Explanation: Sanjeev Relia emphasizes the need for building communication channels between nations to exchange information on cybercrime, cyberattacks, and zero-day attacks. He argues that working in silos is not effective in cyberspace.

Major Discussion Point: Building International Collaboration for Cybersecurity

Enhancing capacity building efforts, especially for developing nations

Explanation: Sanjeev Relia highlights the global lack of capacity to address cybersecurity challenges. He suggests that developed nations should help other nations build their capacity to combat cyber threats.

Major Discussion Point: Building International Collaboration for Cybersecurity

Agreed with: Major General Pawan Anand, Kumar Vineet

Agreed on: Need for capacity building and awareness

Fostering trust and confidence-building measures between countries

Explanation: Sanjeev Relia emphasizes the importance of fostering trust among nations in cyberspace. He suggests implementing confidence-building measures and encouraging multi-stakeholder dialogues between nations.

Major Discussion Point: Building International Collaboration for Cybersecurity

Agreed with: Major General Pawan Anand, Kumar Vineet, Genie Sugene Gan

Agreed on: Importance of multi-stakeholder collaboration


Agreements

Agreement Points

Importance of multi-stakeholder collaboration
Speakers: Major General Pawan Anand, Kumar Vineet, Genie Sugene Gan, Sanjeev Relia
Arguments: Building trust and awareness among stakeholders; Crowdsourcing cybersecurity efforts and creating first responders; Promoting multi-stakeholder cooperation and public-private partnerships; Fostering trust and confidence-building measures between countries
Summary: Speakers agree on the critical need for collaboration between various stakeholders, including governments, industry, and civil society, to address cybersecurity challenges effectively.

Need for capacity building and awareness
Speakers: Major General Pawan Anand, Kumar Vineet, Sanjeev Relia
Arguments: Building trust and awareness among stakeholders; Language barriers and accessibility issues in cybersecurity awareness; Enhancing capacity building efforts, especially for developing nations
Summary: Speakers emphasize the importance of building capacity and raising awareness about cybersecurity issues, particularly in developing nations and at the grassroots level.

Similar Viewpoints

Both speakers stress the need for international cooperation and standardization in cybersecurity practices to address global cyber threats effectively.
Speakers: Melodina, Sanjeev Relia
Arguments: Developing global standards and alignment on cybersecurity practices; Establishing communication channels between nations for information exchange

Unexpected Consensus

Persistent digital divide
Speakers: Melodina, Sanjeev Relia
Arguments: Rapid pace of technological change outpacing policy frameworks; Digital divide and unequal technological capabilities between nations
Explanation: Despite coming from different backgrounds, both speakers acknowledge the persistent digital divide and the challenges it poses to global cybersecurity efforts. This consensus highlights the complexity of achieving universal cyber peace.

Overall Assessment

Summary: The main areas of agreement include the importance of multi-stakeholder collaboration, the need for capacity building and awareness, and the challenges posed by the digital divide and rapid technological advancements.

Consensus level: There is a moderate level of consensus among the speakers on the fundamental challenges and approaches to cybersecurity. However, there are variations in the specific strategies and focus areas proposed by different speakers. This level of consensus suggests that while there is a shared understanding of the core issues, there is still room for debate and diverse approaches in addressing global cyber peace and security.


Differences

Different Viewpoints

Approach to addressing cybersecurity challenges
Speakers: Kumar Vineet, Audience
Arguments: Crowdsourcing cybersecurity efforts and creating first responders; Enhancing threat intelligence sharing between nations and industries
Summary: Kumar Vineet emphasizes a grassroots approach with first responders, while the Audience speaker focuses on enhancing threat intelligence sharing at higher levels.

Unexpected Differences

Optimism about addressing cybersecurity challenges
Speakers: Audience, Kumar Vineet
Arguments: Lack of accountability in cyberspace; Crowdsourcing cybersecurity efforts and creating first responders
Explanation: The Audience speaker expresses skepticism about addressing cybersecurity challenges, while Kumar Vineet presents a more optimistic view with concrete solutions.

Overall Assessment

Summary: The main areas of disagreement revolve around the approach to addressing cybersecurity challenges, the level of optimism about solutions, and the specific focus areas for international collaboration.

Difference level: The level of disagreement among speakers is moderate. While there are differences in approaches and emphasis, there is a general consensus on the importance of international collaboration and the need to address cybersecurity challenges. These differences in perspective can potentially lead to a more comprehensive approach to cybersecurity if integrated effectively.

Partial Agreements

Both speakers agree on the need for international collaboration, but Melodina emphasizes global standards while Sanjeev Relia focuses on establishing communication channels.
Speakers: Melodina, Sanjeev Relia
Arguments: Developing global standards and alignment on cybersecurity practices; Establishing communication channels between nations for information exchange




Takeaways

Key Takeaways

– Building trust and awareness among stakeholders is crucial for cyber peace

– Multi-stakeholder cooperation and public-private partnerships are essential

– Emerging technologies like AI can be leveraged for cybersecurity but also pose new challenges

– There is a significant digital divide between nations that needs to be addressed

– International collaboration and information sharing are key to combating cybercrime

– Balancing security needs with privacy concerns remains an ongoing challenge

Resolutions and Action Items

– Develop more cohesive global governance frameworks for cybersecurity

– Enhance threat intelligence sharing between nations and industries

– Create cyber rehabilitation programs for both survivors and criminals

– Establish communication channels between nations for information exchange

– Focus on capacity building efforts, especially for developing nations

Unresolved Issues

– How to ensure accountability in cyberspace given the lack of borders

– How to bridge the digital divide between technologically advanced and developing nations

– How to balance innovation with regulation in a rapidly evolving technology landscape

– How to effectively combat sophisticated cyber attacks and cyber warfare

– How to create truly inclusive governance frameworks that cater to all nations

Suggested Compromises

– Balancing security needs with privacy concerns through ethical AI frameworks

– Combining top-down policy approaches with bottom-up education and awareness efforts

– Leveraging AI and emerging technologies to help bridge the digital divide between nations


Thought Provoking Comments

“We generally believe that governments cannot do it alone. When we talk about peace and responsible online behavior, when we talk about security, all of us need to be… need to come on a ground. All of us need to come on a platform. Industry, academia, civil society, government, netizens, all of us need to come.”
Speaker: Kumar Vineet
Reason: This comment emphasizes the critical need for multi-stakeholder collaboration in addressing cybersecurity challenges, highlighting that no single entity can solve these issues alone.
Impact: It set the tone for discussing collaborative approaches throughout the session and led to further exploration of how different sectors can work together.

“AI as a technology is going to change the fabric of society. And it already is. We’re seeing the velocity of cybercrime and velocity of cyberattacks significantly increasing. We’re seeing the costs going down. But as much as it is a tool for the other side, it is a tool for cybercrime and cybersecurity practitioners.”
Speaker: Jonah
Reason: This comment provides a balanced perspective on AI’s impact on cybersecurity, acknowledging both its potential for harm and its utility for defense.
Impact: It deepened the discussion on emerging technologies by highlighting the dual-use nature of AI and prompted further consideration of how to harness AI for cybersecurity while mitigating its risks.

“If our mandate is being human-centered, that it is at the end of the day about people and all lives have value, and that is the fundamental principle of universal human rights, then we have a big challenge on how we’re doing this. So if we come really down to the basics, the securitization, the militarization of AI is going to be a big challenge in the future.”
Speaker: Melodina
Reason: This comment brings ethical considerations to the forefront, emphasizing the need to prioritize human rights and human-centered approaches in cybersecurity and AI development.
Impact: It shifted the conversation towards the ethical implications of cybersecurity measures and AI, prompting participants to consider the human impact of technological solutions.

“We need to exchange information, whether it is on cybercrime, whether it is on cyberattacks, whether it is on zero-day attacks, whatever be the case. We need to exchange information. We need to update each other. And we have to have these communication channels. We can’t be working in silos in the cyberspace.”
Speaker: Sanjeev Relia
Reason: This comment underscores the importance of information sharing and open communication channels between nations to combat cyber threats effectively.
Impact: It led to further discussion on international collaboration and the need for trust-building measures between nations in cyberspace.

Overall Assessment

These key comments shaped the discussion by emphasizing several crucial themes: the necessity of multi-stakeholder collaboration, the dual nature of emerging technologies like AI in cybersecurity, the importance of maintaining a human-centered and ethical approach, and the need for enhanced international cooperation and information sharing. The discussion evolved from identifying challenges to exploring potential solutions, with a focus on balancing technological advancements with ethical considerations and fostering global cooperation to address cybersecurity issues effectively.


Follow-up Questions

How can we address the challenge of limited attention span for cybersecurity awareness?
Speaker: Jonah
Explanation: Jonah mentioned that people have only about two minutes a year to listen about cybersecurity, which poses a significant challenge for education and awareness efforts.

How can we balance security and privacy considerations in cybersecurity policies?
Speaker: Melodina
Explanation: Melodina highlighted the ethical challenge of balancing security needs with privacy rights when collecting information on individuals for cybersecurity purposes.

How can we address the securitization and militarization of AI in cybersecurity?
Speaker: Melodina
Explanation: Melodina pointed out that the increasing use of AI in security decisions raises ethical concerns about accountability and human-centered approaches.

How can we develop more effective cyber rehabilitation programs for both survivors and cybercriminals?
Speaker: Kumar Vineet
Explanation: Vineet suggested the need for programs to rehabilitate both cybercrime survivors and perpetrators to address the root causes of cybercrime.

How can we improve communication channels between nations for sharing cybersecurity information?
Speaker: Sanjeev Relia
Explanation: Sanjeev emphasized the need for better information exchange between countries on various cyber threats and attacks.

How can developed nations help build cybersecurity capacity in less developed countries?
Speaker: Sanjeev Relia
Explanation: Sanjeev highlighted the global lack of capacity to address cybersecurity challenges and the need for developed nations to assist others.

How can we create global alignment on cybersecurity standards?
Speaker: Melodina
Explanation: Melodina pointed out the lack of global alignment on cybersecurity standards and the importance of addressing this issue.

How can we leverage AI to bridge the digital divide in cybersecurity?
Speaker: Sanjeev Relia
Explanation: Sanjeev suggested exploring the potential of AI to help address disparities in cybersecurity capabilities between nations.

How can we foster more trust among nations in the cyberspace domain?
Speaker: Sanjeev Relia
Explanation: Sanjeev emphasized the need for confidence-building measures and multi-stakeholder dialogues to build trust between nations in cyberspace.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #49 Digital Policy as a Catalyst for Economic Growth in Nigeria

Open Forum #49 Digital Policy as a Catalyst for Economic Growth in Nigeria

Session at a Glance

Summary

This discussion focused on data policy as a catalyst for economic growth in Nigeria, bringing together government officials, legislators, and industry stakeholders. The conversation highlighted the need for consistent and coherent policies in the digital space, with participants emphasizing the importance of collaboration between agencies and stakeholders. Key challenges identified included policy inconsistencies, mandate overlaps between agencies, and funding constraints.

Participants stressed the need for better awareness and implementation of existing policies, as well as the importance of capacity building for legislators and other stakeholders. The discussion touched on the role of civil society organizations in policy formulation and implementation, with calls for greater engagement and ease of access to policymakers.

The importance of youth involvement in the digital economy was emphasized, along with the need for Nigeria-specific solutions to address local challenges. Cybersecurity and data protection were highlighted as critical areas requiring attention, particularly in light of the growing digital landscape.

The discussion concluded with a call for more regular meetings and collaborations between stakeholders, including the proposal for quarterly meetings and the establishment of a WhatsApp group for ongoing communication. Participants agreed on the need to leverage Nigeria’s large population and talent pool to drive digital economic growth, while also learning from best practices in other countries.

Overall, the discussion underscored the importance of a unified approach to policy formulation and implementation in Nigeria’s digital economy, with a focus on collaboration, capacity building, and addressing specific challenges to drive economic growth.

Keypoints

Major discussion points:

– The need for policy coherence and consistency across government agencies in the digital/tech sector

– Challenges with implementing policies and regulations, including funding constraints and overlapping mandates

– The importance of multi-stakeholder collaboration and engagement, including with legislators, civil society, and youth

– Building capacity and awareness around digital policies, both within government and for the general public

– Leveraging Nigeria’s large youth population and tech talent for digital economic growth

Overall purpose:

The goal of this discussion was to identify challenges and opportunities related to digital/tech policy in Nigeria, and to determine concrete action items for improving policy formulation and implementation going forward.

Tone:

The tone was largely constructive and collaborative, with participants openly sharing challenges and proposing solutions. There was a sense of urgency to make progress, balanced with recognition of existing efforts. The tone became more action-oriented towards the end as participants focused on next steps and commitments.

Speakers

– Engr. Kunle Olorundare: Moderator

– Dr. D. S. Wariowei: Chairman of IGF Multi-Stakeholder Advisory Group in Nigeria, Director of Corporate Planning and Strategy at NEDA

– Mary Uduma: Chairman of West African Internet Governance, Coordinator of West African Internet Governance

– Panelist: Representing the Honorable Minister of Communication, Innovation and Digital Economy

– Niteabai Dominic: Representative from NIDA (National Information Technology Development Agency)

– Dr. Vincent Olatunji: National Commissioner, Nigeria Data Protection Commission

– Khadijah Sani: Representative from Nigerian Communications Commission (NCC)

– Benjamin Akinmoyeje: Representative from Internet Society Nigeria Chapter

– Adedeji Stanley-Olajide: Chairman, House Committee on ICT and Cybersecurity

Additional speakers:

– Dr. Jimson Olufuye: Long-time advocate in the technology sector

– Amina Ramallan: Youth representative, works for Nigerian Communications Commission

– Aishat Bashir Tukur: Representative from Federal Inland Revenue Services, also representing a startup

– Sinhwe Zamobilo: Senior Program Officer with Pridem Initiative, a civil society organization

Full session report

Expanded Summary of Discussion on Data Policy as a Catalyst for Economic Growth in Nigeria

Introduction:

This discussion brought together government officials, legislators, and industry stakeholders to address the role of data policy in driving economic growth in Nigeria. The conversation highlighted the need for consistent and coherent policies in the digital space, emphasising the importance of collaboration between agencies and stakeholders. Participants identified key challenges and opportunities, proposing concrete actions to improve policy formulation and implementation in the digital sector.

Key Discussion Points:

1. Policy Development and Implementation:

The need for policy coherence and consistency across government agencies in the digital sector emerged as a central theme. Engr. Kunle Olorundare stressed the importance of a multistakeholder approach, including sub-national entities, in policy development. Niteabai Dominic highlighted challenges in implementing policies due to overlapping mandates between agencies. An audience member raised the importance of evaluating existing policies, suggesting a focus on improving current frameworks rather than creating new ones. The recent development of an AI strategy by the Ministry of Communication was noted as a positive step.

2. Digital Infrastructure and Access:

While progress has been made in internet access, participants noted the need for further infrastructure investment. Khadijah Sani from the Nigerian Communications Commission highlighted funding challenges for the Universal Service Provision Fund, which aims to expand connectivity to underserved areas.

3. Data Protection and Cybersecurity:

Dr. Vincent Olatunji discussed the establishment of the new Data Protection Commission, highlighting challenges such as funding issues and the need for increased awareness of data protection rights and obligations among citizens. Aishat Bashir Tukur raised concerns about cybersecurity policies, particularly in relation to youth protection online. An audience member mentioned implementation challenges related to the recent blockchain technology policy.

4. Legislative Engagement:

Adedeji Stanley-Olajide, Chairman of the House Committee on ICT and Cybersecurity, stressed the need for capacity building among legislators on technology issues. She called for improved collaboration between agencies and legislators, noting the unique opportunity presented by having industry experts chairing relevant committees in both the Senate and House of Representatives. An audience member suggested the need for easier access to legislators by stakeholders. Dr. D. S. Wariowei emphasized the importance of parliamentary involvement in the Internet Governance Forum.

5. Youth Engagement in Digital Economy:

Several speakers highlighted Nigeria’s large youth population as an opportunity for digital economy growth. Amina Ramallan emphasised the need to integrate youth voices in decision-making processes and the importance of including digital literacy in school curricula.

6. International Cooperation and Best Practices:

Mary Uduma, Chairman of West African Internet Governance, discussed learning from other West African countries’ digital initiatives and highlighted the role of the Nigeria Internet Governance Forum (NIGF) as a platform for stakeholder engagement. Benjamin Akinmoyeje from the Internet Society Nigeria Chapter stressed the importance of participating in global internet governance forums and the Society’s role in representing various stakeholders. Dr. Jimson Olufuye mentioned the recent development of the Global Digital Compact and its relevance to Nigeria. However, an audience member cautioned that while learning from international best practices is valuable, there is also a need for Nigeria-specific solutions to address local digital challenges.

Areas of Agreement:

Participants largely agreed on the need for a multistakeholder approach in policy development, the importance of capacity building and awareness initiatives, and the need for further investment in digital infrastructure. There was also consensus on the importance of addressing youth-specific concerns in digital policies and leveraging Nigeria’s large youth population for digital economic growth.

Areas of Disagreement:

Some differences emerged in perspectives on policy implementation challenges. While Niteabai Dominic focused on overlapping mandates as a key issue, others emphasised the need for stronger institutions. There were also varying approaches to inclusive decision-making, with some focusing on sub-national entities and others on youth integration.

Key Takeaways and Action Items:

1. Establish quarterly meetings for stakeholders in the digital sector

2. Create a WhatsApp group for continued communication between stakeholders

3. The National Assembly to take a lead role in bringing stakeholders together

4. Explore organising a Nigeria-specific World Summit on the Information Society (WSIS) forum

5. Integrate digital literacy into school curricula

6. Provide capacity building for legislators on technology issues

7. Improve access to legislators and policymakers for stakeholders

Unresolved Issues:

Several issues remained unresolved, including how to effectively address overlapping mandates between agencies, specific strategies for increasing funding for digital infrastructure projects, and detailed plans for improving cybersecurity, especially for youth.

Thought-Provoking Comments:

Dr. Vincent Olatunji’s comment that “We never lack good policies anywhere. Our major challenges have been the implementation, funding, infrastructure, and labour market” shifted the focus from policy creation to implementation challenges. Adedeji Stanley-Olajide’s observation about the underutilised opportunity of having industry experts chairing key committees in the National Assembly sparked discussion about improving collaboration between government agencies and legislators.

Conclusion:

The discussion underscored the importance of a unified approach to policy formulation and implementation in Nigeria’s digital economy. Participants emphasised the need for collaboration, capacity building, and addressing specific challenges to drive economic growth. The conversation highlighted the potential of Nigeria’s large population and talent pool in the digital sector, while also acknowledging the need for tailored solutions to local challenges. Moving forward, regular meetings, improved communication between stakeholders, and increased parliamentary involvement were identified as crucial steps in advancing Nigeria’s digital policy landscape.

Session Transcript

Engr. Kunle Olorundare: Thank you. Good afternoon. Some brief welcome remarks. The chairman of the IGF Multi-Stakeholder Advisory Group in Nigeria, who is also the director in charge of Corporate Planning and Strategy of NEDA, I welcome Dr. Wariowei; please give us some very brief opening remarks. You’re welcome.

Dr. D. S. Wariowei: Thank you, distinguished speaker. Thank you very much for coming. I have the privilege to welcome all of you to this meeting. This is actualizing our NIGF in Nigeria; now we’re doing this in Saudi Arabia. Briefly, we had our NIGF in Nigeria in Port Harcourt two months ago, in October precisely, and we had several sessions: the youth session, the women’s IGF, as well as the internet school. We had all of that. It was quite a successful one, and now that we are doing it here in Saudi Arabia, we’ve learned quite a lot, and we intend to implement what we have learned when we get back to Nigeria. Specifically, I note the parliamentary session that I attended a few minutes ago, and it was glaring that so much can be achieved if we involve the parliamentarians in our NIGF journey. Thus far, they’ve been slightly far away, and I think that by the time we return to Nigeria, we should co-opt them so that they are part of it, so that in our next IGF meeting sometime next year, they will have a full representation to guide how we go about this. A few other things we’ve learned here we intend to apply when we get back to our country. So on that note, I welcome all of you to this session, knowing that at the end of the session, we should be able to come out with action points that we can implement. Thank you very much.

Engr. Kunle Olorundare: Thank you so much, the Chair of the Nigeria MAG. And at this point in time, I’m going to be inviting Engineer Farag Yusuf. First, he will give the opening remarks of the Honorable Minister of Communication, Innovation and Digital Economy. And after that, I will also be inviting him to commence the conversation. But for now, let him put on the cap of the Minister. So, Honorable Minister of Communication, Innovation and Digital Economy, you have a few minutes to make the opening remarks of the Minister. I hope you can all hear me. The Chairman, Senate Committee on ICT and

Panelist: Cybersecurity, our very able distinguished Senator, Shaib Apolabi. Our Chairman, House Committee on ICT and Cybersecurity, also Honorable Adedeji Stanley. The representative of the Director General of NIDA, the Director of ICT at the Federal Ministry of Communications, Innovation, and Digital Economy. Very distinguished guests, ladies and gentlemen, a very good evening to everyone. It is indeed a great pleasure for me to, first of all, be at the Internet Governance Forum 2024 here in Riyadh for the first time. I’m actually attending this for the very first time, so you can as well imagine my excitement about what I’ve seen in the last few days. That said, I’m standing here, as introduced, speaking on behalf of His Excellency the Honorable Minister of Communications, Innovation and Digital Economy, Dr. Bosun Tijani, who would have been here but for other very cogent reasons. I spoke with him this morning. I mentioned to him that there is this session coming up, and he asked that I should send his very warm greetings to all of you and express the appreciation of the Ministry of Communications to most of you who have taken the pains to be here to represent Nigeria, especially those members of the Nigerian Internet Governance Forum who have steadily been hands-on, both here in Riyadh and before now, in ensuring that the digital economy of Nigeria continues to thrive. And more importantly, for the fact that today has been scheduled specifically to discuss a very important topic for Nigeria. I think there would be no better topic to be discussed at this kind of time, reflecting on so many things: the fact that Nigeria has already positioned itself, in terms of our activities and our recognition, as a leading, if not the fastest growing, country in terms of digital economy. Of course, without a doubt, we have witnessed phenomenal growth with regards to innovators, entrepreneurs, and even investors within the last couple of years. We’ve seen unicorn companies emerging out of Nigeria’s digital economy, companies like Flutterwave, Paystack, and a lot of others that are at different stages of incubating themselves. So it is fine that at this kind of time, we sit around the table like this and reflect back and see what are those opportunities and what are those challenges that we need to address. This can only be done through policies. And that’s why, again, this meeting is extremely important. It’s no longer news that the Nigerian population is well over 200 million people. And we’re also very much aware that out of this, over 65% are currently under the age of 35. That places Nigeria appropriately to lead the digital economy transformation agenda, not only of our country, but of our region and indeed of the world. We can only do that if we have the right policies in place. Policies that create the enabling environment for us to build world-class connectivity infrastructure in terms of fiber optics, satellites, microwaves, and the rest. It’s also only through policies that we’re able to position Nigeria with regards to talent, our ability to make good use of the human resources we have, and channeling them towards local and international capacities for the growth of this very important sector. Of course, as we speak, internet access is almost 75%, which means almost 150 million people, the entire population of the Gulf, having access to the internet in our country.
And that’s a huge opportunity that requires this kind of discussion. Again, we have big opportunities in terms of our infrastructure. The meeting we are having today, although it’s mostly a mix of different Nigerian agencies, presents us with a very unique opportunity to think around this: given that we have the benefit of having our legislators here with us, we can come up with big ambitions that could literally allow us to move forward. Before I round up, I want to just mention a few policy developments that have recently happened in the country, so that it will help our discussants to chart an even better course. By 2021, we already had the National Digital Economy Policy and Strategy. By mid-2023, we had a ministerial strategic blueprint that is built on five pillars: the first being knowledge, talent, and literacy; the second being policy; the third being infrastructure; the fourth being innovation, entrepreneurship, and capital; and the fifth being trade, particularly inter-African trade and international trade. With this, the ministry was also able to develop an AI strategy. Throughout the time we’ve been here, the discussion has centered around artificial intelligence. And we are told, and we can clearly see, that any country that is not in the pool stands the risk of being left behind. In a very proactive manner, the leadership of the Ministry of Communication, working with all of you here as agencies and legislators, was able to develop this forward-looking strategic blueprint that led to that artificial intelligence strategy, which I believe is one of the leading documents, not only on the African continent, but around the world. And this is because of the manner in which the policy was developed. It was actually created by bringing the best brains of Nigerian origin from all over the world, from MIT, Harvard, Oxford, TM, and the rest, together to spend a whole week in Abuja thinking about the best practices around the world. So I believe Nigeria is currently in a good position, and taking advantage of the enormous knowledge we have gathered from this event, we should be able to go back home and translate these learnings into implementing our already existing policies and frameworks. So on this note, distinguished senator, distinguished member, other distinguished guests, I want to thank everyone for being here. I also want us to be open-minded. This is a platform and opportunity for us to discuss robustly so that we can take away something that can be applied back home for the good of our people. I thank you for listening. Thank you.

Engr. Kunle Olorundare: Nigeria, you are part of Nigeria. Please clap for yourself. I’m sorry, my hands are ticking. But I’ll take you to stop before you go to Riyadh. So you will have to thank everyone and stuff for that. I’m going to do that. So maybe for them. It will take a lot of effort to see the minister or their peers back in Nigeria. Coming to the National Assembly is also not an easy task. Madam Uduma is always traveling. And the Internet Society is always on the internet. So now that we’re here, I want us to be able to leave here with some concrete action points, so that we just don’t come here and talk and then go. This is a Nigeria Open Forum, so it’s an interface between us and all the stakeholders. I want us to be able to leave here with some two, three, four, five action points that, when we get back to Nigeria, we can begin to follow up on, so that it doesn’t take us meeting again in Norway next year, calling another Nigeria Open Forum, only to discuss the same issues we have discussed all over again. So I’m going to be asking specific questions, and I’m going to start with Madam Uduma. Today we are talking about data policy as a catalyst for economic growth. As the Chairman of West African Internet Governance, Coordinator of West African Internet Governance, are there insights you can share with us of things that have happened in other West African countries and possibly other countries? And you have five minutes to be able to do this, so that we can have an opportunity of taking interventions also from the audience. Are there things we can learn from other West African countries when it comes to policy formulation, particularly in the digital space?

Mary Uduma: Thank you very much. I hope you can hear me. Yes, first of all, I want to stand on the existing protocol that the Honorable Minister and the Perm Sec of our ministry have established, and the Senator as well. I greet you all. And the truth is this: every African country, every West African country, is interested in digital economy, digital transformation, development, security, data, protecting our data, making sure our privacy is respected. And some have gone far, you know, like Benin, where they have now merged their identity. Your identity in Benin, your data identity in Benin, is what you use for your hospital, is what you use for your bank, is what you use for registration, for any identity process you want to carry out in Benin. I just want to use Benin as an example. So they have done that, harmonized their data, harmonized their processes, and they’re also developing their data. I’m not sure their data protection policy is as good as ours, because ours is very, very robust. But the thing is that they harmonized. They harmonized it in such a way that there’s a converging point, whether you are going to the hospital or the bank or going for an election, so your identity is your identity. They’ve harmonized that. That’s one of the policies we have seen happen in West Africa. There’s also this West African Data Protection Initiative that is ongoing, which ECOWAS is facilitating, and so there are some countries in West Africa that have done some work on their data protection, but ours is still better than others. Ghana has done very well, but I don’t think it’s as robust as our own, though, well, we are bigger. We are bigger in number, and so we have a lesson to learn from Ghana. They’ve done very well in their data protection strategy and legislation. Other countries like Sierra Leone have not even started anything, and Senegal is also coming up with something, but not as good as what we are looking at. So for me, in West Africa, Nigeria is still leading, and we should continue to lead. We should also make sure that our policies are progressive ones, our legislation is a progressive one, and we will make sure that we are protected, that our children in particular are protected, that we are safely in the online environment, and that we convert our population to economy. What do I mean by that? That we have the big population that the minister has told us about, over two million, sorry, over 200 million. So there’s nothing that we bring in that will not surpass every other: there’s no bill, there’s no legislation, there’s no policy that we implement that will not surpass all others in our sub-region. We have the private sector. Once upon a time, I met somebody from Cote d’Ivoire. He was selling, is it okay? He wanted to market his products, his cream for women. So I met him in Ethiopia. He said, it’s crazy to be in Nigeria. You can’t meet the demand. So in technology, you can’t meet the demand. In physical products, you can’t meet the demand. In data generation and the data economy, nobody can beat us on it. So I believe that we have the numbers, we have the talent, we have the wherewithal to be able to surpass what we are doing now. So, I mean, if we do it right, right? If we do it right, we attract investment: investment in our infrastructure, investment in everything we do. Thank you.

Engr. Kunle Olorundare: Thank you, Madam. I think what came through from your intervention is that our glass as a country is half full, not half empty. We may not be there yet, but we have made substantial progress. The second part of it is that there are also a few lessons from other countries around us and from Africa as a whole. And thirdly, a number of policies are also being developed around us that we, as a member of ECOWAS and the AU, need to be very much familiar with, so that in fashioning our local policies we also benefit from some of these initiatives. Thank you, Madam. Now, Madam Secretary, I’m coming to you now. One of the challenges of our country is inconsistency in policy formulation. And in this panel, you are the number one bureaucrat. How do we ensure, and you mentioned that in 2021 there was this policy, and in 2022 there was this policy, how do we put in place a framework, some sort of safeguard, to ensure that the policy of 2021 is not upturned in 2023 and a new one put in place in 2025? How do we ensure policy consistency, particularly in a digital age?

Panelist: Okay, thank you very much again, distinguished Senator. That is absolutely a very fair and correct statement, and it’s just not peculiar to Nigeria. Policy inconsistencies, policy somersaults and so on and so forth are the norm in many jurisdictions. One thing that is causing this is very weak institutions. Our governance structure is such that individuals become more powerful than the institutions they lead. And so when you have that kind of a situation, you are faced with a very weak ability for the bureaucrats to guide the politicians, because mostly it’s the politicians, people like you, that are actually calling the shots. And without us developing our political frameworks for politicians and our bureaucrats to be on the same page, we will continue to have these kinds of problems. So I think one of the key things that we can do to address this problem is peer group learning like this. We’re all here, we’ve listened to so many jurisdictions, how they are striving, how they are moving forward. Look at all those countries that are succeeding. One common thing you will find about them is that they have been consistent in their policy development. Now, having said that, I want to acknowledge the fact that in Nigeria today we’ve really come a long way in terms of our consistency. Because as we speak, most of the policies that I mentioned were actually done by previous regimes. And I think that gives us kudos and a thumbs up, realizing that a new administration, when they come in, also continue in this regard.

Engr. Kunle Olorundare: Yes, thank you. Thank you. And it appears peer learning, and then, of course, also continuing to have a multistakeholder approach, so that when we have a very strong multistakeholder approach, including the civil society, it becomes difficult for any government to just change a policy. And of course, we must continue to have engagement between the bureaucracy and those of us who are tenants. The bureaucrats are the landlords, but the politicians are our tenants. Let me now go to NIDA. NIDA, you are at the center of most of our policy implementation. While the ministry is at the forefront of formulating this policy, the agencies under the ministry are those that implement the policy. And in particular, NIDA, you have a role to play. Let me ask you, what are the challenges of policy implementation now? The ministry, the government, formulates policies which you are supposed to implement. And now and then, we see agencies of our government flouting some of those policies, even when sometimes some of the initiatives are supposed to be vetted by you and approved by you. What are the challenges you have in implementing some of the policies, particularly in the digital space?

Niteabai Dominic: Thank you very much, distinguished Senator. And I stand on existing protocols. NIDA is in a unique position because we basically deal with digital technologies either as a tool or as a sector on its own. And because digital technologies permeate every sector, it becomes difficult for you to step into the space of other regulators to manage and handle digital technologies in their space. Remember, most of these core regulators also have their mandates and powers prescribed by existing laws, and so they are entitled to actually carry out those functions which they are actually carrying out right now. Take, for example, the CBN. When they’re dealing with monetary policies, they definitely have to deal with technology as well, because it affects their monetary policies. And so the challenge in actually implementing government policy is the fact that a policy may govern a certain area, and yet there’s another policy that affects that policy from a totally different sector on its own, and these government agencies have legitimate authority to actually implement those structures. So I think one of the greatest things that can actually happen is not only addressing policy somersault and policy change, but also policy coherence, and the opportunity for different regulators to actually have a forum, like you mentioned, the multi-stakeholder model, to engage and discuss and have a common position regarding the application of technology in their various sectors. This will create the kind of coherence that will enable NIDA to go on and implement its functions under its mandate and ensure that government policy actually achieves those goals which it has been developed to achieve. So, just an opportunity for all regulators to collaborate within their mandates and ensure there’s a uniform position whenever we are dealing with technology issues will help NIDA to implement its mandate appropriately. So that’s one of the greatest difficulties, sir.

Engr. Kunle Olorundare: I am getting really excited that we are surfacing some of the issues that may hamper full implementation of our policies and the realisation of our vision. Before I go to NCC, there is a baby in the house, and that baby is the Data Protection Commission. Among all the agencies, it is the most recently established, in 2023-2024. So, Dr. Vincent Olatunji, if you can hear me: you have been at the helm of affairs of a new agency as its pioneer National Commissioner for the last year. What has been your experience? What do we need to do differently? Before you become an octopus like NCC or NITDA, is there anything we need to tweak now in your act, or any experience you want to share with us, in the next three minutes?

Dr. Vincent Olatunji: Thank you very much to the team that set this up, and to our chair of the ICT and Cybersecurity Committee.

Engr. Kunle Olorundare: Please unmute the man.

Dr. Vincent Olatunji: Thank you. Can you hear me now? Yes? Thank you very much, everybody; as I said, thanks to the team that set this up and to our chair of the ICT and Cybersecurity Committee. I am really excited to be joining this afternoon virtually. I would have loved to be there physically, but because of so many things going on here I could not, so I am glad to be able to join virtually. The theme of this forum, around digital policy and capacity development, is very important at this point in time. What I would like to say is that we have never lacked good policies; our major challenges have been implementation, funding, infrastructure and human capital. We have very robust and constructive policies in different sectors, particularly in the digital economy sector, but implementation has been a major challenge. The policy of 2000, the policies of 2003 to 2007, the digital economy policy, now the ministerial blueprint for the sector, the AI strategy that has just been developed and, more importantly, the Data Protection Act of 2023: all these are things government has put in place to ensure we have a robust economy in this country, and they have actually impacted our socio-economic development. Look at the contribution of this sector to our GDP: we are doing over 18 percent now, compared to less than 4 percent twenty years ago. Now, coming to the issue of privacy and data protection: we cannot talk about the digital economy without it, because the foundation of the digital economy is data, and the data of every Nigerian has a right to be protected. Recognizing that, the President signed the Data Protection Act into law in June 2023. To the question of our challenges: a number of issues come up with organizations, whether in the private sector or the public sector, but the number one challenge in Nigeria is awareness. A lot of people do not even know what you are talking about when you mention data protection. Data subjects do not know their rights; data controllers and processors do not appreciate or know their obligations to data subjects, and even when they know, they are still very hesitant about compliance. This is even worse in the public sector, where a lot of them see themselves as gods; that institutional arrogance in government organizations says, we can do whatever we like, and we are not going to obey any of that. That has been the major challenge, but we have worked with a lot of stakeholders to improve compliance. Funding has always been a major challenge for all organizations, whether long established or, like us, just coming up; it has been very challenging, but with the support of different stakeholders we have been able to come this far. Then human capital: this ecosystem is quite new in Nigeria, and a lot of resources are needed to produce qualified professionals and experts. Data controllers and processors are expected to have designated data protection officers, and in Nigeria we have over 500,000 data controllers and processors.
Whereas those who are certified, qualified data protection professionals number fewer than 10,000. So there is a huge gap, which also means there are jobs ready to be taken up, but closing that gap will be a major challenge. Even for the Commission itself, recruiting, training and retaining qualified personnel is a major challenge: in the two years since we started, several of our personnel have left because our remuneration is very poor compared to what is paid in the private sector. But the most important thing is that we are making progress, and now that we have the law, things are different compared to before. Because of this law we have engaged with a lot of international conventions and organizations, and we have built a lot of partnerships. We are happy to have the chairs of the Senate and House committees here with us: in terms of appropriation, they should really assist us to ensure that we are appropriately financed, because without funding we will not be able to do anything. Infrastructure, obviously, is a big item that has to be deployed, alongside awareness creation, investigations, training and so many other things for which we need their support. We have just submitted next year's budget to you, and we believe that with your support we will be better funded next year. We are happy that both of you are professionals from the sector, and we look forward to your continued support in the tech ecosystem; we would also like you to join us in what we are developing in BIM. Thank you.

Engr. Kunle Olorundare: Thank you, National Commissioner. Those you reported as not having complied, well, they are in the holy land now; they will repent here before they go. Thank you, National Commissioner. Now, if the challenge of the Data Protection Commission is about being in its infancy, about learning the ropes, NCC, do you have the challenge of policy obsolescence? You came on board quite some time ago and you have done a very great job. Are your policies and regulations still in tandem with, still taking cognizance of, current realities? At the time NCC was created, you were mostly about mobile telephony; today we have AI, quantum computing and a number of other things that have changed the landscape. Are your policies and regulations still very much in line with current realities, or do you need to tweak them?

Khadijah Sani: Thank you very much, sir. Just as you rightly said, the NCC enjoys a very unique position, and our act in itself is a very robust act; it has served as a reference point not only for Nigeria but for other countries to look to. There have been quite a number of developments over the years from when the act was passed to date, and we acknowledge that; there are of course places in the act that we want to update. But as at this point, I think our act is still robust enough to cater for the changes, because in the first place, apart from the mobile numbers we deal with, the act also covers all electronic addresses, which basically means IP addresses and ASNs. So from that angle, the act covers not only the mobile telephony aspect but the internet aspect as well; it is very robust in that manner. But convergence is inevitable; there is convergence everywhere you look. Our mobile phones are now the points where we access the internet, they are also where we make calls, and in some cases they are also our offices. So there is a convergence in that sense, and as the future unfolds we must also converge in terms of regulation. That is inevitable for the whole industry, not only the NCC. Of course NITDA is here, and other agencies as he has mentioned; we all interact from that point of view, even with data protection, in areas where there is a need, in terms of security, in terms of data protection, and so on. So you can see that there has to be that convergence, that working relationship, between all the agencies under the ministry, especially because, like I said, we are regulating an enabling environment. Yes, thank you.

Engr. Kunle Olorundare: Let me stay with you for a moment before we move on. We have the USPF as an initiative to ensure that underserved communities are covered and that we minimize the digital divide. And yet, looking at UNESCO, they have just come up with the IUI, the Internet Universality Indicators, new indices to measure connectivity and how countries are progressing on the digital divide, and Nigeria does not seem to rank very high on that list. Is our USPF policy working?

Khadijah Sani: Well, the policy itself is working, but as the NDPC has mentioned as well, the challenge always lies with funding. Nigeria is a very large country with a population of over 200 million and a lot of unserved and underserved areas. In fact, even within the cities there are underserved areas: if you look at it from the perspective of, say, fibre rollout, apart from Abuja, Lagos and maybe Port Harcourt, I don't think we have any commercial FTTH services in any of the state capitals. So even from that point of view you can see the challenge that is there for us as Nigerians. The USPF is quite limited in the funding it can put into all these areas, so most of the effort has been concentrated on the rural areas and on the provision of mobile connectivity, which incidentally carries about 98 percent of all internet connectivity in Nigeria. So in that respect the USPF has been trying, but yes, there is a need to do more, and what is holding us back mostly is funding, because the amount of money we have for those projects is not enough to go around all the constituencies. Thank you.

Engr. Kunle Olorundare: Now to the civil society. And by the way, let me announce that there is going to be another side event from the National Assembly, so let me bring in the Honorable; I want to make sure you are aware that you will be speaking. Now that all the agencies have spoken their minds, only the civil society is left. Please join me in welcoming Honorable Stanley to the panel; he provides a broad oversight function over these agencies, and I also have a question for him. Now, I am not speaking only as a legislator today, and I don't want the electorate's voice to be muted, so let me use the power of the microphone to bring you in. You have listened to all of them. You listened to Dr. Vincent Olatunji, who said they have just submitted their budget; I wish you well in getting all of that budget. So, civil society: what do you think your roles are in policy formulation and in holding government agencies accountable for the implementation of announced policies? Use your experience in Nigeria so far to guide us.

Benjamin Akinmoyeje: All right, thank you very much. I think that is a very good question for a civil society and, at the same time, an advocacy group. Let me start by saying that the Internet Society is an advocacy group, and in fact a global one, with chapters in many countries. We have the Nigerian chapter, which I preside over, and we are chartered, in the sense that we have our own board and an executive council. On our roles: what the Internet Society does is to try as much as possible to collaborate with the agencies to see that what is being postulated is brought to fruition. That is one of the things we have been doing. In this particular era we believe that people need to be digitally literate, and we have a lot of initiatives we are pushing in that regard. As a matter of fact, we work with the Nigerian Internet Governance Forum and its Multistakeholder Advisory Group to organize what is known as the Nigerian School on Internet Governance. This year we organized the fifth edition, and it has been acclaimed as the best so far. What we do in the NSIG is bring people up to speed in terms of knowledge: you need to be digitally literate, you need to be aware of what is happening within the internet governance space. Apart from that, another initiative we have been involved in concerns girls, because we believe the girl child is unique and needs special attention. So we key into the United Nations/ITU Girls in ICT Day, which we celebrate every year, and it may interest you to know that this year we collaborated with a non-governmental organization known as the Ndupe Kalu Foundation and trained over 600 girls in ICT. What we are trying to achieve is to ensure that the girl child is also part of this community we are talking about, digitally literate and in the field. That is one of our initiatives. On the issue of how we work with the government agencies: we have been working with them, and one of the things we try to do is make our voice heard. Whenever there is going to be a new policy, we try as much as possible to get into the stakeholder forums, the open forums, so that our voices can be heard. We believe in the rule of law and that things must follow due process; however, everybody needs to be carried along, and for that to happen we come to such forums so that we can give our own opinion. I believe that by doing that, at the end of the day we are going to have very robust and sound policies. You may also be interested to know that we believe the internet is a great equalizer: if you are rich, you use the internet; if you are a senator, you use the internet; if you are not a senator, you use the internet. The opportunity is the same for everybody, and we believe the internet must be everywhere. For it to be everywhere, the digital infrastructure must be well proliferated; the infrastructure must be on the ground, and if it is on the ground, the services too must be on the ground.
And for us, we believe that this 21st century is all about the internet, that is, the digital economy, and that is why we are part of this. And sir, I want to make this call: if peradventure there is a way that, whenever there is going to be a new policy or approval, a channel of communication can be set up so that civil society organizations are informed, we would be glad to be part of your process, because the truth is that we may not hear about everything that is going on. Thank you.

Engr. Kunle Olorundare: We must thank you for your effort as a civil society organization. Policies and regulations are subsidiary to legislation. In the last few days here, the usual refrain from parliaments across the world has been this seeming disconnect between the legislature and the executive branch of government, such that you have situations where countries sign protocols and conventions, but they are not ratified back home because the legislators did not understand the basis or were not part of the process in the first instance. In your view, Honorable Chairman of the House Committee on ICT and Cybersecurity, what do you think we need to do to bridge the gap between the legislature and the rest of the stakeholders, particularly our agencies and the executive branch of government?

Adedeji Stanley-Olajide: Thank you, my respected colleague, Senator Shuaib Afolabi Salisu, and permit me to stand on existing protocol. I am Honorable Adedeji Stanley-Olajide. First things first, thank you for this opportunity to speak. Secondly, it is unfortunate that it takes being in a country like this for us to sit down at a round table, but we will make the best of it. You raised a very important question about the gap. I can tell you there is no bill or act establishing any of the agencies seated here that is going to stand the test of time; all of them are obsolete. I can tell you that for free: the NCC Act, the NITDA Act, even the NDPC Act that we just recently passed has a few things we need to amend, because some people are taking advantage of loopholes in these acts. So how can we bridge this gap? For the first time in the history of the National Assembly, you have industry people chairing the committees in both the Senate and the House, and I don't believe you are taking advantage of it. We are here to serve, to leverage, to synergize with you. But oftentimes I see people being territorial: this is my territory, I don't want anybody on it. And oftentimes we are scrambling for information to help you. If you need me to help you with a better act, or to amend your act, I must know exactly your pain points; but oftentimes you cannot articulate your pain points to the National Assembly. So at the end of the day we end up doing what might not necessarily be 100 percent to your advantage. We will do something, whether you like it or not; the question is whether it is what you really need. I think collaboration, and sharing your performance reviews of where you are, will be helpful to us: what are the challenges you are facing right now? NDPC is struggling right now because NCC and NITDA are not funding them, yet they are the ones that have been charged to fund them. So we are looking for money for NDPC, even though the act was very clear that NCC and NITDA must fund them for two years; at this point they are still struggling. So in a way, we have to work together; the key here is collaboration. And let me also say this, to shake the table a little bit: there are some misplaced priorities in some of our agencies. NCC is a regulatory body, yet sometimes you find them doing things in the space of NITDA, so there are duplications of mandates. The mandate of NITDA is very clear, but sometimes you find NCC playing in that role, and sometimes you also find other agencies outside this ministry performing the roles of NITDA or NCC. So we have to understand these things clearly, even if we have to take out some clauses in the acts of some of these other agencies; that was one of the reasons the President is looking at consolidating some agencies, or setting some aside. Some of these agencies had their budgets cut this year, 2024; I can tell you 50 percent of NCC's revenue was cut, and likewise NITDA. I can ask them: do you know exactly why it was cut? Did any of you ask your National Assembly chairmen to actually fight for you to get that money back? Because they can. We are fighting right now to get that money back, and we will. So we have to work together.
So, to the point of the chairman, what we need to take away from here is that we need to cut down the bureaucracy, drop the barriers, drop our guard, so that we can work together with you to create better laws that will help you move your agencies forward and project our country, Nigeria, into the future. Because for me, I am looking forward to an Africa where we have one single currency, one single passport, one single central bank that governs the entire continent, and where the barriers to doing business across the entire continent are taken down. So thank you very much for your question, and let me stop right there. Thank you.

Engr. Kunle Olorundare: Thank you, my brother, for that intervention. I am extremely delighted that I called on him; I would not have been able to put it together like that. I would be a judge in my own case, but he has eloquently conveyed what I believe represents the views of the legislators. Have you heard about Chatham House before? The Chatham House Rule? If you have heard about it, please raise your hand. Okay. It means this place has now been converted to Chatham House: anything you say, any of your views, will not be used against you. I am extremely delighted that we are having this conversation, and I am going to reserve the last ten minutes for us to agree on action points; I don't want to preside over a session where we just talk and then disappear. I want us to go back home with something we are going to do. So I am going to turn the microphone to the audience, but just so we are on the same page, a few points first. Number one, there are a few things we need to learn from other countries, and there are also a number of initiatives going on that we can benefit from, from the EU, from GIZ, from UNESCO. So the first thing is to recognize that, yes, we are big and we are doing very well, but there are other countries with initiatives we can learn from. Number two, in order to ensure that our policies are sustainable and consistent, we must continuously do peer review within the country and outside the country, and we must adopt a multistakeholder approach. And I am going to extend this multistakeholder approach a little: we also need to involve the sub-nationals in some of the policies we develop; otherwise we develop federal policies and the sub-nationals do whatever they like. So the multistakeholder approach also includes sub-nationals. Number four, there are mandate overlaps between our agencies. As an example, the CBN Act was promulgated at a time when nobody could have said that banking would become technology. Today, banking is no longer banking as it used to be: banking is technology, and technology powers banking. After all, who had heard of what they call FinTech ten years ago? There was no FinTech. Today we talk about FinTech, meaning there is an intersection between the financial industry and the technology industry, and that has brought some friction between those who have the mandate to develop technology and those who have the mandate to regulate banking. So, mandate overlaps. Next, policy incoherence, which I know very well and which is related to the point above: sometimes a policy in banking may conflict with a policy in another sector; the one in health may conflict with the one in transport. Hence the need for regulators to collaborate more regularly. If you are working for the same country towards the same goal, you can sit around one table; rather than agencies and individuals shuttling from one agency to the other, or being called upon separately by different agencies, we can have a unified approach to discharging our mandates. Policy awareness is another issue: a number of people are not even aware that we have a National Data Protection Commission, or of its roles and responsibilities. We need to create more awareness.
Funding will also always be an issue, as has already been said, and not just for the Data Protection Commission but even for NCC; the USPF complained of not having enough money, and I wonder who at this table has money. Then capacity building, which I will extend to include capacity building for the legislators. It is true, I have been in the industry for close to 40 years, and I must recognize Dr. James Olufoye here; we were together in the Computer Association of Nigeria, the Nigeria Computer Society, ITAN, ISPON and a lot of the others, and we were there in the National Assembly to establish NITDA. When most of these agencies were established, I was there as a civil society and private sector player, so I am aware. But beyond Honorable Stanley Olajide, I cannot mention two or three members of his committee who are technology savvy; in my committee as well, beyond me and maybe one or two other people, I cannot say that members fully understand what is being discussed when we talk about artificial intelligence or cybersecurity. So, capacity building: the quality of legislation is directly related to the quality of the awareness of the legislators. Our process in the chamber requires that a bill go through first reading, second reading and third reading; there are so many bills, and you want to get your bill passed, so whatever information you have is what you put in the bill. It goes out, and then civil society says, hey, this National Assembly; but when we were doing the bill, where were you? So capacity building should also be for National Assembly members, and this also goes to the agencies: it should not be a perfunctory thing to just put a line for stakeholder training in your budget simply to fulfil some requirement. We would like to see members of the Senate Committee on ICT and Cybersecurity and members of the House Committee on ICT and Cybersecurity given proper training, and I mean proper training. Some of us may not require training in technology itself, but we need training in technology policy formulation: to know what the EU is doing on artificial intelligence, what is coming up in terms of cybersecurity, to understand the African Continental Free Trade Area, you understand what I want to say, and the African Union Data Policy Framework. So, agencies, put something substantial in your budgets for capacity building, and not the kind you hand to a contractor and then say, okay, yes, we have trained them; that will not fly this year. Infrastructure is also a major issue. Then somebody spoke about silos, about being territorial, and of course the need for a periodic interface around the table. We have canvassed that the digital economy ecosystem should have its own off-site retreat, where all of these agencies come together, from NITDA to NIGCOMSAT to NIPOST to the satellite agencies, because if you want to deliver services to the country, some will be delivered terrestrially, some by satellite, some will require the collaboration of NCC with postal services, and some will require NITDA to begin to look at the budgets. And luckily, PS, you are here; we are having a family meeting now. We have situations where you see agencies say automation this year, software next year.
The following year they will say equipment, and the year after something else, and when you put all of these together they run into billions of dollars. And sometimes you ask: NITDA, are you seeing some of these budget items that go nowhere? NITDA is supposed to be able to vet some of them and say, okay, do this. We see it as legislators now and then; we did an exercise for 2023 and 2024, and all you need to do is run a search and you will see the various budget items coming up. If you buy equipment this year, then next year, where is that equipment before you buy another one? Do we have a National Asset Registry? Okay. My job is not to limit our conversations; it is also to stimulate you. Now we have the Chatham House Rule. Forget that the PS is here; for this purpose I am just an engineer, FNSE, period. Agreed? Forget that NITDA is here; even if you are NITDA staff and you want to make your point, just say it straight to my face, and I will give you parliamentary immunity. So, for the Nigerians who are here, I think there cannot be a better place for us to meet than here; if we were back in Nigeria, there would be too many distractions. The last 45 minutes have been extremely important, and we have 15 minutes to round up. So if you have any intervention that you think can help us, please raise your hand. I will start with Dr. Olufoye.

Audience: Our distinguished Senator, all protocols observed.

Engr. Kunle Olorundare: Olufoye, you know we have known each other for over 40 years now. He will want to deliver another lecture, but you will do it in 90 seconds, and your time starts now. One, two.

Audience: So I stand on the existing protocols. James Olufoye, once again. We have been in this process since 1995, doing advocacy and engaging the stakeholders. There are two points I want to raise. The first one: we have been talking about collaboration and multistakeholder engagement, and the solution is right here, the NetMundial Multistakeholder Guidelines. In April this year in Sao Paulo, stakeholders came together and agreed on the best approach to ensure that all stakeholders come together and discuss meaningfully, and it is in that situation that we can get everyone to buy in. Buy-in is very important, because every stakeholder will then capacitate their communities: sub-nationals, parliamentarians, all the agencies. So we need to do that. I have copies of this, so as many as want to can pick it up; it is summarised here. Secondly, there is something new called the Summit of the Future, talking about the Global Digital Compact. I can tell you for free that it is the fulfilment of the second outcome of the WSIS 2005 agenda: the first one was the IGF, which has been successful; the second one had not been successful until just this September. Now they are talking about how to implement it. I want to recommend, because we don't want new institutions and we don't want to set up any bottlenecks, that we use the existing frameworks. For example, we have the Nigeria IGF; I want to thank our Amazon, Madam Mary, please let us appreciate her for starting that, she pushed it rigorously and I love it. The other thing I want to mention: we don't have a national WSIS forum. So I want to charge the ministry to lead. UNECA leads for Africa, and it is wonderful, just as the United Nations leads globally. Let us put that in place so that we can have a Nigeria WSIS forum; we have never had a WSIS meeting in Nigeria. We need a forum where we discuss these new things without setting up new institutions. That is what we believe in the private sector: we don't believe in setting up new bureaucracy. Thank you very much.

Engr. Kunle Olorundare: Good afternoon, everyone.

Amina Ramallan: My name is Amina Ramallan. I work for the Nigerian Communications Commission, but I want to speak not from the angle of the commission but from a youth perspective in general. At the beginning of the session, the Honorable Pam Sek spoke about the representation of youth; I think he said 65 percent of the population is 35 and below. However, in the majority of the conversation, and in some of the outcomes the Honorable spoke about, I did not hear any action points that have to do with youth. So I just want to add my two cents: 65 percent is a very high number, and for us to get to where we want to be with the digital economy, we need to integrate and amplify youth voices, not just in decision-making processes. Governance is going younger; when we talk about youth today, we are talking about people in their twenties and thirties in governance, making decisions and making a difference. We also need to look at integrating digital literacy into school curriculums. And the last point I want to make is that we need to look at Nigeria-specific solutions for Nigerian problems: some of the issues we have can only be solved in-house, not just by borrowing best practices from out there. Thank you.

Engr. Kunle Olorundare: Thank you very much for your time, and I look forward to hearing from you in the next few days. Thank you very much.

Audience: Distinguished, you have already categorized me. Okay, good afternoon, or rather good evening. I stand on existing protocols. I am a member of the board of trustees and the director of an NGO, a non-governmental organization. We have been talking about policies. For me, we do not lack policies; I think we have a great repository of them. In fact, if we need any policy now, we can get one. But what mechanisms are we actually putting in place to evaluate the policies we have already put out? Are they actually meeting our goals and expectations? We have a lot of policies that were put in place by the previous government; how far have they gone this year? Blockchain technology is still novel, a lot of people still do not understand it, and we have a policy there that is almost slipping away. And even this year, I think there was a directive from the NCC, through the ISPs, that actually blocked a lot of things. The second point I would like to raise is for the commissioner joining online, on data privacy: anything we are talking about in this room, if it does not reach the common man, does not make any sense. How far are we going with sensitization on data breaches within the country? It is not just about those of us in this room; we will have other opportunities to talk about these things, but the common man must benefit from whatever we are doing. Thank you.

Engr. Kunle Olorundare: Okay, thank you. We have seven minutes to go, but I don't want to shut out the youth: she has been here, and then the last one over there. Please make it very brief. Thank you so much; I'll buy you dinner.

Aisha Bashir Tukur: Good evening, everyone. Standing on existing protocols. My challenge is with...

Engr. Kunle Olorundare: Your name, please?

Aisha Bashir Tukur: Sorry, my name is Aisha Bashir Tukur. I am from the Federal Inland Revenue Service, but I am actually here for my startup. The world is becoming smarter every day, whether we like it or not, and children and everyone else are exposed to the internet. What are we doing on cybersecurity policies? We have cyberbullying; we have young children, age 13 and below, doing so much on the internet. How do we harness their talents into something reasonable from an early age? This is my biggest concern.

Engr. Kunle Olorundare: Thank you, Aisha. Cybersecurity, cyberbullying, cyber policy. And the last one.

Audience: Good evening, everyone. My name is Sinhwe Zamobilo. I am a senior program officer with Paradigm Initiative, a civil society organization with headquarters in Nigeria but an Africa-wide focus. Because of time, my intervention comes from the statement about inconsistency: it did not sit well with me when the speaker said it is like a norm. We want Nigeria's case to be different; let us see it as a challenge, that even if it is a norm elsewhere, policy inconsistency should not be a norm for us in Nigeria. Then, on what Dr. Olufoye said about us meeting over this: I do not feel good that we are having this conversation outside the soil of Nigeria. I am making a proposition that when we go home, since, as has been said, the ministry has a repository of resources, human resources who play major roles on the global stage, they should come home and do those things at home. So I look forward to hearing that we are calling for a WSIS forum. We have already started internally as an organization: we are among the people making input into the process at UNECA and the one at the UN, but we are not hearing anything in Nigeria, so we look forward to that. And again, I am making this call: I want to appreciate the NDPC commissioner for his openness and ease of access. And when our honorable member said that they scramble for information to make laws concerning us, that did not sound well either, because we make efforts to reach the policymakers so that these things can be attended to. I am making a request for ease of access to the legislators, the ministries and the agencies, so that we can move Nigeria forward. Thank you so much.

Engr. Kunle Olorundare: That sounded good to me, at least; even if everything did not sound good to you, something sounded good to me. So we are going to wrap up, but before we do, oh, there is a question online. What is the question? Can somebody read it? Do you have the question? Okay, in order to save time, you know what, we are going to have one more minute each: you can respond to any of the questions, and then give your closing remarks. Let me start with the PS.

Panelist: OK, thank you very much, Distinguished. For lack of time, quickly: I want to agree with Honorable Stanley. We have never had it this good as a sector, having engaged with the two chairmen and the other legislators; I cannot imagine us having this quality of people, with such an understanding of the sector, and not taking advantage of it. So I totally agree with that point. Secondly, the issue raised by the distinguished Senator with regard to capacity building for the legislators: I think it is absolutely important, and beyond the legislators, even the judiciary, especially on the aspect of AI. I can assure you that we are already thinking about something to do there quickly. Now, coming back to the question by Dr. Olufoye: I agree that we need to create platforms where policy, advocacy and stakeholder engagement can be done. To the lady who spoke on digital literacy: I think we are doing a lot, honestly. There is the Three Million Technical Talent programme going on, and lately, jointly with NITDA and the NYSC, we were able to launch Digital Literacy for All in Nigeria; I was there personally. So there is really quite a lot, and different agencies of government at all levels are doing a lot of training. We can do much better, but a lot is being done. I also agree with the gentleman who spoke about policy implementation and evaluation; we can do much better there too, but as I said earlier, it is the issue of institutions: we need strong institutions to be able to do that. Finally, on the issue of cybersecurity that was raised: we already have a Cybersecurity Act, there is the Act of 2020, and there is a very new one of 2023 or so. So quite a few things are in place; the problem is the lack of awareness, so people are missing a lot in terms of getting things done. I think we can continue this conversation at a wider level. Thank you.

Niteabai Dominic: Thank you. In short, my contribution is very simple: when we talk about engagement, let us truly engage. I know there is a trust deficit with government; this is sincerely from the bottom of my heart. Civil society says let's engage, but each time we call you, you don't show up, whereas we show up at your events; it is a deliberate policy of ours that any time civil society holds an event, we attend. But when we invite you, you don't come. So let's be sincere with ourselves: if we want to engage, let's engage and not just talk about it. We will meet at the forum. Thank you, sir.

Benjamin Akinmoyeje: The youth, the old, the learned ones... interestingly for us, we have all professionals as members of the Internet Society. So if you have the Internet Society participating in a particular event, that means the youth, the old, the professionals, the engineers…

Engr. Kunle Olorundare: Thank you, thank you. Dr. Vincent, you have a few seconds to say something… Dr. Vincent?

Dr. Vincent Olatunji: Just to say that what is really important for us moving forward is collaboration: we should all work together and pursue the same vision. Our policies are very robust; let us put in place a proper decision-making framework, a proper…

Engr. Kunle Olorundare: Thank you. Madam Uduma?

Mary Uduma: Thank you. The NIGF is there as a platform; let us converge there. Quarterly meetings? Carried. Online participation? You can participate online. Carried. So go back to the PAMSEC and get that straightened out, so that we can work on WSIS and work on the Global Digital Compact.

Adedeji Stanley-Olajide: Yes, just one quick thing. We at the National Assembly are also going to take a lead role in bringing everybody together. Before everybody leaves here today, leave your number; we are going to set up a WhatsApp line. We are also going to work with the Ministry. We are going to drive this, and we are all going to…

Engr. Kunle Olorundare: Thank you all for being here, thank you for participating, and thank you for your time. It has been such an interesting engagement, and we will be working with the PAMSEC to ensure that we act on some of these items when we are back home. God bless you, God bless the Federal Republic of Nigeria, and safe trips back home. For those who want to see me, please hold on. And please, Sam, down below; let's take a picture.

Panelist

Speech speed: 106 words per minute
Speech length: 1686 words
Speech time: 953 seconds

Need for policy coherence and collaboration between agencies

Explanation

The speaker emphasizes the importance of policy coherence and collaboration between different government agencies. This is crucial for effective implementation of digital policies and to avoid duplication of efforts.

Evidence

The speaker mentions the example of NCC sometimes performing roles that should be done by NITDA, leading to duplication of mandates.

Major Discussion Point

Policy Development and Implementation

Agreed with

Engr. Kunle Olorundare

Unknown speaker

Agreed on

Need for multistakeholder approach in policy development

Large youth population as opportunity for digital economy growth

Explanation

The speaker highlights Nigeria’s large youth population as a significant opportunity for digital economy growth. With over 65% of the population under 35, Nigeria is well-positioned to lead in digital transformation.

Evidence

The speaker mentions that over 65% of Nigeria’s population is currently under the age of 35.

Major Discussion Point

Youth Engagement in Digital Economy

Progress in internet access but need for more infrastructure investment

Explanation

The speaker acknowledges progress in internet access in Nigeria but emphasizes the need for further infrastructure investment. This is crucial for expanding digital access and supporting the growing digital economy.

Evidence

The speaker mentions that internet access is almost 75%, which means almost 150 million people have access to the internet in Nigeria.

Major Discussion Point

Digital Infrastructure and Access

Engr. Kunle Olorundare

Speech speed: 141 words per minute
Speech length: 3430 words
Speech time: 1451 seconds

Importance of multistakeholder approach including sub-nationals

Explanation

The speaker emphasizes the need for a multistakeholder approach in policy development and implementation, including involvement of sub-national entities. This approach ensures more comprehensive and effective policies.

Evidence

The speaker suggests that without involving sub-nationals, federal policies may not be effectively implemented at the state level.

Major Discussion Point

Policy Development and Implementation

Agreed with

Panelist

Unknown speaker

Agreed on

Need for multistakeholder approach in policy development

Niteabai Dominic

Speech speed: 132 words per minute
Speech length: 425 words
Speech time: 192 seconds

Challenges in implementing policies due to overlapping mandates

Explanation

The speaker highlights the difficulties in implementing policies due to overlapping mandates between different agencies. This overlap can lead to confusion and inefficiency in policy implementation.

Evidence

The speaker mentions the example of financial technology (FinTech) which falls under both banking and technology sectors, creating potential conflicts between regulatory bodies.

Major Discussion Point

Policy Development and Implementation

Differed with

Unknown speaker

Differed on

Policy implementation challenges

Unknown speaker

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Need for stronger institutions to support policy implementation

Explanation

The speaker emphasizes the importance of strong institutions in supporting effective policy implementation. Weak institutions can lead to policy inconsistencies and implementation challenges.

Major Discussion Point

Policy Development and Implementation

Agreed with

Engr. Kunle Olorundare

Panelist

Agreed on

Need for multistakeholder approach in policy development

Differed with

Niteabai Dominic

Differed on

Policy implementation challenges

Call for easier access to legislators by stakeholders

Explanation

The speaker calls for improved access to legislators by various stakeholders in the digital sector. This access is important for ensuring that legislators have the necessary information to make informed decisions.

Major Discussion Point

Legislative Engagement

Audience

Speech speed: 159 words per minute
Speech length: 970 words
Speech time: 365 seconds

Importance of evaluating existing policies

Explanation

The speaker stresses the need for mechanisms to evaluate existing policies. This evaluation is crucial to determine if policies are meeting their intended goals and expectations.

Evidence

The speaker mentions the example of blockchain technology policy, which is still not well understood despite its implementation.

Major Discussion Point

Policy Development and Implementation

Khadijah Sani

Speech speed: 134 words per minute
Speech length: 591 words
Speech time: 263 seconds

Funding challenges for Universal Service Provision Fund

Explanation

The speaker highlights the funding challenges faced by the Universal Service Provision Fund (USPF). These challenges limit the fund’s ability to address digital divide issues effectively.

Evidence

The speaker mentions that Nigeria is a large country with over 200 million people and many unserved and underserved areas, which requires significant funding to address.

Major Discussion Point

Digital Infrastructure and Access

Amina Ramallan

Speech speed: 151 words per minute
Speech length: 258 words
Speech time: 102 seconds

Need for digital literacy initiatives, especially for youth

Explanation

The speaker emphasizes the importance of digital literacy initiatives, particularly for youth. This is crucial for preparing the large youth population to participate effectively in the digital economy.

Major Discussion Point

Digital Infrastructure and Access

Agreed with

Dr. Vincent Olatunji

Adedeji Stanley-Olajide

Agreed on

Importance of capacity building and awareness

Need to integrate youth voices in decision-making processes

Explanation

The speaker calls for greater integration of youth voices in decision-making processes related to digital policies. This is important given the large youth population in Nigeria and their role in the digital economy.

Evidence

The speaker mentions that 65% of Nigeria’s population is 35 and below, emphasizing the importance of youth representation.

Major Discussion Point

Youth Engagement in Digital Economy

Importance of digital literacy in school curriculums

Explanation

The speaker stresses the need to integrate digital literacy into school curriculums. This would help prepare young people for the digital economy from an early age.

Major Discussion Point

Youth Engagement in Digital Economy

Need for Nigeria-specific solutions to digital challenges

Explanation

The speaker emphasizes the importance of developing Nigeria-specific solutions to digital challenges. This approach ensures that solutions are tailored to the unique context and needs of Nigeria.

Major Discussion Point

International Cooperation and Best Practices

Dr. Vincent Olatunji

Speech speed: 148 words per minute
Speech length: 1036 words
Speech time: 419 seconds

Establishment of new Data Protection Commission

Explanation

The speaker discusses the recent establishment of a new Data Protection Commission in Nigeria. This commission is tasked with implementing and enforcing data protection regulations.

Evidence

The speaker mentions that the Data Protection Commission was established in 2023-2024.

Major Discussion Point

Data Protection and Cybersecurity

Need for awareness on data protection rights and obligations

Explanation

The speaker highlights the need for greater awareness about data protection rights and obligations. This includes educating both data subjects about their rights and data controllers about their responsibilities.

Evidence

The speaker mentions that many people, including data subjects and data controllers, are not aware of their rights and obligations regarding data protection.

Major Discussion Point

Data Protection and Cybersecurity

Agreed with

Adedeji Stanley-Olajide

Amina Ramallan

Agreed on

Importance of capacity building and awareness

Aishat Bashir Tukur

Speech speed: 136 words per minute
Speech length: 92 words
Speech time: 40 seconds

Importance of cybersecurity policies, especially for youth

Explanation

The speaker emphasizes the importance of cybersecurity policies, particularly for protecting young people online. This includes addressing issues like cyber bullying and ensuring safe internet use for children.

Evidence

The speaker mentions the exposure of children as young as 13 to various online risks.

Major Discussion Point

Data Protection and Cybersecurity

Adedeji Stanley-Olajide

Speech speed: 137 words per minute
Speech length: 827 words
Speech time: 359 seconds

Need for capacity building of legislators on technology issues

Explanation

The speaker highlights the importance of capacity building for legislators on technology issues. This is crucial for effective lawmaking and oversight in the rapidly evolving digital sector.

Evidence

The speaker mentions that beyond a few members, most legislators may not fully understand complex technology issues like artificial intelligence or cybersecurity.

Major Discussion Point

Legislative Engagement

Agreed with

Dr. Vincent Olatunji

Amina Ramallan

Agreed on

Importance of capacity building and awareness

Importance of collaboration between agencies and legislators

Explanation

The speaker emphasizes the need for closer collaboration between government agencies and legislators. This collaboration is essential for developing effective laws and policies in the digital sector.

Evidence

The speaker suggests creating a WhatsApp group to facilitate communication and collaboration between stakeholders.

Major Discussion Point

Legislative Engagement

Mary Uduma

Speech speed: 128 words per minute
Speech length: 687 words
Speech time: 321 seconds

Learning from other West African countries’ digital initiatives

Explanation

The speaker suggests learning from digital initiatives in other West African countries. This can provide valuable insights and best practices for Nigeria’s digital policy development.

Evidence

The speaker mentions Benin’s harmonized digital identity system as an example of a successful initiative in the region.

Major Discussion Point

International Cooperation and Best Practices

Benjamin Akinmoyeje

Speech speed: 173 words per minute
Speech length: 822 words
Speech time: 284 seconds

Importance of participating in global internet governance forums

Explanation

The speaker emphasizes the importance of Nigeria’s participation in global internet governance forums. This participation can help shape international policies and ensure Nigeria’s interests are represented.

Evidence

The speaker mentions the Internet Society’s involvement in various global internet governance initiatives.

Major Discussion Point

International Cooperation and Best Practices

Agreements

Agreement Points

Need for multistakeholder approach in policy development

Engr. Kunle Olorundare

Panelist

Unknown speaker

Importance of multistakeholder approach including sub-nationals

Need for policy coherence and collaboration between agencies

Need for stronger institutions to support policy implementation

Multiple speakers emphasized the importance of involving various stakeholders, including sub-national entities and different agencies, in policy development and implementation to ensure effectiveness and coherence.

Importance of capacity building and awareness

Dr. Vincent Olatunji

Adedeji Stanley-Olajide

Amina Ramallan

Need for awareness on data protection rights and obligations

Need for capacity building of legislators on technology issues

Need for digital literacy initiatives, especially for youth

Several speakers highlighted the need for capacity building and awareness initiatives across different stakeholder groups, including legislators, youth, and the general public, to enhance understanding of digital issues and policies.

Similar Viewpoints

Both speakers acknowledged progress in internet access but emphasized the need for further investment in digital infrastructure, highlighting funding challenges as a key obstacle.

Panelist

Khadijah Sani

Progress in internet access but need for more infrastructure investment

Funding challenges for Universal Service Provision Fund

Both speakers emphasized the importance of addressing youth-specific concerns in digital policies, including their representation in decision-making and their safety online.

Amina Ramallan

Aishat Bashir Tukur

Need to integrate youth voices in decision-making processes

Importance of cybersecurity policies, especially for youth

Unexpected Consensus

Consistency in policy implementation across administrations

Panelist

Adedeji Stanley-Olajide

Need for policy coherence and collaboration between agencies

Importance of collaboration between agencies and legislators

Despite representing different branches of government, both speakers agreed on the need for consistency and collaboration in policy implementation, which is unexpected given typical tensions between executive agencies and legislators.

Overall Assessment

Summary

The main areas of agreement included the need for a multistakeholder approach in policy development, the importance of capacity building and awareness initiatives, the need for further investment in digital infrastructure, and the importance of addressing youth-specific concerns in digital policies.

Consensus level

There was a moderate level of consensus among the speakers on key issues, particularly on the need for collaboration and capacity building. This consensus suggests a shared understanding of the challenges facing Nigeria’s digital economy and could potentially lead to more coordinated efforts in policy development and implementation. However, some differences in perspective and emphasis were also evident, particularly regarding specific implementation strategies and priorities.

Differences

Different Viewpoints

Policy implementation challenges

Speakers: Niteabai Dominic, Unknown speaker

Arguments: Challenges in implementing policies due to overlapping mandates; Need for stronger institutions to support policy implementation

While Niteabai Dominic focuses on overlapping mandates as a key challenge in policy implementation, the unknown speaker emphasizes the need for stronger institutions. This suggests a difference in perspective on the root cause of implementation difficulties.

Unexpected Differences

Approach to policy development

Speakers: Audience, Panelist

Arguments: Importance of evaluating existing policies; Need for policy coherence and collaboration between agencies

While one might expect agreement on the need for both policy evaluation and coherence, the speakers unexpectedly focus on different aspects of policy development. The audience member emphasizes evaluation of existing policies, while the Panelist focuses on coherence and collaboration in developing new policies.

Overall Assessment

Summary

The main areas of disagreement revolve around policy implementation challenges, approaches to inclusive decision-making, and priorities in infrastructure development and policy formulation.

Difference level

The level of disagreement among speakers appears to be moderate. While there are differences in focus and approach, there seems to be a general consensus on the importance of improving digital policies and infrastructure. These differences in perspective could potentially lead to more comprehensive and nuanced policy development if properly addressed and integrated.

Partial Agreements

Both speakers agree on the importance of inclusive decision-making, but while Engr. Kunle Olorundare emphasizes including sub-national entities, Amina Ramallan focuses specifically on integrating youth voices.

Speakers: Engr. Kunle Olorundare, Amina Ramallan

Arguments: Importance of multistakeholder approach including sub-nationals; Need to integrate youth voices in decision-making processes

Both speakers acknowledge the need for infrastructure investment, but they approach it from different angles. The Panelist focuses on overall progress and need, while Khadijah Sani specifically highlights funding challenges for the Universal Service Provision Fund.

Speakers: Panelist, Khadijah Sani

Arguments: Progress in internet access but need for more infrastructure investment; Funding challenges for Universal Service Provision Fund

Takeaways

Key Takeaways

There is a need for greater policy coherence and collaboration between agencies in Nigeria’s digital sector

A multistakeholder approach, including sub-national entities, is crucial for effective policy development and implementation

Nigeria has made progress in digital access and infrastructure but still faces challenges in funding and implementation

Data protection and cybersecurity are growing concerns that require more awareness and robust policies

There is a significant opportunity to leverage Nigeria’s large youth population for digital economy growth

Capacity building for legislators on technology issues is essential for effective lawmaking in the digital sector

Resolutions and Action Items

Establish a quarterly meeting for stakeholders in the digital sector

Create a WhatsApp group for continued communication between stakeholders

The National Assembly to take a lead role in bringing stakeholders together

Explore organizing a Nigeria WSIS (World Summit on the Information Society) forum

Integrate digital literacy into school curriculums

Provide capacity building for legislators on technology issues

Unresolved Issues

How to effectively address overlapping mandates between agencies

Specific strategies for increasing funding for digital infrastructure projects

Detailed plans for improving cybersecurity, especially for youth

How to practically integrate youth voices into decision-making processes

Specific methods to evaluate and update existing policies

Suggested Compromises

Agencies to be more open to collaboration and sharing information with legislators

Civil society organizations to make greater efforts to attend government-organized events

Balancing Nigeria-specific solutions with learning from international best practices

Finding ways to harmonize different agency mandates without creating new bureaucracies

Thought Provoking Comments

We never land good policies anywhere. Our major challenges have been the implementation, funding, infrastructure, and labor market.

speaker

Dr. Vincent Olatunji

reason

This comment cuts to the heart of Nigeria’s policy challenges, shifting focus from policy creation to implementation.

impact

It redirected the conversation towards practical challenges and solutions rather than just policy formulation.

For the first time in the history of the National Assembly, you have industry people chairing the committees in both the Senate and the House. And I don’t believe that you’re taking advantage of it.

speaker

Adedeji Stanley-Olajide

reason

This highlights a unique opportunity for collaboration between legislators and industry experts that is currently being underutilized.

impact

It sparked discussion about improving collaboration between government agencies and legislators to create more effective policies and laws.

65% is actually a very high number. And for us to be able to get to the point that we want for a digital economy, we need to urge ourselves to integrate youth voices, to amplify youth voices, not just in decision making processes, but governance is going younger now.

speaker

Amina Ramallan

reason

This comment brought attention to the demographic reality of Nigeria and the need to include youth perspectives in policymaking.

impact

It shifted the conversation to consider the role of youth in Nigeria’s digital future and policy-making processes.

How far are we going with sensitization on data breaches within the country? So it’s not just about we here in this room. We’ll have other opportunities to talk about these things. The common man must benefit from whatever we are doing.

speaker

Audience member

reason

This comment emphasized the importance of making policies and their benefits accessible to the general public.

impact

It broadened the discussion to include considerations of public awareness and the practical impact of policies on ordinary citizens.

Overall Assessment

These key comments shaped the discussion by shifting focus from theoretical policy formulation to practical implementation challenges, highlighting the need for better collaboration between industry experts and legislators, emphasizing the importance of including youth perspectives, and stressing the need to make policies accessible and beneficial to the general public. The discussion evolved from a high-level policy talk to a more grounded conversation about real-world impacts and inclusive policy-making processes.

Follow-up Questions

How can we involve parliamentarians more in the Internet Governance Forum process?

speaker

Dr. D. S. Wariowei

explanation

Involving parliamentarians could lead to better representation and guidance in future IGF meetings.

How can we improve funding for the Universal Service Provision Fund (USPF) to better address digital divide issues?

speaker

Khadijah Sani

explanation

Increased funding could help expand connectivity to underserved areas and improve Nigeria’s ranking on digital divide indices.

How can we address the issue of mandate overlaps between agencies in the digital space?

speaker

Adedeji Stanley-Olajide

explanation

Resolving mandate overlaps could improve efficiency and reduce conflicts between agencies.

How can we improve policy coherence across different sectors (e.g., banking, health, transport) in relation to digital technologies?

speaker

Engr. Kunle Olorundare

explanation

Better policy coherence could lead to more effective implementation of digital initiatives across sectors.

How can we enhance capacity building for legislators to improve their understanding of technology and digital policy issues?

speaker

Engr. Kunle Olorundare

explanation

Improved knowledge among legislators could lead to better-informed policy-making and legislation.

How can we better integrate youth voices in decision-making processes related to digital policy?

speaker

Amina Ramallan

explanation

Given that 65% of Nigeria’s population is under 35, including youth perspectives is crucial for effective digital policy.

How can we develop more Nigeria-specific solutions for digital challenges?

speaker

Amina Ramallan

explanation

Tailored solutions could be more effective in addressing Nigeria’s unique digital ecosystem challenges.

What mechanisms can we put in place to better evaluate the effectiveness of existing policies?

speaker

Unnamed audience member

explanation

Regular evaluation could help improve policy implementation and outcomes.

How can we improve public sensitization on data privacy and breaches?

speaker

Unnamed audience member

explanation

Increased public awareness could lead to better data protection practices among citizens.

What policies can we develop to address cybersecurity and cyber bullying, especially for young internet users?

speaker

Aishat Bashir Tukur

explanation

Protecting young users and harnessing their talents safely is crucial as internet usage grows among youth.

How can we organize a Nigeria-specific WSIS (World Summit on the Information Society) forum?

speaker

Dr. Jimson Olufuye

explanation

A local WSIS forum could help address Nigeria-specific digital issues and align with global digital initiatives.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #172 Regulating AI and Emerging Risks for Children’s Rights

WS #172 Regulating AI and Emerging Risks for Children’s Rights

Session at a Glance

Summary

This discussion focused on the impact of artificial intelligence (AI) on children and the need for regulation to protect children’s rights in the digital environment. Participants highlighted how AI is pervasive in children’s lives, often without their awareness, and can pose risks such as data exploitation, privacy violations, and exposure to harmful content. Research shows that many AI systems are not designed with children’s best interests in mind, despite children being a significant user base.

The discussion emphasized the importance of developing global standards and regulations for AI that prioritize children’s rights and safety. The EU’s AI Act was cited as a step in the right direction, though challenges remain in its implementation and enforcement. Participants stressed the need for technical standards and frameworks to guide the responsible development and deployment of AI systems affecting children.

Youth perspectives were prominently featured, with concerns raised about AI’s impact on education, creativity, privacy, and mental health. The discussion underscored the importance of involving children in the development of AI policies and regulations. Participants called for increased awareness and education for families and children about AI risks and safeguards.

The conversation concluded with a call to action for policymakers, tech companies, and society at large to ensure AI systems are designed and governed with children’s rights and well-being at the forefront. The upcoming AI code for children was highlighted as a potential blueprint for addressing these concerns and implementing practical safeguards for children in the AI landscape.

Keypoints

Major discussion points:

– The impact of AI on children’s rights, privacy, and wellbeing

– The need for AI regulation and standards that specifically consider children

– The importance of designing AI systems with children’s needs in mind from the start

– Challenges in implementing “safety by design” principles for AI that impacts children

– The role of families, education, and public awareness in protecting children from AI risks

The overall purpose of the discussion was to examine the impacts of AI on children and explore policy, regulatory, and technical solutions to protect children’s rights and wellbeing as AI systems become more prevalent. The discussion aimed to provide input for the upcoming AI Action Summit in Paris.

The tone of the discussion was largely serious and concerned about the risks AI poses to children, but also cautiously optimistic about the potential to develop safeguards and standards. There was some frustration expressed that known issues around children’s online safety have not been adequately addressed as AI has developed. The tone became more solution-oriented and forward-looking towards the end, focusing on upcoming regulations and standards that could help protect children.

Speakers

– Leanda Barrington-Leach: Moderator, representative of Five Rights Foundation

– Nidhi Ramesh: Five Rights Youth Ambassador, 16 years old, from Malaysia

– Jun Zhao: Senior researcher in the Department of Computer Science at Oxford University, leads the Oxford Child-Centered AI Design Lab

– Brando Benifei: Member of the European Parliament, co-rapporteur of the AI Act, co-chair of the child rights intergroup

– Ansgar Koene: AI ethics and public policy regulatory lead at Ernst & Young, trustee of Five Rights Foundation

– Baroness Beeban Kidron: Chair of Five Rights Foundation, member of the House of Lords in the UK, architect of the age-appropriate design code

Additional speakers:

– Peter Zanga Jackson: Regulator from Liberia

– Jutta Croll: German Digital Opportunities Foundation

– Lena Slachmuijlder: Affiliation not specified

– Dorothy Gordon: From UNESCO (mentioned in a question)

Full session report

The Impact of AI on Children: A Comprehensive Discussion

This summary provides an in-depth overview of a discussion focused on the impact of artificial intelligence (AI) on children and the need for regulation to protect children’s rights in the digital environment. The conversation brought together experts from various fields, including youth representation, academia, policy-making, and industry. The session, which experienced some technical difficulties, served as a preparatory event for the AI Action Summit in Paris.

1. AI’s Pervasive Influence on Children’s Lives

The discussion opened with a stark realisation: AI is ubiquitous in children’s lives, often operating without their awareness. Nidhi Ramesh, a 16-year-old Youth Ambassador, highlighted that many children don’t realise most of their online interactions are mediated by AI algorithms, which make choices, recommendations, and even decisions for them. This lack of awareness raises critical questions about informed consent and digital literacy among young users.

Dr Jun Zhao from Oxford University provided empirical evidence, noting that a recent UK survey showed children are twice as likely to adopt new AI technologies compared to adults. This rapid adoption underscores the urgency of addressing potential risks associated with AI use among children.

2. Risks and Challenges

The speakers unanimously agreed that AI poses significant risks to children’s privacy, safety, and well-being. These risks include:

a) Data Exploitation: AI systems can collect sensitive data from children without proper safeguards, as pointed out by Dr Zhao.

b) Privacy Violations: The pervasive nature of AI raises concerns about children’s privacy rights.

c) Exposure to Harmful Content: AI chatbots and recommendation systems can inadvertently expose children to inappropriate content.

d) Mental Health Impacts: The psychological effects of AI, particularly systems designed for companionship, were highlighted as an area of concern.

e) Educational Risks: Nidhi Ramesh raised thought-provoking questions about AI’s impact on learning, noting that while AI can make homework quicker, it risks compromising essential learning skills, creativity, and critical thinking abilities.

f) Amplification of Existing Harms: Leanda Barrington-Leach emphasised that AI can exacerbate existing systemic problems affecting children.

3. Regulatory Landscape and Challenges

The discussion highlighted the evolving regulatory landscape surrounding AI and children’s rights:

a) EU AI Act: Brando Benifei, Member of the European Parliament, noted that while the EU AI Act includes some provisions to protect children, these were not initially present and had to be introduced through amendments. This revelation underscores the importance of vigilance in ensuring children’s rights are protected in AI regulations.

b) Technical Standards: Ansgar Koene from Ernst & Young pointed out that technical standards are still being developed to operationalise AI regulations effectively, particularly for the AI Act.

c) AI Code for Children: Baroness Beeban Kidron mentioned the development of an AI code for children by the Five Rights Foundation. This code aims to provide practical guidance on designing AI systems with children’s rights in mind and is expected to be launched at the Paris summit. It targets policymakers, regulators, and AI developers.

d) Global Cooperation: Benifei stressed the need for global dialogue and cooperation to build common frameworks for protecting children in AI systems.

e) Global Digital Compact: Baroness Kidron highlighted the relevance of the Global Digital Compact to AI and children’s rights, emphasising its potential impact on global governance of digital technologies.

4. Designing AI Systems with Children in Mind

The discussion emphasised the importance of integrating children’s needs and rights into AI development from the outset:

a) Safety by Design: Dr Zhao advocated for incorporating safety by design principles into AI development, noting that some AI companies are already embracing this approach.

b) Organisational Awareness: Koene highlighted that many organisations lack awareness of how their AI systems impact children, suggesting a need for greater education and expertise within the industry.

c) Expert Involvement: The importance of involving subject matter experts on children’s impacts in AI development was stressed.

d) Ethical Considerations: Barrington-Leach argued that AI should not be used to experiment on children, emphasising the need for ethical guidelines in AI development and deployment.

5. Role of Education and Awareness

The discussion touched upon the crucial role of education in protecting children from AI risks:

a) Family Involvement: Peter Zanga Jackson, a regulator from Liberia, highlighted the role of families in educating children about AI.

b) School Curriculum: The need to integrate AI awareness into school curricula was discussed.

c) Public Awareness: Speakers agreed on the importance of increasing public awareness about AI’s impact on children. Koene emphasised the need for public sector support in educating the general population about AI risks.

6. Consumer Rights and Advocacy

Koene pointed out the potential role of consumer rights organisations in advocating for safer AI products and pressuring tech companies to respect children’s rights.

7. Unresolved Issues and Future Directions

The discussion identified several unresolved issues and areas for future focus:

a) Enforcement of Regulations: Questions remain about how to effectively enforce AI regulations and standards across different jurisdictions.

b) Balancing Innovation and Protection: Finding the right balance between fostering AI innovation and protecting children from potential harms remains a challenge.

c) Prioritising Children’s Rights: Ensuring AI companies prioritise children’s rights and safety over profit motives was identified as an ongoing concern.

d) Addressing Subtle Risks: Dr Zhao highlighted the complexity of AI risks and the need for better awareness and translation of policies into practical guidance.

Conclusion

The discussion concluded with a call to action for policymakers, tech companies, and society at large to ensure AI systems are designed and governed with children’s rights and well-being at the forefront. The upcoming AI code for children was highlighted as a potential blueprint for addressing these concerns and implementing practical safeguards for children in the AI landscape.

The conversation demonstrated a high level of consensus on the main issues, with speakers from various backgrounds sharing similar concerns and proposed solutions. This strong agreement implies a clear direction for future policy-making and research in the field of AI governance for children’s protection. However, the discussion also revealed the complexity of the challenges ahead and the need for continued dialogue, research, and collaborative action to ensure a safe and beneficial AI environment for children.

Notable initiatives mentioned include the child rights intergroup in the European Parliament, which Brando Benifei highlighted as an important forum for addressing these issues. The discussion underscored the importance of translating high-level policies and principles into practical, implementable guidelines for AI developers and policymakers to effectively protect children’s rights in the rapidly evolving AI landscape.

Session Transcript

Leanda Barrington-Leach: who we are, Five Rights Foundation, so I was saying that we do research, we develop policy and technical frameworks also to ensure that digital systems are designed to deliver for children and notably with children’s rights in mind, children being all under 18s around the world. As part of this work for the General Comment 25, which sets out how the Convention on the Rights of the Child applies to the digital environment, we worked very closely with a number of governments around the world to develop new policy and regulatory frameworks, in particular the age-appropriate design code, which, if you haven’t heard about it, we can tell you more about later. The reason we are doing this is that obviously we have seen in our work with kids that there has been basically a global problem in that tech has developed ignoring children and ignoring their established rights that many people fought very, very hard for in the past century, and suddenly we have a new world order and these are being trampled upon. And it’s a global problem because young kids are using the same technology all around the world and living very similar experiences and similar risks and similar harms. Luckily there is a global solution, so global problem, global solution, and we are working towards global norms for tech design to ensure, as I said, that those established rights are taken into account in the digital environment. So today, AI: what we see is something which is maybe not fundamentally new but which is supercharging some of these harms and systemic problems that we have already been addressing, and we are looking indeed for, as I say, global standards and a way of addressing this. Luckily there is clearly a rising understanding and convergence, political will, rising political will to address these issues, in particular for children based on their established rights. How are we going to do it? Well, I am very, very pleased today to be joined by a very distinguished panel of experts and also young people who can help us define some of the things that we need to take forward, in particular as this event is an official preparatory event for the Paris AI Action Summit, and so we are looking at practical solutions to the issues we face. I am going with that to hand over, just for some opening words, to our first speaker, Nidhi. Now, Nidhi is a Five Rights Youth Ambassador, 16 years old, from Malaysia, and there she is. Hello, Nidhi, it’s lovely to see you with us today. Nidhi is a very passionate advocate for children’s rights in the digital environment, has represented children and Five Rights around the world, including, I think, previously at an IGF. Nidhi, so this is your second time at the IGF, welcome back. Nidhi is also an author, she hosts her own podcast, and is also part of the Five Rights Youth Voice podcast, check this out. So with that, Nidhi, over to you, tell us from your perspective, what’s happening and what needs to be done.

Nidhi Ramesh: Hello, everyone, and thank you, Leanda, so much for such a kind introduction. I’ll repeat, my name is Nidhi Ramesh, and as a child rights activist in the digital space, I’m so honoured to be here today, and to be able to share my views and opinions on how AI impacts young people like me all over the world, and what I believe we can do to ensure its responsible use. Before I begin with my own experience, I think it’s key to highlight that AI is on every single platform, mobile application, and website that we all use. When someone says AI, the first thing one might think of is generative artificial intelligence, Gen AI: apps like ChatGPT, Copilot, Jasper AI, Replika, and many others. We live in a world where AI is everywhere, but most of us can’t even tell when we’re interacting with it. Whether it’s through social media algorithms, voice assistants, or personalized learning tools, AI often works in the background, shaping our decisions and experiences. Many children don’t realize that most of their interactions with the online world might actually be through various AI algorithms, making choices, recommendations, and even decisions for them. What’s even more concerning is the misuse of AI by tech companies that put profit above children’s safety and privacy, from recommending harmful and addictive content to collecting data without consent. Many AI systems operate without safeguards for young people. Most children don’t even know they’re being exposed to these algorithms, let alone how to protect themselves from potential harm. That said, I don’t want to paint AI as the villain. AI, when implemented in platforms and used responsibly, has incredible potential, transforming how we learn. AI-powered tools can provide personalized resources, making learning accessible and inclusive. For children with disabilities or those in remote areas, this is a real game-changer. But while these benefits are real, so are the risks, and we can’t afford to ignore them. One major concern is the erosion of originality. AI is flooding the internet with self-generated content, making it harder to find authentic, human-created work. As someone who runs a podcast, I know firsthand how much effort goes into creating something original. My podcast covers children’s rights, in an attempt to spread awareness on topics that are important to me. This also means that I spend a long time researching topics, writing scripts, recording audio, editing, and more in order to publish a full episode. It’s a process that takes time and effort. Yet in this day and age, with the click of a button, one can easily find AI algorithms that can take any topic you give them and generate a compelling script. Other gen AI applications like iSpeech, Descript, Murph, etc. use imported clips of your voice to perfectly imitate what you sound like when you give it this AI-generated script. So in a way, these two AI programs can do what I spend hours and days working on in seconds. And while that might seem convenient, it undermines the value of creativity and hard work that we put in. And this isn’t just about me, it’s about every artist, writer, and musician who’s at risk from AI agents. I’ve also written and published two books, like Leanda said, but again, nowadays it’s so easy for AI to write or create something similar at just a simple command, undermining so many creators out there who want to share their work with people. Perhaps this example is more relevant to me as an individual.

However, the risks and problems that arise from AI are still there and need to be addressed, especially the ones I mentioned at the start. So what do we do about this? How do we ensure that AI works for children and not against them? I believe policymakers and tech companies have a huge role to play. First and foremost, we need stronger laws and regulations around AI governance on social media platforms like Instagram, TikTok, and Snapchat. These systems must prioritize data protection and privacy, especially for children. Young people deserve to know when and what information is collected; transparency isn’t optional, it’s essential. We also need AI systems designed with children’s well-being at their core. This means algorithms that promote safety and mental health, rather than exploiting vulnerabilities for profit. Tech companies must be held accountable for the impact their systems have on young minds. Baroness Beeban Kidron and Five Rights, the organization I have the huge honor of being a part of, are currently working on designing and bringing together more regulations on this, which I’m sure will be discussed later as the panel continues. So I’ll leave it there. Thank you.

Leanda Barrington-Leach: Thank you so much, Nidhi. I should not say more, because you say it so much better than I ever could. I hope you’ll stay with us, because I’m sure our audience in the room and online will have questions and would like to interact with you more afterwards. But we are going to move on to our second panelist, Dr. Jun Zhao, who’s joining us from Oxford. Hi, Jun. Great to see you. So Dr. Jun Zhao is a senior researcher in the Department of Computer Science at Oxford University. Her research focuses on the impact of algorithm-based decision making on our everyday lives, especially when it regards families and young children. For this, Jun takes a human-centric approach, focusing on understanding users’ needs in order to design technologies that can make a real impact. Jun currently leads the Oxford Child-Centered AI Design Lab and a major research grant examining the challenges of supporting children’s digital agency in the age of AI. So Jun, thank you for joining us. Can you tell us, what’s the research telling us then about the impact of AI on children?

Jun Zhao: Well, thank you so much for the introduction, Leanda. And can I just confirm that everyone can hear me all right? I presume that sounds all right.

Leanda Barrington-Leach: Thumbs up, Jun.

Jun Zhao: All right. OK, so I got some slides prepared. Can I project them? Can people see that? I’m also very happy just to talk about the research.

Leanda Barrington-Leach: You bet, Jun.

Jun Zhao: Right. OK, well, thank you very much for inviting me to be here. I wish I could be there in person; I am very much with all of you guys in spirit. It’s shocking to hear Nidhi’s talk and presentation and how much it resonates with our research evidence. You know, as Nidhi said, AI is everywhere in children’s lives, from the moment they are born to the education systems they would be using at home or at school. And we see similarly wide adoption of these technologies if you look at the surveys in any country. And also, as Nidhi said, with the rise of AI, children are rapidly embracing these new technologies. Our recent survey in the UK shows that children are twice as likely to adopt these new technologies as adults. And an earlier survey by Internet Matters also shows that a huge proportion of children in the UK are using AI technologies to help with their schoolwork. So it’s really exciting. And, you know, as Nidhi said, there are a lot of great, exciting opportunities, especially to support children with their learning, and children with special education needs who need extra support with their social and emotional management. We also see some really exciting examples of how AI could help children by providing them with better health opportunities, such as early diagnosis for autism, which is an issue in many countries in the world. But we must also be cautious about how all these technologies may or may not have been designed with children’s best interest in mind. This slide shows recent research we did last year, where we did a systematic review of about 200 pieces of work from the human-computer interaction research community, a community that prides itself on putting humans at the heart of the design process. And we tried to analyze how AI has been used in different kinds of application domains for children. It was quite interesting to see how education and healthcare have been the most dominant application areas, as well as, interestingly, keeping children safe online. We then looked more closely into the range of personal data that’s being used to feed all these algorithms. And we were quite surprised to see that a diverse range of really sensitive data, like genetic data and behavior data, could be routinely used by all these AI systems, but not necessarily with full consent or assent from children, or even necessarily for the function of those applications. It’s also interesting that when we did a review of all the current AI ethics principles out there last year and tried to map these recommended ethical principles to their actual implementation, the result, as everyone can see in this diagram, is a very sparsely populated table. So even basic principles like privacy, like safety, like meeting children’s developmental needs are rarely considered comprehensively in all these applications that are designed for children, often in very critical areas. So this is quite concerning to see how principles are applied in experimental settings. And it’s even more concerning when we see practices taking place in real-world cases. This is quite an old report from 2007, with the early rise of smart home devices and smart toys. Researchers very quickly identified serious implications, the safety risks associated with these cute cuddly bears. But you know, seven years on, many legislations have been developed since then, but it was quite… There has been the rise of a variety of IoT devices and smart home devices.

A recent study by us, as well as many other recent studies, has shown that children’s data can be collected by all these devices, whether they’re online or offline. As one of the researchers from a recent privacy conference confessed, individual children probably won’t experience negative consequences due to toys creating profiles about them. And nobody really knows that for sure. So here is another example. Similar to the biases adults would be subject to, children can also be exposed to unfair decisions by AI systems simply due to their race or socioeconomic status, but often probably with much more lasting effects in critical situations such as criminal decision making. Rapid development of AI is associated with rapid deployment, but ironically, there’s not always sufficient safeguarding in the process of design. For example, when chatbot-like technologies were deployed by Snapchat last year, some serious risks were immediately reported, exposing children to inappropriate content and contact, even when they declared their age of only 13 or 15. Another thing that is quite interesting in our research is we found that although a lot of risks like privacy and safety have been extensively discussed, the exploitative nature of AI algorithms has rarely been discussed. When we began our research on children’s data privacy, we started this experiment of analyzing the third-party data tracking behaviors of over 1 million apps from the Google Play Store. One of the most shocking discoveries we found from this study is the prevalence of data tracking in these cute apps used by children, often very, very young children, when they learn how to begin their handwriting or how to pop a balloon so that they can develop their fine motor skills. This is a violation of children’s basic rights and an exploitation of their vulnerabilities. So, seven years on since our initial research, what has happened? GDPR happened. We repeated our study. It was quite interesting to see that tracking behavior did not change immediately upon enforcement of the legislation. But what did happen is the app store has made it extremely difficult for us to repeat and continue our data analysis. But what we haven’t stopped is to continue asking the question: why all this tracking of children’s data, and how can we better protect them? It’s interesting to see this recent study published earlier this year, where it provides even firmer evidence about the exploitation of children’s data: the extent to which large social media platforms rely on children’s attention for their advertisement revenue. So just like Nidhi said, these companies are not designing with children at heart, but with their market gains in mind. Several recent studies have made similar findings, showing that recommendation systems can actively amplify and direct children to harmful content. For example, the studies have shown that children identified with mental health issues could be more likely to be exposed to posts leading to more mental health risks. Harmful content is amplified because it’s more attention-grabbing, invoking stronger emotions and prolonging children’s engagement. Many of these studies are actually conducted through simulations because researchers do not have access to the platform APIs or the code of the algorithms. But what happens when we talk to the children directly and ask them about their experiences? This is one of the studies that we conducted last year.

Consistent with many other research studies out there, children found the experience very passive and disrespectful. And many of them have found it unfair that systems can do this to their data and manipulate their experiences. And while such feelings of being exploited and disrespected can be hard to quantify, we must not neglect how these practices are fundamentally disrespectful of children’s rights in many ways, and how the same aggressive practices could cause harm for children of different developmental stages or vulnerabilities. I’ll just leave the evidence discussion here for now for the other speakers, because I think there will be lots of evidence for these fundamental phenomena, but it will be quite interesting to hear how the recent EU AI Act could or could not provide the much-needed protection that we need for children of this generation. Thank you very much, Leanda.

Leanda Barrington-Leach: Thank you so much, Jun. I am going to step over here so I’m in frame. Thank you for that presentation of some of the overwhelming evidence, and I think if you had a little bit longer, you could have said an awful lot more. I’d like to also point people to some of the research done by Five Rights. I have Disrupted Childhood here, which sets out some of the basics of persuasive design, and also the Pathways research, using avatars, shows very clearly how algorithms drive children to very specific harms, and of course there’s plenty of evidence from a number of court cases as well of children who have been harmed. Now, to talk about the AI Act, I am delighted to welcome the honourable member of the European Parliament. Maybe over here? No, apparently the camera needs to see you. So, yes, Mr. Brando Benifei, who was co-rapporteur of the AI Act in the European Parliament and is part of the oversight monitoring group, so an absolutely critical role to make sure that the AI Act delivers, and co-chair also of the child rights intergroup in the European Parliament. We’re absolutely delighted to have you here.

Brando Benifei: Yeah, I’m really happy I can be here for this opportunity. I’m sorry that, due to the overlap with another meeting I have to attend because of the parliament programme (we have a good delegation here), I will need to leave soon after my intervention; maybe if there is one question, I can answer it. But, to continue: I want to thank Five Rights Foundation also for the extremely useful contributions that were given during the drafting process of the AI Act. The original text from the European Commission was unfortunately completely lacking the dimension of child protection; it was not there at all. So we had to bring it in with amendments from the European Parliament, with our drafting work and the negotiations that followed. So we have some space for protection inside the AI Act, not as much as we wanted, but there are also some more general provisions that can be applied effectively, if we want to apply them effectively, on the cases that we just heard of.

That’s why it’s important that now the Parliament, in the new mandate that started just a few months ago, has both confirmed the existence of the child intergroup, the children’s rights intergroup, and confirmed a monitoring group of the AI Act. As I said, I will now be the vice chair of the intergroup. It starts its work now in the new mandate, and it’s an important forum to bring together all the MEPs from different perspectives, all the parliamentarians that want to work on children’s rights. And with the monitoring group, after approving the text, now we are following its application step by step. It will be crucial, because some of the issues that you have already been talking about with the previous speakers are to be checked in the way that they are applied. For example, in February 2025 comes full mandatoriness, the full application of the prohibitions, which is a very important aspect of the AI Act. And among the prohibited uses there is also emotion recognition in study places. We wanted to avoid that, to in fact enter into a form of pressure and intrusion on children in schools. So this is one aspect, for example. But then also predictive policing that can target minors from certain minorities will be prohibited. And we also prohibit the indiscriminate use of AI-powered biometric cameras in live action, in a way that will prevent forms of surveillance that can also infringe the privacy and the protection of children. And we have, for example, also prohibited facial scraping on the internet. So that’s something that is used to prepare generative AI or chatbots to commit some of the abuses that we have seen, and we are trying to protect this data. But also, apart from the prohibitions that will kick in soon, we have very important transparency provisions that will be quite important looking at generative AI. For example, we demand specific protocols by the generative AI developers to counter the capacity to have the kind of inappropriate conversations that we have seen exemplified earlier, and the production of inappropriate content that can be offensive for children. This is something that needs to be entrenched in the way the system is trained and limited, and it needs to be checked periodically. But also we want to label AI-generated content. This is crucial to fight another issue that was not very much touched until now in this discussion, which I think is very important: cyberbullying, the cyber mistreatment of children, which is a very important source of mental disorders, of attacks on mental health. And in fact, with the new systems of generative AI, you can have a totally new level of extremely damaging cyberbullying of all kinds. And this is something we also need to tackle by avoiding the production, but when the thing is there, at least it needs to be clear that this is not true, that it is fake, so that people cannot be induced to think that a person is doing or saying things that will make them feel ashamed and have mental health problems. And also, finally, I want to underline that these are some more examples about how the AI Act interacts, but I want to concentrate on the fact that this interacts also with the Digital Services Act and with the child sexual abuse material legislation that we have been developing, that has been put forward by the European Union, and we think that the ecosystem needs to work together. As I said, I’m the specialist of the AI Act, I’ve been working on that, but in fact you put that together with this new legislation on child sexual abuse and you can build a proper framework of protection. And in fact, we want to continue a global dialogue. We are working on that; I am doing that with different governments and parliaments so that we can build a common framework of action. And that’s why it’s very important that civil society foundations and organizations can be linked, so that this is not only between governments, but also in civil society. And I insist that the parliamentarians have to do their part here. It’s important that we have the IGF parliamentary track, which also dealt with these topics in one of its discussions, and we need to continue developing in this direction. We hope we can give some good practice by the application of this legislation, but clearly we need to build together an apparatus of actions and legislation, soft and hard laws, that can protect our children online. Thank you very much.

Leanda Barrington-Leach: Thank you so much, Brando. I know you have to leave. Do you still have time for another question or anything from the room? Okay. Does anyone have a burning question? Mr. Benifei? I would have lots of questions, but I’ll have to keep them.

Peter Zanga Jackson: Well, my name is Peter Zanga-Jackson, Jr. I’m from Liberia. I’m the regulator. Firstly, thank you for the explanation you gave, but I want to ask you, because the child that we are talking about, they are from the family, and the family is the fundamental of that child. In some homes, there’s no check when it comes to the child. Some families, no monitoring. So don’t you think there should be an awareness? Firstly, educate the families as to what they should do. Limit the child or children to some extent before going to the next level of the tech giants that develop all of this AI and so on. I can say it’s more on the family. What do you think the family can do as the fundamental of the child? Yeah, children. Our children should start from the possible place before we go outside to find a solution. This is my question.

Brando Benifei: Is it working? You can hear? Okay. So just to answer very quickly on this, I think it’s a very important topic, because we need families to be ready to do their part in this. Obviously, I concentrated on that. It’s also building a culture. And this means you need to give the instruments to the adults to be able to have an informed conversation with their children. So, I don’t think we will solve everything by giving instruments to the adult population. We need schools, we need the formal education targeted at children through the institutions, but obviously if we have a more conscious population, also of the older age, that is not digital native, that needs to be trained, then they can also transmit to their children some basic foundational aspects. Be healthy and protected and conscious and not be manipulated while you are using new technologies, AI, the internet. So, yes, we also need the families to be on board. And we cannot solve everything with that, but at the same time, without investing also in the families, I think we are missing an important piece. I completely agree with you. Thank you.

Leanda Barrington-Leach: Thank you. Thank you very much. Lots of luck in overseeing the AI Act’s implementation. And we’ll be telling you later about what comes from this panel, which is relevant to that. And I’d just like to say, our friend has left the room, but the European Parents Association was very much behind a lot of the work done on the AI Act; they have been big drivers of this. Now, over to our next speaker, Dr. Ansgar Koene, who is AI ethics and public policy regulatory lead at Ernst & Young. I probably made your title even longer; it’s already quite long. Ansgar, whom we are delighted to have as a trustee of the Five Rights Foundation, is vice chair of our board and an absolute expert in AI, working a lot on the technical standards that are needed for, among other things, the implementation and enforcement of the AI Act. So Ansgar, we’re going to hear from you a little bit about the status quo in terms of what we have: this kind of regulation, and also things like the AI convention and the framework that came from the UN a few months ago. So there are a few things beyond the AI Act. So a few words from you: what we’d like to hear about is indeed, what is the status quo in terms of actually making this real? What’s missing? Where do we go from here?

Ansgar Koene: Sure, same check as everyone else. Can you hear me? Okay, good. So yes, we’re definitely in a very interesting period with the introduction of new international charters like the Council of Europe’s charter on AI, legislation like the AI Act, but also other jurisdictions that are pushing either through mandatory obligations around safeguards for AI, or putting on the table an expectation from the regulator that they’re saying, we expect you to follow voluntary codes around responsible use of AI. And certainly we have seen, if we look at the types of organizations that we work with, that the introduction of these instruments has pushed forward the level of engagement and the level of resources that are provided within organizations, be it public or private sector organizations, to actually make sure that the regulations apply. If we think especially about the way in which these types of regulations apply, there is a large challenge for a lot of organizations, similar to what we’ve seen in the platform space, in that organizations are often not quite aware to what extent what they are doing actually impacts on children. Similar to what we’ve seen with social media platforms and other online platforms, when they created the space, they were building the space not with children in mind, they were building the space with adults in mind, even though in reality we know that these platforms are used by children; they did not even conceive that this is something that they need to be building for. And a similar challenge is in the AI space, especially as the AI space is moving to a model where we have creators of the core AI models, LLMs being a prime example of that, being separate from the deployers of the AI models that then integrate them into their systems, so that there is a distance between those who have the capacity to actually understand and do something about compliance, also compliance with regards to aspects regarding children, and the ones that are directly facing the users. And even often those that are directly facing the users are not sufficiently tracking and aware of who exactly their users are. So if the AI Act prohibits subliminal manipulation of vulnerable groups, but the deployers and especially the providers are not even aware of young people as a vulnerable group, then of course they are challenged in knowing whether or not they are subliminally manipulating them, or if they are having a negative impact on them, and they don’t even understand what a negative impact on young people means. This reflects then on some of the work that has been happening in the standardization space.

So last year, the IEEE published a standard to which Five Rights was a prime contributor, the IEEE 2089 standard on age-appropriate design. And one of the things that that standard actually asks for is that organizations, as they are engaging in the design and development of AI systems, have people within that development process who are subject matter experts with regards to the impacts that these systems can have on children. So that there is someone at least involved in the process who thinks about how this kind of a system could impact children, what are the potential challenges, the potential negative impacts that could arise if someone under 18, let alone someone under 10, is using these kinds of systems. However, the standardization space is also a space that is still very much in flux, in development. If we think, for instance, about the standards that are meant to provide the clear operational guidance on how to become compliant with the AI Act, all of those standards are still in development. The European standards bodies, CEN and CENELEC, are rushing to try to meet the deadline that the AI Act has set to be able to provide these standards. And because they are rushing, they are focusing at the high, horizontal level; there are very few that add an understanding of the particulars that are necessary to address concerns regarding children. Five Rights is participating in the process, but there are multiple standards being developed simultaneously, and it is highly challenging to be contributing to all of these at the same time, to make sure that the risk management standard considers the risks to children at the same time as a trustworthiness standard considers what accuracy actually means for children who might use an AI system. So the technical space around how we move from a high-level intended outcome, which the regulations have specified, into an operational what-do-you-need-to-do-on-the-ground to make sure that the systems work to meet those requirements is still a space that needs a lot of support; it needs a lot of work. And as I’ve said, there is even the core challenge that organizations need to be aware that they even need to consider how children might be impacted by these systems. As they’re deploying something like a new chatbot, as they’re deploying AI as part of a system for targeting advertising, as they’re using AI as part of something like that, they’re generally not building it with children in mind. And so it is a space that is dynamic, it is a space that is moving in the right direction to the extent that it has been integrated in the AI Act, for instance, as Brando mentioned. However, because there are so many new things, new compliance activities, new thinking about what responsible AI actually means, while there is also a huge rush to try to find new ways to actually get a return on investment on these, there is a huge risk that the particular concerns around children will fall between the cracks if we do not raise enough awareness about this.

Leanda Barrington-Leach: Thank you so much, Ansgar. I think we’re going to have our last intervention and then engage in a discussion, if that is okay. This one’s not working? Is it working? Okay, great. Thank you so much for that. Indeed, I think it’s absolutely critical: you know, law is already something, but we need to get down into the weeds to get those technical frameworks in place, because otherwise companies can say things like, oh, well, we didn’t know that we were exploiting children’s vulnerabilities. We didn’t know that children were there. We are not designing for children. Why do we need… I’m being a little bit provocative, but the reality, of course, is that many of the biggest companies, at the very least, are quite aware that children are a massive market. They are targeting children as their current market and their future market. So, of course, it’s a little bit disingenuous, but until we get all of that detail in place, that is a game that we will be playing. So, absolutely critical. Thank you so much for that. We have just our last intervention online from Baroness Beeban Kidron, who is the chair of Five Rights Foundation. Baroness Kidron is a member of the House of Lords in the UK and was the architect of the age-appropriate design code. Baroness Kidron has been a long-term advocate of children’s rights and is currently working on an AI code, which hopefully will feed into some of the things that we’ve been speaking about. So, Baroness Kidron…

Baroness Beeban Kidron: I’m delighted that today’s conversation will feed into those which will take place in Paris in February. As Nidhi and Jun will have shared, it’s crucial that we develop artificial intelligence and other automated systems with one eye on how we’re going to impact children. The possibilities are infinite. I was very moved by a system that in real time could monitor a preterm baby’s heartbeat without having to stick heavy instruments on their paper-thin chest. Terre des Hommes, a fellow children’s rights NGO, recently launched an AI chatbot to support children’s access to justice. Or a few months ago, I met a wonderful group of 14-year-old girls who built an app to teach sign language to hearing classmates so that they could all communicate with their deaf peer. There is no doubt that AI holds immense potential. But like any technology, AI must be developed with children in mind. And I do want to emphasize that it’s a design choice if recommender systems feed children alarming content, promoting eating disorders or self-harm. It’s a design choice if AI-powered chatbots encourage emotional attachments, which may, in some cases, have led to children taking their own lives. It’s a design choice if, cynically, some of those chatbots revive deceased children through the creation of AI bots imitating their personalities, re-traumatizing their families and friends, and creating a loop in which self-harm or suicide is valorized. As children point out to us repeatedly, it’s a design choice…

MODERATOR: Dear host, I don’t know if you can hear me, but if you can allow me to share my screen again I can restart the playback.

Leanda Barrington-Leach: I’m so sorry, it seems to be either a choice between online people being able to see and hear and us being able to see and hear. Why don’t we give that a moment, I think there were some questions, at least the ones from the room, I’m not sure about the ones online. Let’s come to that and see if we can get the end of Baroness Kidron’s intervention in a minute. Jutta, you had a question or a point to make.

Jutta Croll: Yes, Jutta Croll from the German Digital Opportunities Foundation. I had the honour to work with the Five Rights Foundation in the working group on General Comment No. 25. And I really appreciate what we heard from Beeban, and also what we heard from Ansgar. But at the same time, I'm a bit disillusioned, because I think it was 10 to 12 years ago that we really talked to tech companies about the concept of safety by design. And although we had artificial intelligence at that point in time, it was not in the hands of children in any real way. So I would have expected that this principle would by now be in the standards and in the hands of developers, to be adhered to, taking into consideration that children would be users; that was obvious throughout all the developments. I would say that when the internet came up, it was not designed for children. So maybe we were going behind that and saying, okay, now we have this idea of safety by design: have in mind that children will probably be users of the services, of the devices and so on. And now we end up, years later, with AI and the same situation that we had before with other digital technology. Ansgar, maybe you have an answer to that, one that doesn't leave me disillusioned?

Ansgar Koene: I fear that my answer is not going to be something that will remove your disillusionment. The practice that we are seeing is to bring things to the market, and in that rush it remains the case that the so-called functional requirements, that is to say the things that need to be there in order to produce the type of output that they want to create, get the prominence and the investment, while the so-called non-functional requirements (this terminology is terrible), such as making sure that there will not be negative consequences, are marginalized in the design process unless there is a significant factor behind them, such as the risk of a huge fine. That is why, even though we've seen discussions around responsible AI principles for many years, there was always a lack of investment to really get them implemented. There was often the case of technologists within a company saying, this is something we should be doing, but they were not being given the resources to actually do it. Now that there is something like the AI Act, where you are going to face fines, suddenly there is an investment in doing it.

Leanda Barrington-Leach: Is it this that's crackling? This is okay? It was your voice. Okay, maybe we'll get Baroness Kidron back and also our other speakers. But in the meantime, oh, wonderful, we can hear. I hope that's current, not right from the beginning when they couldn't hear. But in the meantime, do we have any other questions or comments from the room? Otherwise, I will go to the ones online too. And I have to say, Jutta, I totally agree with you. It's taking far too long. And as I said a bit before, it's a little bit disingenuous, because we do know what the issues are and we've known for a long time. Let's go over to Lena.

Lena Slachmuijlder: Yeah, thanks so much. And it’s just such good work that you’re all doing.

Lena Slachmuijlder: Yeah. Okay. I mean, I also feel as though we've known the issues for a long time, and the only thing that changes anything is when they face fines or penalties or litigation. I'm just curious, because there are people from other countries in the room as well, and I'm wondering, Five Rights has been doing some work globally: are we seeing the same conclusions in terms of the ability of other countries?

Leanda Barrington-Leach: Thank you. Use the microphone, please.

Lena Slachmuijlder: I was asking the room whether others are also finding similar issues. Is there a sense from other countries that they also need to get in line and have some really robust regulation, like we heard from the experience in Europe? It's just an invitation for others online or in the room. The work that I do, with the Council on Tech and Social Cohesion, is also aligned with Five Rights: it's trying to regulate the upstream design features that lead to these kinds of harms, and also polarization.

Leanda Barrington-Leach: I'm so sorry, because I was only half listening, and afterwards you're going to have to tell me that again, because I think I want to know. We also have online our speakers Jun and Nidhi. If you want to react to anything you have heard, please wave or put something in the chat and I will see it and bring you in. Otherwise, is there anything else from the room? I have a question online. Okay, so I have a question online from Dorothy Gordon of UNESCO, who is asking: how involved are consumer rights organizations in working to get major tech companies to stop abusing children's rights in this way? I believe we need consumers to deliberately avoid using dangerous products. So that's public awareness and almost boycotting, I guess; what consumer organizations can do, other things like submitting complaints.

Ansgar Koene: I can take that. I'm afraid the only part that consumer rights organizations are playing in this space that I can really speak to is that, at least in Europe, when it comes to the standardization for the AI Act, we do have participation by the consumer rights organization ANEC, helping to make sure that consumer concerns are taken into consideration as the standards are being developed; that input comes from a non-industry player. I'm not aware of the activities being done to educate users about the impact that these types of technologies may have on them, and therefore to help them make an informed choice as to whether they want to use these tools. Obviously there are NGOs working on these types of things as well. Mozilla, for instance, has some activities every year before Christmas around which digital tools may be spying on you, and so on, but that will only reach a particular subsection of the population who are generally already looking for this kind of information. I imagine this is a space where we also need support from the public sector to do campaigns to help people better understand this. Who has the resources to reach the whole population, as opposed to only the people who are already looking for this kind of information? I think that is going to be a big question.

Leanda Barrington-Leach: Thank you, Ansgar. Since we are running slightly long because of the tech, I'm going to take the liberty of jumping in a little bit more quickly. AI is not new. Artificial intelligence was created in 1955. AI systems are really a continuation of algorithmic and automated systems which we all have experience with. This also means that AI is not too complicated to understand, and it is certainly not too complicated to regulate. Secondly, even though generative AI systems based on machine learning are different, it is unnecessary to use them to tackle tasks that could be automated through other specialized AI or even non-AI approaches, which are more accurate and more energy efficient. The choice of model should be based on necessity and proportionality. It is true that those who seek to maximize profit sow doubt and uncertainty to keep authorities from effectively legislating and to prevent citizens from demanding effective legislation. But we must not tolerate AI exceptionalism. It's no secret that adults have failed to provide children with a rights-respecting online environment. As AI is not different from previous technologies, the same will happen if we do not act immediately. This is why over the past year, building on global consensus and working hand in hand with global experts in the field, we have developed an AI code for children. Launching the code in the coming months will provide a clear and practical path forward for designing, deploying and governing AI systems, taking into account children's rights and needs. It's an important and necessary correction to the persistent failure to consider children and a vital blueprint for delivering on the commitments to children in the Global Digital Compact and in regulatory advances such as the AI Act. We need from the outset to consider how to build the rights and needs of every child into the design and governance of AI systems. The code, which we will hopefully launch in Paris, will be for anyone who designs, adapts or deploys an AI system that impacts children. It is practical, actionable, adaptable and applicable to all kinds of AI systems. It mandates certain expertise and actions and raises questions designed to reveal gaps and risks. It leaves a level of autonomy to find sufficient mitigation measures. It is intended to support existing regulatory initiatives and provide a standard for those jurisdictions that are considering introducing new legislation or regulation. In the Global Digital Compact, all governments agreed on the urgent need to assess and address the potential impact, opportunity and risks of artificial intelligence systems on the well-being and rights of individuals. That's a quote. Children represent one third of internet users, are early adopters of technology, and have unique rights and vulnerabilities. They must be at the centre of our discussions and considerations. I hope I didn't misquote any of that or that she didn't change it in the final version. But you can always quote me, because I agree with all of that. 
I think putting children at the centre of the conversation maybe means that in the last few minutes, I’d like to go back to Nidhi, if that is OK with you. Nidhi, are you still with us?

Nidhi Ramesh: Yes, I am.

Leanda Barrington-Leach: Could you bring Nidhi up, please? OK, I don’t think our tech people are listening to me again. Nidhi, if you’re with us, I’d love to hear your reflections. You talk to, hello again, not only your peers and colleagues all the time, but also a big group of child ambassadors within Five Rights. What are the conclusions that you draw from this and what are maybe some of the things that you think that you and your colleagues would like to tell us and for us to take forward?

Nidhi Ramesh: Thank you, Leanda. That's such an interesting question. So as Five Rights youth ambassadors, we often… Can't hear you yet. Oh, sorry. My mic should be on. All right.

Leanda Barrington-Leach: Try again.

Nidhi Ramesh: Hello. Can you hear me now?

Leanda Barrington-Leach: No, still not. AI will one day solve all of these problems, I am sure. Oh, Nidhi, can we hear you now?

Nidhi Ramesh: Yes. Hello. Can you hear me now?

Leanda Barrington-Leach: We can, we can. Go ahead.

Nidhi Ramesh: Perfect. All right. Then I'll just start again. Thank you so much, Leanda. That's such an interesting question. As Five Rights youth ambassadors, we often discuss the opportunities and the risks of AI, especially for children and young people. While we see its potential, there are obviously some key concerns that stand out to us. One major issue is education. AI can make homework quicker, but it risks taking away from essential learning skills. As one of my peers put it, it's making homework easier, but at what cost to our learning? And we worry a lot about losing creativity and critical thinking, skills we'll need later on in life. Another significant concern is privacy. AI systems can analyze so much about us, even from just a photo or a message. One ambassador shared how AI is amazing and how it can help us, but it's also scary how much it knows about us. Many of us feel uncomfortable with how much information we're unknowingly sharing, especially when we're not informed about how it's being used, as I mentioned earlier during my first intervention. We've also talked a lot about the psychological risks of AI. Systems designed for companionship, for example, might seem helpful, but in the long term can have a lot of consequences. As one of our ambassadors said, it's about more than just privacy or technology, it's kind of about our values: relying on machines that mimic empathy could affect our real-world social skills, especially for vulnerable young people. And of course, there's the growing threat of deepfakes. Marco, one of our youth ambassadors, summed it up well by saying that AI tools are developing, deepfakes are becoming scarier, and they can ruin people's online footprint. So to sum it up, while AI brings immense opportunities, it's these educational, ethical and privacy-related risks that concern us the most. And it's crucial that AI systems are designed to protect young people, with safeguards that prioritize our rights and well-being.

Leanda Barrington-Leach: Thank you so much. Thank you so much, Nidhi. It’s always wonderful to hear from you and from your fellow youth ambassadors. I hope that the code that we’ll bring out is something that will serve you. But we are going to get your direct feedback, of course, on it very soon. We really hope that you will find some of the elements there to address some of the things that you have brought up. We have two minutes to go. And we have had a very eventful session, I would say. But I don’t know, Jun, if you’re online, if you want to come back with any closing words.

Jun Zhao: Hi, Leanda, can you still hear me all right?

Leanda Barrington-Leach: We can.

Jun Zhao: Oh, fabulous. Fantastic. And what a fabulous session. I tried to come in a few times, and I think things got confused a few times when we were trying to manage the video and the hybrid attendance. What I really want to come in on is two things: the discussion about safety by design, and parents' role in safeguarding children. I agree with Ansgar's point. We are definitely moving in a positive direction, but it's a really challenging domain. I know there are a lot of generative AI companies embracing safety by design principles and trying to integrate them really actively into their design and development processes now, which is really encouraging to see, especially if they are taking that perspective from children. But it's very complex, because the risks are quite diverse. I agree with what Ansgar said: some of the companies may not be aware of some of the risks for children, but I think some of them are. At the same time, there's a challenge because of the diversity of risks; some of them may not seem to have direct impacts or immediate safety risks for children. Some of the risks that Nidhi raised, like exploitation and manipulation, they may not see as harm, but they are harmful nevertheless. So it will be quite interesting to see, in the next couple of years, when the EU AI Act as well as many other acts come into force, how all this understanding about various forms of risks and harms is going to play out in legislative enforcement, and how we can all work together to facilitate better awareness and better translation from policies into practical guidance, so we can create a better AI world for our children and our society as a whole. And I think that's all I've got to say, Leanda. I hope that way we can finish on a positive note, with something exciting to look forward to in 2025.

Leanda Barrington-Leach: Thank you very much, Jun. Indeed, there remain outstanding questions. But as you have said, there's still plenty going on. We do have 2025, and lots of things that we can deliver on. And I would just like to reference at the end that in the UN framework, and we're here under the UN's umbrella, Governing AI for Humanity, there was a very, very clear point, which is that AI must not be experimenting on children. We might be using some aspects of AI in new and novel ways, and we can innovate all we want, but this is something where we know that our children are too precious and grow up too fast. And the education, as you said, Nidhi, even the impact on your education: we're talking about the generations of the future. We must not be experimenting on children. And this is what we will take to the Paris summit, with all of this input. And we hope that all of you, online and in the room, will get behind us and have a look at this code and see how it can be bettered and improved, so that it can deliver on these issues for kids. Thank you so much. I'd like you all to join me in thanking our panelists for this very rich discussion. I'm very grateful for your patience in particular. Thanks so much. Thank you so much for such an amazing session, Leanda. Thank you, everyone.

Nidhi Ramesh

Speech speed

156 words per minute

Speech length

1168 words

Speech time

448 seconds

AI is ubiquitous in children’s lives but often operates without their awareness

Explanation

AI is present on every platform, application, and website that children use. Many children don’t realize that most of their online interactions are through AI algorithms making choices and decisions for them.

Evidence

Examples given include social media algorithms, voice assistants, and personalized learning tools.

Major Discussion Point

Impact of AI on Children

AI poses risks to children’s privacy, mental health, and learning

Explanation

AI systems can analyze a lot of personal information from children, even from just a photo or message. There are concerns about losing creativity and critical thinking skills due to AI-assisted homework.

Evidence

Quotes from youth ambassadors expressing concerns about privacy and the impact of AI on learning.

Major Discussion Point

Impact of AI on Children

Agreed with

Jun Zhao

Leanda Barrington-Leach

Agreed on

AI poses risks to children’s privacy and well-being

Jun Zhao

Speech speed

121 words per minute

Speech length

1695 words

Speech time

839 seconds

AI systems can collect sensitive data from children without proper safeguards

Explanation

AI applications designed for children often use sensitive personal data, including genetic and behavioral data. This data collection often occurs without full consent or necessity for the application’s function.

Evidence

Systematic review of about 200 pieces of work from the human-computer interaction research community.

Major Discussion Point

Impact of AI on Children

Agreed with

Nidhi Ramesh

Leanda Barrington-Leach

Agreed on

AI poses risks to children’s privacy and well-being

AI chatbots and recommendation systems can expose children to inappropriate content

Explanation

AI-powered systems can amplify and direct children to harmful content. This is particularly concerning for children with mental health issues who may be exposed to more risky content.

Evidence

Studies showing that recommendation systems can actively amplify and direct children to harmful content.

Major Discussion Point

Impact of AI on Children

Agreed with

Nidhi Ramesh

Leanda Barrington-Leach

Agreed on

AI poses risks to children’s privacy and well-being

Safety by design principles should be integrated into AI development

Explanation

There is a positive trend of AI companies embracing safety by design principles and integrating them into their development processes. However, the complexity of diverse risks for children makes this challenging.

Evidence

Mention of Gen AI companies actively integrating safety by design principles in their processes.

Major Discussion Point

Designing AI Systems with Children in Mind

Leanda Barrington-Leach

Speech speed

153 words per minute

Speech length

2875 words

Speech time

1122 seconds

AI can amplify existing harms and systemic problems affecting children

Explanation

AI is supercharging some of the harms and systemic problems that already exist in the digital environment. This is a global problem as children around the world are using the same technology and facing similar risks and harms.

Major Discussion Point

Impact of AI on Children

Agreed with

Nidhi Ramesh

Jun Zhao

Agreed on

AI poses risks to children’s privacy and well-being

AI should not be used to experiment on children

Explanation

While AI brings opportunities, it should not be used to experiment on children. Children are too precious and their development too important to be subject to experimental AI technologies.

Evidence

Reference to the UN framework on governing AI for humanity, which states that AI must not experiment on children.

Major Discussion Point

Designing AI Systems with Children in Mind

Brando Benifei

Speech speed

137 words per minute

Speech length

1135 words

Speech time

493 seconds

The EU AI Act includes some provisions to protect children, but more is needed

Explanation

The EU AI Act now includes provisions for child protection, which were initially lacking in the original text. However, there is still a need for more comprehensive protection measures for children in AI systems.

Evidence

Examples of prohibitions in the AI Act, such as emotion recognition in places of study and the indiscriminate use of AI-powered biometric cameras.

Major Discussion Point

Regulation and Governance of AI for Children’s Protection

Agreed with

Ansgar Koene

Baroness Beeban Kidron

Agreed on

Need for AI regulation and governance to protect children

Differed with

Ansgar Koene

Differed on

Approach to AI regulation

Global cooperation and dialogue is needed to build common frameworks

Explanation

There is a need for continued global dialogue to build a common framework of action for AI governance. This involves not only governments but also civil society organizations and parliamentarians.

Evidence

Mention of working with different governments and parliaments, and the importance of the IGF parliamentary track.

Major Discussion Point

Regulation and Governance of AI for Children’s Protection

Ansgar Koene

Speech speed

131 words per minute

Speech length

1447 words

Speech time

659 seconds

Technical standards are still being developed to operationalize AI regulations

Explanation

Standards to provide clear operational guidance on how to comply with AI regulations are still in development. There is a rush to meet deadlines set by legislation like the AI Act, but this rush may lead to insufficient consideration of children’s concerns.

Evidence

Mention of the European standards bodies CEN and CENELEC working on standards for AI Act compliance.

Major Discussion Point

Regulation and Governance of AI for Children’s Protection

Agreed with

Brando Benifei

Baroness Beeban Kidron

Agreed on

Need for AI regulation and governance to protect children

Differed with

Brando Benifei

Differed on

Approach to AI regulation

Organizations often lack awareness of how their AI systems impact children

Explanation

Many organizations deploying AI systems are not aware that their systems may impact children. This lack of awareness makes it challenging to comply with regulations aimed at protecting children.

Major Discussion Point

Designing AI Systems with Children in Mind

There is a need for subject matter experts on children’s impacts in AI development

Explanation

Organizations developing AI systems need to include subject matter experts who understand the potential impacts on children. This expertise is crucial for considering how AI systems could affect children during the development process.

Evidence

Reference to IEEE 2089 standard on age-appropriate design, which calls for including such experts in AI development.

Major Discussion Point

Designing AI Systems with Children in Mind

Consumer rights organizations have a role in advocating for safer AI products

Explanation

Consumer rights organizations are participating in the development of standards for AI regulations. They help ensure that consumer concerns are taken into consideration in the standardization process.

Evidence

Mention of ANEC (European consumer voice in standardisation) participating in AI Act standardization efforts.

Major Discussion Point

Regulation and Governance of AI for Children’s Protection

Baroness Beeban Kidron

Speech speed

110 words per minute

Speech length

436 words

Speech time

236 seconds

An AI code for children is being developed to provide practical guidance

Explanation

Five Rights Foundation is developing an AI code for children to provide clear and practical guidance for designing, deploying, and governing AI systems with children’s rights and needs in mind. This code aims to address the persistent failure to consider children in AI development.

Evidence

Mention of the code being developed over the past year, building on global consensus and working with global experts.

Major Discussion Point

Regulation and Governance of AI for Children’s Protection

Agreed with

Brando Benifei

Ansgar Koene

Agreed on

Need for AI regulation and governance to protect children

Peter Zanga Jackson

Speech speed

116 words per minute

Speech length

158 words

Speech time

81 seconds

Families and schools have a role in educating children about AI

Explanation

Families, as the fundamental unit for children, should be educated about AI and its impacts. This education should start at home before expanding to broader societal efforts.

Major Discussion Point

Designing AI Systems with Children in Mind

Agreements

Agreement Points

AI poses risks to children’s privacy and well-being

Nidhi Ramesh

Jun Zhao

Leanda Barrington-Leach

AI poses risks to children’s privacy, mental health, and learning

AI systems can collect sensitive data from children without proper safeguards

AI chatbots and recommendation systems can expose children to inappropriate content

AI can amplify existing harms and systemic problems affecting children

Multiple speakers highlighted the various risks AI poses to children, including privacy violations, exposure to inappropriate content, and potential negative impacts on mental health and learning.

Need for AI regulation and governance to protect children

Brando Benifei

Ansgar Koene

Baroness Beeban Kidron

The EU AI Act includes some provisions to protect children, but more is needed

Technical standards are still being developed to operationalize AI regulations

An AI code for children is being developed to provide practical guidance

Speakers agreed on the necessity of developing comprehensive regulations, standards, and guidelines to ensure AI systems are designed and deployed with children’s rights and safety in mind.

Similar Viewpoints

Both speakers emphasized the importance of incorporating children’s perspectives and expertise in AI development processes to ensure systems are designed with children’s safety and rights in mind.

Ansgar Koene

Jun Zhao

Organizations often lack awareness of how their AI systems impact children

There is a need for subject matter experts on children’s impacts in AI development

Safety by design principles should be integrated into AI development

Unexpected Consensus

Global cooperation for AI governance

Brando Benifei

Leanda Barrington-Leach

Global cooperation and dialogue is needed to build common frameworks

AI can amplify existing harms and systemic problems affecting children

While not explicitly stated by all speakers, there was an underlying agreement on the need for global cooperation to address AI’s impact on children, recognizing it as a global issue requiring coordinated solutions.

Overall Assessment

Summary

The speakers generally agreed on the significant risks AI poses to children’s privacy, safety, and well-being, as well as the urgent need for comprehensive regulations and guidelines to protect children in the AI landscape.

Consensus level

High level of consensus on the main issues, with speakers from various backgrounds (youth, academia, policy-making) sharing similar concerns and proposed solutions. This strong agreement implies a clear direction for future policy-making and research in the field of AI governance for children’s protection.

Differences

Different Viewpoints

Approach to AI regulation

Brando Benifei

Ansgar Koene

The EU AI Act includes some provisions to protect children, but more is needed

Technical standards are still being developed to operationalize AI regulations

While Benifei emphasizes the progress made in including child protection provisions in the EU AI Act, Koene highlights the ongoing challenges in developing technical standards to implement these regulations effectively.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the effectiveness of current regulatory efforts and the specific approaches needed to protect children in AI development and deployment.

Difference level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but offer different perspectives or emphasize different aspects of the problem. This suggests a general consensus on the importance of protecting children in AI development, but some differences in how to achieve this goal effectively.

Partial Agreements

All speakers agree on the risks AI poses to children, but they differ in their focus. Ramesh emphasizes the impact on learning and mental health, Zhao highlights data collection issues, and Koene points out the lack of awareness among organizations developing AI systems.

Nidhi Ramesh

Jun Zhao

Ansgar Koene

AI poses risks to children’s privacy, mental health, and learning

AI systems can collect sensitive data from children without proper safeguards

Organizations often lack awareness of how their AI systems impact children

Takeaways

Key Takeaways

AI is pervasive in children’s lives but often operates without their awareness or proper safeguards

AI can amplify existing harms and pose risks to children’s privacy, mental health, and learning

The EU AI Act includes some provisions to protect children, but more comprehensive regulation is needed

Technical standards and practical guidance (like the proposed AI code for children) are still being developed to operationalize AI regulations

Global cooperation and dialogue is needed to build common frameworks for protecting children in AI systems

Organizations developing AI often lack awareness of how their systems impact children

Safety by design principles should be integrated into AI development with input from child impact experts

Resolutions and Action Items

Launch an AI code for children at the upcoming Paris AI Action Summit to provide practical guidance on designing AI systems with children’s rights in mind

Continue developing technical standards to operationalize the EU AI Act’s provisions related to children

Increase awareness and education for families and schools about AI’s impact on children

Unresolved Issues

How to effectively enforce AI regulations and standards across different jurisdictions

How to balance innovation in AI with protecting children from potential harms

How to ensure AI companies prioritize children’s rights and safety over profit motives

How to address the diverse and sometimes subtle risks AI poses to children beyond immediate safety concerns

Suggested Compromises

Allowing some autonomy for AI developers to find appropriate mitigation measures while mandating certain expertise and actions to protect children’s rights

Thought Provoking Comments

Many children don’t realize that most of their interactions with the online world might actually be through various AI algorithms, making choices, recommendations, and even decisions for them.

speaker

Nidhi Ramesh

reason

This highlights a critical lack of awareness among children about how AI is shaping their online experiences, raising important questions about informed consent and digital literacy.

impact

Set the tone for discussing the hidden influence of AI on children’s lives and the need for greater transparency and education.

Our recent survey in the UK shows that children are twice as likely to adopt these new technologies than adults.

speaker

Jun Zhao

reason

Provides concrete data showing children’s rapid adoption of AI technologies, emphasizing the urgency of addressing potential risks.

impact

Shifted the discussion towards the need for proactive measures, given how quickly children are embracing AI.

The original text from the European Commission was unfortunately completely lacking the dimension of child protection; it was not there at all, so we had to bring it in with amendments from the European Parliament, with our drafting work and the negotiations that followed.

speaker

Brando Benifei

reason

Reveals how child protection was initially overlooked in major AI legislation, highlighting the importance of advocacy and the role of policymakers in addressing this gap.

impact

Focused the conversation on the legislative process and the need for continued vigilance to ensure children’s rights are protected in AI regulations.

The practice that we are seeing is to bring things to the market, and in that rush it remains the case that the so-called functional requirements, that is to say the things that need to be there in order to produce the type of output that they want to create, get the prominence and the investment, while the so-called non-functional requirements (this terminology is terrible), such as making sure that there will not be negative consequences, are marginalized in the design process unless there is a significant factor behind them, such as the risk of a huge fine.

speaker

Ansgar Koene

reason

Provides insight into the industry practices that prioritize functionality over safety, especially for children, unless there are strong regulatory incentives.

impact

Deepened the discussion on the challenges of implementing child protection measures in AI development and the role of regulation in incentivizing change.

AI can make homework quicker, but it risks taking away from essential learning skills. As one of my peers put it, it’s making homework easier, but at what cost to our learning?

speaker

Nidhi Ramesh

reason

Offers a nuanced perspective on the double-edged nature of AI in education, highlighting concerns about its impact on fundamental learning processes.

impact

Brought the discussion back to the practical, everyday implications of AI for children, particularly in education, and raised questions about long-term consequences.

Overall Assessment

These key comments shaped the discussion by highlighting the pervasive yet often invisible influence of AI on children’s lives, the rapid pace of adoption, the initial oversight in legislation, the challenges in implementing protective measures, and the complex implications for education and development. The discussion evolved from raising awareness about the issue to exploring regulatory approaches and industry practices, and finally to considering the nuanced impacts on children’s learning and development. This progression deepened the conversation, moving from broad concerns to specific challenges and potential solutions, while consistently emphasizing the need for a child-centric approach to AI development and regulation.

Follow-up Questions

How can families be better educated and involved in protecting children online?

speaker

Peter Zanga Jackson

explanation

This question addresses the fundamental role of families in safeguarding children’s online experiences and suggests the need for more awareness and education at the family level.

How can we ensure AI systems are designed with children’s well-being as a core priority?

speaker

Baroness Beeban Kidron

explanation

This area of research is crucial for developing AI systems that prioritize children’s rights and safety, rather than exploiting their vulnerabilities for profit.

How can we better implement the principle of ‘safety by design’ in AI and other technologies?

speaker

Jutta Croll

explanation

This question highlights the need to integrate safety considerations from the earliest stages of technology development, especially for systems that may be used by children.

Are other countries outside of Europe developing similar robust regulations for AI and children’s rights?

speaker

Lena Slachmuijlder

explanation

This area of research is important for understanding the global landscape of AI regulation and children’s rights protection across different jurisdictions.

How can consumer rights organizations be more involved in pressuring tech companies to respect children’s rights?

speaker

Dorothy Gordon (UNESCO)

explanation

This question explores the potential role of consumer advocacy in driving change in tech company practices regarding children’s rights and AI.

How can we address the educational risks of AI, such as its impact on critical thinking and creativity?

speaker

Nidhi Ramesh

explanation

This area of research is important for understanding and mitigating the potential negative effects of AI on children’s learning and skill development.

How can we better inform children about how their data is being used by AI systems?

speaker

Nidhi Ramesh

explanation

This question addresses the need for transparency and education around AI and data privacy for young users.

How can we address the psychological risks of AI, particularly systems designed for companionship?

speaker

Nidhi Ramesh

explanation

This area of research is crucial for understanding and mitigating the potential long-term psychological impacts of AI companionship on children’s social skills and emotional development.

How can we better protect against the threat of AI-generated deep fakes, especially for young people?

speaker

Nidhi Ramesh

explanation

This question addresses the growing concern of AI-generated misinformation and its potential impact on children’s online safety and reputation.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.