WS #279 AI: Guardian for Critical Infrastructure in Developing World

Session at a Glance

Summary

This panel discussion focused on the challenges and opportunities of using AI to secure critical infrastructure in developing countries. Experts from various sectors discussed key issues including cybersecurity risks, capacity building, and international cooperation.

Panelists highlighted several challenges faced by developing countries, including legacy infrastructure, lack of cybersecurity expertise, and limited resources. They emphasized the need for upskilling technical professionals and leveraging AI to enhance threat detection and response capabilities. The importance of multi-stakeholder collaboration was stressed, with calls for partnerships between governments, private sector, and civil society to develop affordable and accessible AI-powered security solutions.

The discussion explored strategies for reducing dependence on foreign technology, including developing robust domestic legal frameworks, fostering regional cooperation, and investing in local capacity building. Panelists also addressed the need to balance AI-driven security with privacy concerns and ethical considerations, suggesting a risk-based approach and adherence to international standards and human rights principles.

Key recommendations included prioritizing critical infrastructure protection, developing national and regional cybersecurity frameworks, and participating in international forums to share best practices. The importance of tailoring solutions to local contexts while adhering to global standards was emphasized. Panelists also discussed the need for sustainable funding models and tiered pricing to ensure accessibility for developing countries.

Overall, the discussion underscored the potential of AI in enhancing critical infrastructure security while highlighting the need for collaborative, ethical, and context-sensitive approaches to implementation in developing countries.

Key points

Major discussion points:

– Challenges in securing critical infrastructure in developing countries, including legacy systems, lack of expertise, and digital transformation issues

– Strategies for training and upskilling cybersecurity professionals in developing countries

– Risks associated with AI systems for critical infrastructure and ways to mitigate them

– Need for international cooperation and partnerships to share best practices on critical infrastructure security

– Balancing AI-driven security with privacy and ethical considerations

The overall purpose of the discussion was to explore how developing countries can leverage AI to enhance the security of critical infrastructure, while addressing challenges around expertise, resources, and international cooperation.

The tone of the discussion was largely informative and collaborative. Speakers shared insights from their various backgrounds and perspectives, building on each other’s points. There was an emphasis on the need for global cooperation and knowledge sharing to address these complex challenges. The tone remained consistent throughout, with speakers maintaining a constructive and solution-oriented approach.

Speakers

– Harisa Shahid: Co-organizer of the session

– Muhammad Umair Ali: Co-organizer of the session, employed in the AI field, represents the private sector

– Hafiz Muhammad Farooq: Cybersecurity architect at Saudi Aramco, 20+ years experience in network and cybersecurity

– Jenna Fung: Program director for NetMission.Asia Internet Governance Academy, leads Asia Pacific Youth IGF

– Daniel Lohrmann: Leads the public sector portfolio at Presidio, cybersecurity professional with 30+ years of experience

– Gyan Prakash Tripathi: Lawyer, worked with think tanks and research organizations, represents Civil Society Stakeholder Group

– Jacco-Pepijn Baljet: Senior policy officer at the Ministry of Foreign Affairs of the Netherlands

Additional speakers:

– Fernando: Part of the Brazilian youth delegation, works at a network provider

– Thuy: From the .vn registry, part of the technical community

Full session report

Expanded Summary: AI for Securing Critical Infrastructure in Developing Countries

This panel discussion brought together experts from various sectors to explore the challenges and opportunities of using AI to secure critical infrastructure in developing countries. The session, co-organised by Harisa Shahid and Muhammad Umair Ali, featured speakers with diverse backgrounds in cybersecurity, policy, law, and youth engagement.

Key Challenges in Securing Critical Infrastructure

The discussion began with Hafiz Muhammad Farooq, a cybersecurity architect at Saudi Aramco, outlining three major challenges faced by developing countries, particularly in the MENA region:

1. Legacy infrastructure: Outdated systems create vulnerabilities and are difficult to secure.

2. Lack of cybersecurity expertise: There is a shortage of professionals skilled in protecting industrial control systems.

3. Digital transformation issues: Rapid adoption of new technologies without adequate security measures.

Jacco-Pepijn Baljet, from the Dutch Ministry of Foreign Affairs, added that limited resources and budget constraints further exacerbate these challenges. Jenna Fung, representing the youth perspective, highlighted knowledge gaps due to less exposure to new technologies in developing countries and emphasised the digital divide that necessitates tailored capacity-building strategies.

Leveraging AI for Enhanced Security

Despite these challenges, speakers agreed that AI presents significant opportunities for improving critical infrastructure security. Hafiz Muhammad Farooq emphasised that AI can augment threat detection and response capabilities, enabling automated analysis of large-scale infrastructure data. However, Daniel Lohrmann, a cybersecurity professional with over 30 years of experience, cautioned that AI systems themselves face risks such as data poisoning, privacy attacks, adversarial attacks, model theft, and supply-chain vulnerabilities in third-party dependencies.
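
The anomaly-detection approach the panelists describe, baselining normal operational data and flagging sharp deviations in real time, can be illustrated with a minimal sketch. This is not from the session: the function, threshold, and sensor values are illustrative assumptions, and production systems would use ML models over many correlated signals rather than a single z-score test.

```python
# Illustrative sketch only: flag telemetry readings that deviate sharply
# from a baseline of known-good operational data (a simple z-score test).
# Real AI-driven detection uses learned models over many signals.
from statistics import mean, stdev

def zscore_alerts(baseline, new_readings, threshold=3.0):
    """Return indices of new_readings whose z-score against the
    known-good baseline exceeds the threshold."""
    mu = mean(baseline)
    sigma = stdev(baseline) or 1e-9  # guard against a perfectly flat baseline
    return [i for i, r in enumerate(new_readings)
            if abs(r - mu) / sigma > threshold]

# Hypothetical pump-pressure telemetry: steady readings, then new samples
# containing one suspicious spike.
baseline = [101.2, 100.8, 101.0, 100.9, 101.1, 101.0, 100.7]
print(zscore_alerts(baseline, [100.9, 150.0, 101.0]))  # → [1]
```

Fitting the baseline only on known-good data matters: including the spike itself would inflate the standard deviation and mask the anomaly, which is also the intuition behind the data poisoning attacks Lohrmann warns about.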

Lohrmann also highlighted a unique advantage of AI in overcoming language barriers, suggesting that AI could make cybersecurity solutions available in multiple languages, thereby increasing accessibility for developing countries. He specifically mentioned critical infrastructure sectors such as utilities, finance, government, and transportation as key areas for AI application.

Capacity Building Strategies

The panel agreed on the critical need for capacity building in developing countries. Jenna Fung advocated for developing national strategies tailored to local contexts and leveraging online resources and regional educational opportunities. Daniel Lohrmann suggested establishing public-private partnerships for knowledge transfer and implementing tiered pricing models for AI security solutions to ensure affordability.

An audience member, Fernando, raised concerns about retaining cybersecurity professionals in their home countries. In response, Hafiz Muhammad Farooq suggested focusing on producing more talent rather than on retention, emphasising the importance of continuous education and skill development.

International Cooperation and Partnerships

Speakers unanimously agreed on the importance of international collaboration. Jacco-Pepijn Baljet emphasised the need to exchange best practices and lessons learned through international forums. He also highlighted the relevance of the Global Digital Compact in AI governance. Hafiz Muhammad Farooq called for the development of global standards and frameworks specifically for AI in critical infrastructure, mentioning the EU AI Act and the NIS2 Directive as examples.

Gyan Prakash Tripathi, representing the Civil Society Stakeholder Group, suggested forming regional knowledge-sharing and R&D blocks to pool available resources. He also proposed a three-pronged strategy for developing countries to reduce dependence on foreign technology:

1. Robust domestic legal framework and strategic contracting

2. Inclusive, transparent, and accountable governance mechanisms

3. Regional cooperation and capacity-building for long-term sovereignty

Balancing Security, Privacy, and Ethics

The discussion addressed the need to balance AI-driven security with privacy concerns and ethical considerations. Daniel Lohrmann advocated for implementing robust data governance and secure model development practices. Jacco-Pepijn Baljet suggested adopting a risk-based approach to regulating AI systems and emphasised the need for both high-level international agreements and local legislation to address ethical considerations in AI implementation.

Key Recommendations and Action Items

1. Develop national strategies for tailored capacity building in cybersecurity and AI.

2. Establish public-private partnerships for knowledge transfer and technology access.

3. Work towards creating global standards and frameworks for AI in critical infrastructure security through multi-stakeholder cooperation.

4. Implement tiered pricing models for AI security solutions to ensure affordability for developing countries.

5. Increase collaboration and knowledge sharing through international forums and regional partnerships.

Resilience Strategies and Future Considerations

Daniel Lohrmann suggested specific resilience strategies against AI-driven attacks, including AI-powered threat intelligence, red teaming, simulated attacks, and comprehensive incident response plans. The panel also discussed the importance of balancing global principles with local context when developing AI and cybersecurity policies.

Conclusion

The discussion underscored the potential of AI in enhancing critical infrastructure security while highlighting the need for collaborative, ethical, and context-sensitive approaches to implementation in developing countries. Future initiatives in this area are likely to focus on collaborative efforts, knowledge sharing, and capacity building, while also addressing the ethical and security concerns associated with AI implementation.

Session Transcript

Harisa Shahid: community here, and I’m joined by my co-organizer and partner, Mr. Muhammad Umair Ali from Pakistan, who is employed in the AI field and represents the private sector group here. And without further ado, I would like to introduce the esteemed panelists for our session today. We have Mr. Jacco-Pepijn Baljet, who is a senior policy officer at the Ministry of Foreign Affairs of the Netherlands. He has vast experience in fostering partnerships to address cyber-related issues across several countries. He has served at the permanent mission of the Netherlands to the UN in Vienna on nuclear issues and at the European Union delegation to the Council of Europe in Strasbourg. Next, we have Mr. Hafiz Farooq with us. Mr. Hafiz Farooq is a cybersecurity architect at Saudi Aramco. He holds over 20 years of experience in network and cybersecurity. He is a three-time fellow at the Internet Corporation for Assigned Names and Numbers, ICANN, and is serving as a member of the Root Server System Advisory Committee and other working groups. He also serves on the advisory board of several Fortune 500 companies. He is an esteemed cybersecurity professional and is joining us here on site. Now, without further delay, I would like to give the stage to Mr. Muhammad Umair Ali, who would like to introduce our online speakers. So you can continue, Umair.

Muhammad Umair Ali: Hi, Harisa. Hi, everyone. Thank you so much for joining. So yes, without further delay, I would like to introduce the virtual speakers. We have Mr. Daniel Lohrmann. Mr. Lohrmann is an esteemed cybersecurity professional. He currently leads the public sector portfolio at Presidio. He’s an accomplished author and award-winning cybersecurity professional with over 30 years of work experience, starting at the National Security Agency of the United States government, and he has since worked with the Department of Homeland Security as well as the White House and other organizations. So he’s joining us today from New York. I guess it’s quite an early time there. So thank you, Mr. Daniel, for joining us. Following that, Jenna Fung. Ms. Jenna Fung is the program director for the NetMission.Asia Internet Governance Academy. She also leads the Asia Pacific Youth IGF and is an elected member of the Youth Coalition on Internet Governance Steering Committee. She’s joining us from Toronto, Canada today. Welcome, Jenna. And up next, we have the final panelist, Mr. Gyan Prakash Tripathi. Mr. Gyan is a lawyer and has worked with several think tanks and research-based organizations. He is joining us as a representative of the Civil Society Stakeholder Group today, and he is currently based in Vienna, Austria. Thank you so much, Mr. Gyan and everyone else for joining. Over to you, Harisa.

Harisa Shahid: Okay, thank you so much, Umair, and thank you so much to our esteemed speakers for joining us today. So starting with this session, there is a question: what actually is critical infrastructure? Critical infrastructure refers to the physical and digital systems, assets, and networks that are essential to the functioning of a society and economy. These systems are crucial for ensuring public safety, economic stability, and national security. Any disruption or damage to critical infrastructure can have serious consequences for public health, safety, and obviously the national and global economy. Such infrastructure includes but is not limited to energy infrastructure, the transport network, the healthcare network, financial services, defense services, and critical manufacturing, among others. Today, we aim to discuss navigating the security of such critical infrastructure in the rapidly developing age of AI through multi-stakeholder participation, international cooperation, capacity development, resource allocation, and building resilience into the infrastructure of developing countries. So, this brings me to my first question, which I would like to ask Mr. Hafiz Farooq: what are the unique challenges faced by developing countries, particularly as we have seen from the Middle East and North Africa to South Asia, in securing critical infrastructure from cyber threats, and how can AI be used to address them?

Hafiz Muhammad Farooq: First of all, thank you very much for inviting me today for this great panel discussion. I’m Hafiz Farooq from Saudi Aramco, so it’s a great question. I would say in the developing countries, especially MENA itself, the major challenge we have is in the area of critical infrastructure. I would say there are three major areas where we see issues. Area number one, I would say, is legacy infrastructure. In the developing countries, the companies don’t have huge budgets to upgrade their security infrastructures. They keep using outdated systems and technologies because of lack of resources and lack of budgets, and here comes a problem. These old systems they keep using have lots of vulnerabilities; they don’t have the security features which are required these days. So they actually create a huge attack surface for attackers to attack your infrastructure, and here comes a problem. So, the legacy system is one problem. The second problem which I want to highlight in the critical infrastructure is the lack of security expertise. You know, the lack of expertise in the critical infrastructure domain is a global problem. It’s not only a problem for the developing countries, it is a problem everywhere. But obviously developing countries are also getting the heat of this problem. You will find many security experts in the industry who know about the TCP protocol, but when you talk about any ICS protocol, like Modbus TCP, you will not find many experts who know the in-depth details of the technology. So I would say lack of expertise is one of the problems, and companies need to dedicate some budget to training their resources, training their individuals, to make sure they are on top of the new technologies coming in this area. And the third important area which I want to highlight is digital transformation.
It’s not an issue in itself; I know all of you guys love digital transformation and I really appreciate that too, but the problem is people do spend money on digital transformation, yet they don’t give attention to spending some money on securing the digital infrastructure. So when you are deploying these digital infrastructures, make sure that you deploy cybersecurity controls on top of that. And if you don’t do that, these transformations will become a pain in time to come. So you need to keep this thing in mind. Now coming to the second part of your question, Harisa, which is about how we can use AI for this. I think obviously AI is a great technology; it can do many major things to secure our critical infrastructures, but two areas are the key areas where AI can be very useful. One of them is threat detection and response. You can ingest all your data from your critical infrastructure in real time into your algorithms, and they can find anomalies in your daily operations and find out if there is a real-time security threat. So detection and response can be augmented by AI big time, there is no doubt about it. Especially for a company like Aramco, we have a massive infrastructure; I mean, we have millions of assets scattered all across the world. We would need an army of resources, an army of SOC analysts sitting in the SOC in real time, doing analysis on these events, which is impossible. So here comes the role of AI, where AI algorithms can tap in, they can jump in, and they can make life easy for you. This is what my company is doing. We can’t just employ hundreds of security analysts to do everything. We have to rely on AI. So I hope I answered your question. Thank you.

Harisa Shahid: Perfect, very well. Thank you so much for the great points made. One of the things you highlighted is the lack of expertise; as we all know, it is a very major problem. And the first issue that comes to mind when we talk about cybersecurity and AI is that to deploy these solutions, you must have the required expertise to work in these areas. So this brings me to my next question, which is for Jenna Fung: what are the most effective strategies for training and upskilling technical professionals in developing countries? As I’ve seen, you have been working with some civil organizations and the community as well. And how can we leverage AI for critical infrastructure security, and what are the limitations to its adoption? The floor is all yours, Jenna. Okay. I think Jenna is unable to unmute herself. Can you please make her a co-host? Okay, can we make Ms. Jenna Fung the co-host?

Muhammad Umair Ali: Also, can we do the same for Gyan and for Daniel Lohrmann? I think I put in requests to the IGF host but haven’t heard back from them yet.

Harisa Shahid: Okay, they’re working on it. It’s done? Okay, it’s done. I think, Jenna, can you please try again? Okay. Okay, I think, while Jenna tries to reconnect, we can move on to our next question, which is for Mr. Dan. What are the primary cybersecurity risks associated with AI systems, and how can these risks be mitigated to protect critical infrastructure? Dan, are you able to unmute yourself? I think the host is unmuting me again and again, and not Dan and Jenna. Can you please unmute Mr. Dan and Jenna Fung? Hello, can you hear me now? Yeah, yeah, perfect. We can hear you.

Daniel Lohrmann: Yeah, but I cannot, the video is not started, so I don’t know if you can see me, but I can certainly start talking if you’d like. Yeah, sure. Yeah, I’m getting a message saying the host must unmute you, or the video is not enabled. So, yeah, so thank you all. First of all, great to join you today, and as soon as the video comes live, I’ll be happy to be on video, but it’s great to be with everyone. I’m actually in Michigan in the USA, and this question is a really important question. I mean, there are a lot of different challenges. Just to repeat again, you want me to answer the primary cybersecurity risks associated with AI systems, and how these can be mitigated? Is that correct? Yeah, yeah, yeah. Great. So, first of all, I would just say that AI is being used extensively to attack us, so AI systems can be exploited to execute large-scale automated attacks, such as spear phishing and malware distribution. And AI-driven attacks are actually spreading, broadening, and deepening the attacks against critical infrastructure worldwide. So this is happening all over the United States right now, all over the world right now. And still the video is not working. But on the actual AI systems themselves, I just want to mention four or five different areas, and I can dive into some of the ways we can mitigate these. These range from data poisoning attacks to privacy attacks to adversarial attacks. So, for example, in data poisoning attacks, malicious actors manipulate training data to bias or compromise AI models, leading to faulty decision-making. Poisoned data can cause an AI system managing a power grid to misclassify a threat, resulting in an outage. In privacy attacks, an AI system might reveal patient data used during training. In an adversarial attack, attackers input specifically crafted data to deceive AI systems, causing incorrect outputs. And so we need to make sure that those are protected against.
Another type of risk we have is model theft. AI models are stolen via exposed APIs, application programming interfaces, or insider threats, enabling attackers to duplicate or misuse them. Stolen models can be weaponized to attack critical systems or sold to competitors. And then just a couple more: dependency or supply chain vulnerability attacks. Third-party components or open-source libraries used in AI systems might contain vulnerabilities. A compromised library in an AI application managing the water supply can serve as an entry point for attackers. On the second part of your question, I’ll just mention briefly some things we can do around mitigating these. Mitigating AI cybersecurity risks in critical infrastructure requires us to have a robust data governance model, such as validating data sets using differential privacy, which is a technique to prevent data poisoning and privacy attacks. We also need to make sure we’re doing secure model development, including adversarial training and regular updates, to build resilience, so that when we have attacks, you know, we’re able to sustain them and recover. Access controls, encryption, and network segmentation can protect against unauthorized access and the spread of these attacks. Third-party risks can be reduced through stringent vetting and secure software practices. Continuous monitoring with AI-driven anomaly detection can ensure proactive threat management. And then lastly, I just want to mention that incident response plans need to be updated. There are models like NIST and a variety of different really great incident response plans. Be ready, so that when attacks do happen, you can collaborate on threat intelligence to strengthen people’s defenses. And as was mentioned earlier, there needs to be more training and awareness to create a culture of security and resilience.
So those are some of my… It’s working now, so good to see you. And so those are some of my opening comments.

Harisa Shahid: Thank you so much. Thank you so much, Mr. Dan. And now I would like to move towards Jenna. And Jenna, I will repeat the question. The question was that what are the most effective strategies for training and upskilling technical professionals or youth in the developing countries to leverage AI for critical infrastructure security? And what do you see are the limitations for their adoption?

Jenna Fung: All right, thank you so much. I hope that I am audible in the room, and awesome, I got a thumbs up from Dan as well, so I assume remote participants can also hear me clearly. Thanks for having me on this panel. Given my background, I work mostly with young people in Asia Pacific on capacity building, having some knowledge to a certain extent about cybersecurity and infrastructure and all that. Although I can’t speak to the technicalities of all the subject matter, I do have some opinions on how I see what we could do, or do better, for capacity building, especially since, as the title of our session assumes, in many senses we are using these ever-evolving technologies for critical infrastructure these days. With my experience working with many young people in Asia Pacific in the past six, seven years, you can see that much of the time there is some knowledge gap there. As I currently reside in North America as well, I see some differences. When we are exposed to the same level of development, things are really new, and people who have knowledge, like for example governments or companies, are using it on infrastructure or things that people anywhere use every day. But because, for example, people in developing countries might have fewer resources or opportunities to be exposed to information or resources to learn more about it, they essentially become more vulnerable, and there are big gaps in between. And I will spare you all the details about the digital divide and all that. I think essentially, of course, ideally, like you said, there should be tailor-made national strategies on how to do capacity building for people who implement or execute these kinds of technologies in their work.
But especially, for example, government officials or even civil servants who use it at their work, I think they should be the first group of people who need resources. But there are also the people who are impacted by the implementation of these technologies in the infrastructure; they should develop their literacy with these kinds of tools or the use of AI as well. So I think national strategies would be ideal and helpful. But much of the time, because, like I said, there are financial constraints or resource constraints, and there are many other, even more critical things that you need to invest in, put effort into, or prioritize, because there are geopolitical tensions, or you need to allocate other resources and prioritize your energy for dealing with, for example, climate change and all that. And there are times when capacity building will be put on the back burner. So I think there are times when individuals, and especially young people in developing countries, can leverage the power of the Internet to look for resources elsewhere. Even if not within your own country, perhaps you can look within your region to see if there are any NGOs or organizations providing this kind of educational opportunity for you to enrich your own knowledge. And perhaps, as many people are aware, a lot of big corporations also offer some sort of skills training, like micro-credential opportunities, for you to learn about things as well. So I think that will be helpful for young people to develop knowledge as well. So I will stop here, and hopefully we can chat more and touch upon other questions as the audience asks questions later on. Thanks.

Harisa Shahid: Thank you so much, Jenna. The points are very well made. One of the most important points I would highlight here, as Jenna mentioned, is that we can look within our own region to educate people. And because we are specifically talking about developing countries, it is always difficult for them to invest more resources and to get resources from across borders, right? So this leads to the next question, which I would like to ask Gyan: how can developing countries rely less on foreign technology to ensure the security of critical infrastructure and maintain digital sovereignty?

Gyan Prakash Tripathi: Thanks, Harisa. And hello, everyone. Excellent to see many familiar faces on the panel and in the audience. This question of digital dependency and the export of technology, as well as the governance architecture, is also something that kept popping up during the now-concluded first research cycle at the Observatory of Information and Democracy. Through our meta-analysis of global literature, we observed the emergence of epistemic injustice due to the corporate incentives, strategies, and practices involved in designing, developing, selling, and controlling the socio-technical solutions that are at the heart of information ecosystems. These then make global South nations vulnerable to exploitation by further privileging information and knowledge that are neither representative nor inclusive. To address this, I suggest a three-pronged strategy that emphasizes legal safeguards, multi-stakeholder accountability, and capacity-building measures. The first prong is a robust domestic legal framework and strategic contracting. Here, global South and developing nations must codify clear obligations into their legislation, which must itself be human rights-centric. They must enact legislation and regulations that mandate transparency, human rights due diligence, and data protection protocols for all technology suppliers, regardless of their origin. They must also have stringent contractual terms that demand technology transfer, skills development, and long-term support arrangements. Among these provisions, they could also include mandatory training for local engineers, commitments to open standards, and clear exit strategies that can prevent vendor lock-in. The second prong that I would strongly suggest is inclusive, transparent, and accountable governance mechanisms, which can be achieved through multi-stakeholderism.
There must be clear and direct independent oversight by bodies that include government representatives, CSOs, industry experts, and also human rights advocates. But I don’t think I need to elaborate on that in this forum, and the approach is, of course, well-documented. The third and more critical prong of the strategy is regional cooperation and capacity-building for long-term sovereignty. It is pertinent that global South nations form knowledge, R&D, and cooperative blocks to compound the resources they have available. This can be done either through collaboration with geographically proximate countries facing similar challenges to develop common legal and technical standards, or by forming issue- or interest-based blocks, which can increase collective bargaining power and reduce the risk of exploitative deals. Each prong of this strategy seeks to reinforce sovereignty, protect local interests, and uphold human rights standards. By implementing this, developing countries can create a balanced, forward-looking legal and policy ecosystem which will respect human rights, reinforce sovereignty, and foster resilient, fair, and beneficial technology partnerships. Thank you, and back to you, Harisa.

Harisa Shahid: Thank you so much, Gyan, for your valuable input. So after listening to every speaker here, I would like to move on to Mr. Jacco. You have worked with the government sector and have expertise in that area, so I would like to ask you: what do you see as the key challenges and opportunities in establishing international partnerships to share best practices and technologies for critical infrastructure security, particularly in the context of AI?

Jacco-Pepijn Baljet: Thank you, Harisa, and thank you to all the speakers before me. I can actually echo many of the points made before, because together they bring all the perspectives together, and I think that’s also very symbolic of the IGF, where all stakeholders come together and learn from each other. So thank you for that. I would say, as we’ve heard before, one of the challenges usually is to bring enough human resources and finances together. On the other hand, AI can also relieve a lot of challenges in terms of human resources, as the other speakers said, because you can use AI instead of having people check all the cybersecurity vulnerabilities. So these challenges together mean that one has to prioritize. And especially in terms of critical infrastructure, internationally we have not agreed on one definition of what critical infrastructure is. And maybe we don’t need one, because every country and every region is different. We did agree in the UN that the core infrastructure of the Internet, the general availability and integrity of the Internet, is part of critical infrastructure. And I’m glad you said you were also involved with ICANN; of course, ICANN is also part of this, so I would like to stress that as well. But other than that, I think many countries will share many critical infrastructure ideas. It’s quite logical to say that the energy grid, the water supply, or your own cybersecurity operations center, your SOC or your CSIRT, are part of your critical infrastructure. So I would say one has to prioritize. Every country has to find its own national priorities. But you can, of course, exchange within your region, as was also mentioned: what are your highest-priority issues? And one opportunity in international partnerships is exchanging best practices and ideas, and also negative experiences.
It’s very important that you share negative experiences so that people can learn from each other. And there are a number of international mechanisms already for that. There’s the Internet Governance Forum, but there’s also the AI for Good Summit with the ITU in Geneva. There are many forums open to stakeholders. There’s also the Global Forum on Cyber Expertise, which is built to share this knowledge and to match supply and demand in capacity building, bringing stakeholders from the private sector and the public sector together, governments from both the Global South and other countries. These days we also hear a lot about the Digital Cooperation Organization; it’s also an interesting organization that brings stakeholders together. I don’t know if they do a lot of work on AI yet, but I think that would be a logical step too. And here at the IGF there’s also a lot of talk about the Global Digital Compact and what has come out of it at the UN level. There are a number of mechanisms that will now have to be implemented on AI governance, and you see there too that it’s built to bring all the stakeholders together. And I think that’s the key message I want to give: it’s important that any mechanism or international partnership actually brings civil society, academia, the private sector, the technical community, and governments together to really learn from each other and not only speak in their own bubble or silo, and also between different owners. Because critical infrastructure is sometimes owned by the state and sometimes by a private sector company. So it’s important that within your country, too, you have mechanisms to exchange knowledge and experiences between the different stakeholders.

Harisa Shahid: Very true. Thank you so much for your input. This leads to the next question. As we have already mentioned, it’s important for all the stakeholders, including the private sector, the government, and civil society, to collaborate to develop and deploy AI-powered security solutions that are affordable, because we are talking about developing countries, and accessible to developing countries. So please, over to you, Mr. Dan.

Daniel Lohrmann: Yeah, thank you for the question. And I really appreciate the comments that were just given, because I think they really lead into that well. This is a huge challenge. I would just echo some of the comments; I’ve prepared a number of different aspects of how private sector companies can collaborate with governments and civil society. First of all, it starts with a commitment that you want to do it. There’s a saying we use a lot in the US: when you’ve climbed the ladder, you need to send the ladder back down and help other people up. It’s in all of our interests, the global interest, to work together and to partner. Many, many companies, certainly in the US but all around the world, have great partnerships in developing countries. For others it’s a new thing, but they recognize that it’s in the long-term best interest of everyone, the whole society, but also of their own companies, where they want to go and how they want to work together and partner in the future. So how can you do that? Public-private partnerships is a big one, and partnering with NGOs, non-governmental organizations. From a practical perspective, you need tiered pricing models, offering subsidized or tiered pricing for AI-powered solutions to ensure affordability for low-income regions. That’s done in other areas of society; we’ve talked about, for example, pharmaceutical prices, drug prices for different parts of the world. There are models around that, and the same kinds of things may need to be considered with AI and technology. Then capacity building and skills development across organizations, really making sure we have local training programs that meet local needs. Because I’m sitting here in the United States; obviously, I don’t understand the specific needs of developing countries.
We’re very interested in that and in working together with those on this panel and others around the world, and we’d love to help in different parts of the world, in developing countries in Africa and elsewhere. But honestly, developing these partnerships that transfer AI expertise to local professionals will ensure long-term sustainability, and it needs to be contextualized and localized. Then there is thinking about long-term sustainable models and infrastructure investment: partnering with governments to build the necessary digital infrastructure, such as cloud storage and broadband access in developing regions, while also ensuring that local needs are being met, from privacy perspectives, and that we have proper funding mechanisms in place as well. And that’s a big challenge: leveraging international development funds. I know this is a UN panel, but we should look at ways we can use grants to finance initial deployments of AI-powered solutions, and then really talk about, and I saw some questions, maybe we’ll get to those in a few minutes, local pilots or proofs of concept in a local context. I think those are really important. So affordable, accessible: it really is going to require multi-stakeholder coalitions. Really establishing coalitions with international organizations, whether that be the UN or the World Bank, working together with NGOs, as I mentioned, and advocacy groups, and then just making sure that we all speak the same language. And I wanted to close on that. Even some of the terms we use in the US are different from the terms people use around the world, and part of that is language, different views on different spellings of words in English and that kind of thing. I’m horrible at foreign languages, by the way, so I admit that up front.
But even on terminology, as we think about AI, I think AI can help us, and that’s a positive note. I’ve seen applications in the USA, with different counties and cities and governments around the United States, where they used to support one or two languages and now support 140, 150 languages. And those same applications can be scaled to work in a wide variety of different communities. Montgomery County, in the Washington, D.C. area, is a great example. The application is called Monty, M-O-N-T-Y. It’s a great application. It’s in the United States, but it serves communities, people from all over the world who live in that area, who now have access to over 100 applications in their own language. So I think AI can help us there, and it can actually be part of the solution: making solutions that are available, maybe in English, available in multiple languages around the world. The ability to reuse applications and to learn from others is a big part of this solution. Not reinventing the wheel, if you will, but partnering and saying, OK, this government in the US and this government in Europe is running this really successful application; how can we apply that in developing countries?

Harisa Shahid: Exactly. Very well. Hi, sir, are you still speaking? We are unable to hear you. Oh, I’m so sorry. Actually, I switched my channel; apologies for that. OK, moving forward, I have a similar question for you as well, Mr. Hafiz Farooq: how can multiple stakeholders work together on developing a global or a regional framework? Because some regions do have frameworks for cybersecurity and related areas. But if I talk about some developing countries, like my country, Pakistan, we don’t have a framework specifically for cybersecurity or information security. So how can multiple stakeholders work towards developing a global or a regional framework for the incorporation of AI in critical infrastructure security?

Hafiz Muhammad Farooq: Thank you, Harisa, for another great question. I generally agree with what Dan said: there has to be a global standard, first of all. This year of frameworks and legislation was very good for the cybersecurity industry, because we have seen many standards and many pieces of legislation coming across. I will give you a few examples. Take Singapore: Singapore recently released its Cybersecurity Master Plan 2024, which deals with critical infrastructures. Similarly, Hong Kong for the first time passed a bill for the protection of critical infrastructures. That is another example. Similarly, the USA has revamped its cybersecurity strategy by including security threat cases for the protection of critical infrastructure. So this shows how the cybersecurity industry is moving towards legislation, frameworks, and standardization. Also, most of you might be aware of the European Union: they recently enforced the NIS2 Directive and the AI Act. That is something very promising as well. So things were really positive in 2024, but I think the missing part is critical infrastructure legislation covering AI. All the legislation I’m talking about doesn’t cover the use of AI for the cybersecurity of critical infrastructures. That is the missing piece right now. How to address that? As Dan said, it has to be global first. I don’t think a regional approach alone is going to help us. First, developing countries and the technology giants need to sit together and work on a global framework, and then the regional frameworks should follow it. I don’t think a single country like Pakistan, or even Saudi Arabia, can handle the bigger spectrum of cybersecurity threats alone. They need to work in collaboration. The UN is a good forum; the ITU is a good forum.
They should take the lead and actually standardize the use of AI for the cybersecurity of critical infrastructure. I think more research and development and more collaboration are required for the time being to understand how AI is going to be used for the protection of our infrastructures. So there is still more work to do in the years to come, and I hope we move fast, before the attackers start using AI. We, the defenders, should start using AI as well to protect our infrastructure. I hope I answered your question. Thank you.

Harisa Shahid: Definitely. So when we talk about AI, there come some ethical considerations and some other security issues as well, because AI has its own concerns. Moving on to Mr. Jacco, I have a question for you: how can governments in developing countries effectively balance the need for AI-driven security with privacy concerns and ethical considerations?

Jacco-Pepijn Baljet: Thank you, Harisa, and thank you, Hafiz, also for mentioning the need for standards. To start off with your question about security, privacy, and ethical considerations, which is a great and very relevant question: I would say that maybe there is not really a dichotomy between the two; you really need both at the same time. Usually more privacy or more ethical consideration does not mean less security; both go hand in hand, of course. And here too I would say that when you talk about cybersecurity, the basic principles, the general principles, will be the same whether you use AI or not. The only big difference is that AI enlarges many things and makes many things much more impactful, both positively and negatively. You can better defend against cyberattacks with AI, but the privacy risks, and the risk of false data being used to train models, are also much bigger. So I think that does require international cooperation. The EU AI Act was also mentioned, with its risk-based approach to AI systems. So I think the best way to incorporate this, and I am going to continue with Hafiz’s thread here on international standards and cooperation, is to think about what we have in common universally: we have universal human rights, and we have universal standards already there at the UN level, and basically globally. And next to that, we have the local context where everything is happening, and in Pakistan this is different from the Netherlands, and it is also different in Saudi Arabia. We need to take that into account too. We have seen that in the Global Digital Compact as well: when we talk about ethical considerations, there was a push to include them in the Global Digital Compact.
So I think the best approach is to have a high-level agreement, and I agree with Hafiz that that can be at the UN, on the general principles. And then at a lower level, at a more regional or local level, you can have more legislation, or specific critical infrastructure legislation, so that you can look at the UN level and say: OK, we agreed these general principles here on the protection of privacy, or on the protection of security, and we base our national legislation on them. And next to that, because stakeholder cooperation is important, you can do that locally, but you can also do it internationally. We have many standards organizations internationally. It is, of course, always a challenge and a huge investment to actually engage in standards organizations, at the IETF or at ISO or at other organizations. But this is a platform where you can engage with the big technology companies, with the suppliers, and with civil society. So I think it is important to also work there at a technical level, with a multi-stakeholder approach, which is in principle open to everyone but can be improved for inclusivity, to start with the standards for incorporating AI in the cybersecurity field. Thank you.

Harisa Shahid: Exactly. And with that, we are coming to the end of our session. To conclude the points we have highlighted: skill building is very important, and for skill building, collaboration, which is exactly the main point every speaker has highlighted here, collaboration between all the stakeholders, is crucial to enhance AI and cybersecurity and to raise awareness about the use of AI, because right now AI is not being used much for the protection of our critical infrastructure, so awareness is very important. Thank you so much to all of our speakers for joining us today. Now we are moving to the Q&A session, so if anyone in the audience has any questions, please feel free to ask.

Muhamad Umair Ali: just to chip in here followed by the on-site audience we also have a question from the online audience so I think we can proceed with the on-site audience first and followed by that I can ask the question that is in the chat box from the online audience

Harisa Shahid: yeah sure so we have one question here

Audience: hello everyone my name is Fernando I am part of Brazilian youth delegation and I work in a network provider so I’m part of the technical sector one thing that was presented as a problem was the lack of professionals in the cybersecurity AI but another problem that I see is even with a long and continual cybersecurity training most of the professionals eventually go to another country to work so basically my question is how to retain these talents in their country to continue their work

Harisa Shahid: Yeah, that’s a very important question. So anyone from the panel would like to ask? Oh, sorry, would like to answer?

Hafiz Muhammad Farooq: Yeah, I would just want to add a comment on the first of all, Fernando, great question. I mean, you know, the world is getting global. I mean, the the people that are moving around is very difficult to retain a talent. Companies are looking for talents. If you’re sitting in Brazil, and the company needs you some other part of the world, they will they will hook you from there. So this challenge is there. But I think instead of retaining the talent locally, the the challenge is to produce the talent. So because most of these systems, as I told you before, their legacy, I mean, I’m not saying that we are not professional, we are professional, but the systems are so old, that there are not enough documentation available, there is not much stuff available where you can simulate where you can train yourself to see how the system is going to operate. So I think instead of localizing localization of resources, we should concentrate more on the training aspect. And maybe the old legacy vendors, I mean, they should start maybe redoing the documentation. I mean, so actually, the my point is that the knowledge base should be increased instead of trying to localize the resource at a different location. Thank you.

Harisa Shahid: Thank you, Mr. Farouk. Does this answer your question? Okay, perfect. So any more questions we have?

Muhamad Umair Ali: I do have one from the

Harisa Shahid: We have one from the on-site participant.

Muhamad Umair Ali: I’m sorry, on-site or online?

Audience: My name is Thuy. I’m from .vn directory. We are from technical community. Then I have a question that we are talking about promoting using AI to protect our critical infrastructures. So I have a question that what infrastructure do you think will be in scope? of the critical infrastructure. Thank you.

Harisa Shahid: So your question is, basically, what infrastructure do we think is the critical infrastructure, right?

Audience: Yeah. My question for the panelists is that, what can you name some infrastructure that will be in the scope of critical infrastructure that should be protected by AI promotion here? Thank you.

Harisa Shahid: Thank you so much. So anyone from the panel would like to answer the question?

Daniel Lohrman: I can start. Certainly, from the US perspective, we have dedicated sectors that we’ve listed. There’s, depending on which people you talk to, 16 or 17 sectors. So everything from all utilities, water, power, but to finance, certainly banks, insurance companies, et cetera, government sectors. So that’s state, local, federal levels, all different levels of government. But then you can start talking about transportation. So clearly, airlines, trains. I mean, so really, all of the core physical infrastructure in society, and it’s actually a website that you can go to and just type in critical infrastructure, certainly USA. But in North America, there’s a very defined list and what’s covered and what’s not covered in the critical infrastructure.

Harisa Shahid: Thank you so much, Jack. Does this answer your question? Actually, we are running out of time. If you have any question, you can connect with our speakers Will that be OK? Thank you. Amir, do we have a, we can take, I think. only one more question from the off-site participants?

Muhamad Umair Ali: Yes, we can take one question. It’s for Dan. The question is from Ankita Rathi, and the question is, can you then please elaborate on the specific resilience strategies that the organizations should develop to recover from AI-driven attacks?

Daniel Lohrman: Absolutely. There’s a number of things people can think about. When you start thinking about threat intelligence, invest in AI-powered threat intelligence to detect and predict emerging attack patterns. Basically, fight AI with AI. Making sure that cyber attacks are moving faster than ever. You need to fight fire with fire almost is the mentality. You can also have red teaming and simulated attacks. Having tabletop exercises, AI augmented defense tools that allow you to respond very quickly. First of all, you need to know about these attacks that are happening and be able to respond very quickly. But I think overall, you really start with a good resilience strategy. Resilience is a very popular word in the US cybersecurity community right now. I think globally, it’s a hot word. You need to have a comprehensive incident response plan. If your critical infrastructure is attacked, whether that be the water, the utilities, the banks, you need to be aware of it. You need to be able to detect it, and you need to have all parts of your organization able to respond, not just from a technology perspective, but people, process, and technology. That means communication. It means working with all levels of, if your bank was hit, if your utility was hit, water supply was hit, everyone needs to know from the. the business side of things, to your clients, to your customers, what are the steps you’re going to take? How are you going to respond quickly? So once you detect that, being able to respond and recover quickly in a resilient way is really, really key, especially with the ransomware attacks that we’re seeing around the world right now. So hopefully that’s a short answer to a much longer question.

Muhamad Umair Ali: Yes, that was quite helpful. Thank you so much for that, Dan. I think that brings us towards the closure of the session. Any concluding remarks, Harisa? Or any photographic sessions?

Harisa Shahid: Yeah, I think we should have a photograph with all the online speakers and on-site speakers. So can you all please turn on your cameras? Jenna, can you do that? Should I stop sharing the screen?

Muhamad Umair Ali: I think it’s occupying quite a good part on the screen.

Harisa Shahid: Yeah, yeah, yeah, you can.


Hafiz Muhammad Farooq

Speech speed

154 words per minute

Speech length

1267 words

Speech time

493 seconds

Legacy infrastructure creates vulnerabilities

Explanation

Developing countries often use outdated systems and technologies due to budget constraints. These legacy systems have vulnerabilities and lack modern security features, creating a large attack surface for cybercriminals.

Evidence

Companies in developing countries don’t have huge budgets to upgrade their security infrastructures.

Major Discussion Point

Challenges in Securing Critical Infrastructure in Developing Countries

Lack of cybersecurity expertise, especially for industrial control systems

Explanation

There is a global shortage of cybersecurity experts, particularly in the domain of industrial control systems. This lack of expertise is more pronounced in developing countries, making it difficult to secure critical infrastructure.

Evidence

Many security experts know about TCP protocol, but few have in-depth knowledge of ICS protocols like Modbus TCP.

Major Discussion Point

Challenges in Securing Critical Infrastructure in Developing Countries

Agreed with

Jenna Fung

Daniel Lohrmann

Agreed on

Need for capacity building in developing countries

Digital transformation without adequate security measures

Explanation

Companies in developing countries often invest in digital transformation without allocating sufficient resources for cybersecurity. This creates vulnerabilities in the newly deployed digital infrastructure.

Evidence

People spend money on digital transformation but don’t give attention to spending money on securing the digital infrastructure.

Major Discussion Point

Challenges in Securing Critical Infrastructure in Developing Countries

AI can augment threat detection and response capabilities

Explanation

Artificial Intelligence can significantly enhance threat detection and response in critical infrastructure security. AI algorithms can analyze real-time data from infrastructure to identify anomalies and potential security threats.

Evidence

AI algorithms can tap in and make life easy for you, especially in companies with massive infrastructure like Saudi Aramco.

Major Discussion Point

Leveraging AI for Critical Infrastructure Security

AI enables automated analysis of large-scale infrastructure data

Explanation

AI can process and analyze vast amounts of data from critical infrastructure in real-time. This capability is particularly valuable for large organizations with extensive infrastructure that would be impossible to monitor manually.

Evidence

Saudi Aramco has millions of assets scattered across the world, requiring AI algorithms to analyze events instead of relying solely on human SOC analysts.

Major Discussion Point

Leveraging AI for Critical Infrastructure Security

Develop global standards and frameworks for AI in critical infrastructure

Explanation

There is a need for global standards and frameworks specifically addressing the use of AI in critical infrastructure security. These standards should be developed through collaboration between developing countries and technology leaders.

Evidence

Examples of recent cybersecurity legislation and frameworks in various countries, such as Singapore’s Cybersecurity Master Plan 2024 and the EU’s NIS2 directive and AI Act.

Major Discussion Point

International Cooperation for Critical Infrastructure Security

Agreed with

Jacco-Pepijn Baljet

Daniel Lohrmann

Gyan Prakash Tripathi

Agreed on

Importance of international collaboration


Jacco-Pepijn Baljet

Speech speed

151 words per minute

Speech length

1216 words

Speech time

480 seconds

Limited resources and budget constraints

Explanation

Developing countries often face challenges in allocating sufficient human and financial resources for cybersecurity. This limitation makes it difficult to implement comprehensive security measures for critical infrastructure.

Major Discussion Point

Challenges in Securing Critical Infrastructure in Developing Countries

Exchange best practices and lessons learned through international forums

Explanation

International partnerships provide opportunities to share best practices and experiences in critical infrastructure security. Various forums and organizations facilitate this knowledge exchange between countries and stakeholders.

Evidence

Examples of international mechanisms include the Internet Governance Forum, AI for Good Summit, Global Forum on Cyber Expertise, and Digital Cooperation Organization.

Major Discussion Point

International Cooperation for Critical Infrastructure Security

Agreed with

Daniel Lohrmann

Gyan Prakash Tripathi

Hafiz Muhammad Farooq

Agreed on

Importance of international collaboration

Balance global principles with local context in policy development

Explanation

Effective policies for AI-driven security in critical infrastructure should consider both universal principles and local contexts. This approach ensures that global standards are applied while addressing specific regional needs and challenges.

Evidence

The EU AI Act and its risk-based approach to AI systems was mentioned as an example of balancing global principles with local implementation.

Major Discussion Point

Balancing Security, Privacy and Ethics in AI-driven Security


Jenna Fung

Speech speed

133 words per minute

Speech length

629 words

Speech time

282 seconds

Knowledge gaps due to less exposure to new technologies

Explanation

Developing countries often have limited access to information and resources related to new technologies. This lack of exposure creates knowledge gaps, making populations more vulnerable to cyber threats.

Evidence

People in developing countries might have less resources or opportunity to be exposed to information or resources to learn more about new technologies.

Major Discussion Point

Challenges in Securing Critical Infrastructure in Developing Countries

Develop national strategies for tailored capacity building

Explanation

Countries should create customized national strategies for capacity building in AI and cybersecurity. These strategies should address the specific needs of different groups, including government officials, civil servants, and the general public.

Major Discussion Point

Strategies for Capacity Building in Developing Countries

Agreed with

Daniel Lohrmann

Hafiz Muhammad Farooq

Agreed on

Need for capacity building in developing countries

Leverage online resources and regional educational opportunities

Explanation

Individuals in developing countries can use online resources and regional educational programs to enhance their knowledge of AI and cybersecurity. This approach can help overcome resource limitations at the national level.

Evidence

NGOs or organizations within regions may provide educational opportunities. Large corporations also offer skill training and micro-credentials.

Major Discussion Point

Strategies for Capacity Building in Developing Countries

Agreed with

Daniel Lohrmann

Hafiz Muhammad Farooq

Agreed on

Need for capacity building in developing countries


Daniel Lohrmann

Speech speed

151 words per minute

Speech length

1958 words

Speech time

774 seconds

AI systems themselves face risks like data poisoning and adversarial attacks

Explanation

AI systems used for critical infrastructure security are vulnerable to various types of attacks. These include data poisoning, privacy attacks, adversarial attacks, and model theft, which can compromise the effectiveness and reliability of AI-driven security measures.

Evidence

Examples include data poisoning attacks that can cause an AI system managing a power grid to misclassify threats, and adversarial attacks where specifically crafted data deceives AI systems.

Major Discussion Point

Leveraging AI for Critical Infrastructure Security

AI can help overcome language barriers in security applications

Explanation

AI technologies can facilitate multilingual support in security applications. This capability allows for broader access to security information and resources across diverse populations.

Evidence

Example of the Monty app in Montgomery County, Maryland, in the Washington, D.C. area, which supports over 100 languages using AI technology.

Major Discussion Point

Leveraging AI for Critical Infrastructure Security

Establish public-private partnerships for knowledge transfer

Explanation

Collaboration between private sector companies, governments, and civil society is crucial for developing and deploying AI-powered security solutions. These partnerships can facilitate knowledge transfer and ensure solutions are affordable and accessible to developing countries.

Major Discussion Point

Strategies for Capacity Building in Developing Countries

Agreed with

Jacco-Pepijn Baljet

Gyan Prakash Tripathi

Hafiz Muhammad Farooq

Agreed on

Importance of international collaboration

Implement tiered pricing models for AI security solutions

Explanation

To make AI-powered security solutions more accessible to developing countries, companies should consider implementing tiered or subsidized pricing models. This approach ensures affordability for low-income regions while maintaining the quality of security solutions.

Evidence

Comparison to tiered pricing models used in other industries, such as pharmaceutical pricing for different parts of the world.
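
As a purely hypothetical sketch of how such tiered pricing might be expressed in practice (the income groups, discount rates, and base price below are invented assumptions, not figures from the session):

```python
# Hypothetical tiered pricing for an AI security subscription, keyed to
# World Bank-style income groups. All names and discounts are illustrative.
BASE_ANNUAL_PRICE = 10_000  # USD, list price for high-income markets

DISCOUNTS = {
    "high": 0.00,          # full list price
    "upper-middle": 0.40,  # 40% discount
    "lower-middle": 0.70,
    "low": 0.90,           # heavily subsidized tier
}

def tiered_price(income_group: str, base: float = BASE_ANNUAL_PRICE) -> float:
    """Return the subscription price for a country's income group."""
    try:
        return round(base * (1 - DISCOUNTS[income_group]), 2)
    except KeyError:
        raise ValueError(f"unknown income group: {income_group!r}")

print(tiered_price("high"))
print(tiered_price("low"))
```

The design point is that the product is identical across tiers; only the price varies with ability to pay, as in the pharmaceutical analogy above.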

Major Discussion Point

Strategies for Capacity Building in Developing Countries

Agreed with

Jenna Fung

Hafiz Muhammad Farooq

Agreed on

Need for capacity building in developing countries

Implement robust data governance and secure model development practices

Explanation

To mitigate risks associated with AI-driven security systems, organizations should implement strong data governance models and secure development practices. This includes techniques like differential privacy to prevent data poisoning and privacy attacks.
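
Differential privacy, mentioned above, is typically implemented by adding calibrated noise to aggregate statistics. A minimal sketch of the Laplace mechanism for a bounded mean follows (the readings, bounds, and epsilon are invented for illustration and are not from the session):

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, lower, upper, epsilon, rng):
    """Differentially private mean of bounded values (Laplace mechanism).
    Sensitivity of the mean of n values in [lower, upper] is (upper - lower) / n."""
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    true_mean = sum(clipped) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(7)
readings = [52, 48, 50, 51, 49, 47, 53, 50]  # e.g. sensor load percentages
print(private_mean(readings, lower=0, upper=100, epsilon=1.0, rng=rng))
```

Smaller epsilon means stronger privacy but noisier answers; real deployments also track a privacy budget across repeated queries.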

Major Discussion Point

Balancing Security, Privacy and Ethics in AI-driven Security

Establish multi-stakeholder coalitions to address ethical concerns

Explanation

Addressing ethical considerations in AI-driven security requires collaboration between various stakeholders. Multi-stakeholder coalitions can help ensure that AI solutions are developed and deployed responsibly, considering diverse perspectives and concerns.

Major Discussion Point

Balancing Security, Privacy and Ethics in AI-driven Security

Gyan Prakash Tripathi

Speech speed

132 words per minute

Speech length

432 words

Speech time

196 seconds

Form regional knowledge-sharing and R&D blocs

Explanation

Developing countries should create collaborative blocs for knowledge sharing and research and development. This approach allows countries to pool resources, increase collective bargaining power, and reduce the risk of exploitative deals in technology partnerships.

Evidence

Suggestions include collaboration with geographically proximate countries facing similar challenges or forming issue- or interest-based blocs.

Major Discussion Point

International Cooperation for Critical Infrastructure Security

Agreed with

Jacco-Pepijn Baljet

Daniel Lohrmann

Hafiz Muhammad Farooq

Agreed on

Importance of international collaboration

Develop legal frameworks mandating transparency and human rights due diligence

Explanation

Developing countries should establish robust domestic legal frameworks that require transparency and human rights due diligence from technology suppliers. These frameworks should include provisions for technology transfer, skills development, and long-term support arrangements.

Evidence

Suggestions include enacting legislation with clear obligations, stringent contractual terms, and mandatory training for local engineers.

Major Discussion Point

Balancing Security, Privacy and Ethics in AI-driven Security

Agreements

Agreement Points

Importance of international collaboration

Jacco-Pepijn Baljet

Daniel Lohrmann

Gyan Prakash Tripathi

Hafiz Muhammad Farooq

Exchange best practices and lessons learned through international forums

Establish public-private partnerships for knowledge transfer

Form regional knowledge-sharing and R&D blocs

Develop global standards and frameworks for AI in critical infrastructure

Speakers agreed on the crucial role of international collaboration in addressing cybersecurity challenges for critical infrastructure, emphasizing knowledge sharing, partnerships, and global standards development.

Need for capacity building in developing countries

Jenna Fung

Daniel Lohrmann

Hafiz Muhammad Farooq

Develop national strategies for tailored capacity building

Leverage online resources and regional educational opportunities

Implement tiered pricing models for AI security solutions

Lack of cybersecurity expertise, especially for industrial control systems

Speakers concurred on the importance of capacity building in developing countries, suggesting various strategies to address knowledge gaps and resource limitations in cybersecurity and AI.

Similar Viewpoints

Both speakers highlighted the challenges faced by developing countries in securing critical infrastructure due to resource limitations and outdated systems.

Hafiz Muhammad Farooq

Jacco-Pepijn Baljet

Legacy infrastructure creates vulnerabilities

Limited resources and budget constraints

Both speakers emphasized the need for strong governance frameworks and practices to ensure responsible development and deployment of AI-driven security solutions.

Daniel Lohrmann

Gyan Prakash Tripathi

Implement robust data governance and secure model development practices

Develop legal frameworks mandating transparency and human rights due diligence

Unexpected Consensus

AI as both a solution and a potential risk

Hafiz Muhammad Farooq

Daniel Lohrmann

AI can augment threat detection and response capabilities

AI systems themselves face risks like data poisoning and adversarial attacks

While speakers generally viewed AI positively for enhancing cybersecurity, there was an unexpected consensus on acknowledging the potential risks associated with AI systems themselves, highlighting the need for a balanced approach in AI implementation.

Overall Assessment

Summary

The speakers largely agreed on the importance of international collaboration, capacity building, and the need for robust governance frameworks in addressing cybersecurity challenges for critical infrastructure in developing countries. There was also a shared recognition of both the potential benefits and risks associated with AI in cybersecurity.

Consensus level

High level of consensus among speakers, suggesting a strong foundation for developing comprehensive strategies to enhance critical infrastructure security in developing countries using AI. This consensus implies that future initiatives in this area are likely to focus on collaborative efforts, knowledge sharing, and capacity building, while also addressing the ethical and security concerns associated with AI implementation.

Differences

Different Viewpoints

Approach to retaining cybersecurity talent

Hafiz Muhammad Farooq

Fernando (audience member)

Instead of retaining the talent locally, the challenge is to produce the talent.

How to retain these talents in their country to continue their work

While Fernando raised concerns about retaining cybersecurity professionals in their home countries, Hafiz Muhammad Farooq suggested focusing on producing more talent rather than retention.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were relatively minor and focused on different approaches to addressing similar challenges in cybersecurity and AI implementation in developing countries.

Difference level

The level of disagreement among the speakers was low. Most speakers presented complementary perspectives on the challenges and solutions for implementing AI in critical infrastructure security for developing countries. This low level of disagreement suggests a general consensus on the importance of international cooperation, capacity building, and addressing resource constraints in developing countries.

Partial Agreements

Both speakers agreed on the need for international collaboration, but Jacco-Pepijn Baljet emphasized sharing best practices through existing forums, while Hafiz Muhammad Farooq focused on developing new global standards specifically for AI in critical infrastructure.

Jacco-Pepijn Baljet

Hafiz Muhammad Farooq

Exchange best practices and lessons learned through international forums

Develop global standards and frameworks for AI in critical infrastructure

Both speakers agreed on the importance of knowledge transfer, but Jenna Fung emphasized individual-led learning through online resources, while Daniel Lohrmann focused on establishing formal public-private partnerships.

Jenna Fung

Daniel Lohrmann

Leverage online resources and regional educational opportunities

Establish public-private partnerships for knowledge transfer

Takeaways

Key Takeaways

Developing countries face significant challenges in securing critical infrastructure, including legacy systems, lack of expertise, and resource constraints.

AI can be leveraged to enhance critical infrastructure security, particularly for threat detection and response.

Capacity building and international cooperation are crucial for improving cybersecurity in developing countries.

A multi-stakeholder approach involving governments, private sector, and civil society is necessary to develop effective AI-powered security solutions.

Balancing security needs with privacy and ethical considerations is essential when implementing AI for critical infrastructure protection.

Resolutions and Action Items

Develop national strategies for tailored capacity building in cybersecurity and AI

Establish public-private partnerships for knowledge transfer and technology access

Work towards creating global standards and frameworks for AI in critical infrastructure security

Implement tiered pricing models for AI security solutions to ensure affordability for developing countries

Increase collaboration and knowledge sharing through international forums and regional partnerships

Unresolved Issues

Specific methods to retain cybersecurity talent in developing countries

Detailed strategies for balancing AI-driven security with privacy concerns in different national contexts

Concrete steps for developing countries to reduce dependence on foreign technology while maintaining digital sovereignty

Specific resilience strategies for organizations to recover from AI-driven attacks

Suggested Compromises

Balance global principles with local context when developing AI and cybersecurity policies

Adopt a risk-based approach to AI systems regulation to address both security needs and ethical concerns

Focus on producing more cybersecurity talent locally rather than solely trying to retain existing professionals

Thought Provoking Comments

In the developing countries, especially MENA itself, the major challenge we have is in the area of critical infrastructure. I would say there are three major areas where we see there are issues. The area number one, I would say is the legacy infrastructures.

speaker

Hafiz Muhammad Farooq

reason

This comment provided a structured analysis of key challenges facing developing countries in securing critical infrastructure, introducing important context for the discussion.

impact

It set the stage for exploring specific issues like outdated systems, lack of expertise, and digital transformation challenges in developing regions. This framed much of the subsequent conversation around capacity building and resource allocation.

AI-driven attacks are actually spreading and broadening and deepening the attacks against critical infrastructure worldwide.

speaker

Daniel Lohrmann

reason

This highlighted the urgency of the issue by emphasizing how AI is being weaponized against critical infrastructure.

impact

It shifted the discussion to focus more on the immediate threats and need for AI-powered defenses, rather than just theoretical benefits of AI for security.

I think essentially, of course, like ideally, like you said, is that there should be a tailor-made national, like a tailor-made… strategies on how to do capacity building for people who implement or execute this kind of technologies in their works.

speaker

Jenna Fung

reason

This comment emphasized the need for localized, context-specific approaches to capacity building rather than one-size-fits-all solutions.

impact

It prompted more discussion about how to develop effective training strategies tailored to developing countries’ specific needs and constraints.

To address this, I suggest a three-pronged strategy that emphasizes legal safeguards, multi-stakeholder accountability, and capacity-building measures.

speaker

Gyan Prakash Tripathi

reason

This comment offered a comprehensive framework for addressing digital dependency issues in developing countries.

impact

It broadened the conversation beyond just technical solutions to include legal, governance, and capacity-building dimensions. This multifaceted approach influenced subsequent comments about international cooperation and policy development.

I think AI can help us in that. And I think it can actually be part of the solution to make solutions that are available, maybe in English, available in multiple languages around the world.

speaker

Daniel Lohrmann

reason

This comment highlighted a specific, practical application of AI to address language barriers in cybersecurity education and implementation.

impact

It shifted the tone to a more optimistic view of AI’s potential to solve some of the challenges discussed earlier, particularly around accessibility and localization of resources.

Overall Assessment

These key comments shaped the discussion by progressively broadening its scope from specific technical challenges to encompass policy, governance, and capacity-building dimensions. They highlighted the complexity of securing critical infrastructure in developing countries, emphasizing the need for tailored, multifaceted approaches that leverage AI while addressing its risks. The discussion evolved from identifying problems to proposing concrete strategies for international cooperation and localized implementation, ultimately presenting a more holistic view of the challenges and potential solutions in this domain.

Follow-up Questions

How can developing countries form knowledge, R&D, and cooperative blocs to compound available resources?

speaker

Gyan Prakash Tripathi

explanation

This is important for developing long-term sovereignty and increasing collective bargaining power in technology partnerships.

How can AI help in making solutions available in multiple languages around the world?

speaker

Daniel Lohrmann

explanation

This is crucial for making AI-powered security solutions accessible and usable in diverse linguistic contexts, especially in developing countries.

How can we develop a global framework for incorporating AI in critical infrastructure security?

speaker

Hafiz Muhammad Farooq

explanation

A global framework is necessary to standardize the use of AI for cybersecurity of critical infrastructure across different countries and regions.

How can we improve inclusivity in international standard organizations for AI and cybersecurity?

speaker

Jacco-Pepijn Baljet

explanation

Improving inclusivity is important to ensure that developing countries can participate in setting global standards for AI in cybersecurity.

How can we increase the knowledge base and documentation for legacy systems in critical infrastructure?

speaker

Hafiz Muhammad Farooq

explanation

This is important for training new professionals and retaining expertise in managing and securing older critical infrastructure systems.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #81 Universal Standards for Digital Infrastructure Resiliency

Session at a Glance

Summary

This discussion focused on building universal standards for digital infrastructure resilience. Participants explored the need for global standards while acknowledging the importance of localized implementation. They emphasized the critical role of data sovereignty, integrity, and protection against emerging threats like cryptojacking and quantum computing risks. The conversation highlighted the importance of multi-stakeholder collaboration, involving governments, private sector, academia, and international organizations in developing resilience frameworks.

Key challenges discussed included rapidly evolving threats, human error, and the need for flexible, agile policies that don’t become obsolete quickly. Participants stressed the importance of capacity building and continuous skill development to address the human element in cybersecurity. They also noted the need for a risk-based approach in developing resilience frameworks, incorporating threat modeling and scenario planning.

The discussion touched on the role of existing standards like ISO, while considering the need for new, more comprehensive frameworks that address the unique challenges of digital infrastructure. Participants agreed on the importance of starting with a clear definition of the problem and creating action plans with measurable outcomes. They also highlighted the need for diversity in technologies and systems to avoid single points of failure.

The session concluded with a call to action, emphasizing the need to move beyond conversations to practical implementation of resilience standards. The participants agreed to compile the discussion into a white paper to serve as a reference for countries and regions seeking to enhance their digital infrastructure resilience.

Keypoints

Major discussion points:

– The need for universal standards for digital infrastructure resilience

– Challenges in developing and implementing resilience standards across different countries

– The importance of multi-stakeholder collaboration in shaping standards and policies

– Emerging threats to digital infrastructure and strategies to address them

– Balancing universal standards with localized implementation

Overall purpose/goal:

The purpose of this discussion was to explore challenges and opportunities in securing critical digital infrastructure, with the aim of developing ideas for universal standards and best practices that could be adapted by different countries. The panel sought to produce insights that could inform a white paper on digital infrastructure resilience.

Tone:

The tone was largely collaborative and solution-oriented. Panelists built on each other’s points and acknowledged the complexity of the issues. There was a sense of urgency about the need to act, balanced with recognition of the challenges involved. The tone became slightly more pointed when audience questions challenged some assumptions, but remained constructive overall.

Speakers

– Genie Gan: Director of Government Affairs and Public Policy, Kaspersky; Moderator

– Aderonke Sola Ogunsola: Head of Policy and Process Review, Corporate Planning Strategy and Risk Management Department, Nigerian Communications Commission

– Pawan Anand: Major General Dr., Director of United Service Institution of India, PhD Guide and Mentor at the National Defense College

– Alaa Abdulaal: Chief of Digital Economy Foresight, Digital Cooperation Organization

Additional speakers:

– Dino Cataldo Dell’Accio: Chief Information Officer, UN Pension Fund; involved in Best Practice Forum on Cybersecurity, leading Blockchain Dynamic Coalition on Assurance and Standardization

– Unnamed audience member: Asked question about institutions for developing standards

Full session report

Digital Infrastructure Resilience: Building Universal Standards at IGF 2024

This discussion, part of the Internet Governance Forum (IGF) 2024, explored the challenges and opportunities in securing critical digital infrastructure, with the aim of developing ideas for universal standards and best practices that could be adapted by different countries. The panel sought to produce insights that would inform a white paper on digital infrastructure resilience.

Panelists and Their Backgrounds:

– Genie Gan (Moderator): Director of Government Affairs and Public Policy, Kaspersky

– Aderonke Sola Ogunsola: Head of Policy and Process Review, Corporate Planning Strategy and Risk Management Department, Nigerian Communications Commission

– Major General Dr. Pawan Anand: Director of the United Service Institution of India

– Alaa Abdulaal: Chief of Digital Economy Foresight, Digital Cooperation Organization

Key Themes and Discussions:

1. Threats to Digital Infrastructure:

The panel discussed various threats, including:

– Human error and skill gaps

– Rapid technological changes

– Economic and technological disparities between countries

– Physical threats to infrastructure

Specific examples were cited, such as the Singapore banking transaction disruption and a US-based cybersecurity company incident, highlighting the real-world impacts of these threats.

2. Multi-stakeholder Collaboration:

There was strong agreement on the importance of collaboration between governments, private sector, civil society, and international organizations. Ogunsola stressed the need for engagement from all stakeholders to develop effective standards. Abdulaal highlighted the key role of the private sector in innovation and capacity building, while also noting that international organizations can facilitate cooperation.

3. Regulatory and Standards Development:

The panel discussed the need for universal standards that are flexible enough to be adapted to different contexts. Ogunsola emphasized that universal standards are “not negotiable” but implementation may require customization. Anand concurred, stating that standards should be universal, but regulations need to be flexible to allow for innovation.

Challenges in developing and implementing standards included:

– Keeping pace with rapid technological advancements

– Addressing economic and technological disparities between countries

– Establishing common definitions and language around digital infrastructure resilience

Policy examples were discussed, such as Nigeria’s Cyber Security Act 2025 and critical national information infrastructure order.

4. Capacity Building and Human Resources:

Ogunsola emphasized the critical importance of capacity building and continuous skill development in ensuring digital infrastructure resilience. She highlighted human error and lack of skills as major threats to resilience.

5. Risk-based Approaches and Threat Modeling:

Dr. Anand and an audience member advocated for adopting risk-based approaches and incorporating threat modeling in developing resilience frameworks. This was seen as crucial for creating effective and adaptable standards.
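
The risk-based approach advocated here is commonly operationalized as a likelihood-times-impact scoring exercise over a threat register. The following generic sketch illustrates the idea (the threat names, scores, and band thresholds are invented assumptions, not content from the session):

```python
# Toy risk register: score = likelihood (1-5) x impact (1-5), then rank.
# All entries are illustrative placeholders.
threats = [
    {"name": "ransomware on grid SCADA", "likelihood": 4, "impact": 5},
    {"name": "subsea cable cut",         "likelihood": 2, "impact": 5},
    {"name": "insider misconfiguration", "likelihood": 3, "impact": 3},
    {"name": "DDoS on public portal",    "likelihood": 4, "impact": 2},
]

def risk_score(t):
    return t["likelihood"] * t["impact"]

def band(score):
    """Map a 1-25 score to a treatment band."""
    if score >= 15:
        return "critical: mitigate now"
    if score >= 8:
        return "high: plan mitigation"
    return "moderate: monitor"

for t in sorted(threats, key=risk_score, reverse=True):
    print(f'{risk_score(t):>2}  {band(risk_score(t)):<24} {t["name"]}')
```

Ranking by score gives a defensible order for allocating scarce mitigation budget, which is exactly the prioritization problem developing countries face.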

6. Ethical Use of AI in Cybersecurity:

Anand highlighted the growing importance of AI in cybersecurity and stressed the need for ethical and responsible use of AI in this context.

Audience Engagement:

The discussion included valuable input from audience members, including:

– A challenge to the premise of developing new standards, pointing out existing ISO standards

– The need for a common language and universal definitions for concepts like Digital Public Infrastructure (DPI)

– Advocacy for risk-based approaches in resilience frameworks

Unresolved Issues and Future Directions:

Despite the productive discussion, several issues remained unresolved, including how to develop standards that remain current given rapid technological changes, and how to address economic and technological disparities between countries in implementing standards.

Conclusion and Next Steps:

The panel agreed on the urgency of moving from dialogue to action in enhancing global digital infrastructure resilience. Genie Gan outlined specific next steps, including compiling the discussion into a white paper to serve as a reference for countries and regions seeking to enhance their digital infrastructure resilience.

The moderator noted time constraints of the discussion, emphasizing the need for continued dialogue and action on this critical topic.

Session Transcript

Genie Gan: [Pre-session audio and technical checks.] Good morning, everyone. We are seated in Riyadh at the Internet Governance Forum. Good morning, good afternoon, or good evening, wherever in the world you are dialing in from. Thank you for attending today's session on building universal standards for digital infrastructure resiliency. My name is Genie Gan, I am the Director of Government Affairs and Public Policy at Kaspersky, and I am your moderator for today. We are here to discuss the challenges and opportunities of securing the backbone of a modern digital economy: our critical infrastructure, including data centers, cloud services, and other foundational digital assets. Of course, we all know that cybersecurity and the resilience of critical information infrastructures (CIIs) have become well-established requirements over the years. However, as the digital landscape evolves, we need to broaden our focus to include not only the security of information and data, but also the physical and operational resilience of the digital infrastructure that houses them.
When we started conceptualizing this workshop early this year, we used the example of how an outage at a major data center in Singapore disrupted 2.5 million banking transactions across Singapore's largest banks, including Citibank and DBS, to show how vulnerabilities in digital infrastructure can have far-reaching consequences even when they are not triggered by any cyber attack. Of course, its close linkage to cybersecurity was not immediately obvious either. Then in July, everything changed when a rogue software update by a US-based cybersecurity company crippled up to 8.5 million computers worldwide running Microsoft systems. Suddenly, many people realized how resilience requirements from the cybersecurity industry could apply to digital infrastructure. Governments around the world were beginning to recognize this even before the incident. For instance, Singapore is studying the introduction of a digital infrastructure act that goes beyond cybersecurity to address a broader set of resilience risks, ranging from misconfigurations in technical architecture to physical hazards such as fires, water leakages, and cooling system failures. The UK government, in another part of the world, has launched a public consultation on enhancing the security and resilience of its data infrastructure. These developments mark the beginning of a global shift towards more comprehensive frameworks for digital infrastructure resilience. Since conversations in this area are still at an early stage, there is a chance for the IGF to shape best practices and common standards. That is our goal today: to brainstorm ideas. We have with us speakers from different regions around the world, and we hope to brainstorm ideas that will help shape the future of digital infrastructure best practices.
We have regulators as well as industry leaders and experts from academia to discuss these critical issues and collaborate on creating a white paper, which we hope to produce at the end, that will serve as a reference for countries developing laws and regulations to strengthen digital infrastructure resilience. So let me now quickly introduce our speakers after this context setting. First, I'll start from my right, on the far end. We have Ms. Aderonke. She is Head of Policy and Process Review, Corporate Planning Strategy and Risk Management Department at the Nigerian Communications Commission. She'll speak about the role of national cybersecurity and digital infrastructure governance, particularly in Nigeria. Then I have Major General Dr. Pawan Anand, who is Director of the United Service Institution of India and a PhD guide and mentor at the National Defense College. He will share insights from India, focusing on the challenges and threats to digital resilience, particularly in rapidly developing countries. And on my left is Ms. Alaa Abdulaal. She is Chief of Digital Economy Foresight at the Digital Cooperation Organization. She'll discuss the role of the DCO in shaping and enhancing digital infrastructure, both in the Kingdom of Saudi Arabia and internationally. For today, we will explore several key topics, focusing on three main themes. First, threats to digital infrastructure: we'll discuss the latest threats to digital infrastructure, their economic and social impact, and how different countries are responding with new standards and regulations. The second theme will be multi-stakeholder collaboration, which is a running theme in the UN and IGF context. We'll encourage the exchange of expertise among all stakeholders: government, industry players, academia, and so on.
And lastly, we'll talk about regulatory and standards development, focusing on the importance of international standards for digital infrastructure, and examine how best practices from cybersecurity can be adapted to this domain. So let's kick off with some initial thoughts from our panelists. I'll ask each of you to provide a brief impulse statement of no more than two minutes, based on the following questions. We'll start with Aderonke. So, Ronke, in your opinion, is digital infrastructure a universally accepted term? And could you also give us a brief overview of the role of Nigeria's NCC in digital infrastructure resilience?

Aderonke Sola Ogunsola: Thank you, Genie. Good morning, everybody. And thank you, Kaspersky, for sponsoring this conversation. I'd like to start by saying that this topic is timely, considering that it has a lot of interplay with the development of standards, cooperation, and so on. In the course of reviewing this topic, I came across a statement by the sociologists Susan Leigh Star and Karen Ruhleder, and how they described infrastructure struck me. Infrastructure was described, in a provocative manner, as something that remains invisible until it breaks down. So let's just picture it: we all wake up this morning, our emails are not sending, we can't connect, people cannot see us anywhere in the world, we can't do any transactions. I'm sure everybody will go, what's happening here? So that's the perspective. Infrastructure, as the bedrock, is the underlying framework for digital connection. Having said that, looking at "universal": is digital infrastructure universally accepted? I think the answer is yes. It's a universally accepted term because, when you look at different organizations, the ITU sees it as an enabler for the provision of digital access, and the OECD views it as an enabler of socio-economic development.

Genie Gan: Sorry, give me a moment. I think we need some help here. Ronke’s audio is coming on and off. Can we maybe switch a mic for her, please? Or we can let them, maybe you use that first. Sorry, is it off? No, it’s not that, it’s. I think you used that mic first. Just let me switch that up for you. Can you hear me now? Yes, now it’s okay, yeah. Please. Okay. Now it’s good, yep.

Aderonke Sola Ogunsola: All right, so do I start all over again or just continue from the Nigerian perspective? Yes, just the Nigerian perspective. So as a regulator, I work for the Nigerian Communications Commission and we are the regulator for the telecommunications industry in Nigeria. Basically, we oversee the technical and economic regulation of the industry. And coming from the perspective of digital infrastructure, the Commission was established by an act of the Parliament, and we have, amongst various powers or functions, to facilitate an enabling environment. One of the things we are looking at is promoting digital infrastructure. We have various actions or interventions that we have taken, such as providing licenses for infrastructure development and operational spectrum licenses. We also, currently in Nigeria, we have our Cyber Security Act 2025, and part of what was identified was the critical infrastructure, and digital infrastructure was part of what was identified. In addition to this, I hope you can hear me? Yes. In addition to this, a critical national information infrastructure order was recently launched. What is it looking at? It’s because we have identified as a nation the importance, the sensitivity of the infrastructure. I believe, in my opinion, that we have moved beyond cybersecurity to cyber or infrastructure resilience. It’s the ability for us to bounce back if there’s any attack. Genie did talk about the Singaporean experience, but if you also cast your mind back earlier in the year, and the ITU Secretary-General did mention it, there was the submarine cable cut. It affected a lot of countries around the West African coast. But the good thing was Nigeria, and I would give kudos to my organization, we had always been proactive in our regulations, ensuring that our operators had, you know, resilience in their networks. Risk management is something that we required, and we also ensured that they have network redundancy. How do they survive if there’s any attack?
So for Nigeria, we did not really feel the impact, but the West African coast did have their internet connectivity shut down for a while, and Nigeria has eight submarine cables landing on its shores. So you can imagine the amount of data network structure that we have, and the ITU also acknowledges that nothing less than, let’s say, 80 to 90 percent of data connectivity is carried on submarine cables. And because of this, the Commission, Nigeria, is passionate about ensuring resiliency. So we are also part of the working group that was established by the ITU. The Honorable Minister of Communications from Nigeria is a co-chair of that cooperation, working on developing standards for this, I mean, developing universal standards for digital infrastructure. For me, it’s something that’s timely. It is expedient for us to look at it holistically, and we have pockets, national, like I mentioned, experience. We also have regional and group interventions, but, like the SDGs or what they do, I’m not sure we have something that’s universal for everyone. So well done to Kaspersky for sponsoring this, and I think it’s a conversation that is timely. Thank you. Thank you, Ronke. In fact, in your very short impulse statement, you’ve already touched on all three themes. You’ve discussed the threats to digital infrastructure, and I love the submarine cables example that affected West Africa. That was a fantastic illustration, actually, and also about some multi-stakeholder collaboration and efforts already. So maybe I’ll now turn to Alaa. So the DCO, as we all know, plays a significant role in shaping and enhancing digital infrastructure internationally. So could you perhaps take a couple of minutes and share the goals of your organization in this area and what efforts are being made in addressing digital infrastructure resilience? Switch it on. Yes, we can hear you now.

Genie Gan: Okay, thank you.

Alaa Abdulaal: Thank you very much, and I’m very honored to be part of this panel with you all. So the Digital Cooperation Organization, just to start, we represent 16 member states with an 800 million population, and our goal overall is to make sure that every person, nation, and business has a fair opportunity to participate. As you have said, Genie, we are now in a world where digital economies are accelerating very quickly. So our organization really focuses on giving that fair opportunity for everyone to be part of this growth of the inclusive digital economy. And for that, we have mentioned, and even my colleague here mentioned, how important it is to have the right infrastructure and to have access to that infrastructure, for businesses and for governments. And what we are doing at DCO is that we are promoting the development of resilience frameworks, specifically in response to the increasing risks facing digital infrastructure. We are giving guidance and advice to all our member states, putting all the stakeholders at one table. So, with what happened in July that you have mentioned, one of the fastest responses that we made was to gather all our member states at one table to discuss the issue that had happened with the faulty deployment and the outage: how did it impact each and every nation? What were their lessons learned? What can we do together? What are some of the missing regulations in some of the countries, and what can others share? And this is why, and we are the Digital Cooperation Organization, we believe in cooperation, and we believe that it should be a multi-stakeholder approach, looking not only at infrastructure, because again, infrastructure is one layer, but, as you have mentioned, also at the services and operations that are running on that infrastructure. Okay, do the people have the right skills?
And the capacity to be educated, to run that infrastructure with the advancements of different technologies. Now, when infrastructure varies all the way to supercomputers that are supporting AI, that is even bringing a different layer of risks. So by focusing on providing the right information, by putting together all the right stakeholders and bringing countries together, we aim and are focused on really enhancing the digital infrastructure of our member states and even contributing globally to all countries. Excellent. Thank you. Thank you for that. I like the point that you made about infrastructure being overlaid with systems and then, of course, capacity. And then maybe we can talk a little bit about capacity building later, because that’s the human element too.

Genie Gan: So now I’ll turn to Major General Pawan. From your experience, what are the main threats and challenges to digital resilience, particularly in India and other leading economies?

Pawan Anand: Thank you, Genie, and thank you, Kaspersky, for getting us all together on this interesting subject. So, well, to my mind, firstly, I’m from the USI, the United Service Institution of India, and we do a lot of emerging technologies work with the National Cyber Security Coordinator, the National Security Council, the Ministry of Electronics and IT, the Ministry of Internal Affairs or Home Affairs, and, of course, with the Defense and the Defense Cyber Agency. It’s actually interesting that India today is in every conversation that takes place globally on anything to do with digital, because India has gone deeply digital. And with the Honorable Prime Minister of India offering the DPI, the Digital Public Infrastructure, to almost all the Global South countries, literally, India seeks to help the Global South in coming up with their DPI. So, that having been said, what are the main threats and challenges that we are looking at? I would start with the main thing, and that is sovereignty of data. And everything hovers, perhaps, around that. You may have all your infrastructure in place. You may have the storages. You may have the transit points. You may have the networks. You may have the processing infrastructure. But at the end of it, it’s the data which really counts. It’s the thing that makes the mare go. So if money makes the mare go, it’s data that makes the mare go here. Data sovereignty, to my mind, is something that we really need to keep in mind, and India is very cognizant of that. We would, of course, like to have all our data onshore in India, which perhaps is not totally possible at this minute because we don’t have the entire capacity to be able to store that kind of data. So obviously much of it is offshore. And when it’s offshore, and if we don’t have the capacity to keep the data, we don’t have the skills, as Alaa had brought out, as yet, we would be looking at the legal implications of keeping your data offshore.
And that’s something that the DPDP Act that we came out with in 2023 really looks into. And I’ll come to that sometime later in our conversation. The second most important thing is integrity. So we have to look at integrity of data in storage and integrity in transit. Both of these are very important to us because wherever the interfaces happen, wherever there is a joining of networks, wherever there is a joining up with the storages, that is the point where we find vulnerabilities occur. So as digital penetration increases in India, the contact surfaces increase and the attack surfaces increase. And I think the final point that I’d like to make quickly is that when it comes to emerging tech, AI in cybersecurity begins to get more and more important for us. So the ethical use of AI and also the responsible use of AI is so important. We would look at accountability wherever AI is used, whether it is for cyber or for protecting infrastructure physically or digitally. We would have to look at interpretability as well, because you should be very clear as to what exactly is coming out of that and how it is protecting your infrastructure. I think also what we’ll have to keep in mind is supply chains, and we’ll talk about that later, but supply chains could be compromised, and that is one huge threat that we need to keep in mind. Today, at the end of it, with all the infrastructure that has come up, the DPI that has come up in India, we have now become the 10th most vulnerable country in the world in terms of digital assets. And I see that going up further and further. And you can make that out also by the increase in the number of cyber attacks. Maybe I could give out those figures, but it’s pretty obvious that the number of attacks is going up geometrically every year. So I pause here and will come up with some more thoughts

Genie Gan: later on. Thank you. Thank you. When you say rank number 10, I think what came to my mind was that this is really the kind of ranking we don’t want to top, right? But thank you all for your thoughts. I think it’s time for us to turn to the moderated discussions. We do have a set of policy questions that we would like to discuss today and to explore with our speakers. So please, however, feel free to jump in if you’ve got thoughts to add on to whatever the other speakers are saying. But of course, please help me by keeping your interventions concise and short, yeah, not more than two minutes maybe. So first, maybe I’ll get Ronke to take the first question. Do we need universal standards of resilience, or is it the case that every country’s digital infrastructure has unique needs that require a customized approach? So how should we balance these two perspectives of having something universal versus something that’s highly customized? What are your thoughts?

Aderonke Sola Ogunsola: Yeah. Okay. So for me, universal standards are non-negotiable. I think it’s something that’s meant to be open and something that needs to be adopted. Like I did say earlier, you can start from the regional level. But like what we have done in Nigeria, I will use my national perspective, and maybe because we also boast the largest economy in Africa, our numbers speak, our economies speak, especially our interventions when it comes to development of digital infrastructure in Africa. So back to universal standards. Yes, we do need one. What we have also done in Nigeria is to come up with a critical national information infrastructure order. It outlines strategies, methods, or would I say activities or actions by several stakeholders on how to tackle this. When General Pawan was speaking, he did talk about the physical protection of infrastructure. We have our own vulnerabilities back home, issues regarding vandalism of structures and all of that. So when you look at it from that perspective, you drill down, you need to look at how you protect that. And from the national to the regional, how do we come up with KPIs, standards? What works for Nigeria may not work for Ghana because of our peculiarities, but you cannot undermine that. The reason why all of us are here in this room is because the infrastructure, digital infrastructure, matters to us, and we’ve identified the need to see how we can continuously sustain it. So that’s the conversation around the room, whether it’s services or whatever it is that runs on the infrastructure. So you move from regional, and then we look at the universal perspective. The ITU is already working on submarine cable resiliency, just to guard against what happened. Singapore has probably come up with their own solutions, and the DCO also is sharing experiences. So in summary, for me, it’s homegrown to regional to universal. And at the universal level, I would like to liken it to the SDGs.
They can be adopted amongst all nations, and each nation can then take them back home. So when you ask whether it requires a different approach: implementation may require a different approach, but the standards can be global. And this conversation could be detailed and probably developed into a white paper about adoption, with more stakeholders, especially at policy level, looking at it critically and seeing how we can run with it. Thank you. I like that response. Thank you for putting it across so elegantly, because really, standards can be universal and they need to be. It’s like what we have at the UN with the SDGs. Indeed, I think that’s a pretty apt parallel that you’ve drawn. But of course, drawing experiences at the regional level and also having implementation localized. I think that’s excellent. I think that has helped us to set the stage. I’m not sure if other speakers have anything to add. Dr. Pawan. I totally agree with what Aderonke said. I mean, she really built it up from bottom to top and took the whole width of the subject. But I just want to add here that when we talk about universal standards, standards would be something that we should all be able to take on, because if we don’t do that, we would not be able to connect globally. So those universal standards, I think, are so important. At the same time, when it comes to bringing in regulation, I think we need to be a little careful. So while we set standards, we will have to be careful about compliances, and we need to differentiate between the two, because the moment you bring in compliances and those compliances become too stringent, then there is a fear of stifling innovation. So we need to find that balance between compliance and innovation, and we need to differentiate between standards and compliances.

Genie Gan: Okay. Thank you. Thank you for those remarks. I am going to move on. I’m going to ask Alaa the next question. What do you think are the biggest challenges in adopting the universal resilience standards that we have been talking about for the most part, especially in developing regions? How can we make sure these standards are accessible and scalable in different parts of the world? And if you could, maybe draw some experiences from working at the DCO. Thank you. I think we have a lot of challenges in that, several key challenges, from economic and technological disparities between different countries. So, let’s look at it, and it has even been mentioned by my colleagues here: different countries have different levels of readiness. Some of them are even at a stage where they lack the infrastructure itself, let alone having it resilient enough. And this comes down to a lack of financial support and technological support. Another aspect, which we talked about when I mentioned it in my opening, was capacity building. Again, for a country to start adopting standards, are they ready for those standards? Do they have the right human capital to understand the standards, to apply them, to make sure that they are customized in the right way?

Alaa Abdulaal: Definitely, it’s very useful to have a framework and standards for everyone to adopt. But again, there will never be a one-size-fits-all. There will be a need for cascading to the needs of the country, to their status. But it’s very good to have that solid foundation that unifies everyone. And this is why having the right human resources and experts is very crucial on a national level, which will really make sure that it’s being adopted in the right way and implemented in the right way. Another one of the challenges is, as I said, are those standards flexible enough to fit the current status of that country? What is the flexibility of those standards? And maybe another one of the challenges is that currently every country is tackling this challenge on their own, and only from a government perspective, not looking at, okay, what can the private sector provide? What can the academia provide? Again, academia can provide a lot of research and understanding of those standards in coming up with the right ways. Are we putting all those people at the same table? Are they having the conversations? Is it a government approach, or is it a multilateral approach, a multi-stakeholder approach? I think all of those challenges stand in the way of us, first of all, having the right standards in place, then adopting them, and then even measuring their impact and the way that they are executed. Thank you. You have covered several very good points. Again, I think we are seeing this recurring issue or question to do with human capital and their ability to appreciate the issue, apply the standards, and of course, to rightly implement them in a way that makes sense in their home countries. And let me add one very important point: until we all reach that universal standard, things are accelerating very quickly. Are those standards agile enough? In fact, too fast. Exactly. We are talking about AI.
We are even now talking about quantum computing. Again, this is adding another layer of complexity from a security perspective, from an infrastructure perspective. Again, until we reach that agreement on what the universal standards are, we will be at another point in the era, and this is another layer of challenge that we really need to start thinking of, to move fast. Can we build something that is agile enough to take on the very fast advancements that we are moving through?

Genie Gan: It’s great that you point this out, because Dr. Pawan and I were just having a chat yesterday after hearing some sessions in the opening segment of IGF, and we were just saying that, you know, shortly after this whole global digital transformation movement, then we have got AI, and now already we’re into quantum computing. It’s like we’re trying to play catch-up all the time. And I think that is definitely a theme that we need to come back to, about how we can seek to remain agile and fast enough to respond, to have standards or laws or policies that actually respond to real issues, real questions that are evolving faster than we would like. I totally agree. It’s all the time a game of catch-up, and most of us

Pawan Anand: would agree on some points and we would have disagreements in some areas and I think the solution lies in quickly reaching the places where we have consensus and issue them as some sort of a guideline or at times even as a regulation where we all agree and then we can keep resolving what we don’t agree upon. So I think when we come together to put to talk about these issues, we need to be very clear. Where is it that we’ve quickly found a consensus and let’s start implementing that as quickly as possible. And the rest, we will work on. At the same time, when we’re working on those, the difficult areas, where consensus is a little more difficult, we need to bring in the new technologies also that start influencing. So perhaps that is the only way that we can remain in the picture.

Alaa Abdulaal: Otherwise, compliances or consensus will always be so far behind. And I really like the word “we”. You meant we have to work together, we have to agree, because yes, it’s not one person or one nation or one country. I think the core of it is that we are working together on it.

Genie Gan: And I also like how he says, we just need to get started. Let’s just stop talking about this and let’s just do it. All right, so I just want to stay with Dr. Pawan, and I would like to ask the next question. How can governments be equipped for digital resilience? What policies, regulations and codes of conduct, as you may like to put it, need to be adopted to ensure a secure infrastructure across healthcare, government, finance, CIIs and data centers? So that’s really a tough one, because when you formulate policy, you have to take so much into consideration.

Pawan Anand: And just to tell you how tough it’s been in India, but it’s been a very sure-footed move, we came up with the DPDP Act. We started talking about the Digital Personal Data Protection Act in 2016. We finally came out with drafts in 2022. It was given out to the public, there was blowback, there was a lot of feedback. We went out with the second bill about six months later, and there was a bigger blowback. And finally, you know, the DPDP Act came into existence in the middle of 2023. It may have been late, but it is there for sure. It is yet to be fully operationalized, because there are certain rules which are being worked out, and there is a set of about 20 to 22 rules which are going to come into place. So it just gives you a sense of how you go about making policies while keeping in mind various stakeholders throughout the country and abroad. But I think what really gave us a big boost across the globe was that COVID-19 gave us all a real wake-up call, because we all went digital, and suddenly all of us realized that we need to have certain policies in place where we are able to converse digitally and of course transfer data digitally. We’ve seen how it’s impacted public services. We’ve also seen the extent to which it has started impacting elections as well. So during COVID, there were certain elections in Europe which had to be postponed. Even the US and India elections which were held later on were impacted somewhat by these kinds of digital interventions and the influence that they played. I think how you can strengthen the policy structure for this is you have a digital-first approach for public services. So this needs to be built up with almost everyone within the country and outside the country. We also need a remote-enabled kind of structure in our policies, so that everybody is able to work somewhat remotely and that it is controlled. Of course, everybody needs to give a higher priority to digital infrastructure.
So with these three in place, connectivity between various digital infrastructures would bring us into a complete picture. So policy has to hover around all of these. Not to forget what we spoke of earlier, physical protection of our digital infrastructure. So we have to look at the housing of the infrastructure, underground, a distributed infrastructure, how to protect it physically, how to ensure it’s not disrupted or interdicted physically. And then, of course, during transit. And finally, I think we need to look at third-party risks and how these will be managed as technology innovations take place. So, all in all, policies will have to be whole-of-government. Now, let me just take two examples to bring out the difference. You know, the United States, in healthcare, has those HIPAA guidelines. And these are very stringent guidelines which have been brought in for the protection of healthcare and digital information. On the other hand, they also have SSAE 18, where they have certain reports, SOC 1, SOC 2, SOC 3 reports, which are standards by which you would expect certain reporting to happen in financial transactions. Now, in India, we are very clear that we follow the SSAE guidelines. And so our reporting in SOC 1, SOC 2 and SOC 3 is fully in place. So we expect all financial transactions to be fully transparent, to be fully controlled. On the other hand, when it comes to healthcare, where the US has HIPAA, India has come out with its own standards. We call it DISHA, basically digital information security in healthcare. But it leaves a lot of space open for data to be utilized. So you can’t have a universal standard, as per us, in that. But we would say that we’ve left a lot of space open for data to be utilized for research. So we don’t mind our data being used for research, but private data has to be kept in place. So it’s a little more nuanced, if you ask me.
And that is the kind of nuance that we need to have so that we are able to utilize digital infrastructure and data to its fullest for innovation.

Genie Gan: Yeah, back to you, Genie. Thank you. Thank you for that. And I think just one point of clarification: when you talked about third-party risks, you are basically talking about different players in the ICT supply chain, right? OK, cool. Now, I think questions are coming. Yeah, please. No, I’m just going to say that questions are starting to come in. But I think let’s go with the flow. I quite like the flow. Please, Aderonke.

Aderonke Sola Ogunsola: I just wanted to add one or two points to what the General said when it comes to how governments need to be equipped with developing policies regarding digital resiliency. So for me, I look at it as: sometimes people may see resiliency as being subjective, based on levels of development and technology or infrastructure availability. In some cases, issues of topography and weather may also serve as a point of focus for us to consider, because we did talk about the physical protection on the ground, how do we store this infrastructure. In some places it’s nearly impossible, it’s a Herculean task, for the infrastructure to be protected so as to be resilient. Then also, from the Nigerian perspective, because I’m a regulator, we’ve come up with various policies. We have a Nigerian national broadband plan that helps us to fashion out stages, phases on how to ensure integrity, resiliency, a recovery plan, standards for infrastructure across the country. Then, like I did say, the critical national information infrastructure order gives a plan for various stakeholders. I recall you did say the private sector, the academia, all those are also included. It’s a model, maybe for the national level, regional, or even other countries that may want to look at it. If we say we are looking at or developing standards, we are that proactive. I also want to say perhaps that’s part of what gave us that resilience, so to speak, against the submarine cable breakdown or cut. For regulations as well, we keep doing catch-up, it’s a cliche, but is it something that we want to look at? Can we get to a level where we start considering standards or policies or regulation that can be self-regulated? Maybe we come up with soft laws as regulators, thinking outside the box to speak to these global standards, or universal standards we are looking at, rather than putting in laws or regulations that, like General Pawan did say, stifle innovation.

Alaa Abdulaal: I think there is one important point also that I want to build upon, because it has been mentioned: again, we cannot protect ourselves and be 100% resilient, but we as a country, and even internationally, always have to have the right response plan for such emergencies. Yes, every country has their recovery plans and incident response plans, but I believe that even at an international level, or even sometimes on a regional level, we really need to have that set up in the right way, to have that immediate exchange of experience, immediate exchange of what this country did to come back or recover from a specific incident. I think this is a very important point that we should consider, specifically if we are talking about government being digitally resilient. And when you talk about the exchange, let me just clarify: what you mean is cross-jurisdictional learning. Exactly. And that communication takes place, I presume,

Genie Gan: effectively through platforms such as the DCO. So that’s great. Thanks a lot. And now, I know I’m sort of messing up the order a little bit, but it’s really just to maintain that flow, because we have a comment from a member of the audience. I’m going to read it, and then I have a small question which I may want to pose to Alaa. So the comment comes from Vahan from RIPE NCC. And he says: coming from different sessions at IGF, there is a feeling that we still don’t have a universal understanding of what this DPI is. I think Dr. Pawan mentioned DPI. Neither do we have a universal approach to what is the core, the technical core or public core, of the Internet. To develop standards, we should use the same language and have a universally agreed definition of these terms. So let us start from the Internet and define what is important for us, what is the core, and how we can protect it and ensure its resilience. So I think what he is really talking about is having a common language that we speak when it comes to this topic. So my question is really: how can we begin to shape this common language? Any insights? Maybe Alaa can start.

Alaa Abdulaal: Yes, definitely. And I totally agree with his comment. And before I answer this question, let me give you our challenge at DCO when we started. Our organization is focused on the digital economy. And when we first started: okay, what is the definition of digital economy? Is there a universal understanding of what it means? What does it encompass? So I totally agree with him. The first point is to define and put in a framework the understanding of what we are trying to solve. This is the first, let’s say, ABC in any research, even. When you start conducting research, you really identify: what is the question that you want to answer? What is the scope? What is in the scope? What is outside that scope? And for us to reach that, we need to sit together, as it’s a “we” problem that we need to solve. Because again, I can come up with my own definition, and you can come up with your own definition and understanding, but then what is the whole purpose? What is the mischief? Exactly. Yeah. So, we really need to bring all the stakeholders, from government, private sector, academia, from different regions, from different countries. And I believe this is the role of, for example, our organization and other organizations, where we can bring all of the stakeholders to one table to start defining what we want to solve, and then putting together an action plan to actually come up with different solutions. Yeah. Yeah. Just wanted to add to this.

Pawan Anand: I think it’s a fair point from the person who made the comment. But when it comes to digital infrastructure, a reasonable amount of definition is already in place; the technical definitions, I think, are quite reasonably understood across the globe. Where the difference comes in is when you talk about policy, and there the cultural differences begin to play. So there may be a few issues where the lexicon needs to be clarified when it comes to policy. But on the tech side, I think we are okay. In any case, when it comes to AI, and to responsible AI, we are still working out a lexicon. And when AI comes into digital infrastructure and cyber resilience, it will get more complicated. So there is definitely a need for some of that lexicon to be firmly put in place.

Genie Gan: I hope it’s not an interrogation. No, it’s not. Definitely not. I’m a lawyer, but I will not interrogate you, not today. Dr. Pawan, since you obviously have experience from both ends: what novel threats should public and private organizations be looking out for, and what strategies or technologies should be implemented to protect against these emerging threats?

Pawan Anand: So you asked about novel threats. Everybody knows about ransomware, and millions have been paid out in various countries. So the threats are ransomware, a huge threat of DDoS attacks, and APTs, APTs residing in all our computers, ready to send information back all the time to whoever placed them in our computers and servers. These are the standard threats that we all know about, but what’s novel? We’re looking at cryptojacking, which seems to be the more current threat that has come up; as more and more people get involved with cryptocurrencies, you’ll get more and more of these problems. And God forbid when we have quantum coming in, then your blockchains are going to get compromised very easily, and crypto is going to face huge threats. Hopefully by that time crypto and blockchains will have evolved to take on the quantum threat. The speed of compute that quantum will bring will actually be a huge game changer, and it’s not coming now, but it’s coming in maybe another 5-10 years. The US and China are at about 1,200 qubits; India is just struggling with 14 qubits at the moment. But India is really working on it, and I think we’ll be there very quickly, especially as soon as we get our cryogenics in place. Another one would be the Border Gateway Protocol. Every country has its own kind of protection for border gateways, but these protocols need to be secured internationally, otherwise they’re bound to get compromised. And if that happens, you’ll have a lot of data and information that either gets disrupted, gets diverted, or is routed through somewhere else before it arrives, and is therefore fully compromised. Another threat is watering hole attacks, which are very simple to understand.
I mean, there are those places where everybody visits, and those are the areas that need to be protected. So somewhere or other we need to make sure that the usual watering holes are well protected and we have our policies in place for that. There will be as many novel threats as there are brilliant minds on the net and on the dark net. So I really won’t be able to give you something comprehensive, but this gives you a sense of where we are heading.

Genie Gan: Thank you for those insights. I think they’re very interesting. Yes, please, Frankie.

Aderonke Sola Ogunsola: The General has spoken from the technical side. Very technical. I like it. So I look at human error threats. Human capacity, or human resources, is usually key, whether for an organization, a government, or a nation. So another threat, given the speed and advancement of technology, will be the human itself. And like Alaa said, if you do not have adequate skills, you are vulnerable; no matter what technical structure we put in, you still have the human interface. So I believe at the public, private, and even organizational levels, your human capacity needs to be updated, and sensitization on cybersecurity or cyber protection needs to be consistent; it’s not something you should leave open. Because once your human resource or capacity is vulnerable, you’re as good as exposed. Then at the policy level, another threat may be inflexibility in regulations and policies. So as governments, moving forward, it is expedient for us to rejig, rethink, or reopen our minds to regulations. We know, as General Pawan also said, that we should be careful that regulation does not stifle innovation. The advancement, the speed, is unprecedented, but we should also be responsive as policymakers and think outside the box. What kind of policies do we put in place? Yes, put up different structures, but how do we make sure they’re not obsolete on arrival? Yeah, obsolete on arrival. Of course, yes, please. She’s absolutely right: policy in itself, if you don’t formulate policy, is a threat.

Pawan Anand: So, you know, in some ways the focus is very narrow at times, because we tend to focus on protection, we tend to focus on disaster recovery, but there is very little focus on prevention, because it requires money, it requires investment, it requires time, and it requires skill at the initial stages to bring it in. Kaspersky is in this business, and I would really recommend that most of us, even though it requires time and effort, invest in that initial protection; it’s really important. That has to be prioritized by governments, by companies, by the KMPs, the key management personnel, and by the financial guys.

Genie Gan: Thank you, thank you for that. I want to ask Ronke: from the perspective of a regulator, what metrics do you think should be used to evaluate the effectiveness of resilience standards, and how can organizations continuously improve their practices? So, I can hear myself now.

Aderonke Sola Ogunsola: So, what metrics? I think the metrics should be homegrown or industry-grown, because we’ve had conversations about the uniqueness of different experiences. For me, our metrics would measure the quality of the resilience and recovery plans, or disaster management plans, of different providers and of government itself. So, how do you develop these metrics? It needs to be analytical and scientific, so to speak, because it has to be measurable, and it needs to be adaptive. The technical people will definitely play a huge role in developing these metrics, but as a policy regulator, you should also be open to providing guidelines, so to speak, or coming up with frameworks that can be easily adopted and adapted. Metrics should not be cast in stone. They should be something you can review from time to time, based on advancement or change in technology, or when you expand, update, or create your infrastructure. So, for me, these metrics should be measurable, but they also need to be acceptable, developed by stakeholders, so to speak, and we cannot undermine the role of multi-stakeholder engagement in promoting the common good. Thank you for that.

Genie Gan: So, Alaa, how can we then ensure, and I’m really tapping on your experience dealing with multi-stakeholder engagements here, how can we ensure that all stakeholders, whether governments, the private sector, or civil society, are actively engaged in the development of digital resilience standards? What role does each play? Because it’s hard to handle. Everyone has a different set of expectations or interests, and sometimes they don’t agree. Oftentimes they disagree. So, tough job.

Alaa Abdulaal: So, look, I think we have touched upon this throughout our questions and discussions. Yes, every group has its own role, but I think it’s a shared responsibility between all groups. Governments are mainly responsible for shaping and setting the regulations, policies, and frameworks that are needed for digital resilience. And this responsibility of creating the rules and regulations shouldn’t be something only governments do; they should involve the private sector and civil society in the process, to make sure that whatever regulation and policy they come up with is impactful and can also be executed easily. When we look at the private sector, we know it is the hub of innovation. They come up with the technologies; they are aware of all the new technologies and advancements that are happening. They are shifting gears on AI, on computing power, on quantum computing. So it’s very important for them to always have that conversation with the government. It’s very important for them to keep updating from a cybersecurity perspective and also to support capacity building, the capacity building of human resources on the government side. The private sector can also help a lot through partnership, in providing the right funds in cooperation with the government. Again, as I said, every group has its role, but it’s a shared responsibility. And then we come to civil society and international organizations, the role of academia and think tanks: the hub of research, the hub of thinking about new innovations, of how we can come up with the right set of standards, well supported, with the right data. This all comes from civil society.
Last but not least, international organizations: let’s say that we are the connector. We are the ones who can put everyone together, try to find the common voice, unify the effort, and find the synergies, because we need to look at where the synergies are in every group: the government group, the private sector group, and the civil society group, from a research aspect, from a funding aspect, from a policy and regulation aspect. For all of this to happen, it really needs effective engagement between those different stakeholders. It needs cooperation and collaboration. It needs a continuous, open dialogue. We have mentioned this before: we are facing an era in which things are developing and accelerating very quickly. So if we do not put our hands together, we will not be able to survive those changes, be agile enough, and be prepared. We are looking at different building blocks: physical infrastructure, the data aspect the doctor mentioned, the human resource aspect, and services. It’s a huge ecosystem, all connected. We cannot look at one building block by itself or one group by itself. Rather, we have to look at it as a whole and really adopt effective communication among those different groups.

Genie Gan: Thank you for that. So everything from the definition of the problem, to coming up with the resources, whether thought leadership, research, or financial support, all the way to arriving at implementable solutions: we need input from different segments of our ecosystem. Right, I got that. If speakers have nothing else to add, I would like to move on to the next question. Doctor, yes, please. No, please jump in. Thank you. I’m sorry for always cutting in. No, don’t be.

Aderonke Sola Ogunsola: I like what Alaa said about effective engagement, because that’s what has been on my mind for a while. How do we move it from talk? We keep having all these conversations, at IGF level, UN level, like you did say, GSMA, IETF, you name it. But how do we move it? And we also need to start focusing the conversation and make sure that we engage the right people. We did say earlier that we just need to move. But how do we move, and move effectively? I totally agree with you. And I think this also goes back to when we talked about: let’s have a definition of the problem. Yes, then let’s

Alaa Abdulaal: put an action plan on what we need to solve, then let’s all sit at that table and try to solve it. Conversation for the sake of conversation and dialogue will not take us anywhere. It really needs to be structured, with a specific goal and a specific outcome that we want to reach. And then, after that, also with a specific measurement: is the outcome we wanted the correct one? Are we progressing in the right way? Again, with all this acceleration and how quickly things are changing, we really need to always revise ourselves and see how effectively the current solutions we are pursuing are driving the progress that we are aiming for.

Genie Gan: Okay. Out of today’s workshop and this dialogue, I was hoping to produce a white paper that captures the key highlights and learnings from today’s discussion. Well, I definitely learned a lot, and I think we’re learning from one another. With this white paper, I’m hoping we could gain some traction with the international audience that we have, and from there, work towards that common goal of finding solutions we can develop in order to galvanize everyone. And then, with these universal standards, find ways that individual countries can customize them for their own needs. I think this is a good start. I am mindful of time; we have only about 12 minutes left. I would like to ask a couple more questions before we summarize our discussions today. If I may, let me turn to Dr. Pawan with this question. How can resilience standards be designed not just for immediate response, but also to support long-term recovery after a disruption? What mechanisms should be in place to ensure organizations can bounce back effectively? I think the most important point here is to have a risk-based approach.

Pawan Anand: Therefore, going by the discussion that just took place between Alaa and Aderonke, I would say clearly that we need a framework that can be put in place, one that takes a risk-based approach in the various sectors where we have our digital infrastructure and the resilience we need to bring to it. Technically, of course, we would have to have a backup strategy, so that wherever problems happen, we are able to recover from whatever losses have taken place. We have to update constantly. Much of the time, we find that our software is outdated, our systems are outdated, and that’s why there is loss of data and outage time, so to say. We need to work around that and make sure we are up to date with all our technologies. And you can’t underestimate the skilling aspect; quite obviously, we have to bring people up to speed on the latest skills. So, the main things, I would say, are: follow a risk-based approach, create a framework, make sure you have your backups, and make sure you have a rehearsed strategy to bounce back. And that rehearsal part needs to be done very carefully, because most of the time organizations tend to feel it takes time away from the real work, and therefore they just give it a bit of lip service. And, of course, the human aspect is the final aspect. If people are trained, cyber hygiene is understood by everyone, there is controlled access, and everybody understands the risks the whole organization runs and the risks they personally run, I think we would be in a situation where we don’t suffer from these threats.

Alaa Abdulaal: If you allow me to add, thank you. You have mentioned a very important point about recovery plans. I think it’s very important to think of the single points of failure that every country and system has, and not only to look at it from a backup perspective, but also to have diversity of technologies and systems, not relying on a specific vendor, not relying on one company by itself. You need to really think of having that diversity of systems, and even look at open source, because this will really help you build a very solid backup plan as well. As mentioned, it’s very critical. We need to think outside the box regarding the usual recovery plans we have: from just having a backup, a disaster recovery from an infrastructure perspective, to really thinking of diversifying the systems and technologies we are using, and even looking at open source.

Pawan Anand: Yeah, I think the best example I can give of what we just spoke of is what happened in Denmark in 2023, right? Over a period of about three months, there were repeated cyber attacks on their critical information infrastructure, to the extent that even some of the dams they were working on came under threat, and they immediately went into island mode. It took them a long time to get back onto the net, but they had their systems in place, and literally people pulled out their cars, drove down, and started operating systems physically. So quite obviously, they had worked it out well. These are recovery plans that need to be very firmly put in place so that you don’t suffer outages. Okay. Okay, we’ve got people from the audience pinging us to say they want to ask questions. So quick ones. 10 seconds. Also, ensure that your infrastructure has redundancy and excess capacity, in addition to open source and no single point of failure, so it won’t hit you so hard if all these are put in place. Ensure that your infrastructure has enough capacity for redundancy, so if one part goes down, you have that backup space as your resilience buffer. Yeah, yeah. Cool. Thank you. Thank you, speakers. I’m gonna

Genie Gan: take a pause and take questions from the floor. I think some people pinged us in the Zoom chat. This gentleman, can we pass him a mic? No mic, or we can pass you a mic. Yeah, please. Thank you. Switch it on. Can you hear me now? Yes, perfect. Thank you very much, and congratulations on the great talk.

Audience: My name is Dino Delaccio. I’m the Chief Information Officer at the UN Pension Fund, and here at the IGF I’m involved in the Best Practice Forum on Cybersecurity and also leading the Blockchain Dynamic Coalition on Assurance and Standardization. Pleasure to meet you, sir. Likewise. So, I actually wanted to share a comment, because in my specific role in blockchain, what I’m facing is a lack of standards. And I want to be a little bit provocative in your case. I think that in the area of infrastructure resiliency, we do have universal standards. The ISO standards, the whole series, ISO 27001, ISO 22301, I’m also an ISO auditor, have been very well established for years, and ISO is represented by the national standards institute of each country in its technical committees, which are also open to various stakeholders. So, is there a risk here of duplicating something? Should we instead focus on something that already exists, on threat models? Because the way to translate standards into the country-level specificities we alluded to before is actually to look at which risks each country is exposed to, because not all countries are exposed to the same risks. Maybe focus on threat modeling and risk assessment rather than reinventing a new standard.

Genie Gan: Thank you. I think it’s an excellent question. Any takers? I totally agree with you.

Pawan Anand: Everything you said made sense. Blockchain certainly would require standards, and I think we need to get down to defining those. But you talked about threat modeling and risk; that’s exactly what I meant by a risk-based framework as well. Whether you talk about infrastructure resilience, cybersecurity, or AI, in each of these domains and across them we’ll have to create frameworks, and they will have to be risk-based. The threat models you talk of will come from scenarios, and you need to keep building scenarios from where the threats emerge. Based on the scenarios you game out, you’ll be able to see what kind of frameworks will have to be built around them. So I totally agree with you, and the way to go about it is what I just said. Anything else to add, ladies?

Genie Gan: All good? All right. Thank you. Thank you, sir, for that question. And remark, actually. Those are very good comments as well. Does anyone else have questions? I’m just… Yes, please, sir. You need the mic? Maybe just share the mic. Thank you. Am I being heard? Perfect. Okay. Sure.

Audience: Just to follow on from what was just raised there, and from what Dr. Pawan said about a risk-based approach to establishing standards, and we’re talking about universal standards here: the ISO standards came from somewhere. They were developed through some institutional framework. And the question would be, in taking the risk-based framework, are we identifying the institutions, or the partners maybe, that have to come together to develop the relevant universal standards for resiliency that the various scenarios we’re identifying basically need, right? I came in a bit late, so I don’t know if one of you was talking about institutions or anything like that before, but maybe in answering some of this, we could try to identify the ways forward. Who are the people who would like to take action on these things?

Aderonke Sola Ogunsola: Thanks. All right, so thank you for your comments. I’ll start with the last speaker. I recall one of the goals of this conversation is to come up with a white paper. We’ve identified, and I believe that’s why we’re all here, that perhaps we do need a universal standard. How do we go about this? I recall that Alaa also said that once we’ve identified that, we move forward to practicality, and that’s the next phase: we look at who all the stakeholders involved are. It’s not going to be a lopsided conversation, but this is getting the conversation going. Yes, we do recognize the ISO standards, and I like that you said the standards are developed by institutions. So what are the roles of these various institutions? I recall Alaa also said earlier on that we have different pockets, regional pockets, national pockets, industry pockets, having conversations, having standards, or even their own actions, to address different issues regarding infrastructure resiliency. The fundamental point for me, conclusively, is: do we need to develop standards, or a framework, or measurable actions, that would ensure universal resiliency of this infrastructure? Yes. Have we seen trends move from just developing cybersecurity frameworks to cyber resiliency? Because these threats will continue to come, networks will break, and human capacity may fail or human error will occur. But how do we ensure that, universally, what happened with CrowdStrike, what happened with the submarine cables for the African region, does not repeat itself? Genie, thank you.

Genie Gan: Thank you. Thank you for those remarks. I have one minute left before some people have to run off, so I will summarize our conversations in one minute and then we’ll call it a day, okay? Basically, I think what we managed to discuss and agree on is that we do need universal standards, which are basically a common language that we need to develop. And how do we do that? We start by asking ourselves what exactly is the mischief, the problem that we’re trying to solve, the why. And then, with these universal standards as a galvanizer for everyone, we will also have to tap on shared experiences. That is where the multilateral network, regional and international cooperation from within the community, and cross-jurisdictional learnings come into play. I think we could also benefit from some use cases, learn from past experience, and then come down to the local level to make sure we have localized implementation and solutions that work for individual countries in a customized manner, simply because nothing is one-size-fits-all. We also discussed some challenges that we could face and already are facing: the very fast-evolving threats confronting the world today, from a technical perspective and from a human element perspective, because humans can be the weak link, which is why we also touched on capacity building. And lastly, the policy element, where the usual problem is that, as Ronke said, policy usually becomes obsolete upon arrival. We have to try to avoid that and make sure that the policies and frameworks we put in place are agile enough to respond, or to be an effective tool that helps us respond better, to evolving threats, which are fast and furious. And of course, lastly, a takeaway, and I’m saying this again: Dr. 
Pawan had said that we really just need to get started. We just need to get started. So what we’ll do, in terms of next steps, is compile today’s discussions into a white paper, and hopefully this will serve as a guiding reference for countries and regions seeking to enhance their digital infrastructure resilience. Thank you, everyone, for being a part of this panel discussion and important conversation. We really look forward to continuing our work together to shape a more resilient digital future. Thank you. Have a good day. Goodbye.


Aderonke Sola Ogunsola

Speech speed: 125 words per minute

Speech length: 2664 words

Speech time: 1271 seconds

Universal standards are necessary but implementation may require customization

Explanation

Aderonke argues that universal standards for digital infrastructure resilience are needed, but their implementation should be adaptable to local contexts. She suggests that standards can be global, like the SDGs, but implementation may require different approaches in different countries.

Evidence

Example of Nigeria’s experience in developing national policies and regulations for digital infrastructure resilience

Major Discussion Point

Need for Universal Standards for Digital Infrastructure Resilience

Agreed with

Pawan Anand

Alaa Abdulaal

Agreed on

Need for universal standards with flexible implementation

Differed with

Pawan Anand

Differed on

Approach to universal standards

Human error and lack of skills are major threats to resilience

Explanation

Aderonke identifies human factors as significant threats to digital infrastructure resilience. She emphasizes the importance of human capacity building and consistent cybersecurity sensitization to address these vulnerabilities.

Evidence

Reference to the need for adequate skills and consistent cybersecurity sensitization at public, private, and organizational levels

Major Discussion Point

Challenges in Developing and Implementing Standards


Pawan Anand

Speech speed: 152 words per minute

Speech length: 3129 words

Speech time: 1232 seconds

Standards should be universal but regulations need to be flexible

Explanation

Pawan agrees with the need for universal standards but cautions against overly stringent compliances. He argues that while standards should be globally applicable, regulations should be flexible enough to allow for innovation.

Evidence

Distinction made between standards and compliances, with a warning about the risk of stifling innovation

Major Discussion Point

Need for Universal Standards for Digital Infrastructure Resilience

Agreed with

Aderonke Sola Ogunsola

Alaa Abdulaal

Agreed on

Need for universal standards with flexible implementation

Differed with

Aderonke Sola Ogunsola

Differed on

Approach to universal standards

Rapid technological changes make it difficult to keep standards current

Explanation

Pawan highlights the challenge of keeping standards and policies up-to-date in the face of rapidly evolving technology. He points out that emerging technologies like AI and quantum computing pose new threats that standards must address.

Evidence

Examples of emerging threats like cryptojacking and potential vulnerabilities in blockchain technology due to quantum computing

Major Discussion Point

Challenges in Developing and Implementing Standards

Agreed with

Alaa Abdulaal

Agreed on

Challenges in keeping standards current with rapid technological changes


Alaa Abdulaal

Speech speed: 131 words per minute

Speech length: 2203 words

Speech time: 1005 seconds

Universal framework needed but must be adaptable to each country’s status

Explanation

Abdulaal argues for a universal framework for digital infrastructure resilience, but emphasizes the need for flexibility to accommodate different countries’ levels of readiness. She highlights the importance of customizing standards to fit the current status of each country.

Evidence

Reference to different levels of readiness and infrastructure availability across countries

Major Discussion Point

Need for Universal Standards for Digital Infrastructure Resilience

Agreed with

Aderonke Sola Ogunsola

Pawan Anand

Agreed on

Need for universal standards with flexible implementation

Economic and technological disparities between countries pose challenges

Explanation

Alaa Abdulaal identifies economic and technological disparities between countries as major challenges in adopting universal resilience standards. She points out that some countries lack basic infrastructure, while others struggle with financial and technological support.

Evidence

Mention of countries at different stages of readiness, some lacking basic infrastructure

Major Discussion Point

Challenges in Developing and Implementing Standards

Agreed with

Pawan Anand

Agreed on

Challenges in keeping standards current with rapid technological changes

Private sector plays key role in innovation and capacity building

Explanation

Alaa Abdulaal emphasizes the importance of private sector involvement in developing digital resilience standards. She argues that the private sector is crucial for innovation, technology advancement, and supporting capacity building efforts.

Evidence

Reference to private sector as the hub of innovation and their role in developing new technologies

Major Discussion Point

Multi-stakeholder Collaboration

International organizations can facilitate cooperation between stakeholders

Explanation

Alaa highlights the role of international organizations in fostering collaboration among different stakeholders. She argues that these organizations can help unify efforts, find synergies, and facilitate dialogue between various groups.

Evidence

Description of international organizations as connectors that can put everyone together and find common voice

Major Discussion Point

Multi-stakeholder Collaboration

Ensure diversity of systems and technologies to avoid single points of failure

Explanation

Alaa Abdulaal advocates for diversifying systems and technologies to enhance resilience. She suggests that relying on multiple vendors and considering open-source solutions can help build a more robust backup plan.

Evidence

Recommendation to avoid relying on a single vendor or company and to consider open-source options

Major Discussion Point

Strategies for Enhancing Digital Resilience

Audience

Speech speed

151 words per minute

Speech length

219 words

Speech time

87 seconds

Existing ISO standards may already provide universal framework

Explanation

An audience member suggests that existing ISO standards, such as ISO 27001 and 22301, already provide a universal framework for infrastructure resilience. They question whether there’s a risk of duplicating efforts by creating new standards.

Evidence

Reference to specific ISO standards (27001, 22301) and their established nature in various countries

Major Discussion Point

Need for Universal Standards for Digital Infrastructure Resilience

Lack of common definitions and language hinders development of standards

Explanation

An audience member points out the lack of universally accepted definitions for terms related to digital infrastructure. They argue that this absence of common language makes it difficult to develop effective universal standards.

Major Discussion Point

Challenges in Developing and Implementing Standards

Adopt risk-based approach and threat modeling

Explanation

An audience member suggests focusing on risk-based approaches and threat modeling rather than creating new standards. They argue that this approach would better address the specific risks each country faces.

Evidence

Suggestion to focus on which risks each country is exposed to, as not all countries face the same risks

Major Discussion Point

Strategies for Enhancing Digital Resilience

Agreements

Agreement Points

Need for universal standards with flexible implementation

Aderonke Sola Ogunsola

Pawan Anand

Alaa Abdulaal

Universal standards are necessary but implementation may require customization

Standards should be universal but regulations need to be flexible

Universal framework needed but must be adaptable to each country’s status

All speakers agree on the need for universal standards for digital infrastructure resilience, but emphasize the importance of flexible implementation to accommodate different national contexts and needs.

Challenges in keeping standards current with rapid technological changes

Pawan Anand

Alaa Abdulaal

Rapid technological changes make it difficult to keep standards current

Economic and technological disparities between countries pose challenges

Both speakers highlight the difficulty of maintaining up-to-date standards in the face of rapid technological advancements and disparities between countries.

Similar Viewpoints

Both speakers emphasize the importance of human capacity building and skills development in ensuring digital infrastructure resilience.

Aderonke Sola Ogunsola

Alaa Abdulaal

Human error and lack of skills are major threats to resilience

Private sector plays key role in innovation and capacity building

Unexpected Consensus

Multi-stakeholder collaboration

Aderonke Sola Ogunsola

Alaa Abdulaal

Pawan Anand

International organizations can facilitate cooperation between stakeholders

Private sector plays key role in innovation and capacity building

Despite coming from different sectors (government, international organization, and academia), all speakers unexpectedly agreed on the importance of multi-stakeholder collaboration in developing and implementing digital infrastructure resilience standards.

Overall Assessment

Summary

The main areas of agreement include the need for universal standards with flexible implementation, recognition of challenges posed by rapid technological changes, importance of human capacity building, and the necessity of multi-stakeholder collaboration.

Consensus level

There is a high level of consensus among the speakers on the fundamental aspects of digital infrastructure resilience. This consensus suggests a strong foundation for developing universal standards, but also highlights shared challenges that need to be addressed collectively.

Differences

Different Viewpoints

Approach to universal standards

Aderonke Sola Ogunsola

Pawan Anand

Universal standards are necessary but implementation may require customization

Standards should be universal but regulations need to be flexible

While both speakers agree on the need for universal standards, they differ in their emphasis. Aderonke focuses on customization in implementation, while Pawan stresses the need for flexible regulations to allow innovation.

Unexpected Differences

Existence of universal standards

Aderonke Sola Ogunsola

Alaa Abdulaal

Pawan Anand

Audience

Universal standards are necessary but implementation may require customization

Universal framework needed but must be adaptable to each country’s status

Standards should be universal but regulations need to be flexible

Existing ISO standards may already provide universal framework

While the main speakers discuss the need for developing universal standards, an audience member unexpectedly points out that ISO standards may already provide a universal framework, suggesting that efforts to create new standards might be redundant.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to universal standards, the balance between standardization and flexibility, and the recognition of existing standards.

Difference level

The level of disagreement among the speakers is moderate. While there is a general consensus on the need for universal standards or frameworks, there are varying perspectives on implementation, customization, and the recognition of existing standards. These differences could impact the development and adoption of universal standards for digital infrastructure resilience, potentially leading to challenges in creating a globally accepted framework.

Partial Agreements

All speakers agree on the need for universal standards or frameworks, but they have different perspectives on how to implement them. Aderonke emphasizes customization, Alaa focuses on adaptability to each country’s status, and Pawan stresses the need for flexible regulations to allow innovation.

Aderonke Sola Ogunsola

Alaa Abdulaal

Pawan Anand

Universal standards are necessary but implementation may require customization

Universal framework needed but must be adaptable to each country’s status

Standards should be universal but regulations need to be flexible

Takeaways

Key Takeaways

Universal standards for digital infrastructure resilience are needed, but implementation may require customization for different countries

Major challenges include human error, rapid technological changes, economic disparities between countries, and lack of common definitions

Multi-stakeholder collaboration involving governments, private sector, academia and international organizations is crucial

A risk-based approach focusing on physical protection, disaster recovery, and threat modeling is recommended

Existing ISO standards may already provide a universal framework that can be built upon

Resolutions and Action Items

Compile the discussion into a white paper to serve as a guiding reference for countries seeking to enhance digital infrastructure resilience

Start working on developing universal standards and frameworks rather than just continuing discussions

Unresolved Issues

How to develop standards that remain current given rapid technological changes

How to address economic and technological disparities between countries in implementing standards

How to establish common definitions and language around digital infrastructure resilience

Which specific institutions or partners should lead the development of universal standards

Suggested Compromises

Adopt universal standards but allow for flexible implementation based on each country’s unique needs and risks

Use existing ISO standards as a foundation but adapt them for digital infrastructure resilience

Balance regulation with allowing room for innovation in the private sector

Thought Provoking Comments

Infrastructure was described, provocatively, as something that remains invisible until it breaks down.

speaker

Aderonke Sola Ogunsola

reason

This comment provides a thought-provoking perspective on infrastructure, highlighting its critical yet often overlooked nature until problems arise.

impact

It set the tone for discussing the importance of digital infrastructure resilience and why it needs attention before crises occur.

We need to broaden our focus to include not only the security of information and data, but also the physical and operational resilience of the digital infrastructure that houses this information and data.

speaker

Genie Gan

reason

This comment expands the traditional view of cybersecurity to include physical infrastructure, introducing a more holistic approach.

impact

It broadened the scope of the discussion to include physical and operational aspects of digital infrastructure resilience, not just data security.

When it comes to emerging tech, AI in cybersecurity begins to get more and more important for us. So the ethical use of AI and also the responsible use of AI is so important.

speaker

Pawan Anand

reason

This comment introduces the critical intersection of AI and cybersecurity, highlighting ethical considerations.

impact

It shifted the conversation to include emerging technologies and their ethical implications in digital infrastructure resilience.

Universal standards are not negotiable. I think it’s something that’s meant to be open and something that needs to be adopted.

speaker

Aderonke Sola Ogunsola

reason

This strong stance on the necessity of universal standards challenges the idea that each country needs a completely unique approach.

impact

It sparked a discussion on balancing universal standards with local implementation needs.

Are those standards agile enough? In fact, technology moves too fast. We are talking about AI. We are even now talking about quantum computing.

speaker

Alaa Abdulaal

reason

This comment raises the crucial question of whether standards can keep pace with rapidly evolving technology.

impact

It introduced the need for agility and future-proofing in developing digital infrastructure resilience standards.

I think in the area of infrastructure resiliency, we do have universal standards. The ISO standards, the whole series, ISO 27001, ISO 22301, have been very well established for years, and I’m also an ISO auditor… Should we instead focus on threat models?

speaker

Audience member (Dino Dell’Accio)

reason

This comment challenges the premise of needing new universal standards and suggests a focus on threat modeling instead.

impact

It prompted the panel to consider existing standards and shifted the discussion towards practical implementation and risk assessment approaches.

Overall Assessment

These key comments shaped the discussion by expanding its scope from traditional cybersecurity to a more comprehensive view of digital infrastructure resilience. They introduced considerations of physical infrastructure, emerging technologies like AI and quantum computing, and the need for agile, universally applicable standards. The discussion evolved from defining the problem to exploring practical implementation challenges, balancing universal standards with local needs, and considering existing frameworks. The audience input near the end prompted a reflection on whether new standards are needed or if the focus should be on applying existing ones through threat modeling and risk assessment.

Follow-up Questions

How can we develop a common language and universal definitions for digital infrastructure and related concepts?

speaker

Vahan from RIPE NCC (audience member)

explanation

A universal understanding of terms like DPI (Digital Public Infrastructure) and core/public core of the Internet is needed to develop effective standards and ensure all stakeholders are on the same page.

How can we move from conversations to effective action in developing and implementing digital infrastructure resilience standards?

speaker

Aderonke Sola Ogunsola

explanation

There is a need to translate discussions into concrete steps and engage the right stakeholders to make progress on digital infrastructure resilience.

How can we design resilience standards that support long-term recovery after disruptions?

speaker

Genie Gan (moderator)

explanation

Understanding how to create standards that address both immediate response and long-term recovery is crucial for comprehensive digital infrastructure resilience.

How can we ensure that resilience standards and frameworks remain agile and responsive to rapidly evolving threats?

speaker

Alaa Abdulaal

explanation

Given the fast pace of technological change and emerging threats, it’s important to develop standards that can adapt quickly.

What role can open-source technologies play in enhancing digital infrastructure resilience?

speaker

Alaa Abdulaal

explanation

Exploring the potential of open-source solutions could provide additional options for creating diverse and resilient digital infrastructure.

How can we better integrate threat modeling and risk assessment into the development of digital infrastructure resilience standards?

speaker

Dino Dell’Accio (audience member)

explanation

Focusing on specific threats and risks faced by different countries could help tailor universal standards to local needs.

Which institutions or partners should be involved in developing universal standards for digital infrastructure resilience?

speaker

Unnamed audience member

explanation

Identifying the key stakeholders and institutions needed to create and implement universal standards is crucial for moving the process forward.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

Networking Session #132 Cyberpolicy Dialogues: Connecting research/policy communities

Session at a Glance

Summary

This discussion focused on cross-community interaction in cyber policy dialogues, organized by Virtual Routes and the Royal United Services Institute (RUSI). The session aimed to connect different communities, including research, practice, technical, and policy sectors, to foster dialogue on responsible cyber behavior.

James Shires and Louise Marie Hurel introduced the session, highlighting the importance of bridging divides between various stakeholders in cyber policy. Corinne Casha from Malta’s Ministry of Foreign Affairs emphasized the need for inclusive cyber policy development, incorporating perspectives from government, industry, academia, and civil society.

The Global Partnership for Responsible Cyber Behaviour was discussed as an initiative to identify researchers working on cyber behavior topics worldwide. Louise Marie Hurel highlighted the limitations of current UN processes in cyber discussions, noting their government-centric nature and the need for more diverse voices, especially from developing countries.

Participants engaged in an interactive session, identifying obstacles to cross-community interaction. Key challenges included language barriers, funding issues, geopolitical tensions affecting accreditation processes, and the tendency for governments to dominate policy-making without sufficient input from other sectors.

The discussion emphasized the importance of funding and sponsorship to enable participation from less-represented groups, particularly from developing countries. Participants also stressed the need for organizations to improve diversity and regional representation within their own structures.

The session concluded by highlighting the importance of building bridges not just between different stakeholder groups, but also within each sector to ensure more inclusive and representative cyber policy dialogues.

Keypoints

Major discussion points:

– The importance of cross-community interaction and dialogue in cyber policy

– Obstacles to greater cross-community engagement, including language barriers, funding/resources, and limited access to key forums

– The need for more inclusive and representative participation, especially from developing countries and diverse stakeholders

– Challenges in bridging divides between government, academia, industry, and civil society perspectives

– Ways to overcome obstacles and improve cross-community collaboration

The overall purpose of the discussion was to explore how to foster greater interaction and dialogue between different communities involved in cyber policy, including government, academia, industry, and civil society. The goal was to identify obstacles to cross-community engagement and brainstorm potential solutions.

The tone of the discussion was collaborative and solution-oriented. It began in a more formal, presentation-style format but shifted to become more interactive and participatory as attendees were asked to contribute their perspectives. The organizers emphasized that they wanted to hear from participants rather than just talk at them. The tone remained constructive throughout, with a focus on identifying challenges but also proposing ways to overcome them.

Speakers

– James Shires, Co-director of Virtual Routes

Expertise: Cybersecurity and internet governance

– Louise Marie Hurel, Researcher at the cybersecurity program at the Royal United Services Institute (RUSI)

Expertise: Cybersecurity policy challenges

– Corinne Casha, Deputy Director for Global Affairs at the Malta Ministry of Foreign Affairs

Expertise: Cyber policy, government perspective

– AUDIENCE

Role: Various participants from different sectors

Additional speakers:

– Yasmine Azzouzi, International Telecommunication Union, Intergovernmental, WEOG

Role: Co-host

– Erik Kursetgjerde, NATO/CCDCOE, Intergovernmental, WEOG

Role: Co-host

– Anni Adamson, European Cyber Conflict Research Initiative (ECCRI), Civil Society, WEOG

Role: Facilitator (likely for online participants)

Full session report

Cross-Community Interaction in Cyber Policy Dialogues: A Comprehensive Summary

Introduction

This discussion, organised by Virtual Routes and the Royal United Services Institute (RUSI), focused on fostering cross-community interaction in cyber policy dialogues. The session aimed to connect diverse stakeholders from research, practice, technical, and policy sectors to promote dialogue on responsible cyber behaviour. Key speakers included James Shires, co-director of Virtual Routes; Louise Marie Hurel, researcher at RUSI’s cybersecurity programme; and Corinne Casha, Deputy Director for Global Affairs at the Malta Ministry of Foreign Affairs.

Overview of the Global Partnership for Responsible Cyber Behaviour

Louise Marie Hurel introduced the Global Partnership for Responsible Cyber Behaviour initiative, which aims to identify researchers working on cyber behaviour topics worldwide. This project seeks to create a comprehensive database of experts and their work, facilitating connections and collaborations across different communities involved in cyber policy discussions.

Interactive Session and Word Cloud

The discussion included an interactive element where participants were asked to contribute words they associated with responsible cyber behaviour. This activity resulted in a word cloud that highlighted key themes and concepts, providing a visual representation of the group’s collective understanding of the topic.

Breakout Discussions

Attendees were divided into smaller groups for breakout discussions, focusing on specific aspects of cross-community interaction in cyber policy. These sessions allowed for more in-depth exploration of challenges and potential solutions, with key points later shared with the larger group.

Key Challenges and Obstacles

Several barriers to inclusive participation in cyber policy dialogues were identified:

1. Language Barriers: Highlighted by James Shires and demonstrated by a participant from Chad who greeted the group in Arabic, underscoring the challenges faced by non-English speakers.

2. Lack of Resources and Funding: Corinne Casha noted that insufficient resources often prevent some stakeholders from participating fully in policy discussions.

3. Government-Centric Decision Making: Casha acknowledged that governments often dominate policy development without adequately incorporating perspectives from other sectors.

4. Geopolitical Tensions: Audience members pointed out how international political dynamics can affect participation in multilateral forums, including issues with accreditation.

5. Limited Diversity Within Stakeholder Groups: Louise Marie Hurel emphasised the need for improved diversity and regional representation within stakeholder organisations themselves.

6. Limitations of UN Processes: Hurel highlighted the government-centric nature of UN cyber discussions, which often marginalise voices from developing countries.

Proposed Solutions and Initiatives

To address these challenges, several initiatives and suggestions were proposed:

1. Sponsorship Programmes: Corinne Casha discussed Malta’s efforts to sponsor fellows for participation in the Internet Governance Forum (IGF).

2. Government Funding: Casha emphasised the importance of government funding to support participation from developing countries in international cyber policy forums.

3. Leveraging Existing Structures: Hurel suggested utilising existing UN system structures to facilitate more inclusive dialogue.

4. Improving Internal Diversity: Hurel stressed the importance of enhancing diversity within stakeholder organisations to ensure better representation of different perspectives.

5. Global Partnership Database: The ongoing development of a comprehensive database of cyber behaviour researchers to facilitate connections and collaborations.

Conclusion

The discussion highlighted the complex challenges in achieving truly inclusive and representative cyber policy processes. It emphasised the need for continued efforts to fund and sponsor participation of developing countries and underrepresented groups in cyber policy forums. The speakers and participants agreed that leveraging existing UN system structures, examining internal diversity within organisations, and addressing language barriers are crucial steps forward. As cyber policy continues to evolve, addressing these challenges will be essential for developing comprehensive and effective strategies that reflect the diverse needs and perspectives of the global community.

Session Transcript

James Shires: and this is focused on cyber policy dialogues. The idea here is that we will be connecting different communities between research and practice, between technical and policy communities, and trying to do a little bit of interactive networking. So although you see four people up here on the stage, this is not going to be us talking to you very much. It’s going to be you talking to us and talking to each other, and ideally everyone online also talking to each other as well. So welcome and thank you for joining this session. I’ll talk a little bit about just the overall outline and then we can get started. So this is being organized by Virtual Routes. My name is James Shires. I’m co-director of Virtual Routes. We’re an NGO that focuses on research, education, and public engagement in cybersecurity and internet governance. I’ll hand over now to introduce my co-organizer, Louise Marie Hurel.

Louise Marie Hurel: Hello everyone, can you hear me okay? Yes, great, good. I’m Louise Marie Hurel. I am a researcher here at the cybersecurity program at the Royal United Services Institute, which is basically a very fancy name for a think-tank based in London working on a range of different security and defense issues, and obviously one of them is cybersecurity. And we conduct research and we also convene different sectors to discuss some of the domestic but also international policy challenges on cybersecurity. So it’s lovely to be here, even if online this time, but it’s great to see some familiar faces as well.

James Shires: Thank you, Louise. And what we’ll do is the following. Louise will say a little bit about RUSI’s main program in this space, which is the Global Partnership for Responsible Cyber Behaviour. We will then turn to our key speaker. We are very pleased to have Corinne Casha here from the Ministry of Foreign Affairs at Malta to give her personal perspective on crossing different communities. I will then ask you all to interact with us to fill out a few questions that we have online. So you can either use a QR code if you’re here in person, or you can access a link to the quiz. And then we will break out into different rooms to try and unpack these questions in a little bit more detail. So those of you online will go into smaller groups, and those of you in the room will do so as well. You will be assisted very ably in this task by two of our co-hosts, Yasmin and Eric here, who will be going around and answering any questions, stimulating discussion based on their own extensive knowledge of this area. So Yasmin and Eric, thank you very much for helping out here. I will now stop here and hand the floor to Corinne for some opening remarks. I will also move my video so those online people can see her in person.

Corrine Casha: Thanks James, and thank you everyone for being here. It’s a real pleasure to be among you. My name is Corinne, I am Deputy Director for Global Affairs at the Malta Ministry of Foreign Affairs, and the reason I’m here at the IGF Forum is precisely because of the fact that our government is currently sponsoring fellows to participate here in this IGF Forum. And the reason why we are sponsoring fellows to participate in this IGF Forum is because we are firm believers in the fact that cyber policy should not be restricted solely to governments, but that cyber policy should also incorporate different facets. It should incorporate industry, it should incorporate academia, it should incorporate civil society, and also it should incorporate researchers. So one of the main, let’s say, tasks that we are currently conducting is to bridge the divides and also have the cyber researchers and cyber government officials interact more with each other. And from my personal experience this is a very important task because when it comes to, for example, drafting national cyber strategies, it’s important to have different ideas, not only restricted to what cyber policy is by government, but it’s important to also factor in the research, the academia, the government, let’s say, perspectives coupled with other partners. So I’m very pleased to be here today because this is exactly the culmination of what we wanted to be, a networking event where we discuss ideas, we discuss knowledge, where we impart information together and hopefully build up better cyber resilience as well amongst ourselves. I also wanted to outline that being a European Union member, Malta is also obliged by the regulation, the European Parliament and the Council to actually incorporate these different factors together. So we are actually obliged to also include, so there’s this policy of inclusiveness as well, and to include different communities. 
So it’s also a part of our, let’s say, EU membership or of our EU obligations to have these different perspectives come together. So that’s what I wanted to say from my perspective and I now hand over to you, James, again. Thanks.

James Shires: Thank you very much Corinne and we’ll go straight to Louise to talk a little bit about the Global Partnership.

Louise Marie Hurel: Wonderful. Again, thank you very much. You know, just a year ago, some of us were together in Kyoto for the regional launch of the Global Partnership for Responsible Cyber Behaviour, which is an initiative that we have established since 2022, actually, here at RUSI, with a purpose of bringing together the research community, studying and engaging in topics related to responsible cyber behaviour. And I think to be back at the IGF just really shows the importance of this space for us to be able to connect not only with other researchers, but most importantly, to make the most out of this multi-stakeholder community that the IGF has been nesting for decades now. And I just like to kind of echo a lot of what Corinne just said. The objective of the partnership is really to bridge part of that conversation, but most importantly, is to identify who’s researching topics related to responsible cyber behaviour. And I say responsible cyber behaviour because it’s not just about state responsibility in cyberspace, but it’s really recognising that we will only be able to have a proper conversation, and we will only be able to develop a critical mass and critical thinking if we’re able to identify those researchers that are based in different regions, right? And that has been our arduous task to identify researchers working in these topics from the different parts of the world. So we launched that. But I think the background for the conversation that we’re having today, and I think it’s a perfect, you know, match between what we’re doing over here at RUSI with a global partnership and what Virtual Routes is doing as well in terms of connecting also different communities. Because at this moment, as I think many of you are very familiar, it’s been decades that the UN processes have been discussing the rules of the road for responsible state behaviour, right?
And while it is an important space that has developed norms and, you know, the recognition of how international law applies to cyberspace, even though only 32, 33 government entities, and I say government entities, because 30, 31 of them are actual member states, and two of them are regional bodies have, you know, published their views on how international law applies to cyberspace. And that’s all good. And that’s important in terms of thinking about peace and stability in cyberspace. But these UN processes, they revolve very much around a particular modus operandi. And that includes the fact that it’s very government centric, even though there’s some participation of stakeholders, even though quite contested since the start of this latest process of the open ended working group. And even though we see 193 countries represented there, it’s still very much in the room, a challenge of balancing two different poles. So US Western democracies on one side, and obviously, you know, Russia, China, and a couple of others, right. And I think the debate here is to bring those different bits and pieces and these different countries, especially developing countries to have a little bit more voice in those processes. And sometimes that is obfuscated by these broader strategic competition. I think another thing is that it’s really focused on international peace and security and much less so on other areas of thinking about like responsible cyber behaviour such as development and how it enables a lot of the conversation of states being able to be accountable as well as other stakeholders. Of course, there’s some conversation around due diligence, but then it’s still very limited to the language of this particular space. And I think, you know, it also means limiting who can be in those rooms, right? 
Even though there are some accredited organisations that can attend, let's say, the Open-Ended Working Group on cyber security, we still see that there's a certain privilege to being in that room. You need to be accredited, you need to have the resources. So the whole point of this conversation, which emerged as part of this dialogue between us and Virtual Routes, is really to think about how we can look at other spaces within the UN system that already have the structure for us to facilitate a dialogue where we can bring these communities together, where we can leverage the knowledge and the research that others have been conducting in this field. How can we identify researchers within other regions? How can we understand other views of responsible cyber behaviour beyond, let's say, a very state-centric view, or one very much tied to these two poles, or just to governments? And how can we think about responsibility as something that is encoded not only at the international level in these discussions, but at the regional level, the domestic level and the operational level, thinking about how countries, states and different actors justify developing cyber capabilities, right? So I just wanted to give a little glimpse of the background of where we are coming from as RUSI, our commitment to fostering networking sessions such as this one, and our collaboration with Virtual Routes, which is also a member and a partner of the GPRCB. So I just wanted to welcome you all to the session and to give thanks for the opportunity. Again, really gutted not to be able to be there, but very excited about what we're going to learn from each other and from the conversation that we'll have here online. So thank you very much, and sorry if I spoke a little bit too much, but looking forward to connecting with you.

James Shires: Thank you, Louise, and that context was super important, so we're really glad that you could share it with us. What we're going to do now is move from the bit where we talk at you to the bit where you engage in a little bit of a Q&A. As you will all be able to see in person, there is now a Slido with three short questions on it. So indulge us: please do fill out these three short questions so we get a sense of who we have in the room and what you think the main challenges are. Annie, I believe, is going to share the link to the same Slido for those participating in hybrid form, and I can see people are already filling it out, so maybe the online people are super quick; they've got their laptops already open. We've broken this down into a few different communities. You could slice the pie in as many ways as you want, but this is how we've decided to look at it: civil society, academia, IGOs, government, industry, or other, of course. Very pleasingly, it looks like we have quite an even mix of different attendees, which is good; if everyone were from one community, that might be a lesson in itself. The results are still changing, and we can see that IGOs actually represent the majority of people in the room, and Eric and Yasmin haven't even filled out the poll themselves. The technical community is unfortunately not listed, no, because we believe that technical people come from industry, they come from government, and they come from civil society. So yes, technical community is an identity, but they also have affiliations in these different areas as well. So if you identify as technical, then please also list your organizational affiliation. I will now go to the second question we asked you, which was: how often do you engage with people outside your main community?
So we know there are a lot of IGO people in the room, some civil society, some industry; and how often do you talk to people from other communities? I like that "often" is the main response. We're not seeing anyone for whom this is their first time engaging other communities, though we would welcome that as well. Okay, so there are some "occasionally" responses, and some people are doing this all the time, but it looks like most people are doing this often, regularly, but not day-to-day in their normal jobs. Maybe that's what we would expect in a multi-stakeholder internet governance environment. So that's question two. That gives us, but also you, a sense of who else is in the room and how they see their community interaction. Now we're going to go to the most important question. This is the one that is going to frame your breakout sessions. It's not a ranking; this is more of a word cloud. What do you think are the main obstacles to greater cross-community interaction? Louise mentioned a few already, things like accreditation to key events, things like resources and participation, and others are coming out as well. So please fill in as many words as you would like here. Don't feel you have to come up with additional words; you can also double-click, or double down, on the ones that are already there. I will give you a few more minutes to think about this; it's a little bit more of a challenging question than the first two. And some really thought-provoking answers are coming out already. I can still see a couple of people typing, so I want to give people the space to make sure we capture as many views as possible in terms of these obstacles. So just to take a quick poll of everyone in the room: does everyone feel that you've filled out the word cloud to your comfort, that you've put as much as you want into it? I'm going to take that as a yes, and the same for people online as well.
Thank you very much for putting this through. I already have a lot of responses in mind, but I'm going to hold them back because I don't want to talk at you any more. I want you all to discuss this word cloud between yourselves, right? This is a networking session, not a presentation session. So what we will do is break into different groups: small groups here in the room, and breakout rooms online. For people online, Annie will be assigning you to breakout rooms. We're looking at about three or four people per group. Then we will spend 15 minutes just talking about these key points. You can pick one of them and really drill into it: what is the issue here? Why is this such an issue? What are the reasons for it? And then, most importantly, how can we overcome the obstacle? I really want everyone to get towards the solution part of the conversation in your groups. I can already see three clear groups here: one on the left for me, one on the right, and one in the middle. Feel free to move around to make sure you have different communities involved in your group. And online, I hope you are also being assigned roughly equally. So that's it from me. I'll break now and we'll come back in 15 minutes to hear your thoughts on these obstacles and how to overcome them, and we'll reflect with our participants as well. So thank you very much, and we'll see you in 15 minutes.

AUDIENCE: For example, in other multilateral forums such as the Open-Ended Working Group, it also depends a lot on geopolitical tensions. I work at the German Council on Foreign Relations, a think tank based in Berlin, and what we see is that sometimes the accreditation process can be rather difficult, or it can depend on vetoes from other countries, which are sometimes arbitrary, in the sense that similar organizations in Germany don't have the chance to participate because of x, y, z. So we sometimes find ourselves in a privileged position when we are able to give our statements there, but not everyone has the same chances, and that's sad, because, for example at the OEWG, it's great to see the involvement of different stakeholders; sometimes we are the ones doing the research and providing impactful research for those conversations. If other members of the group want to add something, please feel free to do so now.

James Shires: Thank you very much, that is very much appreciated. Before we turn to Corinne and Louise, I just want to highlight one issue that was discussed in the group I was fortunate to be part of, which is language barriers. We had one attendee here from Chad who speaks French and Arabic, but not English, and who was nonetheless seeking to participate in what is, unfortunately, an English-language-only forum. So I would like to invite her to say a little, in another language, about the reflections you had throughout the group.

AUDIENCE: As-salamu alaykum (peace be upon you)

James Shires: So, just to say a little about the challenges facing a country where language is not the only barrier to participation: you have visa applications, you have popular awareness of things like cybersecurity measures, and ultimately government resources and money to contribute. And, unfortunately, the language barriers themselves have been demonstrated in this session. So, in the last five minutes of the session, I would like to thank you all for your participation and turn to Corinne and Louise to give a few concluding reflections.

Corrine Casha: Yes, just a few remarks. I don't have a lot more to add to what the participants stated; I was very pleased to hear what they said, and they were all very valid points, and really food for thought for me too, coming from a government perspective. I see two things. One is the fact that, coming from government, I see that very often decision-making at the highest level, for example at the UN, is really the privilege of governments, and I don't say that only in the realm of cyber; I've seen it in the negotiations on oceans, for example. So this is not something restricted to cyber policy; it happens in other policy areas too. And also at the national level: going from the multilateral to the national level, what I see is that there's this culture where government thinks it owns policy. So whenever we come up with a strategy, for example, not enough research, not enough evidence from academia or from industry is fed into the strategy document. When we come up with a policy position, very often it's taken by government but does not include the different perspectives of the other players, and I think it's really important to bring everybody on board, and this is what we are trying to do. And, just not to take up a lot of time, because I know Louise also wants to make some remarks: on the issue of funding, we're very well aware of that, and one of the things my government has been doing, when it comes to negotiations at the UN level, is funding participation. We sponsor delegates from least developed countries who cannot travel, who cannot participate, and we fund them so that they are also included in the process. Apart from that, we also fund fellows, academia and researchers, because we believe it's very important that we continue to fund these different stakeholders so that their ideas and their knowledge are fed into the process. That's all I can say from my end, but I think Louise has a little more to say from hers.

James Shires: Thank you. And Louise, we have already had a sign saying please wrap up, so if you could summarize in one minute, that would be much appreciated.

Louise Marie Hurel: Absolutely, and I'm not going to take much time; Corinne definitely said a lot of what I was going to say. I just wanted to thank you all for the contributions and the thought-provoking discussions. The one thing I would stress, though, is this point that, usually in these conversations, be it at the IGF or even at the UN, we're talking about how to ensure that cross-stakeholder representation is more effective, right? But I think the lesson I'm taking from this dialogue today is: within our respective stakeholder groups, how can we make them more representative? How can we build those bridges within our respective sectors? That is something that Akati in our group mentioned: within civil society, you can always pressure and do advocacy to make other spaces more representative, but are you, as an organization, actually walking the talk of making your staff more diverse, ensuring that you have equity, and ensuring that you are sensitive to regional representation within your team? So I think that is the key takeaway for me from our conversation here today. Thank you again, and thanks, James, for holding the fort and for the wonderful session. Thanks all for the contributions.

James Shires: Thank you, Louise; thank you, Corinne; thank you, Annie, Eric and Yasmin for facilitating with us; thank you everyone online and in person. Shukran jazilan, and have a great rest of your day.

Corrine Casha

Speech speed

135 words per minute

Speech length

815 words

Speech time

361 seconds

Importance of including diverse perspectives

Explanation

Corrine Casha emphasizes the need to incorporate various perspectives in cyber policy, including industry, academia, civil society, and researchers. This approach is seen as crucial for developing comprehensive national cyber strategies.

Evidence

Malta’s sponsorship of fellows to participate in the IGF

Major Discussion Point

Major Discussion Point 1: Cross-community engagement in cyber policy

Agreed with

Louise Marie Hurel

Agreed on

Importance of diverse perspectives in cyber policy

Differed with

Louise Marie Hurel

Differed on

Approach to inclusive policy-making

Bridging divides between researchers and government officials

Explanation

Casha stresses the importance of fostering interaction between cyber researchers and government officials. This collaboration is viewed as essential for developing well-rounded cyber policies and strategies.

Major Discussion Point

Major Discussion Point 1: Cross-community engagement in cyber policy

Government-centric decision making in policy development

Explanation

Casha acknowledges that governments often dominate policy-making, particularly at high levels like the UN. This approach can lead to insufficient inclusion of research and evidence from academia or industry in strategy documents.

Evidence

Examples from UN negotiations and national-level policy-making

Major Discussion Point

Major Discussion Point 2: Obstacles to greater cross-community interaction

Agreed with

Louise Marie Hurel

AUDIENCE

Agreed on

Challenges in cross-community engagement

Lack of resources and funding for some stakeholders

Explanation

Casha recognizes the issue of insufficient funding for some stakeholders to participate in cyber policy discussions. She highlights her government’s efforts to address this by funding participation of least developed countries and sponsoring academics and researchers.

Evidence

Malta’s funding for delegates from least developed countries and sponsorship of fellows

Major Discussion Point

Major Discussion Point 2: Obstacles to greater cross-community interaction

Sponsoring fellows to participate in the IGF

Explanation

Casha mentions that the Maltese government sponsors fellows to participate in the IGF. This initiative aims to increase diverse participation in cyber policy discussions.

Major Discussion Point

Major Discussion Point 3: Initiatives to improve cross-community engagement

Agreed with

Louise Marie Hurel

Agreed on

Need for initiatives to improve cross-community engagement

Government funding for participation of developing countries

Explanation

Casha describes her government’s efforts to fund participation of least developed countries in UN-level negotiations. This initiative aims to make cyber policy discussions more inclusive and representative.

Evidence

Funding for delegates who cannot travel or participate due to resource constraints

Major Discussion Point

Major Discussion Point 3: Initiatives to improve cross-community engagement

Agreed with

Louise Marie Hurel

Agreed on

Need for initiatives to improve cross-community engagement

Louise Marie Hurel

Speech speed

148 words per minute

Speech length

1285 words

Speech time

519 seconds

Identifying researchers globally working on responsible cyber behavior

Explanation

Hurel discusses the Global Partnership for Responsible Cyber Behaviour’s efforts to identify researchers worldwide working on responsible cyber behavior. This initiative aims to develop a critical mass of expertise and diverse perspectives on the topic.

Evidence

Launch of the Global Partnership for Responsible Cyber Behaviour

Major Discussion Point

Major Discussion Point 1: Cross-community engagement in cyber policy

Agreed with

Corrine Casha

Agreed on

Importance of diverse perspectives in cyber policy

Challenges of balancing different stakeholder voices in UN processes

Explanation

Hurel highlights the difficulties in balancing diverse stakeholder voices in UN processes related to cyber policy. She notes that these processes are often government-centric and dominated by strategic competition between major powers.

Evidence

Observations from UN processes on rules for responsible state behavior in cyberspace

Major Discussion Point

Major Discussion Point 1: Cross-community engagement in cyber policy

Agreed with

Corrine Casha

AUDIENCE

Agreed on

Challenges in cross-community engagement

Global Partnership for Responsible Cyber Behaviour initiative

Explanation

Hurel describes the Global Partnership for Responsible Cyber Behaviour, an initiative launched by RUSI in 2022. The partnership aims to bring together researchers studying responsible cyber behavior and foster connections between different communities.

Evidence

Regional launch of the initiative in Kyoto

Major Discussion Point

Major Discussion Point 3: Initiatives to improve cross-community engagement

Agreed with

Corrine Casha

Agreed on

Need for initiatives to improve cross-community engagement

Leveraging existing UN system structures for dialogue

Explanation

Hurel suggests using existing UN system structures to facilitate dialogue between different communities on cyber policy. This approach aims to overcome limitations of current UN processes and include more diverse perspectives.

Major Discussion Point

Major Discussion Point 3: Initiatives to improve cross-community engagement

Agreed with

Corrine Casha

Agreed on

Need for initiatives to improve cross-community engagement

Need for more diverse representation within stakeholder groups

Explanation

Hurel emphasizes the importance of improving representation within stakeholder groups, not just between them. She suggests that organizations should focus on internal diversity and regional representation to truly enhance cross-community engagement.

Major Discussion Point

Major Discussion Point 2: Obstacles to greater cross-community interaction

Differed with

Corrine Casha

Differed on

Approach to inclusive policy-making

AUDIENCE

Speech speed

175 words per minute

Speech length

181 words

Speech time

61 seconds

Accreditation difficulties for participating in multilateral forums

Explanation

An audience member highlights the challenges in obtaining accreditation for multilateral forums like the Open-Ended Working Group. The process can be difficult and sometimes arbitrary, limiting the participation of certain organizations.

Evidence

Personal experience from working at the German Council on Foreign Relations

Major Discussion Point

Major Discussion Point 1: Cross-community engagement in cyber policy

Agreed with

Corrine Casha

Louise Marie Hurel

Agreed on

Challenges in cross-community engagement

Geopolitical tensions affecting participation

Explanation

The audience member notes that geopolitical tensions can impact participation in multilateral forums. Some organizations may be denied accreditation due to vetoes from other countries, limiting diverse input in cyber policy discussions.

Evidence

Observations from the Open-Ended Working Group process

Major Discussion Point

Major Discussion Point 2: Obstacles to greater cross-community interaction

James Shires

Speech speed

158 words per minute

Speech length

1665 words

Speech time

631 seconds

Language barriers limiting participation

Explanation

Shires highlights language barriers as a significant obstacle to cross-community interaction in cyber policy discussions. He notes that non-English speakers face challenges in participating fully in predominantly English-language forums.

Evidence

Example of an attendee from Chad who speaks French and Arabic but not English

Major Discussion Point

Major Discussion Point 2: Obstacles to greater cross-community interaction

Agreements

Agreement Points

Importance of diverse perspectives in cyber policy

Corrine Casha

Louise Marie Hurel

Importance of including diverse perspectives

Identifying researchers globally working on responsible cyber behavior

Both speakers emphasize the need to incorporate various perspectives, including industry, academia, civil society, and researchers, in cyber policy discussions and development.

Challenges in cross-community engagement

Corrine Casha

Louise Marie Hurel

AUDIENCE

Government-centric decision making in policy development

Challenges of balancing different stakeholder voices in UN processes

Accreditation difficulties for participating in multilateral forums

Speakers agree on the existence of obstacles to cross-community engagement, particularly in government-dominated policy-making processes and multilateral forums.

Need for initiatives to improve cross-community engagement

Corrine Casha

Louise Marie Hurel

Sponsoring fellows to participate in the IGF

Government funding for participation of developing countries

Global Partnership for Responsible Cyber Behaviour initiative

Leveraging existing UN system structures for dialogue

Both speakers highlight various initiatives aimed at improving cross-community engagement in cyber policy discussions, including funding participation and creating platforms for dialogue.

Similar Viewpoints

Both speakers emphasize the importance of connecting researchers with government officials and identifying global expertise in responsible cyber behavior.

Corrine Casha

Louise Marie Hurel

Bridging divides between researchers and government officials

Identifying researchers globally working on responsible cyber behavior

The speakers agree on the existence of various barriers to participation in cyber policy discussions, including resource constraints, accreditation challenges, and geopolitical factors.

Corrine Casha

Louise Marie Hurel

AUDIENCE

Lack of resources and funding for some stakeholders

Accreditation difficulties for participating in multilateral forums

Geopolitical tensions affecting participation

Unexpected Consensus

Internal diversity within stakeholder groups

Louise Marie Hurel

Need for more diverse representation within stakeholder groups

While most discussions focus on cross-community engagement, Hurel unexpectedly emphasizes the importance of improving diversity and representation within stakeholder groups themselves.

Overall Assessment

Summary

The speakers generally agree on the importance of diverse perspectives in cyber policy, the challenges in cross-community engagement, and the need for initiatives to improve participation. There is a strong consensus on the need to bridge divides between different stakeholders and address barriers to participation.

Consensus level

High level of consensus among speakers, with agreement on major issues. This suggests a shared understanding of the challenges and potential solutions in improving cross-community engagement in cyber policy. The implications are that collaborative efforts to address these challenges may be well-received across different stakeholder groups.

Differences

Different Viewpoints

Approach to inclusive policy-making

Corrine Casha

Louise Marie Hurel

Importance of including diverse perspectives

Need for more diverse representation within stakeholder groups

While both speakers emphasize the importance of inclusivity, Casha focuses on including diverse external perspectives in policy-making, while Hurel stresses the need for internal diversity within stakeholder groups.

Unexpected Differences

Focus on funding and resources

Corrine Casha

Louise Marie Hurel

Lack of resources and funding for some stakeholders

Identifying researchers globally working on responsible cyber behavior

While Casha emphasizes the importance of funding and resources for participation, Hurel unexpectedly does not address this issue directly, instead focusing on identifying and connecting researchers globally. This difference in focus might indicate varying priorities in addressing cross-community engagement challenges.

Overall Assessment

Summary

The main areas of disagreement revolve around the specific approaches to achieving inclusive and diverse participation in cyber policy discussions. While speakers generally agree on the importance of cross-community engagement, they differ in their emphasis on internal vs. external diversity, funding priorities, and methods of facilitating dialogue.

Difference level

The level of disagreement among the speakers is relatively low, with more emphasis on complementary perspectives rather than conflicting views. This suggests a generally aligned approach to improving cross-community engagement in cyber policy, which could lead to more comprehensive and inclusive policy-making processes if different strategies are combined effectively.

Partial Agreements

Both speakers agree on the need for better cross-community engagement in cyber policy, but they differ in their approaches. Casha emphasizes direct interaction between researchers and government officials, while Hurel focuses on leveraging existing UN structures to facilitate dialogue.

Corrine Casha

Louise Marie Hurel

Bridging divides between researchers and government officials

Challenges of balancing different stakeholder voices in UN processes

Takeaways

Key Takeaways

Cross-community engagement is crucial for effective cyber policy development

There are significant obstacles to inclusive participation, including language barriers, lack of resources, and geopolitical tensions

Initiatives like sponsoring fellows and creating partnerships can help improve cross-community engagement

More diverse representation is needed both across and within stakeholder groups

Government-centric decision making in policy development limits inclusion of other perspectives

Resolutions and Action Items

Continue efforts to fund and sponsor participation of developing countries and underrepresented groups in cyber policy forums

Leverage existing UN system structures to facilitate more inclusive dialogue

Organizations should examine their own diversity and representation internally

Unresolved Issues

How to overcome language barriers in international cyber policy discussions

How to balance different stakeholder voices effectively in UN processes

How to ensure consistent inclusion of academic and industry perspectives in government policy-making

Suggested Compromises

Governments funding participation of diverse stakeholders in policy discussions to improve inclusivity while maintaining some control over the process

Thought Provoking Comments

So one of the main tasks that we are currently conducting is to bridge the divides and have cyber researchers and cyber government officials interact more with each other. From my personal experience, this is a very important task, because when it comes to, for example, drafting national cyber strategies, it's important to have different ideas, not restricted to what government considers cyber policy to be, but also factoring in the perspectives of research, academia and government, coupled with other partners.

speaker

Corrine Casha

reason

This comment highlights the importance of cross-sector collaboration in developing effective cyber policies, emphasizing the need to include diverse perspectives beyond just government.

impact

This set the tone for the discussion by emphasizing the importance of multi-stakeholder engagement in cyber policy development. It led to further exploration of challenges and opportunities in cross-community interaction.

And I think the debate here is to bring those different bits and pieces and these different countries, especially developing countries to have a little bit more voice in those processes. And sometimes that is obfuscated by these broader strategic competition.

speaker

Louise Marie Hurel

reason

This comment brings attention to the power dynamics and representation issues in international cyber policy discussions, particularly highlighting the challenges faced by developing countries.

impact

It broadened the scope of the discussion to include global power dynamics and representation issues, leading to considerations of how to make international cyber policy processes more inclusive.

What we see is that sometimes the accreditation process can be rather difficult, or it can depend on vetoes from other countries, which are sometimes arbitrary, in the sense that similar organizations in Germany don't have the chance to participate because of x, y, z. So we sometimes find ourselves in a privileged position when we are able to give our statements there, but not everyone has the same chances.

speaker

Audience member

reason

This comment provides a concrete example of the challenges in participation and representation in international cyber policy forums, highlighting issues of access and arbitrary exclusion.

impact

It grounded the discussion in real-world examples and led to further exploration of barriers to participation in international cyber policy discussions.

As-salamu alaykum (peace be upon you)

speaker

Audience member from Chad

reason

While brief, this comment powerfully demonstrated the language barriers present in the discussion itself, making the abstract concept of language barriers concrete and immediate.

impact

It brought immediate attention to the language barriers in international discussions and led to reflection on how these barriers impact participation and representation, especially for non-English speaking countries.

What I see is that when we come up with a policy position, very often it's taken by government but does not include the different perspectives of the other players, and I think it's really important to bring everybody on board, and this is what we are trying to do.

speaker

Corrine Casha

reason

This comment acknowledges the limitations of current government-centric policy-making processes and expresses a commitment to more inclusive approaches.

impact

It shifted the discussion towards practical steps for improving inclusivity in policy-making processes and led to sharing of initiatives aimed at increasing participation.

Usually in these conversations, be it at the IGF or even at the UN, we're talking about how to ensure that cross-stakeholder representation is more effective, right? But I think the lesson that I'm taking from this dialogue today is: within our respective stakeholder groups, how can we make them more representative? How can we build those bridges within our respective sectors?

speaker

Louise Marie Hurel

reason

This comment shifts the focus from external representation to internal diversity and inclusivity within stakeholder groups, introducing a new dimension to the discussion.

impact

It prompted reflection on internal practices within organizations and sectors, encouraging participants to consider how they can improve representation and diversity within their own contexts.

Overall Assessment

These key comments shaped the discussion by progressively broadening its scope from the initial focus on cross-sector collaboration to encompass global power dynamics, practical barriers to participation, language issues, and internal organizational practices. They helped to make the discussion more concrete by providing real-world examples and challenges, while also pushing participants to reflect on their own roles and responsibilities in improving representation and inclusivity in cyber policy discussions. The comments collectively highlighted the complexity of achieving truly inclusive and representative cyber policy processes, touching on issues of access, language, funding, and organizational culture.

Follow-up Questions

How can we improve cross-community interaction in cyber policy dialogues?

speaker

James Shires

explanation

This was the central theme of the discussion and was posed as a question for participants to explore in breakout sessions.

How can we identify researchers working on responsible cyber behavior in different regions of the world?

speaker

Louise Marie Hurel

explanation

This was mentioned as an ongoing challenge for the Global Partnership for Responsible Cyber Behaviour initiative.

How can developing countries have a stronger voice in UN cybersecurity processes?

speaker

Louise Marie Hurel

explanation

Louise highlighted that developing countries’ voices are often obfuscated by broader strategic competition in UN processes.

How can we address language barriers in international cyber policy discussions?

speaker

James Shires (on behalf of an attendee from Chad)

explanation

This was identified as a significant obstacle to participation for non-English speakers in international forums.

How can governments better incorporate research and evidence from academia and industry into national cyber strategies?

speaker

Corrine Casha

explanation

Corrine noted that government policies often lack input from other stakeholders, suggesting a need for more inclusive policy-making processes.

How can organizations within each stakeholder group improve their own diversity and representativeness?

speaker

Louise Marie Hurel

explanation

Louise emphasized the importance of stakeholder groups ‘walking the talk’ by ensuring diversity and regional representation within their own organizations.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #25 Multistakeholder cooperation for online child protection


Session at a Glance

Summary

This discussion focused on protecting children in the digital world, addressing the evolving threat landscape and strategies to mitigate risks. Experts highlighted the rapid pace of technological change and the increasing sophistication of online threats targeting children, including cyberbullying, grooming, and exposure to inappropriate content. They emphasized the need for a multi-stakeholder approach involving governments, tech companies, educators, parents, and children themselves.


Key challenges identified included the online disinhibition effect, the ease of creating deepfakes, and the collection of children’s biometric data through gaming. Participants stressed the importance of age-appropriate digital literacy education and the development of practical cybersecurity skills for children. They also discussed the role of parental controls and the need for open communication between parents and children about online safety.


The discussion touched on regulatory approaches, with some advocating for stricter content moderation and others cautioning against blanket bans on technology use. Experts emphasized the importance of international cooperation in addressing cross-border cyber threats. They also highlighted the need for ongoing research into emerging risks and the development of evidence-based interventions.


Participants agreed that while the threat landscape is likely to worsen, increased awareness and improved digital skills could help children navigate online spaces more securely. The discussion concluded with a call for continued dialogue and collaboration among stakeholders to ensure children’s rights and safety in the digital environment.


Keypoints

Major discussion points:


– The increasing threats and risks to children in the digital world, including cyberbullying, grooming, and exposure to harmful content


– The need for a multi-stakeholder approach involving government, industry, civil society, and academia to protect children online


– The importance of education and awareness for children, parents, and teachers about online safety


– The challenges of regulating and enforcing child protection measures in the fast-evolving digital landscape


– The role of technology companies in developing tools and solutions to enhance child safety online


The overall purpose of the discussion was to examine the current and future threats to children’s safety in the digital world and explore potential solutions and best practices for protecting children online through collaboration between different stakeholders.


The tone of the discussion was serious and concerned, reflecting the gravity of the issues being discussed. However, it was also constructive and solution-oriented, with speakers offering practical suggestions and examples of initiatives to address the challenges. The tone became more urgent and action-oriented towards the end, with participants emphasizing the need for immediate and coordinated efforts to protect children online.


Speakers

– Gladys O. Yiadom: Moderator


– Melodena Stephens: Professor of innovation and technology governance at Mohammed bin Rashid School of Government in Dubai, UAE


– Elizaveta Belyakova: Chairperson for the Alliance for the Protection of Children in the Digital Development


– Elmirti Arousafi: Cyber security expert and board member of the Moroccan Centre for Polytechnic Research and Innovation


– Heng Lee: Senior government affairs and public policy manager at Kaspersky


– Andre Gorobets: Representative from Ministry of Education (country not specified)


– Margarita Yurova: Translator for Andre Gorobets


– Anne Mickler: Online moderator


Additional speakers:


– Ethan: Youth ambassador of the One Power Foundation from Hong Kong


– Grace: Representative from Pan-African Youth Ambassadors for Internet Governance, from Uganda


– Jutta Croll: Child rights advocate from the German Digital Opportunities Foundation


Full session report

Protecting Children in the Digital World: A Multi-Stakeholder Approach


This discussion focused on the critical issue of protecting children in the rapidly evolving digital landscape. Experts from various fields, including academia, government, and the technology sector, convened to address the growing threats to children’s safety online and explore potential solutions.


Survey Insights


The session began and ended with a survey of participants’ views on the threat landscape for children in the digital world. Notably, there was a shift in perception by the end of the discussion, with more participants recognizing the severity of online threats to children.


Key Threats and Challenges


The participants highlighted the increasing sophistication of online threats targeting children. Melodena Stephens, a professor of innovation and technology governance, emphasized the alarming rise of deepfakes and the lack of alignment on standards for age-appropriate content. Cyberbullying and emotional harm from online interactions were also identified as significant concerns.


Elmirti Arousafi, a cybersecurity expert, pointed out that the rapid pace of technological change is outpacing regulatory responses, creating a challenging environment for protecting children. Andre Gorobets, representing a Ministry of Education, stressed the transborder nature of online threats, further complicating efforts to safeguard children.


Multi-Stakeholder Collaboration


A recurring theme throughout the discussion was the need for a multi-stakeholder approach to address these complex issues. Melodena Stephens highlighted the crucial role of political will from governments in prioritizing child safety. She also stressed the importance of industry alignment on values and ethics, as well as the role of researchers in studying both the benefits and harms of technology.


Education and Awareness


The experts agreed on the critical importance of education and awareness in protecting children online. Elmirti Arousafi advocated for gamified, interactive curricula to teach children about online safety, making the learning process more engaging and effective. He specifically mentioned the Espace Maroc Cyber Confiance program as an example of such initiatives. Heng Lee, from Kaspersky, shared examples of educational resources such as cybersecurity alphabet books for children.


The discussion also touched on the need for practical guidance for parents on protecting their children online. Elmirti Arousafi emphasized the importance of equipping parents with the knowledge and tools to navigate the digital landscape alongside their children, stressing the importance of building trust between parents and children for effective online safety.


Regulatory Approaches and Challenges


The participants discussed various regulatory approaches to enhancing child safety online, while acknowledging the challenges in implementing and enforcing such measures. Heng Lee highlighted the difficulty of enforcing age restrictions for social media use and suggested the need for dedicated regulatory bodies focused on child online protection.


There was some disagreement on the approach to regulation. While Melodena Stephens emphasized the need for strong political will from governments, Heng Lee suggested a more balanced approach that considers both consumer protection and innovation in the tech industry.


Jutta Croll, a child rights advocate, introduced the UN Convention on the Rights of the Child as a basis for government action on child online protection. She specifically mentioned General Comment No. 25, adopted in 2021, which obliges states that have ratified the convention to implement children’s rights in the digital environment.


Role of Technology Companies


The discussion explored the role of technology companies in developing tools and solutions to enhance child safety online. Heng Lee presented Kaspersky’s Safe Kids as an example of parental control software designed to protect children in the digital space. He detailed features such as GPS tracking, screen time management, and content filtering, emphasizing the app’s ability to help parents guide their children’s online activities.


Youth Involvement and Empowerment


An unexpected point of consensus emerged around the importance of involving young people in developing solutions for online child protection. The potential for youth-led awareness campaigns on online risks was discussed, recognizing children as active participants rather than just passive recipients of protection measures. The Pan-African Youth Ambassadors for Internet Governance expressed interest in raising awareness about online risks among their peers.


A youth ambassador from Hong Kong raised the controversial topic of banning children from using mobile phones or the internet, sparking a discussion about the balance between protection and digital literacy.


Alliance for the Protection of Children in the Digital Environment


Elizaveta Belyakova, Chairperson for the Alliance for the Protection of Children in the Digital Development, presented the organization’s activities. The Alliance focuses on creating a safer digital environment for children through various initiatives and collaborations.


Thought-Provoking Insights


Several thought-provoking comments deepened the discussion. Heng Lee illustrated the long-term risks of seemingly harmless online interactions by describing how a chatbot could casually collect sensitive information from a child. Melodena Stephens highlighted the misconception that home internet use is always safe, framing it as a literacy issue. She also shared an example of a seven-year-old boy’s approach to identifying strangers in online games, demonstrating children’s potential for developing safety strategies.


An audience member raised the issue of content creation challenges and the need for skilled creators to produce engaging, age-appropriate content for children.


In conclusion, the discussion highlighted the complex and evolving nature of online threats to children, emphasizing the need for a comprehensive, collaborative approach involving governments, industry, civil society, parents, and children themselves. While challenges remain, the experts agreed that increased awareness, improved digital skills, and coordinated efforts could help create a safer online environment for children.


Session Transcript

Gladys O. Yiadom: Can the online moderator share her screen? Full screen, please. Thank you. So the first question is this: How will the threat landscape for children in the digital world develop over the next 3-4 years? Just as a reminder, on-site participants can scan the QR code to take part in the survey. The options: it will increase significantly and lead to increased abuse and cybercrime, so the threat situation is getting worse; it will increase significantly, but at the same time children's awareness and knowledge of cybersecurity issues and protection against threats in the digital world will also increase; the threat situation will remain more or less the same; it will increase significantly, but better knowledge and well-developed defense skills, as well as better developed digital skills, will ensure that children can operate more securely in the digital world, so in this respect the threat situation will improve; or, I cannot give an estimation. I'll just give you a couple of minutes to take the survey, so that we can view the results all together. And please, do we have the results? Okay, the results are coming, just a few seconds, please. I just received them; give me one second to share the graph. Thank you, okay, let me switch the screen, then I'll be able to share them. There we go, thank you. So we now have the results: 17% of you have responded that it will increase significantly and lead to increased abuse and cybercrime; 33% of you have indicated that it will increase significantly but at the same time, children's awareness and knowledge, my mic is not clear apparently, can you please fix it, thank you, that it will increase significantly but at the same time, children's awareness and knowledge of cybersecurity will increase.
And 32% of you have answered that it will increase significantly, but better knowledge and well-developed defense skills will ensure that children can operate more securely in the digital world, and lastly 17% of you cannot give an estimation. We will run this survey again at the end of the session and see whether the results have changed. Thank you very much, Anne. So let's now start the conversation with our speakers. I am pleased to have Melodena with us today. A brief introduction: Melodena is a professor of innovation and technology governance at the Mohammed bin Rashid School of Government in Dubai, UAE. She has three decades of senior international leadership and strategy consulting experience, working on policy issues with organizations such as Agile Nations, the Council of Europe and the Dubai Future Foundation. So Melodena, please tell us: in your opinion, what are the main threats, risks and dangers for children in the digital world, and can you illustrate the negative effects with a few examples? And I will kindly ask the team to share Melodena's slides, please. Thank you.


Melodena Stephens: Thank you so much. Let me start with just a little question. If I show you this picture, can you tell me where the threat comes from? This is what children play with. Post-pandemic we went a lot online, and right now we're using gaming as an educational medium. For example, I'll just give you figures: Minecraft has 173 million active users, and if you look at the figures, under-15s are approximately 20.6 percent, but 15- to 21-year-olds, and this is a little bit funny because we don't know that number exactly, are 43 percent. Let's take another online game, Roblox: 71.5 million users, two-thirds of whom are children. And this raises interesting questions, because when I look at this figure, I don't know who's a stranger, who's a bot, what kind of information is being collected. And if children online are using VR sets, and they are, because parents sometimes leave them unsupervised, then when they play a game of approximately 20 minutes, there are two million biometric data points being collected. This may not seem like much, but tomorrow this could be information that could be used for security purposes. So I think the challenge we have right now is that we don't see the threats, because we aren't having enough discussions. We're taking a one-sided view: oh, it's online education and it's developing digital skills. But we're not looking at what the security concerns might be. We're not asking the question: what happens if children are online too long? And here's another example that I wanted to show you. This is not working. Next slide, please. Could you put up the next slide? Okay, so everything has gone a little bit off. So if I take something like this, there are a lot of age-appropriate codes being developed right now. But there's a little bit of a problem. A recent global online safety survey showed that 18- to 19-year-olds scored very high on addiction; that's one fourth of them.
And 13- to 17-year-olds didn't score much lower, at 18%, but there are much younger children too. I see babies with iPads who are three and four years old, because parents want to keep them occupied, right? And then we also see that many of these children have profiles put up, either directly or indirectly. If a parent puts up a child's profile on Instagram or TikTok, there is data being collected. So we see that there is a problem in this gap. Another interesting thing is that when crimes happen, they are most likely associated with people you know, so friends and family. Imagine a child playing online with a "friend". They don't really know this friend; the parents have never met them in person. So you can see how much that threat increases. And then you add the concept of non-player characters, which are AI bots. In the US, we recently saw a 14-year-old take his own life because he fell in love with his AI bot. The bot did not ask him to commit suicide; it said, "please join me", and the child took that to mean he should take a gun and shoot himself. So we see these, I wouldn't say small, these horrific consequences of unsupervised online time. Another big challenge that I think is really important: when you look at the standards on how it is decided what children are allowed to play, there is no alignment. I took this picture across different rating systems, and it's the same game, yet you can see ratings of six, seven, 10 and 12 years old. So we don't have alignment on standards, and there isn't enough education for parents on this. If I look at all the online harms, and I'm just going to leave that up there, you see there are quite a lot, but I want to talk about things like self-harm. We find that the WHO says the fourth largest cause of death among, and I'm taking 15- to 29-year-olds here, is bullying and cyberbullying.
Cyberbullying often happens online, and we may not recognize it because it does not result in physical harm, but it results in deep emotional harm. So I think there are challenges when we look at how we want to manage this process. I’m going to leave it there, but we’ll leave more for questions afterwards. Thank you.


Gladys O. Yiadom: Thank you so much, Melodena. What you highlighted is very key, and it shows that there is a need to take action to protect children, which leads us to our next speaker, Elizaveta Belyakova. Very pleased to have you with us, Elizaveta. Elizaveta is the chairperson for the Alliance for the Protection of Children in the Digital Development. Elizaveta, I wanted to ask you, as part of this alliance: what were the motives for founding the Alliance for the Protection of Children in the Digital World? What are the key stakeholders you work with, and what are the goals of your organisation? Over to you.


Elizaveta Belyakova: Dear colleagues, good afternoon. I speak Russian because my Russian is much better than English. Sorry. Dear colleagues, good afternoon. I am glad to see you all. Thank you very much for the question.


Gladys O. Yiadom: Elisaveta, sorry, I need to interrupt you. I would kindly ask you to speak in English because we do not have interpreter services here. So, I will kindly ask you to speak in English, thank you.


Elizaveta Belyakova: My English is not good, but okay. I am glad to present to you the activity of the Alliance for the Protection of Children in the Digital Environment, which has been bringing together Russian technology companies to create a safe digital environment for our children. Founded in 2021, the Alliance has been working to address the key challenges of the digital age. The Alliance has become a unique platform that unites the largest companies in Russia: Kaspersky Lab, VTK and many others. One of our most important activities is the creation of a large digital literacy education portal. This portal provides children, parents and teachers with access to materials on cyber risks that help develop skills of protection against digital threats, such as psychological manipulation, data breaches and others. We have also developed a book, Risks of the Digital Environment; this is also a big, interesting project on future risks and others. We also pay special attention to cooperation in international arenas; our activity includes plans with Luvon, BRICS and many other organizations. Let me also… Sorry, this is really not… Okay. Let me also answer a few questions for the lecture today. First, what is the best way to address children, parents and teachers? It is important to use games and educational formats to involve children in the education process. Second, how should learning programs be adapted? We believe internet literacy programs should be interactive and take into account the psychological characteristics of children, combining games with inclusive parental care. Third, how should the dialogue be stimulated? We see great potential in organizing regulatory meetings, workshops and round tables, bringing together business, government and civil society. Thank you for your attention, and I look forward to continuing this prospective dialogue. Thank you so much.


Gladys O. Yiadom: Thank you very much, Elizaveta, for your words. The contribution that you're making in this space is very important; thank you for sharing your experience with us. Let me now turn to Elmirti, who is with us here. Thank you again for being with us, Elmirti. A quick bio: Elmirti Arousafi is a cybersecurity expert and board member of the Moroccan Centre for Polytechnic Research and Innovation, where he plays a key role in advancing cybersecurity initiatives and research. He is also a core contributor to Espace Maroc Cyber Confiance, a national program dedicated to protecting children and other vulnerable groups in the digital space. My question to you, Elmirti, is this: in your opinion and experience, what are the main challenges when it comes to protecting children in the digital space?


Elmirti Arousafi: Thank you so much, Gladys. First of all, I would like to thank Kaspersky for the invitation and for creating this opportunity to talk about a very important subject. I was really struck by the numbers you showed, Melodena; this issue is really getting worse year after year. And as you kindly introduced, our experience in Morocco with the EMC, Espace Maroc Cyber Confiance, let us see first-hand how difficult it is to implement a national program to raise awareness around these issues. I would like to share some of those difficulties. The issue is much bigger than this, but we can summarize it into maybe three or four challenges. The main challenge, from what we see, is the speed at which technology evolves and, unfortunately, the speed at which it is misused. Technology advances quickly, but harmful actors use those same advances, such as AI and deepfakes, to create harms that are more complex to combat. This sophistication poses a real difficulty from a regulatory as well as a technical perspective. Acting quickly is something we are trying to implement, for example through a helpline that helps children remove content online. We also try to go beyond children to other vulnerable populations. In order to act quickly, the people we are trying to reach need, and here I come to my second challenge, to be aware, right? To have at least minimal training, to be able to detect a fraud scenario or a cyber threat in the internet realm. While children are usually digitally savvy, we noticed that they are not well trained in the main threats of the internet and digital worlds. Equally, parents and educators may lack the knowledge, especially the technical knowledge and tools, to guide them effectively in this endeavor. This is the gap we see, and cybercriminals exploit that gap. The third and final challenge would be regulatory inconsistency across borders, and I emphasize across borders. Our approach at EMC is to act locally, but we very quickly found out that we cannot. We are talking about the internet, about internet giants, about international platforms, so we have to expand our reach, hence the collaboration with some of our partners like Kaspersky, to actually be in contact with regulatory boards across borders. So we are trying, and in Morocco we have been addressing these challenges through a multi-faceted approach. I will leave some of those remarks for the next questions.


Gladys O. Yiadom: Thank you. Thank you very much, Elmirti, and I think what you shared with us also highlights the need for a multi-stakeholder approach at the end of the day, with representatives from civil society, of course, including academia and government, which is key. This leads me to our next speaker, Heng, who will represent the industry. So Heng, thank you very much as well for being with us today. Just a quick intro, and then I'll share my question. Heng Lee is Singaporean and a lawyer by training. He is a senior government affairs and public policy manager at Kaspersky. Prior to that he worked at Singapore's Ministry of Home Affairs as assistant director of technology and data policy, where he studied issues at the intersection of law enforcement and technology, including crimes targeting children and cyberbullying. So Heng, my question to you: why is Kaspersky, as a cybersecurity company, committed to protecting children in the digital space, and what projects has the company initiated in that regard?


Heng Lee: Thank you, thank you Gladys and my fellow esteemed speakers. Let me answer this in two parts: the why and the how. Firstly, the why. Melodena and Elmirti have shared a lot about the threats and challenges of protecting children online. I just want to add one more perspective from the vantage point of a cybersecurity company: why the online quality of this problem makes it particularly difficult, and thus relevant to the entire tech industry. In the physical world, we have special standards for children, like child safety seats in a car and warning labels about small parts that could easily be swallowed by a toddler. These are tangible things that you can touch and feel, and if there is a bully at the playground, you know who he or she is and you can see what he or she has done. But the online world gives rise to a different set of behaviours, what is called the online disinhibition effect, where individuals behave differently than they would in face-to-face interactions. This is partly because of the anonymity and lack of accountability that cyberspace offers, and partly because consequences aren't felt as immediately. Children, who often do not have a full appreciation of these consequences, become particularly susceptible to this effect. Since the mischief here arose because of tech, the tech industry is naturally well placed to offer many of the solutions to counter it, because tech practitioners have a good sense of trends and of designing these into workable solutions. Which brings me to the how. What are some of the projects that Kaspersky as a company has initiated in this space? The first that I want to share is parental control. Many cybersecurity companies have come up with parental control solutions, and so has Kaspersky. Ours is called Safe Kids, and it's been around for 10 years, having been launched in 2014. This app, when installed on a device, protects children from harmful content.
It interacts with search engines and browsers to block search requests, and once a week parents receive reports on what their child has searched for on the Internet. This helps them better understand the child's interests and remind them what is suitable to search for on the Internet and what is not. The app also allows web filtering, enabling parents to block adult content, violent sites and video games. Moving on to usage control, Safe Kids also allows the blocking of inappropriate apps based on the child's age. Parents can set time limits for the usage of the device by scheduling time slots and days off, and the device will be blocked when a time limit is up. It can also be switched off at a given time when the child needs to do their homework or be engaged in other screen-free activities. In recognition of how well-rounded this solution is, Safe Kids has received awards from the independent assessor AV-TEST in Germany and also at the Mobile World Congress in Barcelona. Of course, we know that such apps are not completely without controversy, so we've also issued guidelines on whether and how the installation of the app should be discussed with children. For instance, we suggest that from the ages of 3 to 6 no discussion is needed, from 7 to 10 children need to be informed, from 11 to 13 there has to be a discussion, and from 14 to 17 there should also be mutual agreement. This is where tech really intersects with policy, so Kaspersky also extends into thought leadership and education programs, engaging even parents and teachers, who need to be equipped with knowledge about cyber threats to make fully informed decisions about what is best for their child. On that front, Kaspersky has been conducting events to promote cyber hygiene habits; in 2023 alone we did 107 events, reaching some 700,000 people around the world.
What we really sought to do is to share some of our findings on threats and make them actionable for parents and teachers, like anti-hacking, protecting children's privacy, identifying indicators of cyberbullying, and helping children become more resilient as digital devices become the norm. Kaspersky has also published children's books on good cyber hygiene habits. One of them, launched this year, is called the Cybersecurity Alphabet, where we are teaching A to Z. But here, A is for authentication, B is for backup, C is for CAPTCHA, and D is for digital footprint. It is a profound reflection of the world that we live in today and how fast changes are happening, because I don't think even adults may know all of these words. So I encourage everyone to download a copy. There are limited physical copies, which I think Gladys has also been distributing at our booth at IGF. Of course, there are still many other initiatives, like a joint study with the UAE government on children's online habits, and a white paper written with the Singapore Institute of Technology on motivations for safe online behavior. These initiatives allow us, as a cybersecurity company, to contribute to the ecosystem as practitioners with real stories to share that are backed by our data. But given the limited time, I won't be able to go into the details of many of these initiatives. I'll be happy to share more about them later on in the questions, as well as during the interaction session with the audience. Thank you very much.


Gladys O. Yiadom: Thank you very much, Heng, for this comprehensive overview of the actions led by Kaspersky. As you were saying, there is a need to have this conversation with children from the get-go. So now, moving on to the sort of conversation that I want to have with all of you panelists. Elmehdi, you referred earlier to risks that are coming up with AI. So I will address this question to Melodena and Heng: AI is often exploited by cybercriminals to harm children via deepfakes, as you mentioned, Elmehdi, or texts that look convincingly real. What can the different stakeholder groups do to counter this danger and mitigate the risk? So I'll kindly ask Melodena to share her insight, and then Heng. Thank you.


Melodena Stephens: So I think the first thing is to know how many pictures it takes to make a deepfake. You can do it with one. How much voice recording? 15 minutes. Then think about where all children's voices are being recorded or pictures are being put up. Schools put up children's pictures online because they are like, this is my new class. We use platforms like learning platforms where we record things. But we never ask the questions: what happens to these recordings? Do these recordings stay with the platforms? What are the safeguards? Because there's constantly training; I mean, Zoom is now training on the recordings that are there. The problem is the large companies may choose, and I don't know yet because I don't have clarity, not to share the information, but there are many, many educational apps. Who's vetting them? And we know apps fail very quickly. You need a minimum user base, around 50 million users, otherwise you're not going to be successful. When they die, what happens to the data that the teachers have used to keep the kids engaged? And we don't have answers, because no one's vetting them, no one's asking these questions. So I think the problem with deepfakes is that they're very easy to make, and we need a whole-of-society responsibility, but we need regulators to get onto this too. We need ministries of education to perhaps vet apps and say, these are approved, these are not approved, and also monitor them, because if they fail, make sure their data is not leaked. If you look at the deep web, one of the biggest challenges is child pornography. The children that are being trafficked there are synthesized: they take real pictures and superimpose them on compromising pictures. Imagine that child growing up and being confronted with a picture that's a deepfake. What will be the psychological damage for that child years later, when they're trying to get a job or anything else?
We do not have an idea how this will evolve in the future, but it’s a little bit scary and I am worried for the children.


Gladys O. Yiadom: Likewise, Melodena. Heng, could you share your thoughts on that as well, please?


Heng Lee: Certainly. Let me divide it into a few parts once again: it's really the people, the process, the technology. The people here really need to gain awareness and education as to what kinds of threats children are facing. In terms of policy, we have a very recent example, in fact, from Australia. I think many of you might have read in the news that Australia is going to become the first country in the world to ban children under the age of 16 from using social media. It is, in fact, the world's most restrictive regime so far. But there are also questions about how this is going to be enforced. How do you ensure that children under 16 don't have access to social media? There are age limits for alcohol, but that doesn't stop children under the age of 18 or 21, or whichever age limit there is, from getting alcohol in different countries. So enforcement can be a problem. And then, of course, there is technology, where, once again, coming from a tech company, I feel that the value a tech company can contribute is practitioner experience: the understanding of what the latest threats are and how to guard against them. Since we're on the topic of AI, Melodena has shared a lot about how easy it is to create a deepfake, just a short voice recording and just one picture. And AI is actually making it easier to groom children as well. Just imagine: if adults can fall for deepfakes, what more for children? The grooming also comes in the form of conversations where the child could think that he or she is talking to a friend who is playing an online game, but it could well be a bot that has been programmed to gather personal data. And these have long-lasting effects. I'll just cite one example: a bot which is making very casual conversation with a child, asking, what is your blood type?
The child takes it as a very innocuous question and answers it. And this is something that stays on the internet forever, because it is pretty unlikely that the child's blood type is going to change. So on the internet, or in a dark web database, whatever blood type this person has remains there forever. This is actually quite a sobering thought, if you think about the kind of damage that online gaming modules, together with AI, can create. So tech companies need to guard against these, to flag them as early as possible whenever they come across new threats. And wearing my previous hat as a regulator from the Singapore Ministry of Home Affairs, I also think that there needs to be an enlightened approach to regulation, to ensure the balance between consumer protection and innovation. Try not to jump at every new threat that emerges. Start with broad principles and guidelines, rather than a very blunt question like, how do we regulate ChatGPT? So I think that kind of balance from regulators, and contributions from tech companies, are essential to create an ecosystem that's safe for children. Thank you.


Gladys O. Yiadom: Thank you, Heng. Absolutely. And when you mention guidelines, we can see here that it's really about an ecosystem. So we're talking about children, but there is a need to also address parents and teachers, which leads me to my next question, which I will address to Elisabetta and Elmehdi. What is the best way to address children, parents and teachers? And how can curricula be adapted to create sufficient and appropriate cybersecurity offerings? So Elisabetta, please, I know that you answered some of the questions earlier, but perhaps you want to add some more comments to that? Elisabetta, are you with us? No, so perhaps let's start with you, Elmehdi, if you can answer this question, please.


Elmehdi Erroussafi: Sure. So if I got the question right, we are talking about how to actually address children, parents, educators, and all the stakeholders. And of course, different targets require different approaches. First of all, let's talk about children. What we noticed is that effective curricula are actually gamified. We want to create interactive experiences while we teach children. We don't want a strictly technical curriculum; we want engagement. So remember, we said that one of the challenges is to get children trained and able to spot threats and alarms. The easiest and most effective way is to create gamified curricula. We tried that through our subsection of EMC, EMC Youth, where we actually created games about the internet and similar interactive experiences, to get as many children engaged as possible. Now for parents, the focus should be on practicality. The guidance for parents needs to be ongoing, and it needs to really equip parents to protect their children. And this is, again, due to the pace of technology. When we talk with parents, and us included, talking about our generation, we start feeling that technology is, you know, far ahead of us. So we need practical guidance; we need to understand. I mean, I understood Roblox from my daughter, and it took me some time to understand the threats behind that. So it's really very interesting to see this space and how parents sometimes feel lost. Again, at EMC, we created practical guides for parents. Now, we also need to think about teachers, because we think that peer-based and structured learning is also important when it comes to cybersecurity. Teachers play a crucial role, and they are in constant contact with children. As an NGO, we are not with children as often as the teachers and parents are.
For teachers, the curriculum needs to be a very specific cybersecurity curriculum that aligns with their teaching objectives, depending on which level we're talking about, which kind of school, et cetera. So again, equip them with foundational cybersecurity skills, and it is essential, again, to make it practical; we don't want only theory. And I think part of your question is how to adapt the curricula. The modules should actually be part of standard education. Ideally, starting from an early age, we want something that evolves into more complexity as the child grows through the different levels. It can be digital safety weeks, for example, which is one of the initiatives we have been doing, or online hygiene sessions, which we have also done with some of the classes. And again, it involves developing age-appropriate resources. So the key is to adapt the curricula to every audience. Thank you.


Gladys O. Yiadom: Thank you very much, Elmehdi. I saw that Elisabetta was with us a couple of seconds ago. Elisabetta, are you back? No, so let us move on to the next question, which I will address to Heng, and to Elisabetta as well if she is back. Heng, how can multistakeholder dialogue and cooperation on online child protection be stimulated at the national and regional levels?


Heng Lee: Thank you. Thank you, Gladys, for the question. I think it really needs to start from the recognition that the problem cannot be faced or solved by any single stakeholder, because no one has all the answers to a problem of this complexity, involving regulators, tech companies, parents, teachers. So it needs to start with that humility and the understanding that it has to be all hands on deck. There also needs to be a recognition that this problem is not confined to a certain country. Whenever we see something very alarming happening in another country, it could be on our shores very quickly. So the urgency of this problem really gives the impetus to put together a dialogue that is very specific to it. And instances like the workshop that we are having today are an example of how we come together to influence policy at a national level, because we have regulators sitting with us today, we have government representatives who can take these ideas back. It's a cross-pollination of ideas, not just from the industry but also from people who have done it before: practitioners, NGOs and academics like Melodena, giving very good ideas on how we can shape a balanced approach to regulating content on the internet that can protect children. Some examples of thematic discussions in the past would be the World Anti-Bullying Forum, where I think the example I raised just now about online disinhibition has in fact been widely discussed, and also the Safer Internet Forum that is run by the EU. I don't know when there is going to be another edition of it, but instances like this really allow people to sit together and learn what has succeeded or failed, especially the instances which have failed, so that we know how to draft laws in a way that avoids these pitfalls.
And finally, coming to the example I talked about just now, about how health data are becoming very crucial and how data about blood types could be harvested from game modules: this is something that I think even health authorities from around the world could be interested in. Especially for children, because adults may understand the importance of keeping health data to themselves, but children may not; they could see it as, oh, I'm just sharing it to see if I'm making a good friend, or they look at it as if they're just sharing their horoscope, and there's nothing wrong with that. So that awareness and education, once again, has to be present across different verticals, healthcare being one of them. So I think different verticals should be involved, and I don't have a comprehensive list of what these verticals may be yet, but as and when there are new challenges targeted at them, they should be involved in this conversation. I think, as a start, that will be a good approach to understanding who it is that we need to gather for these conversations. Thank you.


Gladys O. Yiadom: Thank you. Thank you very much, Heng, for these thoughts. And this gives us the opportunity to open the floor for some comments and questions. I've seen that we have a request from the Ministry of Education. Could we please open the mic for Mr. Andre Gorobets? He's online, right? He's online.


Andre Gorobets: Hello.


Gladys O. Yiadom: Hi, we can hear you, sir.


Andre Gorobets: Can you turn on the mic of our translator? I can speak in Russian, and she can translate.


Gladys O. Yiadom: But I can only ask you to speak in English because we do not have interpretation.


Andre Gorobets: Okay, it will be translated here. Can you turn on a microphone for her, for Margarita Yurova?


Gladys O. Yiadom: Okay, we have someone in the room who can translate from Russian.


Margarita Yurova: I will help. I will translate from the audience here.


Gladys O. Yiadom: Please go ahead, sir.


Andre Gorobets: Thank you very much. I support my colleague. The issue of digital education is quite important. It is considered one of the main trends of development in all states.


Margarita Yurova: So, I'm supporting colleagues. The issue of how to deal with digital challenges is one of the key ones, not only for the Russian Federation and the Russian government, but for other governments across the world. It's a very important issue.


Andre Gorobets: We believe that our key goal is to focus on developing new skills and new competences for our kids to adapt to the new digital challenges. And we believe that new technologies and new technological instruments need to be helpful and instrumental, and not be a showstopper. First of all, the Russian government and the Russian state pay attention to the fact that security issues need, in the first place, to be combined with and considered alongside pedagogical issues and the psychological development of our kids. We believe that we need to address the issue holistically: cybersecurity challenges have to go along with psychological and educational goals and tactics, all in one, addressed holistically. And the most important point that all our colleagues should pay attention to is the interaction between states in the protection of our children, because cross-border threats and cross-border interactions primarily affect our work now, and cooperation needs to be improved to address kids' safety goals. To address kids' safety, we are working on three levels. The first level is the technological level, to ensure that the devices kids are using are protected from harm, unwanted content, threats, et cetera. The second level is the software level, to ensure that the device is equipped with software for checking the content. The highest level is, of course, the content. This is what we should pay close attention to, because the content shapes important moments in the development of our children, which we should oversee and, first of all, protect them from external harm. Colleagues have already said that ChatGPT and any other artificial intelligence implemented in education poses high risks. Yes, we agree with this.
But we must understand that this new technology can be used, and we are already using it, in our work, including for the development of knowledge and skills, which allows the teacher to build a lesson. We need to admit that content generated by ChatGPT is also instrumental in teaching kids, and it can be used for good in the educational process. And the role of the government is to equip teachers with technologies that can be embedded in the educational process in a safe way. And we all need to work together on the common goal of equipping schools and universities with good modern technologies for modern education. Thank you.


Audience: What I wanted to address, and I'm sorry, I don't know all the names of the panelists sitting around here, but I'd like to ask the lady and the gentleman from Morocco. We found that the modus operandi of perpetrators reaching out to children, whether for radicalization or for grooming, is the same. They use the same pattern for a different crime, meaning that the most vulnerable children, as one of you mentioned, are even more at stake in the online environment. And I must say I'm kind of disappointed, because I was hoping this room would be packed today, because this is one of the most threatening subjects we have online at the moment, and I don't think we're doing enough. We keep talking about it, but at the same time big tech is just looking over our shoulders and not protecting our children enough. So, my question would be, you mentioned it, I believe, but how do you push a regulator or regulatory body to put more regulation in place online to protect our children, if you know that for the perpetrators it's just another easy crime?


Gladys O. Yiadom: Thank you. Melodina, I’ll ask you to answer the question first.


Melodena Stephens: So your question was what we should do for more regulation, if I'm correct? Okay, so there is, I think, a literacy gap, and this is not only at the societal level but also at the regulatory level. We're not understanding this, and even engineers, and I'm also working with big tech, I work with IEEE, even engineers don't understand the consequences of the code they design. Startup founders mean to change the world for the better, but then comes the moment when you're embedded in 50,000 devices and you don't have the safeguards or the protocols, or there is somebody who says, where's my money, because I've invested 50 million; shareholders do the same thing on the stock market. So the issue is really a very large literacy issue. And the scale, and I think you mentioned that too, so I'll just give you this example: it took 68 years for airplanes to reach 50 million customers, but it took Pokemon Go 19 days. There is no regulator in the world that reacts in 19 days. So we are far behind. What we need is a public sector that is thinking 20 years ahead of the private sector, which is not the case right now. So what can we do right now? I think we need to make hard no-go areas. Does a child who's five years old need to be exposed to the internet and have a mobile? Should that child have the right to a childhood right now, just exploring simple things, learning how to read, looking at books? I don't think books are bad. When you talk to psychologists, they say reading books is a slow process, but it helps their brain develop differently than online, gamified content does. So we need a lot more research funding going into these areas. We need stricter penalties on what is a crime online; I don't think that is very transparently clear. And it's very difficult to apprehend criminals and make them accountable if they're in another jurisdiction.
So we do need a lot more coordination for the apprehension of criminals, and a lot more transparency on what is criminal. And I want to come back to cyberbullying for this. A lot of children bully other children in ways that would be considered a crime. They don't know better; they think it's okay to put a face on something else, and it's fine. So I agree with you. Grooming is the same. With AI, it's easy: you're basically mirroring a child. If a child smiles, you smile back at it. The child develops trust. And therefore, if you say, it's bath time, the child will do whatever needs to be done, and the camera's on. So it is very difficult to catch, unless parents have a rule that no kid should be on the computer unsupervised. We assume that if it's at home, it's fine. So it's a literacy issue, I think.


Gladys O. Yiadom: Thank you. Elmehdi, a couple of comments on that, please?


Elmehdi Erroussafi: I would comment from the NGO perspective. Some of the work we do in Morocco is actually with regulators; whether it is the telecom regulator or the data privacy regulator, they are actually partners. So one very important point is to have a common goal, to align everyone on a shared objective. Because this issue has been approached from different perspectives: as technical people, as researchers, we would look into the technicality of it and try to come up with standards and big compliance items to be checked, whereas regulators are more into policymaking, more into protecting the end customer. So we need this shared goal, and we need to understand each other's pain points. Collaboration is key. We don't need multiple initiatives here and there; we need focus and we need collaboration. I would comment on that point alone because I think we gained some months, maybe not years, but months, by working in a collaborative manner since day one.


Gladys O. Yiadom: Thank you, Elmehdi. And perhaps, since we also have Heng with us, and I know that Kaspersky is also working with regulators, could you please share the perspective from the industry?


Heng Lee: Certainly, maybe I'll just share from my region, which is Asia Pacific. When it comes to regulations and how we respond, as Melodena rightly pointed out, there is no regulator in the world who could respond to a challenge of this magnitude in 19 days. So in my home country of Singapore, different outfits have been set up to respond to new challenges as they emerge. For example, an issue like child safety will straddle a few ministries: in public security, the police might come into it, internet regulators might come into it, the social affairs people might want a hand in it. The latest problem to warrant such an outfit is something like protection from falsehoods: there is a dedicated office called the Protection from Falsehood Office in Singapore. Yes, I can hear you. Can you hear me now?


Gladys O. Yiadom: No, I can't.


Heng Lee: I am not hearing you very well. It's very choppy on your end. Let me check if it's working. Can you say something again? Are you saying anything?


Gladys O. Yiadom: Yes. Sorry, there is some issue. Online you can hear, so the issue actually comes from onsite. What we will do, while we're fixing this issue, is give the floor to another question, and then I'll come back to you once it is fixed, because here we can't hear you well.


Heng Lee: Do you want me to continue? Because I hear you.


Gladys O. Yiadom: Now it works. Now it works. Please go ahead, and then afterward we will take the question from the audience.


Heng Lee: Sure. I was talking about how different outfits have been set up for specific purposes in Singapore, like the Protection from Falsehood Office and the Anti-Scam Command. So have we come to a time when we need a child protection authority for online content? How would that overlap with existing outfits in the physical space for child protection? And what is the qualitative difference, other than online disinhibition, that warrants such an outfit? These are all problems that we need to think about from a regulatory perspective. And it's tempting to answer such questions with, let's have a new team of people doing it. But are the people staffing such a new outfit sufficiently equipped with the skillsets to understand that this is an upcoming trend that needs to be addressed, something that will happen whether you regulate it or not? That acumen has to be gained over time, so there is really no easy answer to this, but the shape and form that such an outfit could take is something that we could probably start thinking about. Thank you.


Gladys O. Yiadom: Thank you, Heng. So we will take one question from the on-site audience, and then we will have a question from the online audience. Could you please give the mic to the young man over there? Thank you. We can hear you. I will kindly ask you to share your name, your organization, and whom you are addressing the question to.


Audience: I'm Ethan from Hong Kong, and I'm the youth ambassador of the One Power Foundation. Basically, I'm asking this question to every one of you. In Hong Kong specifically, I guess, many parents try to protect their children from the internet in physical ways, banning their children from using phones, or maybe banning them from accessing the internet at all. Not at five years old, as you mentioned, but children actually 12 years old or below are not able to use phones because of these parental restrictions. And I'm just wondering, because I've only heard perspectives from teachers, parents, and my peers who comment on these ways of protecting children, and I want to hear comments on them from a different perspective. So is banning children from using mobile phones, or maybe banning children from using the internet, an effective or suitable way to keep them from being cyberbullied or unsafe on the internet? Yeah, that's the question.


Gladys O. Yiadom: Thank you. Melodena, please go ahead.


Melodena Stephens: So, I think the technology is here to stay. I don't think banning alone is good enough, because we have to teach people how to use the technology safely. And so it's a dual problem: educating the 8 billion people of the world, 30 percent of whom are still not online. We need to educate everyone online. We need to educate parents, as their children grow and as new technologies come, on how to use them. And we have to educate children. And I want to give you this one example. I was working for a company and we were trying to look at preventing bullying and harassment on an online game. And in everything we did, we as parents, and a lot of us were adults, came together and looked at cyber tools, we looked at content moderators, we looked at algorithms, we looked at AI bots. But the thing is, the children were still accessing it, because somebody's parent's friend allowed them to use it. So I don't think banning actually works, right? Whatever's banned, the children want to find a way to use it. So the kids were still finding ways to do that. And then I decided to interview a little boy who was seven years old. And I asked him, how do you know it's a stranger on Minecraft? And we had all these answers as parents, but he said something very simple: if that is my friend, they go to my school, they will know what I ate for lunch. So the first question I ask them is, what did I have for lunch yesterday? What happened in school in this class? And if they don't know the answer, I don't play with them. And I thought, this was something I would never have come up with myself. But we also need to involve, and I'm very happy you're here, we need to involve the children in this dialogue, because they may have answers, because I'm not the one playing on the ground and I'm not familiar with that world. So thank you for asking that.


Gladys O. Yiadom: Thank you, Melodena. Elmehdi, a couple of comments on that?


Elmehdi Erroussafi: Yeah, I think everything has been said. I would just say that this method of forbidding things to children might seem effective, but what we noticed talking to children and parents is that there is a more effective way, which is to actually build trust with your child as a parent or as an educator. The child needs to trust you enough to be able to come forward if he's being harassed or bullied on the Internet, without fear of reprisal or punishment. So, open communication is key. Technology moves very fast, and children will get access to it; banning access to the device itself might not be effective in our current world. So, open communication is key, and building trust with your child so that everything can be said, you know, in an open manner. That's our advice.


Gladys O. Yiadom: Thank you. And maybe, Heng, do you have brief comments from an industry perspective?


Heng Lee: Yes, yes. Thank you for the question. I'm very happy to answer. Having lived in Hong Kong myself, I'm going to start with a Cantonese saying, which, in short, means there are trends in the world that we can't really defy. And like Melodena, I agree that the tech is here to stay, which is why, going back to what I mentioned just now about the guidelines for Safe Kids, we advise parents on whether or not to tell or discuss with their children before they install Safe Kids on their children's phones. We're saying that for the younger children, maybe there is no discussion needed. At a certain age, there needs to be information about what this is being done for. It doesn't have to be very prolonged, just that there is an app that I'm installing to protect you. But at some point, especially by 14 to 17, our guidance changes to: you must get the children's permission before you install the app, because we also recognize that you can probably install the app, but the children are also wise enough to know how to uninstall it from their phone. They could even be reinstalling it every day before they come home, or they could have a different phone altogether for their day-to-day interactions, and a dummy one just to show the parents. So there is really no hard and fast or blunt way of solving the problem itself, but I think constant communication to maintain trust between parents and children, rather than a blanket ban, will be more effective and sustainable. Yeah, thank you.


Gladys O. Yiadom: Thank you. Thank you, Ang, for your answer. Now, perhaps turning to our online audience to check whether or not there’s a question. Anne, please.


Anne Mickler: Yeah, there is one question, and it comes from Jochen Michels, who is the Head of Public Affairs, Europe, at Kaspersky. This question goes to Melodena and Elmehdi, please. The question is: how can the different competencies of the various stakeholders be brought together to ensure better protection, good and sufficient training, and appropriate solutions for children, parents, teachers, and other stakeholders? And should this start at a local level and then be expanded regionally and globally?


Gladys O. Yiadom: Thank you. And I’ll kindly ask Elmehdi to answer first.


Elmehdi Erroussafi: Yeah, thank you for the question. So I think that it’s a problem for everyone, so again we are emphasizing cooperation. Every stakeholder will bring something to the table. Regulators will state common objectives, as I said, and those big ethical guidelines. NGOs will help to touch the ground; we believe in NGO work, we think it’s very effective. Vendors will provide technical solutions, technical responses. Educators and academia will provide the research, oversight, and early warnings. Those years of lead time are actually needed from research, and we need to be aiming for the best, really looking forward and keeping those years ahead. So everyone can collaborate to actually build a common strategy. Acting locally is very important: this is where we touch the ground, and where we also take into consideration local points such as the one we just heard. So locally it’s very important, but we need to open those channels of communication. One example would be AI regulation. This is a big subject and cannot be local; it is a global issue and needs global regulation, global guidelines. So acting, again, in the spirit of collaboration. Let me just share that we work with the big tech companies like Meta and TikTok to open this channel of communication, and as a partner of trust, they allow us to report content to them so that they remove it, maybe quicker than through the regular channels. Collaboration. And hopefully we get there with the help of everyone.


Melodena Stephens: I agree with everything you said. I just want to talk about competencies. From government, I want political will. That means this ruthlessness that says: I will ensure the child is safe. And what does it mean, safe? So I want political will from governments. I want alignment on values. What does it mean when I say “for good”? So this alignment on values across all of industry. From society, we want a reflection of culture. We have different sorts of cultures around the world, and I would like that to be there. For me, for example, a child is not just below 14 years of age; I would think a child is 18 or 19 or 20. So maybe the way I look at the child is slightly different than maybe somebody in Europe does. I want those nuances of culture to be there. And from researchers: researchers tend to publish research either for something or against something. I think we have now evolved enough that I would like them to publish both — some of what is harmful and some of what could be the good.


Gladys O. Yiadom: Thank you. Thank you, Melodena, for this answer. Perhaps checking with the audience on site: do you have any questions? We do have questions from the audience. Can someone share a mic, please? We’ll take one question. Please give your name, your organization, and who you address the question to.


Audience: Hello. I’m Grace from the Pan-African Youth Ambassadors for Internet Governance. I come from Uganda, where awareness about online risks is very low. So my question is addressed to anyone who can answer. I’d like to run awareness campaigns: how can I collaborate with any of you to build these campaigns and let people know about online child protection?


Gladys O. Yiadom: Thank you. Perhaps this question I will address to Elmehdi, who can share his concrete experience and some of the best practices that Espace Maroc Cyberconfluent has been developing.


Elmehdi Erroussafi: Yeah, sure. So I’ll take this question as a chance to also share some best practices. I talked about common goals, and clearly here we share common goals, but one very effective way to start such campaigns is to work in small, focused work streams; we call them task forces. So let’s get all our partners: we have a team, and we contact every stakeholder partner, whether from the public sector or the private sector. We get on a team, we talk, we produce a project, we work in project mode, and we build those task forces. If, for example, you would like to build this kind of campaign, I would suggest that first you list the stakeholders in the local area who are actually able to support the project: for example, regulators, telecom operators, IT companies, and maybe also the ministry of education. Those kinds of stakeholders, we need to contact them for this kind of partnership. Coming from an NGO also gives you a kind of trusted hat, because you are willing to give of your time. And as young people, I believe this is very important; this is part of your education, so a portion of your time should go toward that work. And then, as I said, working in small teams: if we want to bring a campaign to our schools, we need someone from the ministry, someone from education to build the content, and we need some experts. From my experience — I know we are on the same continent, and I know how generous African people are — whenever we talk about an opportunity for a project, we get more participants than we initially anticipated. So we end up with the very good problem of having more than enough relevant people in the work stream. So, again, we are looking for progress and I trust that people will appear. At CMRP, we will share our interests as well; we will be very happy to open this collaboration and discussion with other organizations.


Anne Mickler: Sorry to chip in for the online audience. The sound quality got really bad just now; the online audience can’t hear. Sorry about that.


Heng Lee: Yes, is there anyone speaking on site? I can’t hear you.


Elmehdi Erroussafi: Sometimes, from time to time, we can help. I can’t hear you. I can’t hear you. …looking at the mix of vaccines and COVID-19. So it’s becoming such an issue, and it’s a policing thing. So the only way to address the issue now is to build some solutions, some experiences from the countries where open solutions have been found. But what we also notice concerns the content creators: sometimes, because we are calling for local content, the local content creators are not really skilled. There is no sound in Zoom.


Gladys O. Yiadom: Now, they’re solving the issue. There is no sound in the Zoom room, apparently?


Heng Lee: No, there wasn’t. Until you promised.


Audience: Yes, I was saying that another part of the issue lies with the content creators, because they are not really skilled; they are not really aware of educational curricula. What they need is just to have a solution, some material at hand that they can share. So what I can suggest from this workshop is to allow us, coming from such countries, to have a proper way of bringing awareness to the public, to those children at schools. Some of them, maybe, the school they are going to is not very well equipped, but they are coming across the devices out there, maybe at a friend’s place. And now the only way to save those children is to bring in some programs, such as what we were talking about, drawing on those kinds of experiences, so that they are aware and willing to say no to those who come to bully them. Yes, this is the kind of collaboration I can ask for, from whoever can bring their expertise.


Gladys O. Yiadom: Thank you very much for sharing that. I believe Elmehdi is a concrete example of the partnership that we have with civil society, so please do not hesitate to talk to him; we also have a booth, so we’d be very happy to have you there and discuss further. So we’ll take a last question, please, before the next session.


Jutta Croll: Thank you for giving me the floor. My name is Jutta Croll; I’m a child rights advocate from the German Digital Opportunities Foundation. I just wanted to mention that what Melodena said about the age of children pretty much resonates with me, because we have these cultural nuances, but we also have the UN Convention on the Rights of the Child, which defines everyone under the age of 18 as a child. And with regard to all the questions that we’ve been discussing, I would like to refer to the principle of the evolving capacities of the child: it’s not a matter of the exact age, whether a child is 12, 13 or 16, it’s about their evolving capacities. I also would like to go back to Grace’s question, because in 2021 the United Nations Committee on the Rights of the Child adopted General Comment No. 25 on children’s rights in relation to the digital environment. This is the document on which you can base all these questions, including whether there will be education. The states that have ratified the UN Convention are obliged to also implement the rights of the child in the digital environment. So when you’re asking for training in digital literacy and for cooperation, you can go to your government and say: hey, you have ratified the UN Convention and here is the basis; you are obliged to do something for the young generation to make these rights come true in the digital environment. So I think it’s kind of groundbreaking that we have a document from the United Nations on which you can base not only child online safety but all children’s rights to provision and participation. Thank you so much for listening.


Gladys O. Yiadom: Thank you very much for your comments. We’re now reaching the end of our session. Before closing, we will run the survey again. Anne, I’ll kindly ask you to share the slide. It’s the same question, so we can see whether the answers have evolved. I’ll kindly ask the on-site participants to scan the QR code and take the survey, and then we’ll review the results afterwards. So just a couple of minutes so that everyone can take the survey, and then Anne, our online moderator, can share the results. It will also be the opportunity to see if the answers have changed after the session. So, we have received the results. Just give me one second to change the screen and show you the results. Can you see the results now? Oh, yes. So, the results have changed quite a bit. Now more of you think that the threat will increase significantly and lead to increased abuse and cybercrime, that the situation will get worse, on both the A and the B aspect. But nonetheless, more of you also believe that awareness and knowledge of cybersecurity issues and protection against threats in the digital world will increase. And we’ve moved from 32% to 40% of you who believe that better knowledge and well-developed defence skills, as well as better-developed digital skills, will ensure that children can operate more securely. And actually, fewer of you are uncertain. So we’re seeing that after the workshop, the results have evolved a little bit. I would now like to kindly thank our speakers for their contribution. Thank you, Ang; thank you, Melodena; thank you, Elmehdi; and also Elizaveta, who was with us. Thanks to the audience for the great contributions that you shared on this topic. And I’d also like to thank Anne, who was our online moderator. And please, let us continue the conversation.
Can we… We will discuss this after… Okay, we will share about that, but let me just conclude the session and then we can take this conversation together. So, I would like to thank you all for participating in the session, and please let us continue the conversation together. Thank you.


M

Melodena Stephens

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Increasing sophistication of online threats like deepfakes

Explanation

Online threats to children are becoming more sophisticated, with technologies like deepfakes posing new risks. These threats can be created with minimal data, making them particularly dangerous.


Evidence

A deepfake can be created with just one picture, and a voice recording of only 15 minutes is needed to replicate someone’s voice.


Major Discussion Point

Threats to children in the digital world


Agreed with

Elmehdi Erroussafi


Agreed on

Increasing sophistication of online threats to children


Lack of alignment on standards for age-appropriate content

Explanation

There is a lack of consistency in standards for determining what content is appropriate for children of different ages. This inconsistency makes it difficult to protect children from inappropriate content across different platforms and regions.


Evidence

The speaker showed an example of the same game being rated for different ages (6, 7, 10, 12 years old) across different platforms.


Major Discussion Point

Threats to children in the digital world


Cyberbullying and emotional harm from online interactions

Explanation

Cyberbullying is a significant threat to children’s well-being, often resulting in deep emotional harm. It can be difficult to recognize as it may not result in physical harm, but its impact can be severe.


Evidence

The World Health Organization states that bullying and cyberbullying are the fourth largest cause of death among 15 to 19-year-olds.


Major Discussion Point

Threats to children in the digital world


Need for political will from governments to prioritize child safety

Explanation

Governments need to demonstrate strong political will to ensure children’s safety online. This involves a commitment to taking decisive action to protect children in the digital space.


Major Discussion Point

Role of different stakeholders in protecting children online


Agreed with

Elmehdi Erroussafi


Elizaveta Belyakova


Agreed on

Need for multi-stakeholder collaboration


Differed with

Heng Lee


Differed on

Approach to regulating children’s online safety


Importance of industry alignment on values and ethics

Explanation

There needs to be alignment on values and ethics across all industries involved in the digital space. This alignment is crucial for ensuring consistent protection of children online.


Major Discussion Point

Role of different stakeholders in protecting children online


Agreed with

Elmehdi Erroussafi


Elizaveta Belyakova


Agreed on

Need for multi-stakeholder collaboration


Role of researchers in studying both benefits and harms of technology

Explanation

Researchers should focus on studying both the potential benefits and harms of technology for children. This balanced approach is necessary for a comprehensive understanding of the impact of technology on children.


Major Discussion Point

Role of different stakeholders in protecting children online


E

Elmehdi Erroussafi

Speech speed

143 words per minute

Speech length

953 words

Speech time

397 seconds

Rapid pace of technological change outpacing regulatory responses

Explanation

The speed at which technology evolves makes it difficult for regulators to keep up. This gap between technological advancement and regulatory response creates challenges in protecting children online.


Evidence

The speaker mentioned that it took Pokemon Go only 19 days to reach 50 million users, while regulators cannot react that quickly.


Major Discussion Point

Threats to children in the digital world


Agreed with

Melodena Stephens


Agreed on

Increasing sophistication of online threats to children


Need for gamified, interactive curricula to teach children about online safety

Explanation

Effective curricula for teaching children about online safety should be gamified and interactive. This approach helps engage children and makes the learning process more effective.


Evidence

The speaker mentioned their organization’s creation of games on internet safety to engage children.


Major Discussion Point

Approaches to protecting children online


Agreed with

Heng Lee


Agreed on

Importance of education and awareness


Importance of practical guidance for parents on protecting children online

Explanation

Parents need ongoing, practical guidance on how to protect their children online. This guidance should help parents understand and navigate the rapidly changing digital landscape.


Evidence

The speaker mentioned creating practical guides for parents as part of their organization’s efforts.


Major Discussion Point

Approaches to protecting children online


H

Heng Lee

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Development of parental control software like Kaspersky Safe Kids

Explanation

Kaspersky has developed parental control software called Safe Kids to help protect children online. This software allows parents to monitor and control their children’s online activities.


Evidence

The speaker described features of Safe Kids, including web filtering, app blocking, and time limits for device usage.


Major Discussion Point

Approaches to protecting children online


Creation of educational resources like cybersecurity alphabet books for children

Explanation

Kaspersky has created educational resources to teach children about cybersecurity. These resources aim to make complex cybersecurity concepts accessible to children.


Evidence

The speaker mentioned a ‘Cyber Security Alphabet’ book that teaches cybersecurity concepts from A to Z.


Major Discussion Point

Approaches to protecting children online


Agreed with

Elmehdi Erroussafi


Agreed on

Importance of education and awareness


Difficulty of enforcing age restrictions for social media use

Explanation

Enforcing age restrictions for social media use is challenging. Even with legal restrictions, it’s difficult to prevent underage children from accessing these platforms.


Evidence

The speaker referenced Australia’s proposed ban on social media for children under 16, questioning how it would be enforced.


Major Discussion Point

Challenges in regulating children’s online safety


Need for dedicated regulatory bodies focused on child online protection

Explanation

There may be a need for dedicated regulatory bodies specifically focused on child online protection. These bodies could address the unique challenges of protecting children in the digital space.


Evidence

The speaker mentioned examples of specialized offices in Singapore, such as the Protection from Falsehood Office and anti-scam commands.


Major Discussion Point

Challenges in regulating children’s online safety


Balancing consumer protection and innovation in regulations

Explanation

Regulations need to strike a balance between protecting consumers (including children) and allowing for innovation in the tech industry. This balance is crucial for effective and sustainable online safety measures.


Major Discussion Point

Challenges in regulating children’s online safety


Differed with

Melodena Stephens


Differed on

Approach to regulating children’s online safety


E

Elizaveta Belyakova

Speech speed

106 words per minute

Speech length

352 words

Speech time

198 seconds

Collaboration between government, industry and civil society stakeholders

Explanation

Effective protection of children online requires collaboration between various stakeholders including government, industry, and civil society. This multi-stakeholder approach allows for comprehensive solutions to complex online safety issues.


Evidence

The speaker mentioned the Alliance for the Protection of Children in the Digital Environment, which brings together Russian technology companies to address digital challenges for children.


Major Discussion Point

Approaches to protecting children online


Agreed with

Elmehdi Erroussafi


Melodena Stephens


Agreed on

Need for multi-stakeholder collaboration


A

Andre Gorobets

Speech speed

93 words per minute

Speech length

471 words

Speech time

301 seconds

Transborder nature of online threats to children

Explanation

Online threats to children are not confined by national borders. This transnational nature of threats makes it challenging to address them effectively through national regulations alone.


Major Discussion Point

Threats to children in the digital world


U

Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Potential for youth-led awareness campaigns on online risks

Explanation

There is potential for youth-led campaigns to raise awareness about online risks. These campaigns could be particularly effective in regions where awareness about online risks is low.


Evidence

A participant from Uganda asked about how to collaborate on awareness campaigns about online child protection.


Major Discussion Point

Role of different stakeholders in protecting children online


Importance of equipping content creators with child safety knowledge

Explanation

Content creators need to be equipped with knowledge about child safety online. This is particularly important for local content creators who may lack awareness about educational curricula and online safety standards.


Major Discussion Point

Role of different stakeholders in protecting children online


J

Jutta Croll

Speech speed

154 words per minute

Speech length

280 words

Speech time

108 seconds

Using UN Convention on Rights of the Child as basis for government action

Explanation

The UN Convention on the Rights of the Child, particularly General Comment No. 25, provides a basis for government action on children’s rights in the digital environment. This international framework obliges states to implement children’s rights in the digital space.


Evidence

The speaker referenced the UN Convention on the Rights of the Child and General Comment No. 25 adopted in 2021.


Major Discussion Point

Challenges in regulating children’s online safety


Agreements

Agreement Points

Increasing sophistication of online threats to children

speakers

Melodena Stephens


Elmehdi Erroussafi


arguments

Increasing sophistication of online threats like deepfakes


Rapid pace of technological change outpacing regulatory responses


summary

Both speakers highlighted the growing complexity of online threats to children, emphasizing how quickly technology evolves and creates new risks faster than regulators can respond.


Need for multi-stakeholder collaboration

speakers

Elmehdi Erroussafi


Elizaveta Belyakova


Melodena Stephens


arguments

Collaboration between government, industry and civil society stakeholders


Need for political will from governments to prioritize child safety


Importance of industry alignment on values and ethics


summary

The speakers agreed on the importance of collaboration between various stakeholders, including government, industry, and civil society, to effectively address online child protection issues.


Importance of education and awareness

speakers

Elmehdi Erroussafi


Heng Lee


arguments

Need for gamified, interactive curricula to teach children about online safety


Creation of educational resources like cybersecurity alphabet books for children


summary

Both speakers emphasized the need for engaging, age-appropriate educational resources to teach children about online safety and cybersecurity.


Similar Viewpoints

These speakers shared concerns about the challenges in regulating and enforcing age-appropriate content and access for children online, suggesting the need for more focused regulatory efforts.

speakers

Melodena Stephens


Elmehdi Erroussafi


Heng Lee


arguments

Lack of alignment on standards for age-appropriate content


Difficulty of enforcing age restrictions for social media use


Need for dedicated regulatory bodies focused on child online protection


Unexpected Consensus

Involvement of children in developing solutions

speakers

Melodena Stephens


Unknown speaker


arguments

Potential for youth-led awareness campaigns on online risks


explanation

There was an unexpected consensus on the importance of involving young people in developing solutions for online child protection. This approach recognizes children as active participants rather than just passive recipients of protection measures.


Overall Assessment

Summary

The main areas of agreement included the increasing sophistication of online threats to children, the need for multi-stakeholder collaboration, the importance of education and awareness, and the challenges in regulating age-appropriate content online.


Consensus level

There was a moderate to high level of consensus among the speakers on the key issues. This consensus suggests a shared understanding of the challenges in protecting children online and the need for comprehensive, collaborative approaches. However, there were some differences in emphasis and proposed solutions, indicating that while there is agreement on the problems, there may still be diverse views on the most effective ways to address them.


Differences

Different Viewpoints

Approach to regulating children’s online safety

speakers

Melodena Stephens


Heng Lee


arguments

Need for political will from governments to prioritize child safety


Balancing consumer protection and innovation in regulations


summary

While Melodena Stephens emphasizes the need for strong political will from governments to ensure children’s safety online, Heng Lee suggests a more balanced approach that considers both consumer protection and innovation in the tech industry.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the approach to regulating children’s online safety and the balance between protection and innovation.


difference_level

The level of disagreement among the speakers is relatively low. Most speakers agree on the fundamental issues but offer slightly different perspectives on how to address them. This suggests a general consensus on the importance of protecting children online, with variations in proposed strategies and emphases.


Partial Agreements

All speakers agree on the increasing sophistication of online threats and the challenges in regulating them. However, they propose different solutions: Melodena emphasizes political will, Elmirti highlights the need for rapid regulatory responses, and Heng suggests creating dedicated regulatory bodies.

speakers

Melodena Stephens


Elmehdi Erroussafi


Heng Lee


arguments

Increasing sophistication of online threats like deepfakes


Rapid pace of technological change outpacing regulatory responses


Need for dedicated regulatory bodies focused on child online protection




Takeaways

Key Takeaways

The threats to children in the digital world are increasing in sophistication and scale, outpacing regulatory responses.


Protecting children online requires a multi-stakeholder approach involving government, industry, civil society, parents, and children themselves.


There is a need for age-appropriate, interactive education on online safety for children, as well as practical guidance for parents and teachers.


Regulatory approaches need to balance consumer protection with innovation, and consider transborder nature of online threats.


The UN Convention on the Rights of the Child provides a basis for government action on child online safety.


Resolutions and Action Items

Continue dialogue and collaboration between different stakeholders on child online protection


Develop more interactive, gamified curricula to teach children about online safety


Create practical guidance materials for parents on protecting children online


Consider establishing dedicated regulatory bodies focused on child online protection


Use UN Convention on Rights of the Child as basis for government policies on child online safety


Unresolved Issues

How to effectively enforce age restrictions for social media and other online platforms


How to address the rapid pace of technological change in regulatory approaches


How to balance children’s rights to privacy and protection in parental control measures


How to align different cultural perspectives on appropriate content/age limits


How to equip local content creators in developing countries with child safety knowledge


Suggested Compromises

Balancing technology bans with education to help children use technology safely


Adapting parental control measures based on child’s age, with more discussion/permission as child gets older


Focusing on building trust and open communication between parents and children rather than strict bans


Combining local action with global coordination on issues like AI regulation


Thought Provoking Comments

If we have a bot which is collecting conversation very casually with a child asking, what is your blood type? The child takes it as a very innocuous question, answers it. And this is something that stays on the internet forever because it is pretty unlikely that the child’s blood type is going to change.

speaker

Heng Lee


reason

This comment vividly illustrates the long-term risks of seemingly harmless online interactions for children, highlighting how easily sensitive personal information can be collected and potentially misused.


impact

It deepened the discussion on the sophistication of online threats and the need for more comprehensive education on digital safety for children and parents.


We assume if it’s at home, it’s fine. So it’s a literacy issue, I think.

speaker

Melodena Stephens


reason

This succinctly captures a key misconception about online safety and frames it as an educational challenge rather than just a technological one.


impact

It shifted the conversation towards the importance of digital literacy for both children and parents, emphasizing education as a crucial component of online safety.


The child needs to trust you enough, to be able to come forward if he’s being harassed or if he’s being bullied on the Internet without fear of reprisal or without fear of punishment.

speaker

Elmehdi Erroussafi


reason

This comment highlights the importance of trust and open communication in protecting children online, moving beyond just technological solutions.


impact

It broadened the discussion to include the role of parent-child relationships in online safety, emphasizing a more holistic approach to protection.


In 2021, the United Nations Committee on the Rights of the Child adopted General Comment No. 25 in regard to… This is the document that you can base all these questions, whether there will be education. The states that have ratified the UN Convention are obliged also to implement the rights of the child in the digital environment.

speaker

Jutta Croll


reason

This comment introduced a crucial legal and policy framework for child protection online, grounding the discussion in international law and state obligations.


impact

It provided a concrete basis for advocacy and action, shifting the conversation towards practical steps that can be taken to protect children online based on established international agreements.


Overall Assessment

These key comments shaped the discussion by broadening its scope from purely technological solutions to a more comprehensive approach encompassing education, trust-building, and legal frameworks. They highlighted the complexity of online child protection, emphasizing the need for collaboration between various stakeholders including tech companies, parents, educators, and policymakers. The discussion evolved from identifying threats to exploring multifaceted strategies for creating a safer online environment for children.


Follow-up Questions

How can we address the gap in literacy about cyber threats and their consequences among regulators, engineers, and startup founders?

speaker

Melodena Stephens


explanation

This gap in understanding leads to inadequate regulations and potentially harmful product designs, making it crucial for protecting children online.


What should be the shape and form of a dedicated child protection authority for online content?

speaker

Heng Lee


explanation

As existing regulatory bodies may not be equipped to handle the unique challenges of online child protection, a specialized authority could be necessary.


How can we involve children more in the dialogue about online safety?

speaker

Melodena Stephens


explanation

Children may have unique insights and solutions that adults might not consider, making their involvement crucial in developing effective protection strategies.


How can we create effective, gamified curricula to teach children about online safety?

speaker

Elmehdi Erroussafi


explanation

Gamified approaches may be more engaging and effective in teaching children about cybersecurity, making this an important area for development.


How can we develop practical guides and ongoing support for parents to help them protect their children online?

speaker

Elmehdi Erroussafi


explanation

Parents often feel lost or overwhelmed by rapidly changing technology, so practical guidance is essential for effective child protection.


How can we better align standards for age-appropriate content across different countries and platforms?

speaker

Melodena Stephens


explanation

The lack of alignment in standards for what is appropriate for children of different ages creates confusion and potential risks.


How can we improve international cooperation for apprehending online criminals who target children?

speaker

Melodena Stephens


explanation

The cross-border nature of online crimes makes international cooperation crucial for effectively combating threats to children.


How can we develop global guidelines for AI regulation, particularly in relation to child safety?

speaker

Elmehdi Erroussafi


explanation

As AI presents both opportunities and risks for children online, global guidelines are necessary to ensure consistent protection across borders.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

High-Level Session 5: Protecting Children’s Rights in the Digital World

Session at a Glance

Summary

This panel discussion focused on protecting children’s rights in the digital world. Experts from government, technology companies, and civil society organizations discussed the challenges and potential solutions for safeguarding children online.

Panelists highlighted how the digital landscape for children has dramatically changed, with increased internet access bringing both opportunities and risks. Key threats identified included privacy violations, cyberbullying, exposure to inappropriate content, and manipulation by bad actors. The speakers emphasized that while technology offers many benefits for children’s development and education, it also poses unprecedented risks that require new protection approaches.

There was consensus that a multi-stakeholder, collaborative approach is needed, involving governments, tech companies, educators, parents, and children themselves. Suggestions included improving digital literacy education, developing better parental controls and age verification systems, and creating unified international frameworks for child online protection. Several panelists stressed the importance of balancing protection with allowing children to benefit from digital opportunities.

The role of technology companies in proactively addressing risks was discussed, with examples given of AI tools to detect harmful content. However, some argued tech companies need to do more. The challenges of keeping pace with rapidly evolving technology were noted, as well as the need for more proactive and predictive approaches.

Overall, the panel agreed that protecting children online is a complex, global challenge requiring coordinated efforts across sectors. While optimistic about finding solutions, they emphasized the ongoing nature of this work as technology continues to advance.

Keypoints

Major discussion points:

– The current digital landscape for children presents both opportunities and risks

– There is a need for a multi-stakeholder approach involving governments, tech companies, parents, and educators to protect children online

– Digital literacy and education are crucial for both children and adults

– Technology solutions like AI can help, but human oversight and compassion are still essential

– International cooperation and unified frameworks are needed to address global challenges

The overall purpose of the discussion was to examine the challenges of protecting children’s rights in the digital world and explore potential solutions involving various stakeholders.

The tone of the discussion was generally serious and concerned, given the gravity of the topic. However, there were also notes of optimism, especially toward the end, with several panelists expressing confidence that solutions can be found and that children will adapt and thrive in the digital age. The tone became more collaborative and action-oriented in the closing statements, with panelists emphasizing the need for cooperation and concrete steps forward.

Speakers

– Marleni Cuellar – Moderator

– Sofiene Hemissi – Minister of Communication Technologies in Tunisia

– Sara Alfaisal – Member of the Human Rights Commission, Kingdom of Saudi Arabia

– Muhammad Khurram Khan – Professor at King Saud University, Kingdom of Saudi Arabia

– Deepali Liberhan – Global Director of Safety Policy at Meta

– Eugene Kaspersky – CEO of Kaspersky

– Syed Munir Khasru – Chairman of the Institute for Policy Advocacy and Governance

– Andrei Zarenin – Deputy Minister of Digital Development, Communications, and Mass Media, Russia

Full session report

Protecting Children’s Rights in the Digital World: A Comprehensive Panel Discussion

This panel discussion brought together experts from government, technology companies, and civil society organizations to address the critical issue of protecting children’s rights in the digital world. The conversation highlighted the complex challenges and potential solutions for safeguarding children online in an era of rapidly evolving technology.

Current Digital Landscape and Threats

The panelists agreed that the digital landscape for children has dramatically changed, presenting both opportunities and risks. While the digital world offers learning opportunities, it also poses risks to emotional and mental health. Key threats include privacy violations, cyberbullying, virtual masking, deepfakes, and data harvesting. Children often interact with people without clear identities online, creating significant risks.

Challenges in Protecting Children’s Rights Online

The discussion revealed several key challenges, including the lack of consistent international legal frameworks and policies for child protection online, the limitations of purely technological solutions, and the need to balance protection with allowing children to benefit from digital opportunities.

Technological Solutions and Education

The panel explored various technological solutions, including parental control systems, child-appropriate verification, age assurance protocols, and platform-specific safety features. Meta, for example, has implemented over 50 features and tools to address risks, including specialized teen accounts and the adoption of the “best interests of the child framework” in product development.

Education emerged as a crucial component, with emphasis on cyber hygiene programs and digital literacy initiatives for both children and adults. As Eugene Kaspersky provocatively stated, “I think education for adults is more important than education for kids. Because the danger from non-educated adults is much more.”

Multi-stakeholder Collaboration

There was strong consensus on the need for a collaborative approach involving governments, civil society, and the private sector. Suggestions included developing specifications, protocols, and standards for child online protection, especially for smaller companies and startups. Cross-platform initiatives like Project Lantern were highlighted as examples of addressing harms at scale.

Role of Parents and Balancing Protection with Opportunities

The discussion emphasized the crucial role of parents in children’s digital lives. Syed Munir Khasru stressed the importance of parental involvement, stating, “We have to train, educate, inspire the children. And in my opinion, they are very smart.” Andrei Zarenin highlighted the importance of human communication and trust between children and parents.

Eugene Kaspersky suggested implementing flexible limits on children’s screen time based on factors like school performance, balancing technological protections with education to empower children to navigate risks.

Future Challenges and Ethical Considerations

The panel identified several unresolved issues, including effective age verification, balancing privacy with protection, addressing the digital divide, and keeping pace with emerging technologies like brain-computer interfaces and implanted chips.

Conclusion and Call to Action

The discussion concluded with a sense of cautious optimism. Panelists emphasized the shared responsibility of protecting children’s rights in the digital world and the power to make an impact through commitment. As Marleni Cuellar noted, it’s about “the importance of all children and wanting the best for them in the digital world.”

Eugene Kaspersky expressed confidence in fixing problems and saving kids, while Sara Alfaisal emphasized the power to make an impact through commitment. The panel underscored the need for a holistic approach combining technological solutions, education, policy frameworks, and international cooperation.

As the digital landscape continues to evolve, ongoing dialogue, research, and action will be crucial to ensure children can safely benefit from the digital world while protecting their rights and well-being.

Session Transcript

Marleni Cuellar: We promise to have a very riveting discussion about a very important topic. I’m Marleni Cuellar and I’m going to be your moderator for today. And of course, today’s session is all about protecting children’s rights in the digital world. Now, I know for many of us, whether we want to admit it or not, our childhood was distinctly different. Communication meant landlines. You couldn’t hide any communication you were having. Dial-up internet would announce to everybody that you were logging on online. And so, protection mechanisms and safety were very different when we were growing up. But today’s landscape is dramatically different, and the protection of children’s rights becomes even more complicated when we’re talking about children living and growing up in a digital sphere. For our conversation today, I’ll be joined by a very esteemed panel to share some thoughts about what the current situation is, what the plans are in terms of developing safety mechanisms, and of course, their own interpretation of what needs to be done and by whom. Some of the things that we hope to cover are understanding the current digital landscape for children and their rights in the digital world, identifying the main challenges and risks in protecting children’s rights in the digital world, exploring technological solutions that can safeguard children’s digital rights and the role of education, examining regulatory frameworks and policies, and promoting a comprehensive and collaborative approach among all stakeholders. So, let me introduce you to our esteemed panel for today. His Excellency, Mr. Sofiene Hemissi, Minister of Communication Technologies in Tunisia. Dr. Sara Alfaisal, member of the Human Rights Commission, Kingdom of Saudi Arabia. Mr. Eugene Kaspersky, CEO of Kaspersky. Mr. Syed Munir Khasru, Chairman of the Institute for Policy Advocacy and Governance. Ms. Deepali Liberhan, Global Director of Safety Policy at Meta.
Professor Muhammad Khurram Khan, King Saud University, Kingdom of Saudi Arabia. His Excellency, Mr. Andrei Zarenin, Deputy Minister of Digital Development, Communications, and Mass Media, Russia. Yes, yes, welcome. We’re all here. Let’s get seated. So as everyone gets settled, I want to say welcome once again to all of you who are joining us live in this session, and also to those of you who are tuned in online. Child protection is of course a very important topic to the many, many of you who are parents, and of course to those who are simply concerned about the future of our children. So everyone is settled in? Let’s get started. All right, so one of the first things to establish in this conversation is knowing, from both your experience and from the work that you do, what the current digital landscape for children looks like, and what are some of the most urgent threats that they face. Let’s start off with Mr. Kaspersky.

Eugene Kaspersky: Oh, okay. Good morning, everyone. Speaking about the threat landscape, what’s wrong with kids in cyberspace, and the difference between when we were kids and kids right now: they spend too much time with mobile phones, too much time on the internet. And the main difference is that they play too many games, and these games are different. But from my point of view, and I have five kids, when I see them playing games, I recognize that they are training themselves in different skills. When we were kids, we played different games. Now they play online games. It’s different, but I don’t think it’s really bad, because they are just different skills. Our parents played different games than we did, and our grandparents played different games still. So right now, I don’t see the games as a major problem; just limit the time they spend there. The second thing is they consume too much information. Is it good or bad? Again, they train different skills; they train themselves to consume different kinds of information. The possible problem is that it doesn’t go deep into the mind; it stays on the surface and disappears. Thinking about myself when I was a kid, and my parents, it was also different. When my parents were kids, they consumed a different kind of information. When we were kids, it was all different. We survived. So it’s a good question: is it good or bad, positive or negative? What is definitely negative is that the content they consume is random, and it could be dangerous, and the people they contact online are random people who could be dangerous, especially for very small kids. This is the major problem. The kids are in a new cyber age, and when they’re in this cyber environment, it’s different from our experience when we were kids. But this is the new world. It’s a new reality.
So it’s positive and negative at the same time. I recognize that the main threat is manipulation: manipulating kids with information, with online contacts, and with some modern toys. Some physical toys are connected to the internet, and, well, I’m from cybersecurity, so we know that some of these physical toys are vulnerable. It’s possible to hack them and manipulate kids online, not from the mobile phone, not from the computer, but from physical toys, smart toys. So of course, in this cyber age it’s different, positive and negative at the same time. We need to understand this very carefully: don’t stop our kids from using the positive, from training the positive skills, but at the same time we need to control the content they consume, the people they contact, and the safety of the toys they play with. Thank you.

Marleni Cuellar: Thank you. And for you, Your Excellency, Hemissi.

Sofiene Hemissi: Good morning. I would like to thank you for this kind invitation. As a matter of fact, it’s a very sensitive topic today. The world’s cybersecurity landscape witnesses unprecedented opportunities and risks. Right now, the virtual world constitutes a key part of children’s learning and education and part of their lifestyle, and we have a lot of challenges and a lot of opportunities. Today, the digital world is an opportunity for children to express themselves and to prove their existence. It is a huge world, and it’s an opportunity to overcome the educational divide. But at the same time, it poses a lot of risks to their emotional, psychological, and mental health. One of the key challenges is manipulation, abuse, bullying, sexual abuse, and attracting children into malpractices, and this is very worrying and disturbing for us. We have, as well, some risks related to the protection of children’s privacy. Today, gamification and electronic platforms gather a lot of information, and we don’t know the real purpose of such information or where it ends up. One of the key risks for children is the digital divide between different segments of society. We have some children who are well connected to e-learning platforms, while the majority of children are underprivileged, and it is a huge challenge to ensure equal opportunities for children across the country. In addition to such risks and threats, the erosion of people’s privacy and identity is very worrying, including cultural identity, especially in Islamic and Arab societies, which are built on specific pillars; such a loss of privacy can put the cultural identity of our people at risk, especially in cybersecurity, and especially for children.
And I think the gamification platforms weren’t designed to take cultural identity into consideration, and I think we should work together across different countries. No one country can work alone; this requires cross-border coordination and concerted efforts between civil society, the private sector, governments, families, and the educational sector to set fitting frameworks and put the rights of children as the top priority. Thank you.

Marleni Cuellar: Thank you. And there we see some similarities, acknowledging that there is much good for children, but of course, major concerns as well. Let’s move to His Excellency Zarenin. In your opinion, what are the most urgent threats that children face online?

Andrei Zarenin: I will be speaking in Russian. The interpretation was great yesterday, thanks to our interpreters. This forum is very important for us; we were preparing to participate in the next IGF forum, but the decision was taken not in our favor. As we said, the digital world exerts more and more influence on our children and opens new horizons for education, creativity, and communication, but alongside these many opportunities it poses lots of risks. Today we can see deep changes in the digital landscape that influence the psychological and social behavior of people and teenagers. One of the most important challenges is so-called virtual masking. Here we see lots of fraudsters using deepfakes, creating a false sense of trust and manipulating children’s behavior. This not only poses risks but also causes psychological trauma; these virtual manipulations exploit kids’ trust and create false identities in the digital world. Apart from that, teenagers come across different risks: cyberbullying, harmful content, fraud, engagement with terrorist communities, and also trafficking and sexual abuse. All these threats can be devastating for the emotional and physical health of our children. For example, according to the data of the All-Russian Action Cyber Writings Crisis, lots of children pass through it, and many teenagers cannot detect fraud in online gaming. It’s very important to form the right skills of digital literacy so that children can detect fraud online, and one of the problems is the gap in these skills.
Sometimes parents do not have the skills to have a dialogue with their children about the threats the digital world poses, and our research shows that many parents are not confident in their ability to monitor the online activity of their children. This gap is growing these days; it erodes children’s trust and makes them more vulnerable. Considering all that, it’s very important to create interstate cooperation, in which not only governmental bodies and schools participate, but families also support their children in mastering the digital world. And I would say the problem becomes more and more critical when we’re dealing with artificial intelligence. We discussed it yesterday: social media is used more and more widely, other tools are developing, and these new technologies pose a threat to children’s psychology. I believe we need to create a safe environment that would help children adapt to the digital world and minimize those risks. This is why the collaboration of all stakeholders is so important: governmental bodies, families, educational bodies, and the private sector must develop a strategy to protect our children and teenagers in the cyber world. Thank you for your attention.

Marleni Cuellar: And a very critical point that you brought up there about parents not being fully able to teach their children the protection mechanisms that are necessary. Let’s hear your thoughts on this, Mr. Khasru.

Syed Munir Khasru: First of all, I would like to thank the government of Saudi Arabia, the Digital Government Authority, and the UN for having kindly invited me. I would like to start by saying that I am very pleased to be here, and I would like to push the boundary a little bit, although I can’t see the audience very clearly. How many of you, after you get up from sleep, in the first 10 minutes, don’t go near your mobile phone? Can you please raise your hand? One, two, hardly three, four. Zero. Second question: those of you who have children, how many of you feel very anxious that they are in some world where you think they’re exposed to a lot of risks and you’re not able to keep a proper watch? How many? Can you raise your hand? Okay. The reason I was pushing the boundaries: basic human instinct is something we have to manage; we cannot fight against it. So I would tend to push this hypothesis: we have to train, educate, and inspire the children. And in my opinion, they are very smart. Even this year, 35 to 40 million people aged 12 and below will be using the internet, which is 12 million more than people in the age group of 12 to 17. So what does it tell us? It tells us the primal human instinct that a child has is very strong, even for parents to manage. So I would like to propose that we first understand what we’re dealing with here. I have been one of those who have said we need to manage, we need to find better ways to educate them, because what is also true, just as people who are exposed to cyberbullying are twice as likely to commit suicide, we have seen in many cases, like in America a few years back, a teenager committed suicide during an online game, even with the game headset still on. So it’s a tricky situation. And 60% of children interact with human beings without clear identity.
Were there risks there? Yes. How do we handle that? We have to look at the benefit it brings, and that’s what this session is about. For a lot of marginalized communities, children on the fringes, children with disabilities, to many of them the Internet is a lifeline. Many people do monetization with very little investment. There was a dish she was supposed to prepare. In the evening when I saw the dish, I asked her, how did you do it? You said you had never done it. Her six-year-old daughter picked up a phone, went to YouTube, and taught her mother how to do it. It’s a multistakeholder engagement, as His Excellency, the Tunisian minister, has mentioned, which we’ll come to in a later part. Children, our first thought is, oh, children, five-, six-, ten-year-olds, no. They are the future netizens. They have more power in their hands, and this simple device can change everything. So for my opening remarks, I can give you many, many examples, but one of the things we have seen is that the European Union tried to put parental guidance in place in 2011, and it failed. That means kids are very smart; they can outsmart their parents. So we have to find a better way to inspire and encourage them to be better net citizens, and digital literacy would be one of the key things, in the absence of which it would be very, very difficult. So those are my opening remarks. I’m looking forward to more engagement with my fellow panelists. Thank you.

Marleni Cuellar: Yes, of course. And digital literacy seems to be a resounding theme when we speak about protecting children online. Now, we understand that children interact in the digital space in many different ways, but let’s go ahead and say one of the primary ways they interact is through social media. And so, Deepali, as the Global Director of Safety Policy at META, we recently saw the rollout of the teen account, and we’ve seen a continuous development of some safety mechanisms for children online. What are the main challenges and risks in protecting children’s rights online?

Deepali Liberhan: Thanks for the question, and it’s a pleasure to be here. At the outset, I want to say that we want young people to use our platforms to connect with their families and friends and to be able to explore their interests without having to worry about being unsafe or being subject to any kind of inappropriate behavior. And that’s why at Meta, we worked with experts and consulted parents and teens to make sure that we’re building safe and age-appropriate experiences for them on our platforms. Some of the risks and challenges that we face, and the panelists have also talked about this, are essentially threefold, which is echoed by parents as well. The first is content: parents are worried that their teens are exposed to inappropriate content online. The second is what you call contact risks: parents are concerned that their teens may be exposed to unwanted interactions that could put them in harm’s way. And the third, as my esteemed panelists also mentioned, is that parents are really concerned about the amount of time their teens are spending online. So based on these…
We’ve also built over 50 tools and features including parental supervision because we’ve heard loud and clear from parents that they want to be involved in their teens lives and we’ve also heard from parents that some of them don’t have the skills to be able to have these conversations with young people and therefore we’ve worked with safety partners to make these education resources available and as you mentioned comprehensively we’ve brought all of this together where we launched teen accounts in September this year and we’re globally rolling it out. Teen accounts is essentially all teens on Instagram are placed in an inbuilt protective experience which address a lot of the concerns that I just talked about, you know, what is the content that you’re seeing, who is able to contact you and being able to spend meaningful time online. I’ll quickly go through some of the default protections that there are there in place. So teens are defaulted to a private account. We also put on the strictest messaging settings so that they are not exposed to unwanted interactions from adults. We’ve also put on the strictest controls content settings so there’s limited exposure to sensitive content which goes above what we already have in place which are our community standards which deals with violating content. We also heard from parents that, you know, they were worried about the time that the teens were spending online. So when you’re on Instagram, teens will actually get a reminder if they’ve spent more than an hour online and asking them to leave the app. And we’ve also enabled an automatic sleep mode. So between 10 PM and 7 AM in the morning, all your notifications get muted. So these are some of the things that we have done when we’ve launched teen accounts to address all of the risks that we’ve talked about. We’ve already rolled out teen accounts in the US, UK, Canada, and Australia. And we’re globally rolling it out in the rest of the world. 
I think one of the things that’s most important to note is with the rollout of teen accounts, we’ve actually engaged and heard from parents. We’ve heard from teens themselves. And we’ve worked with experts in this field to make sure that we’re building the appropriate safe experience for young people on our platforms.

Marleni Cuellar: All right, Professor Khan, let’s get your thoughts on this.

Muhammad Khurram Khan: Thanks, Marleni. This is a great question. First of all, I would like to appreciate the IGF for hosting this wonderful event. The topic of child online protection is very close to my heart, because I have been working in this area for the last 10 years. Before we dive into the challenges, to understand how they are evolving, we need to understand how technology is transforming for children and will keep transforming in the future, especially when children interact and engage with it. The first important thing is that because children are using the internet more than ever before, new challenges are emerging online every day. For example, in the digital realm, children are accessing websites and going into the metaverse, the virtual environment. All these things are blurring the lines between the physical and the digital realm. And the most important thing for me, in foreseeing the future of these challenges, is how technology will permeate into people. Now we interact with technology, but in the future we will have technology permeating inside us, with brain-computer interfaces and chips implanted in our brains, and I’m afraid of what the situation would look like then. Children are already getting a lot of misinformation and disinformation from the platforms, but what happens when the platforms can embed such information in their brains and want to control them? That is a very challenging prospect, and we have to be very vigilant about it. So there are two kinds of risk I can categorize. The first is active risk, in which a child is a direct victim of some kind of abuse. The second is passive risk.
Passive risk is, for example, when children put their photographs online and do not know what will happen to them. These photographs can be picked up from social media by data-harvesting groups, who can use them to create deepfakes of the children, and the children never know what happened; nobody knows. This is becoming a very pervasive challenge, because generative AI can do anything: anybody can create deepfake videos and images of any person. So we have to look at it from both perspectives, at how we can address active and passive attacks, and at how we can build policies. As our distinguished panelists have already said, we need a multilateral and multi-pronged approach to address this challenge, and we need governance for the platforms so that nobody goes beyond their limits. That is very important. Thank you.

Marleni Cuellar: All right. And Dr. Alfaisal?

Sara Alfaisal: Thank you. I’m very pleased and honored to be with you today. Regarding the challenges, I think one of the main challenges in the cyber world is privacy violation, together with all the attacks on children that we see, cyberbullying and so on. From a human rights perspective, we need to focus on the right to privacy, because it is a fundamental right, included in the Universal Declaration of Human Rights and in the Convention on the Rights of the Child. When children lose this fundamental right to privacy, they face all of these difficulties. So I think if we focus on this right and try to promote solutions, since our duty is to protect and promote human rights, that is where we need to focus first.

Marleni Cuellar: All right. And His Excellency, Hemissi? Challenges and risks in protecting children online. Tell us about your perspective.

Sofiene Hemissi: Child protection on the internet, as my dear colleagues have mentioned, comes with formidable challenges, among them the lack of legislation and policies at the level of every government, every country, and at the international level. It is true that many countries have enacted laws and policies on child protection in cyberspace; however, these legal frameworks and policies remain inconsistent, and there is no unified framework that we can all rely on. Other challenges, as was mentioned, include abuse and exploitation on the internet, as well as the issue of data privacy and the mechanisms for monitoring that data within each country and at the international level, especially with the growth of AI-related solutions. The analysis of this data represents an opportunity and room for further development, but it comes with many risks for children. There is also the content that is inappropriate for younger age brackets and for teenagers, exposing many of them to serious harms that, as was mentioned, can even lead to suicide. I would also like to address the digital divide between the various brackets of society: on one hand, youth and children who have all the opportunities and the means; on the other, millions of people and millions of children who cannot access the networks, especially in some African and Asian countries where the access rate remains very low. I think it is our shared responsibility to provide equal access.
I would also like to emphasize what we are witnessing in the erosion of cultural and societal values, reflecting the internet content that has been provided over the past few years, as well as the phenomenon of social media addiction, which is impacting the safety and sanity of our youth, and their psychological and physical health. To address all of these formidable challenges, I think it is necessary to adopt a comprehensive approach that includes all parties: governments, civil society, and content producers. I think this is the only way forward if we are to arrive at a unified framework to protect the rights of children, especially in an era in which we face challenges we have never witnessed before. We also need to protect their privacy and enhance their well-being in order to create an equitable digital future that includes everyone. Thank you.

Marleni Cuellar: One of the things I think we have all clearly identified is that there is a level of literacy that has to be established. There are many stakeholders in the protection of children. We talk about governments, regulatory bodies, parents, and the educational sector. Dr. Khan, from your perspective, what are some of the technological solutions that could help safeguard children online? And within that, what role does education play?

Muhammad Khurram Khan: Yeah, it’s a great question, Marleni. There is a famous saying about technology: if you think technology will solve your problems, then you understand neither technology nor your problem. And this is very true when it comes to safeguarding children online. When we give a child access to the internet, we are, on the other hand, giving the online world access to that child. So we need safeguards implemented into the platforms: there should be some oversight of the algorithms, and there should be protocols that have to be followed. There is no single solution that can address all the problems children face. We have to take the multi-pronged and multilateral approach I mentioned in answering your previous question. What does that mean? First, we need a framework that operates at different levels: strategic, tactical, and operational. At the strategic level, I am talking about regulations, laws, governance, and compliance, the kinds of things we have to build. The second is tactical: we should have more standards and specifications for the tech platforms to adopt when they build their systems. For example, when they build a system for children, there should be child-appropriate verification, age verification, and age-assurance protocols. Then there should be operational approaches, meaning the tools we use for the protection of children, for example parental control tools. Alongside all of this, we should have awareness and education, inculcated into the children’s curriculum, and not only for children but also for digital parenting, for educators, in the schools and in our environment.
All of these things are very important to building a technological solution. It is not just one thing; we cannot simply build the technology, because technology sometimes fails, and technology becomes obsolete. Most important is that we have a multi-pronged and multilateral approach to build a system that can protect our children when they go online.

Marleni Cuellar: And Mr. Kaspersky, from a cybersecurity perspective, you have seen the multitude of threats that people are exposed to daily. Given that experience, it’s challenging to imagine children being able to decipher what puts them at risk and what doesn’t. When we talk about a technological solution, on which end is it more important: the user or the implementer?

Eugene Kaspersky: I think it’s both. Technically speaking, it’s possible to build these parental control systems, which you just mentioned. But unfortunately (well, fortunately, it’s good news) the kids are growing and getting smarter, so they’re able to disable these tools. Unfortunately, the mobile phone operating systems, sorry, I’m getting into technical details, don’t guarantee that all applications run forever. So it’s possible to find tricky ways to disable some applications, and when kids are 13 or 14 years old, they are smart enough to find these tricks and disable any kind of protection. So speaking about the mobile phone, for 10- or 11-year-olds it still works. When they’re older, say 16, parental control works in a different way: the kids control the parents. So technical measures work up to a certain age. But maybe that is also good news: it makes them find ways to bypass the protection, training their minds in how to behave when they face challenges. The other story is how to protect them from the other side, from the content side, from the internet: how to find the wrong information, the abusive information, the forbidden information. Those are the technologies. For this we also need the help of regulators, to require internet services to control the traffic they provide, and of law enforcement, to find the bad guys who generate this content, this wrong and abusive content, who try to manipulate kids. And in this case, we need much stronger international cooperation.
Unfortunately, the geopolitical situation now is such that some countries don’t talk to each other, and this just gives more opportunities to the very bad guys, who exploit the fact that the internet doesn’t have borders. It is easier for them to commit any kind of crime, including child abuse, and stay unrecognized. So my final point is that we need much stronger international cooperation to find all kinds of cybercriminals, especially child abuse criminals.

Marleni Cuellar: Thank you. One of the things I wanted to say in setting the backdrop: we’re talking about children’s rights. The establishment of the United Nations Convention on the Rights of the Child took many things into consideration, ultimately the protection of children and ensuring they reach their full potential. It was created before there was a digital realm, and things obviously get more complicated now, but the structure of how different entities come together to protect the most vulnerable citizens in our communities remains the same. So when we transfer this idea into protection in the digital realm, how do you see that collaboration playing out?

Sara Alfaisal: Okay, if you allow me, I will switch to Arabic. In answer to your question, the collaboration between governments and civil society organizations is an important question, and it is essential in providing protection to children in the digital world. There are many modalities, among them the exchange of expertise and raising the level of awareness in order to achieve important outcomes in that regard. We all saw the Riyadh announcement yesterday. It includes many important elements in this regard: the enhancement of digital inclusion; the role of AI in sustainable development; making the most of AI in innovation; and enhancing the governance of AI, which is an essential point related to child protection in the cyber world, through managing the potential risks of AI and creating safe, sustainable systems. This cannot happen without the collaboration of the three partners: governments, the private sector, and civil society. They need to collaborate in order to reach comprehensive, sustainable solutions and achieve the safety we aspire to for children in the cyber world.

Marleni Cuellar: And Dr. Khasru?

Syed Munir Khasru: As most of us in one way or another have expressed, this takes very multi-stakeholder engagement, and in my opinion it starts with the tech companies, because today Google, Meta, and Microsoft have a cumulative worth of almost $12 trillion, which is more than four times the combined GDP of the African continent. So I personally believe they have a more active role to play, with all due respect to my colleague on the left from Meta. You have been doing content moderation, but in my opinion it still falls far short; what I have observed is more reactive than proactive. We can have different opinions, but the children caught in this transition are getting affected. One of the things I would like to say is that technology is a bottom-up matter, not top-down, because it is impossible for any government, any tech company, anybody to reach down to that nano level. So it starts with the small family, then society, then community, then nation, then region. Even before this meeting, I was speaking with His Excellency about some of the things that can be done regionally. So I think this is a very important issue, and it is my impression that technology always leapfrogs policy in any part of the world. Whether it’s an authoritarian regime or a democracy, it takes somewhere between six months and two years to pass a law, and by the time you pass the law, technology has moved to the next level. So you are always catching up. I think we are heading towards a very different situation, because within the next decade governments will have to give a mandate to some central force that adopts these laws, with other countries following, because of the risk zone we are in now, particularly once AI kicks in within the next decade. That will be a very, very different situation.
We’re not talking hypothetically; when it kicks in practically, from the way you grade your students to what is original and what is a copy, it will be a very different situation. If you have a large number of people clustering around it, it will be a big mess unless there is some central force. So the UN will probably have to scale up its operation. That is my answer on how to scale up from the bottom up. And I think tech companies have a very important role to play because, to be honest with you, they are more powerful than governments. They are spread around the world. They have access to all our personal data: what we eat, where we live, what our favorite color is, what our date of birth is. They have more information than the FBI, more information than intelligence agencies. So the onus is on them. They are doing things, but I think they need to do more. Thank you.

Marleni Cuellar: His Excellency Mr. Zarenin, let’s talk about the collaboration.

Andrei Zarenin: Many thanks. We have heard a lot about cooperation between government and civil society. In Russia we study Fathers and Sons by Turgenev, which is about the gap in the perception of the world between the generations. I am sure that when Ivan Turgenev was writing this novel, he could not even comprehend how much this gap would change in the age of technology. In today’s digital world, technology has an essential role, and of course it plays a great role in providing digital protection for children in Russia. We work with internet companies and internet providers, and they must adapt to the threats of the current day. Yevgeny mentioned that children at many ages can navigate around the barriers and bring them down. In Russia, we have special portals from the government bodies that tackle these problems: on the portal of government services, there is a special section for the protection of children in cyberspace, with recommendations for parents. Together with our partners working on child protection on the web, we are planning to run a mailing from the portal of government services. No current web filter can replace human cooperation. We have developed a charter for the protection of children, with principles protecting the rights and identity of the child, and we particularly highlight the protection of identity and the privacy of data, which is very important in the world of technology. I would also like to draw attention to our united responsibility to protect children. As adults, it is our task to show children how wonderful and diverse the world is, and using gadgets and the web is one of the tools for learning about this world. And of course, we are working on this in our ministry.
We have a program of cyber hygiene, whose goal is to explain to children, in simple language, the rules of behavior on the web. Over 10 million people, including 4 million children, have participated in different projects related to cybersecurity. We work with non-profit organizations and have run many events related to child protection on the web, such as the lesson on the digital world and the digital dictation. We support such initiatives, we are always open to productive interaction, and we always work to protect children in cyberspace.

Marleni Cuellar: Impressive, what is taking place in Russia. Now, Ms. Liberhan, let’s talk about it from Meta’s point of view. As the private sector, how do you see this collaboration playing out with civil society and governments?

Deepali Liberhan: So as I said, at Meta we have a multi-layered, multi-pronged approach to safety, where we think about having the right policies, features, and tools, and also about working with experts, parents, and young people to build safe and age-appropriate experiences. But over the last few years we’ve gone beyond that, because we know a lot of bad actors don’t restrict themselves to one platform; they move across platforms. And this is why it’s really important to underscore collaboration between industry, civil society, and regulators to address some of these harms at scale. I want to give two examples of what we are doing now. One is Project Lantern, run by the Tech Coalition, where all participating companies have the opportunity to share signals about accounts and behavior that violate child safety policies. This goes beyond just Meta: it’s available to all participating companies, and it helps all of these platforms do their own investigations, so we are able to address these harms in a scalable way, because we know predators will move from platform to platform. The second thing I want to mention is StopNCII.org, as well as Take It Down, which is run by the Revenge Porn Helpline and NCMEC. This is again a way to address these harms in a scalable way, because all participating companies are part of it: anyone who is afraid of an intimate image being shared can submit a hash to StopNCII.org or Take It Down, and companies like us receive those hashes, so when that content actually appears, we are able to remove it.
The reason I’m giving these two examples is that we often hear that technology companies are not doing enough, but these are some of the ways we are coming together to address these harms in a very scalable way, not just at a Meta level, but at a cross-platform level. And it’s also really useful to have discussions, because education about these features, tools, and services is really important as well. What is the role of educators? What is the role of parents? What is the role of regulators? And how can we as a community come together to address these harms, identify what the gaps are, and then work together to address them?

Marleni Cuellar: Now, picking up on some of the points that have been discussed so far, one thing that still sticks out to me is that when we talk about educating our children on how to stay safe online, we’re laying expectations, quite frankly, on a generation that is not up to date with the threats that exist. In my own work on media and information literacy, we are constantly teaching adults how to detect disinformation and misinformation and how to understand deepfakes. So when these same people have to teach children about those risks, and now there are manipulation tools that exist specifically for children, how do we pass on that responsibility when they start from a place of knowing less than the person they’re educating? Dr. Khan, do you want to start us off?

Muhammad Khurram Khan: Yeah, absolutely. Like I mentioned, technology is not a panacea; it is not a solution for everything. So regulation and governance are very, very important, especially when we talk about AI, which, by the way, is still just narrow AI. We are not yet talking about AGI, Artificial General Intelligence, or Artificial Superintelligence. When those kinds of technologies come in the future, along with brain-computer interfaces, the challenges will be at an unprecedented level. What I would say is that child online protection is a global problem, and it needs a global solution. And what solution? That is the question. The Western world, the developed countries, already have the skills, resources, and technologies. But what about the rest of the world, the least developed and developing countries? We should have inclusiveness in the safety and protection of children, not only in the developed world but in the rest of the world too. What we need is cooperation, perhaps under the United Nations, the ITU, the G20, the G7, and many other multilateral organizations, to build the curriculum, to build the tools and technologies, and to build the policies, governance, and foresight of the technologies. Because, as was mentioned, we are reactive to technology, and that is not a solution; we have to be predictive about technologies. We have, for example, quantum computing, which is changing everything. What will be the impact of these kinds of technologies on children?
So it has to be studied, and it has to be communicated to all the stakeholders who are building the policies, the technologies, and the solutions. Yeah.

Marleni Cuellar: I think Mr. Kaspersky said earlier, let’s remember that there is good that comes along with the bad. Obviously, for a lot of parents who are probably watching, as they hear about potential threats they may be unaware of, the inclination is: let me protect my child by limiting any access whatsoever. What’s the balancing act here?

Eugene Kaspersky: That’s a good question. That’s a good question. We have to find a good balance. Actually, that’s what I do with my kids. The limit is flexible and depends on school results. If they perform better, they get more time on the internet. If they perform below the acceptable limit, they exchange their smartphone for an old-style Nokia. That’s it. And speaking about education on cyber threats and wrong information, well, I’m very sorry, I have a completely different opinion. I think that education for adults is more important than education for kids, because the danger from non-educated adults is much greater: they can make much more dangerous mistakes. As a private company we do a lot of cybersecurity education, for every kind of audience: for professionals, for students, for kids, even for very small kids. Welcome to our booth: you will see a picture book about cybersecurity for very small kids. We do a lot of this. But my experience is that kids learn much faster than adults.

Marleni Cuellar: They outsmart them very often. So this is an open question, if anybody else wants to share their thoughts.

Syed Munir Khasru: I’ll just add to what he said. One of the things we are probably overlooking is the role of parents. And one of the things I fear for this generation is the proportionate loss of appreciation and understanding of the human component. No matter what AI or advanced technology we embrace in the coming decades, human compassion, empathy, that human touch, will always be the number one factor. Even in medicine: you may have the best AI-driven medical facility, but at the end of the day a patient wants to hold the hand of a nurse, of a doctor. I’ll give a very simple, basic example of what I see left and right. When kids accompany their parents, even to social events, I see kids busy with mobiles and pads, and I’ve seen very few parents even tell them, you know, you need to interact and engage with the people who have come to your home. Very basic things. So what I fear is that we are unleashing a generation that is extremely talented and very well informed, but consider whatever cerebral capacity our forefathers had: in the same garage, you are now parking four cars, because we are flooding people with information. There is an information overload, but God has not changed the composition of your brain. In the same garage, you are trying to park four cars where you used to park one, right? So people are very stressed. And one of the reasons, you may have noticed, that many social media-led movements have not been successful is that they lacked a proper human face. If you look at the last 10 or 15 years, at the political upheavals that have happened, why do so many young people come to the street, put their lives in the line of fire, and then fail? That’s a question we have to answer. Where is the missing link? There, I think, parents have an important role to play. We have talked a lot about children, absolutely, but we must not overlook the overarching, important role that parents have to play.
Just buying a kid a smartphone and giving them access is a shortcut route. The merit and value of bedtime stories is still as important as it was 20 or 30 years back. My concern is that in the drive, the natural inclination, toward technology, we are sidestepping some basic human factors that continue to shape how we form our future citizens. Thank you.

Marleni Cuellar: And Deepali, before we move into our closing statements, I wanted to broaden the conversation back to technology. Give us some insight, because it was said earlier that technology companies seem to be reactive rather than proactive, but that seems to be the situation we find ourselves in: the technology comes out, there is rapid uptake, and the manipulation comes along with it. But is some of the emerging technology going to provide helpful tools to guide or ease the protection mechanisms for children? Verification of age, for example, or using AI to detect some of the deepfakes. Is this also an opportunity moment for us?

Deepali Liberhan: I think that’s a really good question, and it is an opportunity. Let me give you an example. I’ve been with Meta for over a decade, and when I joined, we had community standards, which were clear rules about what was and wasn’t okay to post online. But we were very dependent on user reporting to tell us that content was bad and needed to be taken down. Over the years, we’ve invested heavily in proactive technology, to the point that we are now able to remove a majority of violating content before it’s reported to us by anyone. If you look at our latest transparency report, where we publish these numbers, we were able to remove more than 7 million pieces of child sexual abuse material, more than 7 million pieces of bullying and harassment content, and more than 5 million pieces of suicide and self-injury content, and a majority of this content we removed before anybody reported it to us. That’s the work of AI. We’re continuing to work with proactive technology in the areas you’ve mentioned, whether it’s verifying age, looking for signals that can identify when somebody is under 18, or seeing how we can use this technology to be faster and more effective. I do think technology has a very important role to play in addressing some of these harms. We’ve been using it for a long time, and we’re going to continue to build on it to address many of the issues that have been raised today.

Marleni Cuellar: All right, well, there’s a lot more we could discuss on this particular topic, but I do want to give our panelists the opportunity to make their closing statements.

Sofiene Hemissi: Shukran. Thank you. These have been very enriching discussions. Amid the complexities of this digital world, we have one true fact: the protection of children’s rights is not an option but a shared responsibility. In the digital world, the child has the right to a safe digital space that ensures growth, provides opportunities for skill development, and gives them a place to express themselves and their talents. Achieving this end is not the responsibility of governments, or civil society, or private companies alone; it is a shared and joint responsibility, without exception. Government is responsible for well-framed policies, since we cannot simply apply conventional methods. Civil society has a greater role to play in awareness and education, while private companies play a greater role in developing and creating content appropriate to children, to their identity and culture, along with comprehensive awareness appropriate to this kind of complexity. Cooperation between international organizations and states is also very important: no single country can achieve this end without international cooperation at different levels. I hope that we create a safe digital space and landscape for all. Thank you.

Sara Alfaisal: Thank you for your contributions today. As we conclude our session, I think we have to remember the key points we have discussed, and also the actions we need to take to move forward. Together, we have the power to make a great impact. It is all about commitment. Thank you.

Eugene Kaspersky: Thank you. Yes, the wrong and abusive content on the internet, that’s bad. Manipulating kids, that’s really bad. We need to fight that. But now we say the kids spend too much time with mobile phones. The previous generation spent too much time playing computer games. The generation before that watched too much TV. Before that, they read too many books. So I am optimistic. We will fix the problems, we will save our kids, and they will be a happy generation. And they will have kids too, and their kids will face the same problems.

Syed Munir Khasru: I would like to conclude by saying that IGF, in a way, stands for what I was going to say. I, for inspiring children to learn to navigate the web in a manner that adds value to their lives. Then comes the G, the governance part, where we have to do a better job of making the web as safe and secure as possible. Then comes the F, to free the human spirit, and if you go by the UN Convention on the Rights of the Child, the IGF is all about child rights. Another thing, since we are having a very free and frank exchange of ideas, I would like to draw attention to something, because without it this would be incomplete. Everybody in this room must recognize that any child anywhere in the world, any life lost, injured, or sick, should have equal value. That means a Rohingya child born in a refugee camp in Bangladesh and a Palestinian child struggling for life all have equal value. And I must say, it is my opinion, many of the social media companies are not being fair and equal. I do not believe they should be dragged into bigger geopolitics. They should be like the United Nations, where every country, every citizen has a voice. I have seen many people deserting social media in recent times, which should be a wake-up call. People are smart, children are smart, youth are smart, teens are smart. And we must understand: every life, every child matters. Thank you.

Deepali Liberhan: I want to close by saying that we’re really grateful for the opportunity to be here, because a lot of these discussions really help inform the work that we do in the trust and safety space. So, I want to thank the panelists for their very clear and helpful comments. The other thing I would close by saying is that, at Meta, we’ve adopted the best interests of the child framework when we’re thinking about building products and features for young people, which has been guided by the UN Convention on the Rights of the Child. I won’t go into all the considerations, but two important ones are: first, engagement with the families and teens who use our apps, which is why we’re going to continue to work with parents, teens, and experts as we think about building for them; and second, building safe and age-appropriate experiences. Everything that we’ve done, including teen accounts and working with other organizations, whether it’s Project Lantern, StopNCII.org, or Take It Down, is in furtherance of making sure that we do all that we can to keep young people safe on the platforms. But it is a collaborative approach, and there is a role for educators, for civil society, and for regulators as well. I think it’s really important to have more opportunities to come together and develop those roles. I also think it’s important to have legislative solutions, and we’ve proposed one where we think that age assurance at an app level or an OS level, and parental consent at that level, really will make it easier for parents to navigate their online experience, while also minimizing data collection and addressing some of the age assurance concerns that our stakeholders have raised. It really is a multi-stakeholder community approach, and we need to come together to address a lot of these issues, as we have been doing.

Muhammad Khurram Khan: Yes, as we know, Saudi Arabia has two global initiatives: one on child online protection, and one on protecting and empowering women in cybersecurity. So, in my opinion, child online protection should be woven into the fabric of our society. It is very important, and we should have specifications, protocols, formulations, and standards, which we are currently lacking, to guide social media platforms and other online platforms, so that when they build their technology, they include protection in their systems. Because it is not only Meta and the big organizations that have the resources, right? What about small startups and small companies? They do not know about these things. So if there are specifications, standards, and protocols, they would include those things in their systems when they build. That will ensure that children remain safe and sound and that their well-being is protected while they are online. Thank you very much.

Andrei Zarenin: I have a responsible mission in my closing word, because I’m the final one speaking, so I’ll try to be positive. I would like to stress that a secure and well-regulated environment for children is also a social challenge. As we have already discussed, without the cooperation of families, business, and society, we cannot be effective, and our children will not feel safe and protected. From my point of view, the Internet, as we have already discussed, is just one of the windows to the whole world. And the key point, from my point of view, is human communication, the trust between children and parents. No filters or technologies can replace that actual dialogue. We need to concentrate on complex solutions that unite education, technical protection, and, in some situations, psychological support. We, as Russia, are ready for international cooperation. We are ready to share our tools and our educational programs, and to cooperate to protect our children. So let’s create a safe and inspiring environment for our children. And let me conclude with this phrase: our children will be smarter than us.

Marleni Cuellar: In some cases, they already are. So with that, I want to say thank you to all of you up here, and to those of you who took the time to join us this morning for this conversation. The resounding theme is the recognition that there has to be a multi-stakeholder approach to ensuring the protection of children in the digital realm. Just up here, we have a wide representation: private sector, education, government, civil society organizations, and technology organizations as well. So this is perhaps one of many steps to be taken to formalize what the plan of action will be. But I do think the point that you made, Mr. Kaspersky, is probably most important: we’ll survive this too. All our children are important, and we just want the best for them as they interact in the digital world. Thank you all for joining us, and enjoy the rest of your day.


Eugene Kaspersky

Speech speed

141 words per minute

Speech length

1179 words

Speech time

501 seconds

Children spend too much time with mobile phones and games, but this trains different skills

Explanation

Kaspersky acknowledges that children spend excessive time on mobile phones and games. However, he argues that this is not necessarily bad as it helps children develop different skills compared to previous generations.

Evidence

Kaspersky mentions his personal experience with his five children.

Major Discussion Point

Current digital landscape and threats for children

Agreed with

Sofiene Hemissi

Andrei Zarenin

Syed Munir Khasru

Deepali Liberhan

Agreed on

Digital world offers both opportunities and risks for children

Differed with

Sofiene Hemissi

Differed on

Impact of technology on children

Parental control systems work for younger children but older teens can bypass them

Explanation

Kaspersky points out that while parental control systems can be effective for younger children, older teenagers are often able to bypass these protections. He suggests that this limitation calls for alternative approaches to online safety for older teens.

Evidence

Kaspersky mentions that children aged 13-14 are smart enough to find tricks to disable protection applications.

Major Discussion Point

Technological solutions and education

Differed with

Muhammad Khurram Khan

Differed on

Effectiveness of technological solutions

International cooperation needed to find and stop child abuse criminals online

Explanation

Kaspersky emphasizes the need for strong international cooperation to combat online child abuse. He argues that the borderless nature of the internet requires coordinated efforts across countries to effectively address this issue.

Evidence

Kaspersky mentions that the current geopolitical situation, where some countries don’t communicate with each other, gives more opportunities to criminals.

Major Discussion Point

Multi-stakeholder collaboration


Sofiene Hemissi

Speech speed

98 words per minute

Speech length

1043 words

Speech time

636 seconds

Digital world offers opportunities for learning but poses risks to emotional and mental health

Explanation

Hemissi points out that the digital world provides learning opportunities for children. However, he also highlights that it presents risks to children’s emotional and psychological well-being.

Major Discussion Point

Current digital landscape and threats for children

Agreed with

Eugene Kaspersky

Andrei Zarenin

Syed Munir Khasru

Deepali Liberhan

Agreed on

Digital world offers both opportunities and risks for children

Differed with

Eugene Kaspersky

Differed on

Impact of technology on children

Lack of consistent international legal frameworks and policies for child protection online

Explanation

Hemissi argues that there is a lack of consistent international legal frameworks and policies for protecting children online. He emphasizes the need for unified frameworks that all countries can rely on.

Major Discussion Point

Challenges in protecting children’s rights online

Agreed with

Syed Munir Khasru

Deepali Liberhan

Andrei Zarenin

Agreed on

Need for multi-stakeholder collaboration


Andrei Zarenin

Speech speed

130 words per minute

Speech length

1119 words

Speech time

513 seconds

Virtual masking and deepfakes create false sense of trust and manipulate children’s behavior

Explanation

Zarenin highlights the dangers of virtual masking and deepfakes in the digital world. He argues that these technologies create a false sense of trust and can be used to manipulate children’s behavior.

Evidence

Zarenin mentions data from the All-Russian Action Cyber Writings Crisis showing that many children cannot detect fraud in online gaming.

Major Discussion Point

Current digital landscape and threats for children

Agreed with

Eugene Kaspersky

Sofiene Hemissi

Syed Munir Khasru

Deepali Liberhan

Agreed on

Digital world offers both opportunities and risks for children

Cyber hygiene programs and digital literacy initiatives are important for educating children

Explanation

Zarenin emphasizes the importance of cyber hygiene programs and digital literacy initiatives in educating children about online safety. He argues that these programs are crucial in helping children navigate the digital world safely.

Evidence

Zarenin mentions that over 10 million people, including 4 million children, have participated in various cybersecurity projects in Russia.

Major Discussion Point

Technological solutions and education

Agreed with

Sofiene Hemissi

Syed Munir Khasru

Deepali Liberhan

Agreed on

Need for multi-stakeholder collaboration


Syed Munir Khasru

Speech speed

167 words per minute

Speech length

1893 words

Speech time

679 seconds

Children interact with people without clear identities online, posing risks

Explanation

Khasru points out that children often interact with people online whose identities are not clear. This lack of clarity in online identities poses significant risks to children’s safety.

Evidence

Khasru states that 60% of children interact online with people whose identity is unclear.

Major Discussion Point

Current digital landscape and threats for children

Agreed with

Eugene Kaspersky

Sofiene Hemissi

Andrei Zarenin

Deepali Liberhan

Agreed on

Digital world offers both opportunities and risks for children

Tech companies should play a more active role given their resources and reach

Explanation

Khasru argues that tech companies should take a more active role in protecting children online. He emphasizes their vast resources and global reach as reasons for them to take greater responsibility.

Evidence

Khasru mentions that the cumulative worth of Google, Meta, and Microsoft is almost $12 trillion, which is more than four times the combined GDP of the African continent.

Major Discussion Point

Multi-stakeholder collaboration

Agreed with

Sofiene Hemissi

Deepali Liberhan

Andrei Zarenin

Agreed on

Need for multi-stakeholder collaboration


Sara Alfaisal

Speech speed

118 words per minute

Speech length

391 words

Speech time

198 seconds

Privacy violations and cyberbullying are major challenges from a human rights perspective

Explanation

Alfaisal highlights privacy violations and cyberbullying as significant challenges in protecting children’s rights online. She emphasizes the importance of the right to privacy in international human rights frameworks.

Evidence

Alfaisal mentions that the right to privacy is included in the Universal Declaration of Human Rights and the UN Convention on the Rights of the Child.

Major Discussion Point

Challenges in protecting children’s rights online


Muhammad Khurram Khan

Speech speed

156 words per minute

Speech length

1403 words

Speech time

539 seconds

Passive risks like data harvesting and deepfakes created from children’s photos posted online

Explanation

Khan highlights passive risks to children online, such as data harvesting and the creation of deepfakes from photos posted on social media. He argues that children and parents may be unaware of these risks.

Major Discussion Point

Challenges in protecting children’s rights online

Need for child-appropriate verification, age assurance protocols, and standards for platforms

Explanation

Khan emphasizes the need for child-appropriate verification methods, age assurance protocols, and standards for online platforms. He argues that these measures are crucial for protecting children in the digital world.

Major Discussion Point

Technological solutions and education

Differed with

Eugene Kaspersky

Differed on

Effectiveness of technological solutions


Deepali Liberhan

Speech speed

160 words per minute

Speech length

1748 words

Speech time

654 seconds

Content risks, contact risks, and excessive screen time are key concerns for parents

Explanation

Liberhan identifies three main concerns for parents regarding their children’s online activities: content risks, contact risks, and excessive screen time. She argues that these are the primary issues that need to be addressed to protect children online.

Major Discussion Point

Challenges in protecting children’s rights online

Agreed with

Eugene Kaspersky

Sofiene Hemissi

Andrei Zarenin

Syed Munir Khasru

Agreed on

Digital world offers both opportunities and risks for children

Meta has implemented over 50 features and tools to address risks, including teen accounts

Explanation

Liberhan highlights Meta’s efforts to protect children online, including the implementation of over 50 features and tools. She specifically mentions the introduction of teen accounts as a measure to create a safer online environment for young users.

Evidence

Liberhan provides details about teen accounts, including default private settings, strict messaging controls, and content restrictions.

Major Discussion Point

Technological solutions and education

Cross-platform initiatives like Project Lantern help address harms at scale

Explanation

Liberhan discusses cross-platform initiatives like Project Lantern as effective ways to address online harms at scale. She argues that collaboration between different platforms is crucial in combating threats to children’s safety online.

Evidence

Liberhan explains how Project Lantern allows participating companies to share signals about accounts and behaviors violating child safety policies.

Major Discussion Point

Multi-stakeholder collaboration

Agreed with

Sofiene Hemissi

Syed Munir Khasru

Andrei Zarenin

Agreed on

Need for multi-stakeholder collaboration

Agreements

Agreement Points

Digital world offers both opportunities and risks for children

Eugene Kaspersky

Sofiene Hemissi

Andrei Zarenin

Syed Munir Khasru

Deepali Liberhan

Children spend too much time with mobile phones and games, but this trains different skills

Digital world offers opportunities for learning but poses risks to emotional and mental health

Virtual masking and deepfakes create false sense of trust and manipulate children’s behavior

Children interact with people without clear identities online, posing risks

Content risks, contact risks, and excessive screen time are key concerns for parents

Speakers agree that the digital world presents both opportunities for learning and skill development, as well as risks to children’s safety and well-being.

Need for multi-stakeholder collaboration

Sofiene Hemissi

Syed Munir Khasru

Deepali Liberhan

Andrei Zarenin

Lack of consistent international legal frameworks and policies for child protection online

Tech companies should play a more active role given their resources and reach

Cross-platform initiatives like Project Lantern help address harms at scale

Cyber hygiene programs and digital literacy initiatives are important for educating children

Speakers emphasize the importance of collaboration between governments, tech companies, civil society, and educational institutions to address online child protection issues effectively.

Similar Viewpoints

Both speakers emphasize the importance of implementing technological solutions and standards to protect children online, such as age verification and specialized features for young users.

Muhammad Khurram Khan

Deepali Liberhan

Need for child-appropriate verification, age assurance protocols, and standards for platforms

Meta has implemented over 50 features and tools to address risks, including teen accounts

Both speakers highlight the importance of addressing privacy concerns and manipulative behaviors in the digital space to protect children’s rights and well-being.

Sara Alfaisal

Andrei Zarenin

Privacy violations and cyberbullying are major challenges from a human rights perspective

Virtual masking and deepfakes create false sense of trust and manipulate children’s behavior

Unexpected Consensus

Positive aspects of children’s digital engagement

Eugene Kaspersky

Syed Munir Khasru

Children spend too much time with mobile phones and games, but this trains different skills

Children interact with people without clear identities online, posing risks

Despite acknowledging risks, both speakers unexpectedly highlight positive aspects of children’s digital engagement, such as skill development and access to information, suggesting a nuanced view of the digital landscape for children.

Overall Assessment

Summary

The main areas of agreement include recognizing both opportunities and risks in the digital world for children, the need for multi-stakeholder collaboration, and the importance of technological solutions and education in addressing online child protection issues.

Consensus level

There is a moderate to high level of consensus among the speakers on the complexity of the issue and the need for collaborative efforts. This consensus implies that future initiatives to protect children online are likely to involve diverse stakeholders and multifaceted approaches, combining technological, educational, and regulatory measures.

Differences

Different Viewpoints

Impact of technology on children

Eugene Kaspersky

Sofiene Hemissi

Children spend too much time with mobile phones and games, but this trains different skills

Digital world offers opportunities for learning but poses risks to emotional and mental health

While Kaspersky sees technology use as potentially beneficial for skill development, Hemissi emphasizes the risks to emotional and mental health.

Effectiveness of technological solutions

Eugene Kaspersky

Muhammad Khurram Khan

Parental control systems work for younger children but older teens can bypass them

Need for child-appropriate verification, age assurance protocols, and standards for platforms

Kaspersky highlights the limitations of technological solutions, especially for older teens, while Khan advocates for more robust technological measures.

Unexpected Differences

Role of parents in digital education

Eugene Kaspersky

Syed Munir Khasru

Parental control systems work for younger children but older teens can bypass them

Children interact with people without clear identities online, posing risks

While most speakers emphasized parental involvement, Kaspersky unexpectedly highlighted the limitations of parental controls, suggesting a need for alternative approaches, especially for older teens.

Overall Assessment

Summary

The main areas of disagreement revolved around the impact of technology on children, the effectiveness of technological solutions, the role of tech companies, and the balance between education and technical measures.

Difference level

The level of disagreement was moderate. While speakers generally agreed on the importance of protecting children online, they differed in their emphasis on various approaches and solutions. These differences highlight the complexity of the issue and the need for a multi-faceted approach to children’s online safety.

Partial Agreements

Both agree on the importance of tech companies’ role in protecting children online, but differ on the extent of current efforts. Khasru calls for more action, while Liberhan highlights existing measures.

Syed Munir Khasru

Deepali Liberhan

Tech companies should play a more active role given their resources and reach

Meta has implemented over 50 features and tools to address risks, including teen accounts

Both emphasize the importance of education and technological measures, but focus on different aspects – Zarenin on education programs, Khan on technical standards and protocols.

Andrei Zarenin

Muhammad Khurram Khan

Cyber hygiene programs and digital literacy initiatives are important for educating children

Need for child-appropriate verification, age assurance protocols, and standards for platforms


Takeaways

Key Takeaways

Protecting children’s rights in the digital world requires a multi-stakeholder approach involving governments, tech companies, civil society, and parents

The digital landscape offers both opportunities and risks for children, including educational benefits but also threats to privacy, mental health, and safety

Technological solutions like parental controls and AI-powered content moderation can help, but must be balanced with education and human oversight

International cooperation and consistent legal frameworks are needed to address cross-border online threats to children

Digital literacy and cyber hygiene education are crucial for both children and adults

Resolutions and Action Items

Continue developing and implementing technological safeguards like Meta’s teen accounts and cross-platform initiatives like Project Lantern

Increase collaboration between stakeholders to develop standards, protocols and specifications for child safety to be integrated into online platforms

Expand digital literacy and cyber hygiene educational programs for children, parents and educators

Work towards more consistent international legal frameworks and policies for online child protection

Unresolved Issues

How to effectively verify age and implement age-appropriate protections online without compromising privacy

Balancing children’s rights to privacy and expression with parental oversight and protection

Addressing the digital divide to ensure equal access and protections for children globally

How to keep pace with rapidly evolving technology and emerging threats

Suggested Compromises

Flexible limits on children’s screen time based on factors like school performance

Balancing technological protections with education to empower children to navigate risks

Implementing child protection measures at the operating system or app store level to reduce data collection concerns

Thought Provoking Comments

We have to train, educate, inspire the children. And in my opinion, they are very smart. Even in this year, people in the age group of 12 and below, 35 to 40 million will be using internet, which is 12 million more than people in the age group of 12 to 17.

speaker

Syed Munir Khasru

reason

This comment challenges the notion that children need to be protected from technology, instead suggesting they need to be empowered to use it wisely. It also provides surprising statistics about young children’s internet usage.

impact

This shifted the conversation from a protective stance to one focused on education and empowerment. It led to further discussion about digital literacy and the role of parents and educators.

Now we are interacting with the technology, but in the future we will be having technology permeating inside by having, you know, brain-computer interface and we will have the chips implanted in our minds, you know, the brains, you know. And I’m afraid that, you know, what would be the situation look like in the future.

speaker

Muhammad Khurram Khan

reason

This comment introduces a futuristic perspective on technology integration, raising ethical concerns about the long-term implications of technology on human cognition and autonomy.

impact

It broadened the scope of the discussion beyond current issues to consider future challenges, prompting others to think about long-term technological trends and their potential impacts on children.

From human rights perspectives, we need to focus on the right of privacy, because the right of privacy, it’s a big right, and it’s included in the Universal Declaration and also it’s included in the Treaty of Children’s Rights.

speaker

Sara Alfaisal

reason

This comment frames the discussion in terms of fundamental human rights, emphasizing the importance of privacy in the digital age.

impact

It refocused the conversation on legal and ethical frameworks, leading to further discussion about the role of governments and international bodies in protecting children’s rights online.

If you think technology will solve your problems, then you do not understand technology and you do not understand your problem.

speaker

Muhammad Khurram Khan

reason

This provocative statement challenges the over-reliance on technological solutions, emphasizing the need for a more holistic approach.

impact

It led to a more nuanced discussion about the limitations of technology and the importance of education, policy, and human factors in addressing online child protection.

I think education for adults is more important than education for kids. Because the danger from non-educated adults is much more. They can do much more dangerous mistakes.

speaker

Eugene Kaspersky

reason

This comment offers a counterintuitive perspective, suggesting that adult education is more critical than child education in addressing online risks.

impact

It shifted the focus of the discussion to the role of parents and adults in creating a safe online environment, leading to further exploration of intergenerational digital literacy gaps.

Overall Assessment

These key comments shaped the discussion by broadening its scope from immediate concerns about online safety to long-term technological trends, human rights implications, and the limitations of purely technological solutions. They challenged participants to consider the complexity of the issue, emphasizing the need for a multi-stakeholder approach that includes education, policy, and human factors alongside technological solutions. The discussion evolved from a focus on protecting children to empowering them, and highlighted the importance of adult education and intergenerational understanding in addressing online risks for children.

Follow-up Questions

How can we address the digital divide between privileged and underprivileged children?

speaker

Sofiene Hemissi

explanation

This is important to ensure equal opportunities for all children in the digital world.

How can we protect children’s cultural identity and privacy in the digital sphere?

speaker

Sofiene Hemissi

explanation

This is crucial for preserving cultural values and protecting children’s personal information.

How can we develop unified international frameworks and policies for child protection online?

speaker

Sofiene Hemissi

explanation

This is necessary to create consistent protection measures across different countries.

How can we address the manipulation of children through AI and data analysis?

speaker

Sofiene Hemissi

explanation

This is important to protect children from potential exploitation through advanced technologies.

How can we improve digital literacy for parents to better protect their children online?

speaker

Andrei Zarenin

explanation

This is crucial for enabling parents to effectively guide and protect their children in the digital world.

How can we create safe environments that help children adapt to the digital world while minimizing risks?

speaker

Andrei Zarenin

explanation

This is important for balancing the benefits of technology with necessary safety measures.

How can we inspire and encourage children to be better net citizens?

speaker

Syed Munir Khasru

explanation

This is crucial for developing responsible digital behavior in children.

How can we address the challenges posed by emerging technologies like brain-computer interfaces and implanted chips?

speaker

Muhammad Khurram Khan

explanation

This is important to anticipate and prepare for future technological developments that may affect children.

How can we develop global solutions for child online protection that include developing and least developed countries?

speaker

Muhammad Khurram Khan

explanation

This is necessary to ensure comprehensive and inclusive protection measures worldwide.

How can we balance the benefits of technology use with the need to limit screen time for children?

speaker

Eugene Kaspersky

explanation

This is important to promote healthy technology use habits in children.

How can we improve international cooperation to combat child abuse and exploitation online?

speaker

Eugene Kaspersky

explanation

This is crucial for effectively addressing cross-border online threats to children.

How can we develop standards and protocols for child safety that can be easily implemented by small startups and companies?

speaker

Muhammad Khurram Khan

explanation

This is important to ensure consistent safety measures across all digital platforms, regardless of company size.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.