Day 0 Event #142 Navigating Innovation and Risk in the Digital Realm

15 Dec 2024 08:15h - 09:45h


Session at a Glance

Summary

This discussion at the Internet Governance Forum in Saudi Arabia focused on navigating risks and innovation in the digital realm. Participants explored the challenges and opportunities associated with digital technologies, particularly artificial intelligence (AI).


Key risks identified included cybersecurity threats, data privacy concerns, and over-dependence on technology. Dr. Maha Abdel Nasser highlighted how technological failures can severely disrupt daily life and business operations. The discussion also addressed the risks of AI bias, with Hadia Elminiawi noting how AI systems can perpetuate unfair treatment if trained on biased data.


Participants emphasized the need for robust reporting mechanisms for online abuse and cyberbullying. Dr. Maha and others noted the lack of unified platforms for reporting such incidents in many countries. The discussion touched on the challenges of holding tech platforms accountable for abusive content while balancing innovation.


Caleb Ogundele stressed the importance of cross-border collaboration and public-private partnerships in managing digital risks. He also highlighted the need for more support for women entrepreneurs in the tech sector.


While acknowledging the risks, participants like Amr Hashem cautioned against allowing fear to stifle innovation, drawing parallels to historical resistance to new technologies. The discussion concluded with a recognition that while digital technologies pose challenges, they are now an integral part of modern life, necessitating a balanced approach to risk management and innovation.


Key points

Major discussion points:


– Risks and challenges associated with digital innovation, including cybersecurity threats, privacy concerns, and technological dependency


– The need for frameworks, strategies and collaboration to mitigate online risks and abuse


– Challenges in reporting and addressing online harassment, especially for women and vulnerable groups


– The impact of AI and deepfakes on online safety and content authenticity


– Balancing innovation opportunities with associated risks in the digital realm


The overall purpose of the discussion was to explore the risks that accompany digital innovation and discuss strategies for harnessing innovation while effectively managing and mitigating associated risks.


The tone of the discussion was generally serious and concerned when discussing risks and challenges, but became more optimistic and solution-oriented as speakers discussed potential frameworks, collaborations, and opportunities. There was a shift towards the end to emphasize not letting risks completely stifle innovation and progress.


Speakers

– HADIA ELMINIAWI: Chief expert at the National Telecom Regulatory Authority of Egypt, member of ISOC Egypt, chair of the Africa Regional At-Large Organization (AFRALO) at ICANN, member of ICANN’s Security and Stability Advisory Committee


– DR. MAHA ABDEL NASSER: Member of the Egyptian Parliament, vice president of the Egyptian Social Democratic Party, executive member in the Salvation Front, member of the Supreme Council of the Egyptian Syndicate of Engineers


– CALEB OLUMUYIWA OGUNDELE: Internet public policy expert, member of the Board of Trustees of the Internet Society, chair of the Nigerian School on Internet Governance


– NOHA ASHRAF ABDEL BAKY: Support engineer at Dell, member of ISOC Egypt, instructor at the Pan-African Youth Ambassador IG


Additional speakers:


– MARIAM FAYEZ


– LISA VERMEER: Policymaker from the Netherlands working on the European AI Act


– AMR HASHEM: Member of Internet Masr


– MOUSSA: Student from Nigeria studying in Malaysia


– RAZAN ZAKARIA: VIAG ambassador and content creator from Egypt


Full session report

Expanded Summary of Discussion on Navigating Risks and Innovation in the Digital Realm


This discussion at the Internet Governance Forum in Saudi Arabia brought together experts to explore the challenges and opportunities associated with digital technologies, particularly artificial intelligence (AI). The conversation covered a wide range of topics, from cybersecurity threats to the potential of AI, highlighting the complex landscape of digital innovation.


Key Risks and Challenges


Participants identified several major risks associated with digital innovation:


1. Cybersecurity and Data Privacy: Dr. Maha Abdel Nasser and Caleb Olumuyiwa Ogundele emphasised the critical nature of cybersecurity threats and data privacy concerns. They agreed on the need for robust measures to protect individuals and organisations from these risks.


2. Technological Dependency: Dr. Maha Abdel Nasser highlighted the growing dependence on technology as a significant threat, noting how technological failures can severely disrupt daily life and business operations.


3. AI Bias and Transparency: Hadia Elminiawi raised concerns about bias in AI systems, explaining how AI applications can perpetuate unfair treatment if trained on biased data. This point broadened the conversation to include ethical considerations in AI development, including risks such as deepfakes and non-consensual porn.


4. Digital Divide: Noha Ashraf Abdel Baky drew attention to the gap between privileged and less privileged users in accessing and benefiting from digital technologies.


5. Regulatory Challenges: Caleb Olumuyiwa Ogundele pointed out the potential for abuse of regulatory frameworks by those in power, while Lisa Vermeer highlighted the difficulties in implementing consistent AI regulations across different countries, mentioning the European AI Act as an example.


Strategies for Mitigating Risks


The discussion then shifted to potential strategies for managing these risks:


1. Government-led Initiatives: Caleb Olumuyiwa Ogundele advocated for government-led initiatives, including regulatory sandboxes, to allow companies to test innovations safely.


2. Cross-border Collaboration: There was a consensus on the importance of international cooperation and data sharing to address global digital challenges effectively. Dr. Maha Abdel Nasser emphasized the need for political will in establishing collaborative platforms for digital innovation.


3. Alternative Solutions: Dr. Maha Abdel Nasser stressed the need to develop backup systems and alternative solutions to mitigate the risks of technological failures.


4. Responsible AI Principles: Hadia Elminiawi suggested establishing clear principles for responsible AI use by organisations to address issues of bias and transparency. Caleb Ogundele proposed using meta tags to indicate AI-generated content.


5. Improved Reporting Mechanisms: Several speakers emphasised the need for better reporting systems for online abuse and cyberbullying. Dr. Maha highlighted the lack of a unified platform for reporting such incidents in many countries.


6. Digital Literacy: Noha Ashraf Abdel Baky stressed the importance of raising awareness and improving digital literacy among vulnerable groups.


Challenges in Addressing Online Abuse


The discussion revealed several obstacles in effectively tackling online abuse:


1. Lack of Trust: Dr. Maha Abdel Nasser pointed out that many victims do not trust existing reporting mechanisms.


2. Cultural Barriers: Noha Ashraf Abdel Baky noted that cultural factors often prevent victims from reporting incidents.


3. Anonymity: Audience members raised concerns about the difficulty in tracing anonymous online actors.


4. Platform Accountability: Dr. Maha Abdel Nasser highlighted the challenges in holding tech platforms accountable for abusive content while balancing innovation. Razan Zakaria discussed the complexities of content moderation on social media platforms.


5. Need for Collaboration: Razan Zakaria emphasized the importance of collaboration between governments and tech platform owners in addressing online abuse.


6. Role of Civil Society: Mariam Fayez highlighted the importance of civil society initiatives in addressing online abuse.


Balancing Innovation and Risk Management


The conversation also explored the delicate balance between fostering innovation and managing risks:


1. Embracing Innovation: Amr Hashem cautioned against allowing fear to stifle innovation, drawing parallels to historical resistance to printing technology.


2. Inclusive Funding: Caleb Olumuyiwa Ogundele stressed the need for more support for women entrepreneurs in the tech sector through inclusive funding strategies, highlighting challenges such as limited access to capital and networking opportunities.


3. Regulatory Challenges: Lisa Vermeer highlighted the difficulties in implementing AI regulations consistently across different countries, citing the European AI Act as an example of ongoing efforts.


4. Freedom of Expression: Audience members raised concerns about bias in social media algorithms affecting freedom of expression.


Conclusion


The discussion concluded with a recognition that while digital technologies pose significant challenges, they are now an integral part of modern life. This necessitates a balanced approach to risk management and innovation. Key takeaways included the need for better frameworks and strategies to manage online risks, the importance of collaboration between governments, tech platforms, and civil society, and the ongoing challenge of balancing innovation with risk management.


Unresolved issues remain, such as effectively holding tech platforms accountable for abusive content, addressing anonymity online, and implementing consistent AI regulations across different countries. These topics provide fertile ground for future discussions and policy development in the realm of digital governance.


Session Transcript

HADIA ELMINIAWI: So, okay, so I’m starting. Welcome, everyone, to the Internet Masr session at the IGF in Saudi Arabia, Navigating Risks and Innovation in the Digital Realm. First, I would like to thank the forum and our host for their excellent organization and welcoming atmosphere. My name is Hadia Elminiawi, chief expert at the National Telecom Regulatory Authority of Egypt. However, I am here today in my capacity as a member of ISOC Egypt and chair of the Africa Regional At-Large Organization, AFRALO, at ICANN. I’m also a member of ICANN’s Security and Stability Advisory Committee. I’m an engineer by training and hold a Master of Science in Management and Leadership. Today, I will be speaking and co-moderating this session with my colleague, Mrs. Noha Ashraf Abdel Baky, who is on site in Saudi Arabia. Mrs. Noha Ashraf is a support engineer at Dell, member of ISOC Egypt, and an instructor at the Pan-African Youth Ambassador IG. In today’s session, we will be exploring the risks that accompany digital innovation, including cybersecurity threats, ethical dilemmas, and other emerging challenges. Our discussion will focus on strategies and frameworks for harnessing innovation while effectively managing and mitigating associated risks. We are honored to have with us today Dr. Maha Abdel Nasser, a distinguished member of the Egyptian Parliament. Dr. Maha is vice president of the Egyptian Social Democratic Party and one of the founding members. She has been an executive member in the Salvation Front and has been elected as a member of the Supreme Council of the Egyptian Syndicate of Engineers. Dr. Maha holds an engineering degree, an MBA, and a PhD degree in political marketing. She is also a certified instructor at the American University in Cairo, the Arab Academy for Science and Technology, and the American Chamber of Commerce in Egypt. We are honored as well to have with us online today Mr. Caleb Ogundele, an accomplished internet public policy expert. Caleb is a dedicated volunteer with the ISOC Nigeria and Manitoba chapters and currently serves as a member of the Board of Trustees of the Internet Society. Caleb chairs and coordinates the Nigerian School on Internet Governance and was a former management lead of the Information Technology Unit for the University of Ibadan Distance Learning Center and a project director at the African Academic Network on Internet Policy. He is also an instructor at the African Network Operators Group. Caleb holds two master’s degrees, in computer systems and information science. Engineer Noha will be managing the queue, both on site and online. We look forward to an engaging and insightful discussion. Thank you for joining us today. Without further delay, let me start with my first question to Dr. Maha Abdel Nasser. Dr. Maha, it’s an honor to have you on site with us today in this session. And thank you for your time and effort and shared thoughts. My first question to you is, in your opinion, what are the primary risks accompanying digital innovation?


DR. MAHA ABDEL NASSER: Well, thank you very much for ISOC and for Internet Egypt. I’m glad to be here. I’m honored to be with all of you. And thank you for the audience for being here. Actually, when we are talking about the threats for the digital transformation or digital era, the first thing definitely we will get in our minds is the cybersecurity, which is the most important thing and the aspect that a lot of people are thinking about most. And again, the data privacy: we are all worried about our data and how we are visible to the world. Our data is now almost everywhere, and we cannot do anything about it. But there is another threat that I think people may not be thinking about a lot, which is the dependency on the digital transformation, having this technological dependency, which actually may cause things to be completely stopped if something happens, which we have already seen in the airports across the world. We couldn’t think that a small bug could actually do all this harm to the people who were traveling, delaying them from their work, and it got people really thinking about what we are going to do if there is a shutdown in electricity, a shutdown in anything. We are so dependent on the technology now, and I think this is one of the major threats, risks, and challenges at the same time.


HADIA ELMINIAWI: Thank you so much, Dr. Maha, for this insight, and actually for pointing out the dependency on technology and how a small bug, as you mentioned, could put the world on a halt. So I will follow up with a question. In your opinion, how could we mitigate this?


DR. MAHA ABDEL NASSER: It’s very difficult to say, but of course, if you are talking about cybersecurity, all of us know about the firewalls, taking all the precautions, which still will not stop the cyber attacks that we are seeing every day, everywhere across the world. It is just a matter of who is racing whom, who can take the lead and somehow be able to avoid or attack. So it’s kind of a cat and mouse scenario, because all of the governments, all of the corporates, all of the organizations are just trying to avoid the cyber attacks, and at the same time the attackers are trying to carry out the cyber attacks. So I don’t think that there is something that can be done, or a legislation for anything, that can fully help with that. For the data privacy, I guess we all know that we have the GDPR, and most of the countries are trying to follow it or to pass some legislations or acts similar to the GDPR, despite the fact that some of them are not really successful in that. For the technological dependence, I don’t think that we can stop being dependent on technology anymore, but we have to find ways, we have to find some kind of alternative solutions if the technology fails us. We’ve been working without technology, we’ve been living without technology for centuries, but now we are so dependent that when we find there is a problem with our phone, we just panic; we feel as if the world stopped. We cannot even remember how life was before that. If the internet goes down, we feel that we are under threat. So I don’t think that we can really help it, because it’s more of a feeling or lifestyle that we cannot get away from. But the corporates have to find a way; they have to find alternative solutions. So when the technology fails, they have to mitigate it somehow; they have to have the traditional way as a backup if there is a problem, so you will not get everything stopped. This is how I see it.


HADIA ELMINIAWI: Thank you so much, Dr. Maha. And indeed, as you mentioned, maybe diversifying, and maybe also depending on local or community systems or applications. I’m not sure, but maybe there could also be a role here for frameworks, maybe developed by governments, or incentives provided by governments. But I will stop here and move to Caleb. Caleb, I would like to ask you the same question. In your opinion, what are the risks accompanying digital innovation?


CALEB OLUMUYIWA OGUNDELE: Thank you very much. Thank you very much, Hadia and Dr. Maha. Very interesting perspective from Dr. Maha. I must say that I do admire the way she approached the question. But first, one of the few things I did pick from our conversation was basically the fact that we can no longer do without technology these days, right? And so because we can no longer do without technology these days, that means that we are stuck with it, right? And if we are stuck with it, we need to really find solutions basically to some of those innovations and the risks that will follow. So one of the few things I did think about while preparing for this panel session was, first, we need to first of all start having what we call government-led initiatives. Some of those initiatives could also be based on legislation, regulatory frameworks, sandboxes, where a company can also test innovation safely. Take for example, we are now in the age of AI, and people need to get some regulatory assurances that AI is not going to take over their lives. So government needs to start having regulatory sandboxes that can help them safely test some of the AI systems. I’m aware that the Singapore government has a testing framework that allows companies to test AI systems while sharing insight into some risks and solutions as well, and we need to start having what I call cross-border collaboration mechanisms across different spectrums. Now basically, the entire idea of having open standards is because we want to have collaborations from different perspectives of technology and innovation, and so it’s good that the government, and not just the government, the civil society, as well as the academia, start encouraging what we call cross-border collaboration mechanisms. There will be a lot of international data sharing agreements for risk assessment, global standards for risk assessment, and also trying to standardize frameworks for sharing some of the threats, cyber threats, and intelligence that we have across the board. I’m also aware that you also work in Egypt, where you guys take a lot of cyber threat intelligence very seriously. However, we cannot remove the fact that there are different types of cyber actors. Bad actors, I would say, even when we look at the geopolitics of cyber threats, that are also interested in sabotaging some of the efforts of these open standards and cross-border collaboration just for their own benefit. So my encouragement basically at this point is that we should continue to have a lot of public-private partnerships. We should continue to have joint research initiatives between governments and private sectors to manage some of this innovation risk that we do have. More important is to also have a very good funding model that supports even private organizations that are into some of this risk assessment. The reason why I’m saying this is, trust me, people will definitely go out of funding, but when they are doing some important work that has to deal with, take for example, national security, global security, when it comes to some of these things, it’s always good that we have government supporting them. We also have cybersecurity framework knowledge bases that also try to support some of the things that they do as well. So I just want to stop here so that I don’t take so much of our time, and we can allow other speakers to also contribute some of their inputs on what their thoughts are about innovation and risk assessment. Thank you.
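
To make the standardization point concrete: below is a minimal sketch, in Python, of the kind of machine-readable indicator record that cross-border threat-intelligence sharing agreements aim to standardize. The ThreatIndicator structure and its field names are illustrative assumptions, loosely inspired by STIX-style indicators, not any specific standard's schema.

```python
# A minimal sketch of a standardized, shareable threat-intelligence record.
# Field names are illustrative assumptions, not a real standard's schema.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ThreatIndicator:
    indicator_type: str   # e.g. "ipv4", "domain", "file-hash"
    value: str            # the observable itself
    confidence: int       # 0-100, reporter's confidence in the indicator
    source_org: str       # organization sharing the indicator
    first_seen: str       # ISO 8601 timestamp

def to_shareable_json(indicator: ThreatIndicator) -> str:
    """Serialize an indicator so partner organizations can ingest it."""
    return json.dumps(asdict(indicator), indent=2)

if __name__ == "__main__":
    ioc = ThreatIndicator(
        indicator_type="ipv4",
        value="203.0.113.7",  # documentation-range address, not a real actor
        confidence=80,
        source_org="example-cert-ng",  # hypothetical national CERT
        first_seen=datetime.now(timezone.utc).isoformat(),
    )
    print(to_shareable_json(ioc))
```

The value of agreeing on even a simple schema like this is that every participating CERT or company can ingest each other's feeds automatically, which is the practical substance of the data-sharing agreements Caleb mentions.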


HADIA ELMINIAWI: Thank you Caleb, and I move to Noha, and I know that Noha also would like to speak about her thoughts about risks accompanying digital innovation. So Noha, the floor is yours.


NOHA ASHRAF ABDEL BAKY: Thank you, Hadia. So I believe the digital innovation race is way faster than building national strategies, than acquiring new digital skills, than drafting policies and good frameworks. So I believe that the primary risk is that we will have a bigger digital divide, a bigger digital gap between the privileged and less privileged technology users, because already half of the world population is still offline. So the more privileged people who are well-educated, who have


HADIA ELMINIAWI: Yes, awareness of course is crucial, but awareness is mainly crucial for people to understand. And we don’t want to get into the cases of the women who actually took their lives. But honestly, when I heard about those incidents, I thought if those ladies knew a little bit better, they would have never done so. And if the community also and the society were well aware that cyberbullying exists and that many stories are just made up and are not real, those women wouldn’t have cared much about what had been posted about them and made them take their lives. However, again, the question is, do we have a place to which, if someone is exposed to abuse or is bullied online, they can go and report it? Like how easy is it to report abuse? And what channels can they report that abuse through?


DR. MAHA ABDEL NASSER: Well, actually, if you’re talking about Egypt, because I don’t know about other countries, but in Egypt, there is a phone number you can call. Unfortunately, there isn’t a portal to report on. This is actually one of the suggestions I made in the parliament, but it didn’t go through. In each police station you can do so, and there is a hotline in the Women’s Council. They take the reports of violence too, against women specifically, but the Ministry of Interior has another hotline for reporting all the abuses over the Internet, for women, for children, for men, for anyone, because anyone can actually get hacked or blackmailed online. It doesn’t have to be a woman. Women are more vulnerable, but a lot of people actually have these issues. So there are ways to report, and I guess there are a lot of places now, or organizations from civil society, who are trying to spread the message, because as you said, if those women knew better, they wouldn’t have committed suicide, and this is extremely sad. We have this burden on our shoulders that we didn’t let them know, but we have to work all together to spread this information and to do the awareness in every country, not just in Egypt, because it’s happening everywhere.


HADIA ELMINIAWI: Thank you.


NOHA ASHRAF ABDEL BAKY: We can also report to the social media platforms themselves, so they can suspend the account or take it down, in parallel with the government reporting channels. So Hadia, now I have some questions for you. What are the primary risks and challenges associated with the quality, bias, and security of AI training data? And second question is, how do these factors impact the ethical deployment and effectiveness of AI systems?


HADIA ELMINIAWI: Thank you, Noha. So again, privacy and data protection are among the key risks accompanying digital innovation. Services and applications using IoT and AI depend mainly on collecting and processing huge amounts of personal data, which raises privacy concerns; in addition, of course, failure to comply with data protection regulations could result in large legal penalties. So it cuts both ways, you know, but let me speak specifically about the risks associated with the quality of data. So bias in AI applications and systems happens when outcomes of AI systems favor or disadvantage a certain group, or favor certain outcomes, or favor certain individuals. This bias can result in unfair decisions and treatment, depending on the field in which the AI system is deployed. So examples could include random security screenings, and a lot of us, you know, face this at airports where, you know, a specific ethnic group is always selected for this random security screening. It affects employment opportunities, even job search results, and leads to unequal treatment in legal or medical systems. And this is all because of the data that is used in training the AI systems. So many AI systems, let’s not say all, but many AI systems use historical data that reflects past human decisions and behaviors. So if the data contains some kind of prejudice or biases, or is not diversified, the AI will inherit and replicate those biases in the decision-making process. And the other thing also that comes to my mind here is, after the decision is made, how do you know what this decision was based on? And what data was used for that? So it’s also about transparency and accountability, right? And this human prejudice can be intentional or unintentional. It doesn’t really matter whether it’s intentional or unintentional, but there should be a way in which we do not have this flawed training data, and there should be some kind of transparency and accountability also associated with this. So addressing those issues, of course, is crucial for fairness, accountability, and transparency in AI applications. If we talk about security risks associated with AI: AI usually uses vast data sets that may contain sensitive personal information, so improper handling of this data can lead to harmful consequences. Also, from a security point of view, harmful or misleading data could be injected into the training process, corrupting the AI models and their performance and causing unintended behaviors. So those are all risks associated with the bias and quality of data. Noha, the floor is yours.
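
The mechanism Hadia describes, where skewed historical data propagates into AI decisions, can be illustrated with a toy example. In the Python sketch below, the history data, the group labels, and the demographic-parity check are all invented for demonstration; a real fairness audit is far more involved.

```python
# A toy illustration of inherited bias: a "model" that simply learns
# historical selection rates reproduces the prejudice baked into them.
from collections import defaultdict

# Historical screening decisions: (group, was_selected). Group B was
# disproportionately selected for "random" screening in the past.
history = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 60 + [("B", 1)] * 40

def selection_rates(records):
    """Per-group selection rate learned from historical data."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, label in records:
        totals[group] += 1
        selected[group] += label
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates(history)
# A naive model replaying these rates inherits the historical prejudice.
parity_gap = abs(rates["A"] - rates["B"])
print(f"selection rates: {rates}")                  # {'A': 0.1, 'B': 0.4}
print(f"demographic parity gap: {parity_gap:.2f}")  # 0.30 -> flag for review
```

Checks like this parity gap are one concrete form the transparency and accountability Hadia calls for can take: they do not fix biased data, but they make the bias measurable before a system is deployed.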


NOHA ASHRAF ABDEL BAKY: Thank you, Hadia. I would like to ask our audience if they have any questions for any of the speakers. I don’t see any questions from the online audience. Okay, I guess we can move on and leave the Q&A section to the end of the session.


HADIA ELMINIAWI: Okay, thank you. I wanted to ask Dr. Maha: so Dr. Maha was speaking about reporting cyberbullying, and I was wondering if there is a single platform at a national level that people could report cyberbullying through; I know that some countries have such platforms, or a single platform, to report cyberbullying. That’s one of the questions, and the other goes to all the speakers, and the same question will go also to Caleb. The other question is related to international frameworks through which people can report online incidents. And maybe if we take, for example, DNS abuse, you know, and the online security of users, do we have sites through which we could report this? And of course, we can report directly to the services, as Noha mentioned. So I will give the floor to Dr. Maha, and then Caleb, and then Noha to discuss this. Thank you.


DR. MAHA ABDEL NASSER: Thank you, Hadia. As I said earlier, there is no single platform. As I said, I suggested having something like that, and the reporting should be online, as these are all online crimes, but it didn’t go through. So unfortunately in Egypt we don’t have this single platform to report, and the reporting is a process, as I explained earlier. And internationally, I think there is nothing except what Noha said, other than reporting to Facebook specifically, or Instagram specifically, mainly to the specific, not platforms, but the applications themselves. You can just report what happened to you, or report the account, or report a specific person, or something like that. But I’m not aware of anything else, unfortunately.


CALEB OLUMUYIWA OGUNDELE: Thank you very much. So one of the ways I’m thinking about this is, first of all, every human right that is physical is also the same thing online, which means that if I harass you, just in context, not that I’m harassing anyone, take for example, if Mr. A harasses Mrs. B when they are not online, the same rule should apply when they are online. Therefore, if you look at it, on what platforms, or how, should the reporting be done? The first instance is, does the person being harassed know their rights? Do they know that they are being harassed? Do they know how they can be protected if they report some of these issues? So one of the issues that I have with online reporting is the anonymity, is the fact that most people who report online are not even sure that if they report online, they will get the necessary protection that they deserve. Some of them also do not know that if they report online, actions will be taken, right? So that fear is also there, despite the fact that some of them might even have the information, they have the training, the digital literacy, like Noha and Dr. Maha mentioned, right? Now, the question leads us to how do we start all of this reporting and all of that? I will give you an example of some of the context in my own country, Nigeria, and some of the abuses that have happened with regulatory frameworks, even though there are regulatory frameworks that protect women, that protect the vulnerable, or those who are exploited online. We realize that the political class is beginning to exploit the Cybercrimes Act, which encompasses some of the laws and acts that need to protect those vulnerable people. Take for example, someone who feels that he has so much power: instead of saying that they want to protect the vulnerable, they will rather tell the vulnerable person, who is complaining and not really harassing them online, that, you are harassing me online, and then they use the powers of the police to get that person arrested, thrown in jail, and all of that. So we’ve seen a lot of abuses, even by some of our political class, and I feel that these are issues that we need to bring to the forefront. These are conversations that we should not stop talking about, even in this political sense, that the vulnerable need to be protected by the law, not the political class only. And then we find out that even our police, in a way, respectfully to our law enforcement agencies, are trying to choose who they prosecute when some of those issues are reported. So do we have justice for those who are vulnerable? Do we have justice for those women that we are talking about? So these are questions that I would probably want us to go back and have a second thought about, as I see that we have someone who has raised his or her hand to ask a question. So I would just hand the floor over to you, online moderators.


HADIA ELMINIAWI: And Noha, over to you, Noha.


NOHA ASHRAF ABDEL BAKY: Yeah, thank you, Hadia. I guess there is, as mentioned by Dr. Maha and Caleb, a lack of trust in the reporting mechanisms, and also a culture barrier: victims can sometimes feel ashamed of reporting the incidents or the attacks, and instead they go silent about it. They are ashamed before their communities or of how people talk about them or whatever, but we need to break this barrier, this culture barrier, and stop blaming the victims. I guess we saw many incidents in Egypt, as Dr. Maha highlighted, where young women and even teenagers took their own lives. So we need to stand in solidarity with all victims of online attacks and stop attacking them again online, because sometimes when you report an incident, people will start to comment with hate speech or share their negative comments. So sometimes victims even take back their reports to avoid all of this hassle. So yeah, we need to look at it from a 360 view: putting good legislation in place, trusting the process, awareness, civil society to lead the awareness part, and internet users to be more responsible when using the technology, because technology is here to help us. So we need to use the good part of it and report the bad part of it. And that’s it for me. Thank you. Back to you, Hadia.


HADIA ELMINIAWI: Thank you. And I see Mariam has her hand up. So Mariam, do you want to take the floor?


MARIAM FAYEZ: Yes. Hello, Hadia. Hello, everyone. I like how the conversation is really going, because it boils down to human rights and the need for each and every person to feel comfortable, safe, and empowered online and offline. I second what Noha and MP Maha have been saying, and I really think civil society should be moving this concern forward. In Egypt, we have multiple successful initiatives, whether initiated for the betterment or for the safety of women or vulnerable women, to address their issues, or just for different groups and different rights. We have many, from women's harassment, for example, to even first responders in terms of crisis, like the e-SIM card and all the e-SIM activity. We have very, very successful initiatives, and all those initiatives have attracted the politics or the government. They looked closely at those initiatives and they let them grow, or those initiatives had the opportunity to grow because they had the people’s support and the people’s momentum. Women in Egypt, for example, lacked the opportunity to feel safe, to feel safe in reporting and to speak up when any sort of abuse happens, and the trust was not there. But the trust started building when those campaigns and those initiatives took momentum, and they did not take momentum only on the ground. Social media was a very strong tool; WhatsApp, for example, and chatting tools were very strong in supporting such mechanisms. So when it starts in the grassroots organizations or with civil society, it will then move forward; I think this is a good way to start the momentum. So civil society, I think, comes first at this stage. Thank you.


HADIA ELMINIAWI: Thank you so much, Mariam. Indeed, civil society has a big role in leading the way when it comes to online awareness processes. I would like to turn to Dr. Maha now and ask her: first, are there practical strategies and frameworks for effectively managing and mitigating the risks that we have been talking about? Is it doable to have practical strategies and frameworks for that? And do we have such frameworks to mitigate online risks, not necessarily rules or regulations?


DR. MAHA ABDEL NASSER: Well, actually, if we’re talking about frameworks, I’m not the person who should be answering this; this should be the government or the executive body. We have a lot of strategies in Egypt. We have a strategy for cybersecurity, we have a strategy for digital innovation, we have a strategy for AI. I still don’t see the real implementation of those strategies. Strategy is a very nice word; you can write very nice things, but when it comes to implementation, a lot needs to actually be seen on the ground. We are still far behind in a lot of things, especially if you are talking about cybersecurity. I know that the government is taking it very seriously, but still, we don’t have the on-ground activities that are needed to deal with cybersecurity. For AI, we are definitely, definitely far behind. There are no incentives for SMEs or startups or any innovators who could work in AI, which is extremely important and needed. You can find fragmented initiatives and people working on their own, but there is no structured work concerning these things. And I think it’s extremely needed now. This is my point of view. Thank you.


HADIA ELMINIAWI: Thank you, Dr. Maha. And you mentioned AI, so I will go back to AI. And indeed, artificial intelligence has the power to transform businesses and is important for governments to be more effective and perform more efficiently. And that applies not only to governments, but to all forms of businesses. And that gets me back to the question of frameworks. And maybe what’s required is for organizations and entities to establish clear principles for their responsible use of AI. So any organization or entity that’s using AI will need to define guiding principles for using AI and commit and adhere to those principles defined. So those could be principles related to accuracy, accountability, fairness, safety, and ethical responsibilities that would be established and published by organizations or entities using artificial intelligence. And again, we go back to, as you started, Dr. Maha, by saying that technology is now part of our life. And we have this dependency that’s not going away. It will only increase. And humans and machines have always been working together. And moving forward with AI, this is also what’s going to happen, or is supposed maybe to happen. Since very early human history, I would say, people were using carts, and then machines in agriculture, and this keeps on moving, and then computers, and then mobiles. So it has always been humans and machines. But it’s again, how do we do that? And I... Hadia, we have one intervention from the on-site audience and another one from the online audience. So yeah, please. Go ahead. So thank you so much. It’s a very insightful discussion.


AUDIENCE: Thank you, Dr. Maha. Thank you, Noha, and particularly Hadia. Actually, I’m coming from a technical community. I’m a security researcher, so I actually know the other side of the problem, the ones who are creating these problems, and I can give some insights on that. I’m running an organization, Secure Purple, and our focus is actually on end users’ safety, particularly women and kids. We have been very active in that. We arrange workshops, trainings, and awareness sessions in different regions of Pakistan; I’m from Pakistan, basically. In our trainings and workshops, what we used to do is train women and kids, because they’re the most vulnerable part of the internet, particularly on image-based abuse, on cyberbullying and things like that. So what we used to do was train them how to stay protected from these kinds of threats. And one of the recommendations we used to make was to never share questionable or maybe indecent pictures of yourself, maybe if you’re in an online relationship, or normally, or anywhere over the internet, because that was the main cause of, or gave rise to, image-based abuse. But now I’m actually in doubt about that recommendation, because with AI, you can now create any sort of content with just a single image. So now I’m thinking, what’s the next step, you know? And I’ll give you statistics, actually. There is an organization, Sensity AI, and they have been tracking deepfakes since 2015, and they have given the statistic that 95% of deepfakes are actually non-consensual porn. So imagine: a huge technology coming up, and 95% of its consumption is actually non-consensual porn. I mean, what would be the amount of image-based abuse? What would be the morality and the social structure of the society if there is so much questionable content being produced daily just using AI? I mean, it’s a lot of discussion. I can’t quite add to every insight the speakers have shared, but due to time limitation, I would just say, you know, we just need to identify every single stakeholder of the internet, and we just need to reach out to them. For example, on reporting: even if I report, the reputation damage it caused to me, the virality the video gets, I mean, the damage has been done, you know? I know, yeah, accountability is necessary, but still, for a moment, the reputation is gone, the damage has been done; they might not be able to get a job. You know, there have been cases we dealt with where people get divorced just because of a single image becoming public. That might be an indecent picture, but still, you know, the impact is too much. Legislations or rules, I mean, coming from a technical side, I can get away with this stuff. You know, the anonymity the internet gives me: I can create a fake Facebook account with a fake email, with a fake phone number. How are you going to trace me? So, I mean, there’s a lot more, you know, still to consider in the internet space. And even from the technical end, we are still confused, I mean, about how we deal with it. And it may take time to evolve. So, yeah. Thank you.


NOHA ASHRAF ABDEL BAKY: Thank you. Thank you for the very realistic and on-ground intervention. Hadia, I guess we have Toray, Moussa Toray, with a raised hand, as well as Caleb. And we need to conclude; people are waiting outside, by the way. We have till 12:45, so


CALEB OLUMUYIWA OGUNDELE: Let me just quickly jump in because of time. Okay. So, back to the last speaker. One of the few things that I think I observed is he asked a question about the conscious efforts of even the technology organizations, such as those that own AI infrastructures, right? What are the conscious efforts that they are putting in to making sure that there are no abuses of some of these AI-generated items that come online and could become viral? Now, one of the things I know that Meta does is that Meta allows you to flag AI-generated content. And because you’re able to flag that, it kind of, in a way, reduces the virality. But one of the things that I am not sure of is whether other social media platforms are beginning to follow or toe the line and have some form of governance board, accountability board, that also helps review some of these things and some of the accounts that they have. At least I’m aware that Facebook is making conscious efforts on that. I’m not sure about X. I’m not sure about BlueSky. I’m not sure about the other ones. But it would be a very interesting thing to see that they are taking conscious efforts to make sure that they are able to flag AI-generated content such that that AI-generated content does not become viral. And one of the things that I would also like to see is that for AI-generated content, there should be meta tags embedded within the images that are generated, within those contents, to indicate that these are AI content. And there should be a global standard that allows for that kind of meta tagging of AI-generated content, from any platform. Yeah. Thanks. That’s my quick intervention to the last speaker. Thanks. Thank you, Caleb. We had a raised hand from Moussa.
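
As an illustration of the meta-tagging idea Caleb describes, the sketch below embeds and reads an "AI-generated" flag in a PNG's text chunks using Pillow. The ai_generated and generator field names are illustrative assumptions, not an existing standard; real provenance schemes such as C2PA's Content Credentials add cryptographic signing, whereas a plain text chunk can be stripped or forged trivially.

```python
# A minimal sketch of embedding a machine-readable "AI-generated" label
# in an image. Field names are hypothetical; this is not a real standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Write a copy of the image carrying an 'ai_generated' metadata flag."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)  # illustrative field name
    with Image.open(src_path) as im:
        im.save(dst_path, pnginfo=meta)  # PNG text chunks travel with the file

def is_flagged_ai(path: str) -> bool:
    """Platform-side check a feed could run before ranking content."""
    with Image.open(path) as im:
        # .text holds a PNG's text chunks as a dict of strings
        return im.text.get("ai_generated") == "true"
```

A platform ingest pipeline could call is_flagged_ai before ranking an upload and label or down-rank flagged content, which is the reduced-virality effect Caleb attributes to Meta's flagging; the limitation, as noted above, is that an unsigned tag is easy to remove.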


HADIA ELMINIAWI: OK. Thank you, Moussa. Hadia, are you still there? OK. You’re muted. Yeah, sorry. Yeah, I was disconnected for a few moments, but I’m back again. And I wanted to ask Dr. Maha, after hearing what Caleb said: how could we require tech platforms to take responsibility for abusive content? Is this possible? And how could we actually put it into action?


DR. MAHA ABDEL NASSER: Well, yes. Definitely, yes. We have to hold them responsible for abusive content. And actually, they can do that. They have the resources. They have the tools. And we’ve been seeing what they have been doing, for instance, during what was happening in Gaza and the conflict. They could have taken down all the content that they thought was not right from their point of view; they were biased, they were not neutral. So they can do whatever they want, so they should be responsible for taking down any abusive content, hate speech, all these kinds of things. And again, I didn’t answer your question about the framework for AI. It’s an ongoing debate between having an AI framework or legislation. I think we all know that the EU already has, or had, an act, and it’s not working; most of the countries from the European Union are not working with this act, and I guess mainly France tried to work with the act and it didn’t work out. So we’ve been thinking that a framework could be more realistic, taking into consideration the huge activity and speed in AI, which is moving and changing almost every day, so a legislation could be a little bit not good for this. But still, we have actually proposed legislation in the parliament for AI, and we are waiting to see what will happen with that. But still, either legislation or framework, we have to have something on the ethics, what should be done, the responsibility. And it’s extremely, extremely difficult: if something happened with a self-driving car, with the software, and it killed a person, who is responsible? And I’m a policymaker or legislator; I cannot say who could be responsible. Am I going to put the man who made the program in prison, or the car? You can do nothing about that. You will just try to take as many precautions as possible, but it will never, ever be enough. And we will always have these kinds of things. And I’m talking about the self-driving car because it’s already there. It’s happening. And we are hearing about the avatars, so you can have your avatar go and do a murder somewhere and no one can go back on you, and the VR, and a lot, a lot. Actually, thank you very much for what you said, because it’s almost impossible. I think, well, for the ones of you who saw Black Mirror, I think we are in the Black Mirror era, and we don’t know what will happen after that. Thank you, Hadia.


HADIA ELMINIAWI: Thank you so much. And if no one from the ground wants to make an intervention now, or Noha, if you want to make one. Yeah, we have a raised hand here. Okay. Can you hear me? We hear you.


LISA VERMEER: Thank you so much. My name is Lisa Vermeer. I’m from the Netherlands. I work as a policymaker in Holland. Thank you so much for this interesting discussion. I wanted to share one thing with you and also ask a question related to the innovation part of this session. The first thing I wanted to share is that I work on the European AI Act in the Netherlands. It’s quite a new law, and at the moment all the member states are working to really transform it, transition it, into their own legislative systems. So this is quite a challenge, but I’m still hopeful that it will work. And maybe related to the discussion about abusive content and about deepfakes and the problems that arise from that, there are provisions in the European AI Act that try to address this issue. And personally, I find it very exciting to see whether it’s going to work or not, because there are provisions that state that it has to be made clear by the developer of, for example, deepfakes, that the content is AI generated and manipulated. It also needs to be machine readable, and for users of the internet it needs to be recognizable that it’s generated or manipulated content. So maybe that can help. But on the other hand, of course, there are lots of actors who make this content whose intention is to do harm, so in that sense the law is always limited in to what extent it will help. But at least it gives some kind of power to legislators in the EU to enforce and address when things are going wrong. And I wanted to ask, maybe to you, Dr. Maha, and to the speaker online, Caleb: to what extent do you think the risks that you mentioned in your presentations hamper the entrepreneurial spirit of people, of SMEs, for example, in your countries? I’m very eager to learn whether, for example, female entrepreneurs are really stopped by the abusive practices they experience online, or whether, for example in Nigeria, SMEs are still going on and not being limited too much by this abuse of the law, for example. So thank you in advance for shedding some light on the innovation ecosystems in your countries. Thanks so much.


HADIA ELMINIAWI: Dr. Maha and then Caleb. But shall we take the other question first? I think we will take all the questions together. Yeah, thank you.


AUDIENCE: Thank you, Caleb, and thank you for the excellent interventions; I really enjoyed them very much. My name is Amr Hashem. I am a member of Internet Masr, and it was quite enlightening listening to all those interventions. But let me share with you, because we have always been thinking of the risks, a story that happened maybe five centuries ago, and it has cost this part of the world that we are living in, the Arab world, a lot over those 500 years, because they were afraid to adopt an innovation that was coming up at that point in time, which was printing. They were afraid that through printers and through printing, the Quran, which is the holy book for Muslims worldwide, could be forged and could not be as accurate as the handwritten Quran. So actually the Sultan back then, Bayezid II, decided that he was forbidding printing to be deployed throughout the Islamic world that was controlled by the Ottoman Empire, because they were afraid of the challenges and the risks associated with printing. So if we are reflecting today on what is happening with technology and innovation, let us please not consider the risks as much as we are considering the opportunities that AI could be opening to us. And if we are afraid of deepfakes or something that has happened, instead of thinking about forbidding it, because at the end of the day we cannot forbid it, the people who want to do deepfaking or something like that will have access to the resources and will do it anyway, let us think about counter-technologies that would help us make this a reality and actually get the good part without the bad part. Thank you, and looking forward to hearing your comments on that.


HADIA ELMINIAWI: Thank you, Amr, for the positive note. Thank you. I also wanted to say something about what Amr just said; yeah, this is Hadia. So what happened really is that the Quran spread faster with printing, and more people could read it. And also now, with the internet and having everything online, you can just go on your mobile and read it. So it turned out that it was even better for the spreading of the holy book. And just before that: I know you had a question with regard to innovation, and I had one also in regard to innovation, so I will pose the question so that you can answer both together. And the question is related to the establishment of collaborative platforms: how can policymakers facilitate the establishment of collaborative platforms for the exchange of insights among peers and experts in the field of digital innovation and risk management, of course? So I will stop here. And Noha, do you want to give the floor to Dr. Maha and then maybe Caleb? Yeah, sure, Dr. Maha.


DR. MAHA ABDEL NASSER: OK, I will start by answering your question. Well, actually, the female entrepreneurs, no, they are still working. And the innovators, the female innovators, I don’t think that they can be stopped easily by any cyberbullying or cyber attacks or things like that, because they are entrepreneurs and they are in the innovation business. And there are measures, actually, and there are things that they can use or get to help them. And as Mariam said in her intervention, there are still problems, but there are a lot of initiatives, actually, which are somehow helping, especially women in this area. Commenting on what you said, Amr: nothing can be stopped anywhere. We are not talking about stopping anything. If we are a bit worried or afraid or seeing the risks, we definitely also see the opportunities and what AI can be helpful in. It is just that we need to be careful, we need to see the challenges and to address the challenges and, as you said, find counter-technologies to work with that. But it’s moving extremely fast, and I think this is what is frightening; it’s not just worrying, even for us who are working in the technology field and in the digital field. It’s moving extremely fast; that’s all I can say. As for your question, Hadia: it is not, again, the policymakers who can do or work with collaborative platforms; it’s the executive bodies. And there is nothing that can prevent this from happening; it is just the problem of collaboration, anyway, any kind of collaboration. And unfortunately, we are talking about digital collaboration, but everything goes back to politics, and as one who is working in politics, I can tell you that it’s not easy, because to get the data flowing in Africa, there has to be a political will to do this, to have a joint platform to work with between countries. It has to have political will, and it is a political decision. It’s not a digital decision, and it’s not from the policymakers or from the legislators; it’s from the executive bodies and the governments, I guess. And we can give them our ideas, our thoughts, and try to push them to do so, because I think we need to do this. Especially in Africa, in the Middle East, in the Arab world, we need to collaborate together, we need to cooperate together, and to work together to get things done, as I think that no country can do it on its own. It’s now beyond the countries. We need really to work together. Thank you.


HADIA ELMINIAWI: Thank you, Dr. Maha and Noha. I see a hand from the floor. I just had a follow-up for Dr. Maha, but maybe I can just pose the question and move to Noha, and then Dr. Maha can answer later. And the question is: how do those executive bodies work together in order to exchange insights in the field of digital innovation and risk management? How do they coordinate? And I will pause here, Noha, and give you the floor to manage the queue.


NOHA ASHRAF ABDEL BAKY: Thank you, Hadia. We have a raised hand from our online audience. Moussa, do you want to take the floor? Or maybe share your questions in the chat? Will you raise your voice a bit, please? Yeah, we can hear you. Please go ahead.


AUDIENCE: My name is Moussa from Nigeria. I’m a student here in Malaysia, in Al-Baghdadi National University. I would like to make a contribution regarding what the former speaker has said about collaboration between the bodies. I think that’s the strongest way to do it. Sorry, Moussa, the voice quality is not good. Can you try to raise your voice, please, or post your questions in the chat? Are you listening now? Yeah, yeah, okay. Okay, please proceed. Yeah, I want to make a suggestion and contribution to what the former speaker has said about the collaboration between the bodies. I think that is the perfect way to change life and to handle our own security in Africa. Because back there in my country, there is an incident that’s just going on: whenever you post something bad about a government personnel, like a politician in the country, it is very easy for the government to press you and get you and deal with you regarding the issue you caused. But for the institutions, I don’t know what kind of technology they are using to press the person that posts on social media against a politician, and the kind of technology they use to press the bandits and the other people that post on social media. So I think there must be a collaboration between the government and the other bodies, that we should come together and talk to each other so that we get to do our own policy and security. That’s what I wanted to say.


NOHA ASHRAF ABDEL BAKY: So Moussa was emphasizing the importance of collaboration between the government and other bodies to facilitate the reporting mechanisms or finding the attackers. Yeah, Maha, you have a comment?


DR. MAHA ABDEL NASSER: No, no, I guess he was talking about how anyone who can post anything against the government or against someone from the politicians can be easily caught. And trust me, Moussa, it’s not just Nigeria; it’s in a lot of places, yeah. It’s, I guess, yes, all across Africa and some other countries too. We don’t want to specify. And I don’t think that this is what we need the governments to be collaborating on. We need the governments to be collaborating in trying to make the internet a safer place. And for your question, Hadia: I don’t know how they can do that. It’s their job to cooperate. They can easily, in a place like this, in the IGF, in other forums, the officials from different countries can sit together and agree on a way of cooperation. They can agree on having a unified platform for reporting the abuse, for instance, or doing things, helping each other in approaching a safer place on the internet for all vulnerable groups. So it’s doable. It is just, as I mentioned before, that it needs a political will. It needs them to really want that and to feel the importance of this cooperation and collaboration, which I don’t think is happening right now across Africa so far. Thank you.


NOHA ASHRAF ABDEL BAKY: Thank you, Dr. Maha. Caleb, I saw your hand raised.


CALEB OLUMUYIWA OGUNDELE: Yeah, I just wanted to respond to the lady who asked me a direct question about how we encourage women-owned SMEs when it comes to innovation and risk management. First of all, with respect to my own male gender, I would say that women are the best money managers, and we need to give them their flowers for that. Why am I saying that? Women are the ones who power the informal economy, even globally. Trust me, women are always the ones at the marketplace, which also makes them more exposed and vulnerable. So, coming back to how we can encourage women-owned SMEs around innovation and risk: the first thing I see is that the system itself is very biased against them when it comes to funding and supporting women so that they can innovate, expand their businesses, and make them more scalable. They are even more exposed to risk than their male counterparts, and they don't have the same access to funding, which limits their ability to scale and innovate. I haven't seen many female innovators in AI; I've mostly seen men. Why is the percentage of women lower than that of men? So, in my own view, responding to the question that was put to me earlier, I would like to see more women supported with networking, funding, mentorship, risk management, and capacity training. And governments should be deliberate about inclusive funding and procurement strategies to support women and persons with disabilities, reserving at least a certain percentage for them. I am aware this is being done in Kenya, where at least about 30% of government procurement is reserved for women, youth, persons with disabilities, and a couple of other categories. But I feel more can be done. That's just my small intervention on the question from the lady in red; I couldn't catch her name when she asked it. My apologies for that.


NOHA ASHRAF ABDEL BAKY: Thank you, Caleb, for responding and for being a good ally for women's empowerment. We need to wrap up, but there is one raised hand here from the on-site audience. Razan, please, in less than one minute.


AUDIENCE: I'm Razan Zakaria from Egypt. I'm a VIAG ambassador and a content creator. You talked about collaboration with the government, but I think the most important collaboration is with the platform owners, like Elon Musk and Mark Zuckerberg, because they already own the social media platforms that, you could say, control our minds and sometimes use algorithms to shape our points of view. As a content creator, when I want to share my point of view, especially on political topics, I find the algorithm blocking me and classifying my posts as hate speech or something like that, especially anything related to the war in Gaza. We have seen this over the last year, and we already use some tricks, like watermelon symbols or dots between the words, but we do not have the freedom to share our point of view. These are foreign platforms, their owners have their own political leanings, and they control how we share our views. So this is my issue, actually.


NOHA ASHRAF ABDEL BAKY: Thank you, Razan. So yes, everyone has a hidden agenda, and it's good that we're trying to trick the algorithms. Dr. Maha, do you have any comments? And please add your closing remarks, because we need to wrap up.


DR. MAHA ABDEL NASSER: Okay, thank you, Razan. As I said earlier, there is little we can do about it, because they own these platforms; whatever we tell them, they still own them. Actually, we have talked to them directly, and they claim they don't do that, but we know very well that they do, and there is nothing we can do about it except tricking them, as you said, and they couldn't do anything about that, so we have managed somehow. As a closing remark, I will end with the positive note from Amr: despite all the threats, risks, and challenges we have been talking about, we should look at the opportunities and ask what the world would be without technology. It would be a completely different world, and I don't think we can do anything without technology anymore. So that is the end of it: we have to live with it, even if we have to sacrifice some of our resources.



DR. MAHA ABDEL NASSER

Speech speed

113 words per minute

Speech length

2420 words

Speech time

1278 seconds

Cybersecurity threats and data privacy concerns

Explanation

Dr. Maha identifies cybersecurity and data privacy as primary risks in digital innovation. She emphasizes that these are the most important aspects that people worry about in the digital era.


Evidence

Reference to widespread concerns about data visibility and inability to control personal data online.


Major Discussion Point

Risks and Challenges of Digital Innovation


Agreed with

CALEB OLUMUYIWA OGUNDELE


Agreed on

Cybersecurity and data privacy are major risks in digital innovation


Differed with

NOHA ASHRAF ABDEL BAKY


Differed on

Focus on primary risks in digital innovation


Technological dependency and potential for system failures

Explanation

Dr. Maha highlights the risk of over-reliance on digital transformation. She points out that this dependency can lead to significant disruptions if systems fail.


Evidence

Example of airport shutdowns due to small bugs, causing widespread travel delays.


Major Discussion Point

Risks and Challenges of Digital Innovation


Developing alternative solutions and backups for technology failures

Explanation

Dr. Maha suggests that organizations need to find alternative solutions and backups for when technology fails. She emphasizes the importance of having traditional methods as a fallback option.


Evidence

Suggestion that corporations should have ways to mitigate technology failures and not have everything stop when systems fail.


Major Discussion Point

Strategies for Mitigating Online Risks


Differed with

CALEB OLUMUYIWA OGUNDELE


Differed on

Approach to mitigating technological dependency risks


Improving reporting mechanisms for online abuse

Explanation

Dr. Maha discusses the need for better reporting mechanisms for online abuse. She mentions existing hotlines and reporting options but acknowledges the lack of a centralized online portal for reporting.


Evidence

Reference to hotlines in Egypt for reporting online abuse and violence against women.


Major Discussion Point

Challenges in Addressing Online Abuse


Agreed with

NOHA ASHRAF ABDEL BAKY


Agreed on

Need for improved reporting mechanisms for online abuse


Challenges in holding tech platforms accountable for abusive content

Explanation

Dr. Maha argues that tech platforms should be held responsible for abusive content. She states that these platforms have the resources and tools to address such issues.


Evidence

Reference to platforms’ ability to take down content during conflicts, showing their capability to control content.


Major Discussion Point

Challenges in Addressing Online Abuse



CALEB OLUMUYIWA OGUNDELE

Speech speed

142 words per minute

Speech length

1849 words

Speech time

780 seconds

Need for government-led initiatives and regulatory sandboxes

Explanation

Caleb emphasizes the importance of government-led initiatives in managing innovation risks. He suggests the use of regulatory sandboxes to safely test new technologies like AI.


Evidence

Example of Singapore’s testing framework for AI systems.


Major Discussion Point

Strategies for Mitigating Online Risks


Agreed with

DR. MAHA ABDEL NASSER


Agreed on

Cybersecurity and data privacy are major risks in digital innovation


Differed with

DR. MAHA ABDEL NASSER


Differed on

Approach to mitigating technological dependency risks


Importance of cross-border collaboration and data sharing

Explanation

Caleb stresses the need for international collaboration in addressing digital risks. He advocates for data sharing agreements and global standards for risk assessment.


Evidence

Suggestion for international data sharing agreements and global standards for cyber threat intelligence.


Major Discussion Point

Strategies for Mitigating Online Risks


Abuse of regulatory frameworks by those in power

Explanation

Caleb points out that regulatory frameworks meant to protect vulnerable people online can be exploited by those in power. He highlights how political classes may misuse these laws to silence opposition.


Evidence

Example of the Cyber Crime Act in Nigeria being used to arrest people criticizing politicians.


Major Discussion Point

Risks and Challenges of Digital Innovation


Need for inclusive funding strategies to support women entrepreneurs

Explanation

Caleb argues for more support for women-owned SMEs in innovation and risk management. He emphasizes the need for inclusive funding strategies and capacity building for women entrepreneurs.


Evidence

Reference to Kenya’s policy of reserving 30% of government procurement for women, youth, persons with disabilities, and other designated groups.


Major Discussion Point

Balancing Innovation and Risk Management



NOHA ASHRAF ABDEL BAKY

Speech speed

106 words per minute

Speech length

641 words

Speech time

361 seconds

Digital divide between privileged and less privileged users

Explanation

Noha highlights the risk of a growing digital divide between privileged and less privileged technology users. She points out that half of the world’s population is still offline.


Evidence

Reference to half of the world’s population being offline.


Major Discussion Point

Risks and Challenges of Digital Innovation


Differed with

DR. MAHA ABDEL NASSER


Differed on

Focus on primary risks in digital innovation


Rapid pace of innovation outpacing policy and skills development

Explanation

Noha argues that the speed of digital innovation is outpacing the development of national strategies, digital skills, and policy frameworks. This creates challenges in managing the risks associated with new technologies.


Major Discussion Point

Risks and Challenges of Digital Innovation


Raising awareness and digital literacy among vulnerable groups

Explanation

Noha emphasizes the importance of raising awareness and improving digital literacy among vulnerable groups. She suggests that this could help prevent incidents of online abuse and cyberbullying.


Evidence

Reference to incidents where young women and teenagers took their lives due to online abuse.


Major Discussion Point

Strategies for Mitigating Online Risks


Cultural barriers preventing victims from reporting incidents

Explanation

Noha points out that cultural barriers often prevent victims from reporting online abuse incidents. She mentions that victims sometimes feel ashamed and prefer to remain silent about their experiences.


Evidence

Reference to victims retracting their reports to avoid public scrutiny and negative comments.


Major Discussion Point

Challenges in Addressing Online Abuse


Agreed with

DR. MAHA ABDEL NASSER


Agreed on

Need for improved reporting mechanisms for online abuse



HADIA ELMINIAWI

Speech speed

112 words per minute

Speech length

2298 words

Speech time

1224 seconds

Bias in AI systems and lack of transparency

Explanation

Hadia discusses the risk of bias in AI systems and the lack of transparency in their decision-making processes. She explains how historical data used in AI training can perpetuate existing prejudices.


Evidence

Examples of biased outcomes in security screenings, employment opportunities, and legal or medical systems.


Major Discussion Point

Risks and Challenges of Digital Innovation


Establishing clear principles for responsible AI use by organizations

Explanation

Hadia suggests that organizations using AI need to establish clear principles for its responsible use. She emphasizes the importance of defining guiding principles related to accuracy, accountability, fairness, and safety.


Major Discussion Point

Strategies for Mitigating Online Risks



AUDIENCE

Speech speed

133 words per minute

Speech length

1346 words

Speech time

604 seconds

Difficulty in tracing anonymous online actors

Explanation

An audience member points out the challenge of tracing anonymous online actors who engage in harmful activities. They highlight how easy it is to create fake accounts and bypass existing regulations.


Evidence

Example of creating fake social media accounts with fake email addresses and phone numbers.


Major Discussion Point

Challenges in Addressing Online Abuse


Importance of not hindering innovation due to fear of risks

Explanation

An audience member argues for the importance of not letting fear of risks hinder innovation. They suggest focusing on the opportunities that new technologies bring rather than solely on the challenges.


Evidence

Historical example of the Islamic world’s reluctance to adopt printing technology due to fears of inaccuracy in reproducing religious texts.


Major Discussion Point

Balancing Innovation and Risk Management


Bias in social media algorithms affecting freedom of expression

Explanation

An audience member raises concerns about bias in social media algorithms affecting freedom of expression. They point out how certain viewpoints, especially on political topics, are suppressed or labeled as hate speech.


Evidence

Personal experience as a content creator facing difficulties in sharing political views, especially related to the war in Gaza.


Major Discussion Point

Balancing Innovation and Risk Management



MARIAM FAYEZ

Speech speed

106 words per minute

Speech length

289 words

Speech time

162 seconds

Need for better collaboration between government and civil society

Explanation

Mariam emphasizes the importance of collaboration between government and civil society in addressing online risks. She suggests that successful civil society initiatives can attract government attention and support.


Evidence

Examples of successful initiatives in Egypt addressing women’s harassment and crisis response.


Major Discussion Point

Challenges in Addressing Online Abuse



LISA VERMEER

Speech speed

145 words per minute

Speech length

397 words

Speech time

163 seconds

Challenges in implementing AI regulations across different countries

Explanation

Lisa discusses the challenges in implementing AI regulations across different countries, using the example of the European AI Act. She highlights the complexities of transitioning such laws into national legislative systems.


Evidence

Reference to ongoing work on the European AI Act and its provisions for addressing deepfakes and manipulated content.


Major Discussion Point

Balancing Innovation and Risk Management


Agreements

Agreement Points

Cybersecurity and data privacy are major risks in digital innovation

speakers

DR. MAHA ABDEL NASSER


CALEB OLUMUYIWA OGUNDELE


arguments

Cybersecurity threats and data privacy concerns


Need for government-led initiatives and regulatory sandboxes


summary

Both speakers emphasize the importance of addressing cybersecurity threats and data privacy concerns in the digital era, suggesting the need for government initiatives and regulatory measures.


Need for improved reporting mechanisms for online abuse

speakers

DR. MAHA ABDEL NASSER


NOHA ASHRAF ABDEL BAKY


arguments

Improving reporting mechanisms for online abuse


Cultural barriers preventing victims from reporting incidents


summary

Both speakers highlight the importance of enhancing reporting mechanisms for online abuse and addressing cultural barriers that prevent victims from reporting incidents.


Similar Viewpoints

All three speakers express concerns about the challenges in regulating and holding accountable tech platforms and those in power for online content and expression.

speakers

DR. MAHA ABDEL NASSER


CALEB OLUMUYIWA OGUNDELE


NOHA ASHRAF ABDEL BAKY


arguments

Challenges in holding tech platforms accountable for abusive content


Abuse of regulatory frameworks by those in power


Bias in social media algorithms affecting freedom of expression


Unexpected Consensus

Importance of balancing innovation and risk management

speakers

DR. MAHA ABDEL NASSER


AUDIENCE


arguments

Developing alternative solutions and backups for technology failures


Importance of not hindering innovation due to fear of risks


explanation

Despite discussing risks, both Dr. Maha and an audience member unexpectedly agree on the importance of not letting fear of risks hinder innovation, suggesting a balanced approach to digital transformation.


Overall Assessment

Summary

The speakers generally agree on the importance of addressing cybersecurity threats, improving reporting mechanisms for online abuse, and the need for better regulation of tech platforms. There is also a shared recognition of the challenges in balancing innovation with risk management.


Consensus level

Moderate consensus on major issues, with some variations in proposed solutions and emphasis. This level of agreement suggests a common understanding of the challenges in digital innovation and online safety, which could facilitate collaborative efforts in developing strategies to address these issues.


Differences

Different Viewpoints

Approach to mitigating technological dependency risks

speakers

DR. MAHA ABDEL NASSER


CALEB OLUMUYIWA OGUNDELE


arguments

Developing alternative solutions and backups for technology failures


Need for government-led initiatives and regulatory sandboxes


summary

Dr. Maha emphasizes developing alternative solutions and backups, while Caleb focuses on government-led initiatives and regulatory sandboxes to address technological risks.


Focus on primary risks in digital innovation

speakers

DR. MAHA ABDEL NASSER


NOHA ASHRAF ABDEL BAKY


arguments

Cybersecurity threats and data privacy concerns


Digital divide between privileged and less privileged users


summary

Dr. Maha prioritizes cybersecurity and data privacy risks, while Noha emphasizes the risk of a growing digital divide between privileged and less privileged users.


Unexpected Differences

Perspective on technological dependency

speakers

DR. MAHA ABDEL NASSER


AUDIENCE


arguments

Technological dependency and potential for system failures


Importance of not hindering innovation due to fear of risks


explanation

While Dr. Maha expresses concern about technological dependency and its risks, an audience member unexpectedly argues for embracing innovation despite potential risks, citing historical examples. This difference highlights the tension between risk mitigation and fostering innovation.


Overall Assessment

summary

The main areas of disagreement revolve around prioritizing different risks in digital innovation, approaches to mitigating these risks, and the balance between risk management and fostering innovation.


difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the existence of various risks in digital innovation, speakers differ in their prioritization of these risks and proposed solutions. These differences reflect the complexity of managing digital innovation risks and highlight the need for multifaceted approaches that consider various perspectives and stakeholder needs.


Partial Agreements

Partial Agreements

All speakers agree on the need to address online abuse, but they propose different approaches. Dr. Maha focuses on improving reporting mechanisms, Caleb emphasizes cross-border collaboration, and Noha stresses the importance of raising awareness and digital literacy.

speakers

DR. MAHA ABDEL NASSER


CALEB OLUMUYIWA OGUNDELE


NOHA ASHRAF ABDEL BAKY


arguments

Improving reporting mechanisms for online abuse


Need for cross-border collaboration and data sharing


Raising awareness and digital literacy among vulnerable groups




Takeaways

Key Takeaways

Digital innovation brings significant risks like cybersecurity threats, data privacy concerns, and technological dependency


There is a need for better frameworks and strategies to manage online risks while fostering innovation


Collaboration between governments, tech platforms, and civil society is crucial for addressing online abuse and risks


Women and vulnerable groups face disproportionate challenges in the digital realm and need targeted support


Balancing innovation with risk management is an ongoing challenge that requires flexible approaches


Resolutions and Action Items

Establish clear principles for responsible AI use by organizations


Develop alternative solutions and backups for technology failures


Improve reporting mechanisms for online abuse


Raise awareness and digital literacy among vulnerable groups


Create more inclusive funding strategies to support women entrepreneurs in tech


Unresolved Issues

How to effectively hold tech platforms accountable for abusive content


How to address anonymity and traceability of bad actors online


How to implement AI regulations consistently across different countries


How to balance freedom of expression with content moderation on social media platforms


How to bridge the digital divide between privileged and less privileged users


Suggested Compromises

Using regulatory sandboxes to test AI systems while allowing for innovation


Balancing government oversight with industry self-regulation for tech platforms


Focusing on frameworks rather than strict legislation to allow flexibility for rapidly changing technology


Thought Provoking Comments

We are so dependent on technology now, and I think this is one of the major threats, risks, and challenges at the same time.

speaker

Dr. Maha Abdel Nasser


reason

This comment highlighted a less obvious but critical risk of digital innovation – over-dependence on technology. It shifted the focus from more commonly discussed risks like cybersecurity to a broader societal challenge.


impact

This led to further discussion on the need for backup systems and alternative solutions when technology fails, deepening the conversation on risk mitigation strategies.


We need to first of all start having what we call government-led initiatives. Some of those initiatives could also be based on legislation, regulatory frameworks, sandboxes, where a company can also test innovation safely.

speaker

Caleb Olumuyiwa Ogundele


reason

This comment introduced concrete ideas for managing innovation risks through policy and regulatory approaches. It provided a practical perspective on how to balance innovation and risk.


impact

It sparked discussion on the role of government in facilitating safe innovation, leading to conversations about cross-border collaboration and public-private partnerships.


So bias in AI applications and systems happens when outcomes of AI systems favor or disadvantage a certain group, favor certain outcomes, or favor certain individuals.

speaker

Hadia Elminiawi


reason

This comment brought attention to the critical issue of bias in AI systems, highlighting the ethical implications of digital innovation.


impact

It led to a deeper exploration of the challenges in ensuring fairness and accountability in AI applications, broadening the discussion beyond just technical risks to include social and ethical considerations.


There is no single platform, as I said. I suggested having something like that, and the reporting should be online, as these are all online crimes, but it didn’t go through.

speaker

Dr. Maha Abdel Nasser


reason

This comment highlighted a practical gap in addressing online abuse and cybercrime, pointing out the lack of a unified reporting system.


impact

It sparked discussion on the need for better reporting mechanisms and the challenges in implementing such systems, leading to considerations of both technical and political barriers.


Let us please not consider the risks as much as we consider the opportunities that AI could be opening to us.

speaker

Amr Hashem


reason

This comment provided a counterpoint to the risk-focused discussion, reminding participants of the potential benefits of digital innovation.


impact

It shifted the tone of the conversation towards a more balanced view of digital innovation, encouraging participants to consider both risks and opportunities.


Overall Assessment

These key comments shaped the discussion by broadening its scope from specific technical risks to wider societal, ethical, and policy considerations. They encouraged a more nuanced and multifaceted examination of digital innovation, balancing concerns about risks with recognition of opportunities. The comments also highlighted practical challenges in implementing safeguards and reporting systems, leading to a more grounded discussion of real-world implementation issues. Overall, these insights deepened the level of analysis and introduced greater complexity to the conversation, moving it beyond surface-level concerns to more systemic and forward-looking considerations.


Follow-up Questions

How can we establish a single national platform for reporting cyberbullying?

speaker

Hadia Elminiawi


explanation

A centralized reporting system could improve response to online abuse and make it easier for victims to seek help


What international frameworks exist for reporting online incidents like DNS abuse?

speaker

Hadia Elminiawi


explanation

Understanding existing global mechanisms could help improve coordination in addressing online security issues


How can we require tech platforms to take responsibility for abusive content?

speaker

Hadia Elminiawi


explanation

Holding platforms accountable could reduce the spread of harmful content and protect users


To what extent do online risks hamper the entrepreneurial spirit of SMEs, particularly female entrepreneurs?

speaker

Lisa Vermeer


explanation

Understanding the impact of online risks on business innovation could inform policies to support entrepreneurs


How can policymakers facilitate the establishment of collaborative platforms for exchanging insights on digital innovation and risk management?

speaker

Hadia Elminiawi


explanation

Improved collaboration could lead to more effective strategies for managing digital risks


How do executive bodies work together to exchange insights in the field of digital innovation and risk management?

speaker

Hadia Elminiawi


explanation

Understanding current coordination efforts could identify areas for improvement in addressing digital challenges


How can we develop counter-technologies to address issues like deep fakes while preserving the benefits of AI?

speaker

Amr Hashem


explanation

Balancing innovation with risk mitigation is crucial for responsible technological advancement


Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.