AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023
Event report
Speakers and Moderators
Speakers:
- Nabinda Aryal, Government, Asia-Pacific Group
- Tatiana Tropina, Civil Society, Western European and Others Group (WEOG)
- Sarim Aziz, Private Sector, Asia-Pacific Group
- Michael Ilishebo, Government, African Group
Moderators:
- Babu Ram Aryal, Civil Society, Asia-Pacific Group
Disclaimer: It should be noted that the reporting, analysis and chatbot answers are generated automatically by DiploGPT from the official UN transcripts and, in case of just-in-time reporting, the audiovisual recordings on UN Web TV. The accuracy and completeness of the resources and results can therefore not be guaranteed.
Full session report
Sarim Aziz
In the discussion, multiple speakers addressed the role of AI in cybersecurity, emphasizing that AI offers more opportunities than threats for cybersecurity and protection. AI has proven effective in removing fake accounts and detecting inauthentic behavior, making it a valuable tool for safeguarding users online. One speaker stressed the importance of focusing on identifying bad behavior rather than content, noting that fake accounts were detected based on their inauthentic behavior, regardless of the content they shared.
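To make the behavior-over-content point concrete, here is a minimal, hypothetical sketch of rule-based behavioral scoring in Python: it flags accounts purely from activity signals, such as the burst messaging and multiple-registrations-per-device patterns mentioned later in the transcript, without inspecting any content. The signal names and thresholds are illustrative assumptions; Meta's actual systems are learned models operating at far larger scale.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Illustrative behavioral signals; note that no content is inspected."""
    messages_last_minute: int       # burst messaging is a classic spam signal
    accounts_on_same_device: int    # mass registration from a single device
    friend_requests_last_hour: int  # indiscriminate mass outreach

def inauthenticity_score(a: AccountActivity) -> float:
    """Toy rule-based score in [0, 1]; production systems use learned models."""
    score = 0.0
    if a.messages_last_minute > 60:        # e.g. "100 messages in a minute"
        score += 0.5
    if a.accounts_on_same_device > 5:
        score += 0.3
    if a.friend_requests_last_hour > 200:
        score += 0.2
    return min(score, 1.0)

# Usage: flag for review above a threshold, whatever the account posts.
suspect = AccountActivity(messages_last_minute=120,
                          accounts_on_same_device=8,
                          friend_requests_last_hour=10)
if inauthenticity_score(suspect) >= 0.5:
    print("flag account for review")  # content never enters the decision
```

The design point is that such behavioral signals transfer across languages and cultures in a way that content rules do not, which is the "universal behavior" argument made during the session.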
The discussion also highlighted the significance of open innovation and collaboration in cybersecurity. Speakers emphasized that an open approach and collaboration among experts can enhance cybersecurity measures. By keeping AI accessible to experts, the potential for misuse can be mitigated. Additionally, policymakers were urged to incentivize open innovation and create safe environments for testing AI technologies.
The potential of AI in preventing harms was underscored, with the “StopNCII.org” initiative serving as an example of using AI to block non-consensual intimate imagery across platforms and services. The discussion also emphasized the importance of inclusivity in technology, with frameworks led by Japan, the OECD, and the White House focusing on inclusivity, fairness, and eliminating bias in AI development.
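On the StopNCII example: the transcript later describes taking a hash of a reported image and matching it across platforms. As a hedged illustration of why hash-based matching can catch even slightly altered copies, here is a toy 64-bit average hash compared by Hamming distance (requires Pillow). It is a sketch only; the production system uses a different, more robust perceptual hashing scheme, and participants share only hashes, never the images themselves.

```python
from PIL import Image  # Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Toy perceptual hash: downscale, grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits  # a 64-bit fingerprint for the default 8x8 size

def hamming(h1: int, h2: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

# Hypothetical usage: the platform stores only the reported hash.
# reference = average_hash("reported_image.png")
# candidate = average_hash("uploaded_image.png")
# if hamming(reference, candidate) <= 5:  # small distance: likely a match
#     print("block upload and route for human review")
```

Because small edits (re-compression, minor crops, brightness changes) flip only a few bits, a distance threshold still matches near-duplicates, which is what makes hash sharing useful across different platforms and services.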
Speakers expressed support for open innovation and the sharing of AI models. Meta’s release of the open-source AI model Llama 2 was highlighted, enabling researchers and developers worldwide to use and contribute to its improvement. The model was also submitted for vulnerability evaluation at DEF CON, a cybersecurity conference.
The role of AI in content moderation on online platforms was discussed, recognizing that human capacity alone is insufficient to manage the vast amount of content generated. AI can assist in these areas, where human resources fall short.
Furthermore, the discussion emphasized the importance of multistakeholder collaboration in managing AI-related harms, such as child safety and counterterrorism efforts. Public-private partnerships were considered crucial in effectively addressing these challenges.
The potential benefits of open-source AI models for developing countries were explored. It was suggested that these models present immediate opportunities for developing countries, enabling local researchers and developers to leverage them for their specific needs.
Lastly, the need for technical standards to handle AI content was acknowledged. The discussion proposed watermarking of audiovisual content as a potential standard, noting that it would require consensus among stakeholders.
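As a sketch of the underlying idea only, the snippet below embeds a machine-readable provenance tag in the least significant bits of an image's pixels (requires Pillow). This naive LSB scheme is purely illustrative and easily defeated; the standards work referenced in the session (for example, cryptographically signed provenance metadata for audiovisual content) operates quite differently, and the tag name here is hypothetical.

```python
from PIL import Image  # Pillow

TAG = "AI-GEN"  # hypothetical provenance marker

def embed_tag(src: str, dst: str, tag: str = TAG) -> None:
    """Hide the tag's bits in the least significant bit of the red channel."""
    img = Image.open(src).convert("RGB")
    bits = "".join(f"{b:08b}" for b in tag.encode())
    px = img.load()
    w, _ = img.size  # assumes the image has at least len(bits) pixels
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    img.save(dst, "PNG")  # lossless format, so the hidden bits survive

def read_tag(path: str, length: int = len(TAG)) -> str:
    """Recover a tag of known length from a marked image."""
    img = Image.open(path).convert("RGB")
    px = img.load()
    w, _ = img.size
    bits = [str(px[i % w, i // w][0] & 1) for i in range(length * 8)]
    data = bytes(int("".join(bits[i:i + 8]), 2) for i in range(0, len(bits), 8))
    return data.decode(errors="replace")
```

A scheme like this survives only lossless copying; any real standard would need robustness to re-encoding and cropping, which is precisely why stakeholder consensus on the technical approach matters.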
Overall, the speakers expressed a positive sentiment regarding the potential of AI in cybersecurity. They highlighted the importance of open innovation, collaboration, inclusivity, and policy measures to ensure the safe and responsible use of AI technologies. The discussion provided valuable insights into the current state and future directions of AI in cybersecurity.
Michael Ilishebo
The use of Artificial Intelligence (AI) has raised concerns regarding its negative impact on different aspects of society. One concern is that AI has enabled crimes that would previously have been impossible for offenders without technical skills. An alarming trend is the accessibility of free AI tools online, allowing individuals with no computing knowledge to program malware for criminal purposes.
Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that surpasses human comprehension, making it difficult to differentiate between AI-generated content and human interaction. This creates obstacles for law enforcement in investigating and preventing crimes. Additionally, AI’s ability to generate realistic fake videos and mimic voices complicates the effectiveness of digital forensic tools, threatening their reliability.
Developing countries face unique challenges with regards to AI. They primarily rely on AI services and products from developed nations and lack the capacity to develop their own localized AI solutions or train AI based on their data sets. This dependency on foreign AI solutions increases the risk of criminal misuse. Moreover, the public availability of language models can be exploited for criminal purposes, further intensifying the threat.
The borderless nature of the internet and the use of AI have contributed to a rise in internet crimes. Meta, a social media company, reported that AI-driven detection removed 676 million fake accounts in a single quarter. The proliferation of fake accounts promotes the circulation of misinformation, hate speech, and other inappropriate content. Developing countries, facing resource limitations, struggle to effectively filter and combat such harmful content, exacerbating the challenge.
Notwithstanding the negative impact, AI also presents positive opportunities. AI has the potential to revolutionize law enforcement by detecting, preventing, and solving crimes. AI’s ability to identify patterns and signals can anticipate potential criminal behavior, often referred to as pre-crime detection. However, caution is necessary to ensure the ethical use of AI in law enforcement, preventing human rights violations and unfair profiling.
In the realm of cybersecurity, the integration of AI has become essential. National cybersecurity strategies need to incorporate AI to effectively defend against cyber threats. This integration requires the establishment of regulatory frameworks, collaborative capacity-building efforts, data governance, incident response mechanisms, and ethical guidelines. AI and cybersecurity should not be considered in isolation due to their interconnected impact on securing digital systems.
In conclusion, while AI brings numerous benefits, significant concerns exist regarding its negative impact. From enabling new forms of crime to posing challenges for law enforcement and digital forensic tools, AI has far-reaching implications for societal safety and security. Developing countries, particularly, face specific challenges due to their reliance on foreign AI solutions and limited capacity to filter harmful content. Policymakers must prioritize ethical use of AI and address the intertwined impact of AI and cybersecurity to harness its potential while safeguarding against risks.
Waqas Hassan
Regulators face a delicate balancing act in protecting both industry and consumers from cybersecurity risks, particularly those related to AI in developing countries. The rapid advancement of technology and the increasing sophistication of cyber threats have made it challenging for regulators to stay ahead in ensuring the security of both industries and individuals.
Developing nations require more capacity building and technology transfer from developed countries to effectively tackle these cybersecurity challenges. Technology, especially cybersecurity technologies, is primarily developed in the West, putting developing countries at a disadvantage. This imbalance hinders their ability to effectively defend against cyber threats and leaves them vulnerable to cyber attacks. It is crucial for developed countries to support developing nations by providing the necessary tools, knowledge, and resources to enhance their cyber defense capabilities.
The pace at which cyber threats are evolving is surpassing the rate at which defense mechanisms are improving. This disparity poses a significant challenge for regulators and exposes the vulnerability of developing countries’ cybersecurity infrastructure. The proactive approach is crucial in addressing this issue, as reactive defense mechanisms are often insufficient in mitigating the sophisticated cyber threats faced by nations worldwide. Taking preventive measures, such as taking down potential threats before they become harmful, can significantly improve cybersecurity posture.
Developing countries often face difficulties in keeping up with cyber defense due to limited tools, technologies, knowledge, resources, and investments. These limitations result in a lag in their cyber defense capabilities, leaving them susceptible to cyber attacks. It is imperative for both developed and developing countries to work towards bridging this gap by standardizing technology and making it more accessible globally. Standardization promotes a level playing field and ensures that all nations have equal opportunities to defend against cyber threats.
Sharing information, tools, experiences, and human resources plays a vital role in tackling AI misuse and improving cybersecurity posture. Developed countries, which have the investment muscle for AI defense mechanisms, should collaborate with developing nations to share their expertise and knowledge. This collaboration fosters a fruitful exchange of ideas and insights, leading to better cybersecurity practices globally.
Global cooperation on AI cybersecurity should begin at the national level. Establishing a dialogue among nations, along with sharing information and threat intelligence and developing AI tools for cyber defense, paves the way for effective global cooperation. Bodies such as the Asia-Pacific CERT and the ITU already facilitate cybersecurity initiatives and can further contribute to this cooperation by organizing cyber drills and fostering collaboration among nations.
The responsibility for being cyber ready needs to be distributed among users, platforms, and the academic community. Cybersecurity is a collective effort that requires the cooperation and active involvement of all stakeholders. Users must remain vigilant and educated about potential cyber threats, while platforms and institutions must prioritize the security of their systems and infrastructure. In parallel, the academic community should actively contribute to research and innovation in cybersecurity, ensuring the development of robust defense mechanisms.
Despite the limitations faced by developing countries, they should still take responsibility for being ready to tackle cybersecurity challenges. Recognizing their limitations, they can leverage available resources, capacity building initiatives, and knowledge transfer to enhance their cyber defense capabilities. By actively participating in cybersecurity efforts, developing countries can contribute to creating a safer and more secure digital environment.
In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks, particularly those related to AI. To address these challenges, developing nations require greater support in terms of capacity building, technology transfer, and standardization of technology. A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucial components in building robust defense mechanisms and ensuring a secure cyberspace for all.
Babu Ram Aryal
Babu Ram Aryal advocates for comprehensive discussions on the positive aspects of integrating artificial intelligence (AI) in cybersecurity. He emphasizes the crucial role that AI can play in enhancing cyber defense measures and draws attention to the potential risks associated with its implementation.
Aryal highlights the significance of AI in bolstering cybersecurity against ever-evolving threats. He stresses the need to harness the capabilities of AI in detecting and mitigating cyber attacks, thereby enhancing the overall security of digital systems. By automating the monitoring of network activities, AI algorithms can quickly identify suspicious patterns and respond in real-time, minimizing the risk of data breaches and information theft.
Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurity. As AI systems become increasingly intelligent and autonomous, there are concerns about their susceptibility to malicious exploitation or manipulation. Understanding these vulnerabilities is crucial in developing robust defense mechanisms to safeguard against such threats.
To facilitate a comprehensive examination of the topic, Aryal assembles a panel of experts from diverse fields, promoting a multidisciplinary approach to exploring the intersection of AI and cybersecurity. This collaboration allows for a detailed analysis of the potential benefits and challenges presented by AI in this domain.
The sentiment towards AI’s potential in cybersecurity is overwhelmingly positive. The integration of AI technologies in cyber defense can significantly enhance the security of both organizations and individuals. However, there is a need to strike a balance and actively consider the associated risks to ensure ethical and secure implementation of AI.
In conclusion, Babu Ram Aryal advocates for exploring the beneficial aspects of AI in cybersecurity. By emphasizing the role of AI in strengthening cyber defense and addressing potential risks, Aryal calls for comprehensive discussions involving experts from various fields. The insights gained from these discussions can inform the development of effective strategies that leverage AI’s potential while mitigating its associated risks, resulting in improved cybersecurity measures for the digital age.
Audience
The extended analysis highlights several important points related to the impact of technology and AI on the global south. One key argument is that individual countries in the global south lack the capacity to effectively negotiate with big tech players. This imbalance is due to the concentration of technology in the global north, which puts countries in the global south at a disadvantage. The supporting evidence includes the observation that many resources collected from the third world and global south are directed towards developed economies, exacerbating the technological disparity.
Furthermore, it is suggested that AI technology and its benefits are not equally accessible to and may not equally benefit the global south. This argument is supported by the fact that the majority of the global south’s population resides in developing countries with limited access to AI technology. The issue of affordability and accessibility of AI technology is raised, with the example of ChatGPT, an AI system that is difficult for people in developing economies to afford. The supporting evidence also highlights the challenges faced by those with limited resources in addressing AI technology-related issues.
Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as persistent issues. While accessibility and inclusivity may be promoted in theory, they are not universally implemented, thereby exposing existing inequalities across different regions. The argument is reinforced by the observation that politics between the global north and south often hinder the universal implementation of accessibility and inclusivity practices.
The analysis also raises questions about the transfer of technology between the global north and south and its implications, particularly in terms of international relations and inequality. The sentiment surrounding this issue is one of questioning, suggesting the need for further investigation and examination.
Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents AI as a tool with the potential to be used against humans, leading to various threats. Furthermore, the importance of responsive measures that keep pace with technological evolution is emphasized. The argument is that measures aimed at addressing new tech threats need to be as fast and efficient as the development of technology itself.
Concerns about the accessibility and inclusion of AI in developing countries are also highlighted. The lack of infrastructure and access to electricity in some regions, such as Africa, pose challenges to the adoption of AI technology. Additionally, limited internet access and digital literacy hinder the effective integration of AI in these countries.
The potential risks that AI poses, such as job insecurity and limited human creativity, are areas of concern. The sentiment expressed suggests that AI is perceived as a threat to job stability, and there are fears that becoming consumers of AI may restrict human creativity.
To address these challenges, it is argued that digital literacy needs to be improved in order to enhance understanding of the risks and benefits of AI. The importance of including everyone in the advancement of AI, without leaving anyone behind, is emphasized.
The analysis delves into the topic of cyber defense, advocating for the necessity of defining cyber defense and clarifying the roles of different actors, such as governments, civil society, and tech companies, in empowering developing countries in this field. The capacity of governments to implement cyber defense strategies is questioned, using examples such as Nepal adopting a national cybersecurity policy with potential limitations in transparency and discussions.
The need to uphold agreed values, such as the Human Rights Charter and internet rights and principles, is also underscored. The argument is that practical application of these values is necessary to maintain a fair and just digital environment.
The analysis points out the tendency for AI and cybersecurity deliberations to be conducted in isolation at the multilateral level, emphasizing the importance of multidisciplinary governance solutions that cover all aspects of technology. Additionally, responsible behavior is suggested as a national security strategy for effectively managing the potential risks associated with AI and cybersecurity.
In conclusion, the extended analysis highlights the disparities and challenges faced by the global south in relation to technology and AI. It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to ensure equitable benefits and mitigate risks. Ultimately, the goal should be to empower all nations and individuals to navigate the evolving technological landscape and foster a globally inclusive and secure digital future.
Tatiana Tropina
The discussions surrounding AI regulation and challenges in the cybersecurity realm have shed light on the importance of implementing risk-based and outcome-based regulations. It has been recognized that while regulation should address the threats and opportunities presented by AI, it must also avoid stifling innovation. Risk-based regulation, which assesses risks during the development of new AI systems, and outcome-based regulation, which aims to establish a framework for desired outcomes, allowing the industry to achieve them on their own terms, were highlighted as potential approaches.
There are concerns regarding AI bias, accountability, and the transparency of algorithms, alongside the growing challenge of deepfakes. The evolving nature of AI technology already poses challenges such as the generation of malware and spear-phishing campaigns, and these issues must be addressed effectively to ensure the responsible and ethical development and deployment of AI.
Cooperation between industry, researchers, governments, and law enforcement was emphasized as crucial for effective threat management and defense in the AI domain. Building partnerships and collaboration among these stakeholders can enhance response capabilities and mitigate potential risks.
While AI offers significant benefits, such as its effective use in hash comparison and database management, its potential threats and misuse require a deeper understanding and investment in research and development. The need to comprehend and address AI-related risks and challenges was underscored to establish future-proof frameworks.
The discussions also highlighted the lack of capacity to assess AI and cyber threats globally, both in the global south and global north. This calls for increased efforts to enhance understanding and build expertise to effectively address such threats on a global scale. Furthermore, the importance of cooperation between the global north and south was stressed, emphasizing the need for collaboration to tackle the challenges and harness the potential of AI technology.
The concept of fairness in AI was noted as needing redefinition to encompass its impact globally. Currently, fairness primarily applies to the global north, necessitating a broader perspective that considers the impact on all regions of the world. It was also suggested that global cooperation should focus on building a better future and emphasizing the benefits of AI.
Regulation was seen as insufficient on its own, requiring accompanying actions from civil society, the technical community, and companies. External scrutiny of AI algorithms by civil society and research organizations was proposed to ensure their ethical use and reveal potential risks.
The interrelated UN processes of cybersecurity, AI, and cybercrime were mentioned as somewhat artificially separated. This observation underscores the need for a more holistic approach to address the interdependencies and mutual influence of these processes.
The absence of best practices in addressing cybersecurity and AI issues was recognized, emphasizing the need to invest in capacity building and the development of effective strategies.
The proposal for a global treaty on AI by the Council of Europe was deemed potentially transformative in achieving transparency, fairness, and accountability. Additionally, the EU AI Act, which seeks to prohibit profiling and certain other AI uses, was highlighted as a significant development in AI regulation.
The importance of guiding principles and regulatory frameworks was stressed, but it was also noted that they alone do not provide a clear path for achieving transparency, fairness, and accountability. Therefore, the need to further refine and prioritize these principles and frameworks was emphasized.
Overall, the discussions highlighted the complex challenges and opportunities associated with AI in cybersecurity. It is crucial to navigate these complexities through effective regulation, collaboration, investment, and ethical considerations to ensure the responsible and beneficial use of AI technology.
Session transcript
Babu Ram Aryal:
Good evening, tech team, it’s okay. Welcome to this workshop number 86 in this hall. It’s a pleasure to be here discussing artificial intelligence and cyber defence, especially from a developing country perspective. This is Babu Ram Aryal, and by profession I’m a lawyer; I’ve been engaged in various law and technology issues from Nepal. And I’d like to introduce very briefly my panellists this evening. My panellist Sarim is from Meta; he leads Meta’s South Asia policy team, is significantly engaged in AI, policy and technology issues, and will be representing the business perspective on this panel. My colleague Waqas Hassan is lead of international affairs at the Pakistan Telecommunication Authority. He is engaged in regulatory work and will be sharing a regional and, of course, a Pakistani regulatory perspective. My colleague Michael is from Zambia; he is a cyber analyst and a cybercrime investigator, and he will be representing the law enforcement perspective. And Tatiana Tropina is assistant professor at Leiden University, and she will be representing the policy perspective, especially from a European angle. So artificial intelligence has given a very significant opportunity to all of us. It has now become a big word; it’s not a new one, but recently it has become a very popular tool and technology, and lots of threats have also been posed by artificial intelligence. In this panel, we’ll be discussing how artificial intelligence could be beneficial, especially from a cybersecurity or defence perspective, and also the framework on the defence side for the potential risks of artificial intelligence in cybersecurity and cybercrime mitigation. I’ll go directly to Michael, who is directly experiencing various risks and threats and handling cybercrime cases in Zambia. Michael, please share your experience and perspective; you have been very engaged in the IGF, and I know you have been a MAG member and engaged on the African continent as well. Floor is yours, Michael.
Michael Ilishebo:
Good afternoon, good morning, and good evening. I know the time zone for Japan is difficult for most of us who are not from this region. Of course, in Africa it’s morning; in South America it’s probably the evening. All protocols observed. So, basically, I am a law enforcement officer working for the Zambia Police Service and the Cybercrime Unit. In terms of the current crime landscape, we’ve seen an increase in crimes that are technology enabled. We’ve seen crimes that you wouldn’t expect to happen, but at the end of it all, we’ve come to discover that most of these crimes are enabled by AI. I’ll give you an example. If a person who’s never been to college or never done any computing course is able to program computer malware or a computer program that they’re using for criminal intent, you’d ask: what skills have they got to execute such a thing? All we’ve come to understand is that everything has been enabled by AI, especially with the coming of ChatGPT and other AI-based tools online, which basically are free. With time on their hands, they are able to come up with something that they may execute in their criminal activities. So this in itself has posed a serious challenge for law enforcers, especially on the African continent and mostly in developing countries. Beyond that, of course, we handle cases where it has become difficult to distinguish the human element from artificial-intelligence-generated material, whether it’s an image or a video. As a result, when such cases go to court or when we arrest such perpetrators, it’s a grey area on our part, because AI technologies are able to do much, much more, and much, much faster, than a human can comprehend. So from the law enforcement perspective, I think AI has caused a bit of a challenge. What kind of challenges have we experienced as a law enforcement agency? Basically, it comes down to the use of digital forensic tools. I’ll give an example. A video can be generated that would appear to be genuine, and everyone would believe it, and yet it is not. You can have cases to do with freedom of expression where somebody’s voice has been copied, and if you literally listen to it, you’d believe that indeed this person has been issuing this statement, when in fact not. Even emails: you can receive an email that genuinely seems to come from a genuine source, and yet it has probably been AI-written, and everything points to an individual or to an organization; at the end of the day, as you receive it, you have trust in it. So basically, there are many, many areas. Each and every day we are learning new challenges and new opportunities, to catch up with the use of AI in our policing and day-to-day activities, as we also try to distinguish AI activities from human interaction.
Babu Ram Aryal:
Thank you, Michael. I’ll come to Tatiana. Tatiana is a researcher and significantly engaged in cybersecurity policy development. And as a researcher, how you see the development of AI, especially in cybersecurity issues and as you represent in our panel from European stakeholder. So what is the European position on this kind of issues from policy perspective, policy frameworks, what kind of issues are being dealt by European countries? And Tatiana.
Tatiana Tropina:
Thank you very much. And I do believe that, in a way, the threat and the opportunity that artificial intelligence brings for cybersecurity, or security in general, let’s say if we put it as protection from harm, might be almost the same everywhere, but the European Union indeed is trying to deal with them and foresee them in a manner that would address the risks and harms. And I know that the big discussion in policy community circles and also in academic circles is no longer whether we need to regulate AI for the purposes of security and cybersecurity. The question is: how do we do this? How do we protect people, and also systems, from harm while not stifling innovation? And I do believe that right now we are mostly targeting two approaches: risk-based regulation, where, when new AI systems are going to be developed, the risk is assessed, and then, based on the risk, regulation will either be there or not; and outcome-based regulation, where you create a framework of what you want to achieve and then give industry the ability to achieve it by their own means, as long as they protect from harm. But I would like to second what the previous speaker said. From the law enforcement perspective, from the crime perspective, the challenges are so many that sometimes, how do I say it, not that our judgment is clouded, but we have to do two things: we have to address the current challenges while foreseeing the future challenges, right? So I do believe that right now we are talking a lot about risks from large language models, the generation of spear-phishing campaigns, the generation of malware, and this is something that is already happening and is hard to regulate. But if we are looking to the future, we have to address a few things in terms of cybersecurity and risks. First of all, AI bias and the accountability and transparency of algorithms. We have to address the issue of deepfakes, and here it goes even beyond cybersecurity; it goes to information operations, into the field of national security. So this is just my baseline, and I’m happy to go into further discussions on this.
Babu Ram Aryal:
Thank you, Tatiana. Now, after the initial remarks, I’ll come to Sarim. Among industry players, Meta is a very significant one, and the Meta platform is very popular; at the same time, there are many risks about which people complain on the Meta platform. And not only the Meta platform, you are just here, that’s why I mentioned it, but in many countries there are complaints that these platforms are not contributing, that they are just doing business while their technologies raise issues exploited by bad people. So there are a few angles: the business perspective, the technology perspective, as well as the social perspective. As a technology industry player, how do you see the risks and opportunities of artificial intelligence, especially on the topic we have been discussing? And what could be the response from industry in addressing these kinds of issues? Sarim.
Sarim Aziz:
Thank you, Babu, for the opportunity. I think this is a very timely topic. There’s been a lot of debate around the opportunities with AI and the excitement around it, but also the challenges and risks, as our speakers have highlighted. I just want to reframe this discussion from a different perspective. From our perspective, we have to actually understand the threat actors we’re dealing with. They can sometimes use quite simple methods to evade detection, but sometimes very sophisticated methods, AI being one of them. We have a cybersecurity team at Meta that’s been trying to stay ahead of the curve of these threat actors. And I want to point to a tool, our adversarial threat report, which we produce quarterly. That’s just a great information tool, and a policy tool as well, for understanding the trends of what’s going on. This is where we report in-depth analysis of the influence operations we see around the world, especially around coordinated inauthentic behavior. If you think about the issues we’re discussing around cybersecurity, a lot of them have to do with inauthentic behavior: someone trying to appear authentic, from a phishing email to a message you might receive, to hacking attempts and other things. So that threat report is a great tool, and it’s something we do on a quarterly basis; we’ve been doing it for a long time. We also did a state-of-influence-ops report covering 2017 to 2020 that shows the trends in how sophisticated these actors are. But from our perspective, we’ve seen three things with AI from a risk perspective, and honestly they do not concern us as much; I’ll explain why. One is, yes, as Michael mentioned, the most typical use case: AI-generated photos used to make a profile appear real. But frankly, that was happening even before AI. In fact, most of the fake accounts we were taking action on previously all had profile photos; it’s not like they didn’t have a photo. So whether that photo is generated by AI or is of a real person shouldn’t matter, because it’s actually about the behavior. And that’s my main point: the challenge with gen AI is that we get a little bit stuck on the content, and we need to change the conversation to how we detect bad behavior, right? So that’s one. The second thing we notice is that, with gen AI being in the hype cycle, with almost every session here at IGF being about AI, it becomes an easy target for phishing and scams, because all you need to do is say: hey, click on this to access ChatGPT for free. And because people have heard of AI and think it’s cool, they’re more willing to get duped by those kinds of lures, which is common with hype cycles like this. The third, as I think Michael alluded to and Tatiana as well, is that it does make things a little easier, especially for non-English speakers who want to scam others, to use gen AI, whether to make ransomware or malware, because now you’ve got a tool that will fix your language and make it look all pretty. It’s like a very nice auto-complete spell checker that makes sure your things are well written. So those are the three high-level threats, but honestly, what I would say is that we haven’t seen a major difference in our enforcement.
And I’ll give you an example. We also have a transparency report, where we measure ourselves and how good our AI is, and I think that’s the point I’m trying to get to: we are more excited about the opportunities AI brings in cybersecurity, helping cyber defenders and keeping people safe, than about the risks. And this is one example: 99.7% of the fake accounts that we removed in quarter one of this year on Facebook were removed by AI. And if I give you that number, it’s staggering: 676 million accounts were removed in just one quarter by AI alone. That’s the scale. So when we talk about detection at scale, it has nothing to do with content; I just want to bring it back to that. What we detected was inauthentic behavior, fake behavior. It shouldn’t matter whether your profile photo came from ChatGPT, and the same goes for your text, because once you get into the content, you’re getting into the weeds of what the intent is, and you don’t know the intent, right? Whether it’s real or not. And in fact, I’ll also point out that some of the worst fake videos are actually not the gen AI ones. If you look at the ones that went the most viral, they are real videos, and it’s the simplest manipulations that have fooled people. I’m pointing to the US Speaker of the House, Nancy Pelosi, whose video went viral: all they did was slow it down, and they didn’t use any AI for that. And it had the most negative, the highest, impact, because people believed there was a problem with the individual, which clearly wasn’t the case; it was an edited video. So I guess what I’m trying to say is that the bad actors find a way to use these tools, and they will find any tool that’s out there. So we really have to get focused on the behavior and detection piece, and I can get into that more. That’s it for now.
Babu Ram Aryal:
Thanks Sarim. It’s very encouraging that 99% of fake accounts are removed by AI. And what about the reverse situation? Is there any intervention by AI on the negative side of the platform?
Sarim Aziz:
Like I said, I mentioned the three areas. Obviously, when you get into large language models, I also want to make the point (getting to solutions a bit early) that the answer, we believe, is more people in the cybersecurity space. We talk about amplifying the good: we need to use AI for good and for keeping people safe. And we can do that through open innovation, an open approach, and collaboration, right? Of course the risks are there, but if you keep something closed and you only give access to a few companies or a few individuals, then bad actors will find a way to get it anyway, and they will use it for bad purposes. But if you make sure it’s accessible and open for cybersecurity experts, for the community, then you can use open innovation to really make sure the cyber defenders are using the technology and improving it. And this 99.7 is an example of that. I mean, we open-source a lot of our AI technology for communities and for developers and other platforms to use as well.
Babu Ram Aryal:
Thanks, I’ll come back to you in the next round of Q&A. Waqas, you are in a very hot seat. I know regulatory agencies are facing lots of challenges from technology, and now telecom regulators have very big roles in mitigating the risks of AI in telecommunications and, of course, the internet. So from your perspective, what do you see as the major issue, as a regulator or as a government, when artificial intelligence is challenging the platforms in a way that makes people feel at risk? And of course from your Pakistani perspective as well: how have you dealt with this kind of situation in your country? Can you say some lines on this?
Waqas Hassan:
Yeah, thanks Babu. Actually, thanks for setting up the context for my initial remarks here, because you already said that I’m in a hot seat. Even now I’m in the middle: the platform, the police and the researcher, even in this seating. With regulators it’s a bit of a tricky job, because on one hand we are connected with the industry, and on the other hand we are directly connected with the consumers as well. This is more like a job where you have to do a balancing act whenever you’re taking any decision or moving forward on anything. With cybersecurity itself being a major challenge for developing countries for so long, this new mix of AI has actually made things more challenging. You see, technology has usually, primarily and inherently been developed in the West. That technology being developed in the West means we have a first-mover disadvantage in developing countries, because we’re already lacking on the technology-transfer part. What happens is that, because of the internet and because of how we are connected these days, it is much easier to get any information, which could be positive or negative. And usually the cybersecurity threats, or the elements that are engaged in such kinds of cybercrimes, are ahead of the curve when it comes to defenses. Defense will always be reactive, and developing countries have always been in a reactive mode. Meta has just mentioned that their AI has been able to bring down fake accounts on Facebook within one quarter, with 99.7% of those removals done by AI. That means they have such advanced, tech-savvy technology and resources available to them that they were able to achieve this huge and absolutely tremendous milestone, by the way. But can you imagine something like this, some solution like this, in the hands of a developing country, with that kind of investment to deploy something which can actually serve as a dome or a cybersecurity net around your country? That’s not going to happen anytime soon. So what does it come down to for us as regulators? It comes down to, number one, removing that inherent fear of AI which we have in the developing countries. Although it is absolutely tremendous to see how AI has been bringing in positive things, that inherent fear of any new technology is still there. This is more related to behavior, which Sarim was mentioning. And I think it also points to one more thing, which is intention. I think intention is what leads to anything, whether it is in cyberspace or off it. What developing countries need, to tackle this new form of cybersecurity with the mix of AI, is more capacity: more institutional capacity, more human capacity, and a national collaborative approach driven by something like a common agenda for how to actually go about it. We are so disjointed even in our national efforts for a secure cyberspace that doing something at a regional level seems a long way off to me right now. Just to sum it up: for example, in Pakistan we have a national cybersecurity policy. We have a national centre for cybersecurity. PTA has issued regulations on critical telecom infrastructure protection. We do threat-intelligence sharing as well. There is a national telecom CERT as well.
There are so many things that we are doing, but if I look at the trend, it is more like the last three or four years that things have actually started to come out. Imagine if these things had been happening ten years back; we would have been much better prepared to build AI into our cybersecurity postures now. So from a governance or regulatory perspective, it is more about how we tackle these new challenges with a more collaborative approach, looking to more developed countries for technology transfer and building institutional capacity to address these challenges. Thank you.
Babu Ram Aryal:
Thank you, Waqas. Actually, I was about to come to capacity, and Waqas, you just mentioned the capacity building of people. Tatiana, I would like to come to you: how much investment in policy frameworks and capacity building is going into framing the law and the ethical issues of artificial intelligence, into framing the way out for these artificial intelligence and legal issues? Are industries contributing to managing these things, and what about the government side? So what is the level of capacity in policy research? It’s working, right?
Tatiana Tropina:
Thank you very much for the question. And I must admit, I’ve heard the word investment; I’m not an economist, so I’m going to talk about people, hours, efforts, and so on. First of all, when it comes to security, defense, or regulation, I think we need to understand that to address anything and to create future frameworks, we need to understand the threat first, right? So we need to invest in understanding threats. And here, as I think I mentioned before, it’s not only about harms as we see them, for example harm from crime or harm from deepfakes. It’s also the harm caused by bias, an ethical issue, because an artificial intelligence model brings only as much good as the model itself, the information you feed it, and the final outcome. And we know already, and I think this is incredibly important for developing countries to remember, that AI can be biased. And technologies created in the West can be doubly biased once technology transfer and adoption happen somewhere else. For example, when I heard about Meta removing accounts based on behavioral patterns, I really would like to know how these models are trained. Be it content, be it language, be it behavioral patterns: do they take into account cultural differences between languages, countries, continents, and so on? And here, I do believe that what we talk about in terms of cooperation between industry, researchers, governments, and law enforcement is crucial. Just a few examples. Scrutiny, external scrutiny of algorithms: I believe that industry and government, all three of you, will agree with me that once an algorithm is created and trained, it is incredibly important to open it to scrutiny from civil society and research organizations, because you need somebody from the outside to see whether it is ethical. You know, to me, testing algorithms just by deploying them is like testing medicines or cosmetics on animals; we don’t do that anymore. So, it’s not only building capacity itself; it’s adopting a completely new mindset about how we are going to do this. And in terms of investment in the creation of future-proof frameworks, you really need to see the whole picture and then ask: what kind of threats am I addressing today, and what kind of threats might I foresee tomorrow? And this is why I was saying that it is hard to think about future-proof frameworks, because, indeed, defense will always be a bit behind. But if you set aside the technology itself, which can change tomorrow, you can think about how you frame harm: what do you want to achieve in your innovation? And then say: okay, Meta, I want to achieve this level of safety; if you see this risk, please provide this safety. Leave it to Meta, and make Meta open in this also, to external research, and this cooperation might bring you to the point where it would be more ethical, where it would be more for good in terms of defense. And I also want to say that the existential fear of AI exists everywhere, I believe, and this is why every second session here is about AI: just because we are so scared. But I also do believe that we cannot stop what is going on. We really have to invest here, and I’m talking again not about money but about people. And also, if I may, if I have not spoken for too long yet: I think there are so many issues here that we have to untangle. And again, look at the harms and look at the algorithm itself.
For example, the use of algorithms in the creation of spear-phishing campaigns or malware. We know how to address it. We need to work with prompt engineers, because an algorithm creates malware only as good as the prompt you give it. And if a year ago you could say to ChatGPT, just create me a piece of malware or ransomware, and it would do it, now you cannot do this; you need to split it into many, many prompts. So we have to make this untenable for criminals. We have to make sure that every tiny prompt, every tiny step that they can execute in the creation of this malware by an algorithm, will be stopped. And yes, it is work, but this is work we can do. And so with any other harm. Sorry for speaking for too long. Thank you.
Babu Ram Aryal:
It’s absolutely fine. Thank you very much for bringing more issues to the table. There was a very interesting response from Tatiana, setting out what harm is and how we understand it. Previously, Waqas mentioned the fear of AI. So do we have any fear of these things from technology platforms like yours? How are you handling these kinds of fears and risks technologically? I don’t know whether you can respond from the technological side, but still, from your platform’s perspective.
Sarim Aziz:
I think, yeah, I mean, any new tech can seem scary, but I think we need to move beyond that. And, as Tatiana and others mentioned, the existential risk always becomes a distraction in the conversation; there are near, short-term risks that need to be managed. On approaches, I think there are some really good principles and frameworks out there, with the OECD principles on fairness, transparency and accountability, and the White House commitments as well. So there are good policy frameworks for countries to look at, and they certainly need to be localized to every region. But there are plenty of good examples, like the G7 Hiroshima process, that I think industry is generally supportive of, in terms of making sure that we build AI responsibly and for good. But to me, the harms are fairly clear; the bigger question now is how we get this technology into the hands of more people who are working in the cybersecurity space. Because if you think about the cybersecurity space 20 years ago, it was quite closed, and now you have a lot more collaboration and open innovation happening there. It took 20 years for us to realize that keeping cybersecurity closed to a few does not help, because the bad actors get this stuff anyway and then you are just defenseless against them. So I think the same thing has to happen with AI. It’s going to be tough, but governments and policymakers need to incentivize open innovation. When you have a model that’s closed, you don’t know how it was trained, you don’t know how it was built, and that makes it difficult for the community to figure out what the risks are. One of the things we did, for example, was open-source our model: it was launched just in July of this year, and already in one month it was downloaded by 30,000 people. Now, of course we did red-teaming on it and we tested it, but no amount of testing is going to be perfect, and the only way to get it tested properly is to get it out there in the open-source community, where responsible players have access to it and know what they’re doing. And that’s the beauty of AI; I think that’s a game changer. It was mentioned that there’s a capacity issue, and yes, there is a capacity issue. We have a capacity issue at Meta: you can’t hire enough people to remove the bad content. AI helps us do that. Instead of having millions of people looking at what’s on the platform and removing content, which would never be enough, AI helps us get better. You still need human review, you still need experts who know what they’re doing, but it helps them be more efficient and more effective. And in the same way, an open innovation model can help developing countries catch up on cybersecurity, because now you don’t need thousands and thousands of cybersecurity experts; you just need a few who have access to the technology. And that’s what open innovation and open sourcing do, which is what we’ve done with our model. We even submitted our model to DEF CON, a cybersecurity conference in Las Vegas, and we said: break this thing, find the vulnerabilities. What are we not doing, and where are the risks? We’re waiting for the report, but that’s how you make it better.
Of course we did our best to make sure that it takes care of the CBRN risks, you know, chemical, biological, radiological and nuclear risks, but there are other risks that we may not have seen. So I think this is where putting it out as open source and giving access to more researchers means it doesn’t matter whether you’re in Zambia or Pakistan or any other country: you have access to the same technology now that Meta has built. And that’s how we get to an open innovation approach. There are many other language models, and I’m not going to name them, but they are not open and Meta’s is. So I think we need to get policymakers to incentivize open innovation: hackathons on these kinds of things, break-this-thing challenges, and sandboxes to safely test on, because a lot of the testing you can do is only based on what’s publicly available. If governments have access to information, they can make it available to hackers and say: okay, use this language model and see if we can do this, in a safe environment, obviously, ethically, without violating anybody’s privacy and things like that. So I think that’s where we need to focus the discussion on policy.
Babu Ram Aryal:
Thanks Sarim. I think one interesting issue is that we are discussing from a developing country perspective, right? This is our basic objective, and there are opportunities for all countries. Access is always there, as Sarim mentioned, but there are big gaps between developing and developed countries in terms of capacity. Especially if I see it from the Nepalese perspective, we have very limited resources, technology, and human resources, and that is a big challenge for this defense. So Michael, what is your personal experience leading from the front, what is the capacity of your team, and what do you see as the gap between developing and developed countries in the capacity to address these issues?
Michael Ilishebo:
So basically my experience is probably shared by all developing countries. We are consumers of services and products from developed countries. We haven’t yet reached the stage where we can have our own homegrown solutions to some of these AI language models, where we could localize them or train them on our own data sets. Whatever we are using, or whatever is being used out there, is a product of the Western world. So basically, one of the major challenges we’ve encountered through experience is that the public availability of these language models has itself proved to be a challenge, in the sense that anyone out there can have access to the tools. It simply means they can manipulate them to an extent for their criminal purposes. As reported by Meta, in the first quarter of their use of the model they are using, they got close to a billion fake accounts. Am I correct? Close to, no, no, yeah. Whatever it was, it could be images, it could be anything that was not meeting Meta’s standards. If you look at those numbers, they are staggering. Now imagine if some of the material that Meta has taken down because of ethical, safety and other concerns were deployed to a third-world country that has no capacity at all to filter what is correct from what is not correct. It is becoming a challenge. The crime trend is increasing, and with the borderless nature of the internet, the AI models have really become something where you have to weigh the good and the bad. Of course, the good outweighs the bad; but again, when the bad comes in, the damage it causes within a short period of time outshines the good. So at the end of it all, there are many, many challenges that we face through experience. If only we could be at the same level as developed countries in terms of the tools they are using to filter anything that would sway public opinion, in terms of misinformation, in terms of hate speech, in terms of any other act that we may deem inappropriate for society or that is a tool for criminal purposes.
Babu Ram Aryal:
Thanks, Michael. Waqas, would you like to intervene on this issue?
Waqas Hassan:
I think, as already mentioned, the pace at which the threats are evolving is unequal to the pace at which our defense mechanisms are improving. And why is this happening? It is because our forensics are not as fast as the crimes that are happening. As Michael has already mentioned, it’s a good thing that these tools or these models are open source, but at the same time, these models are equally available to people who want to misuse them. Now, when the capacity of people who want to misuse them outweighs the capacity of people who have to defend against them, you find incidents and situations where we eventually say that AI is bad for us or bad for society. But when we are better prepared, we are proactive. What Facebook did is sort of a proactive thing: rather than letting those accounts do something that would eventually become a big fiasco, they actually took them down before something could happen. That is something developing countries are usually lagging behind on: doing cybersecurity, or having their cyber defense, in a proactive mode rather than a reactive mode. I am not saying that we are not prepared, and I am not saying that there is no proactive approach there. There is. But that proactive approach hugely depends on the kind of tools, technologies, knowledge, resources and investment available to developing countries, rather than just saying, you know, okay, fine, we have a proactive approach and we are doing these things. I mean, Michael is at the forefront of everything. I think everybody would know that the kinds of threats emerging now are much more sophisticated than they ever were before. Can we be as sophisticated and as prepared as they are? I leave that question on the table. Thank you.
Sarim Aziz:
Can I just add a perspective? Coming back to my introduction, I don't think the risk vectors have changed. The bad actors who want to cause harm are using the same vectors they were using before generative AI. Phishing is a good example. Fine, attackers can now write a much better email that seems real, with logos that look real, but that's not how you solve phishing. You solve phishing by making authentication credentials one-time use, because any one of us, the most educated person in this room, can be phished. If you're in a rush and don't have time to check the email address, you read something that looks real and you click on it. We've all been there; I'll raise my own hand. So those threat vectors haven't changed, and the same goes for fake accounts. Our fake account detection doesn't care how real your photo is or isn't. It's based on AI, and it's based on behavior. With 3.8 billion users, we have to look for spammy behavior: people creating multiple accounts on the same device, or sending a hundred messages in a minute and spamming people. It's bad behavior regardless of the content, regardless of what country or culture you're from; it's universal, and so is phishing. Same with NCII, non-consensual intimate imagery: it was there before GenAI. You can use Photoshop for that; you don't need GenAI. And unfortunately that's the biggest harm we see, the biggest risk. It's a separate topic, and I'm speaking on a child safety panel as well, but it's one where you need collaboration. We have an initiative called StopNCII.org. If someone is a victim of NCII, if their pictures have been compromised and somebody is blackmailing them, they can go to StopNCII.org and submit that video or image, and we use AI to block it across all platforms and all services. This is the power of AI: it works even if the image is slightly changed, because we take a hash and match against it. So I think AI actually helps us prevent a lot of harm. Without GenAI you could do the same bad thing; GenAI might make it a little easier or higher quality, but the quality of the impersonation or of the intent doesn't really change the risk factor.
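Sarim's point that matching works even if an image is slightly changed describes perceptual hashing. StopNCII's actual pipeline is not fully public, so the following is only a minimal sketch of the general technique, written in Python with the open-source imagehash library; the match threshold shown is an assumed tuning value, not a published parameter.

```python
# Minimal sketch of hash-based image matching using a perceptual hash
# (pip install pillow imagehash). Production systems use purpose-built,
# more robust hashes (Meta has open-sourced one called PDQ).
from PIL import Image
import imagehash

MATCH_THRESHOLD = 8  # max Hamming distance to count as a match (assumed value)

def hash_image(path: str) -> imagehash.ImageHash:
    """Compute a perceptual hash that survives resizing and re-encoding."""
    return imagehash.phash(Image.open(path))

def matches_blocklist(upload_path: str, blocklist: list[imagehash.ImageHash]) -> bool:
    """Compare an uploaded image's hash against victim-submitted hashes."""
    candidate = hash_image(upload_path)
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in blocklist)
```

Notably, in the design StopNCII describes publicly, the hash is computed on the victim's own device, so the intimate image itself never has to be uploaded or shared with any platform.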
Tatiana Tropina:
Yeah, thank you. What I wanted to say largely goes in line with what you said, because I wrote down one line while I was listening: misuse will always happen. We have to understand that we should stop fixating on the technology itself. Any technology will be misused. If you want to create bulletproof technology, you should not create any technology at all, because there will always be people who find a way to misuse it. Crime follows opportunity; that's it. Any technology will be misused. And about phishing, for example: the human is always the weakest link. You are not fooling the system alone, you are fooling humans. In the same way, we have to talk about harms, and here I return to one of my intro remarks. We have to focus on harms, not on technology per se. We have to see where the weakest link is, what exactly can be abused, and where harm is caused. In this way, I strongly believe that AI can bring so much good, and thank you for reminding me about the project on non-consensual image sharing. Of course AI can do it automatically: you can compare hashes, you can have databases. But then again, when we look layer after layer, we can ask ourselves how this, too, can be misused, and how that can be addressed, and so on and so forth. We should always ask questions. And I would like to say it again and again: it is not only about technology. Let us always remember that it is humans who make mistakes and humans who abuse this technology, and this is where we also have to build capacity. Not only in technological development, not only in regulatory capacity: the whole chain of risk ultimately focuses on humans, on humans developing technology, humans developing regulation, humans being targeted, humans making mistakes. This is where we have to look as well.
Babu Ram Aryal:
Thanks, Tatiana. Now I would like to open the floor. My colleague Ananda Gautam is moderating online: if there is any question from the online participants joining this discussion, you can put it to the panel. I would also like to request participants in the room to share their questions. Yes, please introduce yourself briefly for the record.
Audience:
Hello everyone, I'm Prabhas Subedi from Nepal. The discussion has been so interesting, thank you so much, panel. I want to explore a little of what we have missed from today's discussion, which is probably the capacity of individual countries to negotiate with big tech players. If you look at the present scenario, so many resources are being collected from the so-called third world, the global south, and channelled to the developed economies. They are boosting their economies by deploying these sorts of technologies, and we have nothing. That is one of the main reasons we are not empowered, not capable of tackling these sorts of challenges. Another thing is that the technology is heavily concentrated in the global north, and I am not sure they care equally and inclusively about the large population living in the global south; the economy comes first. So what is happening today will continue, and will continue into the AI-dominated time. That is my observation; what is yours, I would like to ask from the panelists' side. Anyone can answer, thank you.
Sarim Aziz:
I mean, as I said before, first of all I agree with you: there is a way of making technology more inclusive, and that has to be by design. That is why the principles in the AI frameworks out there, led by Japan, the OECD and the White House, are about inclusivity, fairness, and making sure there is no bias. But those are all policy frameworks. From a tech perspective, as I said, I think open innovation is the answer, and AI can be the game changer because it is out there. There is no reason why the same technology we have open sourced, which Western countries now have, cannot also be accessed by researchers, academics and developers in Nepal and other countries, and in Africa. This is an opportunity to get ahead. AI is the game changer because it is about skill, about doing things at scale, especially when you think about protecting systems against the threats you are talking about. It is not a problem you solve by throwing people at it. Of course you need capacity building and you need experts, but AI helps them be more efficient and more effective. So I would love to see what the community does with our model; it is only a few months old, and it is called Llama 2. You can go and look at it, and there is a research paper along with it that explains how the model was built, because we have released it under an open source license with an acceptable use policy. There are already derivatives of it, so you cannot even use the language argument anymore: a group at a university in Tokyo took that model and made a Japanese version of it, which I believe is called ELYZA. We are excited to see what the community can do, and I think that is the way we continue to innovate and make sure nobody gets left behind.
Audience:
I do not completely agree with you, because you can already see that, for example, ChatGPT has a premium and a free version, and the majority of users are, of course, from the developed economies; it is quite difficult for others to afford. Such resources are not always openly and easily available. And if you are not accustomed to them, and not well equipped with resources, how can you be capable of tackling the upcoming challenges?
Sarim Aziz:
So I don't work for ChatGPT or OpenAI, so I can't speak for them, but our model is open source. It is already public, it is the same technology, and anyone can basically build another ChatGPT competitor using it.
Babu Ram Aryal:
Thank you, Prabhas. Tatiana, he has raised an interesting debate on the global north and the global south. Do you see... yes, please, go ahead.
Audience:
Well, thank you very much; this is a very interesting debate about international relations. I am Dr. Mohammed Shabbir from Pakistan, representing civil society here in the Dynamic Coalition on Accessibility and Disability. As a student of international relations, I would agree that we do not live in an equal world. The terminologies of inclusivity and accessibility all seem very fine on paper, but in reality, unfortunately, we live in a real world and not an ideal world where everyone would be equal to one another. Waqas made a very valid point, and I want to ask him that question, and then I would seek a response from Meta. You talked about the transfer of technology: what sort of technology are you talking about here? And my question to Meta, and to the global north, is this: how far are they ready to share that technology with the global south when it comes to diversity and inclusivity, not to mention the earlier point my friend raised about price and the free-plus-premium versions of the different software out there in the market? Those will remain there, but what sort of technology are we talking about transferring? Of course, AI is a tool like any other tool. But when it was human against human, it was like a sharp knife that one person could use against another: a human using a tool against a human. This time, AI as a tool would be a computer used against a human target. So the threat, as my friend from Meta describes it, is a real one, but it cannot simply be equated with the phishing example. I think this is something we need to discuss: the response measures have to be as sharp, as quick and as fast as the technology we are developing. I would want to seek a response on my earlier point from Waqas, and then from Meta. Thank you.
Waqas Hassan:
Okay, thank you. I think when we say technology, one of the examples, of course, is how Meta has just open-sourced their AI model, which any nation can use to develop their own models. What we are really talking about, in my view, is standardization of these technologies. Once something is standardized, it is available to everybody; that is how telecom infrastructure works across the world. If there is a standardized technology, it is easier for developing countries, developed countries, any interested party, to take advantage of it. Then there is threat intelligence: what kinds of threats are out there, what kinds of issues are being dealt with, what information sharing could there be, what new crimes are being introduced, how AI is being misused, and how that situation is being tackled by the West. Technology itself is just a word; it is more about what you are sharing. Are you sharing information? Are you sharing tools? Are you sharing experiences? Are you even sharing human resources? You mentioned that it is now human versus AI, but how about AI versus AI? Can we develop tools or AIs that can preempt attacks? I am going back to the cyber warfare movies, which used to predict that in the future bots would be fighting each other. We are not there yet, but investing in AI for defense mechanisms, to improve the cyber security posture as Meta has just done, takes an investment muscle that is currently not readily available to developing countries. So we have to look towards the West, and what they are developing is something we need, and will need for the foreseeable future, in terms of tools, information, experience sharing, and the threat intelligence they have. Thank you. And I'll leave it to Sarim to respond to the other part.
Sarim Aziz:
Thank you, Waqas. So I think it's a good question. Maybe I didn't set the context of what Llama 2 is. Llama 2 is a large language model, similar to the models behind OpenAI's ChatGPT, except that it is free for commercial use and it is open source. The technology is available for any researcher, for anyone, to deploy their own model within their own environment. If you have the computational power, you can deploy it in your own cloud or on your own computer, or you can deploy it on Microsoft's Azure, on AWS, or on any other provider. It is a large language model that helps you perform those automated tasks, and it is out there as open source, meaning we invite the community to use it. It is free; we don't charge, and there is no paid version of it. Obviously you have to agree to the conditions and to the Responsible Use Guide, but beyond that, it is what we launched just this year, and we are excited to see how the community around the world uses it for different use cases, including use cases we haven't even thought of. That is the beauty of open sourcing: we won't know in advance how it will be used by different governments and institutions. Of course, we make it better and safer through red teaming and testing, and the more cyber security experts use it and tell us about vulnerabilities, the more we will improve it.
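To make the deployment point concrete, here is a minimal sketch of one common path: running the model through the Hugging Face transformers library. The model ID below is the gated 7B chat variant, access to which requires accepting Meta's license, and generation at this scale assumes a reasonably large GPU.

```python
# Minimal sketch: run a Llama 2 chat model locally via Hugging Face
# transformers (pip install transformers torch accelerate). Access to the
# "meta-llama" weights requires accepting Meta's license terms first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # smallest chat-tuned variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "List three ways AI can support cyber defense in a small CERT."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can equally be hosted on Azure, AWS or a local cluster, which is the point Sarim is making: the deployment environment is the adopter's choice.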
Babu Ram Aryal:
Thanks, Sarim. Tatiana, having observed these two questions, I wanted to ask you about the debate on global south and global north capacity, and its impact on artificial intelligence and cyber defense.
Tatiana Tropina:
I must admit that I cannot speak for the global south, which is the global majority; it is hard for me to assess capacity there. But I can certainly tell you that even in the global north, if we call the global minority that, capacity in cyber defense around artificial intelligence is a problem. On the one hand, if we are talking about expertise, we might point to high-quality specialists and better testing and so on. But believe me, the threat is still there, and there is a lack of understanding of what kind of threat it is, in terms of national security and in terms of cyber operations. Because so much is connected in the global north, and because people follow things on the internet so much, take for example deepfakes and elections. I love the story about the Nancy Pelosi video, because you don't have to change anything: you just have to slow it down or speed it up. So the question again boils down to the capacity to assess the threats before you have the capacity to tackle them. And I do believe that right now, in the so-called global north, we have this problem as well: the capacity to understand the threat. Are we just saying, oh my God, it's happening? Or are we really disentangling it, looking at what is actually happening and then assessing it? I do believe there is indeed a gap between developing and developed countries in technological expertise, in what you can adopt, in how you can address it. But in terms of understanding the threat, we still lack capacity in the global north as well; we still lack understanding of the threat itself, and there is a lot of fear-mongering going on. In this sense, we have to share knowledge and share capacity, because the threat may vary from region to region, but the harm will be to people, be it elections, cyber threats or national security threats. Here I do believe there is huge potential for cooperation between what you call the global north and the global south. And by the way, I do think we have to come up with better terms.
Babu Ram Aryal:
Thanks, Tatiana. I will come back to cooperation; first, let me go to the question from the floor.
Audience:
Thank you for giving me the floor. My name is Ada Majalo, and I am coming from the Africa IGF as a MAG member. A very interesting session, really. I think when we talk about AI, most of the time it is those of us from the global south, from developing countries, who have the most questions to ask, because we have the bigger concerns; we are still tagging along. When it comes to AI, we are concerned about how inclusive and how accessible it is. Coming from an African context, for example, we are still struggling with infrastructure. Access to electricity is a problem, and you need to be online, to be connected, to be able to use most of the facilities that come with AI. We are already having those challenges, so it is difficult for us to follow the trend or keep up with it. It also brings to mind that we have so many people who have no access to the internet at all and do not even know what digital is. And then we talk about inclusion: how do we bring those people along, and how can they keep up with the whole idea? There is always the question of the risks and challenges: how do we move away from the status quo, how do we follow suit, what are the risks for us, and what benefits do we get? It comes back to understanding: digital literacy, how people are digitally trained to understand the risks and the benefits that might come, and how we practically keep up with the global north, which is far ahead of where we are coming from. There is also the issue of people trusting AI. Where I come from, people ask: is AI here to take our jobs? How dependent can we become on AI, and what does that do to how creative we are? When you are only a consumer of AI, just receiving and receiving, does that limit your creativity? How can we balance the creativity of the human being? It is a bit off balance, but it is good to bring this to the table: as we move, there will be people left behind, and we must see how to draw them along. That is something I just wanted to put out there. Thank you.
Babu Ram Aryal:
Thank you very much. Is there anything the panel would like to address? I also have one important question on cooperation. We started with the global north and global south; speaking from a government perspective, how can we build cooperation to address these issues at the national, regional and global levels? What could a possible framework look like? Tatiana?
Tatiana Tropina:
Okay, sorry. I think we have already mentioned the principles, and they are, well, not that global. But I absolutely loved the previous intervention, and I am sorry I didn't catch the name. If we look at the principles of AI, for example fairness, transparency, accountability, and so on, I think we really need to redefine what fairness means. We really do, because right now, when we talk about fairness, we talk about its applicability to what you call the global north. If we look at fairness much more broadly, it will include the use of these technologies, and their impact, in any part of the world and any part of society. It is hard for me to imagine cooperation at the global level where we all get together and happily develop something; I am not sure this can really happen unless the threat is imminent. So when we think about global cooperation and global capacity building, we should not start from threats. We should start from building a better future; we should start from benefits, and I think fairness would be the best place to start. How do we make technology fair? How do we make every community benefit from it? I know you probably want me to talk about practical steps, and I will be honest: I do not have an answer to that question. Unless we frame our starting point to include fairness for every country, every region and every user, instead of threats, instead of "oh my God, we are all going to die tomorrow from AI" or "we are going to be insecure tomorrow", we will not get far. We should start with the benefits: how AI can benefit everybody, every population, every community, everyone. If we start from the premise of good, define it, and widen the frame that already exists in a way, I think that would be a much better place to begin. In terms of practical steps, I do believe the baby steps already taken by civil society and by industry, where certain players threw away the concept of "move fast and break things" in favour of "let's be more open, more fair, more transparent, more inclusive", are already a good start. I do not know whether attempts to regulate will bring us there; actually, I do not think so. I think attempts to regulate should go hand in hand with what we do as civil society, as the technical community, and as companies cooperating with each other. But to me, honestly, the first step would be to redefine the concept of fairness.
Waqas Hassan:
I'd like to add one thing to what Tatiana said; she has spoken about global cooperation. I'd like to take this from the other angle, the reverse angle, which is starting at the national level. Information sharing, threat intelligence sharing, developing tools and mechanisms for using AI in cyber defense: the starting point is, of course, your national-level policy or national-level initiatives, or whichever body you have in your country. In Pakistan, for example, we do have such bodies. At the APAC level there are bodies as well; for example, there is an Asia-Pacific CERT, and they run cyber drills. The ITU also organizes cyber drills for countries to participate in. So some form of collaboration is happening. How effective it is, I can't say for sure, because this particular mix of AI into cyber security is something I haven't seen on any agenda so far. But the starting point, again, is a discussion forum like the one we are sitting in right now, like the IGF: a forum for a national cyber security dialogue to start, which can then grow into a regional dialogue, which eventually feeds into the global dialogue. Whether it is about humans or about AI, the starting point of every solution is a dialogue, in my opinion. So I think this is where collaboration comes in, and where information sharing comes in, especially for developing countries. If you don't have the tools or the technologies, at least what we have is each other to share information with. So I think that should be the starting point. Thank you.
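Waqas does not name a mechanism, but CERT-to-CERT information sharing of the kind he describes is commonly done with the OASIS STIX standard. The following is a minimal, hypothetical sketch using the stix2 Python library; the indicator name and IP address are invented for illustration.

```python
# Minimal sketch of machine-readable threat-intelligence sharing using
# STIX 2.1 (pip install stix2). All values here are invented examples.
from stix2 import Bundle, Indicator

indicator = Indicator(
    name="Phishing host reported by a national CERT",
    description="IP observed serving credential-harvesting pages.",
    pattern="[ipv4-addr:value = '203.0.113.7']",  # documentation-range IP
    pattern_type="stix",
)

# Bundling produces JSON that a peer CERT (or a TAXII server) can ingest.
bundle = Bundle(objects=[indicator])
print(bundle.serialize(pretty=True))
```

Exchanging records in a common format like this is what allows a drill run by a regional CERT or the ITU to translate into tooling that each participating country can reuse.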
Babu Ram Aryal:
Michael, on cooperation: how can we build cooperation on cyber defense, and what kind of strategies can we adopt?
Michael Ilishebo:
So basically, we have discussed a lot of issues, most of them to do with fairness, accountability and the ethical use of AI. There are many challenges that we face as law enforcers. But in all, this discussion will take on a broader dimension in the future, when law enforcers themselves start deploying AI to detect, prevent and solve crime. That will affect all of us, because today we are looking at AI being used by criminals to target individuals for money or to spread fake news. But now imagine you are about to commit a crime and AI detects that you are about to commit it; there is a concept of pre-crime. That will affect each and every one of us: a simple pattern of behavior could be used to predict what crime you might commit in the future. That raises issues of human rights, issues of ethical use, a lot of issues, because in the end it will affect everyone. Today we are discussing the challenges that AI-driven defense systems have brought, but in the future, and not even the distant future, probably in just a few years' time, being judged, assessed and profiled by AI is something all of us will have to face. So as much as we discuss the other challenges, let us also focus on the future, when AI starts policing us.
Babu Ram Aryal:
Thanks, Michael. One question from the floor? Yes, please, come in; the mic is there. Please introduce yourself. Thank you.
Audience:
Thank you for the insightful reflection. This is Santosh Siddhal from Nepal, Digital Rights Nepal. On the question of collaboration, I understood that we have to define the concepts first, and I think we also have to define the concept of cyber defense. If we are moving from cyber security to cyber defense, we need an open discussion, because defense is the job of government. Government is normally the dominant actor in national security and defense, and it does not want all the actors at the table, citing national security. That has happened on lots of other issues, be it freedom of expression or other civil rights. So national security is treated as the government's domain. And when we talk about promoting cyber defense, not just cyber security, in developing countries: within those countries, whom are we empowering? The government? Civil society? The tech companies? Which stakeholder are we talking about? I think we have to deconstruct the whole concept of cyber defense, and at the same time deconstruct "developing countries". Within developing countries, in AI regulation, which we also talked about, and in the discussion of cyber defense, is civil society at the table to discuss these issues? I'll give you one example. Nepal recently adopted its national cyber security policy, and one of its provisions is that ICT-related technology or consultancy will be procured through a different system than the existing public procurement process, a process to be defined by the government. So there is now a new shield, a new layer, behind which the public and civil society cannot discuss what kind of technology the government is importing into the country, or what kind of consultation it is having on cyber security issues. While talking about these issues, another factor we have to discuss is the capacity of the government to implement this kind of defense: whether other governments are supporting them, whether the capacity is available within the national context, and whether geopolitics is in play, because in many situations cyber defense is part of geopolitics as well. We have to consider that dimension too. In my opinion, as was said earlier, the technologies differ but the values are the same, so we have to focus on the values, and I think the Human Rights Charter and internet rights and principles are the basic values we have to uphold. Somebody earlier contrasted values on paper with values in the practical world. I think we should at least start with the values we have already agreed on paper, and then make them practical in real life. Thank you.
Babu Ram Aryal:
Thank you. We have just eight minutes left. Can you please briefly share your thoughts?
Audience:
Hi, thank you. My name is Yasmin, from the UN Institute for Disarmament Research. I just have a quick question. I have been following the issue of AI and cyber security for a few years now, and I see that while the two fields are deeply interconnected, at the multilateral level, other than in processes like the IGF, and even there only recently, most deliberations are done in silos. You have processes for cyber and processes on AI, but they don't really interact with each other. At the same time, I see increased awareness of the need for governance solutions that are multidisciplinary and address technology as a whole. One of the approaches that has been proposed is responsible behavior, and states are trying to develop their national security strategies along the lines of responsible behavior in using these technologies. So I was wondering, for all the panelists, based on your respective areas of work, whether in the public or private sector: do you have any best practices you would recommend or share with the audience here? When states are trying to develop their national security strategies, what best practices have worked, in your experience, for governing these technologies in the security and defense sector? Thank you.
Babu Ram Aryal:
Thank you very much for this question, but we have very little time, just six minutes left. A very quick intervention from Michael, and then takeaways from all the panelists.
Michael Ilishebo:
So, to touch briefly on what she has asked about integrating AI into defense systems: she mentioned national cyber security strategies, and there is also a need for regulatory frameworks, capacity building, collaboration, data governance, incident response and ethical guidelines, all within international cooperation. As she put it, we are discussing two important issues in silos: cyber security is discussed as a standalone topic without due consideration for AI, and in the same way AI is discussed in isolation without due consideration for cyber security and its impact. There should be a point at which we discuss the two as a single subject, based on the impact and the problems we are trying to solve.
Tatiana Tropina:
Closing remarks? Yes. I would like to address this question, because to me it is a very interesting one, as somebody who deals with law and policy and UN processes. First of all, this is not the first time that two interrelated processes have been artificially separated in the UN. Look at the cyber security and cybercrime processes, for example: they are also separated. And then we have cyber security and AI, and so on. As for best practices, I will be honest here as well: I do not think there are best practices yet. We are still building our capacity to address these issues. There are quite a few things I am watching that could become best practices. First, on guiding principles: I believe they are nice and good whenever they appear, but they do not really tell you how to achieve transparency, fairness or accountability. So I am currently looking at the Council of Europe's proposal for a global treaty on AI. It is quite general as a framework, but it might be a game changer from the human rights perspective, which plays into the fairness perspective in terms of agreed values. I am also looking at the EU AI Act, because there we might reach a point where, at the regulatory level, we prohibit profiling and certain other uses of AI. That might become a game changer and a best practice. So this is what I would be looking at: not the UN, but the EU level. Thank you.
Babu Ram Aryal:
Sarim.
Sarim Aziz:
Thanks, Babu. Yeah, I think you're certainly right; it's still early days. We at Meta are a member of the Partnership on AI along with other industry players, and multi-stakeholder collaboration, which I know has been mentioned in every session, is the solution. There are good examples to use as north stars in other areas. Take child safety, or take terrorism: AI is already doing some pretty advanced defensive work on both fronts. On child safety, the National Center for Missing and Exploited Children runs the CyberTipline, through which law enforcement in different countries is informed about CSAM detected on platforms. That is a public-private partnership where industry works with them, and it enables law enforcement around the world on child safety and child exploitation. That's a good example of where we could get to on cyber security. Same with terrorism: GIFCT, the Global Internet Forum to Counter Terrorism, is a very important forum that industry is part of, where we ensure that platforms are not used for terrorist purposes. So I come back to the harms: what is the harm we are trying to address, and do we have the right people focused on it? On the AI front, we are at the beginning. We need technical standards, like we have in other areas: things like watermarking, for example. What does that look like for audiovisual content? That can be fixed on the production side, if there is consensus not just within industry but across countries, including developing countries. But I do think the short-term opportunity for developing countries is to take advantage of what is already open. We have a bug bounty program, for example, and incentivizing local researchers and developers, giving them data to help find vulnerabilities and to train systems for local purposes, is the immediate opportunity, because these models are now open source and available.
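On the watermarking idea, the following is a toy sketch of the concept only: real provenance standards (for example C2PA content credentials) and the watermarks platforms are exploring for AI media are far more tamper-resistant than this least-significant-bit marker, which a single JPEG re-encode would destroy.

```python
# Toy illustration of an invisible watermark: hide a provenance tag in the
# least significant bit of an image's red channel. Survives only lossless
# formats such as PNG; real schemes are designed to resist re-encoding.
import numpy as np
from PIL import Image

def embed_tag(img: Image.Image, tag: bytes) -> Image.Image:
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    red = pixels[..., 0].copy().reshape(-1)
    if bits.size > red.size:
        raise ValueError("tag too large for this image")
    red[: bits.size] = (red[: bits.size] & 0xFE) | bits  # overwrite lowest bit
    pixels[..., 0] = red.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def read_tag(img: Image.Image, n_bytes: int) -> bytes:
    red = np.array(img.convert("RGB"), dtype=np.uint8)[..., 0].reshape(-1)
    return np.packbits(red[: n_bytes * 8] & 1).tobytes()

tag = b"ai-generated"
marked = embed_tag(Image.new("RGB", (64, 64), "white"), tag)
assert read_tag(marked, len(tag)) == tag
```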
Babu Ram Aryal:
Sorry, Waqas, you have just one minute left.
Waqas Hassan:
Okay, one minute. I think we look to the government to do most things, almost everything, but the weight of responsibility for being cyber-ready has to be distributed: not just across government, but among the users, the platforms, academia, everybody. I am circling back to the multi-stakeholder model that we have and the collaborative approach that we always follow. If, in the developing countries, we cannot have the technological capacity to handle these challenges, at least what we do have is a shared responsibility that all of us can carry, to make sure we are at least somewhat ready to address the challenges posed by AI and cyber security.
Babu Ram Aryal:
Thank you. We have completed this discussion exactly on time, and I would like to thank all of you. A couple of things we discussed were very significant: one is identifying harm, and the other is capacity. Without taking time from the next session, I would like to thank our speakers, our online moderator, our online audience, and of course all of you here at this very late evening session of the Kyoto IGF. Thank you very much. I conclude this session here. Thank you very much.
Speakers
Audience
Speech speed
173 words per minute
Speech length
2115 words
Speech time
735 secs
Arguments
Individual countries lack capacity to negotiate with big tech players
Supporting facts:
- many resources are collected from the so-called third world and global south and directed to developed economies
- technology is concentrated to the global north
Topics: tech, negotiations, big tech, capacity building
Access to AI and technology should be affordable and easy.
Supporting facts:
- ChatGPT has a premium and free version, most users are from developed economies and it’s difficult for others to afford.
- If not well equipped with resources, it becomes difficult to tackle challenges.
Topics: AI, Technology, Affordability, Accessibility
Inequality issues are apparent in the real world, where accessibility and inclusivity are not necessarily ubiquitous.
Supporting facts:
- The world is not equal, as the politics of the global north and global south suggest; accessibility and inclusivity look good on paper but are not universally implemented.
Topics: International Relations, Inequality, Accessibility, Inclusivity, Civil Society
AI threat is substantial and can lead to digital new-age conflicts.
Supporting facts:
- AI can be used as a tool against humans by computers leading to a potential threat
Topics: Artificial Intelligence, New-age Conflict, Digital Threats
Concerns over accessibility and inclusion of AI in developing countries
Supporting facts:
- Infrastructure and access to electricity is a problem in Africa
- Many people do not have internet access or know what is digital
Topics: AI, Global South, Infrastructure, Electricity, Internet Access, Digital Literacy
Need to improve digital literacy to understand the risks and benefits of AI
Supporting facts:
- Understanding of digital literacy is connected to understanding the risks and potential benefits of AI
Topics: Digital Literacy, AI
Necessity of defining the concept of cyber defense
Supporting facts:
- Cyber defense is a job of the government and generally the government is the dominant actor while dealing with national security.
- It is imperative to understand who is being empowered in developing countries through cyber defense – government, civil society or tech companies.
Topics: Cyber Defense, Government
Need to discuss the capacity of government to implement cyber defense
Supporting facts:
- Nepal recently adopted a national cyber security policy.
- The policy allows ICT-related technology or consultancy to be procured through a different system than the existing public procurement process.
- This new process will be defined by the government.
- This leaves room for lack of transparency and discussion on what technology the government imports into the country or what kind of consultation happens on cyber security issues.
Topics: Cyber Defense, Government Capacity
Need to uphold agreed values
Supporting facts:
- We need to focus on values that are universally agreed upon and make practical application of those values in real life.
- The Human Rights Charter and internet rights and principles are the basic values that should be upheld.
Topics: Values, Human Rights Charter, Internet Rights and Principles
AI and cybersecurity deliberations are typically done in isolation at multilateral level
Supporting facts:
- Most of the deliberations on AI and cybersecurity are done in silo
- Processes for cyber and AI don’t often interact
Topics: AI, Cybersecurity, Multilateral level
Trend towards multidisciplinary governance solutions that cover all aspects of technology
Supporting facts:
- Increasing awareness on coming up with governance solutions that cover tech altogether
Topics: AI, Cybersecurity, Technology, Governance solutions
Responsible behavior as a national security strategy for managing AI and Cybersecurity
Supporting facts:
- States are trying to develop their national security strategies along the lines of responsible behavior on using these technologies
Topics: AI, Cybersecurity, National security strategy, Responsible behavior
Report
The extended analysis highlights several important points related to the impact of technology and AI on the global south. One key argument is that individual countries in the global south lack the capacity to effectively negotiate with big tech players.
This imbalance is due to the concentration of technology in the global north, which puts countries in the global south at a disadvantage. The supporting evidence includes the observation that many resources collected from the third world and global south are directed towards the developed economy, exacerbating the technological disparity.
Furthermore, it is suggested that AI technology and its benefits are not equally accessible to and may not equally benefit the global south. This argument is supported by the fact that the majority of the global south’s population resides in developing countries with limited access to AI technology.
The issue of affordability and accessibility of AI technology is raised, with the example of ChatGPT, an AI system that is difficult for people in developing economies to afford. The supporting evidence also highlights the challenges faced by those with limited resources in addressing AI technology-related issues.
Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as persistent issues. While accessibility and inclusivity may be promoted in theory, they are not universally implemented, thereby exposing existing inequalities across different regions. The argument is reinforced by the observation that politics between the global north and south often hinder the universal implementation of accessibility and inclusivity practices.
The analysis also raises questions about the transfer of technology between the global north and south and its implications, particularly in terms of international relations and inequality. The sentiment surrounding this issue is one of questioning, suggesting the need for further investigation and examination.
Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents AI as a tool with the potential to be used against humans, leading to various threats. Furthermore, the importance of responsive measures that keep pace with technological evolution is emphasized.
The argument is that measures aimed at addressing new tech threats need to be as fast and efficient as the development of the technology itself. Concerns about the accessibility and inclusion of AI in developing countries are also highlighted. The lack of infrastructure and of access to electricity in regions such as Africa poses challenges to the adoption of AI technology.
Additionally, limited internet access and digital literacy hinder the effective integration of AI in these countries. The potential risks that AI poses, such as job insecurity and limited human creativity, are areas of concern. The sentiment expressed suggests that AI is perceived as a threat to job stability, and there are fears that becoming consumers of AI may restrict human creativity.
To address these challenges, it is argued that digital literacy needs to be improved in order to enhance understanding of the risks and benefits of AI. The importance of including everyone in the advancement of AI, without leaving anyone behind, is emphasized.
The analysis delves into the topic of cyber defense, advocating for the necessity of defining cyber defense and clarifying the roles of different actors, such as governments, civil society, and tech companies, in empowering developing countries in this field. The capacity of governments to implement cyber defense strategies is questioned, using examples such as Nepal adopting a national cybersecurity policy with potential limitations in transparency and discussions.
The need to uphold agreed values, such as the Human Rights Charter and internet rights and principles, is also underscored. The argument is that practical application of these values is necessary to maintain a fair and just digital environment. The analysis points out the tendency for AI and cybersecurity deliberations to be conducted in isolation at the multilateral level, emphasizing the importance of multidisciplinary governance solutions that cover all aspects of technology.
Additionally, responsible behavior is suggested as a national security strategy for effectively managing the potential risks associated with AI and cybersecurity. In conclusion, the extended analysis highlights the disparities and challenges faced by the global south in relation to technology and AI.
It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to ensure equitable benefits and mitigate risks. Ultimately, the goal should be to empower all nations and individuals to navigate the evolving technological landscape and foster a globally inclusive and secure digital future.
Babu Ram Aryal
Speech speed
117 words per minute
Speech length
1603 words
Speech time
820 secs
Arguments
Babu Ram Aryal discusses the importance and potential risks of artificial intelligence in cyber defence
Supporting facts:
- He introduces a panel of experts from different fields
- He emphasizes on the role of artificial intelligence in cyber defence
Topics: Artificial Intelligence, Cybersecurity
Report
Babu Ram Aryal advocates for comprehensive discussions on the positive aspects of integrating artificial intelligence (AI) in cybersecurity. He emphasizes the crucial role that AI can play in enhancing cyber defense measures and draws attention to the potential risks associated with its implementation.
Aryal highlights the significance of AI in bolstering cybersecurity against ever-evolving threats. He stresses the need to harness the capabilities of AI in detecting and mitigating cyber attacks, thereby enhancing the overall security of digital systems. By automating the monitoring of network activities, AI algorithms can quickly identify suspicious patterns and respond in real-time, minimizing the risk of data breaches and information theft.
Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurity. As AI systems become increasingly intelligent and autonomous, there are concerns about their susceptibility to malicious exploitation or manipulation. Understanding these vulnerabilities is crucial in developing robust defense mechanisms to safeguard against such threats.
To facilitate a comprehensive examination of the topic, Aryal assembles a panel of experts from diverse fields, promoting a multidisciplinary approach to exploring the intersection of AI and cybersecurity. This collaboration allows for a detailed analysis of the potential benefits and challenges presented by AI in this domain.
The sentiment towards AI’s potential in cybersecurity is overwhelmingly positive. The integration of AI technologies in cyber defense can significantly enhance the security of both organizations and individuals. However, there is a need to strike a balance and actively consider the associated risks to ensure ethical and secure implementation of AI.
In conclusion, Babu Ram Aryal advocates for exploring the beneficial aspects of AI in cybersecurity. By emphasizing the role of AI in strengthening cyber defense and addressing potential risks, Aryal calls for comprehensive discussions involving experts from various fields. The insights gained from these discussions can inform the development of effective strategies that leverage AI’s potential while mitigating its associated risks, resulting in improved cybersecurity measures for the digital age.
Michael Ilishebo
Speech speed
159 words per minute
Speech length
1475 words
Speech time
555 secs
Arguments
AI has enabled crimes that were previously not possible
Supporting facts:
- There are instances where individuals without computing knowledge have programmed malware
- AI tools are free and readily available online
Topics: AI, Cybercrime, Law Enforcement
AI poses a threat to digital forensic tools
Supporting facts:
- AI can generate a video that appears genuine resulting in people believing it
- AI can mimic someone’s voice creating confusion
Topics: AI, Digital Forensic, Cybercrime
Developing countries are primarily consumers of AI services and products from developed nations
Supporting facts:
- Developing countries aren’t at a stage where they can create their own localised AI solutions or train AI on their own data sets
- The public availability of language models can be manipulated for criminal purposes
Topics: Artificial Intelligence, Digital Divide
The crime trend is increasing along with the borderless nature of the internet and the use of AI
Supporting facts:
- Meta reported close to a billion fake accounts in the first quarter of their use of the language model
- Misinformation, hate speech, and other inappropriate acts are often circulated
Topics: Internet Crime, Artificial Intelligence
Developing countries face challenges in filtering harmful content due to lack of capacity
Supporting facts:
- There could be serious damage if the content that Meta has brought down for ethical reasons is deployed to a country without the resources to filter it
Topics: Content Filtering, Digital Divide
AI has the potential to revolutionize law enforcement
Supporting facts:
- AI can be used to detect, prevent and solve crime
- Concept of pre-crime where AI detects potential future criminal behavior
Topics: Artificial Intelligence, Law Enforcement
Need for integration of AI and cybersecurity in defense systems
Supporting facts:
- Issues of national cybersecurity strategies
- Need for regulatory frameworks
- Capacity building collaboration
- Data governance
- Incident response
- Ethical Guidelines
Topics: AI, Cybersecurity, Defense Systems
Report
The use of Artificial Intelligence (AI) has raised concerns regarding its negative impact on different aspects of society. One concern is that AI has enabled crimes that were previously impossible. An alarming trend is the accessibility of free AI tools online, allowing individuals with no computing knowledge to program malware for criminal purposes.
Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that surpasses human comprehension, making it difficult to differentiate between AI-generated content and human interaction. This creates obstacles for law enforcement in investigating and preventing crimes.
Additionally, AI’s ability to generate realistic fake videos and mimic voices complicates the effectiveness of digital forensic tools, threatening their reliability. Developing countries face unique challenges with regard to AI. They primarily rely on AI services and products from developed nations and lack the capacity to develop their own localised AI solutions or train AI based on their data sets.
This dependency on foreign AI solutions increases the risk of criminal misuse. Moreover, the public availability of language models can be exploited for criminal purposes, further intensifying the threat. The borderless nature of the internet and the use of AI have contributed to a rise in internet crimes.
Meta, a social media company, reported the detection of nearly a billion fake accounts within the first quarter of their language model implementation. The proliferation of fake accounts promotes the circulation of misinformation, hate speech, and other inappropriate content. Developing countries, facing resource limitations, struggle to effectively filter and combat such harmful content, exacerbating the challenge.
Notwithstanding the negative impact, AI also presents positive opportunities. AI has the potential to revolutionize law enforcement by detecting, preventing, and solving crimes. AI’s ability to identify patterns and signals can anticipate potential criminal behavior, often referred to as pre-crime detection.
However, caution is necessary to ensure the ethical use of AI in law enforcement, preventing human rights violations and unfair profiling. In the realm of cybersecurity, the integration of AI has become essential. National cybersecurity strategies need to incorporate AI to effectively defend against cyber threats.
This integration requires the establishment of regulatory frameworks, collaborative capacity-building efforts, data governance, incident response mechanisms, and ethical guidelines. AI and cybersecurity should not be considered in isolation due to their interconnected impact on securing digital systems. In conclusion, while AI brings numerous benefits, significant concerns exist regarding its negative impact.
From enabling new forms of crime to posing challenges for law enforcement and digital forensic tools, AI has far-reaching implications for societal safety and security. Developing countries, particularly, face specific challenges due to their reliance on foreign AI solutions and limited capacity to filter harmful content.
Policymakers must prioritize ethical use of AI and address the intertwined impact of AI and cybersecurity to harness its potential while safeguarding against risks.
Sarim Aziz
Speech speed
220 words per minute
Speech length
4068 words
Speech time
1108 secs
Arguments
AI presents more opportunities for cybersecurity and protection rather than threats
Supporting facts:
- AI removed 676 million fake accounts in one quarter
- The performance report of adversarial threats is updated quarterly
- Meta’s cybersecurity team is working on staying ahead of the potential threats posed by AI and other sophisticated methods.
Topics: AI, Cybersecurity, Inauthentic behavior detection, AI generated photos, Phishing, Scams
AI should be used more in cybersecurity
Supporting facts:
- AI helps remove 99.7% of fake accounts
- AI technology can be used to keep people safe
Topics: AI, Cybersecurity
New technology can seem scary but we need to move beyond that and focus on managing short-term risks.
Topics: AI, Technology, Risks
AI can help in areas where human capacity is not enough, like content moderation on platforms.
Topics: AI, Online Platforms, Content Moderation
Policy makers should incentivize open innovation and create safe environments for testing.
Topics: Policy Makers, Open Innovation, Safety
The risks vectors haven’t evolved, bad actors are using the same methods as before
Supporting facts:
- Vectors include phishing and fake accounts
- Protection measures for phishing include developing one-time use authentication credentials
- Fake account detection is based on behavior rather than appearance
Topics: Cybersecurity, Artificial Intelligence, Phishing
Making technology inclusive has to be by design
Supporting facts:
- Frameworks on AI are being led by Japan, OECD, and the White House focusing on inclusivity, fairness, and lack of bias
Topics: Inclusivity, AI
Open innovation is the answer
Supporting facts:
- AI can be a game changer, as it’s about skill and doing things at scale
- Technology, such as the Llama 2 model, has been open sourced and can be accessed globally
Topics: Open Innovation, AI
AI can be a game changer
Supporting facts:
- AI is about skill and scale
- AI can help make systems more efficient and effective
Topics: AI
Tech community is creating language specific AI models
Supporting facts:
- Derivatives of AI models are already being made in various languages
- A university in Tokyo created a Japanese version of the Llama 2 model
Topics: AI, Language Processing
Sarim Aziz’s model is freely available and open source.
Supporting facts:
- The model is already public
- Anyone can build a ChatGPT competitor using that model
Topics: Open Source, Artificial Intelligence, ChatGPT
Release of open-source AI model by Meta
Supporting facts:
- Meta launched Llama 2, an open-source large language model.
Topics: Artificial Intelligence, Open Source, Technology
AI can be used in advanced ways to ensure safety and combat terrorism.
Supporting facts:
- AI already performs advanced defensive work in child safety and counterterrorism.
- The National Center for Missing and Exploited Children uses AI to inform law enforcement in different countries based on CSAM detected on platforms.
- The Global Internet Forum to Counter Terrorism (GIFCT) ensures that platforms are not used for terrorism purposes.
Topics: Artificial Intelligence, Safety, Counterterrorism
Multistakeholder collaboration is the solution for managing AI-related harms.
Supporting facts:
- There are examples like child safety and anti-terrorism where multistakeholder collaboration is effective.
- Public-private partnerships play a key role in these collaborations.
Topics: Artificial Intelligence, Multistakeholder collaboration
Technical standards need to be devised for handling AI content.
Supporting facts:
- Standards like watermarking for audiovisual content could be useful (a toy sketch follows this argument).
- This could be tackled on the production side if there is consensus among all stakeholders.
Topics: Artificial Intelligence, Technical standards
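As a toy illustration of the watermarking idea raised above, the sketch below embeds and extracts an invisible tag in an image’s least-significant bits, assuming the Pillow library is installed. The MARKER value is hypothetical, and the standards actually under discussion (such as cryptographically signed provenance metadata) are considerably more robust.
```python
# Toy invisible watermark: hide a tag in the least-significant bit of
# each pixel's red channel. The output must be saved losslessly (e.g.
# PNG); lossy formats like JPEG would destroy the embedded bits.
from PIL import Image

MARKER = "ai-generated"  # hypothetical provenance tag

def embed(img: Image.Image, tag: str = MARKER) -> Image.Image:
    out = img.convert("RGB")
    bits = "".join(f"{b:08b}" for b in tag.encode()) + "0" * 8  # NUL-terminated
    w, h = out.size
    assert len(bits) <= w * h, "image too small for the tag"
    px = out.load()
    for i, bit in enumerate(bits):
        x, y = i % w, i // w
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)
    return out

def extract(img: Image.Image) -> str:
    img = img.convert("RGB")
    px = img.load()
    w, h = img.size
    data, byte = [], 0
    for i in range(w * h):
        byte = (byte << 1) | (px[i % w, i // w][0] & 1)
        if i % 8 == 7:
            if byte == 0:           # hit the NUL terminator
                break
            data.append(byte)
            byte = 0
    return bytes(data).decode(errors="replace")
```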
Report
In the discussion, multiple speakers addressed the role of AI in cybersecurity, emphasizing that AI offers more opportunities for cybersecurity and protection rather than threats. AI has proven effective in removing fake accounts and detecting inauthentic behavior, making it a valuable tool for safeguarding users online.
One speaker stressed the importance of focusing on identifying bad behavior rather than content, noting that fake accounts were detected based on their inauthentic behavior, regardless of the content they shared. The discussion also highlighted the significance of open innovation and collaboration in cybersecurity.
Speakers emphasized that an open approach and collaboration among experts can enhance cybersecurity measures. By keeping AI accessible to experts, the potential for misuse can be mitigated. Additionally, policymakers were urged to incentivize open innovation and create safe environments for testing AI technologies.
The potential of AI in preventing harms was underscored, with the “StopNCII.org” initiative serving as an example of using AI to block non-consensual intimate imagery across platforms and services. The discussion also emphasized the importance of inclusivity in technology, with frameworks led by Japan, the OECD, and the White House focusing on inclusivity, fairness, and eliminating bias in AI development.
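The hash-matching approach behind initiatives like StopNCII.org can be sketched as follows: only a fingerprint of the image is shared, never the image itself. This toy average hash, which assumes the Pillow library is installed, stands in for the far more robust perceptual hashing used in practice.
```python
# A rough sketch of privacy-preserving image matching: platforms compare
# fingerprints against a shared blocklist without exchanging the images.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Fingerprint an image: shrink, grayscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches(h1: int, h2: int, max_distance: int = 5) -> bool:
    """Two images 'match' if their hashes differ in few bits (Hamming)."""
    return bin(h1 ^ h2).count("1") <= max_distance

# blocked_hashes stands in for the shared database that participating
# platforms would check uploads against.
blocked_hashes: set[int] = set()
```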
Speakers expressed support for open innovation and the sharing of AI models. Meta’s release of the open-source AI model “Llama 2” was highlighted, enabling researchers and developers worldwide to use and contribute to its improvement. The model was also submitted for vulnerability evaluation at DEF CON, a cybersecurity conference.
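As a hedged sketch of what such open model access looks like in practice, the snippet below loads Llama 2 through the Hugging Face transformers library; it assumes transformers and torch are installed and that access to the gated meta-llama/Llama-2-7b-hf repository has been granted.
```python
# Loading openly released Llama 2 weights via Hugging Face transformers.
# Requires accepting Meta's license for the gated repository first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Cyber defence priorities for developing nations include"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
This accessibility is what allows derivatives, such as the Japanese-language version mentioned in the arguments above, to be fine-tuned locally.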
The role of AI in content moderation on online platforms was discussed, recognizing that human capacity alone is insufficient to manage the vast amount of content generated. AI can assist in areas where human resources fall short. Furthermore, the discussion emphasized the importance of multistakeholder collaboration in managing AI-related harms, such as child safety and counterterrorism efforts.
Public-private partnerships were considered crucial in effectively addressing these challenges. The potential benefits of open-source AI models for developing countries were explored. It was suggested that these models present immediate opportunities for developing countries, enabling local researchers and developers to leverage them for their specific needs.
Lastly, the need for technical standards to handle AI content was acknowledged. The discussion proposed watermarking for audiovisual content as a potential standard, provided there is consensus among stakeholders. Overall, the speakers expressed a positive sentiment regarding the potential of AI in cybersecurity.
They highlighted the importance of open innovation, collaboration, inclusivity, and policy measures to ensure the safe and responsible use of AI technologies. The discussion provided valuable insights into the current state and future directions of AI in cybersecurity.
Tatiana Tropina
Speech speed
179 words per minute
Speech length
3064 words
Speech time
1028 secs
Arguments
Risk-based and outcome-based regulation is currently considered for AI
Supporting facts:
- European Union is trying to address threats and opportunities of AI with regulation that would not stifle innovation
- Risk-based regulation assesses risks when new AI systems are being developed
- Outcome-based regulation aims to create a framework for desired outcomes, allowing the industry to achieve them by their own means
Topics: Artificial Intelligence, Regulatory Policies, Cybersecurity
There is a need to understand threats in AI including harm from crimes, deep fakes and bias.
Supporting facts:
- AI models can be biased based on the information fed to them
- Technologies created in the West can be doubly biased when transferred to and adopted in other regions
Topics: AI threats, AI bias, Security, Regulation
It’s crucial to have cooperation between industry, researchers, governments and law enforcement in AI development.
Supporting facts:
- Cooperation can lead to better threat management and defense in AI
Topics: Industry cooperation, Government cooperation, Law enforcement cooperation
Misuse will always happen
Supporting facts:
- Any technology would be misused
- Crime follows opportunity
Topics: Technology Misuse, Crime, Human Fate
Focus on harms, not on technology
Supporting facts:
- Misuse of technology can cause harm
- We need to focus on the harm caused rather than the technology itself
Topics: Technology Misuse, Human Harm, Risk Management
AI brings so much good
Supporting facts:
- AI can be used effectively for comparison of hashes and databases
Topics: Artificial Intelligence, Positive Impact
Humans are the main source of mistakes and abuses
Supporting facts:
- Humans are the ones making mistakes and abusing this technology
- The human is the weakest link
Topics: Human Error, Technology Misuse
Lack of capacity to assess AI and cyber threats in both global south and global north
Supporting facts:
- The threat persists, and there is a lack of understanding of what kind of threat it poses in terms of national security and cyber operations
- The global north also lacks the capacity to understand the threat itself
Topics: Artificial Intelligence, Cyber Defense, Global South, Global North
Need for cooperation between global north and global south
Supporting facts:
- There is such a huge potential for cooperation between what you call global north and global south.
- There is a lot of fear-mongering going on as well
Topics: Artificial Intelligence, Cyber Defense, Global South, Global North
Need to redefine the concept of fairness in terms of AI
Supporting facts:
- Fairness currently mostly applies to the Global North
- Fairness should be more broadly applied to the impact of technology on all parts of the world
Topics: AI Principles, Global North and South, Technology Fairness, AI Benefits
Artificially separated, interrelated UN processes such as cybersecurity, AI and cybercrime
Supporting facts:
- Cybersecurity and cybercrime processes are artificially separated in the UN
Topics: UN processes, cybersecurity, AI, cybercrime
The absence of best practices in addressing the issues of cybersecurity and AI
Supporting facts:
- Capacity building is still needed to address these issues
Topics: best practices, guiding principles, cybersecurity, AI
Council of Europe’s proposal for global treaty on AI and EU AI Act could be potential game changers
Supporting facts:
- Council of Europe’s proposal for global treaty aims to achieve transparency, fairness, accountability
- The EU AI Act would prohibit profiling and some other AI uses
Topics: Council of Europe, global treaty on AI, EU AI Act
Report
The discussions surrounding AI regulation and challenges in the cybersecurity realm have shed light on the importance of implementing risk-based and outcome-based regulations. It has been recognized that while regulation should address the threats and opportunities presented by AI, it must also avoid stifling innovation.
Risk-based regulation, which assesses risks during the development of new AI systems, and outcome-based regulation, which aims to establish a framework for desired outcomes, allowing the industry to achieve them on their own terms, were highlighted as potential approaches. There are concerns regarding AI bias, accountability, and the transparency of algorithms.
There is a need to address these issues, along with the growing challenge of deepfakes. The evolving nature of AI technology also poses challenges such as the generation of malware and spear-phishing campaigns.
These concerns need to be effectively addressed to ensure the responsible and ethical development and deployment of AI. Cooperation between industry, researchers, governments, and law enforcement was emphasized as crucial for effective threat management and defense in the AI domain.
Building partnerships and collaboration among these stakeholders can enhance response capabilities and mitigate potential risks. While AI offers significant benefits, such as its effective use in hash comparison and database management, its potential threats and misuse require a deeper understanding and investment in research and development.
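The hash-comparison workflow mentioned above can be sketched with the Python standard library alone: files are fingerprinted and checked against a database of known hashes. The table and function names are illustrative, not drawn from any specific law enforcement system.
```python
# A minimal known-hash lookup: compute a file's SHA-256 and check it
# against a database of previously registered digests.
import hashlib
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE known_hashes (sha256 TEXT PRIMARY KEY)")

def register(digest: str) -> None:
    conn.execute("INSERT OR IGNORE INTO known_hashes VALUES (?)", (digest,))

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def is_known(path: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM known_hashes WHERE sha256 = ?", (sha256_of(path),)
    ).fetchone()
    return row is not None

# Example: register a digest (this one is SHA-256 of the empty file);
# any byte-identical file will then be flagged.
register("e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")
```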
The need to comprehend and address AI-related risks and challenges was underscored to establish future-proof frameworks. The discussions also highlighted the lack of capacity to assess AI and cyber threats globally, both in the global south and global north. This calls for increased efforts to enhance understanding and build expertise to effectively address such threats on a global scale.
Furthermore, the importance of cooperation between the global north and south was stressed, emphasizing the need for collaboration to tackle the challenges and harness the potential of AI technology. The concept of fairness in AI was noted as needing redefinition to encompass its impact globally.
Currently, fairness primarily applies to the global north, necessitating a broader perspective that considers the impact on all regions of the world. It was also suggested that global cooperation should focus on building a better future and emphasizing the benefits of AI.
Regulation was seen as insufficient on its own, requiring accompanying actions from civil society, the technical community, and companies. External scrutiny of AI algorithms by civil society and research organizations was proposed to ensure their ethical use and reveal potential risks.
The interrelated UN processes of cybersecurity, AI, and cybercrime were mentioned as somewhat artificially separated. This observation underscores the need for a more holistic approach to address the interdependencies and mutual influence of these processes. The absence of best practices in addressing cybersecurity and AI issues was recognized, emphasizing the need to invest in capacity building and the development of effective strategies.
The proposal for a global treaty on AI by the Council of Europe was deemed potentially transformative in achieving transparency, fairness, and accountability. Additionally, the EU AI Act, which seeks to prohibit profiling and certain other AI uses, was highlighted as a significant development in AI regulation.
The importance of guiding principles and regulatory frameworks was stressed, but it was also noted that they alone do not provide a clear path for achieving transparency, fairness, and accountability. Therefore, the need to further refine and prioritize these principles and frameworks was emphasized.
Overall, the discussions highlighted the complex challenges and opportunities associated with AI in cybersecurity. It is crucial to navigate these complexities through effective regulation, collaboration, investment, and ethical considerations to ensure the responsible and beneficial use of AI technology.
Waqas Hassan
Speech speed
159 words per minute
Speech length
2032 words
Speech time
766 secs
Arguments
Regulators face a balancing act in protecting both industry and consumers from cybersecurity risks, particularly from AI-related threats in developing countries.
Supporting facts:
- Regulators have dual connections with industry and consumers
- Cybersecurity is a major challenge for developing countries
- Defenses are typically reactive, especially in developing countries
- AI and emerging technologies pose significant challenges
Topics: AI, Telecommunication Regulation, Cybersecurity
The pace of evolution of threats is unequal to the pace at which defense mechanisms are improving
Supporting facts:
- The capacity of those seeking to misuse technology outweighs the capacity of those who must defend against it
- Forensic analysis is not keeping pace with the speed at which crimes occur
Topics: Cybersecurity, Forensics, Cyber Threats
Developing countries’ cyber defense often lags behind due to insufficient tools, technologies, knowledge, resources, and investments
Topics: Cybersecurity, Developing Countries, Defense Mechanisms
Standardization of technology is necessary for its global accessibility
Supporting facts:
- Meta has open-sourced its AI model for public use
- Standardized technology is easier for both developed and developing countries to utilize
Topics: Technology transfer, AI, Standardization
Sharing of information, tools, experiences, and human resources is integral in tackling AI misuse and improving cybersecurity posture
Supporting facts:
- Developed countries have the investment muscle for AI defence mechanisms, which is needed by developing countries
- It’s necessary to know what kind of new crimes are being introduced and how AI is being misused
Topics: AI, Cybersecurity, Information Sharing
The starting point of every solution is a dialogue
Supporting facts:
- The Asia-Pacific CERT and the ITU have already initiated some form of cooperation around cybersecurity
- National level security initiatives and agencies already exist and work in this domain
- Sharing information and threat intelligence among countries can be a starting point.
Topics: Global cooperation, AI-Cybersecurity, National Security policy, information sharing
The responsibility of being cyber ready needs to be distributed among users, platforms, and academia.
Topics: Cyber security, Multi-stakeholder model, Collaborative approach
Report
Regulators face a delicate balancing act in protecting both industry and consumers from cybersecurity risks, particularly those related to AI in developing countries. The rapid advancement of technology and the increasing sophistication of cyber threats have made it challenging for regulators to stay ahead in ensuring the security of both industries and individuals.
Developing nations require more capacity building and technology transfer from developed countries to effectively tackle these cybersecurity challenges. Technology, especially cybersecurity technology, is primarily developed in the West, putting developing countries at a disadvantage. This imbalance hinders their ability to effectively defend against cyber threats and leaves them vulnerable to cyber attacks.
It is crucial for developed countries to support developing nations by providing the necessary tools, knowledge, and resources to enhance their cyber defense capabilities. The pace at which cyber threats are evolving is surpassing the rate at which defense mechanisms are improving.
This disparity poses a significant challenge for regulators and exposes the vulnerability of developing countries’ cybersecurity infrastructure. A proactive approach is crucial in addressing this issue, as reactive defense mechanisms are often insufficient to mitigate the sophisticated cyber threats faced by nations worldwide.
Preventive measures, such as taking down potential threats before they cause harm, can significantly improve cybersecurity posture. Developing countries often face difficulties in keeping up with cyber defense due to limited tools, technologies, knowledge, resources, and investments. These limitations result in a lag in their cyber defense capabilities, leaving them susceptible to cyber attacks.
It is imperative for both developed and developing countries to work towards bridging this gap by standardizing technology, making it more accessible globally. Standardization promotes a level playing field and ensures that developed and developing nations alike have equal opportunities to defend against cyber threats.
Sharing information, tools, experiences, and human resources plays a vital role in tackling AI misuse and improving cybersecurity posture. Developed countries, which have the investment muscle for AI defense mechanisms, should collaborate with developing nations to share their expertise and knowledge.
This collaboration fosters a fruitful exchange of ideas and insights, leading to better cybersecurity practices globally. Global cooperation on AI cybersecurity should begin at the national level. Establishing a dialogue among nations, along with sharing information, threat intelligence, and the development of AI tools for cyber defense, paves the way for effective global cooperation.
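As a minimal illustration of what machine-readable threat-intelligence sharing between CERTs can look like, the sketch below builds an indicator record loosely modelled on the STIX 2.1 format; all field values are illustrative, not a real advisory.
```python
# A toy threat-intelligence indicator, loosely modelled on STIX 2.1.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "Phishing infrastructure reported by a partner CERT",
    "pattern": "[domain-name:value = 'login-example-bank.test']",
    "pattern_type": "stix",
    "valid_from": now,
}

# Serialised for exchange over whatever transport the CERTs agree on
# (TAXII is the companion protocol in the STIX ecosystem).
print(json.dumps(indicator, indent=2))
```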
Regional bodies such as the Asia-Pacific CERT and ITU already facilitate cybersecurity initiatives and can further contribute to this cooperation by organizing cyber drills and fostering collaboration among nations. The responsibility for being cyber ready needs to be distributed among users, platforms, and the academic community.
Cybersecurity is a collective effort that requires the cooperation and active involvement of all stakeholders. Users must remain vigilant and educated about potential cyber threats, while platforms and institutions must prioritize the security of their systems and infrastructure. In parallel, the academic community should actively contribute to research and innovation in cybersecurity, ensuring the development of robust defense mechanisms.
Despite the limitations faced by developing countries, they should still take responsibility for being ready to tackle cybersecurity challenges. Recognizing their limitations, they can leverage available resources, capacity building initiatives, and knowledge transfer to enhance their cyber defense capabilities. By actively participating in cybersecurity efforts, developing countries can contribute to creating a safer and more secure digital environment.
In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks, particularly those related to AI. To address these challenges, developing nations require greater support in terms of capacity building, technology transfer, and standardization of technology.
A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucial components in building robust defense mechanisms and ensuring a secure cyberspace for all.