AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023

10 Oct 2023 08:30h - 10:00h UTC

Event report

Speakers and Moderators

Speakers:
  • Nabinda Aryal, Government, Asia-Pacific Group
  • Tatiana Tropina, Civil Society, Western European and Others Group (WEOG)
  • Sarim Aziz, Private Sector, Asia-Pacific Group
  • Michael Ilishebo, Government, African Group
Moderators:
  • Babu Ram Aryal, Civil Society, Asia-Pacific Group

Disclaimer: This is not an official record of the IGF session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed. The official record of the session can be found on the IGF's official website.

Session report

Sarim Aziz

In the discussion, multiple speakers addressed the role of AI in cybersecurity, emphasizing that AI presents more opportunities than threats for cybersecurity and protection. AI has proven effective in removing fake accounts and detecting inauthentic behavior, making it a valuable tool for safeguarding users online. One speaker stressed the importance of focusing on identifying bad behavior rather than content, noting that fake accounts were detected based on their inauthentic behavior, regardless of the content they shared.

The discussion also highlighted the significance of open innovation and collaboration in cybersecurity. Speakers emphasized that an open approach and collaboration among experts can enhance cybersecurity measures. By keeping AI accessible to experts, the potential for misuse can be mitigated. Additionally, policymakers were urged to incentivize open innovation and create safe environments for testing AI technologies.

The potential of AI in preventing harms was underscored, with the “StopNCII.org” initiative serving as an example of using AI to block non-consensual intimate imagery across platforms and services. The discussion also emphasized the importance of inclusivity in technology, with frameworks led by Japan, the OECD, and the White House focusing on inclusivity, fairness, and eliminating bias in AI development.
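
The hash-matching approach behind initiatives like StopNCII.org can be illustrated with a toy sketch. This is a simplified illustration only: the function names are invented for this example, and real systems such as StopNCII use perceptual hashes (e.g. Meta's PDQ) computed on the victim's device, so that resized or re-encoded copies still match. A cryptographic hash, used here for brevity, only matches byte-identical files.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the image without storing the image itself."""
    return hashlib.sha256(image_bytes).hexdigest()

# A platform keeps only the fingerprints submitted by victims,
# never the images themselves.
blocklist = {fingerprint(b"reported-image-bytes")}

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the shared fingerprint blocklist."""
    return fingerprint(upload) in blocklist

assert should_block(b"reported-image-bytes") is True
assert should_block(b"unrelated-image") is False
```

The key design point, which the hash-sharing model preserves, is that imagery never leaves the victim's device: only opaque fingerprints are exchanged across platforms and services.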

Speakers expressed support for open innovation and the sharing of AI models. Meta’s release of the open-source AI model “Llama 2” was highlighted, enabling researchers and developers worldwide to use the model and contribute to its improvement. The model was also submitted for vulnerability evaluation at DEF CON, a cybersecurity conference.

The role of AI in content moderation on online platforms was discussed, recognizing that human capacity alone is insufficient to manage the vast amount of content generated. AI can assist in these areas, where human resources fall short.

Furthermore, the discussion emphasized the importance of multistakeholder collaboration in managing AI-related harms, such as child safety and counterterrorism efforts. Public-private partnerships were considered crucial in effectively addressing these challenges.

The potential benefits of open-source AI models for developing countries were explored. It was suggested that these models present immediate opportunities for developing countries, enabling local researchers and developers to leverage them for their specific needs.

Lastly, the need for technical standards to handle AI-generated content was acknowledged. The discussion proposed watermarking of audiovisual content as a potential standard, one that would require consensus among stakeholders.
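
The principle behind watermarking can be sketched as a toy least-significant-bit (LSB) scheme. This is illustrative only: the functions below are invented for this example, and real proposals for audiovisual provenance (such as C2PA metadata or statistical watermarks in generated media) are far more robust to compression and editing than a naive LSB embed.

```python
def embed(samples: bytes, tag: bytes) -> bytes:
    """Hide `tag` in the lowest bit of each carrier byte (toy LSB watermark)."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(samples):
        raise ValueError("carrier too small for tag")
    out = bytearray(samples)
    for pos, bit in enumerate(bits):
        out[pos] = (out[pos] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract(samples: bytes, tag_len: int) -> bytes:
    """Recover a `tag_len`-byte tag from the lowest bits of the carrier."""
    bits = [samples[pos] & 1 for pos in range(tag_len * 8)]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(tag_len)
    )

carrier = bytes(range(64))       # stand-in for audio or pixel samples
marked = embed(carrier, b"AI")   # changes at most one bit per sample byte
assert extract(marked, 2) == b"AI"
```

Because only the lowest bit of each sample changes, the marked content is perceptually identical to the original, which is the property any watermarking standard for AI-generated media would need to preserve.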

Overall, the speakers expressed a positive sentiment regarding the potential of AI in cybersecurity. They highlighted the importance of open innovation, collaboration, inclusivity, and policy measures to ensure the safe and responsible use of AI technologies. The discussion provided valuable insights into the current state and future directions of AI in cybersecurity.

Michael Ilishebo

The use of Artificial Intelligence (AI) has raised concerns regarding its negative impact on different aspects of society. One concern is that AI has enabled crimes that were previously impossible. An alarming trend is the accessibility of free AI tools online, allowing individuals with no computing knowledge to program malware for criminal purposes.

Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that surpasses human comprehension, making it difficult to differentiate between AI-generated content and human interaction. This creates obstacles for law enforcement in investigating and preventing crimes. Additionally, AI’s ability to generate realistic fake videos and mimic voices complicates the effectiveness of digital forensic tools, threatening their reliability.

Developing countries face unique challenges with regard to AI. They rely primarily on AI services and products from developed nations and lack the capacity to develop their own localized AI solutions or to train AI on their own data sets. This dependency on foreign AI solutions increases the risk of criminal misuse. Moreover, the public availability of language models can be exploited for criminal purposes, further intensifying the threat.

The borderless nature of the internet and the use of AI have contributed to a rise in internet crimes. Meta, a social media company, reported detecting nearly a billion fake accounts in a single quarter after deploying its detection models. The proliferation of fake accounts promotes the circulation of misinformation, hate speech, and other inappropriate content. Developing countries, facing resource limitations, struggle to effectively filter and combat such harmful content, exacerbating the challenge.

Notwithstanding the negative impact, AI also presents positive opportunities. AI has the potential to revolutionize law enforcement by detecting, preventing, and solving crimes. AI’s ability to identify patterns and signals can anticipate potential criminal behavior, often referred to as pre-crime detection. However, caution is necessary to ensure the ethical use of AI in law enforcement, preventing human rights violations and unfair profiling.

In the realm of cybersecurity, the integration of AI has become essential. National cybersecurity strategies need to incorporate AI to effectively defend against cyber threats. This integration requires the establishment of regulatory frameworks, collaborative capacity-building efforts, data governance, incident response mechanisms, and ethical guidelines. AI and cybersecurity should not be considered in isolation, given their interconnected impact on securing digital systems.

In conclusion, while AI brings numerous benefits, significant concerns exist regarding its negative impact. From enabling new forms of crime to posing challenges for law enforcement and digital forensic tools, AI has far-reaching implications for societal safety and security. Developing countries, particularly, face specific challenges due to their reliance on foreign AI solutions and limited capacity to filter harmful content. Policymakers must prioritize ethical use of AI and address the intertwined impact of AI and cybersecurity to harness its potential while safeguarding against risks.

Waqas Hassan

Regulators face a delicate balancing act in protecting both industry and consumers from cybersecurity risks, particularly those related to AI in developing countries. The rapid advancement of technology and the increasing sophistication of cyber threats have made it challenging for regulators to stay ahead in ensuring the security of both industries and individuals.

Developing nations require more capacity building and technology transfer from developed countries to effectively tackle these cybersecurity challenges. Technology, especially cybersecurity technologies, is primarily developed in the West, putting developing countries at a disadvantage. This imbalance hinders their ability to effectively defend against cyber threats and leaves them vulnerable to cyber attacks. It is crucial for developed countries to support developing nations by providing the necessary tools, knowledge, and resources to enhance their cyber defense capabilities.

The pace at which cyber threats are evolving is surpassing the rate at which defense mechanisms are improving. This disparity poses a significant challenge for regulators and exposes the vulnerability of developing countries’ cybersecurity infrastructure. A proactive approach is crucial in addressing this issue, as reactive defense mechanisms are often insufficient against the sophisticated cyber threats faced by nations worldwide. Preventive measures, such as taking down potential threats before they cause harm, can significantly improve cybersecurity posture.

Developing countries often face difficulties in keeping up with cyber defense due to limited tools, technologies, knowledge, resources, and investment. These limitations leave their cyber defense capabilities lagging and make them susceptible to cyber attacks. It is imperative for developed and developing countries alike to work towards bridging this gap by standardizing technology and making it more accessible globally. Standardization promotes a level playing field and ensures that all nations have equal opportunities to defend against cyber threats.

Sharing information, tools, experiences, and human resources plays a vital role in tackling AI misuse and improving cybersecurity posture. Developed countries, which have the investment muscle for AI defense mechanisms, should collaborate with developing nations to share their expertise and knowledge. This collaboration fosters a fruitful exchange of ideas and insights, leading to better cybersecurity practices globally.

Global cooperation on AI cybersecurity should begin at the national level. Establishing a dialogue among nations, along with sharing information and threat intelligence and developing AI tools for cyber defense, paves the way for effective global cooperation. Bodies such as APCERT (the Asia Pacific Computer Emergency Response Team) and the ITU already facilitate cybersecurity initiatives and can further contribute to this cooperation by organizing cyber drills and fostering collaboration among nations.

The responsibility for being cyber ready needs to be distributed among users, platforms, and the academic community. Cybersecurity is a collective effort that requires the cooperation and active involvement of all stakeholders. Users must remain vigilant and educated about potential cyber threats, while platforms and institutions must prioritize the security of their systems and infrastructure. In parallel, the academic community should actively contribute to research and innovation in cybersecurity, ensuring the development of robust defense mechanisms.

Despite the limitations faced by developing countries, they should still take responsibility for being ready to tackle cybersecurity challenges. Recognizing their limitations, they can leverage available resources, capacity building initiatives, and knowledge transfer to enhance their cyber defense capabilities. By actively participating in cybersecurity efforts, developing countries can contribute to creating a safer and more secure digital environment.

In conclusion, regulators face an ongoing challenge in safeguarding both industry and consumers from cybersecurity risks, particularly those related to AI. To address these challenges, developing nations require greater support in terms of capacity building, technology transfer, and standardization of technology. A proactive approach to cybersecurity, global cooperation, and the shared responsibility of being cyber ready are crucial components in building robust defense mechanisms and ensuring a secure cyberspace for all.

Babu Ram Aryal

Babu Ram Aryal advocates for comprehensive discussions on the positive aspects of integrating artificial intelligence (AI) in cybersecurity. He emphasizes the crucial role that AI can play in enhancing cyber defense measures and draws attention to the potential risks associated with its implementation.

Aryal highlights the significance of AI in bolstering cybersecurity against ever-evolving threats. He stresses the need to harness the capabilities of AI in detecting and mitigating cyber attacks, thereby enhancing the overall security of digital systems. By automating the monitoring of network activities, AI algorithms can quickly identify suspicious patterns and respond in real-time, minimizing the risk of data breaches and information theft.
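
The kind of automated network monitoring described above can be sketched minimally. This is a hedged illustration, not anything presented in the session: production systems use learned models over many signals, whereas the invented `is_anomalous` function below flags a host whose request rate deviates strongly from its historical mean using a simple z-score threshold.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag `current` if it lies more than `threshold` standard deviations
    from the historical mean of observed request rates."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Requests per minute observed for one host over recent intervals.
normal_traffic = [98, 102, 97, 101, 99, 103, 100, 98]
assert is_anomalous(normal_traffic, 100) is False  # within normal range
assert is_anomalous(normal_traffic, 400) is True   # sudden spike flagged
```

Even this crude rule captures the core idea Aryal points to: a system watching traffic continuously can surface suspicious deviations in real time, faster than a human analyst reviewing logs.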

Moreover, Aryal urges for a thorough exploration of the potential risks that come with AI in the context of cybersecurity. As AI systems become increasingly intelligent and autonomous, there are concerns about their susceptibility to malicious exploitation or manipulation. Understanding these vulnerabilities is crucial in developing robust defense mechanisms to safeguard against such threats.

To facilitate a comprehensive examination of the topic, Aryal assembles a panel of experts from diverse fields, promoting a multidisciplinary approach to exploring the intersection of AI and cybersecurity. This collaboration allows for a detailed analysis of the potential benefits and challenges presented by AI in this domain.

The sentiment towards AI’s potential in cybersecurity is overwhelmingly positive. The integration of AI technologies in cyber defense can significantly enhance the security of both organizations and individuals. However, there is a need to strike a balance and actively consider the associated risks to ensure ethical and secure implementation of AI.

In conclusion, Babu Ram Aryal advocates for exploring the beneficial aspects of AI in cybersecurity. By emphasizing the role of AI in strengthening cyber defense and addressing potential risks, Aryal calls for comprehensive discussions involving experts from various fields. The insights gained from these discussions can inform the development of effective strategies that leverage AI’s potential while mitigating its associated risks, resulting in improved cybersecurity measures for the digital age.

Audience

The extended analysis highlights several important points related to the impact of technology and AI on the global south. One key argument is that individual countries in the global south lack the capacity to effectively negotiate with big tech players. This imbalance is due to the concentration of technology in the global north, which puts countries in the global south at a disadvantage. The supporting evidence includes the observation that many resources collected from the third world and global south are directed towards developed economies, exacerbating the technological disparity.

Furthermore, it is suggested that AI technology and its benefits are not equally accessible to and may not equally benefit the global south. This argument is supported by the fact that the majority of the global south’s population resides in developing countries with limited access to AI technology. The issue of affordability and accessibility of AI technology is raised, with the example of ChatGPT, an AI system that is difficult for people in developing economies to afford. The supporting evidence also highlights the challenges faced by those with limited resources in addressing AI technology-related issues.

Inequality and limited inclusivity in the implementation of accessibility and inclusivity practices are identified as persistent issues. While accessibility and inclusivity may be promoted in theory, they are not universally implemented, thereby exposing existing inequalities across different regions. The argument is reinforced by the observation that politics between the global north and south often hinder the universal implementation of accessibility and inclusivity practices.

The analysis also raises questions about the transfer of technology between the global north and south and its implications, particularly in terms of international relations and inequality. The sentiment surrounding this issue is one of questioning, suggesting the need for further investigation and examination.

Moreover, AI is seen as a potential threat that can lead to new-age digital conflicts. The supporting evidence presents AI as a tool with the potential to be used against humans, leading to various threats. Furthermore, the importance of responsive measures that keep pace with technological evolution is emphasized. The argument is that measures aimed at addressing new tech threats need to be as fast and efficient as the development of technology itself.

Concerns about the accessibility and inclusion of AI in developing countries are also highlighted. The lack of infrastructure and access to electricity in some regions, such as Africa, pose challenges to the adoption of AI technology. Additionally, limited internet access and digital literacy hinder the effective integration of AI in these countries.

The potential risks that AI poses, such as job insecurity and the stifling of human creativity, are areas of concern. The sentiment expressed suggests that AI is perceived as a threat to job stability, and there are fears that becoming mere consumers of AI may restrict human creativity.

To address these challenges, it is argued that digital literacy needs to be improved in order to enhance understanding of the risks and benefits of AI. The importance of including everyone in the advancement of AI, without leaving anyone behind, is emphasized.

The analysis delves into the topic of cyber defense, advocating for the necessity of defining cyber defense and clarifying the roles of different actors, such as governments, civil society, and tech companies, in empowering developing countries in this field. The capacity of governments to implement cyber defense strategies is questioned, citing examples such as Nepal’s adoption of a national cybersecurity policy, which may have lacked transparency and open discussion.

The need to uphold agreed values, such as the Human Rights Charter and internet rights and principles, is also underscored. The argument is that practical application of these values is necessary to maintain a fair and just digital environment.

The analysis points out the tendency for AI and cybersecurity deliberations to be conducted in isolation at the multilateral level, emphasizing the importance of multidisciplinary governance solutions that cover all aspects of technology. Additionally, responsible behavior is suggested as a national security strategy for effectively managing the potential risks associated with AI and cybersecurity.

In conclusion, the extended analysis highlights the disparities and challenges faced by the global south in relation to technology and AI. It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to ensure equitable benefits and mitigate risks. Ultimately, the goal should be to empower all nations and individuals to navigate the evolving technological landscape and foster a globally inclusive and secure digital future.

Tatiana Tropina

The discussions surrounding AI regulation and challenges in the cybersecurity realm have shed light on the importance of implementing risk-based and outcome-based regulations. It has been recognized that while regulation should address the threats and opportunities presented by AI, it must also avoid stifling innovation. Risk-based regulation, which assesses risks during the development of new AI systems, and outcome-based regulation, which aims to establish a framework for desired outcomes, allowing the industry to achieve them on their own terms, were highlighted as potential approaches.

There are concerns regarding AI bias, accountability, and the transparency of algorithms, along with the growing challenge of deepfakes. The evolving nature of AI technology also enables new attack techniques, such as the generation of malware and spear-phishing campaigns. These concerns need to be effectively addressed to ensure the responsible and ethical development and deployment of AI.

Cooperation between industry, researchers, governments, and law enforcement was emphasized as crucial for effective threat management and defense in the AI domain. Building partnerships and collaboration among these stakeholders can enhance response capabilities and mitigate potential risks.

While AI offers significant benefits, such as its effective use in hash comparison and database management, its potential threats and misuse require a deeper understanding and investment in research and development. The need to comprehend and address AI-related risks and challenges was underscored to establish future-proof frameworks.

The discussions also highlighted the lack of capacity to assess AI and cyber threats globally, both in the global south and global north. This calls for increased efforts to enhance understanding and build expertise to effectively address such threats on a global scale. Furthermore, the importance of cooperation between the global north and south was stressed, emphasizing the need for collaboration to tackle the challenges and harness the potential of AI technology.

The concept of fairness in AI was noted as needing redefinition to encompass its impact globally. Currently, fairness primarily applies to the global north, necessitating a broader perspective that considers the impact on all regions of the world. It was also suggested that global cooperation should focus on building a better future and emphasizing the benefits of AI.

Regulation was seen as insufficient on its own, requiring accompanying actions from civil society, the technical community, and companies. External scrutiny of AI algorithms by civil society and research organizations was proposed to ensure their ethical use and reveal potential risks.

The interrelated UN processes of cybersecurity, AI, and cybercrime were mentioned as somewhat artificially separated. This observation underscores the need for a more holistic approach to address the interdependencies and mutual influence of these processes.

The absence of best practices in addressing cybersecurity and AI issues was recognized, emphasizing the need to invest in capacity building and the development of effective strategies.

The proposal for a global treaty on AI by the Council of Europe was deemed potentially transformative in achieving transparency, fairness, and accountability. Additionally, the EU AI Act, which seeks to prohibit profiling and certain other AI uses, was highlighted as a significant development in AI regulation.

The importance of guiding principles and regulatory frameworks was stressed, but it was also noted that they alone do not provide a clear path for achieving transparency, fairness, and accountability. Therefore, the need to further refine and prioritize these principles and frameworks was emphasized.

Overall, the discussions highlighted the complex challenges and opportunities associated with AI in cybersecurity. It is crucial to navigate these complexities through effective regulation, collaboration, investment, and ethical considerations to ensure the responsible and beneficial use of AI technology.
