Analysis of the CrowdStrike and Microsoft cyber failure

On 19 July 2024, a blue screen of death appeared on many Microsoft Windows computers. In Australia, users experienced major failures of banks and Qantas operations. As the working Friday started across time zones, computer systems of airports, banks, hospitals, and companies failed like dominoes. Flights were delayed as airport computer systems stopped in Singapore, Hong Kong, India, Europe, and the USA. Among those affected by the cyber failure were Manchester United and the Dutch Ministry of Foreign Affairs.

This global cyber failure was triggered by a routine update to CrowdStrike’s security software, which crashed Microsoft’s Windows operating system. Below is an analysis of the technical, economic, policy, and legal aspects of this cyber incident.

Vulnerability from overreliance on single-point solutions

Current cyber systems are highly complex, and the weakest link, in this case a software update, triggered a major failure. The sheer complexity of interconnected services and servers makes it difficult to identify points of failure in advance. At the same time, this case has highlighted the overreliance of numerous organisations on single-point IT solutions. All impacted organisations were running the same software, which exposed a vulnerability in their cyber-resilience strategies. The incident underscores the importance of a global conversation about how such IT solutions are maintained and updated.

Supply chain security dilemma

With the constantly evolving threat landscape and the increasing complexity of the digital systems we use, we are told that we must update our software frequently to maintain security. However, this case highlighted that updates themselves can be the root cause of security challenges. With more software opting for automatic updates, a new area of vulnerability has emerged: the rapidly evolving software supply chain.

Failure without cyberattack

Cybersecurity is often associated with cyberattacks, and experts have long anticipated a global IT outage caused by malicious actors’ activity; that remains a major reason for concern. But, as this incident shows, computer systems can go down without any malicious intent, simply as a result of faulty processes. A security feature turned into a security risk: the underlying cause seems to be an update to the kernel-level driver that CrowdStrike uses to protect Windows computers. After “numerous reports of blue screen of death errors on Windows hosts,” CrowdStrike identified the issue and rolled back the problematic update, but the rollback does not appear to help machines that were already affected.

Focus on critical infrastructure

While global media focused on AI risks, the immediate risks turned out to be much more mundane: in this case, a software update. Countries and companies must focus on critical infrastructure and critical information infrastructure, from dealing with complex systems and supply chains to protecting submarine cables and other critical points of failure of the modern internet. This incident also puts trust in digital infrastructure at risk, leading to increased scrutiny and demands for more robust, resilient systems, especially in critical sectors.

Need for international response

An update on a CrowdStrike server affected systems worldwide, but it also highlighted the dependence of numerous digital systems in critical sectors worldwide on a single provider. This may require owners and operators of critical infrastructure, in both the public and private sectors, to diversify their third-party service providers where possible, and certainly to enhance their cyber resilience. The challenge is that this would require international action, which may be difficult to secure in the current geopolitical environment.

Solutions…

The good news is that there are international instruments that can be used in such situations. Among the 11 UN cyber norms agreed upon by states within the UN Group of Governmental Experts (GGE) and endorsed by all UN member states within the UN Open-Ended Working Group (OEWG), several are particularly relevant: for instance, the norms that call for protecting critical infrastructure and ensuring supply chain integrity.

The Geneva Dialogue on Responsible Behaviour in Cyberspace, established by Switzerland and implemented by Diplo with the support of several partners, addresses the implementation of these norms and highlights the challenges that different non-state stakeholders face in implementing them.

The results of a regular dialogue with representatives of the private sector, academia, civil society, and the technical community from different countries are published in the Geneva Manual, comprehensive guidance on implementation by non-state stakeholders. The first edition focuses on the implementation of norms related to supply chain security. This year, the Geneva Dialogue is discussing the implementation of norms related to critical infrastructure protection, and the next chapter of the Manual will focus on this topic.

UN Cybercrime Convention: Will states set aside disagreements for the sake of a common global threat?

Will more than two years of interstate negotiations at the UN result in a global comprehensive convention on cybercrime? Why did states previously fail to reach a final agreement? Where do the main disagreements lie? What are the expectations of stakeholders, including civil society and industry, for the final round of UN negotiations?

We at Diplo invited experts representing different stakeholder groups and organisations to help us understand the intricacies of these interstate negotiations. Before the concluding session of the Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of Information and Communications Technologies for Criminal Purposes, we also asked experts for their predictions on the outcome of the nearly four-year UN process.

Below we’re sharing the main takeaways, and if you would like to watch the entire discussion, follow this link.

What went well and what didn’t? Where do major disagreements lie between states and why? 

Alexander Seger, Head of Cybercrime Division, the Council of Europe, started with an overview of the process and its current status. He noted that updated working documents had been circulated two weeks earlier, marking a significant step in the ongoing efforts to draft a comprehensive United Nations treaty on cybercrime. Dated 23 May, these documents included an updated draft text for a future treaty, a draft resolution for the General Assembly to adopt this draft text, and some interpretative notes whose status remains somewhat ambiguous.

Unlike the UN Conventions on Organized Crime and Corruption, which were initiated by consensus, the resolution leading to the current cybercrime treaty process was a contested decision, passed by a relative majority in December 2019. Despite these and other challenges, members of the Budapest Convention and the Council of Europe have supported the process in the spirit of multilateralism. The potential value lies in providing a framework for cooperation among states that cannot join the Budapest Convention for political reasons or due to non-compliance with its requirements. 

What went well? Alexander Seger highlighted that the parties and friends of the Budapest Convention have coordinated extremely well throughout this process. At a fairly early stage, provisions based on the Budapest Convention on Cybercrime, such as the list of offences, procedural powers, and specific provisions for international cooperation, were essentially agreed upon. Interestingly, the underlying concepts and definitions in the UN treaty draft are based on those in the Budapest Convention, though they are termed differently for political reasons. For example, a ‘computer system’ is now referred to as an ‘information and communication technology system’, but the definition remains almost the same as in the Budapest Convention. Similarly, ‘computer data’ is now called ‘electronic data’, but with the same definition. This gives hope for avoiding major inconsistencies in this respect, reflecting a positive outcome of the negotiations.

Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University, underlined other positive aspects of the process, such as the inclusion of stakeholders in the cybercrime treaty negotiations. Although the process is not complete, it represents a significant step forward: non-governmental stakeholders were given specific slots to speak, which, despite being limited, marked progress considering the importance of this work. One commendable aspect of the process has been the efforts of the chair, who consistently pushed for consensus and sought to unify differing positions. Additionally, many countries participated in the negotiations in good faith, although this was not universal. A notable success in the process was the suspension of the negotiations to address concerns: states that respect human rights and were wary of the potential negative impacts of the convention pushed back during the last session, disagreeing with the draft.

Why did states fail to reach consensus at earlier sessions? Alexander Seger highlighted several significant disagreements that have emerged during the cybercrime treaty negotiations. A primary contention is the list of offences to be included. Russia, in particular, insists on a comprehensive list of additional offences such as terrorism, extremism, and drug trafficking online, among others. They argue that if these offences are not included, an additional protocol to the convention should be developed.

Another major point of contention is the balance between the scope of the convention and the obligation to cooperate. This involves whether cooperation should extend to any crime, just the offences listed in the treaty, or only electronic evidence related to serious crimes, while also ensuring human rights safeguards.

There are also concerns and differing opinions regarding the article on child sexual abuse materials, particularly the criminalisation of children for distributing self-generated materials among themselves. Tatiana Tropina agreed, highlighting that at the heart of all disagreements in the cybercrime treaty negotiations lies a common denominator: human rights. This fundamental issue influences the entire scope of criminalisation and the potential agreements between countries. While discussions may focus on specific offences like child abuse material and the rights of children to share self-generated images, the broader human rights implications are of significance.

Moreover, paragraph five of the draft resolution reveals a push for countries to agree on developing an additional protocol for further criminalisations, implicitly acknowledging that the current convention is incomplete. This likely implies incorporating offences such as terrorism and extremism, which historically have been used to crack down on free speech, thus raising human rights concerns.

The procedural powers proposed in the draft also lack sufficient safeguards, with some critical protections being diluted. Moreover, the scope of international cooperation is troubling, as it allows any country to designate an offence as a serious crime (punishable by more than five years of imprisonment) and then seek cooperation. This provision risks legitimising human rights abuses on an unprecedented scale.

In general, what is the value of the UN treaty on cybercrime considering that multiple regional and international frameworks already exist?

From a theoretical perspective, having a global comprehensive instrument with robust human rights safeguards, developed inclusively with various stakeholders and building upon existing instruments like the Budapest Convention, would likely add significant value, Tatiana Tropina noted. In an ideal world, she continued, such a treaty would bring countries together, build trust, streamline existing mechanisms, and enhance capacity in countries lacking it, potentially reducing impunity.

However, the series of drafts currently on the table, produced over the past six months, suggests that this ideal is far from reality. These drafts leave much to be desired and do not align with the theoretical benefits.

‘It is crucial to understand that what we currently have for the concluding AHC session is not a cybercrime treaty but a global criminal justice treaty. It focuses heavily on the collection and cross-border transfer of electronic evidence for virtually any crime. It is well known that states often use criminal law as a tool for oppression –  to silence political opponents, oppress various groups including marginalised communities, and restrict freedoms and rights.’

Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University

Alexander Seger, at the same time, highlighted that public authorities and criminal justice authorities do not all use the same tools, treaties, or bases for cooperating with others. They rely on a multitude of treaties. For example, some countries do not use the Budapest Convention’s 24/7 network because they have long operated under the G7 24/7 network, which is now becoming increasingly similar to the Budapest Convention network. Conversely, some countries have never used the UN Convention on Transnational Organized Crime, while others, including neighbouring countries, use it daily.

And this is how international cooperation works in practice. 

‘When, and if, a UN treaty is adopted, signed, and ratified by a country, the provisions are not immediately put into universal use. Instead, the treaty becomes an additional tool among the various existing mechanisms that countries may choose to employ based on their specific needs and circumstances.’

Alexander Seger, Head of Cybercrime Division, the Council of Europe

How can the effectiveness of the existing cybercrime treaties be measured? 

We raised this question specifically to better understand what existing frameworks fail to address and what could point to the need for a global, more comprehensive legal instrument. Is there available data showing how existing treaties help reduce cybercrime? Tatiana Tropina answered that, unfortunately, we don’t have a reliable methodology for measuring the cost and effect of cybercrime. While the Budapest Convention is considered the gold standard in the development of cybercrime legislation, it is challenging to quantify its impact on reducing cybercrime. Many countries have changed their legislation after ratifying the convention, and even some non-member countries have adopted laws similar to those outlined in the Budapest Convention.

However, we still lack relevant and reliable methodologies to definitively say that cybercrime has decreased as a result of the Budapest Convention or other conventions like the Malabo Convention. Despite this, there have been clear successes. Countries have developed and strengthened their cybercrime legislation and procedural frameworks based primarily on the Budapest Convention. Trust has been built through negotiations, accession, ratification, and cooperation, which are all significant achievements in their own right.

Defining a positive outcome – civil society perspective: What would be the desired elements of a ‘good’ result, from a civil society perspective, after the concluding session?

After a quick overview of the process, it was important to hear the perspectives of different stakeholders who, as Tatiana Tropina mentioned, were actively involved in the negotiation process. We started with the civil society perspective and put this question to Katitza Rodriguez, Policy Director for Global Privacy, Electronic Frontier Foundation (EFF), and Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales.

Katitza Rodriguez pointed out that the convention’s title is misleading and poses both conceptual and practical harms. Efforts to broaden the definition of cybercrime have led to the criminalisation of expression in many countries and risk expansive interpretations globally. Moreover, the treaty fails to adequately protect security researchers and journalists engaged in legitimate cybersecurity activities. Mandatory safeguards are lacking, jeopardising years of progress in court litigation and negotiations at the domestic level. Definitions of electronic data, especially regarding sensitive data like biometrics and neural data, are overly broad and lack mandatory data protection principles and robust safeguards to limit their use.

‘The text of the proposed treaty is too flawed to be adopted. Key provisions remain highlighted in red, indicating a lack of consensus on critical issues such as scope, human rights safeguards, and intrusive powers like real-time data collection or communication interception without strong mandatory human rights protections.’

Katitza Rodriguez, Policy Director for Global Privacy, Electronic Frontier Foundation (EFF)

Katitza Rodriguez also echoed Tatiana Tropina’s remarks on the need for effective human rights protections, including prior judicial authorisation and transparency in data access, adding that these are now left to national law rather than to international standards like those in the Budapest Convention.

Concluding, she urged countries committed to the rule of law to approach this treaty with scepticism. Issues with its scope of cross-border surveillance cooperation and with human rights protections remain unresolved, raising concerns about a potential decline in global standards for human rights and privacy protection.

Paloma Lara-Castro agreed, saying that she and her team view this treaty as potentially legitimising surveillance and criminalisation practices that are already concerning worldwide, particularly within their region (Latin America), where Derechos Digitales operates. She stressed that the fight against cybercrime should never compromise human rights.

As Tatiana Tropina highlighted, consensus on human rights remains a significant point of contention; Paloma Lara-Castro added that this applies specifically to gender issues.

‘It’s crucial to recognise that both criminal systems and technology are not neutral; they operate within societies marked by structural inequalities. Effective gender mainstreaming should be a central element of the convention, ensuring that every article is analysed from a gender perspective.’

Paloma Lara-Castro, Public Policy Coordinator, Derechos Digitales

While there have been some advances, Paloma Lara-Castro noted, such as the inclusion of gender mainstreaming in the preamble, which Derechos Digitales applauds, they continue to advocate for its integration throughout other articles.

Concluding, she also urged everyone to recognise how this treaty could impact our lives and engage with local governments, raise awareness, and closely monitor upcoming negotiations. It is crucial to ensure that any international treaty on cybercrime upholds human rights and does not undermine fundamental freedoms.

Defining a desired governance – industry perspective: Does industry anticipate positive changes with the adoption of an international cybercrime treaty? What should effective international governance to address cybercrime look like?

We posed these questions to our expert guests representing Microsoft and Kaspersky, probably the most vocal industry voices in this process. Yuliya Shlychkova, Vice President, Public Affairs, Kaspersky, started by highlighting that an internationally harmonised framework is crucial for effective cybercrime investigations, especially when cases span multiple countries, each with its own legal requirements and procedures. While the current framework in Europe benefits from the Budapest Convention, extending these standards globally remains a challenge due to the lack of mutual legal agreements.

Kaspersky, she noted, particularly appreciates the inclusion of dedicated provisions for the expedited preservation of digital evidence in the current draft of the convention. However, 

‘Varying local requirements for forensic toolkits across jurisdictions pose a significant challenge. Achieving global acceptance of forensic toolkits would enhance the admissibility of evidence in courts worldwide.’

Yuliya Shlychkova, Vice President, Public Affairs, Kaspersky

Real-time access to network and traffic data is another critical issue, Yuliya Shlychkova highlighted, with current provisions being overly broad. It is essential to implement strict safeguards to prevent misuse of law enforcement powers and ensure transparency and accountability, both from governments and the private sector. Court orders should be mandatory before disclosing sensitive data like biometrics to protect human rights.

Ethical hackers and researchers also need better protection. While there are provisions against prosecuting authorised penetration testing, freelancers, Yuliya Shlychkova mentioned, often operate without explicit authorisation. Criteria should focus on criminal intent rather than authorisation status to shield ethical researchers from legal repercussions.

Concluding, she highlighted, again, that the private sector can play a significant role in technical assistance and capacity building initiatives, especially in under-resourced countries. Their involvement should be recognised and encouraged in the convention’s framework to enhance global cybersecurity efforts effectively.

Nemanja Malisevic, Director, Digital Diplomacy, Microsoft, echoed concerns shared by other panellists and added that Microsoft urges states to clearly define the treaty’s scope and significantly enhance safeguards throughout. In its current form, the treaty risks eroding data privacy, threatening digital sovereignty, and undermining online rights and freedoms globally.

Furthermore, the draft convention lacks effective international governance to address cybercrime adequately, as highlighted by experts. It also poses national security risks by allowing unauthorised disclosure of sensitive data to third states, potentially compelling individuals with knowledge to reveal proprietary information.

‘To be clear, in its current form, this treaty, as we have repeatedly called out, remains a data access/global surveillance treaty in the guise of a cybercrime treaty.’

Nemanja Malisevic, Director, Digital Diplomacy, Microsoft

Nemanja Malisevic emphasised the need for a treaty focused on core cybercrime offences with robust safeguards and clear intent requirements. As it stands, he highlighted, the current draft falls short of these principles and, if adopted, could diminish cybersecurity, jeopardise data privacy, and threaten online freedoms worldwide.

Additionally, there is a provision calling for protocols on additional cybercrimes before finalising the convention’s scope, which Microsoft views as insufficient and potentially detrimental. This approach risks further widening the convention’s already overly broad scope, hindering its effectiveness.

In conclusion, Nemanja Malisevic noted unprecedented alignment among industry and civil society regarding concerns with the current draft. This consensus underscores the urgent need for a treaty focused on core cybercrime offences, bolstered by robust safeguards and clear intent requirements. 

Looking ahead: What are predictions for the outcome of the AHC negotiations and for the future of international efforts to address cybercrime? 

Tatiana Tropina gave a straightforward response: she does not expect that there will be a convention and, looking at the current draft, she does not even believe we need this treaty at all.

Still, even if a miracle happens and states succeed in reaching consensus at the concluding session in late July, one crucial question remains: will democratic states sign and ratify this treaty once it is finalised? Alexander Seger noted that the answer depends on the final shape of the treaty and whether it can mitigate the highlighted risks. He further raised a critical concern about the current title of the treaty, noting its potential to create confusion by encompassing crimes beyond traditional cybercrime, possibly extending into broader cybersecurity issues, thus echoing remarks by civil society and industry experts.

We, at Diplo, invite you all to re-watch the online expert discussion, engage in a broader conversation about the impacts of this negotiation process, and in the meantime – stay tuned. We’ll be monitoring the latest session and will share the reporting soon.

In the beginning was the word, and the word was with the chatbot, and the word was the chatbot

To introduce the argument, there is little need to stress how important the word is, that is, language and its many subdisciplines, or what we humans have achieved over time through our ever-richer communication systems, especially in technological and diplomatic contexts, where the word is an essential and powerful instrument.

Since linguistics, especially nowadays, is inseparable from the realm of technology, it is entirely legitimate to question the way chatbots, the offshoots of the latest technology, work. In other words, it is legitimate to ask how chatbots learn through digital, that is, algorithmic cognition, and how they express themselves accurately and articulately in response to the most diverse queries or inputs.

What constitutes the human-like cognitive power of deep-learning LLMs?

To understand AI and the epicentre of its evolution, chatbots, which interact with people by responding to the most varied prompts, we should delve into the branches of linguistics called semantics and syntax, and into the way chatbots learn and process diverse, articulated information.

A complex understanding of language and of how it is assimilated by humans, and in this case by deep-learning machines, was laid out as far back as Ferdinand de Saussure’s studies of language.

For that reason, we will explore the cognitive mechanisms underlying semantics and syntax in large language models (LLMs) such as ChatGPT, integrating the theoretical perspectives of one of the most renowned philosophers of language, Saussure. By synthesising linguistic theories with contemporary AI methodologies, the aim is to provide a comprehensive understanding of how LLMs process, understand, and generate natural language. What follows is a modest examination of the models’ training processes, data integration, and real-time interaction with users, highlighting the interplay between linguistic theories and AI language assimilation systems.

Overview of Saussure’s studies related to syntagmatic relations and semantics

Portrait of Ferdinand de Saussure

Ferdinand de Saussure, one of the first linguistic scientists of the 20th century (along with Charles Sanders Peirce and Leonard Bloomfield), introduces syntax and semantics in his ‘Course in General Linguistics’. There he depicts language as a scientific phenomenon, emphasising the synchronic study of language, its current state rather than its historical evolution, in a structuralist view, with syntax and semantics among the fundamental components of its structure.

Syntax

Syntax, within this framework, is a grammar discipline which represents and explains the systematic and linear arrangement of words and phrases to form meaningful sentences within a given language. Saussure views syntax as an essential aspect of language, an abstract language system, which encompasses grammar, vocabulary, and rules. He argues that syntax operates according to inherent principles and conventions established within a linguistic community rather than being governed by individual speakers. His structuralist approach to linguistics highlights the interdependence between syntax and other linguistic elements, such as semantics, phonology and morphology, within the overall structure of language.

Semantics

Semantics is a branch of linguistics and philosophy concerned with the study of meaning in language. It explores how words, phrases, sentences, and texts convey meaning and how interpretation is influenced by context, culture, and usage. Semantics covers various aspects, including the meaning of words (lexical semantics), the meaning of sentences (compositional semantics), and the role of context in understanding language (pragmatics).

However, one of Saussure’s biggest precepts within semantics posits that language is a system of signs composed of the signifier (sound-image) and the signified (concept). This dyadic structure is crucial for understanding how LLMs represent words and handle their possible ambiguity.


How do chatbots cognise semantics and syntax in linguistic processes?

Chatbots’ processing and understanding of language usage involves several key steps: training on vast amounts of textual data from the internet to predict the next word in a sequence; tokenisation to divide the text into smaller units; learning relationships between words and phrases for semantic understanding; using vector representations to recognise similarities and generate contextually relevant responses; and leveraging transformer architecture to efficiently process long contexts and complex linguistic structures. Although it does not learn in real time, the model is periodically updated with new data to improve performance, enabling it to generate coherent and useful responses to user queries.

As explained earlier, in LLMs, words and phrases are tokenised and transformed into vectors within a high-dimensional space. These vectors function similarly to Saussure’s signifiers, with their positions and relationships encoding meaning (the signified). Thus, within the process of ‘Tokenisation and Embedding,’ LLMs tokenise text into discrete units (signifiers) and map them to embeddings that capture their meanings (signified). The model learns these embeddings by processing vast amounts of text, identifying patterns and relationships analogous to Saussure’s linguistic structures.

A chatbot’s ability to understand and generate text relies on its grasp of semantics (meaning) and syntax (structure). It processes semantics through contextual word embeddings that capture meanings based on usage, an attention mechanism that weighs word importance in context, and layered contextual understanding that handles polysemy and synonymy. The model is pre-trained on general language patterns and fine-tuned on specific datasets for enhanced semantic comprehension. For syntax, it uses positional encoding to understand word order, attention mechanisms to maintain syntactic coherence, layered processing to build complex structures, and probabilistic grammar learning from vast text exposure. Tokenisation and sequence modelling help track dependencies and coherence, while the transformer model integrates syntax and semantics at each layer, ensuring that responses are both meaningful and grammatically correct. Training on diverse datasets further enhances its ability to generalise across various language uses, making the chatbot a powerful natural language processing tool.
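To make these abstractions concrete, here is a minimal, purely illustrative sketch of the tokenisation-and-embedding idea in Python. The vocabulary, vectors, and function names are all invented for the example; real LLMs learn embeddings for tens of thousands of subword tokens in hundreds of dimensions.

```python
# Toy illustration of tokenisation and embeddings (not ChatGPT's actual
# pipeline): words become integer tokens, tokens become vectors, and
# geometric closeness between vectors stands in for closeness of meaning.
import numpy as np

# Hypothetical miniature vocabulary with hand-crafted 3-dimensional vectors.
vocab = {"king": 0, "queen": 1, "apple": 2}
embeddings = np.array([
    [0.90, 0.80, 0.10],  # "king"  (a royalty-like direction)
    [0.85, 0.82, 0.15],  # "queen" (deliberately close to "king")
    [0.10, 0.05, 0.90],  # "apple" (a different, food-like direction)
])

def tokenise(text):
    """Map each known word to its integer token id (the 'signifier')."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

def cosine(u, v):
    """Cosine similarity: near 1.0 for same direction, near 0 for unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

king, queen, apple = (embeddings[i] for i in tokenise("king queen apple"))
print(cosine(king, queen))  # high: related meanings sit close together
print(cosine(king, apple))  # low: unrelated meanings sit far apart
```

The sketch is Saussure’s dyad in numerical form: the token id plays the role of the signifier, the vector stands in for the signified, and similarity of meaning becomes proximity in space.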

An interesting invention…

Recently, researchers in the Netherlands developed an AI platform capable of recognising sarcasm, which was presented at the Acoustical Society of America and Canadian Acoustical Association meeting. By training a neural network with the Multimodal Sarcasm Detection Dataset (MUStARD) using video clips and text from sitcoms like ‘Friends’ and ‘The Big Bang Theory,’ the large language model accurately detected sarcasm in about 75% of unlabeled exchanges.

Sarcasm generally takes the form of a linguistically layered, ironic remark, often rooted in humour, that is intended to mock or satirise something. When a speaker is being sarcastic, they say something different from what they actually mean, which is why it is hard for a large language model to detect such nuances in someone’s speech.

This process leverages deep-learning techniques that analyse both syntax and semantics, along with the concepts of syntagma and idiom, to understand the layered structure and meaning of language, and it shows how comprehensive an LLM’s acquisition of human speech can be.
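As a rough illustration of the general approach, and emphatically not the Dutch researchers’ actual multimodal system, a text-only sarcasm classifier can be sketched in a few lines of Python; every sentence and label below is invented for the example.

```python
# A minimal text-classification sketch in the spirit of sarcasm detection:
# TF-IDF features plus logistic regression on a tiny invented dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Oh great, another Monday. I am thrilled.",      # sarcastic
    "Wow, you broke it again. Nice job.",            # sarcastic
    "Sure, waiting in line is my favourite hobby.",  # sarcastic
    "The weather is sunny and warm today.",          # literal
    "I really enjoyed the concert last night.",      # literal
    "Thanks for helping me move, I appreciate it.",  # literal
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = sarcastic, 0 = literal

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# With such a tiny training set, the output is only indicative.
print(model.predict(["Fantastic, my flight is delayed again."]))
```

The MUStARD-trained model also draws on audio and video cues, which is precisely what lets it catch sarcasm that text alone would miss: the mismatch between a flat or exaggerated tone and the literal words.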

By integrating Saussure’s linguistic theories with the cognitive mechanisms of large language models, we gain a deeper understanding of how these models process and generate language. The interplay between structural rules, contextual usage, and fluidity of meaning partially depicts the sophisticated performance of LLMs’ language generation. This synthesis not only illuminates the inner workings of contemporary AI systems but also reinforces the enduring relevance of classical linguistic theories in the age of AI.

The intellectual property saga: approaches for balancing AI advancements and IP protection | Part 3

The intellectual property saga: The age of AI-generated content | Part 1

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2

The first part on AI and IP discussed the complexities of copyrighting AI-generated content, noting the challenges traditional laws face in assigning ownership. The second essay explored AI’s impact on trade secrets and trademarks in the EU and US legal frameworks. In this concluding part, we explore the methods being used to protect intellectual property in the era of AI.

Understanding AI and IP together is tricky. Unlike traditional forms of intellectual property, such as patents or copyrights, AI-generated outputs raise questions about ownership and authorship. Consequently, devising robust strategies to delineate ownership and protect AI-generated creations remains a major concern. As AI technology advances, it challenges traditional notions of ownership and attribution, calling for a re-evaluation of existing IP laws and ethical considerations. One significant aspect of this future is determining who owns the rights to AI-generated creations. For instance, if an AI system autonomously composes a symphony or designs a groundbreaking invention, should the credit and ownership belong to the programmer who developed the AI, the company that deployed it, or perhaps even the AI itself? This question, discussed thoroughly in the first essay, underlines the need for clarity in IP laws to incentivise investment in AI research.

In patent cases, AI systems currently lack legal recognition as inventors, which could lead to legal reforms to accommodate AI-generated inventions. Similarly, copyright laws require adaptation to address ownership issues surrounding AI-generated creative works. Meanwhile, in trademark law, questions arise regarding the licensing and authorisation of AI systems for trademark use. As discussed in the second essay on this topic, many AI innovators choose trade secret protections over patents due to the ambiguity in traditional laws regarding AI and copyright. This approach allows them to keep their AI advancements confidential, making it challenging for others to detect and replicate their innovations, especially when used commercially.

Legal protection for AI products?

Legal battles, such as Thaler vs Vidal, where Thaler filed patent applications for two inventions attributed to the DABUS AI without human involvement, illustrate the struggle to define AI’s role in intellectual property (IP) law. Typically, humans contribute to AI development, and an AI’s knowledge base includes copyrighted material. In the Thaler case, the US Court of Appeals ruled against recognising AI as an inventor, emphasising human-centric patent laws. Similarly, copyright registrations for AI-generated works face rejection due to human authorship requirements.

However, cases like Thaler vs Perlmutter and Kashtanova’s comic book registration tested the scope of protection for the human-authored components of AI-generated content. The US Copyright Office faced this question when Kristina Kashtanova sought registration for a comic book made with Midjourney AI. The Office allowed Kashtanova to copyright the text and the arrangement of text alongside the AI-generated artwork: while the text was deemed a product of human creativity, the registration also protected the arrangement. However, it explicitly excluded copyright for the AI-generated artwork itself.

The hurdles of patenting AI systems are further demonstrated by Alice Corp. vs CLS Bank International, which established a two-step test for patent eligibility. The test first assesses whether a patent claim involves ineligible subject matter, such as abstract ideas; if so, it then considers whether the invention adds an ‘inventive concept’ that makes it eligible. Under this test, many software-based and algorithm-reliant patents have been deemed ineligible. Given AI’s reliance on software and algorithms, inventors must navigate Alice carefully when patenting AI-related innovations, or when deciding whether to pursue patents at all, as it is not yet clear how far these rules will affect patents for software and AI.


Approaches for IP protection in AI

A 2023 study from the University of Zurich’s Center for Intellectual Property and the Swiss Intellectual Property Institute proposes clarifications for AI-related IP. The project suggests recognising AI systems as ‘inventors’ for patent protection, while human authorship takes precedence for copyright. Copyright may be granted for content jointly created by AI and humans, provided that human creativity is evident. Furthermore, companies should be able to claim ownership of AI-generated IP without the need for new IP rights. In addition, permissive protection is advised to prevent AI owners from facing lawsuits for unintentional IP infringements.

Distinguishing between inspiration and infringement is crucial when calling for governance mechanisms to address these concerns and maintain trust within creative industries. Recent conflicts, such as those between the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers (AMPTP), show the need for governance of creative AI usage.

The negotiations between the two parties in this case included demands to restrict AI’s involvement in content creation, though compromises were reached to balance innovation with copyright protection. The agreement does not prohibit the use of AI but places restrictions on how it is credited and utilised. It states that neither traditional AI nor generative AI can be considered ‘writers’ or ‘professional writers’, and material produced solely by AI is not recognised as literary material. However, the agreement allows for collaborative work between writers and AI tools, with studios aiming for copyrightable material resulting from human-AI collaboration. Safeguards are also in place to ensure that AI use does not compromise copyrightability, with companies retaining the right to reject AI use if it affects copyrightability or work exploitation.

Detecting AI infringements

According to Originality.AI, an AI detection tool, almost 20% of the top 1000 websites in the world block crawler bots from collecting web data for AI use. Large language models (LLMs) such as OpenAI’s GPT family and Google’s LaMDA family require massive amounts of data to train their AI systems. Subsequently, various technology providers have developed and now offer AI-powered solutions designed to assist businesses in monitoring and protecting their intellectual property online. These solutions use machine learning algorithms to analyse vast amounts of data and detect potential instances of infringement. They provide tools for tracking the use of copyrighted material across websites, social media platforms, and digital channels, enabling rights holders to take appropriate action to protect their IP rights.

In August 2023, OpenAI launched its GPTBot crawler, aiming to gather data for enhancing future AI models. Major websites (including Amazon, Quora, The New York Times, CNN, ABC, Reuters, and many others) have taken proactive measures to block AI crawlers from accessing their content. Axel Springer and the Associated Press, by contrast, have recently signed agreements with OpenAI to license their news content for training AI models. Crawlers work like web browsers but save data instead of displaying it; they are used by search engines like Google to collect information. While site owners can instruct crawlers to avoid their site, compliance is voluntary, leaving room for noncompliance by malicious actors. Google and other internet companies view the activities of their data crawlers as fair use. However, numerous publishers and holders of intellectual property have voiced objections, leading to several lawsuits against these companies.
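To illustrate how this voluntary mechanism works, here is a short sketch using Python’s standard-library robots.txt parser. GPTBot is the user agent OpenAI documents for its crawler, but the robots.txt content and URL below are invented for the example, and nothing forces a crawler to actually run such a check.

```python
# A well-behaved crawler reads a site's robots.txt and honours it.
from urllib.robotparser import RobotFileParser

# Invented robots.txt: blocks OpenAI's documented GPTBot, allows everyone else.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article")) # True
```

Because honouring these rules is left to the crawler, the mechanism depends entirely on good faith, which is why publishers have increasingly turned to licensing deals and lawsuits instead.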

Governments are further intervening through legislation to enhance IP protection in the AI era. Legislative bodies play a critical role in introducing safeguards against the unchecked use of AI in accessing and utilising copyrighted material, thereby safeguarding the interests of creative industries in the digital age. For instance, a report from the UK Culture, Media and Sport Committee, composed of MPs from different parties, criticised the policies of the current UK administration, pointing out flaws and expressing concerns. In particular, the MPs objected to the initial proposal to exclude text and data mining from copyright protection, suggesting it revealed a lack of comprehension of the creative industry’s importance to the economy and its employment of millions.

In September 2023, lawmakers in France introduced a draft bill to regulate how artificial intelligence interacts with copyright laws. The aim was to make sure that AI respects creators’ rights, gets proper permission to use their work, and gives them fair credit. The proposed changes to the French Intellectual Property Code would mean that AI needs permission before using copyrighted material. Additionally, the law suggests a new tax on companies using AI to create works with uncertain origins. 

In the USA, lawmakers and regulatory bodies have also been grappling with the implications of AI for intellectual property rights. In response to the rapid advancements in generative AI and its widespread adoption, the US Copyright Office is reviewing the copyright implications. This action follows requests from Congress, the public, creators, and users of AI technology. Additionally, the US Patent and Trademark Office (USPTO) has examined the patentability of AI inventions and issued guidance for patent examiners. The guidance addresses the complexity of identifying substantial human input in AI-assisted inventions, providing key principles: merely posing a problem to an AI system is usually not a significant contribution, but crafting prompts tailored to elicit specific solutions might demonstrate one. And while acknowledging an AI-generated outcome as inventive does not automatically confer inventor status, making substantial contributions to it might.


Looking ahead 

Effective regulation of intellectual property rights concerning AI systems and their creations is crucial not only for legal clarity but also for motivating innovation in the market. Given the novelty of AI-generated artistic works, a reevaluation of current approaches to this issue seems unavoidable. Key points could include regulations of IP rights for AI systems and their creations. A likely solution involves implementing a distinct protection system for AI-generated creations, with rights held by either the AI system’s creator or user based on specific criteria. Additionally, further discussions might be needed to address the protection of algorithms, which are currently not covered under the existing EU legislative framework.

TikTok, a threat or a victim of complicated cyber-diplomatic relationships?

The term ‘legal saga’ hardly seems adequate to describe what has been happening, and still happens, to TikTok in the social media landscape ever since it was launched, given the complexity and long duration of its legal disputes. The law to ban TikTok, signed in the USA by President Joe Biden, is the latest landmark in the social network’s legal file. It is not the first time a social network has been banned or suspended somewhere, but it is a big deal when we speak about the USA and the 170 million of its citizens who use the app daily.

To comprehensively understand the intricate web of sociocultural, economic, and legal issues, and their interconnectedness within this major tech company’s journey, it is essential to delve into its origins and the evolution of these legal battles from the outset.

The beginnings of TikTok’s legal controversies in the wake of the rise of AI

Data governance and concerns over privacy and security

With the explosion of social media’s commercial expansion, TikTok has found itself at the centre of numerous legal controversies. Initially, concerns were primarily focused on data privacy and security, with various governments questioning how TikTok, owned by Chinese company ByteDance, handled user data. These concerns were amplified as the cutting-edge semiconductor industry became a battleground for chip market dominance and as AI technology evolved and integrated more deeply into digital platforms. TikTok began using sophisticated AI algorithms that personalise content feeds by collecting extensive user data, leading to fears about potential misuse and data security risks.

Sociocultural impact – content policy on deepfakes, hate speech, and elections in the digital age

As AI technology progressed, TikTok faced additional scrutiny over its content moderation practices. The platform’s AI-driven systems for detecting and removing inappropriate content have been criticised for both overreach and underperformance: TikTok’s algorithms sometimes mistakenly censor harmless content while failing to effectively filter out harmful material, including misinformation and hate speech. Controversial cases, such as the audio deepfake impersonating US President Joe Biden, have caused alarm among politicians in a year with numerous elections. Deepfake videos depicting fictitious members of the Le Pen family have also recently surfaced online, stirring controversy as France’s far-right parties gear up for the upcoming EU elections. Consequently, these and other deepfakes have spurred legal challenges and regulatory investigations in multiple countries, pushing TikTok to enhance transparency and refine its moderation technologies.

The rise of deepfakes and AI-generated content has further complicated TikTok’s legal landscape. Researchers and lawmakers have expressed concern that AI-generated videos could be used to spread misinformation, especially during sensitive times such as elections or warfare. In response to these challenges, TikTok has implemented measures to label AI-generated content, working with technology like Adobe’s ‘Content Credentials’ to mark such media. Despite these efforts, the potential for AI misuse remains a contentious issue, prompting ongoing debates about the adequacy of TikTok’s measures and the broader implications for digital platforms.

Bans on TikTok around the world since its global rise

TikTok has faced bans and severe restrictions in several countries due to concerns over national security, privacy and data protection, and content (moderation) policy. One of the most prominent instances occurred in India, which first ordered the removal of the application from the Google and Apple stores in 2019, considering it a platform that degrades culture and encourages pornography, and citing its exposure of teens to paedophiles, explicit and disturbing content, social stigma, and health issues. Subsequently, the country imposed a ban on TikTok in June 2020. The Indian government cited data privacy and national security concerns, arguing that the app was transmitting user data to servers outside the country. The ban came during heightened border tensions between India and China, effectively removing TikTok from one of its largest markets and impacting millions of users and creators in the region.

The application faced bans in several other countries as well. In 2020, Pakistan temporarily banned the app, citing concerns over immoral and indecent content. The ban was lifted after TikTok assured Pakistani authorities that it would implement stricter content moderation policies. Similarly, Indonesia banned the app for a brief period in 2018 due to content deemed blasphemous and inappropriate. The ban was lifted after TikTok agreed to remove the offending content and improve its moderation practices. Recently, Kyrgyzstan also banned TikTok following security service recommendations to safeguard children. The decision came amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

Other bans occurred in Australia, where the government banned TikTok from all federal government-owned devices over security concerns, aligning with other ‘Five Eyes’ intelligence-sharing network members. New Zealand imposed a ban on the use of TikTok on devices with access to the parliamentary network amid cybersecurity concerns. 

Along with the countries listed above that banned TikTok on government-issued devices due to security risks, Canada and Taiwan banned TikTok and some other Chinese apps on state-owned devices, with Taiwan launching a probe into the app in December 2022 over suspected illegal operations. Nepal, on the other hand, banned TikTok in November 2023, citing disruption of social harmony and goodwill caused by the misuse of the popular video app. Somalia also banned the application in 2023, citing concerns about such platforms being used by terrorists and immoral groups to circulate disturbing images and false information.


EU bans

More recently, in Europe, TikTok has come under regulatory scrutiny from various governments. The EU has raised concerns about data privacy and compliance with its stringent General Data Protection Regulation (GDPR). The European Commission had already banned TikTok on its corporate phones and highlighted the perceived danger the platform poses under the GDPR. Furthermore, the European Commission president suggested that banning TikTok in the EU could be an option during a debate in Maastricht featuring parties’ lead candidates for the bloc’s 2024 election. Some EU countries have considered or implemented restrictions on the app over its addictive nature and its effects on children. Additionally, the app has faced calls for bans from political figures who argue that TikTok could be used for espionage or to influence public opinion, especially during election periods.

The major restrictions TikTok faced in the EU occurred in France in April 2023, when the country banned TikTok from government employee devices due to data security and privacy concerns. The ban was part of a broader measure affecting several social media and gaming apps deemed inappropriate for government networks. Belgium imposed a similar ban in March 2023, prohibiting TikTok on federal government employees’ work phones over national security and privacy concerns, given the potential for data sharing with Chinese authorities through TikTok’s owner ByteDance. In Scotland, the government removed TikTok from Scottish Parliament phones and devices due to security concerns.

Although the UK is no longer part of the EU, it is relevant to note that the country also banned TikTok from government devices due to security concerns, a restriction aligned with similar actions taken within the EU. In the same year, Austria joined the ‘ban group’, prohibiting the Chinese-owned video-sharing app from being installed on government employees’ work phones as a precautionary measure against potential security risks.

TikTok and the ‘ban or divest’ legal saga in the USA

In the USA, TikTok has faced perhaps the strictest scrutiny and legal challenges of the last five years. During the Trump administration, an executive order was issued in August 2020 seeking to ban the app unless ByteDance sold its US operations to an American company, a precedent for the current situation. Although the ban was temporarily halted by court rulings, the Biden administration has continued to review and address concerns regarding TikTok’s data practices and its potential ties to the Chinese government.

The proposed ban led to a flurry of legal battles and negotiations, with TikTok challenging the executive order in court and exploring potential deals with American companies such as Microsoft, Walmart, and Oracle. These negotiations, however, did not result in a definitive resolution, and the legal actions temporarily halted the enforcement of the ban. The controversy carried over into the Biden administration, which has taken a more measured approach while retaining the underlying concerns.

In April 2024, President Biden signed legislation requiring ByteDance to divest TikTok or face a US ban. The law extended the divestment timeline and aimed to address ongoing national security concerns. TikTok responded by challenging the law in court, arguing that it violated freedom of speech, including under the First Amendment, and that no concrete evidence had been provided to substantiate the claims against the company.

Despite TikTok’s reassurances and legal challenges, the ‘ban or divest’ dilemma persists, reflecting broader tensions between the USA and China over technology and data security. The outcome of this controversy remains uncertain as TikTok continues to navigate the complex legal landscape and regulatory scrutiny in the USA. The resolution of this issue will have significant implications for the future of TikTok in the US market and for the broader regulatory environment governing international tech companies operating in the USA.

As this legal saga unfolds, it underscores the increasing importance of digital sovereignty and data privacy in the global tech industry. The TikTok case has already set a precedent for how free speech and corporate rights can be curtailed in the name of national security, and the ongoing saga is a testament to the intricate interplay between technology, law, and geopolitics in the modern digital age.

Tech titans clash: Inside the US-China battle for chip market dominance

Competition between the USA and China in chip trade and production is intensifying daily, to the extent that it is now widely described as a chip war between the two superpowers.

In this analysis, we will review all the facts and steps that Beijing and Washington have taken so far to position themselves better in the chip market. This will help us see the whole picture better and allow us to predict what will come next more easily.

China

China’s first significant step in strengthening its position in the semiconductor technology market happened in 2014 when a broader national security strategy was introduced. The main task of the strategy, active to this day, is to position China as the world’s leading science and technology superpower, which is part of its goal to establish itself as a global superpower. Chinese leaders realised that semiconductor microchips are crucial to emerging civilian and military technologies and for achieving their long-term geopolitical goals and potentially surpassing the USA as the dominant superpower.

China has made significant progress in technological advancements that have outpaced the forecasts from Western intelligence and industry analyses. For example, the military-civil fusion programme aims to integrate civilian technologies with military capabilities and to blur the lines between civilian and military applications.

Part of the broader national security strategy is reducing dependence on Western technologies and reaching self-reliance in critical sectors like semiconductors. That is precisely why Chinese President Xi Jinping called for increased technological autonomy to counter Western influence and strengthen China’s global position. China has also invested heavily in its semiconductor industry while setting ambitious targets to increase chip self-reliance, although some targets, such as reaching 70% self-sufficiency by 2025, are proving challenging.

Those efforts have been bolstered further by constant US pressure in the form of increasing trade restrictions and policies limiting Chinese technological investments and exports. Semiconductor microchips are a focal point of Beijing’s economic security strategy, and the conflict over them has not gone without countermeasures. For example, China accelerated its efforts to remove foreign-manufactured chips, especially those made in the USA, setting a 2027 deadline for domestic telecommunications companies to do so. That move could hit American chipmakers such as Intel and AMD particularly hard and inflict significant financial damage on the US economy.

China also found a way to bypass Washington’s prohibition on sales of Nvidia’s high-end AI processors to China: instead of buying directly from Nvidia, Chinese universities and research institutions acquired the processors through resellers. There was no lack of open criticism either, as officials in Beijing criticised the USA for tightening trade rules, emphasising that the move raises barriers and introduces uncertainty into the global chip sector. China is showing clear signs that it will not give up the fight, but much depends on the speed of its technological progress.

US

As for the USA, when President Biden took office in 2021, concerns about China’s accelerating technological progress, mainly in the field of AI, were already very much present. Many feared that China could overtake the USA in semiconductor technology, threatening Western technological dominance.

This is precisely why the EU and the USA began placing economic security in the foreground, a turn away from earlier policies promoting globalisation and trade liberalisation. The shift was also triggered by reports alleging that China had acquired Western technologies through joint ventures and projects and had caused disruptions in supply chains for crucial materials and equipment.

However, the most significant turning point in American policy on semiconductor manufacturing was the CHIPS Act of August 2022. Its primary purpose was to boost domestic semiconductor manufacturing and protect it from potential sabotage, while also reducing US dependence on imports, especially from China.

Furthermore, Washington implemented a series of sanctions and export controls to protect its intellectual property and national security interests, including restrictions on exporting to China the equipment required to produce advanced chips, particularly at nodes below 16/14 nm.

The next step the USA took was to strengthen its alliances, primarily with the Netherlands and Japan, both of which enhanced export controls on high-performance semiconductor manufacturing equipment. To further isolate China, the White House also proposed the Chip 4 Alliance with Japan, South Korea, and Taiwan, aiming to bolster the resilience of East Asia’s semiconductor supply chain.

Taiwan plays a vital role in the US-China conflict because it produces a significant share of the world’s most advanced chips. Its technological leadership, supplier diversity, and resilience have made it a cornerstone of efforts to strengthen the semiconductor supply chain, and both Beijing and Washington want to increase their influence over the island to take advantage of the breadth of its chip production.

What can we expect?

The rivalry between China and the USA in this field started during Donald Trump’s presidency and has continued under President Joe Biden. It reflects a rare bipartisan consensus in the US Congress to challenge China’s technological ambition. For China, on the other hand, global technological leadership is a matter of national pride, a theme ever-present in President Xi Jinping’s leadership.

The expanded tech war manifests in various arenas, with the most notable ones being chipmaking and green technology. Chipmaking is crucial for information processing, while green technology is becoming increasingly important for the global economy. Both China and the USA are vying for dominance in these sectors.

The Economist stated in its article ‘The tech wars are about to enter a fiery new phase’ that regardless of the outcome of future elections in the USA, the next president is likely to continue challenging China’s technological advancements. This echoes the bipartisan effort in Washington to confront China’s growing influence in advanced technologies.

The Economist added that heightened tensions and a more aggressive US approach under a future administration are also possible. This could involve expanding export controls and sanctions beyond companies like Huawei to other Chinese tech firms. Such actions might provoke retaliatory measures from China, further escalating the conflict.

The Taiwanese chipmaker TSMC, which has significant investments in China, could be pressured by the US government to limit its operations there. That could also happen with other foreign companies that do business in China and get caught in the crossfire of this conflict.

Despite winning over some allies, the USA may struggle to bring other partners on board, particularly in Europe and Asia. Washington’s approach to technology and China could affect its relationships with allies whose priorities differ, straining alliances and potentially complicating efforts to form a united front against China’s technological ambition.

This clash between the two great powers will undoubtedly leave its mark on the world economy. The International Monetary Fund (IMF) estimates that eliminating high-tech trade between the two countries could cost as much as $1 trillion annually, equivalent to 1.2% of global GDP. It is in the general interest to resolve this conflict as soon as possible, although everything indicates that this will not happen soon.

UN AI resolution: a significant global effort to harness AI for sustainable development

On 21 March, the United Nations General Assembly (UNGA) overwhelmingly passed the first global resolution on AI. Member states are urged to protect human rights and personal data and to monitor AI for potential harms, so the technology can benefit all.

The unanimous adoption of the US-led resolution on the promotion of ‘safe, secure, and trustworthy artificial intelligence systems that will also benefit sustainable development for all’ is a historic global effort to ensure the ethical and sustainable use of AI. While nonbinding, the resolution was supported by more than 120 states, including China, and endorsed without a vote by all 193 UN member states.

Vice President Kamala Harris praised the agreement, stating that this ‘resolution, initiated by the USA and co-sponsored by more than 100 nations, is a historic step towards establishing clear international norms for AI and fostering safe, secure, and trustworthy AI systems’.

To unpack the significance of this resolution and its potential impact on AI policies, we will look at five dimensions: policy and regulation in the global context, ethical design, data privacy and protection, transparency and trust, and AI for sustainable development.

Global policy and regulation

EU policymakers have paved the way with the recently approved AI Act, the first comprehensive legislation covering the new technology. The Council of Europe (CoE), a 46-member human rights body, has also agreed on a draft AI Treaty to protect human rights, democracy, and the rule of law.

The United States wants to play a leadership role in shaping global AI regulations. Last October, President Biden unveiled a landmark Executive Order on ‘Safe, Secure, and Trustworthy AI’, and in March, VP Harris announced a new White House Office of Management and Budget policy for federal agencies’ use of AI.

Other countries and regions are also developing their own frameworks, guidelines, strategies, and policies. For instance, at the African Union (AU) level, its Development Agency (AUDA) released in March a White Paper on a pan-African AI policy and a continental roadmap.

The UN resolution acknowledges that multiple initiatives may lead the way in the right direction and further encourages member states, international organisations, and others to assist developing countries in their national processes.

Ethical design

The text highlights the need for ethical design in all AI-based decision-making systems (6.b, p5/8). AI systems should be designed, developed, and operated within the frameworks of national, regional, and international law to minimise risks and liabilities and to preserve human rights and fundamental freedoms (5., p5/8). A collaborative approach combining AI, ethics, law, philosophy, and the social sciences can help craft comprehensive ethical frameworks and standards to govern the design, deployment, and use of AI-powered decision-making tools. To this end, the resolution urges member states and other stakeholders to integrate ethical considerations into the design, development, deployment, and use of AI to safeguard human rights and fundamental freedoms, including the rights to life, privacy, and freedom of expression.

Introducing the draft, Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, added that ‘AI should be created and deployed through the lens of humanity and dignity, safety and security, human rights, and fundamental freedoms’.

Data privacy and protection

The UN resolution addresses data privacy safeguards to guarantee safe AI development, especially when the data used includes sensitive personal information such as health, biometric, or financial data. Member states and relevant stakeholders are encouraged to monitor AI systems for risks and to assess their impact on data security and personal data protection throughout their life cycle (6.e, p5/8). Privacy impact assessments and detailed product testing during development are suggested as mechanisms to protect data and preserve fundamental privacy rights. Additionally, transparency and reporting obligations in accordance with all applicable laws contribute to safeguarding privacy and protecting personal data (6.j, p6/8).

Transparency and trust

The document highlights the value of transparency and consent in AI systems: transparency, inclusivity, and fairness help ensure that AI systems serve our diverse needs and preferences.

To preserve fundamental human rights, algorithms that affect our lives have to be developed in ways that do not harm us or the environment. This includes providing notice and explanation, promoting human oversight, and ensuring that automated decisions are reviewed. Where necessary, human decision-making alternatives should be accessible, as should effective redress.

Transparent, interpretable, predictable, and explainable AI systems facilitate reliability and accountability, allowing end-users to better understand, accept, and trust outcomes and decisions that impact them. 

AI for sustainable development

The resolution confirms that safe, secure, and trustworthy AI systems can accelerate progress toward achieving all 17 sustainable development goals (SDGs) in all three dimensions – economic, social, and environmental – in a balanced way. 

AI technologies can be a driving force in achieving the SDGs by augmenting human intelligence and capabilities, improving efficiency, and reducing environmental impact. For instance, AI models can predict and detect errors, plan more effectively, and boost renewable energy efficiency. AI can also streamline transportation and traffic management and anticipate energy needs and production. Conversely, any AI system designed, developed, deployed, and used without proper safeguards poses potential threats that could hamper progress toward the 2030 Agenda and its SDGs.

The aim is to reduce the digital divide between wealthy industrialised nations and developing countries, and within countries, giving all nations proper representation in discussions on AI governance for sustainable development. The intention is also to ensure that less developed nations have access to the technology, infrastructure, and capabilities needed to reap the promised gains of AI, such as disease detection, flood forecasting, effective capacity building, and a workforce upskilled for the future.

The UN resolution is a remarkable step in global AI policy because it addresses many of the key drivers for AI to play a safe and effective role in sustainable development that will benefit all. It also recognises that innovation and regulation, far from being mutually exclusive, complement and reinforce one another.

By following up on the current consensus, implementing these recommendations, and aligning them with other regional and global initiatives, governments, public and private sectors, and other involved stakeholders can harness AI’s potential while minimising its risks.

The road ahead for global AI governance

South Korea will co-host the second AI Safety Summit with the UK as a virtual conference in May, and France will hold the next in-person global gathering six months later, after UK Prime Minister Rishi Sunak led the inaugural AI Safety Summit at Bletchley Park last November.

By September 2024 and the Summit of the Future in New York, more important developments in global AI policy and governance can be expected.

One is the work in progress of the UN High-Level Advisory Body on AI, which will culminate in a final report. This will progress in parallel with, and feed into, the long-awaited Global Digital Compact process.

Another one will be the formal adoption of the CoE ‘Convention on AI, Human Rights, Democracy, and the Rule of Law’ and its subsequent ratification process open to member and non-member states. 

On the EU side, the European Commission has started staffing and structuring the newly established AI Office. The EU AI Act has been adopted by the European Parliament and awaits the EU Council’s formal approval. It will enter into force 20 days after publication in the Official Journal, with phased implementation and enforcement: after 6 months, the prohibitions on AI practices posing unacceptable risks apply; after 12 months, the obligations for providers of general-purpose AI models take effect and member states should designate their relevant national authorities; and after 24 months, the legislation becomes fully applicable.

In Africa, the African Union Commission has begun holding a series of online consultations with diverse stakeholders across the continent to gather input and inform the development of an Africa-wide AI policy, with a focus on ‘building the capabilities of AU member states in AI skills, research and development, data availability, infrastructure, governance and private sector-led innovation’.

The rapid advance of AI technologies poses new challenges for legislators around the world since existing rules struggle to keep up with the acceleration of technical progress. This demonstrates the critical need for regulatory frameworks that can adapt to AI’s evolving landscape.

The governance of AI systems requires ongoing discussions on appropriate approaches that are agile, adaptable, interoperable, inclusive, and responsive to the needs of both developed and developing countries. The UNGA resolution opens the door to global cooperation on a safe, secure, and trustworthy AI for sustainable development that benefits all.

Digital dominance in the 2024 elections

As a historic number of voters heads to the polls, determining the future course of over 60 nations and the EU in the years ahead, all eyes are on digital technologies, especially AI.

Digital technologies, including AI, have become integral to every stage of the electoral process, from the inception of campaigns to polling stations, a phenomenon observed for several years. What distinguishes the current landscape is their unprecedented scale and impact. Generative AI, a type of AI enabling users to quickly generate new content, including audio, video, and text, made a significant breakthrough in 2023, reaching millions of users. With its ability to produce vast amounts of content quickly, generative AI adds to the scale of misinformation by generating false and deceptive narratives at an unprecedented pace. The multitude of elections worldwide, pivotal in shaping the future of many states, has directed intense focus on synthetically generated content, given its potential to sway election outcomes.

Political campaigns have experienced the emergence of easily produced deepfakes, stirring worries about information credibility and setting off alarms among politicians who called on Big Tech for more robust safeguards.

Big Tech’s response 

Key players in generative AI, including OpenAI and Microsoft, joined platforms like Meta, TikTok, and X (formerly Twitter) at the Munich Security Conference in the battle against harmful content. Signatories of the tech accord committed to working together on tools for identifying deceptive AI-generated content, raising public awareness through educational campaigns, and taking action against inappropriate content on their platforms. Potential technologies under consideration include watermarking and embedded metadata to verify the origin of AI-generated content, focusing primarily on photos, videos, and audio.
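To make the metadata idea concrete, here is a minimal, hypothetical Python sketch that stamps a provenance note into a PNG file’s text chunk using the Pillow library. Industry schemes such as C2PA rely on cryptographically signed manifests, and watermarking proper alters the pixels themselves, so this is only a toy illustration of the general concept; the field names are invented for the example.

```python
# Minimal sketch of provenance metadata embedding (illustrative only).
# Real standards such as C2PA use signed manifests; a plain text chunk
# like this one can be stripped or forged trivially.
import json

from PIL import Image, PngImagePlugin


def tag_ai_generated(in_path: str, out_path: str, generator: str) -> None:
    """Write a provenance note into a PNG text chunk."""
    image = Image.open(in_path)
    info = PngImagePlugin.PngInfo()
    info.add_text("ai_provenance", json.dumps({
        "ai_generated": True,    # assumption: the caller knows the file's origin
        "generator": generator,  # e.g. the name of the model or tool used
    }))
    image.save(out_path, pnginfo=info)


def read_provenance(path: str):
    """Return the provenance note if present, else None."""
    raw = Image.open(path).text.get("ai_provenance")  # PNG text chunks
    return json.loads(raw) if raw else None
```

The obvious weakness, and the reason the industry is moving towards signed manifests and pixel-level watermarks, is that plain metadata like this disappears as soon as anyone re-saves or screenshots the file.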

After the European Commissioner for Internal Market, Thierry Breton, urged Big Tech to assist European endeavours to combat election misinformation, tech firms acted promptly in response.

Back in February, TikTok announced that it would launch in-app election centres in local languages for EU member states to prevent misinformation from spreading ahead of the election year.

Meta intends to launch an Elections Operations Center to detect and counter threats like misinformation and misuse of generative AI in real time. Google is collaborating with a European fact-checking network on a dedicated verification database for the upcoming elections. Google had previously announced an anti-misinformation campaign in several EU member states featuring ‘pre-bunking’ techniques to increase users’ capacity to spot misinformation.

Tech companies are, by and large, partnering with individual governments’ efforts to tackle the spread of election-related misinformation. Google is teaming up with India’s Election Commission to provide voting guidance via Google Search and YouTube for the upcoming elections. It is also partnering with Shakti, the India Election Fact-Checking Collective, to combat deepfakes and misinformation, offering training and resources throughout the election period.

That said, some remain dissatisfied with tech companies’ ongoing efforts to mitigate misinformation. Over 200 advocacy groups have called on tech giants like Google, Meta, Reddit, TikTok, and X to take a stronger stance against AI-fuelled misinformation before the global elections. They claim that many of the largest social media companies have scaled back necessary interventions such as ‘content moderation, civil-society oversight tools and trust and safety’, making platforms ‘less prepared to protect users and democracy in 2024’. Among other requests, the companies are urged to disclose AI-generated content and prohibit deepfakes in political ads, to promote factual content algorithmically, to apply uniform moderation standards to all accounts, and to improve transparency through regular reporting on enforcement practices and disclosure of the AI tools they use and the data these are trained on.

EU to walk the talk?

Given the far-reaching impact of its regulations, the EU has assumed the role of de facto regulator of digital issues. Its policies often set precedents that influence digital governance worldwide, positioning the EU as a key player in shaping the global digital landscape.

European Commissioner for Internal Market Thierry Breton

The EU has been proactive in tackling online misinformation through a range of initiatives. These include implementing regulations like the Digital Services Act (DSA), which holds online platforms accountable for combating fake content. The EU has also promoted media literacy programmes and established the European Digital Media Observatory to monitor and counter misinformation online. With European Parliament elections approaching and the rising prevalence of AI-generated misinformation, leaders are ramping up efforts to safeguard democratic integrity against online threats.

Following the Parliament’s adoption of rules on online political advertising, which require clear labelling and prohibit the sponsoring of ads from outside the EU in the three months before an election, the European Commission issued guidelines for Very Large Online Platforms and Search Engines to protect the integrity of elections from online threats.

The new guidelines cover various election phases, emphasising internal reinforcement, tailored risk mitigation, and collaboration with authorities and civil society. The proposed measures include establishing internal teams, conducting elections-specific risk assessments, adopting specific mitigation measures linked to generative AI and collaborating with EU and national entities to combat disinformation and cybersecurity threats. The platforms are urged to adopt incident response mechanisms during elections, followed by post-election evaluations to gauge effectiveness.

European political parties have recently signed a code of conduct, brokered by the Commission, intended to maintain the integrity of the upcoming elections for the Parliament. The signatories pledge to ensure transparency by labelling AI-generated content and to abstain from producing or disseminating misinformation. While this introduces an additional safeguard into the electoral campaign, responsibility for implementation and monitoring falls on the European umbrella parties rather than on the national parties conducting the campaign on the ground.

What to expect

The significance of the 2024 elections extends beyond selecting new world leaders. They serve as a pivotal moment to assess the profound influence of digital technologies on democratic processes, putting digital platforms in the spotlight. The readiness of tech giants to uphold democratic values in the digital age and respond to increasing demands for accountability will be tested.

Likewise, the European Parliament elections will test the EU’s ability to lead by example in regulating the digital landscape, particularly in combating misinformation. The effectiveness of the EU initiatives will be gauged, shedding light on whether collaborative efforts can establish effective measures to safeguard democratic integrity in the digital age.

(Jail) time ahead for the cryptocurrency industry 

The cryptocurrency and digital asset industry has once again been the focus of worldwide media. This time, it is not about the promises of an inclusive future of finance, but about a number of court cases initiated or concluded in the past months.


These developments can be seen as a desire by regulators worldwide to establish legal practice around the new class of digital assets (or cryptoassets, as they are named in regulations worldwide) and to send a message to the ever-growing base of consumers of such products that they will be protected while entering this new arena. A particular push is visible in the United States, where two of the world’s biggest cryptocurrency exchanges, Binance and Kraken, have been charged by regulators. In both cases, regulators highlighted the lack of fully implemented know-your-customer (KYC) procedures as a primary concern. In the case of the world’s number one cryptocurrency exchange, Binance, the US Justice Department argued that KYC failures enabled money laundering and the evasion of international sanctions. Binance and its CEO, Zhao Changpeng, pleaded guilty to charges filed by the US Justice Department and agreed to a record USD 4.2 billion fine. In the most recent case, the cryptocurrency exchange KuCoin has been hit with anti-money-laundering charges of its own and faces a similar outcome. As for Kraken, the SEC is seeking a total ban in the USA, as the exchange failed to register within the regulatory framework.

Several significant cases from the past have also reached their final acts in recent months. The cases of Celsius, Terra, and, most prominently, the FTX exchange moved forward from a standstill, and the FTX trial ended with the sentencing of former FTX CEO Sam Bankman-Fried. The sentence was delivered in the court case arising from the November 2022 collapse of the FTX exchange and the Alameda Research trading firm. Bankman-Fried was sentenced to 25 years in prison, six months after being convicted of fraud, and was ordered to pay USD 11 billion in reparations and damages to FTX users and investors. Another crypto-company CEO, Do Kwon, was extradited from Montenegro to prosecutors in South Korea for the trial concerning the Terra cryptocurrency company. Kwon had hidden from law enforcement for a whole year before finally being arrested on the tarmac of Podgorica airport in Montenegro. He, too, faces a lengthy jail sentence if the allegations in the indictment hold up at trial.

‘Cryptocurrency King’ Do Kwon with a group of Montenegro police officers. Photo by: Radio Free Europe (RFE)

In another long-running legal battle before the US courts, the case against one of the biggest cryptocurrency companies, Ripple Labs, is nearing its end. Prosecutors are seeking another major fine, of USD 2 billion, which would, according to their statement, send a message to the industry about consumer protection. What exactly is that message?


‘Countries should take the issue seriously and strengthen regulation, as virtual assets tend to flow towards less regulated jurisdictions.’ This was the message of Financial Action Task Force (FATF) president T. Raja Kumar in an interview in which he acknowledged that only one-third of the world has implemented some form of cryptocurrency regulation.

Stronger regulation is indeed the trend facing crypto companies. At the same time, the industry as a whole has seen a significant drop in the value received by illicit cryptocurrency addresses, and the share of all crypto transaction volume associated with illicit activity has also decreased. This is stressed in the annual report by Chainalysis, which provides blockchain forensics to most governments worldwide. The industry, then, is moving in the right direction.

OEWG’s seventh substantive session: the highlights

The OEWG held its 7th substantive session on 4-8 March. With 18 months until the end of the group’s mandate, a sense of urgency can be felt in the discussions, particularly on the mechanism that will follow the OEWG.

Some of the main takeaways from this session are:

  • AI is increasingly prevalent in the discussion on threats, with ransomware and election interference rounding out the top three.
  • There is still no agreement on whether new norms are needed.
  • Agreement is also elusive on whether and how international law and international humanitarian law apply to cyberspace.
  • The operationalisation of the POC directory, the most important confidence building measure (CBM) to result from the OEWG, is in full swing ahead of its launch on 9 May.
  • Bolstering capacity building efforts and funding for them are necessary actions.
  • The mechanism for regular institutional dialogue on ICT security must be single-track and consensus-based. Whether it will take the shape of the Programme of Action (PoA) or another OEWG is still up in the air.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated OEWG process page.

UN OEWG
This page provides detailed and real-time coverage on cybersecurity, peace and security negotiations at the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.
Threats: AI, elections and ransomware at the forefront

The widespread availability of AI tools for different purposes led delegations to focus on AI-enabled threats. AI tools may exacerbate malicious cyber activity, for example, by speeding up the search for ICT vulnerabilities, aiding malware development, and boosting social engineering and phishing tactics.

France, the Netherlands, and Australia spoke about the security of AI itself, pointing to the vulnerability of algorithms and platforms and the risk of model poisoning.

2024 is a year of elections at different levels in many states. Large language models (LLMs) and generative AI accelerate the creation of fakes and the proliferation of disinformation and manipulation of public opinion, especially during significant political and social processes. Belgium, Italy, Germany, Canada, and Denmark expressed concern that cyber operations are used to interfere in democratic processes. The malicious use of cyber capabilities can influence political outcomes and threaten the process by targeting voters, politicians, political parties, and election infrastructure, thus undermining trust in democratic institutions.

Another prevalent threat highlighted by the delegations was ransomware. Cybercriminals target critical infrastructure and life-sustaining systems, with states noting that healthcare is the sector suffering most; Belgium stressed that such attacks eventually lead to human casualties because of the disruption to medical assistance. The USA and Greece attributed the increase in ransomware attacks partly to states that allow criminal actors to operate from their territories with impunity. AI, moreover, gives malicious actors new leverage, providing unsophisticated operators of ransomware-as-a-service with new possibilities and allowing rogue states to exploit the technology for offensive cyber activities.

Ransomware attacks go hand in hand with IP theft, data breaches, privacy violations, and cryptocurrency theft. The Republic of Korea, Japan, the Czech Republic, Mexico, Australia, and Kenya connected such heists with the financing of WMD proliferation.

Delegations expressed concern about the growing commercial market for cyber intrusion capabilities, 0-day vulnerabilities, and hacking-as-a-service. The UK, Belgium, Australia, and Cuba considered this market capable of increasing instability in cyberspace. The Pall Mall Process, launched by France and the UK to address the proliferation of commercially available cyber intrusion tools, was endorsed by Switzerland and Germany.

The growing IoT landscape expands the attack surface, Mauritius, India, and Kazakhstan noted. Quantum computing may break existing encryption methods, giving strategic advantages to those who control the technology, Brazil added; it could also be used to develop armaments and other military equipment and to support offensive operations.

Russia once again drew attention to the use of information space as an arena of geopolitical confrontation and militarisation of ICTs. Russia, China, and Iran have also highlighted certain states’ monopolisation of the ICT market and internet governance as threats to cyber stability. Syria and Iran pointed to practices of technological embargo and politicised ICT supply chain issues that weaken the cyber resilience of States and impose barriers to trade and tech development.

Norms: new norms vs. norms’ implementation

Reflections from several delegations highlighted the familiar binary dilemma: are new norms needed or not?

Iran, China, and Russia highlighted once again that new norms are needed. Russia suggested new norms to strengthen the sovereignty, territorial integrity, and independence of states; to establish the inadmissibility of unsubstantiated accusations against states; and to promote the settlement of interstate conflicts through negotiations, mediation, reconciliation, or other peaceful means. Brazil noted that additional norms will become necessary as technology evolves and stressed that any efforts to develop new norms must occur within the UN OEWG. South Africa said it could support a new norm to protect against AI-powered cyber operations and attacks on AI systems. Vietnam strongly supported the development of technical standards on electronic evidence to facilitate verification of the origins of cybersecurity incidents.

However, some delegations insist that implementing existing norms comes before elaborating new ones. Bangladesh urged states to collaborate more closely to translate norms into concrete actions and to focus on providing guidance on their interpretation and implementation. The UK, in particular, suggested four steps to improve the implementation of the norms by addressing the growing commercial market for intrusive ICT capabilities: states should prevent commercially available cyber intrusion capabilities from being used irresponsibly, take the appropriate regulatory steps within their domestic jurisdictions, conduct procurement responsibly, and use cyber capabilities responsibly and lawfully.

Several delegations raised accountability and due diligence issues in implementing the agreed norms. New Zealand, in particular, suggested that the OEWG could usefully examine what to do when agreed norms are wilfully ignored. France mentioned that it continues its work on due diligence norm C with other countries. Italy called for dedicated efforts to set up accountability mechanisms to ‘increase mutual responsibility among states’ and proposed national measures to detect, defend against, respond to, and recover from ICT incidents, which may include establishing a national centre or responsible agency leading on ICT matters.

The Chair issued a draft norms implementation checklist before the start of the session. According to Egypt, the checklist must be simplified because it includes duplicate measures and detailed actions beyond states’ capabilities; it should also acknowledge technological gaps among states and their diverse national legal systems, thus respecting regional specificities. Many delegations strongly supported the checklist and made recommendations. For example, the Netherlands suggested that the checklist include the consensus notion that state practices, such as arbitrary or unlawful mass surveillance, may negatively impact human rights, particularly the right to privacy.

UN OEWG Chair publishes discussion paper on norms implementation checklist
The checklist comprises voluntary, practical, and actionable measures collected from different relevant sources.

Some delegations addressed the Chair’s questions on implementing the norms related to critical infrastructure protection (CIP) and supply chain security. The EU recalled the need to look into existing cybersecurity best practices in this regard, citing the Geneva Manual as a multistakeholder initiative clarifying the roles and responsibilities of non-state actors in implementing the norms. Italy encouraged the adoption of specific frameworks for assessing the supply chain security of ICT products based on guidelines, best practices, and international standards; practically, this could include establishing national evaluation and security certification centres for cyber certification schemes. The Republic of Korea suggested building institutional and normative foundations to provide security guidelines from the development stage of software products onwards, which could be used in the public sector to protect public services and critical infrastructure from cyberattacks. Japan suggested adopting the software bill of materials (SBOM) and discussing how ICT manufacturers can achieve security by design.
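For context, an SBOM is a machine-readable inventory of the components inside a piece of software, which lets operators of critical infrastructure check their supply chain against known vulnerabilities. As a rough, hypothetical sketch (the top-level fields follow the open CycloneDX JSON format, but the component names and versions are invented for illustration):

```python
# Toy illustration of a minimal SBOM in the CycloneDX JSON style.
# Component names and versions below are invented for the example.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}

# A security team could cross-reference this inventory against
# vulnerability databases before deploying the software.
print(json.dumps(sbom, indent=2))
```

In practice, SBOMs are generated automatically by build tools and cross-referenced against vulnerability databases rather than written by hand.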

International law: applicability to use of ICTs in cyberspace

Member states held to their previously stated positions on the applicability of international law. Most confirmed that international law applies to cyberspace, including the UN Charter, international human rights law, and international humanitarian law. Russia and Iran, however, stated that existing international law does not apply to cyberspace, while Syria noted that how international law applies in cyberspace is unclear. China and Russia pointed out that the principles of international law apply. These states, as well as Pakistan, Burkina Faso, and Belarus, support the development of a new legally binding treaty.

Of note was the contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay that reflected on the continued engagement of a cross-regional group of 13 states based on a working paper from July 2023. The contribution highlighted the emerging convergence of views that: 

  • states must respect and protect human rights and fundamental freedoms, both online and offline, in accordance with their respective obligations; 
  • states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused; and
  • international humanitarian law applies to cyber activities in situations of armed conflict, including, where applicable, the established international legal principles of humanity, necessity, proportionality, and distinction.

Many states echoed the Colombian statement, including Germany, Australia, Czechia, Switzerland, Italy, Canada, the USA, the UK, Spain and others.

New discussion point

The same contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay introduced a new element into the OEWG’s substantive sessions: that states’ international obligations regarding internationally wrongful acts attributable to them include reparation for the injury caused. Thailand, Uganda, and the Netherlands also specifically addressed the need for reparation.

The discussions have also progressed on the applicability of international humanitarian law (IHL) to the use of ICT in situations of armed conflicts. 

Senegal presented a working paper on the application of international humanitarian law on behalf of Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Sweden, and Switzerland. The working paper shows convergence on the applicability of IHL in situations of armed conflict and delves deeper into the principles and rules of IHL governing the use of ICTs, notably military necessity, humanity, distinction, and proportionality. Other states welcomed the working paper, including Italy, Australia, South Africa, Austria, the United Kingdom, the USA, France, Spain, Uruguay, and others. 

On the other hand, Sri Lanka, Pakistan, and China have called for additional efforts to develop an understanding of the applicability of IHL and its gaps.

In its statement on IHL, the ICRC pointed out the differences between the definitions of armed attack under the UN Charter and under IHL, the need to discuss how IHL limits cyber operations, and the need to interpret the existing rules of IHL so as not to undermine IHL’s protective function in the ICT environment.

The International Committee of the Red Cross: New rules protecting from consequences of cyberattacks may be needed
The ICRC emphasised the urgent need for deeper discussions on the application of international humanitarian law to the use of ICTs in armed conflict, underscoring the importance of upholding humanitarian principles amidst evolving means of warfare.

The discussion on international law greatly benefited from the recent submission to the OEWG by the Peace and Security Council of the African Union on the Application of international law in the use of ICTs in cyberspace (Common African Position). Reflecting the views of 55 states, it represents a significant contribution to the work of the OEWG and an example of valuable input by regional forums. This comprehensive position paper addresses issues of applicability of international law in cyberspace, including human rights and IHL, principles of sovereignty, due diligence, prohibition of intervention in the affairs of states in cyberspace, peaceful settlement of disputes, prohibition of the threat or use of force in cyberspace, rules of attribution, and capacity building and international cooperation. The majority of the delegations welcomed the Common African Position.

African Union submits position on international law to OEWG
The position was adopted by the Peace and Security Council of the African Union on 31 January 2024.

The Chair also pointed out that, to date, 23 states have shared their national positions on the applicability of international law in cyberspace, and many others are preparing theirs. 

Most states supported scenario-based exercises to enhance mutual understanding of the applicability of international law, and would like the opportunity to conduct such exercises and hold a more in-depth discussion on international law at the May intersessional meeting. China firmly opposed this.

Several states, such as Japan, Canada, Czechia, the EU, Ireland and others, would like to see future discussions on international law embedded in the Programme of Action (PoA). Read more about the talks on the PoA below.

CBMs: operationalising the POC directory

The official launch of the Points of Contact (POC) directory is scheduled for 9 May, so discussions revolved around its operationalisation. At the time of the session, 25 countries had appointed their POCs. Most delegations reiterated their support for the directory and confirmed either that they had made their appointments or that the process was ongoing. Some states nevertheless suggested adjustments: Ghana, Canada, and Colombia commented that communication protocols may be helpful, while Czechia and Switzerland recommended that POCs not be overburdened with such procedures yet. Argentina also brought up the potential participation of non-state actors in the POC directory.

To further facilitate communication, several states advanced the usefulness of building a common terminology (Kazakhstan, Mauritius, Iran, Pakistan), while Brazil mentioned that Mercosur was already working on such a taxonomy.

While Czechia, Switzerland, and Japan underlined the necessity of focusing first on the implementation and consolidation of existing CBMs, many states favoured additional CBMs: the protection of critical infrastructure (Switzerland, Colombia, Malaysia, Pakistan, Fiji, the Netherlands, Singapore, and Czechia) as well as coordinated vulnerability disclosure (Singapore, the Netherlands, Switzerland, Mauritius, Colombia, Malaysia, and Czechia). The integration of multiple stakeholders into the development of CBMs was also considered by some states and organisations (the EU, Chile, Albania, Argentina), while adding public-private partnerships as a CBM received broad support from Kazakhstan, Qatar, Switzerland, South Africa, Mauritius, Colombia, Malaysia, Pakistan, South Korea, the Netherlands, and Singapore.

All states recalled and praised the significance of regional and subregional cooperation in implementing CBMs and its contribution to developing CBMs globally. In that respect, most states highlighted enriching cross-regional initiatives, such as a recent side event at the German House, and work within the OAS, the OSCE, ASEAN, the Pacific region, and the African Union was underlined. Interventions were enriched by shared national experiences, most notably Kazakhstan’s and France’s recent use of the OSCE community portal for POCs.

Finally, states highlighted the link between CBMs and capacity building, with Ghana, Djibouti, and Fiji sharing their national experiences in closing the digital divide. In that vein, Argentina, Iran, Pakistan, Djibouti, Botswana, Fiji, Chile, Thailand, Ethiopia, Mauritius, and Colombia support creating a specific CBM on capacity building.

Capacity building: bolstering efforts and funding

Several noteworthy proposals were put forth by different countries, each aiming to bolster capacity building efforts. The Philippines introduced a comprehensive ‘Needs-Based Capacity Building Catalogue,’ designed to help member states identify their specific capacity needs, connect with relevant providers, and access application guidance for capacity building programmes.

A scheme of the Philippine proposal. Source: UNODA.

Kuwait proposed an expansion of the Global Cybersecurity Cooperation Portal (GCSE), suggesting a module dedicated to housing both established and proposed norms, thus facilitating collaboration among member states and tracking the implementation of these norms. India's CERT expressed willingness to develop an awareness booklet on ICT best practices with contributions from other delegations, intending to post it on the proposed portal for widespread dissemination.

The crucial issue of funding for capacity building received substantial attention, with multiple delegations bringing to the fore the need for additional resources to support such efforts sustainably. Uganda advocated establishing a UN voluntary fund targeting the countries and regions most in need, while others stressed the imperative of exploring structured avenues within the UN framework for resource mobilisation and allocation. 

On the foundational capacities of cybersecurity, an emphasis was placed on developing ICT policies and national strategies, enhancing societal awareness, and establishing national cybersecurity agencies or CERTs.

Furthermore, the importance of self-assessment tools for improving states’ participation in capacity building programmes was emphasised. Pakistan proposed implementing checklists and frameworks for evaluating cybersecurity readiness and identifying gaps. Rwanda advocated reviews based on the cybersecurity capacity maturity model (CMM) to achieve varying levels of capacity maturity. The discussions also commended existing initiatives, such as the Secretariat’s mapping exercise, and emphasised the need for a multistakeholder approach to capacity building. Finally, Germany highlighted the significant contributions of organisations creating gender-sensitive toolkits for cybersecurity programming, underscoring the importance of incorporating gender perspectives in implementing the UN framework on cybersecurity.

Regular institutional dialogue: the fight for a single-track process

States are still divided on the issue of regular institutional dialogue. What they agree on is that there must be a singular process, its establishment must be agreed upon by consensus, and decisions it makes must be by consensus. 

France, one of the original co-sponsors of the PoA, delivered a presentation on the PoA’s future elements and organisation. Review conferences would be convened in the framework of the PoA every few years. Their scope would include (i) assessing the evolving cyber threat landscape and the results of the mechanism’s initiatives and meetings, (ii) updating the framework as necessary, and (iii) providing strategic direction and a mandate or programme of work for the PoA’s activities. The periodicity would need to be defined so as not to burden delegations, especially those from small and developing countries, while still allowing the PoA to keep up with the rapid evolution of technology and the threat landscape.

The PoA would also include open-ended plenary discussions to (i) assess progress in the implementation of the framework, (ii) take forward any recommendations from these modalities, (iii) discuss ongoing and emerging threats, and (iv) provide guidance for open-ended technical meetings and practical initiatives. Intersessional meetings could also be convened if necessary.

Furthermore, four modalities would feed discussions on the implementation of the framework: capacity building, voluntary reporting by states, practical initiatives, and contributions from the multistakeholder community. The PoA could leverage existing and potential capacity building efforts to increase their visibility, improve their coordination, and support the mobilisation of resources. The review conferences and discussions would then provide an opportunity to exchange views on ongoing capacity building efforts and identify areas where additional action is needed. Voluntary reporting by states could rest either on a new reporting system or on the promotion of existing mechanisms. The PoA would contain, enable, and deepen practical initiatives, building on existing ones and developing new ones when necessary, and it would enable engagement and collaboration with the multistakeholder community.

France also noted that a cross-regional paper to build on this proposal will be submitted at the next session.

Multiple delegations expressed support for the PoA, including the EU, the USA, the UK, Canada, Latvia, Switzerland, Côte d’Ivoire, Croatia, Belgium, Slovakia, Czechia, Israel, and Japan.

The Russian Federation, the country that originally proposed the OEWG, is the biggest proponent of its continuation. Russia cautioned against making decisions by majority in the General Assembly, noting that such an approach would not be met with understanding by member states, first and foremost developing countries, which fought long to gain the opportunity to partake directly in negotiations on the principles governing information security. Russia stated that after 2025, a permanent OEWG with a decision-making function should be established. Its pillar activity would be crafting legally binding rules to serve as elements of a future universal agreement on information security. The OEWG would also adapt international law to the ICT sphere, strengthen CBMs, launch mechanisms for cooperation, and establish programmes and funds for capacity building. Belarus, Venezuela, and Iran are also in favour of another OEWG.

A number of countries didn’t express support for either the PoA or the OEWG but noted some of the elements the future mechanism should have.

Similarly to Russia, China noted that the future mechanism should implement the existing framework but also formulate new norms and facilitate the drafting of legal instruments. The Arab Group noted that the future mechanism should develop the existing normative framework to achieve new legally binding norms. Indonesia also noted the mechanism should create rules and norms for a secure and safe cyberspace.

Latvia and Switzerland noted that the mechanism must focus on the implementation of the existing framework. However, Switzerland and the Arab Group noted that the mechanism could identify gaps in the framework and could develop the framework further.

Many delegations, including South Africa, Bangladesh, the Arab Group, Switzerland, Indonesia, and Kenya, noted that capacity building must be an integral part of the regular mechanism.

States also expressed opinions on which topics should be discussed under the permanent mechanism. Malaysia, South Africa, Korea, and Indonesia stated that the topics should be broadly similar to those of the OEWG. The UK, Latvia, and Kenya stated it should discuss threats, while Bangladesh outlined the following emerging threats to address: disinformation campaigns, including deepfakes; quantum computing; AI-powered hacking; and the use of ICTs for malicious purposes by non-state actors.

South Africa highlighted that discussion on voluntary commitments, such as norms or CBMs, should be developed without prejudice to the possibility of a future legally binding agreement. The UK noted that the mechanism should also discuss international law.

States also discussed the operational details of the future mechanism. For instance, Egypt suggested that the future mechanism hold biennial meetings, with review conferences convened every six years and intersessional meetings or informal working groups as decided by consensus. The future mechanism should ensure the operationalisation and review of established cyber tools, including the POC directory and all other proposals adopted by the current OEWG. Sri Lanka noted that the sequence of submitting progress reports, be it annual or biennial, should correspond with the term of the Chair and its Bureau.

Brazil suggested a moratorium on First Committee resolutions until the end of the OEWG’s mandate to allow member states to focus on their efforts in the OEWG. This suggestion was supported by El Salvador, South Africa, Bangladesh, and India.

Dedicated stakeholders session

The dedicated stakeholder session allowed ten stakeholders to share their expertise within the substantive session. 

The stakeholders addressed CII protection and AI (Center for Excellence of RSIS); norms I and J, supply chain vulnerabilities, and addressing the threat lifecycle (Hitachi); and the role of youth and the importance of the youth perspective as a possible area of thematic interest for the OEWG (Youth for Privacy). The topics of AI and supply chain management were echoed in SafePC Solutions' statement, while the Centre for International Law (CIL) at the National University of Singapore focused on the intersection of international law and the use of AI.

Chatham House shared its research on, among other things, the proliferation of commercial cyber intrusion tools and the Pall Mall Process launched by the UK and France. Access Now focused on intersectional harms caused by malicious cyber threats, issues of surveillance, and norms E and J. Building on the Chatham House and Access Now remarks, the Paris Peace Forum focused its intervention on the commercial proliferation of cyber-intrusive and disruptive capabilities, and on helpful steps states could undertake in the short term.

DiploFoundation focused on the responsibility of non-state stakeholders in cyberspace and shared the Geneva Manual on responsible behaviour in cyberspace. The Nuclear Age Peace Foundation, in its statement, connected cybersecurity concerns with safeguarding weapons systems and the importance of secure software, while the National Association for International Information Security focused on the need to interpret the norms of state behaviour.

What’s next?

The OEWG’s schedule for 2024 is jam-packed: in mid-April, the Chair will revise the discussion papers circulated before the 7th session; on 9 May, the POC directory will be launched, followed by a global roundtable meeting on ICT security capacity building on 10 May 2024; and a dedicated intersessional meeting will be held on 13-17 May 2024. 

Looking ahead to the second half of 2024, the 8th and 9th substantive sessions are planned for 8-12 July and 2-6 December 2024. A simulation exercise for the POC directory is also on the schedule, along with the release of capacity-building materials by the Secretariat, including e-learning modules.