In the beginning was the word, and the word was with the chatbot, and the word was the chatbot

In introducing this discussion, there is little need to dwell on how important the word, and by extension language and its narrower disciplines, has been to what we humans have achieved over time through our enriched communication systems, especially in technological and diplomatic contexts, where the word is an essential and powerful instrument.

Since linguistics, especially nowadays, is inseparable from the realm of technology, it is entirely legitimate to question the way chatbots, the offshoots of the latest technology, work. In other words, it is legitimate to question how chatbots learn through digital, that is, algorithmic cognition, and how they accurately and articulately express themselves in response to the most diverse queries or inputs.

What gives deep learning LLMs their human-like cognitive power?

To understand AI and the epicentre of its evolution, chatbots, which interact with people by responding to the most diverse prompts, we should delve into the branches of linguistics called semantics and syntax, and into the way chatbots learn and elaborate the most diverse and articulated information.

The complex understanding of language and how it is assimilated by humans, and in this case by deep learning machines, was anticipated as far back as Ferdinand de Saussure’s studies of language.

For that reason, we will explore the cognitive mechanisms underlying semantics and syntax in large language models (LLMs) such as ChatGPT, integrating the theoretical perspectives of one of the most renowned philosophers of language, Saussure. By synthesising linguistic theories with contemporary AI methodologies, the aim is to provide a comprehensive understanding of how LLMs process, understand, and generate natural language. What follows is a modest examination of the models’ training processes, data integration, and real-time interaction with users, highlighting the interplay between linguistic theories and AI language assimilation systems.

Overview of Saussure’s studies related to synta(x)gmatic relations and semantics 

Ferdinand de Saussure

We begin with Ferdinand de Saussure, one of the founding figures of 20th-century linguistics (alongside Charles Sanders Peirce and Leonard Bloomfield), and the introduction to syntax and semantics in his ‘Course in General Linguistics’. There, Saussure depicts language as a scientific phenomenon from a structuralist viewpoint, emphasising the synchronic study of language, that is, its current state rather than its historical evolution, with syntax and semantics among the fundamental components of its structure.

Syntax

Syntax, within this framework, is a grammar discipline which represents and explains the systematic and linear arrangement of words and phrases to form meaningful sentences within a given language. Saussure views syntax as an essential aspect of langue, the abstract language system, which encompasses grammar, vocabulary, and rules. He argues that syntax operates according to inherent principles and conventions established within a linguistic community rather than being governed by individual speakers. His structuralist approach to linguistics highlights the interdependence between syntax and other linguistic elements, such as semantics, phonology, and morphology, within the overall structure of language.

Semantics

Semantics is a branch of linguistics and philosophy concerned with the study of meaning in language. It explores how words, phrases, sentences, and texts convey meaning and how interpretation is influenced by context, culture, and usage. Semantics covers various aspects, including the meaning of words (lexical semantics), the meaning of sentences (compositional semantics), and the role of context in understanding language (pragmatics).

However, one of Saussure’s central precepts within semantics posits that language is a system of signs composed of the signifier (sound/image) and the signified (concept). This dyadic structure is crucial for understanding how LLMs process words and their possible ambiguity.


How do chatbots cognise semantics and syntax in linguistic processes?

Chatbots’ processing and understanding of language usage involves several key steps: training on vast amounts of textual data from the internet to predict the next word in a sequence; tokenisation to divide the text into smaller units; learning relationships between words and phrases for semantic understanding; using vector representations to recognise similarities and generate contextually relevant responses; and leveraging transformer architecture to efficiently process long contexts and complex linguistic structures. Although it does not learn in real time, the model is periodically updated with new data to improve performance, enabling it to generate coherent and useful responses to user queries.

As explained earlier, in LLMs, words and phrases are tokenised and transformed into vectors within a high-dimensional space. These vectors function similarly to Saussure’s signifiers, with their positions and relationships encoding meaning (the signified). Thus, within the process of ‘Tokenisation and Embedding,’ LLMs tokenise text into discrete units (signifiers) and map them to embeddings that capture their meanings (signified). The model learns these embeddings by processing vast amounts of text, identifying patterns and relationships analogous to Saussure’s linguistic structures.
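The embedding idea above can be made concrete with a toy sketch. The vectors and vocabulary below are invented for illustration (real LLM embeddings are learned from data and have hundreds or thousands of dimensions), but the cosine-similarity measure is the one commonly used to compare word vectors:

```python
import math

# Toy 4-dimensional embeddings; the values are purely illustrative.
embeddings = {
    "cat": [0.9, 0.8, 0.1, 0.0],
    "dog": [0.8, 0.9, 0.2, 0.1],
    "car": [0.1, 0.0, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """How closely two word vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related signifiers end up with similar vectors: their positions in the
# space encode the signified, in Saussurean terms.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low
```

In a trained model, words used in similar contexts drift towards each other in this space, which is what lets the model recognise that ‘cat’ and ‘dog’ are more alike than ‘cat’ and ‘car’.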

Chatbots’ ability to understand and generate text relies on their grasp of semantics (meaning) and syntax (structure). The model processes semantics through contextual word embeddings that capture meanings based on usage, an attention mechanism that weighs word importance in context, and layered contextual understanding that handles polysemy and synonymy. The model is pre-trained on general language patterns and fine-tuned on specific datasets for enhanced semantic comprehension.

For syntax, it uses positional encoding to understand word order, attention mechanisms to maintain syntactic coherence, layered processing to build complex structures, and probabilistic grammar learning from vast text exposure. Tokenisation and sequence modelling help track dependencies and coherence, while the transformer model integrates syntax and semantics at each layer, ensuring that responses are both meaningful and grammatically correct. Training on diverse datasets further enhances its ability to generalise across various language uses, making the chatbot a powerful natural language processing tool.
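The attention mechanism just described can be sketched in a few lines. This is a minimal, single-query version of scaled dot-product attention with toy numbers, not the multi-head, learned-weight machinery of a production transformer:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector (toy sketch)."""
    d = len(query)
    # Score each key by its dot product with the query, scaled by sqrt(d).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)  # how much each word 'matters' in this context
    # The output is a weighted blend of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query most resembles the first key, so the output leans towards
# the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(out)
```

This is how a model decides, for each word, which other words in the sentence to ‘attend to’ when building its contextual representation.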

An interesting invention

Recently, researchers in the Netherlands developed an AI platform capable of recognising sarcasm, which was presented at the Acoustical Society of America and Canadian Acoustical Association meeting. By training a neural network with the Multimodal Sarcasm Detection Dataset (MUStARD) using video clips and text from sitcoms like ‘Friends’ and ‘The Big Bang Theory,’ the large language model accurately detected sarcasm in about 75% of unlabeled exchanges.

Sarcasm generally takes the form of a linguistically layered and ironic remark, often rooted in humour, that is intended to mock or satirise something. When speakers are being sarcastic, they say something different from what they actually mean, and that is why it is hard for a large language model to detect such nuances in someone’s speech.
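To see why this mismatch is the crux of the problem, here is a deliberately crude heuristic. It has nothing to do with the MUStARD-trained neural network, which fuses text, audio, and video; this toy (with invented word lists) merely flags the incongruity between positive wording and a negative situation that sarcasm often relies on:

```python
# Invented, illustrative word lists; a real system learns these cues from data.
POSITIVE = {"great", "love", "wonderful", "fantastic"}
NEGATIVE = {"stuck", "rain", "broken", "traffic", "late"}

def maybe_sarcastic(utterance: str) -> bool:
    """Flag utterances that pair positive wording with a negative situation."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & POSITIVE) and bool(words & NEGATIVE)

print(maybe_sarcastic("Great, I love being stuck in traffic!"))  # True
print(maybe_sarcastic("I love this song."))                      # False
```

A heuristic like this fails on most real sarcasm, which is exactly why the researchers turned to multimodal deep learning: tone of voice and facial expression often carry the signal that the words alone hide.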

This process leverages deep learning techniques that analyse both syntax and semantics, along with the concepts of syntagma and idiom, to capture the layered structure and meaning of language, and it shows how comprehensive an LLM’s acquisition of human speech can be.

By integrating Saussure’s linguistic theories with the cognitive mechanisms of large language models, we gain a deeper understanding of how these models process and generate language. The interplay between structural rules, contextual usage, and fluidity of meaning partially depicts the sophisticated performance of LLMs’ language generation. This synthesis not only illuminates the inner workings of contemporary AI systems but also reinforces the enduring relevance of classical linguistic theories in the age of AI.

The intellectual property saga: approaches for balancing AI advancements and IP protection | Part 3


The first part of this series on AI and IP discussed the complexities of copyrighting AI-generated content, noting how traditional laws struggle with questions of ownership. The second essay explored AI’s impact on trade secrets and trademarks in the EU and US legal frameworks. In this concluding section, the methods being used to protect intellectual property in the era of AI will be explored.

Understanding AI and IP together is tricky. Unlike traditional forms of intellectual property, such as patents or copyrights, AI-generated outputs raise questions about ownership and authorship. Consequently, devising robust strategies to delineate ownership and protect AI-generated creations remains a major concern. As AI technology advances, it challenges traditional notions of ownership and attribution, calling for a re-evaluation of existing IP laws and ethical considerations.

One significant aspect of this future is determining who owns the rights to AI-generated creations. For instance, if an AI system autonomously composes a symphony or designs a groundbreaking invention, should the credit and ownership belong to the programmer who developed the AI, the company that deployed it, or perhaps even the AI itself? This question suggests the need for clarity in IP laws to incentivise investment in AI research and was discussed thoroughly in the first essay. In patent cases, AI systems currently lack legal recognition as inventors, which could lead to legal reforms to accommodate AI-generated inventions. Similarly, copyright laws require adaptation to address ownership issues surrounding AI-generated creative works. Meanwhile, in trademark law, questions arise regarding the licensing and authorisation of AI systems for trademark use.

As discussed in the second essay on this topic, many AI innovators choose trade secret protections over patents due to the ambiguity in traditional laws regarding AI and copyright. This approach allows them to keep their AI advancements confidential, making it challenging for others to detect and replicate their innovations, especially when used commercially.

Legal protection for AI products?

Legal battles, such as Thaler vs Vidal, in which Thaler filed patent applications for two inventions attributed to the DABUS AI without human involvement, illustrate the struggle to define AI’s role in intellectual property (IP) law. Typically, humans contribute to AI development, and an AI’s knowledge base includes copyrighted material. In that case, the US Court of Appeals ruled against recognising AI as an inventor, emphasising human-centric patent laws. Similarly, copyright registrations for AI-generated works face rejection due to human authorship requirements.

However, cases like Thaler vs Perlmutter and Kashtanova’s comic book registration confined protection to the human-authored components of AI-generated content. The US Copyright Office faced this issue when Kristina Kashtanova sought registration for a comic book made with Midjourney AI. The Office allowed Kashtanova to copyright the text and the arrangement of text alongside AI-generated artwork: while the text was deemed a product of human creativity, the registration also protected the arrangement. However, it explicitly excluded copyright for the AI-generated artwork itself.

The hurdles of patenting AI systems are further demonstrated by Alice Corp. vs CLS Bank International. This case established a two-step test for patent eligibility: the first step assesses whether a patent claim involves ineligible subject matter, such as abstract ideas; if so, the second considers whether the invention adds an ‘inventive concept’ that makes it eligible. Under this test, many software-based and algorithm-reliant patents have been deemed ineligible. Given AI’s reliance on software and algorithms, inventors must navigate Alice carefully when patenting AI-related innovations, or when deciding whether to pursue patents at all, as it is not clear how far these rules will reach into patents for software and AI.


Approaches for IP Protection in AI

A 2023 study from the University of Zurich’s Center for Intellectual Property and the Swiss Intellectual Property Institute proposes clarifications for AI-related IP. The project suggests recognising AI systems as ‘inventors’ for patent protection, while human authorship takes precedence for copyright. Copyright may be granted for content jointly created by AI and humans, provided that human creativity is evident. Furthermore, companies should be able to claim ownership of AI-generated IP without the need for new IP rights. In addition, permissive protection is advised to prevent AI owners from facing lawsuits for unintentional IP infringements.

Distinguishing between inspiration and infringement is crucial when calling for the establishment of governance mechanisms to address these concerns and maintain trust within creative industries. Recent conflicts, such as those between the Writers Guild of America (WGA) and the Alliance of Motion Picture and Television Producers (AMPTP), show the need for governance in creative AI usage.

The negotiations between the two parties in this case included demands to restrict AI’s involvement in content creation, though compromises were reached to balance innovation with copyright protection. The agreement does not prohibit the use of AI but places restrictions on how it is credited and utilised. It states that neither traditional AI nor generative AI can be considered ‘writers’ or ‘professional writers’, and material produced solely by AI is not recognised as literary material. However, the agreement allows for collaborative work between writers and AI tools, with studios aiming for copyrightable material resulting from human-AI collaboration. Safeguards are also in place to ensure that AI use does not compromise copyrightability, with companies retaining the right to reject AI use if it affects copyrightability or work exploitation.

Detecting AI infringements

According to Originality.AI, an AI detection tool, almost 20% of the world’s top 1,000 websites block crawler bots from collecting web data for AI use. Large language models (LLMs) such as OpenAI’s GPT family and Google’s LaMDA family require massive amounts of data for training. In response, various technology providers have developed and now offer AI-powered solutions designed to assist businesses in monitoring and protecting their intellectual property online. These solutions use machine learning algorithms to analyse vast amounts of data and detect potential instances of infringement. They provide tools for tracking the use of copyrighted material across websites, social media platforms, and digital channels, enabling rights holders to take appropriate action to protect their IP rights.

In August 2023, OpenAI launched its GPTBot crawler, aiming to gather data for enhancing future AI models. Crawlers work like web browsers but save data instead of displaying it; they are used by search engines like Google to collect information. While site owners can instruct crawlers to avoid their sites, compliance is voluntary, leaving room for noncompliance by malicious actors. Major websites (including Amazon, Quora, The New York Times, CNN, ABC, Reuters, and many others) have taken proactive measures to block AI crawlers from accessing their content, while Axel Springer and the Associated Press have recently signed agreements with OpenAI to license their news content for training AI models. Google and other internet companies view the activities of their data crawlers as fair use. However, numerous publishers and holders of intellectual property have voiced objections to this, leading to several lawsuits.
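The blocking mechanism involved here is the long-standing robots.txt convention. The short sketch below uses Python’s standard-library parser to check a robots.txt snippet of the kind sites publish to opt out of AI crawling; ‘GPTBot’ is the user-agent string OpenAI documents for its crawler, and, as noted above, honouring these rules is voluntary on the crawler’s side:

```python
import urllib.robotparser

# A robots.txt of the kind many news sites now publish: it disallows
# OpenAI's GPTBot everywhere while leaving other crawlers unrestricted.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("GPTBot", "https://example.com/article"))     # False
print(parser.can_fetch("Googlebot", "https://example.com/article"))  # True
```

A well-behaved crawler runs exactly this check before fetching a page; a malicious one simply skips it, which is why robots.txt is a courtesy rather than an enforcement mechanism.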

Government intervention through legislation is further aiming to enhance IP protection in AI. Legislative bodies play a critical role in introducing safeguards against the unchecked use of AI in accessing and utilising copyrighted material, thereby safeguarding the interests of creative industries in the digital age. For instance, a report from the UK Culture, Media and Sport Committee, composed of MPs from different parties, criticised the policies of the current UK administration, pointing out flaws and expressing concerns. In particular, they objected to the initial proposal to exclude text and data mining from copyright protection, suggesting it reveals a lack of comprehension of the creative industry’s importance to the economy and its employment of millions. 

In September 2023, lawmakers in France introduced a draft bill to regulate how artificial intelligence interacts with copyright laws. The aim was to make sure that AI respects creators’ rights, gets proper permission to use their work, and gives them fair credit. The proposed changes to the French Intellectual Property Code would mean that AI needs permission before using copyrighted material. Additionally, the law suggests a new tax on companies using AI to create works with uncertain origins. 

In the USA, lawmakers and regulatory bodies have also been grappling with the implications of AI for intellectual property rights. In response to the rapid advancements in generative AI and its widespread adoption, the US Copyright Office is reviewing the copyright implications. This action follows requests from Congress, the public, creators, and users of AI technology. Additionally, the US Patent and Trademark Office (USPTO) has examined the patentability of AI inventions and issued guidance for patent examiners. The guidance addresses the complexity of identifying substantial human input in AI-assisted inventions, providing key principles. According to it, merely posing a problem to an AI system is usually not significant, but crafting prompts tailored for specific solutions might demonstrate contribution. While acknowledging an AI-generated outcome as inventive does not automatically confer inventor status, making substantial contributions to it might.


Looking ahead 

Effective regulation of intellectual property rights concerning AI systems and their creations is crucial not only for legal clarity but also for motivating innovation in the market. Given the novelty of AI-generated artistic works, a reevaluation of current approaches to this issue seems unavoidable. Key points could include regulations of IP rights for AI systems and their creations. A likely solution involves implementing a distinct protection system for AI-generated creations, with rights held by either the AI system’s creator or user based on specific criteria. Additionally, further discussions might be needed to address the protection of algorithms, which are currently not covered under the existing EU legislative framework.

TikTok, a threat or a victim of complicated cyber-diplomatic relationships?

The term ‘legal saga’ hardly seems adequate as an idiom for what has been happening, and still happens, to TikTok in the social media landscape ever since its launch, given the complexity and protracted nature of its legal disputes. The law to ban TikTok, signed in the USA by President Joe Biden, is the latest landmark in the social network’s legal file. It is not the first time a social network has been banned or suspended somewhere, but it is a big deal when we speak about the USA and the 170 million of its citizens who use the app daily.

To comprehensively understand the intricate web of sociocultural, economic, and legal issues, and their interconnectedness within this major tech company’s journey, it is essential to delve into its origins and the evolution of these legal battles from the outset.

The beginnings of TikTok’s legal controversies in the wake of the rise of AI

Data governance and concerns over privacy and security

With the explosion of social media’s commercial expansion, TikTok has found itself at the centre of numerous legal controversies. Initially, concerns were primarily focused on data privacy and security, with various governments questioning how TikTok, owned by the Chinese company ByteDance, handled user data. These concerns were amplified as the cutting-edge semiconductor industry became the battleground for chip market dominance and AI technology evolved and integrated more deeply into digital platforms. TikTok has begun using sophisticated AI algorithms that personalise content feeds by collecting extensive user data, leading to fears about potential misuse and data security risks.

Sociocultural impact – content policy on deepfakes, hate speech, and the elections in the digital age 

As AI technology progressed, TikTok faced additional scrutiny over its content moderation practices. The platform’s AI-driven systems for detecting and removing inappropriate content have been criticised for both overreach and underperformance. Namely, TikTok’s algorithms sometimes mistakenly censor harmless content while failing to effectively filter out harmful material, including misinformation and hate speech. Controversial cases, such as the audio deepfake impersonating US President Joe Biden, have caused alarm among politicians in a year with numerous elections. Also, deepfake videos depicting fictitious members of the Le Pen family have recently surfaced online, stirring controversy as France’s far-right parties gear up for the upcoming EU elections. Consequently, these and other deepfakes have spurred legal challenges and regulatory investigations in multiple countries, pushing TikTok to enhance transparency and refine its moderation technologies.

The rise of deepfakes and AI-generated content has further complicated TikTok’s legal landscape. Researchers and lawmakers have expressed concern that AI-generated videos could be used to spread misinformation, especially during sensitive times such as elections or warfare. In response to these challenges, TikTok has implemented measures to label AI-generated content, working with technology like Adobe’s ‘Content Credentials’ to mark such media. Despite these efforts, the potential for AI misuse remains a contentious issue, prompting ongoing debates about the adequacy of TikTok’s measures and the broader implications for digital platforms.

Bans on TikTok around the world since its global rise

TikTok has faced bans and severe restrictions in several countries due to concerns over national security, privacy and data protection, and content (moderation) policy. One of the most prominent instances occurred in India, which first ordered the removal of the application from the Google and Apple stores in 2019, considering it a platform that degrades culture and encourages pornography, exposes minors to paedophiles and explicit, disturbing content, and fosters social stigma and mental health issues among teens. Subsequently, the country imposed a ban on TikTok in June 2020. The Indian government cited data privacy and national security concerns, arguing that the app was transmitting user data to servers outside the country. The ban came during heightened border tensions between India and China, effectively removing TikTok from one of its largest markets and impacting millions of users and creators in the region.

The application faced bans in several other countries as well. In 2020, Pakistan temporarily banned the app, citing concerns over immoral and indecent content. The ban was lifted after TikTok assured Pakistani authorities that it would implement stricter content moderation policies. Similarly, Indonesia banned the app for a brief period in 2018 due to content deemed blasphemous and inappropriate. The ban was lifted after TikTok agreed to remove the offending content and improve its moderation practices. Recently, Kyrgyzstan also banned TikTok following security service recommendations to safeguard children. The decision came amid growing global scrutiny over the social media app’s impact on children’s mental health and data privacy.

Other bans occurred in Australia, where the government banned TikTok from all federal government-owned devices over security concerns, aligning with other ‘Five Eyes’ intelligence-sharing network members. New Zealand imposed a ban on the use of TikTok on devices with access to the parliamentary network amid cybersecurity concerns. 

Along with the listed countries that banned TikTok on government-issued devices due to security risks, Canada and Taiwan banned TikTok and some other Chinese apps on state-owned devices, with a probe into the app launched in December 2022 over suspected illegal operations. Nepal, on the other hand, banned TikTok in November 2023, citing disruption of social harmony and goodwill caused by the misuse of the popular video app. Somalia also banned the application in 2023, citing concerns that such platforms are used by terrorists and immoral groups to circulate disturbing images and false information.


EU bans

More recently, in Europe, TikTok has come under regulatory scrutiny from various governments. The EU has raised concerns about data privacy and compliance with its stringent General Data Protection Regulation (GDPR). The European Commission had already banned TikTok on its corporate phones and highlighted the perceived danger of the platform concerning the GDPR. Furthermore, the European Commission president suggested that banning TikTok in the EU could be an option during a debate in Maastricht featuring parties’ lead candidates for the bloc’s 2024 election. Some EU countries have considered or implemented restrictions on the app over its addictive nature, particularly where children are concerned. Additionally, the app has faced calls for bans from political figures who argue that TikTok could be used for espionage or to influence public opinion, especially during election periods.

The major restrictions TikTok faced in the EU occurred in France in April 2023, when the country banned TikTok from government employee devices due to data security and privacy concerns. The ban was part of a broader measure affecting several social media and gaming apps deemed inappropriate for government networks. Belgium imposed a ban on TikTok in March 2023 for government employees, covering federal employees’ work phones and citing national security and privacy concerns; the ban came as a response to potential data sharing with Chinese authorities, given TikTok’s ownership by ByteDance. In Scotland, the government removed TikTok from Scottish Parliament phones and devices due to similar security concerns.

Although the UK is no longer part of the EU, it is relevant to note that the country also banned TikTok from government devices due to security concerns. The restriction aligned with similar actions taken within the EU. In the same year, Austria decided to join the ‘ban group’, prohibiting the Chinese-owned video-sharing app TikTok from being installed on government employees’ work phones. The ban was implemented as a precautionary measure against potential security risks.

TikTok and the ‘ban or divest’ legal saga in the USA

In the USA, TikTok has faced perhaps the strictest scrutiny and legal challenges over the last five years. During the Trump administration, an executive order was issued in August 2020 seeking to ban the app unless ByteDance sold its US operations to an American company, a precedent for the current situation. Although the ban was temporarily halted by court rulings, the Biden administration has continued to review and address concerns regarding TikTok’s data practices and its potential ties to the Chinese government.

The proposed ban led to a flurry of legal battles and negotiations, with TikTok challenging the executive order in court and exploring potential deals with American companies such as Microsoft, Walmart, and Oracle. These negotiations, however, did not result in a definitive resolution, and the legal actions temporarily halted the enforcement of the ban. The controversy continued into the Biden administration, which has taken a more measured approach but remains concerned about TikTok’s data privacy practices and its potential ties to the Chinese government.

In April 2024, President Biden signed legislation that required ByteDance to divest TikTok or face a US ban. The enactment of the law extended the divestment timeline and aimed to address ongoing national security concerns. TikTok responded by challenging the law in court, arguing that it violates freedom of speech, including the First Amendment, and that no concrete evidence has been provided to substantiate the claims against it.

Despite TikTok’s reassurances and legal challenges, the ‘ban or divest’ dilemma persists, reflecting broader tensions between the USA and China over technology and data security. The outcome of this controversy remains uncertain as TikTok continues to navigate the complex legal landscape and regulatory scrutiny in the USA. The resolution of this issue will have significant implications for the future of TikTok in the US market and for the broader regulatory environment governing international tech companies operating in the USA.

As this legal saga unfolds, it underscores the increasing importance of digital sovereignty and data privacy in the global tech industry. The TikTok case has already set a precedent for how free speech and corporate rights are trampled to protect national security, and the ongoing saga is a testament to the intricate interplay between technology, law, and geopolitics in the modern digital age.

Tech titans clash: Inside the US-China battle for chip market dominance

Competition between the USA and China in chip trade and production is growing on a daily basis to the extent that it is considered a chip war between these two superpowers.

In this analysis, we will review all the facts and steps that Beijing and Washington have taken so far to position themselves better in the chip market. This will help us see the whole picture better and allow us to predict what will come next more easily.

China

China’s first significant step in strengthening its position in the semiconductor technology market happened in 2014 when a broader national security strategy was introduced. The main task of the strategy, active to this day, is to position China as the world’s leading science and technology superpower, which is part of its goal to establish itself as a global superpower. Chinese leaders realised that semiconductor microchips are crucial to emerging civilian and military technologies and for achieving their long-term geopolitical goals and potentially surpassing the USA as the dominant superpower.

China has made significant progress in technological advancements that have outpaced the forecasts from Western intelligence and industry analyses. For example, the military-civil fusion programme aims to integrate civilian technologies with military capabilities and to blur the lines between civilian and military applications.

Part of the broader national security strategy is the drive to reduce dependence on Western technologies and to reach the point where China can rely on itself in critical sectors like semiconductors. That is precisely why Xi Jinping, the Chinese president, called for increased technological autonomy to counter Western influence and strengthen China’s global position. China has also invested heavily in its semiconductor industry while setting ambitious targets to increase chip self-reliance. However, some targets, such as reaching 70% self-reliance by 2025, are proving challenging.

However, those efforts have been bolstered even more by the constant pressure of the USA in the form of increasing trade restrictions and policies that limit Chinese technological investments and exports. Semiconductor microchips are a focal point in Beijing’s economic security strategies. As expected, the conflict over microchips with the USA did not go without countermeasures. For example, China accelerated its efforts to remove foreign-manufactured chips, especially those made in the USA, and set a deadline for domestic telecommunications companies to do so by 2027. That move could particularly hit American chipmakers such as Intel and AMD and inflict significant financial damage on the US economy.

China also found a way to bypass Washington’s prohibition of Nvidia’s high-end AI processor sales to China. Instead of buying directly from Nvidia, Chinese universities and research institutions acquired the processors through resellers. There was no lack of open criticism either, as officials in Beijing criticised the USA for tightening trade rules. They emphasised that this move raises barriers and introduces uncertainty to the global chip sector. China is showing clear signs that they will not give up the fight, but it all depends on the speed of their technological progress.

US

As for the USA, when President Biden took office in 2021, concerns about China’s accelerating technological progress were already very much present. Those concerns were mainly focused on the field of AI. Many feared that China could overtake the US in semiconductor technology, which would also threaten the dominance of the West over the East in technology.

This is precisely why the EU and the USA began placing economic security in the foreground, marking a turn from past policies that promoted globalisation and trade liberalisation. This shift was also triggered by reports alleging that China had acquired Western technologies through joint ventures and projects and caused disruptions in supply chains for crucial materials and equipment.

However, the most significant turning point in American politics regarding semiconductor microchip manufacturing was the introduction of the CHIPS Act in August 2022. The primary purpose of the CHIPS Act was to boost domestic semiconductor manufacturing and protect it from potential sabotage. It also aimed to reduce US dependency on imports, especially from China.

Furthermore, Washington implemented a series of sanctions and export controls to protect its intellectual property and national security interests. The sanctions included restrictions on exporting to China the equipment required to produce advanced chips, with an emphasis on chips below the 16/14 nm node.

The next step the USA took was to strengthen some of its alliances. They did this primarily with the Netherlands and Japan, which enhanced export controls on high-performance semiconductor manufacturing equipment. Also, to further isolate China, the White House proposed the Chip 4 Alliance with Japan, South Korea, and Taiwan, aiming to bolster the resilience of East Asia’s semiconductor supply chain.

Taiwan plays a vital role in this US-China conflict because it produces a significant share of the world’s most advanced chips. Its technological leadership, supplier diversity, and resilience made it a cornerstone in efforts to strengthen the semiconductor supply chain. Both Beijing and Washington want to increase their influence in Taiwan to better take advantage of the breadth of Taiwan’s chip production.

What can we expect?

The rivalry between China and the USA in this field started during Donald Trump’s presidency and has continued under President Joe Biden. It reflects a rare bipartisan consensus in the US Congress to challenge China’s technological ambition. On the other hand, for China, the position of a global leader is a matter of national pride, which is omnipresent in President Xi Jinping’s leadership.

The expanded tech war manifests in various arenas, with the most notable ones being chipmaking and green technology. Chipmaking is crucial for information processing, while green technology is becoming increasingly important for the global economy. Both China and the USA are vying for dominance in these sectors.

The Economist stated in its article titled ‘The tech wars are about to enter a fiery new phase’ that regardless of the outcome of future elections in the USA, the next president is likely to continue challenging China’s technological advancements. This echoes the joint effort in Washington to confront China’s growing influence in advanced technologies.

The Economist added that heightened tensions and a more aggressive US approach under a future administration are also possible. This could involve expanding export controls and sanctions beyond companies like Huawei to other Chinese tech firms. Such actions might provoke retaliatory measures from China, further escalating the conflict.

The Taiwanese chipmaker TSMC, which has significant investments in China, could be pressured by the US government to limit its operations there. That could also happen with other foreign companies that do business in China and get caught in the crossfire of this conflict.

Despite winning over some allies, the USA may struggle to bring other partners on board, particularly in Europe and Asia. Washington’s approach to technology and China could affect its relationships with some allies, since differing priorities could strain alliances and potentially complicate efforts to form a united front against China’s technological ambitions.

This clash between the two great powers will undoubtedly leave its mark on the world economy. The International Monetary Fund (IMF) estimates that the elimination of high-tech trade between the two countries could cost as much as USD 1 trillion annually, equivalent to 1.2% of global GDP. It is in the general interest to resolve this conflict as soon as possible, although everything indicates that will not happen any time soon.
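As a quick sanity check on the figures quoted above (a rough back-of-the-envelope calculation, not part of the IMF report itself), a USD 1 trillion annual cost equal to 1.2% of global GDP implies global output of roughly USD 83 trillion, which is broadly consistent with recent global GDP estimates:

```python
# Back-of-the-envelope check of the IMF figure: if USD 1 trillion
# corresponds to 1.2% of global GDP, what global GDP does that imply?
cost = 1.0e12    # USD 1 trillion annual loss
share = 0.012    # 1.2% of global GDP

implied_gdp = cost / share
print(f"Implied global GDP: USD {implied_gdp / 1e12:.1f} trillion")  # ~83.3 trillion
```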

UN AI resolution a significant global effort to harness AI for sustainable development 

On 21 March, the United Nations General Assembly (UNGA) overwhelmingly passed the first global resolution on AI. Member states are urged to protect human rights and personal data and to monitor AI for potential harms, so the technology can benefit all.

The unanimous adoption of the US-led resolution on the promotion of ‘safe, secure, and trustworthy artificial intelligence systems that will also benefit sustainable development for all’ is a historic global effort to ensure the ethical and sustainable use of AI. While nonbinding, the draft resolution was supported by more than 120 states, including China, and endorsed without a vote by all 193 UN member states.

Vice President Kamala Harris praised the agreement, stating that this ‘resolution, initiated by the USA and co-sponsored by more than 100 nations, is a historic step towards establishing clear international norms for AI and fostering safe, secure, and trustworthy AI systems’.

To unpack the significance of this resolution and its potential impact on AI policies, we will look at five dimensions: policy and regulation in the global context, ethical design, data privacy and protection, transparency and trust, and AI for sustainable development.

Global policy and regulation

EU policymakers have paved the way with the recently approved AI Act, the first comprehensive legislation covering the new technology. The Council of Europe (CoE), a 46-member human rights body, has also agreed on a draft AI Treaty to protect human rights, democracy, and the rule of law.

The United States wants to play a leadership role in shaping global AI regulations. Last October, President Biden unveiled a landmark Executive Order on ‘Safe, Secure, and Trustworthy AI’ and in March, VP Harris announced a new policy of the White House Office of Management and Budget for federal agencies’ use of AI.

Other countries and regions are also developing their own frameworks, guidelines, strategies, and policies. For instance, at the African Union (AU) level, its Development Agency (AUDA) released in March a White Paper on a pan-African AI policy and a continental roadmap.

The UN resolution acknowledges that multiple initiatives may lead the way in the right direction and further encourages member states, international organisations, and other stakeholders to assist developing countries in their national processes.

Ethical design

The text highlights the need for ethical design in all AI-based decision-making systems (6.b, p5/8). AI systems should be designed, developed, and operated within the frameworks of national, regional, and international law to minimise risks and liabilities and to preserve human rights and fundamental freedoms (5., p5/8). A collaborative approach combining AI, ethics, law, philosophy, and the social sciences can help craft comprehensive ethical frameworks and standards to govern the design, deployment, and use of AI-powered decision-making tools. Ethical design is thus a critical aspect of promoting safe, secure, and trustworthy AI: the resolution urges member states and other stakeholders to integrate ethical considerations into the design, development, deployment, and use of AI to safeguard human rights and fundamental freedoms, including the rights to life, privacy, and freedom of expression.

Introducing the draft, Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, added that ‘AI should be created and deployed through the lens of humanity and dignity, safety and security, human rights, and fundamental freedoms’.

Data privacy and protection

The UN resolution addresses data privacy safeguards to guarantee safe AI development, especially when the data used includes sensitive personal information such as health, biometric, or financial data. Member states and relevant stakeholders are encouraged to monitor AI systems for risks and to assess their impact on data security and personal data protection throughout their life cycle (6.e, p5/8). Privacy impact assessments and detailed product testing during development are suggested as mechanisms to protect data and preserve our fundamental privacy rights. Additionally, transparency and reporting obligations in accordance with all applicable laws contribute to safeguarding privacy and protecting personal data (6.j, p6/8).

Transparency and trust

The document highlights the value of transparency and consent in AI systems. Transparency, inclusivity, and fairness help ensure that AI systems account for our diverse needs, preferences, and emotions.

To preserve fundamental human rights, algorithms that affect our lives have to be developed in a way that does not cause any harm to us or the environment. This includes providing notice and explanation, promoting human oversight and ensuring that automated decisions are reviewed. When necessary, human decision-making alternatives should be accessible, as well as effective redress.

Transparent, interpretable, predictable, and explainable AI systems facilitate reliability and accountability, allowing end-users to better understand, accept, and trust outcomes and decisions that impact them. 

AI for sustainable development

The resolution confirms that safe, secure, and trustworthy AI systems can accelerate progress toward achieving all 17 sustainable development goals (SDGs) in all three dimensions – economic, social, and environmental – in a balanced way. 

AI technologies can be a driving force to help achieve the SDGs by augmenting human intelligence and capabilities, improving efficiency, and reducing environmental impact. For instance, AI models can predict and unveil errors, plan more effectively, and boost renewable energy efficiency. AI can also streamline transportation and traffic management and anticipate energy needs and production. Any AI system designed, developed, deployed, and used without proper safeguards engenders potential threats that could hamper progress toward the 2030 Agenda and its SDGs.

The aim is to reduce the digital divide between wealthy industrialised nations and developing countries, and within countries, to give all nations a proper representation at the table of discussions on AI governance for sustainable development. The intention is also to ensure that less developed nations have access to the needed technology, infrastructure, and capabilities to reap the promised gains of AI, such as disease detection, flood forecasting, effective capacity building, and a workforce upskilled for the future.

The UN resolution is a remarkable step in global AI policy because it addresses many of the key drivers for AI to play a safe and effective role in sustainable development that will benefit all. It also recognises that innovation and regulation, far from being mutually exclusive, complement and reinforce one another.

By following up on the current consensus, implementing these recommendations, and aligning them with other regional and global initiatives, governments, public and private sectors, and other involved stakeholders can harness AI’s potential while minimising its risks.

The road ahead for global AI governance

South Korea will co-host the second AI Safety Summit with the UK as a virtual conference in May, and six months later France will hold the next in-person global gathering, after Prime Minister Rishi Sunak led the inaugural AI Safety Summit at Bletchley Park last November.

By September 2024 and the Summit of the Future in New York, more important developments in global AI policy and governance can be expected.

One is the work in progress from the UN ‘High-Level Advisory Body on AI’, which will lead to a final report. This will progress in parallel with and feed into the long-awaited Global Digital Compact process. 

Another one will be the formal adoption of the CoE ‘Convention on AI, Human Rights, Democracy, and the Rule of Law’ and its subsequent ratification process open to member and non-member states. 

On the EU side, the European Commission has started staffing and structuring the newly established AI Office. The EU AI Act was adopted by the EU Parliament, and it awaits the EU Council’s formal approval. The AI Act will enter into force 20 days after it is published in the Official Journal, with phased implementation and enforcement: after 6 months, prohibitions on unacceptable-risk practices take effect; after 12 months, obligations for providers of general-purpose AI models apply and member states must designate their relevant national authorities; and after 24 months, the legislation becomes fully applicable.

In Africa, the African Union Commission has begun holding a series of online consultations with diverse stakeholders across the continent to gather input and inform the development of an Africa-wide AI policy, with a focus on ‘building the capabilities of AU member states in AI skills, research and development, data availability, infrastructure, governance and private sector-led innovation’.

The rapid advance of AI technologies poses new challenges for legislators around the world since existing rules struggle to keep up with the acceleration of technical progress. This demonstrates the critical need for regulatory frameworks that can adapt to AI’s evolving landscape.

The governance of AI systems requires ongoing discussions on appropriate approaches that are agile, adaptable, interoperable, inclusive, and responsive to the needs of both developed and developing countries. The UNGA resolution opens the door to global cooperation on a safe, secure, and trustworthy AI for sustainable development that benefits all.

Digital dominance in the 2024 elections

As a historic number of voters heads to the polls, determining the future course of more than 60 nations and the EU in the years ahead, all eyes are on digital, especially AI.

Digital technologies, including AI, have become integral to every stage of the electoral process, from the inception of campaigns to polling stations, a phenomenon observed for several years. What distinguishes the current landscape is their unprecedented scale and impact. Generative AI, a type of AI enabling users to quickly generate new content, including audio, video, and text, made a significant breakthrough in 2023, reaching millions of users. With its ability to quickly produce vast amounts of content, generative AI contributes to the scale of misinformation by generating false and deceptive narratives at an unprecedented pace. The multitude of elections worldwide, pivotal in shaping the future of certain states, have directed intense focus on synthetically generated content, given its potential to sway election outcomes.

Political campaigns have experienced the emergence of easily produced deepfakes, stirring worries about information credibility and setting off alarms among politicians who called on Big Tech for more robust safeguards.

Big Tech’s response 

Key players in generative AI, including OpenAI and Microsoft, joined platforms like Meta Platforms, TikTok, and X (formerly Twitter) in the battle against harmful content at the Munich Security Conference. Signatories of the tech accord committed to working together to create tools for identifying targeted content, raising public awareness through educational campaigns, and taking action against inappropriate content on their platforms. To address this challenge, potential technologies being considered include watermarking or embedding metadata to verify the origin of AI-generated content, focusing primarily on photos, videos, and audio.
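To make the metadata approach mentioned above concrete, here is a minimal, hypothetical sketch of how provenance metadata could work in principle: a generator binds a cryptographic hash of the content to a claimed origin and signs the record, and a verifier later checks both the signature and the hash. All names are illustrative; real schemes under discussion (such as C2PA-style content credentials) are far more elaborate and use public-key certificates rather than a shared secret.

```python
import hashlib
import hmac
import json

# Illustrative only: a shared secret stands in for a provider's signing key.
SECRET_KEY = b"demo-signing-key"

def attach_provenance(content: bytes, generator: str) -> dict:
    """Return a metadata record binding the content to its claimed origin."""
    digest = hashlib.sha256(content).hexdigest()
    payload = json.dumps({"sha256": digest, "generator": generator}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record is authentic and still matches the content."""
    expected = hmac.new(SECRET_KEY, record["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, record["signature"]):
        return False  # metadata was forged or altered
    claimed = json.loads(record["payload"])
    return claimed["sha256"] == hashlib.sha256(content).hexdigest()

image = b"...synthetic image bytes..."
record = attach_provenance(image, generator="example-model-v1")
print(verify_provenance(image, record))              # True: content and record intact
print(verify_provenance(b"tampered bytes", record))  # False: hash no longer matches
```

The weakness the sketch also illustrates is that such metadata travels alongside the file: if it is stripped, nothing remains to verify, which is why platforms are exploring watermarks embedded in the content itself as a complement.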

After the European Commissioner for Internal Market Thierry Breton urged Big Tech to assist European endeavours in combating election misinformation, tech firms promptly acted in response. 

Back in February, TikTok announced that it would launch in-app election centres in local languages for EU member states to prevent misinformation from spreading ahead of the election year. 

Meta intends to launch an Elections Operations Center to detect and counter threats like misinformation and misuse of generative AI in real time. Google collaborates with a European fact-checking network on a unique verification database for the upcoming elections. Previously, Google announced the launch of an anti-misinformation campaign in several EU member states featuring ‘pre-bunking’ techniques to increase users’ capacity to spot misinformation. 

Tech companies are, by and large, partnering with individual governments’ efforts to tackle the spread of election-related misinformation. Google is teaming up with India’s Election Commission to provide voting guidance via Google Search and YouTube for the upcoming elections. It is also partnering with Shakti, the India Election Fact-Checking Collective, to combat deepfakes and misinformation, offering training and resources throughout the election period. 

That said, some remain dissatisfied with the ongoing efforts by tech companies to mitigate misinformation. Over 200 advocacy groups call on tech giants like Google, Meta, Reddit, TikTok, and X to take a stronger stance on AI-fuelled misinformation before global elections. They claim that many of the largest social media companies have scaled back necessary interventions such as ‘content moderation, civil-society oversight tools and trust and safety’, making platforms ‘less prepared to protect users and democracy in 2024’. Among other requests, the companies are urged to disclose AI-generated content and prohibit deepfakes in political ads, promote factual content algorithmically, apply uniform moderation standards to all accounts, and improve transparency through regular reporting on enforcement practices and disclosure of AI tools and data they are trained on.

EU to walk the talk?

Given the far-reaching impact of its regulations, the EU has assumed the role of de facto regulator of digital issues. Its policies often set precedents that influence digital governance worldwide, positioning the EU as a key player in shaping the global digital landscape.

European Commissioner for Internal Market Thierry Breton

The EU has been proactive in tackling online misinformation through a range of initiatives. These include implementing regulations like the Digital Services Act (DSA), which holds online platforms accountable for combating fake content. The EU has also promoted media literacy programmes and established the European Digital Media Observatory to monitor and counter misinformation online. With European Parliament elections approaching and the rising prevalence of AI-generated misinformation, leaders are ramping up efforts to safeguard democratic integrity against online threats.

Following the Parliament’s adoption of rules on online political advertising, which require clear labelling and prohibit the sponsoring of ads from outside the EU in the three months before an election, the European Commission issued guidelines for Very Large Online Platforms and Search Engines to protect the integrity of elections from online threats. 

The new guidelines cover various election phases, emphasising internal reinforcement, tailored risk mitigation, and collaboration with authorities and civil society. The proposed measures include establishing internal teams, conducting elections-specific risk assessments, adopting specific mitigation measures linked to generative AI and collaborating with EU and national entities to combat disinformation and cybersecurity threats. The platforms are urged to adopt incident response mechanisms during elections, followed by post-election evaluations to gauge effectiveness.

European political parties have recently signed a code of conduct, brokered by the Commission, intended to maintain the integrity of the upcoming elections for the Parliament. The signatories pledge to ensure transparency by labelling AI-generated content and to abstain from producing or disseminating misinformation. While this introduces an additional safeguard to the electoral campaign, the responsibility for implementation and monitoring falls on the European umbrella parties rather than on the national parties conducting the campaign on the ground.

What to expect

The significance of the 2024 elections extends beyond selecting new world leaders. They serve as a pivotal moment to assess the profound influence of digital on democratic processes, putting digital platforms into the spotlight. The readiness of tech giants to uphold democratic values in the digital age and respond to increasing demands for accountability will be tested. 

Likewise, the European Parliament elections will test the EU’s ability to lead by example in regulating the digital landscape, particularly in combating misinformation. The effectiveness of the EU initiatives will be gauged, shedding light on whether collaborative efforts can establish effective measures to safeguard democratic integrity in the digital age.

(Jail) time ahead for the cryptocurrency industry 

The cryptocurrency and digital asset industry has once again been the focus of the worldwide media. This time, it is not about the promises of an inclusive future of finance but about a number of court cases that were initiated or came to a close in the past months. 


These developments can be seen as a desire of regulators worldwide to establish legal practice around the new class of digital assets (or cryptoassets, as they are named in regulations worldwide) and to send a message to the ever-growing base of consumers of such products that they will be protected while entering this new arena. A particular push is seen in the United States, where two of the world’s biggest cryptocurrency exchanges, Binance and Kraken, have been charged with violating anti-money-laundering rules. In both cases, regulators highlighted the lack of fully implemented Know-Your-Customer (KYC) procedures as a primary concern. In the case of the world’s number one cryptocurrency exchange, Binance, the US Justice Department argued that KYC failures enabled money laundering and the evasion of international sanctions. Binance and its CEO, Zhao Changpeng, pleaded guilty to charges filed by the US Justice Department and the US Securities and Exchange Commission (SEC), agreeing to a record USD 4.2 billion fine. In the most recent case, the cryptocurrency exchange KuCoin has been hit with the same anti-money-laundering charges and is facing a similar outcome. As for Kraken, the SEC is seeking a total ban in the USA because the exchange failed to register within the regulatory framework.

A couple of significant cases from the past have reached their final acts in recent months. The cases of Celsius, Terra, and, most prominently, the FTX exchange moved forward from a standstill, and in the case of FTX, the trial ended with the sentencing of former FTX CEO Sam Bankman-Fried. The sentence was delivered in the court case related to the collapse of the FTX exchange and the Alameda Research trading firm in November 2022. The former FTX CEO was sentenced to 25 years in prison, six months after being convicted of fraud. In addition to the sentence, Bankman-Fried was ordered to pay USD 11 billion in reparations and damages to FTX users and investors. Another crypto-company CEO, Do Kwon, was extradited from Montenegro to prosecutors in South Korea for the trial of the Terra cryptocurrency company. Kwon had been hiding from law enforcement for a whole year before finally being arrested on the tarmac of Podgorica airport in Montenegro. He also faces a lengthy jail sentence if the allegations in the indictment are upheld at trial.

‘Cryptocurrency King’ Do Kwon with a group of Montenegro police officers. Photo by: Radio Free Europe (RFE)

In another long-lasting legal battle before the US courts, a case against one of the biggest cryptocurrency companies, Ripple Labs, is nearing its end. Prosecutors are seeking another major fine, of USD 2 billion, which, according to their statement, would send a message to the industry about consumer protection. What exactly is that message?


‘Countries should take the issue seriously and strengthen regulation, as virtual assets tend to flow towards less regulated jurisdictions,’ Financial Action Task Force (FATF) president T. Raja Kumar pointed out in an interview, in which he acknowledged that only one-third of the world has implemented some form of cryptocurrency regulation.

Stricter regulation does appear to be the trend for crypto companies. As a whole, the cryptocurrency industry has seen a significant drop in the value received by illicit cryptocurrency addresses, and the share of all crypto transaction volume associated with illicit activity has also decreased. This is stressed in the annual report by Chainalysis, which provides blockchain forensics for governments worldwide. So, the industry is going in the right direction.

OEWG’s seventh substantive session: the highlights

The OEWG held its 7th substantive session on 4-8 March. With 18 months until the end of the group’s mandate, a sense of urgency can be felt in the discussions, particularly on the mechanism that will follow the OEWG.

Some of the main takeaways from this session are:

  • AI is increasingly prevalent in the discussion on threats, with ransomware and election interference rounding out the top three threats.
  • There is still no agreement on whether new norms are needed.
  • Agreement is also elusive on whether and how international law and international humanitarian law apply to cyberspace.
  • The operationalisation of the Points of Contact (POC) directory, the most important confidence-building measure (CBM) to result from the OEWG, is in full swing ahead of its launch on 9 May.
  • Bolstering capacity building efforts and funding for them are necessary actions.
  • The mechanism for regular institutional dialogue on ICT security must be single-track and consensus-based. Whether it will take the shape of the Programme of Action (PoA) or another OEWG is still up in the air.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated OEWG process page.

UN OEWG
This page provides detailed and real-time coverage on cybersecurity, peace and security negotiations at UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.
Threats: AI, elections and ransomware at the forefront

The widespread availability of AI tools for different purposes led delegations to focus on AI-enabled threats. AI tools may exacerbate malicious cyber activity, for example, by speeding up the search for ICT vulnerabilities, aiding malware development, and boosting social engineering and phishing tactics. 

France, the Netherlands, and Australia spoke about the security of AI itself, pointing to the vulnerability of algorithms and platforms and the risk of poisoning models. 

2024 is a year of elections at different levels in many states. Large language models (LLMs) and generative AI accelerate the creation of fakes, the proliferation of disinformation, and the manipulation of public opinion, especially during significant political and social processes. Belgium, Italy, Germany, Canada, and Denmark expressed concern that cyber operations are used to interfere in democratic processes. Malicious use of cyber capabilities can influence political outcomes and threaten the process by targeting voters, politicians, political parties, and election infrastructure, thus undermining trust in democratic institutions. 

Another prevalent threat highlighted by the delegations was ransomware. Cybercriminals target critical infrastructure and life-sustaining systems, but states noted that the hardest-hit sector is healthcare. Belgium stressed that such attacks eventually lead to human casualties because of the disruption to medical assistance. The USA and Greece highlighted the increase in ransomware attacks because some states allow criminal actors to operate from their territories with impunity. AI also gives malicious threat actors considerable leverage, providing unsophisticated ransomware-as-a-service operators with a new degree of possibilities and allowing rogue states to exploit the technology for offensive cyber activities. 

Ransomware attacks go hand in hand with IP theft, data breaches, violations of privacy, and cryptocurrency theft. The Republic of Korea, Japan, the Czech Republic, Mexico, Australia, and Kenya connected such heists with the proliferation of weapons of mass destruction (WMDs). 

Delegations expressed concerns about a growing commercial market of cyber intrusion capabilities, 0-day vulnerabilities, and hacking-as-a-service. The UK, Belgium, Australia, and Cuba considered this market capable of increasing instability in cyberspace. The Pall Mall process, launched by France and the UK to address the proliferation of commercially available cyber intrusion tools, was endorsed by Switzerland and Germany.

The growing IoT landscape expands the attack surface, Mauritius, India, and Kazakhstan noted. Quantum computing may break existing encryption methods, giving strategic advantages to those who control this technology, Brazil added. It could also be used to develop armaments, other military equipment, and offensive operations.

Russia once again drew attention to the use of information space as an arena of geopolitical confrontation and militarisation of ICTs. Russia, China, and Iran have also highlighted certain states’ monopolisation of the ICT market and internet governance as threats to cyber stability. Syria and Iran pointed to practices of technological embargo and politicised ICT supply chain issues that weaken the cyber resilience of States and impose barriers to trade and tech development.

Norms: new norms vs. norms’ implementation

Reflections from several delegations highlighted a binary dilemma: should there be new norms or not?

Iran, China and Russia highlighted once again that new norms are needed. Russia also suggested four new norms: to strengthen the sovereignty, territorial integrity and independence of states; to affirm the inadmissibility of unsubstantiated accusations against states; and to promote the settlement of interstate conflicts through negotiations, mediation, reconciliation or other peaceful means. Brazil noted that additional norms will become necessary as technology evolves and stressed that any efforts to develop new norms must occur within the UN OEWG. South Africa expressed that they could support a new norm to protect against AI-powered cyber operations and attacks on AI systems. Vietnam strongly supported the development of technical standards regarding electronic evidence to facilitate the verification of the origins of cybersecurity incidents.

However, some delegations insisted that implementing already existing norms comes before elaborating new ones. Bangladesh urged states to collaborate more to translate norms into concrete actions and to focus on providing guidance on their interpretation and implementation. The UK, in particular, suggested four steps to improve the implementation of the norms by addressing the growing commercial market for intrusive ICT capabilities. The delegate called on states to prevent commercially available cyber intrusion capabilities from being used irresponsibly, to ensure that governments take the appropriate regulatory steps within their domestic jurisdictions, to conduct procurement responsibly, and to use cyber capabilities responsibly and lawfully.

Several delegations mentioned the accountability and due diligence issues in implementing the agreed norms. New Zealand, in particular, shared that the OEWG could usefully examine what to do when agreed norms are willfully ignored. France mentioned that it continues its work on the due diligence norm C with other countries. Italy called for dedicated efforts to set up accountability mechanisms to ‘increase mutual responsibility among states’ and proposed national measures to detect, defend and respond to and recover from ICT incidents, which may include the establishment at the national level of a centre or a responsible agency that leads on ICT matters.

The Chair issued a draft of the norms implementation checklist before the start of the session. According to Egypt, this checklist must be simplified because it includes duplicate measures and detailed actions beyond states’ capabilities. The checklist, Egypt continued, should acknowledge technological gaps among states and their diverse national legal systems, thus respecting regional specifics. Many delegations strongly supported the checklist and made recommendations. For example, the Netherlands suggested that the checklist include the consensus notion that state practices, such as mass arbitrary or unlawful surveillance, may negatively impact human rights, particularly the right to privacy.

UN OEWG Chair publishes discussion paper on norms implementation checklist
The checklist comprises voluntary, practical, and actionable measures collected from different relevant sources.

Some delegations addressed the Chair’s questions on implementing critical infrastructure protection (CIP) and supply chain security-related norms. The EU reminded delegations that it is necessary to look into existing cybersecurity best practices in this regard and gave the example of the Geneva Manual as a multistakeholder initiative to clarify the roles and responsibilities of non-state actors in implementing the norms. Italy encouraged the adoption of specific frameworks for assessing the supply chain security of ICT products based on guidelines, best practices, and international standards. Practically, this could include establishing national evaluation and security certification centres for cyber certification schemes. The Republic of Korea suggested building institutional and normative foundations to provide security guidelines starting from the development stage of software products, which can be used in the public sector to protect public services and critical infrastructure from being targeted by cyberattacks. Japan suggested adopting the Software Bill of Materials (SBOM) and discussing how ICT manufacturers can achieve security by design.
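To make Japan’s SBOM suggestion concrete: an SBOM is a machine-readable inventory of the software components inside an ICT product, which lets operators cross-check their deployments against vulnerability advisories. The sketch below assembles a tiny inventory in the spirit of the CycloneDX JSON format; the product and component names and versions are invented for illustration, not taken from any delegation’s statement.

```python
# Minimal sketch of a Software Bill of Materials (SBOM): a machine-readable
# inventory of a product's software components, in the spirit of the
# CycloneDX JSON format. All names and versions here are illustrative only.
import json

def make_sbom(product, components):
    """Build a minimal CycloneDX-style SBOM document as a Python dict."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "metadata": {"component": {"type": "application", "name": product}},
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

def affected_components(sbom, advisory):
    """Cross-check an SBOM against a vulnerability advisory: return the
    names of components whose (name, version) pairs the advisory lists."""
    vulnerable = set(advisory)  # e.g. {("libssl", "1.0.2")}
    return [
        c["name"]
        for c in sbom["components"]
        if (c["name"], c["version"]) in vulnerable
    ]

sbom = make_sbom("meter-firmware", [("libssl", "1.0.2"), ("zlib", "1.3.1")])
print(json.dumps(sbom, indent=2))
print(affected_components(sbom, [("libssl", "1.0.2")]))  # -> ['libssl']
```

This is the practical payoff delegations point to: when a new vulnerability is disclosed, an operator with SBOMs can immediately list which deployed products contain the affected component instead of auditing each product from scratch.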

International law: applicability to use of ICTs in cyberspace

The member states held their previous positions on the applicability of international law. Most states confirmed the applicability of international law to cyberspace, including the UN Charter, international human rights law and international humanitarian law. Russia and Iran, however, stated that existing international law does not apply to cyberspace, while Syria noted that how international law applies in cyberspace is unclear. China and Russia nevertheless pointed out that the principles of international law apply. These states, as well as Pakistan, Burkina Faso, and Belarus, support the development of a new legally binding treaty.

Of note was the contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay that reflected on the continued engagement of a cross-regional group of 13 states based on a working paper from July 2023. The contribution highlighted the emerging convergence of views that: 

  • states must respect and protect human rights and fundamental freedoms, both online and offline, in accordance with their respective obligations; 
  • states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused; and
  • international humanitarian law applies to cyber activities in situations of armed conflict, including, where applicable, the established international legal principles of humanity, necessity, proportionality and distinction.

Many states echoed the Colombian statement, including Germany, Australia, Czechia, Switzerland, Italy, Canada, the USA, the UK, Spain and others.

New discussion point

The contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay highlighted that states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused, a new element in the discussions within the OEWG substantive sessions. Thailand, Uganda, and the Netherlands have also specifically addressed the need for reparation for the injury caused.

The discussions have also progressed on the applicability of international humanitarian law (IHL) to the use of ICT in situations of armed conflicts. 

Senegal presented a working paper on the application of international humanitarian law on behalf of Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Sweden, and Switzerland. This working paper shows convergence on the applicability of IHL in situations of armed conflict. It delves deeper into the principles and rules of IHL governing the use of ICTs, notably military necessity, humanity, distinction, and proportionality. Other states welcomed the working paper, including Italy, Australia, South Africa, Austria, the United Kingdom, the USA, France, Spain, Uruguay and others.

On the other hand, Sri Lanka, Pakistan, and China have called for additional efforts to develop an understanding of the applicability of IHL and its gaps.

In its statement on IHL, the ICRC pointed out the differences between the definitions of armed attack under the UN Charter and under IHL, the need to discuss how IHL limits cyber operations, and the need to interpret the existing rules of IHL so as not to undermine its protective function in the ICT environment.

The International Committee of the Red Cross: New rules protecting from consequences of cyberattacks may be needed
The ICRC emphasised the urgent need for deeper discussions on the application of international humanitarian law to the use of ICTs in armed conflict, underscoring the importance of upholding humanitarian principles amidst evolving means of warfare.

The discussion on international law greatly benefited from the recent submission to the OEWG by the Peace and Security Council of the African Union on the Application of international law in the use of ICTs in cyberspace (Common African Position). Reflecting the views of 55 states, it represents a significant contribution to the work of the OEWG and an example of valuable input by regional forums. This comprehensive position paper addresses issues of applicability of international law in cyberspace, including human rights and IHL, principles of sovereignty, due diligence, prohibition of intervention in the affairs of states in cyberspace, peaceful settlement of disputes, prohibition of the threat or use of force in cyberspace, rules of attribution, and capacity building and international cooperation. The majority of the delegations welcomed the Common African Position.

African Union submits position on international law to OEWG
The position was adopted by the Peace and Security Council of the African Union on 31 January 2024.

The Chair also pointed out that, to date, 23 states have shared their national positions, and many others are preparing their positions on the applicability of international law in cyberspace.

Most states supported scenario-based exercises to enhance the understanding between states on the applicability of international law. They would like to have the opportunity to conduct such exercises and have a more in-depth discussion on international law at the May intersessional meeting. China firmly opposed this.

Several states, such as Japan, Canada, Czechia, the EU, Ireland and others, would like to see future discussions on international law embedded in the Programme of Action (PoA). Read more about the talks on the PoA below.

CBMs: operationalising the POC directory

The official launch of the Points of Contact (POC) directory is scheduled for 9 May, so discussion revolved around its operationalisation. At the time of the session, 25 countries had appointed their POCs. Most delegations reiterated their support for the directory and either confirmed their appointments or that the process was ongoing. Some states nevertheless suggested adjustments to the POC directory. Ghana, Canada, and Colombia commented that communication protocols may be helpful, while Czechia and Switzerland recommended that POCs not be overburdened with such procedures yet. Argentina also brought up the potential participation of non-state actors in the POC directory.

To further facilitate communication, several states highlighted the usefulness of building a common terminology (Kazakhstan, Mauritius, Iran, Pakistan), while Brazil mentioned that Mercosur was effectively working on this kind of taxonomy.

While Czechia, Switzerland and Japan underlined the necessity to focus first on the implementation and consolidation of existing CBMs, many states nevertheless favoured additional CBMs: protection of critical infrastructure (Switzerland, Colombia, Malaysia, Pakistan, Fiji, Netherlands, Singapore and Czechia) as well as coordinated vulnerability disclosure (Singapore, Netherlands, Switzerland, Mauritius, Colombia, Malaysia and Czechia). The integration of multistakeholders into the development of CBMs was also considered by some states and organisations (the EU, Chile, Albania, Argentina), while adding public-private partnerships as a CBM received broad support from Kazakhstan, Qatar, Switzerland, South Africa, Mauritius, Colombia, Malaysia, Pakistan, South Korea, Netherlands, and Singapore.

All states recalled and praised the significance of regional and subregional cooperation in the implementation of CBMs regionally and how it can contribute to the development of CBMs globally. In that respect, most states highlighted enriching initiatives at a cross-regional level, such as a recent side event at the German House. Work within the OAS, the OSCE, the ASEAN, the Pacific region, and the African Union was underlined. Interventions were enriched by explicit sharing of national experiences, most notably Kazakhstan’s and France’s recent use of the OSCE community portal for POCs. Finally, states highlighted the link between CBMs and capacity building, with Ghana, Djibouti, and Fiji sharing their national experiences in closing the digital divide. In that vein, Argentina, Iran, Pakistan, Djibouti, Botswana, Fiji, Chile, Thailand, Ethiopia, Mauritius, and Colombia support creating a specific CBM on capacity building.

Capacity building: bolstering efforts and funding

Several noteworthy proposals were put forth by different countries, each aiming to bolster capacity building efforts. The Philippines introduced a comprehensive ‘Needs-Based Capacity Building Catalogue,’ designed to help member states identify their specific capacity needs, connect with relevant providers, and access application guidance for capacity building programmes.

A scheme of the Philippine proposal. Source: UNODA.

Kuwait proposed an expansion of the Global Cybersecurity Cooperation Portal (GCSE), suggesting adding a module dedicated to housing both established and proposed norms, thus facilitating collaboration among member states and tracking the implementation progress of these norms. India‘s CERT expressed willingness to develop an awareness booklet on ICT and best practices with the contribution of other delegations, intending to post it on the proposed GCSE for widespread dissemination.

The crucial issue of funding for capacity building received substantial attention during the discussions, with multiple delegations bringing to the fore the need for additional resources to sustainably support such efforts. Uganda advocated establishing a UN voluntary fund targeting countries and regions most in need. In contrast, others stressed the imperative of exploring structured avenues within the UN framework for resource mobilisation and allocation. 

On the foundational capacities of cybersecurity, an emphasis was placed on developing ICT policies and national strategies, enhancing societal awareness, and establishing national cybersecurity agencies or CERTs.

Furthermore, the importance of self-assessment tools for improving states’ participation in capacity building programmes was emphasised. Pakistan proposed implementing checklists and frameworks for evaluating cybersecurity readiness and identifying gaps. Rwanda advocated for reviews based on the cybersecurity capacity maturity model (CMM) to achieve varying levels of capacity maturity. The discussions also commended existing initiatives, such as the Secretariat’s mapping exercise and emphasised the need for a multistakeholder approach in capacity building efforts. Finally, Germany highlighted the significant contributions of organisations in creating gender-sensitive toolkits for cybersecurity programming, underscoring the importance of incorporating gender perspectives in implementing the UN framework on cybersecurity.

Regular institutional dialogue: the fight for a single-track process

States are still divided on the issue of regular institutional dialogue. What they agree on is that there must be a single process, that its establishment must be agreed upon by consensus, and that its decisions must be made by consensus.

France, one of the original co-sponsors of the PoA, delivered a presentation on the PoA’s future elements and organisation. Review conferences would be convened in the framework of the PoA every few years. The scope of these review conferences would include (i) assessing the evolving cyber threat landscape and the results of the initiatives and meetings of the mechanism, (ii) updating the framework as necessary, and (iii) providing strategic direction and a mandate or programme of work for the PoA’s activities. The periodicity would need to be defined so as not to burden delegations, especially those from small and developing countries. However, the PoA would need to keep up with the rapid evolution of technology and of the threat landscape.

The PoA would also include open-ended plenary discussions to (i) assess progress in the implementation of the framework, (ii) take forward any recommendations from these modalities, (iii) discuss ongoing and emerging threats, and (iv) provide guidance for open-ended technical meetings and practical initiatives. Intersessional meetings could also be convened if necessary.

Furthermore, four modalities would feed discussions on the implementation of the framework: capacity building, voluntary reporting by states, practical initiatives, and contributions from the multistakeholder community. The PoA could leverage existing and potential capacity building efforts to increase their visibility, improve their coordination, and support the mobilisation of resources. The review conferences and discussions would then provide an opportunity to exchange views on ongoing capacity building efforts and identify areas where additional action is needed. Voluntary reporting by states could be based either on creating a new reporting system or on promoting existing mechanisms. The PoA would contain, enable, and deepen practical initiatives, building on existing initiatives and developing new ones when necessary. It would also enable engagement and collaboration with the multistakeholder community.

France also noted that a cross-regional paper to build on this proposal will be submitted at the next session.

Multiple delegations expressed support for the PoA, including the EU, the USA, the UK, Canada, Latvia, Switzerland, Cote d’Ivoire, Croatia, Belgium, Slovakia, Czechia, Israel, and Japan.

The Russian Federation, the country that originally suggested the OEWG, is the biggest proponent of its continuation. Russia cautioned against making decisions by a majority in the General Assembly, noting that such an approach will not be met with understanding by member states, first and foremost developing countries, which long fought to get the opportunity to directly partake in the negotiations process on the principles governing information security. Russia stated that after 2025, a permanent OEWG with a decision-making function should be established. Its pillar activity would be crafting legally binding rules, which would serve as elements of a future universal agreement on information security. The OEWG would also adapt international law to the ICT sphere. It would strengthen CBMs, launch mechanisms for cooperation, and establish programmes and funds for capacity building. Belarus, Venezuela, and Iran are also in favour of another OEWG.

A number of countries didn’t express support for either the PoA or the OEWG but noted some of the elements the future mechanism should have.

Similarly to Russia, China noted that the future mechanism should implement the existing framework but also formulate new norms and facilitate the drafting of legal instruments. The Arab Group noted that the future mechanism should develop the existing normative framework to achieve new legally binding norms. Indonesia also noted the mechanism should create rules and norms for a secure and safe cyberspace.

Latvia and Switzerland noted that the mechanism must focus on the implementation of the existing framework. However, Switzerland and the Arab Group noted that the mechanism could identify gaps in the framework and could develop the framework further.

Many delegations noted that capacity building must be an integral part of the regular mechanism, such as South Africa, Bangladesh, the Arab Group, Switzerland, Indonesia, and Kenya.

States also expressed opinions on which topics should be discussed under the permanent mechanism. Malaysia, South Africa, Korea, and Indonesia stated that the topics under the mechanism should be broadly similar to those of the OEWG. The UK, Latvia and Kenya stated it should discuss threats, while Bangladesh outlined the following emerging threats: countering disinformation campaigns, including deepfakes; quantum computing; AI-powered hacking; and addressing the use of ICTs for malicious purposes by non-state actors.

South Africa highlighted that discussion on voluntary commitments, such as norms or CBMs, should be developed without prejudice to the possibility of a future legally binding agreement. The UK noted that the mechanism should also discuss international law.

States also discussed the operational details of the future mechanism. For instance, Egypt suggested that the future mechanism hold biennial meetings, convene review conferences every six years, and decide on intersessional meetings or informal working groups by consensus. The future mechanism should ensure the operationalisation and review of established cyber tools, including the POC directory and all other proposals adopted by the current OEWG. Sri Lanka noted that the sequence of submitting progress reports, be it annual or biennial, should correspond with the term of the Chair and its Bureau.

Brazil suggested a moratorium on First Committee resolutions until the end of the OEWG’s mandate to allow member states to focus on their efforts in the OEWG. This suggestion was supported by El Salvador, South Africa, Bangladesh, and India.

Dedicated stakeholders session

The dedicated stakeholder session allowed ten stakeholders to share their expertise within the substantive session. 

The stakeholders addressed the topics of CII protection and AI (Center for Excellence of RSIS), norms I and J, supply chain vulnerabilities, and addressing the threat lifecycle (Hitachi), role of youth and the importance of youth perspective as a possible area of thematic interest of OEWG (Youth for Privacy). The topics of AI and supply chain management are echoed in SafePC Solutions‘ statement. At the same time, the Centre for International Law (CIL) at the National University of Singapore focused on the intersection of international law and the use of AI.

Chatham House has shared their research on the proliferation of commercial cyber intrusion tools, among others, and the Pall Mall Process, launched by the UK and France. Access Now focused on intersectional harms caused by malicious cyber threats, issues of surveillance and norms E and J. Building on the Chatham House and Access Now remarks, the Paris Peace Forum focused its intervention on the commercial proliferation of cyber-intrusive and disruptive cyber capabilities, and possible helpful steps states could undertake in the short term.

DiploFoundation focused on the responsibility of non-state stakeholders in cyberspace and shared the Geneva Manual on responsible behaviour in cyberspace. The Nuclear Age Peace Foundation, in their statement, connected cybersecurity concerns with safeguarding weapons systems and the importance of secure software, while the National Association for International Information Security focused on the need to interpret the norms of state behaviour.

What’s next?

The OEWG’s schedule for 2024 is jam-packed: in mid-April, the Chair will revise the discussion papers circulated before the 7th session. On 9 May, the POC directory will be launched, followed by a global roundtable meeting on ICT security capacity building on 10 May 2024. A dedicated intersessional meeting will be held on 13-17 May 2024.

Looking ahead to the second half of 2024, the 8th and 9th substantive sessions are planned for 8-12 July and 2-6 December 2024. A simulation exercise for the POC directory is also on the schedule, along with the release of capacity-building materials by the Secretariat, including e-learning modules.

Decision postponed on the Cybercrime Convention: What you should know about the latest session of the UN negotiations

The UN’s Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of ICTs for Criminal Purposes, aka the Ad Hoc Committee on Cybercrime, convened in New York for a culminating session held from 29 January to 9 February 2024, marking the end of two years of negotiations. The Ad Hoc Committee (AHC) was tasked with drafting a comprehensive cybercrime convention. However, as the final session started, there were no signs of significant progress: member states couldn’t agree on significant issues such as the scope of the convention. As a result, the delegations required more time to discuss the content and wording of the draft convention and decided to hold additional meetings. Though some delegations, such as China and the US, offered financial support for more meetings, several states, such as El Salvador, Uruguay, and Liechtenstein, pointed out the strain these additional meetings would put on their resources.


The Chair initially split negotiations into two tracks: formal sessions and informal meetings behind closed doors. The informal meetings seem to have focused on more sensitive issues, such as the scope and human rights-related provisions, and were extremely intense, causing the regular sessions to start late. This also resulted in less transparency in negotiations and excluded the multistakeholder community from contributing.

In the last days of the concluding sessions, there was increased pressure from civil society and the industry, as well as cybersecurity researchers.

“There are fears that if the UN Ad Hoc Committee does not conclude with a convention, it could be considered a failure of multilateral diplomacy. However, in my opinion, the real fiasco of diplomatic efforts to address the problem of cybercrime would happen if the states adopt a treaty that significantly waters down human rights obligations and legitimises the use of criminal justice for oppression and persecution.” 

Dr. Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University

The comments provided are personal opinions and are not representative of the organisation as a whole.

So, what happened?

Here are the issues with the draft convention that need to be resolved:

Scope of the convention and criminalisation 

One of the main unresolved points remains whether the convention should be a traditional cybercrime treaty or cover all crimes committed via ICTs. This divide translated into a lengthy discussion on the name of the convention itself, as well as on Article 3 (scope of application) of the draft convention.

In relation to the scope of application, delegations discussed Canada’s proposal, which received support from 66 states. The proposal suggests broad wording of the actions that may fall within the scope of the convention, and adding Article 3.3 to ensure that the convention doesn’t permit or facilitate ’repression of expression, conscience, opinion, belief, peaceful assembly or association; or permitting or facilitating discrimination or persecution based on individual characteristics’.

The Russian Federation continued expressing the view that the AHC hadn’t fully implemented the mandate outlined in Resolution 74/247, which established the committee, and that the scope of the convention should include broader measures to combat ‘the spread of terrorist, extremist, and Nazi ideas with the use of ICTs’. Russia further highlighted that ‘many articles are simply copied from treaties that are 20 years old’ and that the revised text doesn’t include efforts to agree on investigation procedures or to create platforms and channels for law enforcement cooperation.

In the same vein, Iran, Egypt, and Kuwait see the primary mandate of the AHC as elaborating a comprehensive international convention on the use of ICTs for criminal purposes, and regard the inclusion of human rights regulations and detailed international collaboration as duplicating already existing international treaties.

Representatives from civil society, private entities, and academia also shared feedback on the scope, stressing the importance of limiting the convention’s scope and implementing strong human rights protections. They expressed concerns about the convention’s potential to undermine cybersecurity, jeopardise data privacy, and diminish online rights and freedoms.

Discussing additional provisions in the criminalisation chapter, delegations were deadlocked over specific terms. For instance, concerning Articles 6(2), 7(2), and 12, Russia, with support from several delegations, proposed replacing ‘dishonest intent’ with a more specific term. Russia’s representative argued that ‘dishonest’ is not a legal term, thus making it challenging for countries to implement or clarify it in domestic legislation. However, the UK, US, and EU opposed this change. Austria, in particular, explained that ‘dishonest intent’ provides clear criteria for identifying when conduct constitutes an offence, offering flexibility across various legal systems.

Human rights and safeguards 

Human rights (Article 5) and safeguards (Article 24) have been a difficult topic for delegations from day one. Some delegations, such as Iran, argued that the cybercrime treaty is not a human rights treaty, suggesting a model akin to the UN Convention against Corruption (UNCAC), which omits explicit human rights references. As reported earlier, this didn’t find support from many other delegations.

Egypt and other delegations also expressed confusion over the repetitive nature of certain human rights provisions within the text, emphasising the redundancy of similar mentions occurring five or six times. 

Additionally, Egypt raised concerns about Article 24 and questioned why the principle of proportionality was singled out from other legal principles recognised under international law. Egypt pointed out the challenge of applying proportionality when different countries have varying legal provisions, such as the death penalty. Pakistan supported Egypt, and Brazil suggested appending ‘legality’ to the principle of proportionality, so that both the principles of legality and proportionality would be included. Ecuador expressed support for Brazil’s proposal.

As a result, both articles remain without text in the further revised draft text of the convention.

There was no consensus regarding the articles on online sexual abuse (Article 13) and non-consensual distribution of intimate images (Article 15). Delegations tried to find a balance between protecting privacy and criminalising the sharing of intimate images without consent. Many felt the convention should be flexible to accommodate different laws and international human rights agreements. There was debate about whether to stick with the Convention on the Rights of the Child’s (CRC) definition or use a different one. The US worried the CRC’s definition didn’t fit cybercrimes well and might lead to inconsistent interpretations that wouldn’t adequately protect children under Article 13. 

Transfer of technology and technical assistance

The transfer of technology appears twice in Article 1 (statement of purpose) and Article 54 (technical assistance and capacity-building). The group of African countries strongly advocated for keeping a reference to the transfer of technology in both articles, including in Article 1, paragraph 3. 

Russia, Syria, Namibia, India, Senegal, and Algeria supported this, while the US was against it and called to keep this reference in Article 54 only. The EU, Israel, Norway, Canada, Albania, and the UK supported the US.

With Article 54, more or less the same groups of states had further disagreements. The US, Israel, the EU, Norway, Switzerland, and Albania supported inserting ‘voluntary’ before ‘where possible’ and ‘on mutually agreed terms’ in the context of how capacity building shall be provided between states in Article 54(1). Most African countries, along with Iran, Iraq, Cabo Verde, Colombia, Brazil, and Pakistan, opposed such a proposal because it would undermine the purpose of the provision in ensuring effective assistance to developing countries. With the goal of reaching a consensus on Article 54(1), the US withdrew its proposal, retaining ‘where possible’ and ‘on mutually agreed terms’. In the revised draft text of the convention, these paragraphs remain open for further negotiations between delegations.

“As offenders, victims and evidence are often located in different jurisdictions, investigations will typically require international coordinated law enforcement action. This means that gaps in the capacity of one country can severely undermine the safety of communities in other countries. Technical assistance and capacity-building are key tools to address this challenge. However, to have a real-world impact, the future Convention needs to recognize that addressing the needs of the diverse actors involved in combating [the criminal use of ICTs] [cybercrime] will require various forms of specialized technical assistance, which no single organization can provide. Even within countries, the various actors involved in combating [the criminal use of ICTs] [cybercrime] – including legislators, prosecutors, law enforcement, national Computer Emergency Response Teams (CERTs) – may have very different technical assistance needs.”

Director Craig Jones, INTERPOL Cybercrime Programme

Scope of international cooperation

Delegations expressed opposing views on provisions related to cooperation on electronic evidence and didn’t reach consensus. The discussion included Article 35(1)(c), Article 35(3), and Article 35(4), which deal with the general principles of international cooperation and e-evidence. The draft convention allowed countries to collect data across borders without prior legal authorisation; however, delegations could not agree on this point.

In particular, New Zealand, Canada, the EU, Brazil, the USA, Argentina, Uruguay, Singapore, Peru, and others expressed concerns, fearing that the current draft of Article 35 would allow an excessively broad application, potentially leading to the pursuit of non-criminal activities. These states noted that the previous draft allowed national law to determine what constitutes criminal conduct, and pointed out the need to differentiate between serious crimes and lesser offences, the need for safeguards and guardrails on the power of states to limit the possibility of repression and the implementation of intrusive and secret mechanisms, and the need to ensure the protection of human rights. On the other hand, states like Egypt, Saudi Arabia, Iran, Iraq, Mauritania, Oman, and others called for the deletion of Article 35(3) altogether.

Additionally, New Zealand suggested including a non-discrimination clause in Article 37(15) on extradition to prevent unfair grounds for refusing cooperation. This would ensure consistency across the entire chapter on international cooperation. However, member states couldn’t agree on the language and left this open. 

Within the international cooperation chapter, delegations spent quite a bit of time discussing terms: in particular, in Articles 45 and 46, the debates centred around the use of ‘shall’ vs ‘may’. The EU and other delegations advocated for changing ‘shall’ to ‘may’ in those articles to allow states the option, but not the obligation, to cooperate. This proposal was met with mixed reactions, with some delegations, including Egypt and Russia, preferring to retain ‘shall’ to ensure robust international cooperation. The countries opposing the change from ‘shall’ to ‘may’ argued that it would undermine the effectiveness of cooperation between states. So far, the further revised draft text of the convention includes both options in brackets.


Preventive measures 

Another term which created some confusion across several delegations was the use of ‘stakeholders’ in Article 53, where preventive measures are discussed and paragraph 2 highlights that ‘States shall take appropriate measures […] to promote the active participation of relevant individuals and stakeholders outside the public sector, such as non-governmental organizations, civil society organizations, academic institutions and the private sector, as well as the public in general, in the prevention of the offences covered by this Convention’. Egypt, in particular, called to remove the word ‘stakeholders’ unless it’s clearly defined. The US didn’t support this proposal. The further revised draft text of the convention now reads ‘relevant individuals and entities […]’, but the paragraph hasn’t been agreed yet.

In the same article, in paragraph 3(h), which calls for strategies and policies to prevent ‘gender-based violence’, states couldn’t reach an agreement. The first group of states, including the USA, Iceland, Australia, Vanuatu, and Costa Rica, advocated for keeping the provision. Other delegations, such as Iran, Namibia, Saudi Arabia, and Russia, among others, proposed deleting the term ‘gender-based’ and keeping only ‘violence’. In the end, this part remained as it is, with the term ‘gender-based violence’, the chair emphasising that this article is not obligatory, as it says that preventive measures may include such strategies.

Another notable example of where states had opposing views was Article 41 on the 24/7 network, a point of contact designated at the national level, available 24 hours a day and 7 days a week, to ensure the provision of immediate assistance for the purposes of the convention. India proposed new duties for the 24/7 network, explaining that prevention should be part of such duties. They particularly stressed that ‘if the offence is not prevented and it occurs, States would be needing multiple times the resources that they saved in the process of evidence collection, prosecution, extradition, and so on. So it’s better to prevent rather than to spend multiple times the same resources that States are trying to save in going through the whole process of criminal justice’. Russia, Kazakhstan, and Belarus supported this proposal, while the US, UK, Argentina, the EU, and Canada didn’t.

So, what’s next?


As mentioned earlier, the delegates managed to agree on one major item: to postpone the final decision. The chair’s further revised draft text of the convention is available on the AHC’s website, and new dates for more meetings should be announced soon.

Does this mean that delegations are close to reaching a consensus over a landmark cybercrime convention before the UN General Assembly? Hardly. But these two weeks have also demonstrated that many open issues (though less fundamental than the scope of application) can be resolved behind closed doors, and there is still a chance that intense non-public negotiations between delegations could speed up the process.

We will continue to monitor the negotiations; in the meantime, discover more through our detailed reports from each session, generated by DiploAI.

The perfect cryptostorm

To fully understand the incredible story behind the cryptocurrency and blockchain craze of 2017-2021, we must explain the unique setting in which events played out, setting the course for the collision. One component amplified the other, multiplying the effect, thus creating a perfect cryptostorm. Unfortunately, that storm took a toll on trust in the industry and caused financial losses.

The cryptocurrency industry is a one-hit wonder. But what a wonder that is! Bitcoin presents a true marvel of human engineering of money. It has withstood the test of time, proving its resilience and becoming the worldwide recognised use case for digital gold. We witnessed newly coined terms such as ‘crypto-rich’. In response, a whole new payment industry emerged, forged by the desire of legacy financial organisations to stay relevant in the new era.

Moreover, alongside the new fast digital payment industry, which was delivering miracles in the financial inclusion of the unbanked, the retail investing industry brought a new form of capital inflow. The emergence of online trading companies, backed mainly by larger institutional investors, was recognised as a risk to retail users and to consumer protection rights overall.

Unanswered risks, the new hype around the change in the financial industry, and the emergence of inexperienced investors were the ingredients for the perfect storm in the cryptocurrency industry. Add human greed to that mixture and it becomes the perfect cryptostorm.


The necromancers that summoned this cryptostorm are quite vividly depicted in the latest Netflix documentary drama, ‘Bitconned’, which aired this January after two years of production. In 2017, the Centra Tech company raised USD 25 million in investments for its main product: a VISA-backed credit card allowing people to spend their cryptocurrency at any retail store across the USA.
Centra Tech’s CEO, CTO, and other executives had a Harvard Business School background or an MIT engineering degree. The new headquarters in downtown Miami was full of young, bright people, and 20,000 VISA cards were produced. However, none of this was real. Everything was a (not so cleverly) staged mirage.

The court case concluded in 2021, handing jail sentences to the people involved. The documentary is led by one of the three prominent persons behind Centra Tech, Ray Trapani, who collaborated with the federal investigation on the case. In the film, he explained in detail how two young scammers working at a car rental company raised millions in an ICO with only a one-page website.
Once it started, the storm did not calm down for years. The story of Centra Tech from 2017 was replicated time and time again, culminating in the collapse of what was, at the time, the world’s second-largest company in the industry: FTX, an online cryptocurrency exchange. As we read from publicly presented court evidence in the cases against Celsius, Luna, and FTX, the crypto companies spent funds they held in custody for their investors.

Screenshot from the Netflix documentary film ‘Bitconned’

How did crypto scam companies utilise the above ingredients?

By promising the right thing at the right moment. Internet users witnessed the financial sector’s transformation and bitcoin’s success. They could easily be convinced that a new decentralised finance infrastructure was on the verge of emerging, helped along by the lack of a regulatory framework. At the same time, it seemed to offer them a fair chance to participate in the industry’s beginnings and become the new crypto millionaires, which was the main incentive for many. If the people behind the open-source cryptocurrency (bitcoin) could create the ‘internet of information’, the next generation of cryptocurrency engineers would surely deliver the ‘internet of money’. However, again, it was false. It was, in fact, a carefully worded money-grabbing experiment.

All the above ideas still stand as a goalpost for further industry developments. Moreover, we must admit that the initial takeover of the industry by scammers, fraudsters, and, in some cases, straightforward sociopaths will taint the forthcoming period of developments in this industry.

In contrast to bitcoin, the creators of almost all cryptocurrencies that came later were incentivised by the financial benefits of ‘tokenisation’ rather than by secure and trustworthy technology. The term tokenisation was supposed to describe the emergence of fast-exchanging digital information (tokens) that could help trade digital products and services, promising the possibility of a creators’ economy, micropayments, or unique digital objects. But in reality, it was merely copying analogue objects to the digital world and charging money for that service. Stocks, bonds, tin cans, energy prices, cloud storage, and dental appointments were all promised to be tokenised, while the term ‘blockchain’ was the ultimate hype word. People soon realised that not all digital artefacts had value solely by being placed on a blockchain. That was true even for projects that honestly intended to build the product (a token or cryptocurrency) rather than just sell vapourware and go permanently offline the moment they got busted. As with any other technology, time will show the most efficient and rational use of blockchain.

Could this happen again for online financial services? 

The chances are meagre; it is certainly unlikely to happen again on this scale. Financial agencies worldwide have prepared a set of comprehensive laws and empowered authorities to detect such fraudulent companies much faster and more efficiently. Financial regulations are being negotiated with much more success on a global scale. Intergovernmental financial organisations and their bodies have equipped regulators with the tools to comprehend how the technology works and what can be done on the consumer protection side. Also, the users have had their fair share of schooling. Once bitten, twice shy.

For any other technology developed and utilised mainly online, the chances are always there. Users can now easily be engaged directly, via a mobile app, by companies that promise the next technological innovation. All such companies have to do is carefully word our societal dreams into their product descriptions.