Digital dominance in the 2024 elections

As a historic number of voters heads to the polls, determining the future course of over 60 nations and the EU in the years ahead, all eyes are on digital technologies, especially AI.

Digital technologies, including AI, have become integral to every stage of the electoral process, from the inception of campaigns to polling stations, a phenomenon observed for several years. What distinguishes the current landscape is their unprecedented scale and impact. Generative AI, a type of AI that enables users to quickly generate new content, including audio, video, and text, made a significant breakthrough in 2023, reaching millions of users. With its ability to quickly produce vast amounts of content, generative AI contributes to the scale of misinformation by generating false and deceptive narratives at an unprecedented pace. The multitude of elections worldwide, pivotal in shaping the future of certain states, has directed intense focus on synthetically generated content, given its potential to sway election outcomes.

Political campaigns have experienced the emergence of easily produced deepfakes, stirring worries about information credibility and setting off alarms among politicians who called on Big Tech for more robust safeguards.

Big Tech’s response 

Key players in generative AI, including OpenAI and Microsoft, joined platforms like Meta Platforms, TikTok, and X (formerly Twitter) in the battle against harmful content at the Munich Security Conference. Signatories of the tech accord committed to working together to create tools for identifying targeted content, raising public awareness through educational campaigns, and taking action against inappropriate content on their platforms. To address this challenge, potential technologies being considered include watermarking or embedding metadata to verify the origin of AI-generated content, focusing primarily on photos, videos, and audio.
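The watermarking and metadata approaches mentioned above can be sketched in miniature. The example below is an illustrative toy, not any platform’s actual scheme (the key, field names, and functions are invented for this sketch): a provider attaches a signed provenance record to generated content, and anyone holding the verification key can later check that neither the content nor the claim of origin has been altered.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the content provider


def attach_provenance(content: bytes, generator: str) -> dict:
    """Wrap generated content with a provenance record: who made it,
    a hash of the content, and an HMAC tag binding the two together."""
    record = {
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash and the HMAC; any tampering with the content
    or the metadata breaks verification."""
    claimed = {k: v for k, v in record.items() if k != "tag"}
    if hashlib.sha256(content).hexdigest() != claimed["sha256"]:
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])


image_bytes = b"...synthetic image data..."
record = attach_provenance(image_bytes, "example-genai-model")
print(verify_provenance(image_bytes, record))      # → True (intact content)
print(verify_provenance(b"edited bytes", record))  # → False (tampered content)
```

Real-world provenance efforts such as the C2PA standard are similar in spirit but use public-key signatures and standardised manifests, so verification does not require sharing a secret key.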

After the European Commissioner for Internal Market Thierry Breton urged Big Tech to assist European efforts to combat election misinformation, tech firms responded promptly.

Back in February, TikTok announced that it would launch in-app election centres in local languages for EU member states to prevent misinformation from spreading ahead of the election year.

Meta intends to launch an Elections Operations Center to detect and counter threats like misinformation and misuse of generative AI in real time. Google collaborates with a European fact-checking network on a unique verification database for the upcoming elections. Previously, Google announced the launch of an anti-misinformation campaign in several EU member states featuring ‘pre-bunking’ techniques to increase users’ capacity to spot misinformation. 

Tech companies are, by and large, partnering with individual governments’ efforts to tackle the spread of election-related misinformation. Google is teaming up with India’s Election Commission to provide voting guidance via Google Search and YouTube for the upcoming elections. It is also partnering with Shakti, the India Election Fact-Checking Collective, to combat deepfakes and misinformation, offering training and resources throughout the election period.

That said, some remain dissatisfied with the ongoing efforts by tech companies to mitigate misinformation. Over 200 advocacy groups have called on tech giants like Google, Meta, Reddit, TikTok, and X to take a stronger stance on AI-fuelled misinformation before the global elections. They claim that many of the largest social media companies have scaled back necessary interventions such as ‘content moderation, civil-society oversight tools and trust and safety’, making platforms ‘less prepared to protect users and democracy in 2024’. Among other requests, the companies are urged to disclose AI-generated content and prohibit deepfakes in political ads, promote factual content algorithmically, apply uniform moderation standards to all accounts, and improve transparency through regular reporting on enforcement practices and disclosure of AI tools and the data they are trained on.

EU to walk the talk?

Given the far-reaching impact of its regulations, the EU has assumed the role of de facto regulator of digital issues. Its policies often set precedents that influence digital governance worldwide, positioning the EU as a key player in shaping the global digital landscape.

European Commissioner for Internal Market Thierry Breton

The EU has been proactive in tackling online misinformation through a range of initiatives. These include implementing regulations like the Digital Services Act (DSA), which holds online platforms accountable for combating fake content. The EU has also promoted media literacy programmes and established the European Digital Media Observatory to monitor and counter misinformation online. With European Parliament elections approaching and the rising prevalence of AI-generated misinformation, leaders are ramping up efforts to safeguard democratic integrity against online threats.

Following the Parliament’s adoption of rules on online political advertising, which require clear labelling and prohibit the sponsorship of ads from outside the EU in the three months before an election, the European Commission issued guidelines for Very Large Online Platforms and Search Engines to protect the integrity of elections from online threats.

The new guidelines cover various election phases, emphasising internal reinforcement, tailored risk mitigation, and collaboration with authorities and civil society. The proposed measures include establishing internal teams, conducting elections-specific risk assessments, adopting specific mitigation measures linked to generative AI and collaborating with EU and national entities to combat disinformation and cybersecurity threats. The platforms are urged to adopt incident response mechanisms during elections, followed by post-election evaluations to gauge effectiveness.

The EU political parties have recently signed a code of conduct brokered by the Commission, intended to maintain the integrity of the upcoming elections for the Parliament. The signatories pledge to ensure transparency by labelling AI-generated content and to abstain from producing or disseminating misinformation. While this introduces an additional safeguard to the electoral campaign, the responsibility for implementation and monitoring falls on the European umbrella parties rather than the national parties conducting the campaign on the ground.

What to expect

The significance of the 2024 elections extends beyond selecting new world leaders. They serve as a pivotal moment to assess the profound influence of digital technologies on democratic processes, putting digital platforms into the spotlight. The readiness of tech giants to uphold democratic values in the digital age and respond to increasing demands for accountability will be tested. 

Likewise, the European Parliament elections will test the EU’s ability to lead by example in regulating the digital landscape, particularly in combating misinformation. The effectiveness of the EU initiatives will be gauged, shedding light on whether collaborative efforts can establish effective measures to safeguard democratic integrity in the digital age.

(Jail) time ahead for the cryptocurrency industry 

The cryptocurrency and digital asset industry has once again been the focus of worldwide media. This time, it is not about the promises of an inclusive future of finance, but about a number of court cases initiated or concluded in the past months. 


These developments can be seen as a desire by regulators worldwide to establish legal practice around the new class of digital assets (or cryptoassets, as they are named in regulations worldwide) and to send a message to the ever-growing base of consumers of such products that they will be protected as they enter this new arena. A particular push is visible in the USA, where two of the world’s biggest cryptocurrency exchanges, Binance and Kraken, have been charged with violating anti-money-laundering rules. In both cases, regulators highlighted the lack of fully implemented Know-Your-Customer (KYC) procedures as a primary concern. In the case of the world’s number one cryptocurrency exchange, Binance, the US Justice Department argued that KYC failures enabled money laundering and the evasion of international sanctions. Binance and its CEO, Changpeng Zhao, pleaded guilty to charges filed by the US Justice Department and the US Securities and Exchange Commission (SEC), agreeing to a record USD 4.2 billion fine. Most recently, the cryptocurrency exchange KuCoin has been hit with similar anti-money-laundering charges and is facing a similar outcome. For Kraken, the SEC is asking for a total ban in the USA, as the exchange failed to register within the regulatory framework.
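To illustrate what regulators mean by fully implemented KYC, here is a deliberately simplified sketch (all names, rules, and country codes are hypothetical, not any exchange’s actual system): a withdrawal is processed only after the customer record passes identity and jurisdiction checks.

```python
# Illustrative toy only: a KYC gate of the kind regulators expect exchanges
# to enforce before processing transactions. Real KYC involves identity-
# document verification and sanctions-list screening via specialised
# providers; everything below is invented for this sketch.

SANCTIONED_JURISDICTIONS = {"XX"}  # placeholder country codes


class KYCError(Exception):
    """Raised when a customer record fails a compliance check."""


def check_customer(customer: dict) -> None:
    """Run basic KYC checks; raise KYCError on failure."""
    if not customer.get("identity_verified"):
        raise KYCError("identity not verified")
    if customer.get("country") in SANCTIONED_JURISDICTIONS:
        raise KYCError("sanctioned jurisdiction")


def process_withdrawal(customer: dict, amount: float) -> str:
    check_customer(customer)  # the gate found missing in the cases above
    return f"withdrawal of {amount} approved"


print(process_withdrawal({"identity_verified": True, "country": "DE"}, 100.0))
```

The point of the sketch is structural: the compliance check must sit in front of every transaction path, which is exactly the control regulators found incompletely implemented.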

A couple of significant cases from the past have reached their final acts in recent months. The cases of Celsius, Terra, and, most prominently, the FTX exchange moved out of a standstill, and the FTX trial ended with the sentencing of former FTX CEO Sam Bankman-Fried. The sentence was delivered in the court case related to the collapse of the FTX exchange and the Alameda Research trading firm in November 2022. The former FTX CEO was sentenced to 25 years in prison six months after being convicted of fraud. In addition to the sentence, Bankman-Fried was ordered to pay USD 11 billion in reparations and damages to FTX users and investors. Another crypto-company CEO, Do Kwon, was extradited from Montenegro to prosecutors in South Korea for the trial of the Terra cryptocurrency company. Kwon had been hiding from law enforcement for a whole year before finally being arrested on the tarmac of Podgorica airport in Montenegro. He also faces a lengthy jail sentence if the allegations in the indictment are upheld at trial.

‘Cryptocurrency King’ Do Kwon with a group of Montenegrin police officers. Photo by: Radio Free Europe (RFE)

In another long-lasting legal battle before the US courts, a case against one of the biggest cryptocurrency companies, Ripple Labs, is nearing its end. Prosecutors are seeking another major fine, of USD 2 billion, which, according to their statement, would send a message to the industry about consumer protection. What exactly is that message?


‘Countries should take the issue seriously and strengthen regulation, as virtual assets tend to flow towards less regulated jurisdictions,’ Financial Action Task Force (FATF) president T. Raja Kumar pointed out in an interview, in which he acknowledged that only one-third of the world has implemented some form of cryptocurrency regulation.

Stronger regulation is definitely the trend for crypto companies. As a whole, the cryptocurrency industry has seen a significant drop in the value received by illicit cryptocurrency addresses, and the share of all crypto transaction volume associated with illicit activity has also decreased. This is stressed in the annual report by Chainalysis, which provides blockchain forensics for most governments worldwide. The industry, then, is moving in the right direction.

OEWG’s seventh substantive session: the highlights

The OEWG held its seventh substantive session on 4–8 March. With 18 months until the end of the group’s mandate, a sense of urgency can be felt in the discussions, particularly on the mechanism that will follow the OEWG.

Some of the main takeaways from this session are:

  • AI is increasingly prevalent in the discussion on threats, with ransomware and election interference rounding out the top three threats.
  • There is still no agreement on whether new norms are needed.
  • Agreement is also elusive on whether and how international law and international humanitarian law apply to cyberspace.
  • The operationalisation of the PoC directory, the most important confidence building measure (CBM) to result from the OEWG, is in full swing ahead of its launch on 9 May.
  • Bolstering capacity building efforts and funding for them are necessary actions.
  • The mechanism for regular institutional dialogue on ICT security must be single-track and consensus-based. Whether it will take the shape of the Programme of Action (PoA) or another OEWG is still up in the air.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated OEWG process page.

UN OEWG
This page provides detailed and real-time coverage of cybersecurity, peace and security negotiations at the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.
Threats: AI, elections and ransomware at the forefront

The widespread availability of AI tools for different purposes led delegations to focus on AI-enabled threats. AI tools may exacerbate malicious cyber activity, for example, by speeding up the search for ICT vulnerabilities, aiding malware development, and boosting social engineering and phishing tactics. 

France, the Netherlands, and Australia spoke about the security of AI itself, pointing to the vulnerability of algorithms and platforms and the risk of poisoning models. 

2024 is the year of elections at different levels in many states. Large language models (LLMs) and generative AI accelerate the creation of fakes, the proliferation of disinformation, and the manipulation of public opinion, especially during significant political and social processes. Belgium, Italy, Germany, Canada, and Denmark expressed concern that cyber operations are being used to interfere in democratic processes. Malicious use of cyber capabilities can influence political outcomes and threaten the process by targeting voters, politicians, political parties, and election infrastructure, thus undermining trust in democratic institutions. 

Another prevalent threat highlighted by the delegations was ransomware. Cybercriminals target critical infrastructure and life-sustaining systems, and states noted that healthcare is the worst-affected sector. Belgium stressed that such attacks eventually lead to human casualties because of the disruption to medical assistance. The USA and Greece highlighted the increase in ransomware attacks that occurs because some states allow criminal actors to act from their territories with impunity. AI now also provides powerful leverage for malicious threat actors, giving unsophisticated operators of ransomware-as-a-service a new degree of possibilities and allowing rogue states to exploit the technology for offensive cyber activities. 

Ransomware attacks go hand in hand with IP theft, data breaches, violation of privacy, and cryptocurrency theft. The Republic of Korea, Japan, the Czech Republic, Mexico, Australia and Kenya connected such heists with the proliferation of WMDs. 

Delegations expressed concerns about a growing commercial market for cyber intrusion capabilities, 0-day vulnerabilities, and hacking-as-a-service. The UK, Belgium, Australia, and Cuba considered this market capable of increasing instability in cyberspace. The Pall Mall process, launched by France and the UK to address the proliferation of commercially available cyber intrusion tools, was supported by Switzerland and Germany.

The growing IoT landscape expands the attack surface, Mauritius, India, and Kazakhstan noted. Quantum computing may break existing encryption methods, leading to strategic advantages for those who control the technology, Brazil added. It could also be used to develop armaments, other military equipment, and offensive operations. 

Russia once again drew attention to the use of information space as an arena of geopolitical confrontation and militarisation of ICTs. Russia, China, and Iran have also highlighted certain states’ monopolisation of the ICT market and internet governance as threats to cyber stability. Syria and Iran pointed to practices of technological embargo and politicised ICT supply chain issues that weaken the cyber resilience of States and impose barriers to trade and tech development.

Norms: new norms vs. norms’ implementation

Reflections from several delegations highlighted a persistent binary dilemma: should there be new norms or not? 

Iran, China, and Russia highlighted once again that new norms are needed. Russia also suggested four new norms, including norms to strengthen the sovereignty, territorial integrity, and independence of states; to establish the inadmissibility of unsubstantiated accusations against states; and to promote the settlement of interstate conflicts through negotiations, mediation, reconciliation, or other peaceful means. Brazil noted that additional norms will become necessary as technology evolves and stressed that any efforts to develop new norms must occur within the UN OEWG. South Africa expressed that it could support a new norm to protect against AI-powered cyber operations and attacks on AI systems. Vietnam strongly supported the development of technical standards regarding electronic evidence to facilitate the verification of the origins of cybersecurity incidents. 

However, some delegations insist that implementing already existing norms comes before elaborating new ones. Bangladesh urged states to collaborate more to translate norms into concrete actions and focus on providing guidance on their interpretation and implementation. The UK, in particular, suggested four steps to improve the implementation of the norms by addressing the growing commercial market for intrusive ICT capabilities. The delegate called states to prevent commercially available cyber intrusion capabilities from being used irresponsibly, to ensure that governments take the appropriate regulatory steps within their domestic jurisdictions, to conduct procurement responsibly, and to use cyber capabilities responsibly and lawfully.

Several delegations mentioned the accountability and due diligence issues in implementing the agreed norms. New Zealand, in particular, shared that the OEWG could usefully examine what to do when agreed norms are willfully ignored. France mentioned that it continues its work on the due diligence norm C with other countries. Italy called for dedicated efforts to set up accountability mechanisms to ‘increase mutual responsibility among states’ and proposed national measures to detect, defend and respond to and recover from ICT incidents, which may include the establishment at the national level of a centre or a responsible agency that leads on ICT matters.

The Chair issued a draft norms implementation checklist before the start of the session. According to Egypt, this checklist must be simplified because it includes duplicate measures and detailed actions beyond states’ capabilities. The checklist, Egypt continued, should acknowledge technological gaps among states and their diverse national legal systems, thus respecting regional specifics. Many delegations strongly supported the checklist and made recommendations. For example, the Netherlands suggested that the checklist include the consensus notion that state practices, such as arbitrary or unlawful mass surveillance, may negatively impact human rights, particularly the right to privacy.

UN OEWG Chair publishes discussion paper on norms implementation checklist
The checklist comprises voluntary, practical, and actionable measures collected from different relevant sources.

Some delegations addressed the Chair’s questions on implementing critical infrastructure protection (CIP) and supply chain security-related norms. The EU reminded delegations that it is necessary to look into existing cybersecurity best practices in this regard and gave the example of the Geneva Manual, a multistakeholder initiative to clarify the roles and responsibilities of non-state actors in implementing the norms. Italy encouraged the adoption of specific frameworks for assessing the supply chain security of ICT products based on guidelines, best practices, and international standards; practically, this could include establishing national evaluation and security certification centres for cyber certification schemes. The Republic of Korea suggested building institutional and normative foundations to provide security guidelines starting from the development stage of software products, which can be used in the public sector to protect public services or critical infrastructure from being targeted by cyberattacks. Japan suggested adopting the Software Bill of Materials (SBOM) and discussing how ICT manufacturers can achieve security by design.
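The idea behind an SBOM is simple: a machine-readable inventory of every component a product ships, so that when a vulnerability is disclosed, affected products can be identified quickly. A minimal sketch, loosely modelled on CycloneDX-style field names (the component list is invented for illustration; real SBOMs follow the full CycloneDX or SPDX specifications):

```python
import json

# Minimal, illustrative SBOM loosely following CycloneDX-style field names.
# The components listed here are invented for this sketch.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "library", "name": "zlib", "version": "1.3.1"},
    ],
}


def uses_component(sbom: dict, name: str, version: str) -> bool:
    """Check whether the product ships a given component version --
    the query a security team runs when a vulnerability is disclosed."""
    return any(
        c["name"] == name and c["version"] == version
        for c in sbom["components"]
    )


print(json.dumps(sbom, indent=2))
print(uses_component(sbom, "openssl", "3.0.13"))  # → True
```

Because the inventory is structured data rather than prose, the same query can be run automatically across an entire software estate the moment an advisory names a vulnerable component version.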

International law: applicability to use of ICTs in cyberspace

The member states held to their previous positions on the applicability of international law. Most states confirmed the applicability of international law to cyberspace, including the UN Charter, international human rights law, and international humanitarian law. However, Russia and Iran stated that existing international law does not apply to cyberspace, while Syria noted that it is unclear how international law applies in cyberspace. China and Russia, for their part, pointed out that the principles of international law apply. These states, as well as Pakistan, Burkina Faso, and Belarus, support the development of a new legally binding treaty. 

Of note was the contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay that reflected on the continued engagement of a cross-regional group of 13 states based on a working paper from July 2023. The contribution highlighted the emerging convergence of views that: 

  • states must respect and protect human rights and fundamental freedoms, both online and offline, in accordance with their respective obligations; 
  • states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused; and
  • international humanitarian law applies to cyber activities in situations of armed conflict, including, where applicable, the established international legal principles of humanity, necessity, proportionality and distinction.

Many states echoed the Colombian statement, including Germany, Australia, Czechia, Switzerland, Italy, Canada, the USA, the UK, Spain and others.

New discussion point

The contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay highlighted that states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused, a new element in the discussions within the OEWG substantive sessions. Thailand, Uganda, and the Netherlands have also specifically addressed the need for reparation for the injury caused.

The discussions have also progressed on the applicability of international humanitarian law (IHL) to the use of ICT in situations of armed conflicts. 

Senegal presented a working paper on the application of international humanitarian law on behalf of Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Sweden, and Switzerland. The working paper shows convergence on the applicability of IHL in situations of armed conflict. It delves deeper into the principles and rules of IHL governing the use of ICTs, notably military necessity, humanity, distinction, and proportionality. Other states welcomed the working paper, including Italy, Australia, South Africa, Austria, the United Kingdom, the USA, France, Spain, Uruguay and others. 

On the other hand, Sri Lanka, Pakistan, and China have called for additional efforts to develop an understanding of the applicability of IHL and its gaps.

In its statement on IHL, the ICRC pointed out the differences between the definitions of armed attack under the UN Charter and under IHL, the need to discuss how IHL limits cyber operations, and the need to interpret the existing rules of IHL so as not to undermine the protective function of IHL in the ICT environment.

The International Committee of the Red Cross: New rules protecting from consequences of cyberattacks may be needed
The ICRC emphasised the urgent need for deeper discussions on the application of international humanitarian law to the use of ICTs in armed conflict, underscoring the importance of upholding humanitarian principles amidst evolving means of warfare.

The discussion on international law greatly benefited from the recent submission to the OEWG by the Peace and Security Council of the African Union on the Application of international law in the use of ICTs in cyberspace (Common African Position). Reflecting the views of 55 states, it represents a significant contribution to the work of the OEWG and an example of valuable input by regional forums. This comprehensive position paper addresses issues of applicability of international law in cyberspace, including human rights and IHL, principles of sovereignty, due diligence, prohibition of intervention in the affairs of states in cyberspace, peaceful settlement of disputes, prohibition of the threat or use of force in cyberspace, rules of attribution, and capacity building and international cooperation. The majority of the delegations welcomed the Common African Position.

African Union submits position on international law to OEWG
The position was adopted by the Peace and Security Council of the African Union on 31 January 2024.

The Chair also pointed out that, to date, 23 states have shared their national positions on the applicability of international law in cyberspace, and many others are preparing theirs. 

Most states supported scenario-based exercises to enhance the understanding between states on the applicability of international law. They would like to have the opportunity to conduct such exercises and have a more in-depth discussion on international law in the May intersessional meeting. China firmly opposes this.

Several states, such as Japan, Canada, Czechia, the EU, Ireland and others, would like to see future discussions on international law embedded in the Programme of Action (PoA). Read more about the talks on the PoA below.

CBMs: operationalising the POC directory
 Stencil, Text

The official launch of the Points of Contact (PoC) directory is scheduled for 9 May, so the discussion revolved around the directory’s operationalisation. At the time of the session, 25 countries had appointed their PoCs. Most delegations reiterated their support for the directory and either confirmed their appointments or indicated that the process was ongoing. Some states nevertheless suggested adjustments to the PoC directory. Ghana, Canada, and Colombia commented that communication protocols may be helpful, while Czechia and Switzerland recommended that the PoCs shouldn’t be overburdened with such procedures yet. Argentina also brought up the potential participation of non-state actors in the PoC directory.

To further facilitate communication, several states advanced the usefulness of building a common terminology (Kazakhstan, Mauritius, Iran, Pakistan), while Brazil mentioned that Mercosur was effectively working on this kind of taxonomy.

While Czechia, Switzerland, and Japan underlined the necessity to focus first on the implementation and consolidation of existing CBMs, many states were nevertheless in favour of additional CBMs: the protection of critical infrastructure (Switzerland, Colombia, Malaysia, Pakistan, Fiji, the Netherlands, Singapore and Czechia) as well as coordinated vulnerability disclosure (Singapore, the Netherlands, Switzerland, Mauritius, Colombia, Malaysia and Czechia). The integration of multiple stakeholders into the development of CBMs was also considered by some states and organisations (the EU, Chile, Albania, Argentina), while adding public-private partnerships as a CBM received broad support from Kazakhstan, Qatar, Switzerland, South Africa, Mauritius, Colombia, Malaysia, Pakistan, South Korea, the Netherlands, and Singapore.

All states recalled and praised the significance of regional and subregional cooperation in the implementation of CBMs regionally and how it can contribute to the development of CBMs globally. In that respect, most states highlighted enriching initiatives at a cross-regional level, such as a recent side event at the German House. Work within the OAS, the OSCE, ASEAN, the Pacific region, and the African Union was underlined. Interventions were enriched by the sharing of national experiences, most notably Kazakhstan’s and France’s recent use of the OSCE community portal for PoCs.

Finally, states highlighted the link between CBMs and capacity building, with Ghana, Djibouti, and Fiji sharing their national experiences in closing the digital divide. In that vein, Argentina, Iran, Pakistan, Djibouti, Botswana, Fiji, Chile, Thailand, Ethiopia, Mauritius, and Colombia support creating a specific CBM on capacity building.

Capacity building: bolstering efforts and funding

Several noteworthy proposals were put forth by different countries, each aiming to bolster capacity building efforts. The Philippines introduced a comprehensive ‘Needs-Based Capacity Building Catalogue,’ designed to help member states identify their specific capacity needs, connect with relevant providers, and access application guidance for capacity building programmes.

A scheme of the Philippine proposal. Source: UNODA.

Kuwait proposed an expansion of the Global Cybersecurity Cooperation Portal (GCSE), suggesting the addition of a module dedicated to housing both established and proposed norms, thus facilitating collaboration among member states and tracking the implementation progress of these norms. India’s CERT expressed willingness to develop an awareness booklet on ICT and best practices with the contribution of other delegations, intending to post it on the proposed GCSE for widespread dissemination.

The crucial issue of funding for capacity building received substantial attention during the discussions, with multiple delegations bringing to the fore the need for additional resources to sustainably support such efforts. Uganda advocated establishing a UN voluntary fund targeting countries and regions most in need. In contrast, others stressed the imperative of exploring structured avenues within the UN framework for resource mobilisation and allocation. 

On the foundational capacities of cybersecurity, an emphasis was placed on developing ICT policies and national strategies, enhancing societal awareness, and establishing national cybersecurity agencies or CERTs.

Furthermore, the importance of self-assessment tools for improving states’ participation in capacity building programmes was emphasised. Pakistan proposed implementing checklists and frameworks for evaluating cybersecurity readiness and identifying gaps. Rwanda advocated for reviews based on the cybersecurity capacity maturity model (CMM) to achieve varying levels of capacity maturity. The discussions also commended existing initiatives, such as the Secretariat’s mapping exercise, and emphasised the need for a multistakeholder approach in capacity building efforts. Finally, Germany highlighted the significant contributions of organisations in creating gender-sensitive toolkits for cybersecurity programming, underscoring the importance of incorporating gender perspectives in implementing the UN framework on cybersecurity.

Regular institutional dialogue: the fight for a single-track process

States are still divided on the issue of regular institutional dialogue. What they agree on is that there must be a singular process, its establishment must be agreed upon by consensus, and decisions it makes must be by consensus. 

France, one of the original co-sponsors of the PoA, delivered a presentation on the PoA’s future elements and organisation. Review conferences would be convened in the framework of the PoA every few years. The scope of these review conferences would include (i) assessing the evolving cyber threat landscape and the results of the initiatives and meetings of the mechanism, (ii) updating the framework as necessary, and (iii) providing strategic direction and a mandate or programme of work for the PoA’s activities. The periodicity would need to be defined so as not to burden delegations, especially those from small and developing countries. However, the PoA would need to keep up with the rapid evolution of technology and of the threat landscape.

The PoA would also include open-ended plenary discussions to (i) assess progress in the implementation of the framework, (ii) take forward any recommendations from these modalities, (iii) discuss ongoing and emerging threats, and (iv) provide guidance for open-ended technical meetings and practical initiatives. Intersessional meetings could also be convened if necessary.

Furthermore, four modalities would feed discussions on the implementation of the framework: capacity building, voluntary reporting by states, practical initiatives, and contributions from the multistakeholder community. The PoA could leverage existing and potential capacity building efforts in order to increase their visibility, improve their coordination, and support the mobilisation of resources. The review conferences and the discussions would then provide an opportunity to exchange views on ongoing capacity building efforts and identify areas where additional action is needed. Voluntary reporting by states could be based either on a new reporting system or on the promotion of existing mechanisms. The PoA would contain, enable, and deepen practical initiatives, building on existing initiatives and developing new ones when necessary. It would also enable engagement and collaboration with the multistakeholder community.

France also noted that a cross-regional paper to build on this proposal will be submitted at the next session.

Multiple delegations expressed support for the PoA, including the EU, the USA, the UK, Canada, Latvia, Switzerland, Côte d’Ivoire, Croatia, Belgium, Slovakia, Czechia, Israel, and Japan.

The Russian Federation, the country that originally suggested the OEWG, is the biggest proponent of its continuation. Russia cautioned against making decisions by a majority in the General Assembly, noting that such an approach would not be met with understanding by member states, first and foremost developing countries, which long fought for the opportunity to directly partake in negotiations on the principles governing information security. Russia stated that after 2025, a permanent OEWG with a decision-making function should be established. Its pillar activity would be crafting legally binding rules, which would serve as elements of a future universal agreement on information security. The OEWG would also adapt international law to the ICT sphere, strengthen CBMs, launch mechanisms for cooperation, and establish funding programmes for capacity building. Belarus, Venezuela, and Iran are also in favour of another OEWG.

A number of countries didn’t express support for either the PoA or the OEWG but noted some of the elements the future mechanism should have.

Similarly to Russia, China noted that the future mechanism should implement the existing framework but also formulate new norms and facilitate the drafting of legal instruments. The Arab Group noted that the future mechanism should develop the existing normative framework to achieve new legally binding norms. Indonesia also noted the mechanism should create rules and norms for a secure and safe cyberspace.

Latvia and Switzerland noted that the mechanism must focus on the implementation of the existing framework. However, Switzerland and the Arab Group noted that the mechanism could identify gaps in the framework and could develop the framework further.

Many delegations, such as South Africa, Bangladesh, the Arab Group, Switzerland, Indonesia, and Kenya, noted that capacity building must be an integral part of the regular mechanism.

States also expressed opinions on which topics should be discussed under the permanent mechanism. Malaysia, South Africa, Korea, and Indonesia stated that the topics under the mechanism should be broadly similar to those of the OEWG. The UK, Latvia, and Kenya stated it should discuss threats, while Bangladesh outlined the following emerging threats: disinformation campaigns, including deepfakes; quantum computing; AI-powered hacking; and the use of ICTs for malicious purposes by non-state actors.

South Africa highlighted that discussion on voluntary commitments, such as norms or CBMs, should be developed without prejudice to the possibility of a future legally binding agreement. The UK noted that the mechanism should also discuss international law.

States also discussed the operational details of the future mechanism. For instance, Egypt suggested that the future mechanism hold biennial meetings, with review conferences convened every six years and intersessional meetings or informal working groups as may be decided by consensus. The future mechanism should ensure the operationalisation and review of established cyber tools, including the POC directory and all other proposals to be adopted by the current OEWG. Sri Lanka noted that the sequence of submitting progress reports, be it annual or biennial, should correspond with the term of the Chair and its Bureau.

Brazil suggested a moratorium on First Committee resolutions until the end of the OEWG’s mandate to allow member states to focus on their efforts in the OEWG. This suggestion was supported by El Salvador, South Africa, Bangladesh, and India.

Dedicated stakeholders session

The dedicated stakeholder session allowed ten stakeholders to share their expertise within the substantive session. 

The stakeholders addressed the topics of CII protection and AI (Center for Excellence of RSIS); norms I and J, supply chain vulnerabilities, and addressing the threat lifecycle (Hitachi); and the role of youth and the importance of the youth perspective as a possible area of thematic interest of the OEWG (Youth for Privacy). The topics of AI and supply chain management were echoed in SafePC Solutions’ statement, while the Centre for International Law (CIL) at the National University of Singapore focused on the intersection of international law and the use of AI.

Chatham House shared its research on the proliferation of commercial cyber intrusion tools, among other topics, and on the Pall Mall Process, launched by the UK and France. Access Now focused on intersectional harms caused by malicious cyber threats, issues of surveillance, and norms E and J. Building on the Chatham House and Access Now remarks, the Paris Peace Forum focused its intervention on the commercial proliferation of cyber-intrusive and disruptive cyber capabilities, and on possible helpful steps states could undertake in the short term.

DiploFoundation focused on the responsibility of non-state stakeholders in cyberspace and shared the Geneva Manual on responsible behaviour in cyberspace. The Nuclear Age Peace Foundation, in its statement, connected cybersecurity concerns with safeguarding weapons systems and the importance of secure software, while the National Association for International Information Security focused on the need to interpret the norms of state behaviour.

What’s next?

The OEWG’s schedule for 2024 is jam-packed: in mid-April, the chair will revise the discussion papers circulated before the 7th session. On 9 May, the POC directory will be launched, followed by a global roundtable meeting on ICT security capacity-building on 10 May 2024. A dedicated intersessional meeting will be held on 13-17 May 2024.

Looking ahead to the second half of 2024, the 8th and 9th substantive sessions are planned for 8-12 July and 2-6 December 2024. A simulation exercise for the POC directory is also on the schedule, along with the release of capacity-building materials by the Secretariat, including e-learning modules.

Decision postponed on the Cybercrime Convention: What you should know about the latest session of the UN negotiations

The UN’s Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of ICTs for Criminal Purposes, aka the Ad Hoc Committee on Cybercrime, convened in New York for a culminating session held from 29 January to 9 February 2024, marking the end of two years of negotiations. The Ad Hoc Committee (AHC) was tasked with drafting a comprehensive cybercrime convention. However, as the final session started, there were no signs of significant progress: member states couldn’t agree on key issues such as the scope of the convention. As a result, the delegations required more time to discuss the content and wording of the draft convention and decided to hold additional meetings. Though some delegations, such as China and the US, offered financial support for more meetings, several states, such as El Salvador, Uruguay, and Liechtenstein, pointed out the strain these additional meetings would put on their resources.


The chair initially split negotiations into two tracks: formal sessions and informal meetings behind closed doors. The informal meetings seem to have focused on more sensitive issues, such as the scope and human rights-related provisions, and were extremely intense, causing the regular sessions to start late. This also resulted in less transparency in negotiations and excluded the multistakeholder community from contributing.

In the last days of the concluding sessions, there was increased pressure from civil society and the industry, as well as cybersecurity researchers.

“There are fears that if the UN Ad Hoc Committee does not conclude with a convention, it could be considered a failure of multilateral diplomacy. However, in my opinion, the real fiasco of diplomatic efforts to address the problem of cybercrime would happen if the states adopt a treaty that significantly waters down human rights obligations and legitimises the use of criminal justice for oppression and persecution.” 

Dr. Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University

The comments provided are personal opinions and are not representative of the organisation as a whole.

So, what happened?

Here are the issues with the draft convention that need to be resolved:

Scope of the convention and criminalisation 

One of the main unresolved points remains the question of whether the cybercrime convention should be limited to core cybercrimes or should cover all crimes committed via ICTs. This divide translated into a lengthy discussion on the name of the convention itself, as well as on Article 3 (scope of application) of the draft convention.

In relation to the scope of application, delegations discussed Canada’s proposal, which received support from 66 states. The proposal suggests broad wording of the actions that may fall within the scope of the convention, as well as adding Article 3.3 to ensure that the convention doesn’t permit or facilitate ‘repression of expression, conscience, opinion, belief, peaceful assembly or association; or permitting or facilitating discrimination or persecution based on individual characteristics’.

The Russian Federation continued expressing the view that the AHC hadn’t fully implemented the mandate outlined in Resolution 74/247, which established the committee, and that the scope of the convention should include broader measures to combat ‘the spread of terrorist, extremist, and Nazi ideas with the use of ICTs’. Russia further highlighted that ‘many articles are simply copied from treaties that are 20 years old’ and that the revised text doesn’t include efforts to agree on investigation procedures or to create platforms and channels for law enforcement cooperation.

In the same vein, Iran, Egypt, and Kuwait view the primary mandate of the AHC as elaborating a comprehensive international convention on the use of ICTs for criminal purposes, and see the inclusion of human rights provisions and detailed international collaboration as duplicating already existing international treaties.

Representatives from civil society, private entities, and academia also shared feedback on the scope, stressing the importance of limiting the convention’s scope and implementing strong human rights protections. They expressed concerns about the convention’s potential to undermine cybersecurity, jeopardise data privacy, and diminish online rights and freedoms.

Discussing additional provisions in the criminalisation chapter, delegations were deadlocked over specific terms. For instance, concerning Article 6(2), 7(2), and 12, Russia, with support from several delegations, proposed replacing ‘dishonest intent’ with a more specific term. Russia’s representative argued that ‘dishonest’ is not a legal term, thus making it challenging for countries to implement or clarify it in domestic legislation. However, the UK, US, and EU opposed this change. Austria, in particular, explained that ‘dishonest intent’ provides clear criteria for identifying when conduct constitutes an offence, offering flexibility across various legal systems. 

Human rights and safeguards 

Human rights (Article 5) and safeguards (Article 24) have been a difficult topic for delegations from day one. Some delegations, such as Iran, argued that the cybercrime treaty is not a human rights treaty, suggesting a model akin to the UN Convention against Corruption (UNCAC), which omits explicit human rights references. As reported earlier, this didn’t find support from many other delegations.

Egypt and other delegations also expressed confusion over the repetitive nature of certain human rights provisions within the text, emphasising the redundancy of similar mentions occurring five or six times. 

Additionally, Egypt raised concerns about Article 24 and questioned why the principle of proportionality was singled out from other legal principles recognised under international law. Egypt pointed out the challenge of applying proportionality when different countries have varying legal provisions, such as the death penalty. Pakistan supported Egypt, while Brazil suggested appending ‘legality’ to the principle of proportionality, so that both the principles of legality and proportionality are included. Ecuador expressed support for Brazil’s proposal.

As a result, both articles remain without agreed text in the further revised draft text of the convention.

There was no consensus regarding the articles on online sexual abuse (Article 13) and non-consensual distribution of intimate images (Article 15). Delegations tried to find a balance between protecting privacy and criminalising the sharing of intimate images without consent. Many felt the convention should be flexible to accommodate different laws and international human rights agreements. There was debate about whether to stick with the Convention on the Rights of the Child’s (CRC) definition or use a different one. The US worried the CRC’s definition didn’t fit cybercrimes well and might lead to inconsistent interpretations that wouldn’t adequately protect children under Article 13. 

Transfer of technology and technical assistance

The transfer of technology appears twice in Article 1 (statement of purpose) and Article 54 (technical assistance and capacity-building). The group of African countries strongly advocated for keeping a reference to the transfer of technology in both articles, including in Article 1, paragraph 3. 

Russia, Syria, Namibia, India, Senegal, and Algeria supported this, while the US opposed it, calling for the reference to be kept in Article 54 only. The EU, Israel, Norway, Canada, Albania, and the UK supported the US.

With Article 54, more or less the same groups of states had further disagreements. The US, Israel, the EU, Norway, Switzerland, and Albania supported inserting ‘voluntary’ before ‘where possible’ and ‘on mutually agreed terms’ in the context of how capacity building shall be provided between states in Article 54(1). Most African countries, as well as Iran, Iraq, Cabo Verde, Colombia, Brazil, and Pakistan, opposed the proposal because it would undermine the provision’s purpose of ensuring effective assistance to developing countries. With the goal of reaching consensus on Article 54(1), the US withdrew its proposal, retaining ‘where possible’ and ‘on mutually agreed terms’. In the revised draft text of the convention, these paragraphs remain open for further negotiations between delegations.

“As offenders, victims and evidence are often located in different jurisdictions, investigations will typically require international coordinated law enforcement action. This means that gaps in the capacity of one country can severely undermine the safety of communities in other countries. Technical assistance and capacity-building are key tools to address this challenge. However, to have a real-world impact, the future Convention needs to recognize that addressing the needs of the diverse actors involved in combating [the criminal use of ICTs] [cybercrime] will require various forms of specialized technical assistance, which no single organization can provide. Even within countries, the various actors involved in combating [the criminal use of ICTs] [cybercrime] – including legislators, prosecutors, law enforcement, national Computer Emergency Response Teams (CERTs) – may have very different technical assistance needs.”

Director Craig Jones, INTERPOL Cybercrime Programme

Scope of international cooperation

Delegations expressed opposing views on provisions related to cooperation on electronic evidence and didn’t reach consensus. The discussion included Articles 35(1)(c), 35(3), and 35(4), which deal with the general principles of international cooperation and e-evidence. The draft convention allowed countries to collect data across borders without prior legal authorisation. However, many delegations could not agree on this.

In particular, New Zealand, Canada, the EU, Brazil, the USA, Argentina, Uruguay, Singapore, Peru, and others expressed concerns, fearing that the current draft of Article 35 would allow an excessively broad application, potentially leading to the pursuit of non-criminal activities. These states noted that the previous draft allowed national law to determine what constitutes criminal conduct, and pointed out the need to differentiate between serious crimes and lesser offences, the need for safeguards and guardrails on the power of states to limit the possibility of repression and the implementation of intrusive and secret mechanisms, and the need to ensure the protection of human rights. On the other hand, states like Egypt, Saudi Arabia, Iran, Iraq, Mauritania, Oman, and others called for the deletion of Article 35(3) altogether.

Additionally, New Zealand suggested including a non-discrimination clause in Article 37(15) on extradition to prevent unfair grounds for refusing cooperation. This would ensure consistency across the entire chapter on international cooperation. However, member states couldn’t agree on the language and left this open. 

Within the international cooperation chapter, delegations spent quite a bit of time discussing terms: in particular, in Articles 45 and 46, the debates centred around the use of ‘shall’ vs ‘may’. The EU and other delegations advocated for changing ‘shall’ to ‘may’ in those articles to give states the option, but not the obligation, to cooperate. This proposal was met with mixed reactions, with some delegations, including Egypt and Russia, preferring to retain ‘shall’ to ensure robust international cooperation, arguing that the change would undermine the effectiveness of cooperation between states. So far, the further revised draft text of the convention includes both options in brackets.


Preventive measures 

Another term which created some confusion across several delegations was the use of ‘stakeholders’ in Article 53, where preventive measures are discussed and paragraph 2 highlights that ‘States shall take appropriate measures […] to promote the active participation of relevant individuals and stakeholders outside the public sector, such as non-governmental organizations, civil society organizations, academic institutions and the private sector, as well as the public in general, in the prevention of the offences covered by this Convention’. Egypt, in particular, called to remove the word ‘stakeholders’ unless it is clearly defined. The US didn’t support this proposal. The further revised draft text of the convention now reads ‘relevant individuals and entities […]’, but the paragraph hasn’t been agreed yet.

In the same article, in paragraph 3(h), where ‘gender-based violence’ is mentioned and strategies and policies to prevent it are called for, states couldn’t reach an agreement. The first group of states, including the USA, Iceland, Australia, Vanuatu, and Costa Rica, advocated for keeping the provision. Other delegations, such as Iran, Namibia, Saudi Arabia, and Russia, among others, proposed deleting the term ‘gender-based’ and keeping only ‘violence’. In the end, this part remained as is, with the term ‘gender-based violence’ retained and the chair emphasising that this article is not obligatory, as it says that preventive measures may include such strategies.

Another notable example of where states had opposing views was Article 41 on the 24/7 network, a point of contact designated at the national level and available 24 hours a day, 7 days a week, to ensure the provision of immediate assistance for the purposes of the convention. India proposed new duties for the 24/7 network, explaining that prevention should be among them. They particularly stressed that ‘if the offence is not prevented and it occurs, States would be needing multiple times the resources that they saved in the process of evidence collection, prosecution, extradition, and so on. So it’s better to prevent rather than to spend multiple times the same resources that States are trying to save in going through the whole process of criminal justice’. Russia, Kazakhstan, and Belarus supported this proposal, while the US, the UK, Argentina, the EU, and Canada didn’t.

So, what’s next?


As mentioned earlier, the delegates managed to agree on just one major item: to postpone the final decision. The chair’s further revised draft text of the convention is available on the AHC’s website, and new dates for more meetings should be announced soon.

Does this mean that delegations are close to reaching a consensus over a landmark cybercrime convention before the UN General Assembly? Hardly so, but these two weeks have also demonstrated that many (though less fundamental compared to the scope of application) open issues have been resolved behind closed doors, and there is still a chance that intense non-public negotiations between delegations could speed up the process.

We will continue to monitor the negotiations; in the meantime, discover more through our detailed reports from each session, generated by DiploAI.

The perfect cryptostorm

To fully understand the incredible story behind the cryptocurrency and blockchain craze of 2017-2021, we must explain the unique setting in which events played out, setting the course for the collision. One component amplified the other, multiplying the effect, thus creating a perfect cryptostorm. Unfortunately, that storm took a toll on trust in the industry and caused financial losses.

The cryptocurrency industry is a one-hit wonder. But what a wonder that is! Bitcoin is a true marvel of the human engineering of money. It has withstood the test of time, becoming the worldwide recognised use case for digital gold. We witnessed newly coined terms such as ‘crypto-rich’. In response, a whole new payment industry emerged, forged by the desire of legacy financial organisations to stay relevant in the new era.

Moreover, alongside the new fast digital payment industry, which was delivering miracles for the financial inclusion of the unbanked, the retail investing industry brought a new form of capital inflow. The emergence of online trading companies, backed mainly by larger institutional investors, was recognised as a risk for retail users and overall consumer protection.

Unanswered risks, the new hype around the change in the financial industry, and the emergence of inexperienced investors were the ingredients for the perfect storm in the cryptocurrency industry. Add human greed to that mixture and it becomes the perfect cryptostorm.


The necromancers that summoned this cryptostorm are quite vividly depicted in the latest Netflix documentary drama, ‘Bitconned’, which aired this January after two years of production. In 2017, the company Centra Tech raised USD 25 million in investments for its main product: a VISA-backed credit card allowing people to spend their cryptocurrency at any retail store across the USA.
Centra Tech’s CEO, CTO, and other executives boasted Harvard Business School backgrounds or MIT engineering degrees. The new headquarters in downtown Miami was full of young, bright people, and 20,000 VISA cards were produced. However, none of this was real. Everything was a (not so cleverly) staged mirage.

The court case concluded in 2021, handing jail sentences to the people involved. The documentary is led by one of the three prominent persons behind Centra Tech, Ray Trapani, who collaborated with the federal investigation of the case. In the film, he explained in detail how two young scammers working at a car rental company raised millions in an ICO with only a one-page website.
Once it started, the storm did not calm down for years. The story of Centra Tech from 2017 was replicated time and time again, culminating in the collapse of what was at the time the world’s second-largest company in the industry: FTX, an online cryptocurrency exchange. As publicly presented court evidence in the cases against Celsius, Luna, and FTX shows, the crypto companies spent funds custodied for their investors.

Screenshot from the Netflix documentary film ‘Bitconned’

How did crypto scam companies utilise the above ingredients?

By promising the right thing at the right moment. Internet users had witnessed the financial sector’s transformation and bitcoin’s success. They could easily be convinced that a new decentralised finance infrastructure was on the verge of emerging, helped along by the lack of a regulatory framework, while being given a fair chance to participate in the industry’s beginnings and become the new crypto millionaires, which was the main incentive for many. If the people behind the open-source cryptocurrency (bitcoin) could create the ‘internet of information’, the next generation of cryptocurrency engineers would surely deliver the ‘internet of money’. However, again, it was false. It was, in fact, a carefully worded money-grabbing experiment.

All the above ideas still stand as a goalpost for further industry developments. Moreover, we must admit that the initial takeover of the industry by scammers, fraudsters, and, in some cases, straightforward sociopaths will taint the forthcoming period of developments in this industry.

In contrast to bitcoin, the creators of almost all cryptocurrencies that came later were incentivised by the financial benefits of ‘tokenisation’ rather than by secure and trustworthy technology. The term tokenisation was supposed to describe the emergence of fast-exchanging digital information (tokens) that could help trade digital products and services, promising the possibility of a creators’ economy, micropayments, or unique digital objects. In reality, however, it merely copied analogue objects to the digital world and charged money for that service. Stocks, bonds, tin cans, energy prices, cloud storage, and dental appointments were all promised to be tokenised, while the term ‘blockchain’ was the ultimate hype word. People soon realised that not all digital artefacts have value solely by being placed on a blockchain. That was the case even for projects that honestly intended to build the product (a token or cryptocurrency) rather than just sell vapourware and go permanently offline the moment they got busted. As with any other technology, time will show the most efficient and rational use of blockchain.

Could this happen again for online financial services? 

Chances are meagre, and certainly not on this scale. Financial agencies worldwide have prepared a set of comprehensive laws and authorities to detect such fraudulent companies much faster and more efficiently. Financial regulations are now negotiated with much more success on a global scale. Intergovernmental financial organisations and their bodies have equipped regulators with the tools to comprehend how the technology works and what can be done on the consumer protection side. The users, too, have had their fair share of schooling. Once bitten, twice shy.

For any other technology developed and utilised mainly online, the chances are always there. Users can now easily be engaged directly, via a mobile app, by companies that promise the next technological innovation. All the companies have to do is carefully word our societal dreams into their product descriptions.

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2

Read the first part of the blog series: The intellectual property saga: The age of AI-generated content | Part 1.

The European Union (EU) reached a historic provisional agreement in 2023 on the world’s first comprehensive set of rules to regulate AI, which will become law once adopted by the EU Parliament and Council. The legislation, known as the AI Act, sets a new global benchmark for countries seeking to harness the potential benefits of AI while trying to protect against the possible risks of using it. While much of the attention was given to parts such as biometric categorisation and national security, among others, the AI Act will also give new guidance on AI and copyright.

The AI Act takes a nuanced stance on copyright and transparency, requiring transparency about training data without demanding exhaustive lists of copyrighted works: a summary of the data collections used suffices, easing the burden on AI providers. Nonetheless, uncertainties persist about foundation model providers’ obligations under copyright law. While the AI Act stresses compliance with existing regulations, including the Digital Single Market Directive, it raises concerns about applying EU rules to models developed globally, potentially fostering regulatory ambiguity for developers.

In one of its previous blogs, the Digital Watch Observatory elucidated the relationship between AI-generated content and copyright. The analysis showed how traditional laws struggle to address AI-generated content, raising questions of ownership and authorship. Various global approaches – denying AI copyright, attributing it to humans – highlight the complexity.

This part delves into the influence of AI on intellectual property rights, assessing the ramifications of AI for trade secrets and trademarks, with a focus on examples from the EU and US legal frameworks.

Trade Secrets and AI Algorithms

Within the realm of AI and intellectual property, trademarks and trade secrets present unique challenges and opportunities that require special attention in the evolving legal landscape. As AI systems often require extensive training datasets and proprietary algorithms, determining what constitutes a protectable trade secret becomes more complex. Companies must navigate how to safeguard their AI-related innovations, including the datasets used for training, without hindering the collaborative nature of AI development.

Trade secret laws may need refinement in order to address issues like reverse engineering of AI algorithms and the accidental disclosure of sensitive information by AI systems. However, given the limitations associated with patenting and copyrighting AI-related content, trade secret principles seem to present an alternative, at least in the USA. Patents necessitate a demonstrated utility disclosed in the application, while trade secrets lack this requirement. Trade secrets cover a broader range of information without the immediate need to disclose utility. In addition, trade secret law allows information created by an AI system to be protected, even if the creator is not an individual. This differs from patent law, which requires a human inventor listed on the application. 


Trade secrets, traditionally associated with formulae and confidential business information, now extend to AI algorithms and proprietary models. Safeguarding these trade secrets is critical for maintaining a competitive edge in industries in which AI plays a pivotal role. In the USA, trade secret law safeguards a broad spectrum of information, encompassing financial, business, scientific, technical, economic, or engineering data, as long as the owner has taken reasonable measures to maintain its secrecy, and the information derives value from not being widely known or easily accessible through legitimate means by others who could benefit from its disclosure or use (as defined in 18 U.S.C. §1839(3)). It is important, however, to consider that patent owners have a monopoly on the right to make, use, or sell the patented invention. In contrast, owners of AI-based trade secrets face the risk of competitors reverse engineering the trade secret, which is permitted under US trade secret law.

Requirements related to secrecy exclude trade secret protection for AI-generated outputs that are not confidential, such as those produced by systems like ChatGPT or Dall·E. Nevertheless, trade secret laws seem flexible enough to safeguard various AI-related assets, including training data, AI software code, input parameters, and AI-generated content intended solely for internal and confidential purposes. Importantly, there is no stipulation that a trade secret must originate with a human being, and AI-generated material is treated like any other form of information, as evident in 18 U.S.C. §1839(4), which defines trade secret ownership.

Given that traditional laws seem to provide ambiguous guidance on AI and copyright, numerous AI innovators opt for trade secret protections rather than patents to safeguard their AI advancements, as these innovations in commercial use frequently remain concealed and difficult for others to detect. With the AI Act soon to become law, the EU will likely require disclosure of how AI innovations operate, categorising them as limited or high risk. As a consequence, trade secret safeguards may no longer be viable in some instances.

Establishing clear guidelines for what qualifies as a trade secret in the AI domain, and defining the obligations of parties involved in AI collaborations will be essential for fostering innovation while ensuring the protection of valuable business assets.

Trademarks and Branding in the AI Era


The integration of AI technologies into product and service offerings has also reshaped the landscape of trademark protection, presenting both challenges and opportunities for businesses. Traditionally associated with logos, brand names, and distinctive symbols, trademarks now extend their scope to encompass AI-generated content, virtual personalities, and unique algorithms associated with a particular brand. As companies increasingly rely on AI for customer interactions, the challenge of maintaining brand consistency in automated, AI-powered engagements becomes paramount. In the realm of AI-driven customer service and chatbots, the traditional understanding of the ‘average consumer’ in trademark infringement cases undergoes transformation. When an AI application acquires a product with minimal or no human involvement, determining who, or more crucially, what constitutes the average consumer becomes a pertinent question. Likewise, identifying responsibility for a purchase that results in trademark infringement in such scenarios becomes complex.

While there have been no known cases directly addressing the issue of AI and liability in trademark infringement, there have been several cases within the past decade adjudicated by the Court of Justice of the European Union (CJEU) that could offer insights into the matter when considering this new technology. For instance, the Louis Vuitton vs Google France decision focused on keyword advertising and the automatic selection of keywords in Google’s AdWords system. It concluded that Google wouldn’t be accountable for trademark infringement unless it actively participated in the keyword advertising system. Similarly, the L’Oréal vs eBay case, which revolved around the sale of counterfeit goods on eBay’s online platform, determined that eBay wouldn’t be liable for trademark infringement unless it had clear awareness of the infringing activity. A comparable rationale was applied in the Coty vs Amazon case. 

It would seem that if a provider of AI applications implemented adequate takedown procedures and had no prior knowledge of infringing actions, it would likely not be held responsible for such infringements. However, these cases indicate that when the AI provider plays a more active role in potentially infringing actions, it could be held accountable.

In Cosmetic Warriors Ltd and Lush Ltd vs Amazon.co.uk Ltd and Amazon EU Sarl, decided by the United Kingdom High Court in 2014, Amazon was found liable for trademark infringement. Amazon had used ads on Google mentioning ‘lush’ to bring people to its UK website, where, Lush claimed, Amazon broke trademark rules by showing ‘LUSH’ in ads and search results for similar products without stating that Lush items were not available on Amazon. The Court explained that consumers were unable to discern whether the products being offered for sale were those of the brand owner, illustrating that the evolving definition of the average consumer and the delineation of responsibility in trademark infringement cases involving AI require nuanced legal consideration.


Conclusion

As AI continues to impact various industries, the ongoing evolution of intellectual property laws will play a pivotal role in defining and safeguarding AI innovations, underscoring the need for adaptable regulations that balance innovation and protection. The intersection of AI and intellectual property introduces novel challenges and opportunities, necessitating a thoughtful and adaptive legal framework. One crucial aspect involves the recognition and protection of AI-generated innovations. Traditional IP laws, such as patents, copyrights, and trade secrets, were designed with human inventors in mind. However, the autonomous and generative nature of AI raises questions about the attribution of authorship and inventorship. Legal systems will need to address whether AI-generated creations should be eligible for patent or copyright protection and, if so, how to attribute ownership and responsibility. This demands a forward-thinking approach from policymakers, legal scholars, and industry stakeholders to craft a legal landscape that not only accommodates the transformative potential of AI, but also safeguards the rights, responsibilities, and interests of all parties involved.

AI industry faces threat of copyright law in 2024

Copyright laws are set to pose a substantial challenge to the artificial intelligence (AI) sector in 2024, particularly after generative AI (GenAI) technologies became pervasive in 2023. At the heart of the matter lie concerns about the use of copyrighted material to train AI systems and the generation of outputs that may be substantially similar to existing copyrighted works. Legal battles are predicted to affect the future of AI innovation and may even change the industry’s economic models and overall direction.
According to tech companies, the lawsuits could create massive barriers to the expanding AI sector. On the other hand, the plaintiffs claim that the firms owe them payment for using their work without fair compensation or authorization.

Legal Challenges and Industry Impact

AI programs that generate outputs comparable to existing works could infringe on copyrights if they had access to those works and produced substantially similar outcomes. In late December 2023, the New York Times became the first American news organization to file a lawsuit against OpenAI and its backer Microsoft, asking the court to order the destruction of all large language models (LLMs), including the famous chatbot ChatGPT, and all training datasets that rely on the publication’s copyrighted content. The news organization alleges that their AI systems engaged in ‘widescale copying’, in violation of copyright law.
This high-profile case illustrates the broader legal challenges faced by AI companies. Authors, creators, and other copyright holders have initiated lawsuits to protect their works from being used without permission or compensation.

As recently as 5 January 2024, authors Nicholas Basbanes and Nicholas Gage filed a new complaint against both OpenAI and its investor, Microsoft, alleging that their copyrighted works were used without authorization to train the companies’ AI models, including ChatGPT. In the proposed class action complaint, filed in federal court in Manhattan, they charge the companies with copyright infringement for including multiple works by the authors in the datasets used to train OpenAI’s GPT large language model (LLM).


This lawsuit is one among a series of legal cases filed by multiple writers and organizations, including well-known names like George R.R. Martin and Sarah Silverman, alleging that tech firms utilised their protected work to train AI systems without offering any payment or compensation. The results of these lawsuits could have significant implications for the growing AI industry, with tech companies openly warning that any adverse verdict could create considerable hurdles and uncertainty.

Ownership and Fair Use

Questions about who owns the output generated by AI systems—whether it is the companies and developers that design the systems or the end users who supply the prompts and inputs—are central to the ongoing debate. The ‘fair use’ doctrine, often cited by the United States Copyright Office (USCO), the United States Patent and Trademark Office (USPTO), and the federal courts, is a critical factor, as it allows creators to build upon copyrighted work. However, its application to AI-generated content, with models using massive datasets for training, is still being tested in the courts.

Policy and Regulation

The USCO has initiated a project to investigate the legal and policy challenges that AI poses to copyright. This involves evaluating the scope of copyright in works created with AI tools and the use of copyrighted content in training foundational models and LLM-powered AI systems. This endeavour is an acknowledgement of the need for clarification and future regulatory adjustments to address the pressing issues at the intersection of AI and copyright law.

Industry Perspectives

Many stakeholders in the AI industry argue that training generative AI systems, including LLMs and other foundational models, on the large and diverse content available online, most of which is copyrighted, is the only realistic and cost-effective method to build them. According to the Silicon Valley venture capital firm Andreessen Horowitz, extending copyright rules to AI models would potentially constitute an existential threat to the current AI industry.

Why does it matter?

The intersection of AI and copyright law is a complex issue with significant implications for innovation, legal liability, ownership rights, commercial interests, policy and regulation, consumer protection, and the future of the AI industry.

The AI sector in 2024 is at a crossroads with existing copyright laws, particularly in the US. The legal system’s reaction to these challenges will be critical in striking the correct balance between preserving creators’ rights and promoting AI innovation and progress. As lawsuits proceed and policymakers engage with these issues, the AI industry may face significant pressure to adapt, depending on the legal interpretations and policy decisions that will emerge from the ongoing processes. Ultimately, these legal fights could determine who the market winners and losers would be.

The intellectual property saga: The age of AI-generated content | Part 1

Read the second part of the blog series: The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2.

As AI advances rapidly, machines are increasingly gaining human-like skills, blurring the distinction between humans and machines. Traditionally, computers were tools that assisted human creativity, with clear distinctions: humans had sole ownership and authorship. However, recent AI developments enable machines to independently perform creative tasks, including complex functions such as software development and artistic endeavours like composing music, generating artwork, and even writing novels.

This has sparked debates about whether creations produced by machines should be protected by copyright and patent laws. Furthermore, the question of ownership and authorship becomes complex: should credit be given to the machine itself, the humans who created the AI, the works the AI draws on, or perhaps none of the above?

This essay initiates a three-part series that delves into the influence of AI on intellectual property rights (IPR). To start off, we will elucidate the relationship between AI-generated content and copyright. In the following essays, we will assess the ramifications of AI for trademarks and patents, as well as the strategies employed to safeguard intellectual property (IP) in the age of AI.

Understanding IP and the impact of AI 

In essence, IP encompasses a range of rights aimed at protecting human innovation and creativity. These rights include patents, copyrights, trademarks, and trade secrets. They serve as incentives for people and organisations to invest their time, resources, and intelligence in developing new ideas and inventions. Current intellectual property rules and laws focus on safeguarding the products of human intellectual effort. 

Google recently provided financial support for an AI project designed to generate local news articles. Back in 2016, a consortium of museums and researchers based in the Netherlands revealed a portrait named ‘The Next Rembrandt’, an artwork created by a computer that had meticulously analysed numerous pieces crafted by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn. In principle, such works could be seen as ineligible for copyright protection due to the absence of a human creator. As a result, they might be used and reused without limitation by anyone. This situation could present a major obstacle for companies selling such creations: because the art is not protected by copyright laws, anyone worldwide can use it without having to pay for it.

Hence, when it comes to creations that involve little to no human involvement, the situation becomes more complex and blurred. Recent copyright rulings have taken two distinct approaches.

One approach is to deny copyright protection to works generated by AI (computers), potentially allowing them to become part of the public domain. This approach has been adopted by most countries and was exemplified in the 2022 DABUS case, which centred on an AI-generated image. The US Copyright Office supported this stance by stating that AI lacks the necessary human authorship for a copyright claim. Other patent offices worldwide have made comparable decisions, except in South Africa, where the AI machine Device for Autonomous Bootstrapping of Unified Sentience (DABUS) is recognised as the inventor, and the machine’s owner is acknowledged as the patent holder.

In Europe, the Court of Justice of the European Union (CJEU) has made significant declarations, as seen in the influential Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening). These declarations emphasise that copyright applies exclusively to original works, requiring that originality represents the author’s own intellectual creation. This typically means that an original work must reflect the author’s personal input, highlighting the need for a human author for copyright eligibility.

The second approach involved attributing authorship to human individuals, often the programmers or developers. This is the approach followed in countries like the UK, India, Ireland, and New Zealand. UK copyright law, specifically section 9(3) of the Copyright, Designs, and Patents Act (CDPA), embodies this approach, stating:

‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’


AI-generated content and copyright


This illustrates that the laws in many countries are not equipped to handle copyright for non-human creations. One of the primary difficulties is determining authorship and ownership when it comes to AI-generated content. Many argue that it’s improbable for a copyrighted work to come into existence entirely devoid of human input. Typically, a human is likely to play a role in training an AI, and the system may acquire knowledge from copyrighted works created by humans. Furthermore, a human may guide the AI in determining the kind of work it generates, such as selecting the genre of a song and setting its tempo, etc. Nonetheless, as AI becomes more independent in producing art, music, and literature, traditional notions of authorship become unclear. Additionally, concerns have arisen about AI inadvertently replicating copyrighted material, raising questions about liability and accountability. The proliferation of open-source AI models also raises concerns about the boundaries of intellectual property.

In a recent case, US District Judge Beryl Howell ruled that art generated solely by AI cannot be granted copyright protection. This ruling underscores the need for human authorship to qualify for copyright. The case stemmed from Stephen Thaler’s attempt to secure copyright protection for AI-generated artworks. Thaler, the Chief Engineer at Imagination Engines, has been striving for legal recognition of AI-generated creations since 2018. Furthermore, the US Copyright Office has initiated a formal inquiry, called a notice of inquiry (NOI), to address copyright issues related to AI. The NOI aims to examine various aspects of copyright law and policy concerning AI technology. Microsoft is offering legal protection to users of its Copilot AI services who may face copyright infringement lawsuits. Brad Smith, Microsoft’s Chief Legal Officer, introduced the Copilot Copyright Commitment initiative, in which the company commits to assuming legal liabilities associated with copyright infringement claims arising from the use of its AI Copilot services.

On the other hand, Google has submitted a report to the Australian government, highlighting the legal uncertainty and copyright challenges that hinder the development of AI research in the country. Google suggests that there is a need for clarity regarding potential liability for the misuse or abuse of AI systems, as well as the establishment of a new copyright system to enable fair use of copyright-protected content. Google compares Australia unfavourably to other countries with more innovation-friendly legal environments, such as the USA and Singapore.

Training AI models with protected content


Clarifying the legal framework of AI and copyright also requires further guidelines on the training data of AI systems. Training AI systems like ChatGPT requires vast amounts of data, such as text and images. During the training process, AI platforms identify patterns to establish guidelines, make assessments, and generate predictions, enabling them to provide responses to user queries. However, this training procedure may involve infringements of IPR, as it often relies on data collected from the internet, which may include copyrighted content.
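To illustrate the pattern-learning described above, here is a deliberately tiny, hypothetical sketch (real LLMs are vastly more complex): a bigram model that records which word follows which in its training text. Trained on a single sentence, it can only reproduce near-verbatim fragments of that sentence, which hints at why training on copyrighted text raises replication concerns.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for every word, the words observed to follow it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for w1, w2 in zip(words, words[1:]):
        model[w1].append(w2)
    return model

def generate(model, start, length=8):
    """Generate text by repeatedly sampling a recorded successor word."""
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

# A one-sentence 'corpus': every generated fragment echoes the source text.
corpus = "the quick brown fox jumps over the lazy dog"
model = train_bigrams(corpus)
print(generate(model, "brown"))  # e.g. 'brown fox jumps over the lazy dog'
```

With a training set this small, the ‘patterns’ and the copyrighted source are effectively indistinguishable; scale and diversity of data are what allow real models to generalise rather than merely copy.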

In the AI industry, it is common practice to construct datasets for AI models by indiscriminately extracting content and data from websites using software, a process known as web scraping. Data scraping is typically considered lawful, although it comes with certain restrictions. Taking legal action for violations of terms of service offers limited solutions, and the existing laws have largely proven inadequate in dealing with the issue of data scraping. In AI development, the prevailing belief is that the more training data, the better. OpenAI’s GPT-3 model, for instance, underwent training on an extensive 570 GB dataset. These methods, combined with the sheer size of the dataset, mean that tech companies often do not have a complete understanding of the data used to train their models.
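As a concrete and purely illustrative sketch of the text-extraction step in web scraping, the hypothetical `TextExtractor` below uses Python’s standard-library HTML parser to strip a page down to the visible text that might end up in a training corpus; production pipelines are far more elaborate and operate at crawl scale.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from an HTML page, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def page_to_corpus_text(html: str) -> str:
    """Reduce raw HTML to the plain text a scraper would add to a dataset."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

sample = "<html><head><script>var x=1;</script></head><body><h1>Title</h1><p>Body text.</p></body></html>"
print(page_to_corpus_text(sample))  # Title Body text.
```

Note that nothing in this step inspects licensing or copyright status: whatever text is on the page ends up in the output, which is precisely why indiscriminate scraping is legally fraught.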

An investigation conducted by the online magazine The Atlantic has uncovered that popular generative AI models, including Meta’s open-source Llama, were partially trained using unauthorised copies of books by well-known authors. This also includes models like Bloomberg’s BloombergGPT and GPT-J from the nonprofit EleutherAI. The pirated books, totalling around 170,000 titles published in the last two decades, were part of a larger dataset called the Pile, which was freely available online until recently.

In specific situations, reproducing copyrighted materials may still be permissible without the consent of the copyright holder. In Europe, there are limited and specific exemptions that allow this, such as for quotation and parody. Despite growing concerns about the use of machine learning (ML) in the EU, it is only recently that EU member states have started implementing copyright exceptions for training purposes. The UK’s 2017 independent AI review, ‘Growing the artificial intelligence industry in the UK’, recommended allowing text and data mining by AI through appropriate copyright laws. In the USA, access to copyrighted training data seems to be somewhat more permissive. Although US law doesn’t include specific provisions addressing ML, it benefits from a comprehensive and adaptable fair use doctrine that has proven favourable for technological applications involving copyrighted materials.

The indiscriminate scraping of data and the unclear legal framework surrounding AI training datasets and the use of copyrighted materials without proper authorisation have prompted legal actions by content creators and authors. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have filed lawsuits against OpenAI and Meta, alleging that their works were used without permission to train AI models. The lawsuits contend that OpenAI’s ChatGPT and Meta’s LLaMA were trained on datasets obtained from ‘shadow library’ websites containing copyrighted books authored by them.

Why does it matter?

In conclusion, as AI rapidly advances, it blurs the lines between human and machine creativity, raising complex questions regarding IPR. Legislators are facing a challenging decision – whether to grant IP protection or not. As AI continues to advance, it poses significant legal and ethical questions by challenging traditional ideas of authorship and ownership. While navigating this new digital frontier, it’s evident that finding a balance between encouraging AI innovation and protecting IPRs is crucial.

If the stance is maintained that IP protection only applies to human-created works, it could have adverse implications for AI development. This would place AI-generated creations in the public domain, allowing anyone to use them without paying royalties or receiving financial benefits. Conversely, if lawmakers take a different approach, it could profoundly impact human creators and their creativity.

Another approach could see AI developers guaranteeing adherence to data acquisition regulations, which might encompass acquiring licences or compensating rights holders for IP utilised during the training process.

One thing is certain: effectively dealing with IP concerns in the AI domain necessitates cooperation among diverse parties, including policymakers, developers, content creators, and enterprises.

Key takeaways from the sixth UN session on cybercrime treaty negotiations

The 6th session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention is over: from 21 August until 1 September 2023, delegates from all states gathered in New York to finish another round of text-based negotiations. This was the penultimate session before the final negotiation round in February 2024.

Stalled negotiations over scope and terminology

Well, reaching a final agreement does not seem to be easy. A number of Western advocacy groups and Microsoft publicly expressed their discontent with the current draft (updated on 1 September 2023), which, they stated, could be ‘disastrous for human rights’. At the same time, some countries (e.g. Russia and China) shared concerns that the current draft does not meet the scope established by the mandate of the committee. In particular, these delegations and their like-minded colleagues believe that the current approach in the chair’s draft does not adequately address the evolving landscape of information and communication technologies (ICTs). For instance, Russia complained about the secretariat’s alleged disregard for a proposed article addressing the criminalisation of the use of ICTs for extremist and terrorist purposes. Russia, together with a group of states (e.g. China, Namibia, Malaysia, Saudi Arabia, and others), also supported the inclusion of digital assets under Article 16 regarding the laundering of proceeds of crimes. The UK, Tanzania, and Australia opposed the inclusion of digital assets, arguing that it does not fall within the scope of the convention. Concerning other articles, Canada, the USA, the EU and its member states, and some other countries also wished to keep the scope narrower, and opposed proposals, in particular, for articles on international cooperation (i.e. 37, 38, and 39) that would significantly expand the scope of the treaty.

The use of specific words in each provision, considering the power behind them, is yet another issue that remains uncertain. Even though the chair emphasised that the dedicated terminology group continues working to resolve the issues over terms and propose some ideas, many delegations have split into at least two opposing camps: whether to use ‘cybercrime’ or ‘the use of ICTs for malicious purposes’, to keep the verb ‘combat’ or replace it with more precise verbs such as ‘suppress’, or whether to use ‘child pornography’ or ‘online child sexual abuse’, ‘digital’ or ‘electronic’ information, and so on. 


For instance, in the review of Articles 6–10 on criminalisation, which cover essential cybercrime offences such as illegal access, illegal interception, data interference, systems interference, and the misuse of devices, several debates revolved around the terms ‘without right’ vs ‘unlawful’, and ‘dishonest intent’ vs ‘criminal intent’. 

Another disagreement arose over the terms ‘restitution’ and ‘compensation’ in Article 52. This provision requires states to retain the proceeds of crimes, to be disbursed to requesting states to compensate victims. India, supported by China, Russia, Syria, Egypt, and Iran, proposed that the term ‘compensation’ be replaced with ‘restitution’ to avoid further financial burden on states. Additionally, India suggested that ‘compensation’ should be at the discretion of national laws and not under the convention. Australia and Canada suggested retaining the word ‘compensation’ because it would ensure that the proceeds of crime delivered to requesting states are used only to compensate victims.

The bottom line is that terminology and scope, two of the most critical elements of the convention, remain unresolved and will need attention at the session in February 2024. However, given that states have been unable to agree over the past six sessions, a true diplomatic miracle will be needed in the current geopolitical climate. At the same time, the chair confirmed that she has no intention of extending her role beyond February.

Hurdles to deal with human rights and data protection-related provisions

We wrote before that states are divided when discussing human rights perspectives and safeguards: While one group is pushing for a stronger text to protect human rights and fundamental freedoms within the convention, another group disagrees, arguing that the AHC is not mandated to negotiate another human rights convention, but an international treaty to facilitate law enforcement cooperation in combating cybercrime. 

In the context of text-based negotiations, this has meant that some states suggested deleting Article 5 on human rights and merging it with Article 24, and removing the gender perspective-related paragraphs because of concerns over the definition of ‘gender perspective’ and the challenges of translating the phrase into other languages. Another clash occurred during discussions about whether the provisions should allow the real-time collection of traffic data and the interception of content data (Articles 29 and 30, respectively). While Singapore, Switzerland, Malaysia, and Vietnam proposed removing such powers from the text, other delegations (e.g. Brazil, South Africa, the USA, Russia, Argentina, and others) favoured keeping them. The EU stressed that such measures represent a high level of intrusion and significantly interfere with the human rights and freedoms of individuals. However, the EU expressed its openness to consider keeping such provisions, provided that the conditions and safeguards outlined in Articles 24, 36, and 40(21) remain in the text.

With regard to data protection in Article 36, CARICOM proposed an amendment allowing states to impose appropriate conditions in compliance with their applicable laws to facilitate personal data transfers. The EU and its member states, New Zealand, Albania, the USA, the UK, China, Norway, Colombia, Ecuador, Pakistan, Switzerland, and some other delegations supported this proposal. India did not, while some other delegations (e.g. Russia, Malaysia, Argentina, Türkiye, Iran, Namibia and others) preferred retaining the original text.


Articles on international cooperation or international competition?

Negotiations on the international cooperation chapter have not been smooth either. During the discussions on mutual assistance, Russia, in particular, pointed out a lack of grounds for requests and suggested adding a request for “data identifying the person who is the subject of a crime report” with, where possible, “their location and nationality or account as well as items concerned”. Australia, the USA, and Canada did not support this amendment. 

Regarding the expedited preservation of stored computer data/digital information in Article 42, Russia also emphasised the need to distinguish between the location of a service provider (or any other data custodian, as defined in the text) and the locations where data flows and processing activities, such as storage and transmission, actually occur due to technologies like cloud computing. To address this ‘loss of location’ issue, Russia suggested referring to the Second Additional Protocol to the Budapest Convention. The reasoning for this inclusion was to incorporate the concept of data as being in the possession or under the control of a service provider, or established through data processing activities operating from within the borders of another state party. The EU and its member states, the USA, Australia, Malaysia, South Africa, Nigeria, Canada, and others were among delegations who preferred to retain the original draft text.

A number of delegations (e.g. Pakistan, Iran, China, Mauritania) also proposed an additional article on ‘cooperation between national authorities and service providers’ that would oblige service providers to report criminal incidents to relevant law enforcement authorities, support such authorities by sharing expertise, training, and knowledge, implement protective measures and due diligence protocols, adequately train their workforce, promptly preserve electronic evidence, keep requests received from such authorities confidential, and take measures to render offensive and harmful content inaccessible. The USA, Georgia, Canada, Australia, the EU and its member states, and some other delegations rejected this proposal. 

SDGs in the scope of the convention?

An interesting development was the inclusion of the word ‘sustainability’ under Article 56 on the implementation of the convention. While sustainability was not mentioned in the previous sessions, Australia, China, New Zealand and Yemen, among other countries, proposed that Article 56 should read: ‘Implementation of the convention through sustainable development and technical assistance’. Costa Rica claimed that such inclusion would link the capacity building under this convention to the achievement of the Sustainable Development Goals (SDGs). Additionally, Paraguay proposed that Article 52(1) should ensure that the implementation of the convention through international cooperation should take into account ‘negative effects of the offences covered by this Convention on society in general and, in particular, on sustainable development, including the limited access that landlocked countries are facing’. While the USA and Tanzania acknowledged the importance of Paraguay’s proposal, they stated that they could not support this edit.

What’s next?

The committee will continue the negotiations in February 2024 for the seventh session, and if the text is adopted, states will still have to ratify it afterwards. Should consensus prove not to be possible, the Bureau of the UN Office on Drugs and Crime (UNODC) will confirm that ‘the decisions shall be taken by a two-thirds majority of the present voting representatives’ (from the resolution establishing the AHC). The chair must report their final decisions before the 78th session of the UN General Assembly.

5G Transformation: The power of good policy 

The global rollout of 5G networks has been met with considerable excitement, and rightly so. While the promise of faster data speeds has captured much of the spotlight, the true transformational potential of 5G extends far beyond mere internet speed enhancements. Across continents, from the bustling metropolises of North America to the vibrant landscapes of Africa, a diverse array of strategies and approaches is shaping the future of 5G connectivity. As policymakers grapple with the intricacies of crafting effective 5G spectrum policies, it’s essential to understand how these policies are intrinsically linked to achieving the wider benefits of this groundbreaking technology. 

The spectrum: A valuable resource

At the heart of 5G technology is the radio spectrum, a finite and valuable resource allocated by governments to mobile network operators. These spectrum bands determine the speed, coverage, and reliability of wireless networks. In 2023, demand is high for mid-band and millimeter-wave spectrum, both essential for delivering the anticipated 5G transformation.

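The trade-off between these bands can be summarised in a small sketch. The characterisations below are rough rules of thumb for how frequency relates to coverage and capacity, not regulatory allocations for any particular country:

```python
# Rough rules of thumb for 5G spectrum bands (illustrative only,
# not regulatory allocations): lower frequencies travel further,
# higher frequencies carry more data over shorter distances.
BANDS = {
    "low-band (<1 GHz)":      {"coverage": "wide",   "capacity": "modest"},
    "mid-band (1-6 GHz)":     {"coverage": "medium", "capacity": "high"},
    "mmWave (24 GHz and up)": {"coverage": "short",  "capacity": "very high"},
}

def describe(band: str) -> str:
    """Return a one-line summary of a band's coverage/capacity trade-off."""
    props = BANDS[band]
    return f"{band}: {props['coverage']} coverage, {props['capacity']} capacity"

for band in BANDS:
    print(describe(band))
```

This is why mid-band is in such demand: it sits at the sweet spot between reach and throughput.
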
Frequency bands of 5G networks [picture from digi.com]

Policy imperatives to ensure low latency

Ultra-low latency is one of 5G’s defining features, enabling real-time communication and interaction over the internet. Policy decisions that prioritise and allocate specific spectrum bands for applications that require low latency, such as telemedicine and autonomous vehicles, can have a profound impact on their effectiveness and safety. Policymakers must prioritise the allocation of spectrum for latency-sensitive applications while also accommodating the growing data demands of traditional mobile services. 

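A back-of-the-envelope calculation shows why spectrum and network placement both matter here: even at the speed of light in fibre, distance alone consumes a large share of the roughly 1 ms latency target often cited for ultra-low-latency applications (the figures below are illustrative, not from any specification):

```python
# Back-of-the-envelope: one-way propagation delay over fibre versus
# a ~1 ms ultra-low-latency budget (an often-cited illustrative target).
SPEED_IN_FIBRE_KM_PER_MS = 200.0  # light covers roughly 200 km per ms in fibre

def propagation_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds, ignoring processing time."""
    return distance_km / SPEED_IN_FIBRE_KM_PER_MS

# A server 50 km away already consumes a quarter of a 1 ms budget:
print(propagation_delay_ms(50))   # 0.25
# At 300 km, propagation alone exceeds the budget:
print(propagation_delay_ms(300))  # 1.5
```

This is one reason latency-sensitive services push computation towards the network edge rather than distant data centres.
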
The US Federal Communications Commission (FCC) launched its 5G FAST Plan in 2018. This initiative facilitates the deployment of 5G infrastructure by streamlining regulations and accelerating spectrum availability. As part of the programme, the FCC conducted auctions for spectrum bands suitable for 5G, such as the 24 GHz and 28 GHz bands, to support high-frequency, low-latency applications. 
The EU introduced the 5G Action Plan in 2016 as part of its broader Digital Single Market strategy. The plan emphasises cooperation among EU member states to create the conditions needed for 5G deployment, including favourable spectrum policies. 
China launched its National 5G Strategy in 2019, outlining a comprehensive roadmap for 5G development. The strategy includes policies to allocate and optimise spectrum resources for 5G networks.

The Independent Communications Authority of South Africa (ICASA) is actively exploring spectrum policies to accommodate 5G. ICASA has published draft regulations for the use of high-demand spectrum, including the 3.5 GHz and 2.6 GHz bands, which are crucial for 5G deployment. ICASA’s efforts to regulate spectrum have been praised by the Wi-Fi Alliance for their role in advancing Wi-Fi technology and connectivity in Africa. ICASA aims to amend radio frequency regulations to stimulate digital development, investment, and innovation in the telecom sector for public benefit.

Enabling massive Internet of Things connectivity

The International Telecommunication Union (ITU) has classified 5G mobile network services into three categories: Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (uRLLC), and Massive Machine-Type Communications (mMTC). The mMTC service was created specifically to enable an enormous volume of small data packets to be collected from large numbers of devices simultaneously, as is the case with internet of things (IoT) applications. With mMTC, 5G is the first cellular network generation designed for the IoT from the ground up.

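A hypothetical sketch of how an application’s requirements map onto the three ITU categories may help make the distinction concrete. The thresholds below are illustrative only, chosen for the example; they are not values from the ITU classification:

```python
# Map rough application requirements onto the three ITU 5G service
# categories. Thresholds are illustrative only, not ITU-specified values.
def classify_service(max_latency_ms: float,
                     devices_per_km2: int,
                     peak_rate_mbps: float) -> str:
    if max_latency_ms <= 1:
        return "uRLLC"   # ultra-reliable low-latency (e.g. autonomous driving)
    if devices_per_km2 >= 100_000:
        return "mMTC"    # massive machine-type (e.g. dense sensor networks)
    return "eMBB"        # enhanced mobile broadband (e.g. video streaming)

print(classify_service(0.5, 10, 5))           # autonomous vehicle -> uRLLC
print(classify_service(100, 1_000_000, 0.1))  # smart-city sensors -> mMTC
print(classify_service(20, 100, 500))         # 4K streaming       -> eMBB
```

The point of the sketch is that the same network must serve wildly different profiles, which is exactly what network slicing and category-aware spectrum allocation are meant to enable.
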
A 5G communication network is important for IoT-powered ‘smart cities’

The IoT stands as a cornerstone of 5G’s transformational potential; 5G is expected to unleash a massive 5G IoT ecosystem where networks can serve the communication needs of billions of connected devices, with the appropriate trade-offs between speed, latency, and cost. However, this potential hinges on the availability of sufficient spectrum for the massive device connectivity that the IoT needs. The demands that the IoT places on cellular networks vary by application, often requiring remote device management. And because reliable connectivity is mission-critical for remotely operated devices (even very short network dropouts can be disruptive), uRLLC and 5G Massive MIMO radio access technologies offer key ingredients for effective IoT operations.

Effective 5G spectrum policies must allocate dedicated bands for IoT devices while ensuring interference-free communication. Standards in Releases 14 and 15 of the Third Generation Partnership Project (3GPP) solve most of the commercial bottlenecks, facilitating the vision of 5G and the huge IoT market. 

Diverse approaches to spectrum allocation

The USA’s spectrum allocation strategy is centered around auctions as its primary methodology. The FCC has been at the forefront of this approach, conducting auctions for various frequency bands. This auction-driven strategy allows network operators to bid for licenses, enabling them to gain access to specific frequency ranges. Notably, the focus has been on making the mid-band spectrum available, with a significant emphasis on cybersecurity.

South Korea’s approach to spectrum allocation has been marked by a proactive stance. Among the pioneers in launching commercial 5G services, the South Korean government facilitated early spectrum auctions. As a result, it allocated critical frequency bands, such as 3.5 GHz and 28 GHz, for 5G deployment. This forward-looking strategy not only contributed to the rapid adoption of 5G within the nation, but also positioned South Korea as a global leader in the 5G revolution.

The Korea Fair Trade Commission (KFTC), South Korea’s antitrust regulator, has fined three domestic mobile carriers a total of 33.6 billion won ($25.06 million) for exaggerating 5G speeds. [link]

The EU champions spectrum harmonisation to enable seamless cross-border connectivity. The identification of the 26 GHz band for 5G in the Radio Spectrum Policy Group (RSPG) decision further supports the development of a coordinated approach. By aligning policies across member states, the EU aims to eliminate fragmentation and ensure a cohesive 5G experience.

Moreover, many African countries are in the process of identifying and allocating spectrum for 5G deployment. Governments and regulatory bodies have considered various frequency bands, such as the C-Band (around 3.5 GHz) and the millimeter-wave bands (above 24 GHz), for 5G services. Some African nations have issued trial licenses to telecommunications operators to conduct 5G trials and test deployments. These trials help operators understand the technical challenges and opportunities associated with 5G in the African context. For example, in South Africa, ICASA is developing a framework for 5G spectrum allocation. Its approach encompasses license conditions, coverage requirements, and the possibility of sharing spectrum resources. 

Kenya is in the process of exploring opportunities to release additional spectrum to facilitate 5G deployment. The Communications Authority of Kenya is contemplating reallocating the 700 and 800 MHz bands for mobile broadband use, including 5G services.

Ookla 5G Map [link]

A well-structured spectrum management framework serves as the guiding principle for equitable and efficient allocation of this resource. These frameworks include regulatory approaches like exclusive licensing, lightly-managed sharing, and license-exempt usage. Sharing frameworks enable coexistence, from simple co-primary sharing to multi-tiered arrangements. Static sharing uses techniques such as FDMA and CDMA, while Dynamic Spectrum Sharing (DSS) allows users to access spectrum as needed. 

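The difference between static partitioning and dynamic sharing can be illustrated with a toy model. The sketch below is an abstraction of the idea only, not an implementation of 3GPP DSS signalling, and all figures are invented for the example:

```python
# Toy model contrasting a fixed spectrum split with dynamic spectrum
# sharing (DSS). An abstraction of the idea, not 3GPP DSS signalling.
TOTAL_BLOCKS = 10  # abstract spectrum blocks available in a shared band

def static_split(demand_4g: int, demand_5g: int) -> dict:
    """Fixed 50/50 partition: idle blocks on one side cannot be reused."""
    half = TOTAL_BLOCKS // 2
    return {"4g": min(demand_4g, half), "5g": min(demand_5g, half)}

def dynamic_share(demand_4g: int, demand_5g: int) -> dict:
    """Allocate blocks on demand; split proportionally under contention."""
    if demand_4g + demand_5g <= TOTAL_BLOCKS:
        return {"4g": demand_4g, "5g": demand_5g}
    share_4g = round(TOTAL_BLOCKS * demand_4g / (demand_4g + demand_5g))
    return {"4g": share_4g, "5g": TOTAL_BLOCKS - share_4g}

# With light 4G demand, a static split strands 3 blocks that DSS reuses:
print(static_split(2, 8))   # {'4g': 2, '5g': 5}
print(dynamic_share(2, 8))  # {'4g': 2, '5g': 8}
```

The toy model captures why regulators increasingly favour sharing frameworks: demand across services fluctuates, and rigid partitions leave capacity idle.
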
In conclusion, the intricate world of 5G spectrum policies profoundly shapes the path of 5G’s transformative journey. Beyond speed enhancements, global strategies spotlighted here reveal the interplay of technology and governance.

From South Korea’s spectrum leadership to the EU’s harmonisation and Africa’s context-specific solutions, each of these approaches underscores the link between policy and 5G’s potential. Such efforts are indispensable for fostering optimal policies for future development.

Today’s decisions will echo into the future, moulding 5G’s global impact. This intricate interweaving emphasises 5G’s capabilities and policy’s role in driving unprecedented connectivity, innovation, and societal change.