(Jail) time ahead for the cryptocurrency industry 

The cryptocurrency and digital asset industry has once again been the focus of worldwide media. This time, it is not about the promises of an inclusive future of finance, but about a number of court cases that were initiated, or came to a close, in recent months.


These developments can be seen as a push by regulators worldwide to establish legal practice around the new class of digital assets (or cryptoassets, as they are named in regulations worldwide) and to send a message to the ever-growing base of consumers of such products that they will be protected while entering this new arena. The push is particularly visible in the United States, where two of the world’s biggest cryptocurrency exchanges, Binance and Kraken, have been charged with breaching US financial regulations, including anti-money-laundering (AML) rules. In both cases, regulators highlighted the lack of fully implemented Know-Your-Customer (KYC) procedures. In the case of Binance, the world’s largest cryptocurrency exchange, the US Justice Department argued that KYC failures enabled money laundering and the evasion of international sanctions. Binance and its CEO, Changpeng Zhao, pleaded guilty to charges filed by the US Justice Department, agreeing to a record fine of over USD 4 billion. Most recently, cryptocurrency exchange KuCoin was hit with the same anti-money-laundering charges and faces a similar outcome. For Kraken, the US Securities and Exchange Commission (SEC) is asking for a total ban in the USA, as the exchange failed to register within the regulatory framework.

Several significant cases from the past also reached their final acts in recent months. The cases of Celsius, Terra, and, most prominently, the FTX exchange moved forward from a standstill, and in the case of FTX, the trial ended with the sentencing of former FTX CEO Sam Bankman-Fried. The sentence was delivered in the court case related to the collapse of the FTX exchange and the Alameda Research trading firm in November 2022. Bankman-Fried was sentenced to 25 years in prison, six months after being convicted of fraud, and was additionally ordered to pay USD 11 billion in reparations and damages to FTX users and investors. Another crypto-company CEO, Do Kwon, was extradited from Montenegro to prosecutors in South Korea for the trial concerning the Terra cryptocurrency company. Kwon had been hiding from law enforcement for a whole year before finally being arrested on the tarmac of Podgorica airport in Montenegro. He also faces a lengthy jail sentence if the allegations in the indictment hold up at trial.

‘Cryptocurrency King’ Do Kwon with a group of Montenegro police officers. Photo by: Radio Free Europe (RFE)

In another long-lasting legal battle before the US courts, the case against one of the biggest cryptocurrency companies, Ripple Labs, is nearing its end. The SEC is seeking another major fine of USD 2 billion, which, according to its statement, would send a message to the industry about consumer protection. What exactly is that message?


‘Countries should take the issue seriously and strengthen regulation, as virtual assets tend to flow towards less regulated jurisdictions.’ This was the message of Financial Action Task Force (FATF) President T. Raja Kumar, who in an interview acknowledged that only one-third of the world has implemented some form of cryptocurrency regulation and urged countries to act.

Stronger compliance does appear to be the trend. The cryptocurrency industry as a whole has seen a significant drop in the value received by illicit cryptocurrency addresses, and the share of all crypto transaction volume associated with illicit activity has also decreased. This is stressed in the annual report by Chainalysis, which provides blockchain forensics for governments worldwide. The industry, then, is going in the right direction.

OEWG’s seventh substantive session: the highlights

The OEWG held its 7th substantive session on 4-8 March. With 18 months until the end of the group’s mandate, a sense of urgency can be felt in the discussions, particularly on the mechanism that will follow the OEWG.

Some of the main takeaways from this session are:

  • AI is increasingly prevalent in the discussion on threats, with ransomware and election interference rounding out the top three threats.
  • There is still no agreement on whether new norms are needed.
  • Agreement is also elusive on whether and how international law and international humanitarian law apply to cyberspace.
  • The operationalisation of the Points of Contact (PoC) directory, the most important confidence-building measure (CBM) to result from the OEWG, is in full swing ahead of its launch on 9 May.
  • Bolstering capacity building efforts and funding for them are necessary actions.
  • The mechanism for regular institutional dialogue on ICT security must be single-track and consensus-based. Whether it will take the shape of the Programme of Action (PoA) or another OEWG is still up in the air.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated OEWG process page.

UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage of cybersecurity, peace, and security negotiations at the UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.
Threats: AI, elections and ransomware at the forefront

The widespread availability of AI tools for different purposes led delegations to focus on AI-enabled threats. AI tools may exacerbate malicious cyber activity, for example, by speeding up the search for ICT vulnerabilities, the development of malware, and social engineering and phishing tactics.

France, the Netherlands, and Australia spoke about the security of AI itself, pointing to the vulnerability of algorithms and platforms and the risk of poisoning models. 

2024 is a year of elections at different levels in many states. Large language models (LLMs) and generative AI accelerate the creation of fakes, the proliferation of disinformation, and the manipulation of public opinion, especially during significant political and social processes. Belgium, Italy, Germany, Canada, and Denmark expressed concern that cyber operations are used to interfere in democratic processes. Malicious use of cyber capabilities can influence political outcomes and threaten the process by targeting voters, politicians, political parties, and election infrastructure, thus undermining trust in democratic institutions.

Another prevalent threat highlighted by the delegations was ransomware. Cybercriminals target critical infrastructure and life-sustaining systems, but states noted that the sector suffering most is healthcare. Belgium stressed that such attacks eventually lead to human casualties because of the disruption to medical assistance. The USA and Greece highlighted the increase in ransomware attacks that occurs because some states allow criminal actors to operate from their territories with impunity. AI also provides powerful leverage for malicious threat actors, giving unsophisticated ransomware-as-a-service operators a new degree of possibilities and allowing rogue states to exploit the technology for offensive cyber activities.

Ransomware attacks go hand in hand with IP theft, data breaches, violations of privacy, and cryptocurrency theft. The Republic of Korea, Japan, the Czech Republic, Mexico, Australia, and Kenya connected such heists with the financing of WMD proliferation.

Delegations expressed concerns about the growing commercial market for cyber intrusion capabilities, 0-day vulnerabilities, and hacking-as-a-service. The UK, Belgium, Australia, and Cuba considered this market capable of increasing instability in cyberspace. The Pall Mall Process, launched by France and the UK to address the proliferation of commercially available cyber intrusion tools, was endorsed by Switzerland and Germany.

The growing IoT landscape expands the attack surface, Mauritius, India, and Kazakhstan noted. Quantum computing may break existing encryption methods, giving a strategic advantage to those who control the technology, Brazil added. It could also be used to develop armaments, other military equipment, and offensive operations.

Russia once again drew attention to the use of information space as an arena of geopolitical confrontation and militarisation of ICTs. Russia, China, and Iran have also highlighted certain states’ monopolisation of the ICT market and internet governance as threats to cyber stability. Syria and Iran pointed to practices of technological embargo and politicised ICT supply chain issues that weaken the cyber resilience of States and impose barriers to trade and tech development.

Norms: new norms vs. norms’ implementation

Reflections from several delegations highlighted a binary dilemma: are new norms needed or not?

Iran, China, and Russia reiterated that new norms are needed. Russia also suggested new norms to strengthen the sovereignty, territorial integrity, and independence of states; to establish the inadmissibility of unsubstantiated accusations against states; and to promote the settlement of interstate conflicts through negotiation, mediation, reconciliation, or other peaceful means. Brazil noted that additional norms will become necessary as technology evolves and stressed that any efforts to develop new norms must occur within the UN OEWG. South Africa expressed that it could support a new norm to protect against AI-powered cyber operations and attacks on AI systems. Vietnam strongly supported the development of technical standards for electronic evidence to facilitate the verification of the origins of cybersecurity incidents.

However, some delegations insist that implementing existing norms comes before elaborating new ones. Bangladesh urged states to collaborate more to translate norms into concrete actions and to focus on providing guidance on their interpretation and implementation. The UK, in particular, suggested four steps to improve the implementation of the norms by addressing the growing commercial market for intrusive ICT capabilities. The delegate called on states to prevent commercially available cyber intrusion capabilities from being used irresponsibly, to ensure that governments take appropriate regulatory steps within their domestic jurisdictions, to conduct procurement responsibly, and to use cyber capabilities responsibly and lawfully.

Several delegations mentioned accountability and due diligence issues in implementing the agreed norms. New Zealand, in particular, shared that the OEWG could usefully examine what to do when agreed norms are wilfully ignored. France mentioned that it continues its work on due diligence norm C with other countries. Italy called for dedicated efforts to set up accountability mechanisms to ‘increase mutual responsibility among states’ and proposed national measures to detect, defend against, respond to, and recover from ICT incidents, which may include establishing, at the national level, a centre or responsible agency that leads on ICT matters.

The Chair issued a draft of the norms implementation checklist before the start of the session. According to Egypt, the checklist must be simplified because it includes duplicate measures and detailed actions beyond some states’ capabilities. The checklist, Egypt continued, should acknowledge technological gaps among states and their diverse national legal systems, thus respecting regional specifics. Many delegations strongly supported the checklist and made recommendations. For example, the Netherlands suggested that the checklist include the consensus notion that state practices, such as arbitrary or unlawful mass surveillance, may negatively impact human rights, particularly the right to privacy.

UN OEWG Chair publishes discussion paper on norms implementation checklist
The checklist comprises voluntary, practical, and actionable measures collected from different relevant sources.

Some delegations addressed the Chair’s questions on implementing critical infrastructure protection (CIP) and supply chain security-related norms. The EU reminded delegations that it is necessary to look into existing cybersecurity best practices in this regard and gave the example of the Geneva Manual as a multistakeholder initiative to clarify the roles and responsibilities of non-state actors in implementing the norms. Italy encouraged the adoption of specific frameworks for assessing the supply chain security of ICT products based on guidelines, best practices, and international standards. In practice, this could include establishing national evaluation and security certification centres for cyber certification schemes. The Republic of Korea suggested building institutional and normative foundations to provide security guidelines starting from the development stage of software products, which can be used in the public sector to protect public services or critical infrastructure from being targeted by cyberattacks. Japan suggested adopting the Software Bill of Materials (SBOM) and discussing how ICT manufacturers can achieve security by design.

International law: applicability to use of ICTs in cyberspace

The member states held to their previous positions on the applicability of international law. Most states confirmed the applicability of international law to cyberspace, including the UN Charter, international human rights law, and international humanitarian law. Russia and Iran, however, stated that existing international law does not apply to cyberspace, while Syria noted that how international law applies in cyberspace is unclear. At the same time, China and Russia pointed out that the principles of international law apply. These states, as well as Pakistan, Burkina Faso, and Belarus, support the development of a new legally binding treaty.

Of note was the contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay that reflected on the continued engagement of a cross-regional group of 13 states based on a working paper from July 2023. The contribution highlighted the emerging convergence of views that: 

  • states must respect and protect human rights and fundamental freedoms, both online and offline, in accordance with their respective obligations; 
  • states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, which includes reparation for the injury caused; and
  • international humanitarian law applies to cyber activities in situations of armed conflict, including, where applicable, the established international legal principles of humanity, necessity, proportionality, and distinction.

Many states echoed the Colombian statement, including Germany, Australia, Czechia, Switzerland, Italy, Canada, the USA, the UK, Spain and others.

New discussion point

The contribution by Colombia on behalf of Australia, El Salvador, Estonia, and Uruguay highlighted that states must meet their international obligations regarding internationally wrongful acts attributable to them under international law, including reparation for the injury caused. This is a new element in the discussions within the OEWG substantive sessions. Thailand, Uganda, and the Netherlands have also specifically addressed the need for reparation for the injury caused.

The discussions also progressed on the applicability of international humanitarian law (IHL) to the use of ICTs in situations of armed conflict. 

Senegal presented a working paper on the application of international humanitarian law on behalf of Brazil, Canada, Chile, Colombia, the Czech Republic, Estonia, Germany, the Netherlands, Mexico, the Republic of Korea, Sweden, and Switzerland. The working paper shows convergence on the applicability of IHL in situations of armed conflict. It delves deeper into the principles and rules of IHL governing the use of ICTs, notably military necessity, humanity, distinction, and proportionality. Other states welcomed the working paper, including Italy, Australia, South Africa, Austria, the United Kingdom, the USA, France, Spain, Uruguay, and others.

On the other hand, Sri Lanka, Pakistan, and China have called for additional efforts to develop an understanding of the applicability of IHL and its gaps.

In its statement on IHL, the ICRC pointed out the differences between the definitions of armed attack under the UN Charter and under IHL, the need to discuss how IHL limits cyber operations, and the need to interpret the existing rules of IHL so as not to undermine the protective function of IHL in the ICT environment.

The International Committee of the Red Cross: New rules protecting from consequences of cyberattacks may be needed
The ICRC emphasised the urgent need for deeper discussions on the application of international humanitarian law to the use of ICTs in armed conflict, underscoring the importance of upholding humanitarian principles amidst evolving means of warfare.

The discussion on international law greatly benefited from the recent submission to the OEWG by the Peace and Security Council of the African Union on the Application of international law in the use of ICTs in cyberspace (Common African Position). Reflecting the views of 55 states, it represents a significant contribution to the work of the OEWG and an example of valuable input by regional forums. This comprehensive position paper addresses issues of applicability of international law in cyberspace, including human rights and IHL, principles of sovereignty, due diligence, prohibition of intervention in the affairs of states in cyberspace, peaceful settlement of disputes, prohibition of the threat or use of force in cyberspace, rules of attribution, and capacity building and international cooperation. The majority of the delegations welcomed the Common African Position.

African Union submits position on international law to OEWG
The position was adopted by the Peace and Security Council of the African Union on 31 January 2024.

The Chair also pointed out that, to date, 23 states have shared their national positions on the applicability of international law in cyberspace, and many others are preparing theirs. 

Most states supported scenario-based exercises to enhance understanding between states on the applicability of international law. They would like the opportunity to conduct such exercises and to have a more in-depth discussion on international law at the May intersessional meeting. China firmly opposed this.

Several states, such as Japan, Canada, Czechia, the EU, Ireland and others, would like to see future discussions on international law embedded in the Programme of Action (PoA). Read more about the talks on the PoA below.

CBMs: operationalising the PoC directory

The official launch of the Points of Contact (PoC) directory is scheduled for 9 May, so the discussion revolved around the directory’s operationalisation. At the time of the session, 25 countries had appointed their PoCs. Most delegations reiterated their support for the directory and either confirmed their appointments or noted that the process was ongoing. Some states nevertheless suggested adjustments: Ghana, Canada, and Colombia commented that communication protocols may be helpful, while Czechia and Switzerland recommended that the PoC directory not be overburdened with such procedures yet. Argentina also brought up the potential participation of non-state actors in the PoC directory.

To further facilitate communication, several states advanced the usefulness of building a common terminology (Kazakhstan, Mauritius, Iran, Pakistan), while Brazil mentioned that Mercosur was effectively working on this kind of taxonomy.

While Czechia, Switzerland and Japan underlined the necessity to focus first on the implementation and consolidation of existing CBMs, many states nevertheless were in favour of additional CBMs: protection of critical infrastructure (Switzerland, Colombia, Malaysia, Pakistan, Fiji, Netherlands, Singapore and Czechia) as well as coordinated vulnerability disclosure (Singapore, Netherlands, Switzerland, Mauritius, Colombia, Malaysia and Czechia). The integration of multi-stakeholders to the development of CBMs was also considered by some states and organisations (the EU, Chile, Albania, Argentina) while adding public-private partnerships as a CBM received broad support from Kazakhstan, Qatar, Switzerland, South Africa, Mauritius, Colombia, Malaysia, Pakistan, South Korea, Netherlands, and Singapore.

All states recalled and praised the significance of regional and subregional cooperation in the implementation of CBMs regionally and how it can contribute to the development of CBMs globally. In that respect, most states highlighted enriching initiatives at a cross-regional level, such as a recent side event at the German House. Work within the OAS, the OSCE, ASEAN, the Pacific region, and the African Union was underlined. Interventions were enriched by shared national experiences, most notably Kazakhstan’s and France’s recent use of the OSCE community portal for PoCs.

Finally, states highlighted the link between CBMs and capacity building, with Ghana, Djibouti, and Fiji sharing their national experiences in closing the digital divide. In that vein, Argentina, Iran, Pakistan, Djibouti, Botswana, Fiji, Chile, Thailand, Ethiopia, Mauritius, and Colombia supported creating a specific CBM on capacity building.

Capacity building: bolstering efforts and funding

Several noteworthy proposals were put forth by different countries, each aiming to bolster capacity building efforts. The Philippines introduced a comprehensive ‘Needs-Based Capacity Building Catalogue,’ designed to help member states identify their specific capacity needs, connect with relevant providers, and access application guidance for capacity building programmes.

A scheme of the Philippine proposal. Source: UNODA.

Kuwait proposed an expansion of the Global Cybersecurity Cooperation Portal (GCSE), suggesting adding a module dedicated to housing both established and proposed norms, thus facilitating collaboration among member states and tracking the implementation progress of these norms. India‘s CERT expressed willingness to develop an awareness booklet on ICT and best practices with the contribution of other delegations, intending to post it on the proposed GCSE for widespread dissemination.

The crucial issue of funding for capacity building received substantial attention during the discussions, with multiple delegations bringing to the fore the need for additional resources to sustainably support such efforts. Uganda advocated establishing a UN voluntary fund targeting the countries and regions most in need, while others stressed the imperative of exploring structured avenues within the UN framework for resource mobilisation and allocation. 

On the foundational capacities of cybersecurity, an emphasis was placed on developing ICT policies and national strategies, enhancing societal awareness, and establishing national cybersecurity agencies or CERTs.

Furthermore, the importance of self-assessment tools for improving states’ participation in capacity building programmes was emphasised. Pakistan proposed implementing checklists and frameworks for evaluating cybersecurity readiness and identifying gaps. Rwanda advocated for reviews based on the cybersecurity capacity maturity model (CMM) to achieve varying levels of capacity maturity. The discussions also commended existing initiatives, such as the Secretariat’s mapping exercise, and emphasised the need for a multistakeholder approach in capacity building efforts. Finally, Germany highlighted the significant contributions of organisations in creating gender-sensitive toolkits for cybersecurity programming, underscoring the importance of incorporating gender perspectives in implementing the UN framework on cybersecurity.

Regular institutional dialogue: the fight for a single-track process

States are still divided on the issue of regular institutional dialogue. What they agree on is that there must be a singular process, its establishment must be agreed upon by consensus, and decisions it makes must be by consensus. 

France, one of the original co-sponsors of the PoA, delivered a presentation on the PoA’s future elements and organisation. Review conferences would be convened within the framework of the PoA every few years. The scope of these review conferences would include (i) assessing the evolving cyber threat landscape and the results of the initiatives and meetings of the mechanism, (ii) updating the framework as necessary, and (iii) providing strategic direction and a mandate or programme of work for the PoA’s activities. The periodicity would need to be defined so as not to burden delegations, especially those from small and developing countries; at the same time, the PoA would need to keep up with the rapid evolution of technology and the threat landscape.

The PoA would also include open-ended plenary discussions to (i) assess progress in the implementation of the framework, (ii) take forward any recommendations from these modalities, (iii) discuss ongoing and emerging threats, and (iv) provide guidance for open-ended technical meetings and practical initiatives. Intersessional meetings could also be convened if necessary.

Furthermore, four modalities would feed discussions on the implementation of the framework: capacity building, voluntary reporting by states, practical initiatives, and contributions from the multistakeholder community. The PoA could leverage existing and potential capacity building efforts to increase their visibility, improve their coordination, and support the mobilisation of resources. The review conferences and discussions would then provide an opportunity to exchange views on ongoing capacity building efforts and identify areas where additional action is needed. Voluntary reporting by states could be based either on creating a new reporting system or on promoting existing mechanisms. The PoA would contain, enable, and deepen practical initiatives, building on existing ones and developing new ones when necessary, and it would enable engagement and collaboration with the multistakeholder community.

France also noted that a cross-regional paper to build on this proposal will be submitted at the next session.

Multiple delegations expressed support for the PoA, including the EU, the USA, the UK, Canada, Latvia, Switzerland, Cote d’Ivoire, Croatia, Belgium, Slovakia, Czechia, Israel, and Japan.

The Russian Federation, the country that originally suggested the OEWG, is the biggest proponent of its continuation. Russia cautioned against making decisions by majority in the General Assembly, noting that such an approach would not be met with understanding by member states, first and foremost developing countries, which long fought for the opportunity to partake directly in the negotiation process on the principles governing information security. Russia stated that after 2025, a permanent OEWG with a decision-making function should be established. Its pillar activity would be crafting legally binding rules to serve as elements of a future universal agreement on information security. The OEWG would also adapt international law to the ICT sphere, strengthen CBMs, launch mechanisms for cooperation, and establish funding programmes for capacity building. Belarus, Venezuela, and Iran are also in favour of another OEWG.

A number of countries didn’t express support for either the PoA or the OEWG but noted some of the elements the future mechanism should have.

Similarly to Russia, China noted that the future mechanism should implement the existing framework but also formulate new norms and facilitate the drafting of legal instruments. The Arab Group noted that the future mechanism should develop the existing normative framework to achieve new legally binding norms. Indonesia also noted the mechanism should create rules and norms for a secure and safe cyberspace.

Latvia and Switzerland noted that the mechanism must focus on the implementation of the existing framework. However, Switzerland and the Arab Group noted that the mechanism could identify gaps in the framework and could develop the framework further.

Many delegations noted that capacity building must be an integral part of the regular mechanism, such as South Africa, Bangladesh, the Arab Group, Switzerland, Indonesia, and Kenya.

States also expressed opinions on which topics should be discussed under the permanent mechanism. Malaysia, South Africa, Korea, and Indonesia stated that the topics should be broadly similar to those of the OEWG. The UK, Latvia, and Kenya stated it should discuss threats, while Bangladesh outlined the following emerging threats: disinformation campaigns, including deepfakes; quantum computing; AI-powered hacking; and the use of ICTs for malicious purposes by non-state actors.

South Africa highlighted that discussion on voluntary commitments, such as norms or CBMs, should be developed without prejudice to the possibility of a future legally binding agreement. The UK noted that the mechanism should also discuss international law.

States also discussed the operational details of the future mechanism. For instance, Egypt suggested that the future mechanism hold biennial meetings (every two years), with review conferences convened every six years and intersessional meetings or informal working groups agreed by consensus. The future mechanism should ensure the operationalisation and review of established cyber tools, including the PoC directory and all other proposals to be adopted by the current OEWG. Sri Lanka noted that the sequence of submitting progress reports, be it annual or biennial, should correspond with the term of the Chair and its Bureau.

Brazil suggested a moratorium on First Committee resolutions until the end of the OEWG’s mandate to allow member states to focus on their efforts in the OEWG. This suggestion was supported by El Salvador, South Africa, Bangladesh, and India.

Dedicated stakeholders session

The dedicated stakeholder session allowed ten stakeholders to share their expertise within the substantive session. 

The stakeholders addressed the topics of CII protection and AI (Center for Excellence of RSIS), norms I and J, supply chain vulnerabilities and addressing the threat lifecycle (Hitachi), and the role of youth and the importance of the youth perspective as a possible area of thematic interest for the OEWG (Youth for Privacy). The topics of AI and supply chain management were echoed in SafePC Solutions’ statement. At the same time, the Centre for International Law (CIL) at the National University of Singapore focused on the intersection of international law and the use of AI.

Chatham House has shared their research on the proliferation of commercial cyber intrusion tools, among others, and the Pall Mall Process, launched by the UK and France. Access Now focused on intersectional harms caused by malicious cyber threats, issues of surveillance and norms E and J. Building on the Chatham House and Access Now remarks, the Paris Peace Forum focused its intervention on the commercial proliferation of cyber-intrusive and disruptive cyber capabilities, and possible helpful steps states could undertake in the short term.

DiploFoundation focused on the responsibility of non-state stakeholders in cyberspace and shared the Geneva Manual on responsible behaviour in cyberspace. The Nuclear Age Peace Foundation, in its statement, connected cybersecurity concerns with safeguarding weapons systems and the importance of secure software, while the National Association for International Information Security focused on the need to interpret the norms of state behaviour.

What’s next?

The OEWG’s schedule for 2024 is jam-packed: in mid-April, the chair will revise the discussion papers circulated before the 7th session. On 9 May, the POC Directory will be launched, followed by a global roundtable meeting on ICT security capacity building on 10 May 2024. A dedicated intersessional meeting will be held on 13-17 May 2024.

Looking ahead to the second half of 2024, the 8th and 9th substantive sessions are planned for 8-12 July and 2-6 December 2024. A simulation exercise for the POC directory is also on the schedule, along with the release of capacity-building materials by the Secretariat, including e-learning modules.

Decision postponed on the Cybercrime Convention: What you should know about the latest session of the UN negotiations

The UN’s Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of ICTs for Criminal Purposes, also known as the Ad Hoc Committee on Cybercrime, convened in New York for a culminating session held from 29 January to 9 February 2024, marking the end of two years of negotiations. The Ad Hoc Committee (AHC) was tasked with drafting a comprehensive cybercrime convention. However, as the final session started, there were no signs of significant progress: member states couldn’t agree on significant issues such as the scope of the convention. As a result, the delegations required more time to discuss the content and wording of the draft convention and decided to hold additional meetings. Though some delegations, such as China and the US, offered financial support for more meetings, several states, such as El Salvador, Uruguay, and Liechtenstein, pointed out the strain these additional meetings would put on their resources.


The chair initially split negotiations into two tracks: formal sessions and informal meetings behind closed doors. The informal meetings seem to have focused on more sensitive issues, such as the scope and human rights-related provisions, and were extremely intense, causing the regular sessions to start late. This also resulted in less transparency in the negotiations and excluded the multistakeholder community from contributing.

In the last days of the concluding sessions, there was increased pressure from civil society and the industry, as well as cybersecurity researchers.

“There are fears that if the UN Ad Hoc Committee does not conclude with a convention, it could be considered a failure of multilateral diplomacy. However, in my opinion, the real fiasco of diplomatic efforts to address the problem of cybercrime would happen if the states adopt a treaty that significantly waters down human rights obligations and legitimises the use of criminal justice for oppression and persecution.” 

Dr. Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University

The comments provided are personal opinions and are not representative of the organisation as a whole.

So, what happened?

Here are the issues with the draft convention that need to be resolved:

Scope of the convention and criminalisation 

One of the main unresolved points remains the question of whether the convention should be limited to a narrow set of cyber-dependent crimes or cover all crimes committed via ICTs. This divide translated into a lengthy discussion on the name of the convention itself, as well as on Article 3 (scope of application) of the draft convention.

In relation to the scope of application, delegations discussed Canada’s proposal, which received support from 66 states. The proposal suggests broad wording of the actions that may fall within the scope of the convention and adding Article 3.3 to ensure that the convention doesn’t permit or facilitate ‘repression of expression, conscience, opinion, belief, peaceful assembly or association; or permitting or facilitating discrimination or persecution based on individual characteristics’.

The Russian Federation continued expressing the view that the AHC hadn’t fully implemented the mandate outlined in Resolution 74/247, which established the committee, and that the scope of the convention should include broader measures to combat ‘the spread of terrorist, extremist, and Nazi ideas with the use of ICTs’. Russia further highlighted that ‘many articles are simply copied from treaties that are 20 years old’ and that the revised text doesn’t include efforts to agree on procedures of investigation or to create platforms and channels for law enforcement cooperation.

In the same vein, Iran, Egypt, and Kuwait see the primary mandate of the AHC as elaborating a comprehensive international convention on the use of ICTs for criminal purposes, and view the inclusion of human rights provisions and detailed international cooperation rules as duplicating already existing international treaties.

Representatives from civil society, private entities, and academia also shared feedback on the scope, stressing the importance of limiting the convention’s scope and implementing strong human rights protections. They expressed concerns about the convention’s potential to undermine cybersecurity, jeopardise data privacy, and diminish online rights and freedoms.

Discussing additional provisions in the criminalisation chapter, delegations were deadlocked over specific terms. For instance, concerning Articles 6(2), 7(2), and 12, Russia, with support from several delegations, proposed replacing ‘dishonest intent’ with a more specific term. Russia’s representative argued that ‘dishonest’ is not a legal term, making it challenging for countries to implement or clarify it in domestic legislation. However, the UK, US, and EU opposed this change. Austria, in particular, explained that ‘dishonest intent’ provides clear criteria for identifying when conduct constitutes an offence, offering flexibility across various legal systems.

Human rights and safeguards 

Human rights (Article 5) and safeguards (Article 24) have been a difficult topic for delegations from day one. Some delegations such as Iran argued that the cybercrime treaty is not a human rights treaty, suggesting a model akin to the UN Convention against Corruption (UNCAC), which omits explicit human rights references. As reported earlier, this didn’t find support from many other delegations.

Egypt and other delegations also expressed confusion over the repetitive nature of certain human rights provisions within the text, emphasising the redundancy of similar mentions occurring five or six times. 

Additionally, Egypt raised concerns about Article 24 and questioned why the principle of proportionality was singled out from other legal principles recognised under international law. Egypt pointed out the challenge of applying proportionality when different countries have varying legal provisions, such as the death penalty. Pakistan supported Egypt, and Brazil suggested appending ‘legality’ to the principle of proportionality, so that the text would cover both the principle of legality and that of proportionality. Ecuador expressed support for Brazil’s proposal.

As a result, both articles remain without text in the further revised draft text of the convention.

There was no consensus regarding the articles on online sexual abuse (Article 13) and the non-consensual distribution of intimate images (Article 15). Delegations tried to find a balance between protecting privacy and criminalising the sharing of intimate images without consent. Many felt the convention should be flexible enough to accommodate different laws and international human rights agreements. There was debate about whether to stick with the Convention on the Rights of the Child’s (CRC) definition or use a different one. The US worried that the CRC’s definition didn’t fit cybercrimes well and might lead to inconsistent interpretations that wouldn’t adequately protect children under Article 13.

Transfer of technology and technical assistance

The transfer of technology appears twice in Article 1 (statement of purpose) and Article 54 (technical assistance and capacity-building). The group of African countries strongly advocated for keeping a reference to the transfer of technology in both articles, including in Article 1, paragraph 3. 

Russia, Syria, Namibia, India, Senegal, and Algeria supported this, while the US was against it and called to keep this reference in Article 54 only. The EU, Israel, Norway, Canada, Albania, and the UK supported the US.

With Article 54, more or less the same groups of states had further disagreements. The US, Israel, the EU, Norway, Switzerland, and Albania supported inserting ‘voluntary’ before ‘where possible’ and ‘on mutually agreed terms’ in the context of how capacity building shall be provided between states in Article 54(1). Most African countries, together with Iran, Iraq, Cabo Verde, Colombia, Brazil, and Pakistan, opposed such a proposal because it would undermine the purpose of the provision in ensuring effective assistance to developing countries. With the goal of reaching a consensus on Article 54(1), the US withdrew its proposal, retaining ‘where possible’ and ‘on mutually agreed terms’. In the revised draft text of the convention, these paragraphs remain open for further negotiations between delegations.

“As offenders, victims and evidence are often located in different jurisdictions, investigations will typically require international coordinated law enforcement action. This means that gaps in the capacity of one country can severely undermine the safety of communities in other countries. Technical assistance and capacity-building are key tools to address this challenge. However, to have a real-world impact, the future Convention needs to recognize that addressing the needs of the diverse actors involved in combating [the criminal use of ICTs] [cybercrime] will require various forms of specialized technical assistance, which no single organization can provide. Even within countries, the various actors involved in combating [the criminal use of ICTs] [cybercrime] – including legislators, prosecutors, law enforcement, national Computer Emergency Response Teams (CERTs) – may have very different technical assistance needs.”

Director Craig Jones, INTERPOL Cybercrime Programme

Scope of international cooperation

Delegations expressed opposing views on provisions related to cooperation on electronic evidence and didn’t reach consensus. The discussion included Article 35(1)(c) and Article 35(3) and (4), which deal with the general principles of international cooperation and e-evidence. The draft convention allowed countries to collect data across borders without prior legal authorisation; however, many delegations could not agree on this.

In particular, New Zealand, Canada, the EU, Brazil, the USA, Argentina, Uruguay, Singapore, Peru, and others expressed concerns, fearing that the current draft of Article 35 would allow an excessively broad application, potentially leading to the pursuit of non-criminal activities. These states noted that the previous draft allowed national law to determine what constitutes criminal conduct, and pointed out the need to differentiate between serious crimes and other offences, the need for safeguards and guardrails on the power of states to limit the possibility of repression and the implementation of intrusive and secret mechanisms, and the need to ensure the protection of human rights. On the other hand, states like Egypt, Saudi Arabia, Iran, Iraq, Mauritania, Oman, and others called for the deletion of Article 35(3) altogether.

Additionally, New Zealand suggested including a non-discrimination clause in Article 37(15) on extradition to prevent unfair grounds for refusing cooperation. This would ensure consistency across the entire chapter on international cooperation. However, member states couldn’t agree on the language and left this open. 

Within the international cooperation chapter, delegations spent quite a bit of time discussing terminology: in particular, in Articles 45 and 46, the debates centred around the use of ‘shall’ vs ‘may’. The EU and other delegations advocated for changing ‘shall’ to ‘may’ in those articles to allow states the option, but not the obligation, to cooperate. This proposal was met with mixed reactions, with some delegations, including Egypt and Russia, preferring to retain ‘shall’ to ensure robust international cooperation; they argued that the change would undermine the effectiveness of cooperation between states. So far, the further revised draft text of the convention includes both options in brackets.


Preventive measures 

Another term which created some confusion across several delegations was the use of ‘stakeholders’ in Article 53, where preventive measures are discussed and paragraph 2 highlights that ‘States shall take appropriate measures […] to promote the active participation of relevant individuals and stakeholders outside the public sector, such as non-governmental organizations, civil society organizations, academic institutions and the private sector, as well as the public in general, in the prevention of the offences covered by this Convention’. Egypt, in particular, called to remove the word ‘stakeholders’ unless it is clearly defined. The US didn’t support this proposal. The further revised draft text of the convention now reads ‘relevant individuals and entities […]’, but the paragraph hasn’t been agreed yet.

In the same article, in paragraph 3(h), where ‘gender-based violence’ is mentioned and the development of strategies and policies to prevent it is called for, states couldn’t reach an agreement. The first group of states, including the USA, Iceland, Australia, Vanuatu, and Costa Rica, advocated for keeping the provision. Other delegations, such as Iran, Namibia, Saudi Arabia, and Russia, among others, proposed deleting the term ‘gender-based’ and keeping only ‘violence’. In the end, this part remained as it is, with the term ‘gender-based violence’, with the chair emphasising that this article is not obligatory, as it says that preventive measures may include such strategies.

Another notable example of where states had opposing views was Article 41 on the 24/7 network, a point of contact designated at the national level, available 24 hours a day and 7 days a week, to ensure the provision of immediate assistance for the purposes of the convention. India proposed new duties for the 24/7 network, explaining that prevention should be part of such duties. They particularly stressed that ‘if the offence is not prevented and it occurs, States would be needing multiple times the resources that they saved in the process of evidence collection, prosecution, extradition, and so on. So it’s better to prevent rather than to spend multiple times the same resources that States are trying to save in going through the whole process of criminal justice’. Russia, Kazakhstan, and Belarus supported this proposal, while the US, UK, Argentina, the EU, and Canada didn’t.

So, what’s next?


As mentioned earlier, the delegates managed to agree on only one major item: to postpone the final decision. The chair’s further revised draft text of the convention is available on the AHC’s website, and new dates for more meetings should be announced soon.

Does this mean that delegations are close to reaching a consensus over a landmark cybercrime convention before the UN General Assembly? Hardly so, but these two weeks have also demonstrated that many (though less fundamental compared to the scope of application) open issues have been resolved behind closed doors, and there is still a chance that intense non-public negotiations between delegations could speed up the process.

We will continue to monitor the negotiations; in the meantime, discover more through our detailed reports from each session, generated by DiploAI.

The perfect cryptostorm

To fully understand the incredible story behind the cryptocurrency and blockchain craze of 2017-2021, we must explain the unique setting in which events played out, setting the course for the collision. One component amplified the other, multiplying the effect, thus creating a perfect cryptostorm. Unfortunately, that storm took a toll on trust in the industry and caused financial losses.

The cryptocurrency industry is a one-hit wonder. But what a wonder it is! Bitcoin is a true marvel of monetary engineering. It has withstood the test of time and proved its resilience, becoming the worldwide recognised use case for digital gold. We witnessed newly coined terms such as ‘crypto-rich’. In response, a whole new payment industry emerged, forged by the desire of legacy financial organisations to stay relevant in the new era.

Moreover, alongside the new fast digital payment industry, which was delivering miracles for the financial inclusion of the unbanked, the retail investing industry brought a new form of capital inflow. The emergence of online trading companies, backed mainly by larger institutional investors, was recognised as a risk to retail users and to consumer protection rights overall.

Unanswered risks, the new hype around the change in the financial industry, and the emergence of inexperienced investors were the ingredients for the perfect storm in the cryptocurrency industry. Add human greed to that mixture and it becomes the perfect cryptostorm.


The necromancers who summoned this cryptostorm are quite vividly depicted in the latest Netflix documentary drama, ‘Bitconned’, which aired this January after two years of production. In 2017, the Centra Tech company raised USD 25 million in investments for its main product: a VISA-backed credit card allowing people to spend their cryptocurrency at any retail store across the USA.
Centra Tech’s CEO, CTO, and other executives had a Harvard Business School background or an MIT engineering degree. The new headquarters in downtown Miami was full of young, bright people, and 20,000 VISA cards were produced. However, none of this was real. Everything was a (not so cleverly) staged mirage.

The court case concluded in 2021, handing jail sentences to the people involved. The documentary is led by one of the three prominent figures behind Centra Tech, Ray Trapani, who collaborated with the federal investigation into the case. In the film, he explains in detail how two young scammers working at a car rental company raised millions in an ICO with only a one-page website.
Once it started, the storm did not calm down for years. The story of Centra Tech from 2017 was replicated time and time again, culminating in the collapse of what was, at the time, the world’s second-largest company in the industry: FTX, an online cryptocurrency exchange. As we read in publicly presented pieces of court evidence in the cases against Celsius, Luna, and FTX, the crypto companies spent funds they held in custody for their investors.

Screenshot from the Netflix documentary film ‘Bitconned’

How did crypto scam companies utilise the above ingredients?

By promising the right thing at the right moment. Internet users had witnessed the financial sector’s transformation and bitcoin’s success. They could easily be convinced that a new decentralised finance infrastructure was on the verge of emerging, helped along by the absence of a regulatory framework, and that they had a fair chance to participate in the industry’s beginnings and become the new crypto millionaires, which was the main incentive for many. If the people behind the open-source cryptocurrency (bitcoin) could create the ‘internet of information’, the next generation of cryptocurrency engineers would surely deliver the ‘internet of money’. However, again, it was false. It was, in fact, a carefully worded money-grabbing experiment.

All of the above ideas still stand as goalposts for further industry developments. Moreover, we must admit that the initial takeover of the industry by scammers, fraudsters, and, in some cases, outright sociopaths will taint the forthcoming period of development in this industry.

In contrast to bitcoin, the creators of almost all cryptocurrencies that came later were incentivised by the financial benefits of ‘tokenisation’ rather than by secure and trustworthy technology. The term tokenisation was supposed to describe the emergence of fast-exchanging digital information (tokens) that could help trade digital products and services, promising the possibility of a ‘creator economy’, micropayments, or unique digital objects. But in reality, it was merely copying analogue objects to the digital world and charging money for that service. Stocks, bonds, tin cans, energy prices, cloud storage, and dental appointments were all promised to be tokenised, while the term ‘blockchain’ was the ultimate hype word. People soon realised that not all digital artefacts have value solely by being placed on a blockchain. That was the case even with projects that honestly intended to build the product (token or cryptocurrency), rather than just sell vapourware and go permanently offline the moment they got busted. As with any other technology, time will show the most efficient and rational use of blockchain.

Could this happen again for online financial services? 

Chances are meagre; it is certainly unlikely to happen again on this scale. Financial agencies worldwide have prepared a set of comprehensive laws and enforcement powers to detect such fraudulent companies much faster and more efficiently. Financial regulations are now negotiated with much more success on a global scale. Intergovernmental financial organisations and their bodies have equipped regulators with the tools to comprehend how the technology works and what can be done on the consumer protection side. The users, too, have had their fair share of schooling. Once bitten, twice shy.

For any other technology developed and utilised mainly online, the chances are always there. Users can now easily be engaged directly, via a mobile app, by companies that promise the next technological innovation. All such companies have to do is carefully word our societal dreams into their product descriptions.

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2


The European Union (EU) reached a historic provisional agreement in 2023 on the world’s first comprehensive set of rules to regulate AI, which will become law once adopted by the EU Parliament and Council. The legislation, known as the AI Act, sets a new global benchmark for countries seeking to harness the potential benefits of AI while trying to protect against the possible risks of using it. While much of the attention was given to parts such as biometric categorisation and national security, among others, the AI Act will also give new guidance on AI and copyright.

The AI Act takes a nuanced stance on copyright and transparency in AI, requiring transparency regarding training data without demanding exhaustive lists of copyrighted works. Instead, a summary of data collections suffices, easing the burden on AI providers. Nonetheless, uncertainties persist about foundation model providers’ obligations under copyright laws. While the AI Act stresses compliance with existing regulations, including those in the Digital Single Market Directive, it nonetheless raises concerns about applying EU rules to models globally, potentially fostering regulatory ambiguity for developers.

In one of the previous blogs, the Digital Watch Observatory elucidated the relationship between AI-generated content and copyright. The analysis showed how traditional laws struggle to address AI-generated content, raising questions of ownership and authorship. Various global approaches – denying AI copyright, or attributing it to humans – highlight the complexity.

This part will delve into the influence of AI on intellectual property rights, assessing the ramifications of AI for trade secrets and trademarks and focusing on examples from the EU and US legal frameworks.

Trade Secrets and AI Algorithms

Within the realm of AI and intellectual property, trademarks and trade secrets present unique challenges and opportunities that require special attention in the evolving legal landscape. As AI systems often require extensive training datasets and proprietary algorithms, determining what constitutes a protectable trade secret becomes more complex. Companies must navigate how to safeguard their AI-related innovations, including the datasets used for training, without hindering the collaborative nature of AI development.

Trade secret laws may need refinement in order to address issues like reverse engineering of AI algorithms and the accidental disclosure of sensitive information by AI systems. However, given the limitations associated with patenting and copyrighting AI-related content, trade secret principles seem to present an alternative, at least in the USA. Patents necessitate a demonstrated utility disclosed in the application, while trade secrets lack this requirement. Trade secrets cover a broader range of information without the immediate need to disclose utility. In addition, trade secret law allows information created by an AI system to be protected, even if the creator is not an individual. This differs from patent law, which requires a human inventor listed on the application. 


Trade secrets, traditionally associated with formulae and confidential business information, now extend to AI algorithms and proprietary models. Safeguarding these trade secrets is critical for maintaining a competitive edge in industries in which AI plays a pivotal role. In the USA, trade secret law safeguards a broad spectrum of information, encompassing financial, business, scientific, technical, economic, or engineering data, as long as the owner has taken reasonable measures to maintain its secrecy, and the information derives value from not being widely known or easily accessible through legitimate means by others who could benefit from its disclosure or use (as defined in 18 U.S.C. §1839(3)). It is important, however, to consider that patent owners have a monopoly on the right to make, use, or sell the patented invention. In contrast, owners of AI-based trade secrets face the risk of competitors reverse engineering the trade secret, which is permitted under US trade secret law.

Requirements related to secrecy exclude trade secret protection for AI-generated outputs that are not confidential, such as those produced by systems like ChatGPT or Dall·E. Nevertheless, trade secret laws seem to be more flexible to safeguard various AI-related assets, including training data, AI software code, input parameters, and AI-generated content intended solely for internal and confidential purposes. Importantly, there is no stipulation that a trade secret must be originated by a human being, while AI-generated material is treated like any other form of information, as evident in 18 U.S.C. §1839(4), which defines trade secret ownership.

Given that traditional laws seem to provide ambiguous guidance on AI and copyright, numerous AI innovators opt for trade secret protections over patents to safeguard their AI advancements, as these innovations in commercial use frequently remain concealed and difficult for others to detect. With the AI Act soon to become law, there is a likelihood that the EU will necessitate disclosing how AI innovations operate, categorising them as limited or high risk. This could render trade secret safeguarding no longer viable in some instances.

Establishing clear guidelines for what qualifies as a trade secret in the AI domain, and defining the obligations of parties involved in AI collaborations will be essential for fostering innovation while ensuring the protection of valuable business assets.

Trademarks and Branding in the AI Era


The integration of AI technologies into product and service offerings has also reshaped the landscape of trademark protection, presenting both challenges and opportunities for businesses. Traditionally associated with logos, brand names, and distinctive symbols, trademarks now extend their scope to encompass AI-generated content, virtual personalities, and unique algorithms associated with a particular brand. As companies increasingly rely on AI for customer interactions, the challenge of maintaining brand consistency in automated, AI-powered engagements becomes paramount. In the realm of AI-driven customer service and chatbots, the traditional understanding of the ‘average consumer’ in trademark infringement cases undergoes transformation. When an AI application acquires a product with minimal or no human involvement, determining who, or more crucially, what constitutes the average consumer becomes a pertinent question. Likewise, identifying responsibility for a purchase that results in trademark infringement in such scenarios becomes complex.

While there have been no known cases directly addressing the issue of AI and liability in trademark infringement, there have been several cases within the past decade adjudicated by the Court of Justice of the European Union (CJEU) that could offer insights into the matter when considering this new technology. For instance, the Louis Vuitton vs Google France decision focused on keyword advertising and the automatic selection of keywords in Google’s AdWords system. It concluded that Google wouldn’t be accountable for trademark infringement unless it actively participated in the keyword advertising system. Similarly, the L’Oréal vs eBay case, which revolved around the sale of counterfeit goods on eBay’s online platform, determined that eBay wouldn’t be liable for trademark infringement unless it had clear awareness of the infringing activity. A comparable rationale was applied in the Coty vs Amazon case. 

It would seem that if a provider of AI applications implemented adequate takedown procedures and had no prior knowledge of infringing actions, it would likely not be held responsible for such infringements. However, when the AI provider plays a more active role in potentially infringing actions, these cases indicate that the provider could be held accountable.

In the 2014 case of Cosmetic Warriors Ltd and Lush Ltd vs Amazon.co.uk Ltd and Amazon EU Sarl before the United Kingdom High Court, Amazon was found liable for trademark infringement. Amazon had used Google ads mentioning ‘lush’ to direct shoppers to its UK website, where, Lush argued, Amazon infringed its trademark by displaying ‘LUSH’ in ads and search results for similar products without making clear that Lush items were not available on Amazon. The Court explained that consumers were unable to discern whether the products offered for sale were those of the brand owner, illustrating that the evolving definition of the average consumer and the delineation of responsibility in trademark infringement cases involving AI require nuanced legal consideration.


Conclusion

As AI continues to impact various industries, the ongoing evolution of intellectual property laws will play a pivotal role in defining and safeguarding AI innovations, underscoring the need for adaptable regulations that balance innovation and protection. The intersection of AI and intellectual property introduces novel challenges and opportunities, necessitating a thoughtful and adaptive legal framework. One crucial aspect involves the recognition and protection of AI-generated innovations. Traditional IP laws, such as patents, copyrights, and trade secrets, were designed with human inventors in mind. However, the autonomous and generative nature of AI raises questions about the attribution of authorship and inventorship. Legal systems will need to address whether AI-generated creations should be eligible for patent or copyright protection and, if so, how to attribute ownership and responsibility. This demands a forward-thinking approach from policymakers, legal scholars, and industry stakeholders to craft a legal landscape that not only accommodates the transformative potential of AI, but also safeguards the rights, responsibilities, and interests of all parties involved.

AI industry faces threat of copyright law in 2024

Copyright laws are set to provide a substantial challenge to the artificial intelligence (AI) sector in 2024, particularly in the context of generative AI (GenAI) technologies becoming pervasive in 2023. At the heart of the matter lie concerns about the use of copyrighted material to train AI systems and the generation of results that may be significantly similar to existing copyrighted works. Legal battles are predicted to affect the future of AI innovation and may even change the industry’s economic models and overall direction.
According to tech companies, the lawsuits could create massive barriers to the expanding AI sector. On the other hand, the plaintiffs claim that the firms owe them payment for using their work without fair compensation or authorization.

Legal Challenges and Industry Impact

AI programs that generate outputs comparable to existing works could infringe copyright if they had access to those works and produced substantially similar results. In late December 2023, the New York Times became the first American news organisation to file a lawsuit against OpenAI and its backer Microsoft, asking the court to order the destruction of large language models (LLMs), including the famous chatbot ChatGPT, and of all training datasets that rely on the publication’s copyrighted content. The news outlet alleges that their AI systems engaged in ‘widescale copying’, a violation of copyright law.
This high-profile case illustrates the broader legal challenges faced by AI companies. Authors, creators, and other copyright holders have initiated lawsuits to protect their works from being used without permission or compensation.

As recently as 5 January 2024, authors Nicholas Basbanes and Nicholas Gage filed a new complaint against OpenAI and its investor Microsoft, alleging that their copyrighted works were used without authorisation to train the companies’ AI models, including ChatGPT. In the proposed class action complaint, filed in federal court in Manhattan, they charge the companies with copyright infringement for including multiple works by the authors in the datasets used to train OpenAI’s GPT large language model (LLM).


This lawsuit is one among a series of legal cases filed by multiple writers and organizations, including well-known names like George R.R. Martin and Sarah Silverman, alleging that tech firms utilised their protected work to train AI systems without offering any payment or compensation. The results of these lawsuits could have significant implications for the growing AI industry, with tech companies openly warning that any adverse verdict could create considerable hurdles and uncertainty.

Ownership and Fair Use

Questions about who owns the outcome generated by AI systems—whether it is the companies and developers that design the systems or the end users who supply the prompts and inputs—are central to the ongoing debate. The ‘fair use’ doctrine, often cited by the United States Copyright Office (USCO), the United States Patent and Trademark Office (USPTO), and the federal courts, is a critical parameter, as it allows creators to build upon copyrighted work. However, its application to AI-generated content with models using massive datasets for training is still being tested in courts.

Policy and Regulation

The USCO has initiated a project to investigate the copyright legal and policy challenges brought by AI. This involves evaluating the scope of copyright in works created by AI tools and the use of copyrighted content in training foundational and LLM-powered AI systems. This endeavour is an acknowledgement of the need for clarification and future regulatory adjustments to address the pressing issues at the intersection of AI and copyright law.

Industry Perspectives

Many stakeholders in the AI industry argue that training generative AI systems, including LLMs and other foundational models, on the large and diverse content available online, most of which is copyrighted, is the only realistic and cost-effective method to build them. According to the Silicon Valley venture capital firm Andreessen Horowitz, extending copyright rules to AI models would potentially constitute an existential threat to the current AI industry.

Why does it matter?

The intersection of AI and copyright law is a complex issue with significant implications for innovation, legal liability, ownership rights, commercial interests, policy and regulation, consumer protection, and the future of the AI industry.

The AI sector in 2024 is at a crossroads with existing copyright laws, particularly in the US. The legal system’s reaction to these challenges will be critical in striking the correct balance between preserving creators’ rights and promoting AI innovation and progress. As lawsuits proceed and policymakers engage with these issues, the AI industry may face significant pressure to adapt, depending on the legal interpretations and policy decisions that will emerge from the ongoing processes. Ultimately, these legal fights could determine who the market winners and losers would be.

OEWG’s sixth substantive session: the highlights

The sixth substantive session of the UN Open-Ended Working Group (OEWG) on security of and the use of information and communications technologies 2021–2025 was held in December 2023, marking the midway point of the process.

Threats

The risks and challenges associated with emerging technologies, such as AI, quantum computing, and the IoT, were highlighted by several countries. Numerous nations expressed concern about the increasing frequency of ransomware attacks and their impact on various entities, including critical infrastructure, local governments, health institutions, and democratic institutions.

The need for capacity building efforts to enhance cybersecurity capabilities globally was emphasised by multiple countries, recognising the importance of preparing for and responding to cyber threats.

The Russian Federation raised concerns about the potential for interstate conflicts arising from the use of information and communication technologies (ICTs). It proposed discussions on a global information security system under UN auspices. El Salvador discussed evolving threats in the ICT sector, particularly during peacetime, indicating that cybersecurity challenges are not limited to times of conflict.

Delegates discussed the impact of malicious cyber activities on international trust and development, particularly in the context of state-sponsored cyber threats and cybercrime.

Several countries, including the United Kingdom, Kenya, Finland, and Ireland, focused on the intersection of AI and cybersecurity, advocating for approaches that consider the security implications of AI systems.

Some countries, including Iran and Syria, expressed concerns about threats to sovereignty in cyberspace, including issues related to internet governance and potential interference in internal affairs.

Many countries emphasised the importance of international cooperation and information sharing to address cybersecurity challenges effectively. Proposals for repositories of information on threats and incidents were discussed. The idea of a global repository of cyber threats, as advanced by Kenya, enjoys much support.

Rules, norms and principles 

Many delegations shared how they have already begun implementing national and regional norms through policies, laws and strategies. At the same time, some delegations shared the existing gaps and ongoing processes to introduce new laws, in particular, to protect critical infrastructure (CI) and implement CI-related norms. 

Clarifying the norms and providing implementation guidance

Delegations also signalled that clarifying the norms and providing implementation guidance is necessary. Singapore, for instance, supported the proposal to develop broader norm implementation guidance, such as a checklist. The Netherlands argued that such guidance should not only consider the direct impact of malicious cyber activities but also consider the cascading effects that such activities may have, including their impact on citizens. Canada stressed that a checklist would be a complementary tool, formulating voluntary and non-binding guidelines, while some delegations (e.g. China and Syria) called for translating norms as political commitments into legally binding elements. 

Australia suggested first focusing on developing norms implementation guidance for the three CI norms (F, G, and H). China, among many other delegations, expressed the same need to develop guidelines for the protection of CI. Portugal proposed focusing on clarifying and implementing due diligence, including by the private sector, in the protection of CI, and France supported this proposal.

Norms related to ICT supply chain security and vulnerability reporting

In response to the Chair’s query about the norms related to ICT supply chain security and vulnerability reporting, Switzerland presented the Geneva Manual on Responsible Behaviour in Cyberspace. This inaugural edition offers comprehensive guidance for non-state stakeholders, emphasising norms related to supply chain security and responsible vulnerability reporting. At the same time, the UK and France raised the issue of the use of commercially available intrusion capabilities. The UK expressed its concerns about the growing market of software intrusion capabilities. It stressed that all actors, including the private sector, are responsible for ensuring that the development, facilitation, and use of commercially available ICT capabilities do not undermine stability in cyberspace. In addition, France highlighted the need to guarantee the integrity of the supply chain by ensuring users’ trust in the safety of digital products and, in this context, cited the European Cyber Resilience Act proposal, which aims to impose cybersecurity requirements for digital products. China also addressed these norms and argued that some states abuse them by developing their own standards for supply chain security, undermining fair competition for businesses. China also said all states should explicitly commit themselves to not proliferating offensive cyber technologies and noted that the term ‘peacetime’ had never been used in the context of the 11 norms in earlier consensus documents.

New norms vs existing norms 

Delegations had divergent views on whether new norms should be developed. Some countries supported the idea of creating new norms until 2025 (the end of the OEWG mandate); in particular, China called for new norms on data security issues. Other delegations (e.g. Canada, Colombia, France, Israel, the Netherlands, and Switzerland) opposed the development of new norms and instead called for implementing existing ones.

South Africa emphasised the need to intensify implementation efforts to identify any gaps in the existing normative frameworks and to determine whether additional norms are needed to close them. Brazil stressed that implementing existing norms is not contradictory to discussing the possible adoption of legally binding norms and thus rejected the idea that ‘there is any dichotomy opposing both perspectives’. Brazil expressed its openness to considering the adoption of both additional voluntary norms and legally binding ones to promote peaceful cyberspace.

International law

The discussion on international law in the use of ICTs by states was guided by four questions: whether states see convergences in perspectives on how international law applies in the use of ICTs, whether there are possible unique features of cyber domain as compared to other domains that would require distinction in application of international law, whether there are gaps in applicability, and on capacity-building needs. While some delegations had statements prepared by legal departments or had legal counsel input, others, especially developing countries, needed support in formulating their interventions.

Convergences in perspectives on how international law applies in the use of ICTs

The overwhelming majority of delegations agreed that international law, in particular the UN Charter, is applicable in cyberspace (Thailand, Denmark, Iceland, Norway, Sweden, Finland, Brazil, Estonia, El Salvador, Austria, Canada, the EU, Republic of Korea, Netherlands, Israel, Pakistan, UK, Bangladesh, India, France, Japan, Singapore, South Africa, Australia, Chile, Ukraine, and others). These states see the need to deepen a common understanding of how existing international law applies in cyberspace, alongside its possible implications and legal consequences. Most delegations also stated that cyberspace is not unique and does not require a distinction in how international law applies. Kenya pointed out the role of regional organisations, the African Union in particular, in clarifying how international law applies to cyberspace, and their contributions to this debate, which was supported by many.

India stated that, in their view, the dynamic nature of cyberspace creates ambiguity in the application of international law since a state, as a subject of international law, can exercise its rights and obligations through its organs or other natural and legal persons. 

Another group of states (Cuba, Nicaragua, Vietnam, and the Syrian Arab Republic) considers cyberspace unique and believes it cannot be addressed by applying existing international law. They call for a legally binding instrument within the UN framework. Russia and Bangladesh see gaps in international law that require new legally binding regulations. According to China and the Syrian Arab Republic, the draft International Convention on International Information Security proposed by the Russian Federation would be a good starting point for such negotiations.

The delegations also discussed general international law principles enshrined in the UN Charter. There is an overarching agreement that the principles of sovereignty and sovereign equality, non-intervention, peaceful settlement of disputes, and prohibition of the use of force apply in cyberspace (Malaysia, Australia, Russian Federation, Italy, the USA, India, Canada, Switzerland, Czech Republic, Estonia, Ireland, and others). The states concluded that the principles of due diligence, attribution, invoking the right of self-defence, and assessing whether an internationally wrongful act has been committed require additional work to understand how they apply in cyberspace.

Many delegations (Australia, Canada, the EU, New Zealand, Germany, Switzerland, Estonia, El Salvador, the USA, Singapore, Ireland, and others) stated that the discussions need to clarify how international law addresses violations, what rights and obligations arise in such case, and how international law of state responsibility applies in cyberspace. Mexico, Italy and Bangladesh see value in the contributions of the UN International Law Commission to this debate.

The majority of delegations see convergence in understanding that international humanitarian law applies in cyberspace in cases of armed conflict and that the states must adhere to international legal principles of humanity, necessity, proportionality and distinction (Kiribati, UK, Germany, the USA, Netherlands, El Salvador, Ukraine, Denmark, Czech Republic, Australia, others). Deeper discussions on this matter are necessary. Cuba, in line with its previous statements, disagrees with the concept of applying international humanitarian law in cyberspace.

Addressing capacity building in international law, Uganda stated that it is extremely difficult for developing countries to be equal partners and effectively participate globally due to a lack of expertise and capacity. The majority of countries supported continuous capacity building efforts in international law (Thailand, Mexico, Nordic countries, Estonia, Ireland, Kenya, the EU, Spain, Italy, Republic of Korea, Netherlands, Malaysia, Bangladesh, India, France, Japan, Singapore, Australia, Switzerland), with Canada mentioning two priority areas: national expertise to enable meaningful participation in substantive legal discussions in multilateral processes such as the OEWG, and expertise to develop national or regional positions. Almost all delegations found the recent UNIDIR workshop to be a valuable contribution to understanding international law’s applicability in cyberspace.

Several delegations have underscored the value of sharing national positions (Thailand, Brazil, Austria, the EU, Israel, the UK, India, Nigeria, Nordic countries, and Mexico) in capacity-building and confidence-building measures.

Going forward, most speakers (Estonia, the EU, Austria, Spain, Italy, El Salvador, the Republic of Korea, the UK, Malaysia, Japan, Chile, and others) have supported the proposal to hold a two-day inter-sessional meeting dedicated to international law.

CBMs

Operationalisation of the Global POC Directory

Many states supported the operationalisation of the agreements to establish a global POC Directory. Australia stressed that those states already positioned to nominate their diplomatic and technical POCs should do so promptly. Switzerland, however, reiterated that the POC Directory should not duplicate the work of CERT and CSIRT teams. The Netherlands stressed the need to regularly evaluate the performance of the POC Directory once it is established. Ghana supported this proposal to develop a feedback mechanism to collect input from states on the Directory’s functionality and user experience. At the end of this agenda item, the Chair also addressed the participation of stakeholders and shared that a dedicated intersessional meeting in May will be convened to discuss stakeholders’ role in the POC directory.

Role of regional organisations

Some delegations (e.g. the US, the EU, Singapore, etc.) highlighted the role of regional organisations in operationalising the POC directory and CBMs. However, several delegations expressed their concerns – e.g. Cuba stated that they are not in favour of ‘attempts to impose the recognition of specific organisations as regional interlocutors on the subject when they do not include the participation of all member states of the region in question’. The EU noted that not all states are members of regional organisations and added that the UN should develop global recommendations and good practices on cyber CBMs and encourage regional dialogue and exchanges.

Additional CBMs

Delegations discussed the potential addition of new CBMs. Iran highlighted the need for universal terminology in ICT security to reduce the risk of misunderstanding between states. India reiterated its proposal for a global cybersecurity cooperation portal to address cooperation channels for incident response and called for differentiating between cyberterrorism and other cyber incidents in this context. India also suggested that the OEWG may focus on building mechanisms for states to cooperate in investigating cyber crimes and sharing digital forensic evidence. At the end of this agenda item, the Chair highlighted that the OEWG must continue discussions on potentially adding new CBMs and on identifying whether any additional measures are needed.

Capacity building

The recent discussions on cybersecurity highlighted a consensus among participating nations regarding the urgency and cross-cutting nature of cyber threats. Delegations emphasised the importance of cyber capacity building (CB) in enabling countries to identify and address these threats while adhering to international law and norms for responsible behaviour in cyberspace. Central to the dialogue was the pursuit of equity among nations in achieving cyber resilience, with a recurring emphasis on the ‘leave no country behind’ principle. The notion of foundational capacities was at the centre of the debates. The development of legal frameworks, dedicated agencies, and incident response mechanisms, especially Computer Emergency Response Teams (CERTs) and CERT cooperation, was highlighted. However, delegations also stressed the importance of national contexts and the lack of one-size-fits-all answers to foundational capacities. Instead, efforts should be tailored to individual countries’ specific needs, legal landscapes, and infrastructure.

Other issues highlighted were the shortage of qualified cybersecurity personnel and the need to develop technical skills through sustainable and self-sufficient traineeship programs, such as train-the-trainer initiatives. Notable among these initiatives was the Western Balkans Cyber Capacity Centre (WB3C), a long-term project fostering information exchange, good practices, and training courses, developed by Slovenia and France together with Montenegro.

Concrete actions emerged in response to past calls from delegations. Two critical planned exercises, the mapping exercise and the Global Roundtable on CB, were commended. The mapping exercise, scheduled for March 2024, aims to comprehensively survey global cybersecurity capacity-building initiatives, enhancing operational awareness and coordination. The Global Roundtable, scheduled for May 2024, is considered a milestone in involving the UN, showcasing ongoing initiatives, creating partnerships, and facilitating a dynamic exchange of needs and solutions. These initiatives align with the broader themes of global cooperation, encompassing south-south, north-south, and triangular collaboration in science, technology, and innovation, emphasising needs-based approaches by matching initiatives with specific needs.

Additional points from the discussions included a presentation from India on the technical aspects of the Global Cyber Security Cooperation Portal, emphasising synergy with existing portals. Delegations also supported a voluntary checklist of mainstream cyber capacity-building principles proposed by Singapore. Furthermore, the outcomes of the Global Conference on Cyber Capacity Building, hosted by Ghana and jointly organised by the Cyber Peace Institute, the World Bank, and the World Economic Forum, garnered endorsement from many delegations. The ‘Accra call,’ as it is being termed, is a practical action framework to strengthen cyber resilience as a vital enabler for sustainable development. Switzerland announced its plan to host the follow-up conference in 2025 and urged all states to endorse the Accra Call for cyber-resilient development.

Regular institutional dialogue

The 6th substantive session of the current OEWG marks the halfway point of its mandate, and the fate of future dialogue on international ICT security remains open. The situation gained a new plot twist: in addition to the Programme of Action (PoA) proposed by France and Egypt back in 2019 and recently noted by GA resolutions 77/37 and 78/16, Russia tabled a new concept paper introducing a permanent OEWG as an alternative.

Delegations spent more than three hours in total discussing the RID issue. All supporters of the PoA stressed the number of votes that resolution 78/16 received in the GA: 161 states upheld the option to create a permanent, inclusive, and action-oriented mechanism under UN auspices upon the conclusion of the current OEWG and no later than 2026, implying the PoA. Notably, supporters of the resolution stressed that the final vision of the PoA would be defined at the OEWG in a consensus manner, considering the common elements expressed in the 2nd Annual Progress Report. Several states noted that no PoA discussions should be held outside the OEWG, to maintain consistency.

There is no consolidated view on the details of the PoA architecture. Egypt and Switzerland provided some ideas about the number and frequency of meetings and review mechanisms. Slovakia, Germany, Switzerland, Japan, Ireland, Australia, Colombia, the Netherlands, and France suggested including in the PoA architecture already-discussed initiatives, such as the POC directory, the Cyber Portal, the threat repository, and the national implementation survey, as well as other future ideas. The PoA recognises the possibility of developing new norms (beyond the agreed framework); through the future review mechanism, it may identify gaps in existing international law and, if necessary, consider new legally binding norms to fill them. As for additional common elements of the RID, some states pointed to inclusivity: the PoA should allow multistakeholder participation during meetings, especially from the private sector, and allow stakeholders to submit positions. However, final decision-making will remain with states only.

The Russian proposal of a permanent OEWG after 2025 was co-sponsored by 11 states. It offers several principles for the group’s future work, stressing the consensus nature of decisions and stricter rules for stakeholder participation. It also provides detailed procedural rules and modalities of work.

The consensus issue was crucial at this substantive session, as many states, even supporters of the PoA, stressed it in their statements. The problem may lie in resolution 78/16, which does not specify a consensus mode of work, noting only that the mechanism should be ‘permanent, inclusive and action-oriented’.

Another divergence between the two formats is their main scope. According to statements by PoA supporters, the PoA should focus on implementing the existing framework of responsible state behaviour in cyberspace and concentrate efforts on capacity building to enable developing countries to cope with that. There may be a place for dialogue on new threats and norms, but this is not a primary task. On the contrary, a permanent OEWG would concentrate on drafting legally binding norms and mechanisms for their implementation as elements of a new treaty or convention on ICT security. However, other aspects, such as CBMs and capacity building, would also remain in its scope.

For Russia, the push for the permanent OEWG format may be about substance as well as about preserving its image as the pioneer of cyber negotiations at the UN and an agenda-setter. If the OEWG as a format ends in 2025, it will end a tradition of Russian diplomacy with more than 20 years of history. Moreover, earlier this year, in its submission to the UN Secretary-General under resolution 77/37, Russia frankly expressed its negative attitude towards the PoA, saying that it will be ‘used by Western countries, in line with the ‘rules-based order’ concept promoted by the United States, to impose non-binding rules and standards to their advantage, instead of international law’.

The Chair plans to convene intersessional meetings on regular institutional dialogue in 2024 to deliberate this issue carefully.

ChatGPT: A year in review

As ChatGPT turns one, the significance of its impact cannot be overstated. What started as a pioneering step in AI has swiftly evolved into a ubiquitous presence, transforming abstract notions of AI into an everyday reality for many or at least a topic on everyone’s lips.

While ChatGPT and similar large language models (LLMs) have so far offered only glimpses of the possibilities within AI, they are the pillars of the new technological revolution. Predictions suggest these models will become increasingly personalised and context-specific, leveraging proprietary data for refined model training and industry-specific automation.


Important milestones throughout the year


Source: https://www.globalxetfs.com/

Since its public launch in November 2022, ChatGPT has undergone substantial evolution. Initially, it operated solely as a text generator, limited to responses derived from training data gathered up to September 2021, and it tended to fabricate information when lacking answers, introducing the term ‘hallucination’ into discussions of AI.

Today, the evolved iteration of ChatGPT, trained on data up to April 2023, boasts expanded capabilities. It now harnesses Microsoft’s Bing search engine and internet resources to access more current information. Moreover, it has become a product platform, enabling the integration of images and documents into searches and facilitating conversation through spoken language.

Tech race for AI dominance


In January 2023, ChatGPT reached 100 million monthly users. The sudden surge of interest in generative AI has taken major tech companies by surprise. In addition to ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google’s Bard, have been released, reshaping the technological terrain. Tech giants are pouring resources into what they perceive as a pivotal future technological infrastructure and shaping the narrative of the AI revolution. However, a significant challenge looming ahead is the potential dominance of only a select few players in this landscape.

Venture capitalists invested almost five times as much into generative AI firms in the first half of 2023 as during the same period last year. Even excluding a $10 billion investment by Microsoft unveiled in January, VC funding is still up nearly 58% compared with the first half of 2022.

The anticipated economic impact is substantial, with PwC forecasting that AI could add over $15 trillion to the global economy by 2030. The largest economies – the US and China – are at the forefront of this new ‘AI arms race’.

According to the 2023 AI Index Report, the United States and China have consistently held the spotlight regarding AI investment, with the US taking the lead since 2013, accumulating close to $250 billion across 4,643 companies. The momentum in investment shows no signs of slowing. In 2022, the US witnessed the emergence of 524 new AI startups, drawing in an impressive $47 billion from non-government funding. Meanwhile, there were also substantial investment trends in China, with 160 newly established AI startups securing an average of $71 million each in 2022.

Many of these new startups are leveraging the ChatGPT API to build specific use-case scenarios for users.
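As an illustration (not drawn from the article), here is a minimal sketch of how such a startup might wrap the chat-completions API for one narrow use case. The endpoint and payload shape follow OpenAI’s public chat-completions format; the support-ticket summariser and its prompt are hypothetical. Only the request is constructed, so the sketch runs without network access or an API key.

```python
import json

API_URL = "https://api.openai.com/v1/chat/completions"


def build_request(ticket_text: str, api_key: str):
    """Return (headers, payload) ready to POST with any HTTP client."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": "gpt-3.5-turbo",
        "messages": [
            # The system message pins the model to the startup's use case.
            {"role": "system",
             "content": "Summarise the support ticket in one sentence."},
            # The user message carries the actual input.
            {"role": "user", "content": ticket_text},
        ],
    }
    return headers, payload


headers, payload = build_request("My order never arrived.", "sk-...")
body = json.dumps(payload)  # serialised request body for the POST
```

The value such startups add lies almost entirely in the prompt and the surrounding product, not in the model itself.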


AI governance – to regulate or not to regulate

In the midst of AI’s incredible advancements, there’s a shadow of concern. The worry about AI generating misleading or inappropriate content, often referred to as ‘hallucinating’, remains a significant challenge. The fear of AI also extends to broader societal implications, such as biases, job displacement, data privacy, the spread of disinformation, and AI’s impact on decision-making processes.

The meteoric rise of OpenAI was one of the main reasons for the swift action from policymakers on artificial intelligence regulation. OpenAI CEO Sam Altman was hosted by the US Congress and the European Commission during negotiations of the new AI regulatory frameworks in the United States and the European Union.

The United States

The global landscape of AI regulation is gradually taking shape. On 30 October, President Biden issued an executive order mandating AI developers to provide the federal government with an evaluation of the data used to train and test their AI applications, their performance measurements, and their vulnerability to cyberattacks. The Biden-Harris administration is making progress in crafting domestic AI regulation, including through the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the voluntary commitments from AI companies to manage the risks posed by the technology. This is recognised as the US government endorsing an industry self-regulation approach, and it was welcomed by the industry.

In Congress, there are several bipartisan proposals. Just last week, prominent Senators Amy Klobuchar and John Thune and their colleagues introduced the bipartisan ‘AI Research, Innovation, and Accountability Act’ to boost innovation while increasing transparency, accountability, and security for high-risk AI applications.

European Union

The tiered approach (as currently envisioned in the EU AI Act) would mean categorising AI into different risk bands, with more or less regulation depending on the risk level.

In the EU, two and a half years after the draft rules were proposed, the negotiation on the final version hit a significant snag, as France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models. It seems that the EU’s largest economies are moving away from the concept of stringent AI regulation and inclining towards a self-regulatory approach akin to the US model. Many speculate that this shift is a consequence of intense lobbying efforts by Big Tech. These three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to retreat from the approach. What France, Germany, and Italy want is to regulate only the use of AI rather than the technology itself, proposing ‘mandatory self-regulation through codes of conduct’ for foundation models.

China

China was the first country to introduce its interim measures on generative AI, effective in August this year.

What is the aim? To solidify China’s role as a key player in shaping global standards for AI regulation. China also unveiled its Global AI Governance Initiative during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums. This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act.


At the international level

At the international level, there are initiatives such as the establishment of a High-Level Advisory Body on AI by the UN Secretary-General, the group of seven wealthy nations (G7) agreeing on the Hiroshima guiding principles and endorsing an AI code of conduct for companies, the AI Safety Summit at Bletchley Park, and more.

The UN Security Council on AI

The UN Security Council held its first-ever debate on AI (18 July), delving into the technology’s opportunities and risks for global peace and security. A few experts were also invited to participate in the debate chaired by Britain’s Foreign Secretary James Cleverly. In his briefing to the 15-member council, UN Secretary-General Antonio Guterres promoted a risk-based approach to regulating AI and backed calls for a new UN entity on AI, akin to models such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change.

G7

The G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI. A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain aspects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.

UK AI Safety Summit

The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.

The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While addressing these immediate concerns, the focus shifted to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm.

Signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU. Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. The summit resulted in an agreement to form an international advisory panel on AI risk.

UN’s High-Level Advisory Body on AI

The UN has taken a unique approach by launching a High-Level Advisory Body on AI comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body plans to publish its first recommendations by the end of this year, with final recommendations expected next year. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.

Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks. 

What can we expect from language models in the future?

If the industry keeps its focus on research and investment, 2024 will bring some massive breakthroughs. For OpenAI, the focus is on the Q* project, a model that can reportedly solve certain math problems and is alleged to have a higher reasoning capacity. This could be a potential breakthrough on the path towards artificial general intelligence (AGI). If language models extend their powers into the realm of math and reasoning, they will reach higher levels of usefulness. Many prominent voices, including Elon Musk, argue that ‘digital superintelligence’ will exist within the next five to ten years.

When it comes to regulation, the spotlight will continue to be on ensuring the safety of AI usage while removing bias from future datasets, with further calls for global collaboration in AI governance and for greater transparency of these models.

Must read

Four seasons of AI:  From excitement to clarity in the first year of ChatGPT – Diplo
ChatGPT was launched by OpenAI on the last day of November 2022. It triggered a lot of excitement. Over the last 12 months, the winter of AI excitement was… Read more.
Valtazar Bogišić: How can the 1888 Code inspire the AI code? – Diplo
Our quest for effective AI governance can be informed by the legal wisdom of Valtazar Bogisic, drafter of the Montenegrin civil code (1888) Read more.
How can we deal with AI risks?
In the fervent discourse on AI governance, there’s an oversized focus on the risks from future AI compared to more immediate risks, such as short-term risks that include the protection of intellectual property. In this blog post, Jovan Kurbalija explores how we can deal with AI risks. Read more.
Jua Kali AI: Bottom-up algorithms for a Bottom-up economy – Diplo
This text is about bottom-up AI for the bottom-up economy. Read more.
Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems? – Diplo
We examine the use of AI “hallucinations” in diplomacy, showing how AI analysis of UN speeches can reveal unique insights, and argue that the unexpected outputs of AI could lead… Read more.

The intellectual property saga: The age of AI-generated content | Part 1

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2

The intellectual property saga: approaches for balancing AI advancements and IP protection | Part 3

As AI advances rapidly, machines are increasingly gaining human-like skills, blurring the distinction between humans and machines. Traditionally, computers were tools that assisted human creativity, with clear distinctions: humans had sole ownership and authorship. However, recent AI developments enable machines to independently perform creative tasks, including complex functions such as software development and artistic endeavours like composing music, generating artwork, and even writing novels.

This has sparked debates about whether creations produced by machines should be protected by copyright and patent laws. Furthermore, the question of ownership and authorship becomes complex: should credit be given to the machine itself, the humans who created the AI, the works the AI feeds off, or perhaps none of the above?

This essay initiates a three-part series that delves into the influence of AI on intellectual property rights (IPR). To start off, we will elucidate the relationship between AI-generated content and copyright. In the following essays, we will assess the ramifications of AI on trademarks and patents, as well as the strategies employed to safeguard intellectual property (IP) in the age of AI.

Understanding IP and the impact of AI 

In essence, IP encompasses a range of rights aimed at protecting human innovation and creativity. These rights include patents, copyrights, trademarks, and trade secrets. They serve as incentives for people and organisations to invest their time, resources, and intelligence in developing new ideas and inventions. Current intellectual property rules and laws focus on safeguarding the products of human intellectual effort. 

Google recently provided financial support for an AI project designed to generate local news articles. Back in 2016, a consortium of museums and researchers based in the Netherlands revealed a portrait named ‘The Next Rembrandt’, an artwork created by a computer that had meticulously analysed numerous pieces crafted by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn. In principle, such works could be seen as ineligible for copyright protection due to the absence of a human creator. As a result, they might be used and reused without limitation by anyone. This presents a major obstacle for companies selling these creations: because the art isn’t protected by copyright law, anyone worldwide can use it without having to pay for it.

Hence, when it comes to creations that involve little to no human involvement, the situation becomes more complex and blurred. Recent rulings in copyright law have followed two distinct approaches.

One approach was to deny copyright protection to works generated by AI (computers), potentially allowing them to become part of the public domain. This approach has been adopted by most countries and was exemplified in the 2022 DABUS case, which centred around an AI-generated image. The US Copyright Office supported this stance by stating that AI lacks the necessary human authorship for a copyright claim. Other patent offices worldwide have made comparable decisions, except for South Africa, where the AI machine Device for Autonomous Bootstrapping of Unified Sentience (DABUS), is recognised as the inventor, and the machine’s owner is acknowledged as the patent holder.

In Europe, the Court of Justice of the European Union (CJEU) has made significant declarations, as seen in the influential Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening). These declarations emphasise that copyright applies exclusively to original works, requiring that originality represents the author’s own intellectual creation. This typically means that an original work must reflect the author’s personal input, highlighting the need for a human author for copyright eligibility.

The second approach involved attributing authorship to human individuals, often the programmers or developers. This is the approach followed in countries like the UK, India, Ireland, and New Zealand. UK copyright law, specifically section 9(3) of the Copyright, Designs, and Patents Act (CDPA), embodies this approach, stating:

‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’


AI-generated content and copyright


This illustrates that the laws in many countries are not equipped to handle copyright for non-human creations. One of the primary difficulties is determining authorship and ownership when it comes to AI-generated content. Many argue that it’s improbable for a copyrighted work to come into existence entirely devoid of human input. Typically, a human is likely to play a role in training an AI, and the system may acquire knowledge from copyrighted works created by humans. Furthermore, a human may guide the AI in determining the kind of work it generates, such as selecting the genre of a song and setting its tempo, etc. Nonetheless, as AI becomes more independent in producing art, music, and literature, traditional notions of authorship become unclear. Additionally, concerns have arisen about AI inadvertently replicating copyrighted material, raising questions about liability and accountability. The proliferation of open-source AI models also raises concerns about the boundaries of intellectual property.

In a recent case, US District Judge Beryl Howell ruled that art generated solely by AI cannot be granted copyright protection. This ruling underscores the need for human authorship to qualify for copyright. The case stemmed from Stephen Thaler’s attempt to secure copyright protection for AI-generated artworks. Thaler, the Chief Engineer at Imagination Engines, has been striving for legal recognition of AI-generated creations since 2018. Furthermore, the US Copyright Office has initiated a formal inquiry, called a notice of inquiry (NOI), to address copyright issues related to AI. The NOI aims to examine various aspects of copyright law and policy concerning AI technology. Microsoft is offering legal protection to users of its Copilot AI services who may face copyright infringement lawsuits. Brad Smith, Microsoft’s Chief Legal Officer, introduced the Copilot Copyright Commitment initiative, in which the company commits to assuming legal liabilities associated with copyright infringement claims arising from the use of its AI Copilot services.

On the other hand, Google has submitted a report to the Australian government, highlighting the legal uncertainty and copyright challenges that hinder the development of AI research in the country. Google suggests that there is a need for clarity regarding potential liability for the misuse or abuse of AI systems, as well as the establishment of a new copyright system to enable fair use of copyright-protected content. Google compares Australia unfavourably to other countries with more innovation-friendly legal environments, such as the USA and Singapore.

Training AI models with protected content


Clarifying the legal framework of AI and copyright also requires further guidelines on the training data of AI systems. To train AI systems like ChatGPT, a significant amount of data comprising text, images, and parameters is indispensable. During the training process, AI platforms identify patterns to establish guidelines, make assessments, and generate predictions, enabling them to provide responses to user queries. However, this training procedure may potentially involve infringements of IPR, as it often involves using data collected from the internet, which may include copyrighted content.

In the AI industry, it is common practice to construct datasets for AI models by indiscriminately extracting content and data from websites using software, a process known as web scraping. Data scraping is typically considered lawful, although it comes with certain restrictions. Taking legal action for violations of terms of service offers limited solutions, and the existing laws have largely proven inadequate in dealing with the issue of data scraping. In AI development, the prevailing belief is that the more training data, the better. OpenAI’s GPT-3 model, for instance, underwent training on an extensive 570 GB dataset. These methods, combined with the sheer size of the dataset, mean that tech companies often do not have a complete understanding of the data used to train their models.
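As a toy illustration of the extraction step described above (a sketch of the general technique, not any company’s actual pipeline), the core of web scraping – pulling visible text out of an HTML page – can be done with Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser


class TextScraper(HTMLParser):
    """Collect visible text from a page, skipping script/style blocks."""

    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0  # >0 while inside a <script>/<style> element
        self.chunks = []      # extracted text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Keep only non-empty text that is not inside a skipped element.
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


page = ("<html><body><h1>Title</h1>"
        "<script>var x = 1;</script>"
        "<p>Body text.</p></body></html>")
scraper = TextScraper()
scraper.feed(page)
print(scraper.chunks)  # ['Title', 'Body text.']
```

Real dataset-building pipelines run this kind of extraction over millions of crawled pages and then filter and deduplicate the output – which is exactly why companies often cannot say precisely what ended up in their training data.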

An investigation conducted by the online magazine The Atlantic has uncovered that popular generative AI models, including Meta’s open-source Llama, were partially trained using unauthorised copies of books by well-known authors. This includes models like BloombergGPT and GPT-J from the nonprofit EleutherAI. The pirated books, totalling around 170,000 titles published in the last two decades, were part of a larger dataset called the Pile, which was freely available online until recently.

In specific situations, reproducing copyrighted materials may still be permissible without the consent of the copyright holder. In Europe, there are limited and specific exemptions that allow this, such as for purposes like quoting and creating parodies. Despite growing concerns about the use of machine learning (ML) in the EU, it is only recently that EU member states have started implementing copyright exceptions for training purposes. The UK’s 2017 independent AI review, ‘Growing the artificial intelligence industry in the UK’, recommended allowing text and data mining by AI through appropriate copyright laws. In the USA, access to copyrighted training data seems to be somewhat more permissive. Although US law doesn’t include specific provisions addressing ML, it benefits from a comprehensive and adaptable fair use doctrine that has proven favourable for technological applications involving copyrighted materials.

The indiscriminate scraping of data and the unclear legal framework surrounding AI training datasets and the use of copyrighted materials without proper authorisation have prompted legal actions by content creators and authors. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have filed lawsuits against OpenAI and Meta, alleging that their works were used without permission to train AI models. The lawsuits contend that OpenAI’s ChatGPT and Meta’s LLaMA were trained on datasets obtained from ‘shadow library’ websites containing copyrighted books authored by them.

Why does it matter?

In conclusion, as AI rapidly advances, it blurs the lines between human and machine creativity, raising complex questions regarding IPR. Legislators are facing a challenging decision – whether to grant IP protection or not. As AI continues to advance, it poses significant legal and ethical questions by challenging traditional ideas of authorship and ownership. While navigating this new digital frontier, it’s evident that finding a balance between encouraging AI innovation and protecting IPRs is crucial.

If the stance is maintained that IP protection only applies to human-created works, it could have adverse implications for AI development. This would place AI-generated creations in the public domain, allowing anyone to use them without paying royalties or receiving financial benefits. Conversely, if lawmakers take a different approach, it could profoundly impact human creators and their creativity.

Another approach could see AI developers guaranteeing adherence to data acquisition regulations, which might encompass acquiring licences or providing compensation for IP utilised during the training process.

One thing is certain: effectively dealing with IP concerns in the AI domain necessitates cooperation among diverse parties, including policymakers, developers, content creators, and enterprises.

Key takeaways from the sixth UN session on cybercrime treaty negotiations

The 6th session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention is over: from 21 August until 1 September 2023, in New York, delegates from all states finished another round of text-based negotiations. This was the penultimate session before the final negotiation round in February 2024.

Stalled negotiations over scope and terminology

Well, reaching a final agreement does not seem to be easy. A number of Western advocacy groups and Microsoft publicly expressed their discontent with the current draft (updated on 1 September 2023), which, they stated, could be ‘disastrous for human rights’. At the same time, some countries (e.g. Russia and China) shared concerns that the current draft does not meet the scope that was established by the mandate of the committee. In particular, these delegations and their like-minded colleagues believe that the current approach in the chair’s draft does not adequately address the evolving landscape of information and communication technologies (ICTs). For instance, Russia complained about the secretariat’s alleged disregard for a proposed article addressing the criminalisation of the use of ICTs for extremist and terrorist purposes. Russia, together with a group of states (e.g. China, Namibia, Malaysia, Saudi Arabia, and others), also supported the inclusion of digital assets under Article 16 regarding the laundering of proceeds of crime. The UK, Tanzania, and Australia opposed the inclusion of digital assets, arguing that they do not fall within the scope of the convention. Concerning other articles, Canada, the USA, the EU and its member states, and some other countries also wished to keep the scope narrower and opposed proposals, in particular for articles on international cooperation (i.e. 37, 38, and 39), that would significantly expand the scope of the treaty.

The use of specific words in each provision, considering the power behind them, is yet another issue that remains uncertain. Even though the chair emphasised that the dedicated terminology group continues working to resolve the issues over terms and propose some ideas, many delegations have split into at least two opposing camps: whether to use ‘cybercrime’ or ‘the use of ICTs for malicious purposes’, to keep the verb ‘combat’ or replace it with more precise verbs such as ‘suppress’, or whether to use ‘child pornography’ or ‘online child sexual abuse’, ‘digital’ or ‘electronic’ information, and so on. 


For instance, in the review of Articles 6–10 on criminalisation, which cover essential cybercrime offences such as illegal access, illegal interception, data interference, systems interference, and the misuse of devices, several debates revolved around the terms ‘without right’ vs ‘unlawful’, and ‘dishonest intent’ vs ‘criminal intent’. 

Another disagreement arose over the terms ‘restitution’ and ‘compensation’ in Article 52. This provision requires states to retain the proceeds of crimes, to be disbursed to requesting states to compensate victims. India, supported by China, Russia, Syria, Egypt, and Iran, proposed that the term ‘compensation’ be replaced with ‘restitution’ to avoid further financial burdens for states. Additionally, India suggested that ‘compensation’ should be at the discretion of national laws and not under the convention. Australia and Canada suggested retaining the word ‘compensation’ because it would ensure that the proceeds of crime delivered to requesting states are used only for the compensation of victims.

The bottom line is that terminology and scope, two of the most critical elements of the convention, remain unresolved and will need attention at the session in February 2024. However, if states have not been able to agree over the past six sessions, the international community will need a true diplomatic miracle in the current geopolitical climate. At the same time, the chair confirmed that she has no intention of extending her role beyond February.

Hurdles to deal with human rights and data protection-related provisions

We wrote before that states are divided when discussing human rights perspectives and safeguards: While one group is pushing for a stronger text to protect human rights and fundamental freedoms within the convention, another group disagrees, arguing that the AHC is not mandated to negotiate another human rights convention, but an international treaty to facilitate law enforcement cooperation in combating cybercrime. 

In the context of text-based negotiations, this has meant that some states suggested deleting Article 5 on human rights and merging it with Article 24 to remove the gender perspective-related paragraphs, because of concerns over the definition of the ‘gender perspective’ and challenges in translating the phrase into other languages. Another clash happened during discussions about whether the provisions should allow the real-time collection of traffic data and interception of content data (Articles 29 and 30, respectively). While Singapore, Switzerland, Malaysia, and Vietnam proposed removing such powers from the text, other delegations (e.g. Brazil, South Africa, the USA, Russia, Argentina, and others) favoured keeping them. The EU stressed that such measures represent a high level of intrusion and significantly interfere with the human rights and freedoms of individuals. However, the EU expressed its openness to consider keeping such provisions, provided that the conditions and safeguards outlined in Articles 24, 36 and 40(21) remain in the text.

With regard to data protection in Article 36, CARICOM proposed an amendment allowing states to impose appropriate conditions in compliance with their applicable laws to facilitate personal data transfers. The EU and its member states, New Zealand, Albania, the USA, the UK, China, Norway, Colombia, Ecuador, Pakistan, Switzerland, and some other delegations supported this proposal. India did not, while some other delegations (e.g. Russia, Malaysia, Argentina, Türkiye, Iran, Namibia and others) preferred retaining the original text.


Articles on international cooperation or international competition?

Negotiations on the international cooperation chapter have not been smooth either. During the discussions on mutual assistance, Russia, in particular, pointed out a lack of grounds for requests and suggested adding a request for ‘data identifying the person who is the subject of a crime report’ with, where possible, ‘their location and nationality or account as well as items concerned’. Australia, the USA, and Canada did not support this amendment.

Regarding the expedited preservation of stored computer data/digital information in Article 42, Russia also emphasised the need to distinguish between the location of a service provider or any other data custodian, as defined in the text, and the necessity to specifically highlight the locations where data flows and processing activities, such as storage and transmission, occur due to technologies like cloud computing. To address this ‘loss of location’ issue, Russia suggested referring to the second protocol of the Budapest Convention. The reasoning for this inclusion was to incorporate the concept of data as being in the possession or under the control of a service provider or established through data processing activities operating from within the borders of another state party. The EU and its member states, the USA, Australia, Malaysia, South Africa, Nigeria, Canada, and others were among delegations who preferred to retain the original draft text.

A number of delegations (e.g. Pakistan, Iran, China, Mauritania) also proposed an additional article on ‘cooperation between national authorities and service providers’ to oblige the reporting of criminal incidents to relevant law enforcement authorities, providing support to such authorities by sharing expertise, training, and knowledge, ensuring the implementation of protective measures and due diligence protocols, ensuring adequate training for their workforce, promptly preserving electronic evidence, ensuring the confidentiality of requests received from such authorities, and taking measures to render offensive and harmful content inaccessible. The USA, Georgia, Canada, Australia, the EU, and its member states, and some other delegations rejected this proposal. 

SDGs in the scope of the convention?

An interesting development was the inclusion of the word ‘sustainability’ under Article 56 on the implementation of the convention. While sustainability was not mentioned in the previous sessions, Australia, China, New Zealand, and Yemen, among other countries, proposed that Article 56 should read: ‘Implementation of the convention through sustainable development and technical assistance’. Costa Rica claimed that such inclusion would link the capacity building under this convention to the achievement of the Sustainable Development Goals (SDGs). Additionally, Paraguay proposed that Article 52(1) should ensure that the implementation of the convention through international cooperation takes into account the ‘negative effects of the offences covered by this Convention on society in general and, in particular, on sustainable development, including the limited access that landlocked countries are facing’. While the USA and Tanzania acknowledged the importance of Paraguay’s proposal, they stated that they could not support this edit.

What’s next?

The committee will continue the negotiations in February 2024 at the seventh session, and if the text is adopted, states will still have to ratify it afterwards. However, ‘should a consensus prove not to be possible, the Bureau of the UN Office on Drugs and Crime (UNODC) will confirm that the decisions shall be taken by a two-thirds majority of the present voting representatives’ (from the resolution establishing the AHC). The chair must report their final decisions before the 78th session of the UN General Assembly.