Decision postponed on the Cybercrime Convention: What you should know about the latest session of the UN negotiations

The UN’s Ad Hoc Committee to Elaborate a Comprehensive International Convention on Countering the Use of ICTs for Criminal Purposes, also known as the Ad Hoc Committee on Cybercrime, convened in New York for a culminating session held from 29 January to 9 February 2024, marking the end of two years of negotiations. The Ad Hoc Committee (AHC) was tasked with drafting a comprehensive cybercrime convention. However, as the final session started, there were no signs of significant progress: member states couldn’t agree on key issues such as the scope of the convention. As a result, the delegations required more time to discuss the content and wording of the draft convention and decided to hold additional meetings. Though some delegations such as China and the US offered financial support for more meetings, several states such as El Salvador, Uruguay, and Liechtenstein pointed out the strain these additional meetings would put on their resources.


The chair initially split negotiations into two tracks: formal sessions and informal meetings behind closed doors. The informal meetings seem to have focused on more sensitive issues, such as the scope and human rights-related provisions, and were extremely intense, causing the regular sessions to start late. This also resulted in less transparency in the negotiations and excluded the multistakeholder community from contributing.

In the last days of the concluding sessions, there was increased pressure from civil society and the industry, as well as cybersecurity researchers.

“There are fears that if the UN Ad Hoc Committee does not conclude with a convention, it could be considered a failure of multilateral diplomacy. However, in my opinion, the real fiasco of diplomatic efforts to address the problem of cybercrime would happen if the states adopt a treaty that significantly waters down human rights obligations and legitimises the use of criminal justice for oppression and persecution.” 

Dr. Tatiana Tropina, Assistant Professor in Cybersecurity Governance, ISGA, Leiden University

The comments provided are personal opinions and are not representative of the organisation as a whole.

So, what happened?

Here are the issues with the draft convention that need to be resolved:

Scope of the convention and criminalisation 

One of the main unresolved points remains the question of whether the cybercrime convention should be a traditional treaty or whether it should cover all crimes committed via ICTs. This divide translated into a lengthy discussion on the name of the convention itself, as well as on Article 3 (scope of application) of the draft convention.

In relation to the scope of application, delegations discussed Canada’s proposal, which received support from 66 states. The proposal suggests broad wording of the actions that may fall within the scope of the convention, as well as adding Article 3.3 to ensure that the convention doesn’t permit or facilitate ‘repression of expression, conscience, opinion, belief, peaceful assembly or association; or permitting or facilitating discrimination or persecution based on individual characteristics’.

The Russian Federation continued expressing the view that the AHC hadn’t fully implemented the mandated outline in Resolution 74/247 which established the committee, and the scope of the convention should include broader measures to combat ‘the spread of terrorist, extremist, and Nazi ideas with the use of ICTs’. Russia further highlighted that ‘many articles are simply copied from treaties that are 20 years old’ and that the revised text doesn’t include efforts to agree on procedures of investigation, or creating platforms and channels for law enforcement cooperation.

In the same vein, Iran, Egypt, and Kuwait see the primary mandate of the AHC as elaborating a comprehensive international convention on the use of ICTs for criminal purposes, and view the inclusion of human rights regulations and detailed international collaboration as duplicating already existing international treaties.

Representatives from civil society, private entities, and academia also shared feedback on the scope, stressing the importance of limiting the convention’s scope and implementing strong human rights protections. They expressed concerns about the convention’s potential to undermine cybersecurity, jeopardise data privacy, and diminish online rights and freedoms.

Discussing additional provisions in the criminalisation chapter, delegations were deadlocked over specific terms. For instance, concerning Articles 6(2), 7(2), and 12, Russia, with support from several delegations, proposed replacing ‘dishonest intent’ with a more specific term. Russia’s representative argued that ‘dishonest’ is not a legal term, making it challenging for countries to implement or clarify it in domestic legislation. However, the UK, US, and EU opposed this change. Austria, in particular, explained that ‘dishonest intent’ provides clear criteria for identifying when conduct constitutes an offence, offering flexibility across various legal systems.

Human rights and safeguards 

Human rights (Article 5) and safeguards (Article 24) have been a difficult topic for delegations from day one. Some delegations such as Iran argued that the cybercrime treaty is not a human rights treaty, suggesting a model akin to the UN Convention against Corruption (UNCAC), which omits explicit human rights references. As reported earlier, this didn’t find support from many other delegations.

Egypt and other delegations also expressed confusion over the repetitive nature of certain human rights provisions within the text, emphasising the redundancy of similar mentions occurring five or six times. 

Additionally, Egypt raised concerns about Article 24 and questioned why the principle of proportionality was singled out from other legal principles recognised under international law. Egypt pointed out the challenge of applying proportionality when different countries have varying legal provisions, such as the death penalty. Pakistan supported Egypt, and Brazil suggested appending ‘legality’ to the principle of proportionality, so that both the principles of legality and proportionality would be included. Ecuador expressed support for Brazil’s proposal.

As a result, both articles remain without text in the further revised draft text of the convention.

There was no consensus regarding the articles on online sexual abuse (Article 13) and non-consensual distribution of intimate images (Article 15). Delegations tried to find a balance between protecting privacy and criminalising the sharing of intimate images without consent. Many felt the convention should be flexible enough to accommodate different laws and international human rights agreements. There was debate about whether to stick with the Convention on the Rights of the Child’s (CRC) definition or use a different one. The US worried that the CRC’s definition didn’t fit cybercrimes well and might lead to inconsistent interpretations that wouldn’t adequately protect children under Article 13.

Transfer of technology and technical assistance

The transfer of technology appears twice in Article 1 (statement of purpose) and Article 54 (technical assistance and capacity-building). The group of African countries strongly advocated for keeping a reference to the transfer of technology in both articles, including in Article 1, paragraph 3. 

Russia, Syria, Namibia, India, Senegal, and Algeria supported this, while the US was against it and called to keep this reference in Article 54 only. The EU, Israel, Norway, Canada, Albania, and the UK supported the US.

With Article 54, more or less the same groups of states had further disagreements. The US, Israel, the EU, Norway, Switzerland, and Albania supported inserting ‘voluntary’ before ‘where possible’ and ‘on mutually agreed terms’ in the context of how capacity building shall be provided between states in Article 54(1). Most African countries, as well as Iran, Iraq, Cabo Verde, Colombia, Brazil, and Pakistan, opposed the proposal because it would undermine the purpose of the provision: ensuring effective assistance to developing countries. With the goal of reaching consensus on Article 54(1), the US withdrew its proposal and retained ‘where possible’ and ‘mutually agreed terms’. In the revised draft text of the convention, these paragraphs remain open for further negotiations between delegations.

“As offenders, victims and evidence are often located in different jurisdictions, investigations will typically require international coordinated law enforcement action. This means that gaps in the capacity of one country can severely undermine the safety of communities in other countries. Technical assistance and capacity-building are key tools to address this challenge. However, to have a real-world impact, the future Convention needs to recognize that addressing the needs of the diverse actors involved in combating [the criminal use of ICTs] [cybercrime] will require various forms of specialized technical assistance, which no single organization can provide. Even within countries, the various actors involved in combating [the criminal use of ICTs] [cybercrime] – including legislators, prosecutors, law enforcement, national Computer Emergency Response Teams (CERTs) – may have very different technical assistance needs.”

Director Craig Jones, INTERPOL Cybercrime Programme

Scope of international cooperation

Delegations expressed opposing views on provisions related to cooperation on electronic evidence and didn’t reach consensus. The discussion included Articles 35(1)(c), 35(3), and 35(4), which deal with the general principles of international cooperation and e-evidence. The draft convention allowed countries to collect data across borders without prior legal authorisation. However, many delegations could not agree on these provisions.

In particular, New Zealand, Canada, the EU, Brazil, the USA, Argentina, Uruguay, Singapore, Peru, and others expressed concerns, fearing that the current draft of Article 35 would allow an excessively broad application, potentially leading to the pursuit of non-criminal activities. These states noted that the previous draft allowed national law to determine what constitutes criminal conduct, and pointed out the need to differentiate between serious crimes and offences, the need for safeguards and guardrails on the power of states to limit the possibility of repression and the implementation of intrusive and secret mechanisms, and the need to ensure the protection of human rights. On the other hand, states like Egypt, Saudi Arabia, Iran, Iraq, Mauritania, Oman, and others called for the deletion of Article 35(3) altogether.

Additionally, New Zealand suggested including a non-discrimination clause in Article 37(15) on extradition to prevent unfair grounds for refusing cooperation. This would ensure consistency across the entire chapter on international cooperation. However, member states couldn’t agree on the language and left this open. 

Within the international cooperation chapter, delegations spent quite a bit of time discussing the terms: in particular, in Articles 45 and 46, the debates centred on the use of ‘shall’ vs ‘may’. The EU and other delegations advocated for changing ‘shall’ to ‘may’ in those articles to allow states the option, but not the obligation, to cooperate. This proposal was met with mixed reactions, with some delegations, including Egypt and Russia, preferring to retain ‘shall’ to ensure robust international cooperation. The countries opposing the change from ‘shall’ to ‘may’ argued that it would undermine the effectiveness of cooperation between states. So far, the further revised draft text of the convention includes both options in brackets.


Preventive measures 

Another term which created some confusion across several delegations was the use of ‘stakeholders’ in Article 53, where preventive measures are discussed and paragraph 2 highlights that ‘States shall take appropriate measures […] to promote the active participation of relevant individuals and stakeholders outside the public sector, such as non-governmental organizations, civil society organizations, academic institutions and the private sector, as well as the public in general, in the prevention of the offences covered by this Convention’. Egypt, in particular, called for removing the word ‘stakeholders’ unless it is clearly defined. The US didn’t support this proposal. The further revised draft text of the convention now uses ‘relevant individuals and entities […]’, but the paragraph hasn’t been agreed yet.

In the same article, in paragraph 3(h), where ‘gender-based violence’ is mentioned and the development of strategies and policies to prevent it is called for, states couldn’t reach an agreement. The first group of states, including the USA, Iceland, Australia, Vanuatu, and Costa Rica, advocated for keeping the provision. Other delegations, such as Iran, Namibia, Saudi Arabia, and Russia, among others, proposed deleting the term ‘gender-based’ and keeping only ‘violence’. In the end, this part retained the term ‘gender-based violence’, with the chair emphasising that the article is not obligatory, as it says that preventive measures ‘may include’ such strategies.

Another notable example of where states had opposing views was Article 41 on the 24/7 network, a point of contact designated at the national level, available 24 hours a day and 7 days a week, to ensure the provision of immediate assistance for the purposes of the convention. India proposed new duties for the 24/7 network, explaining that prevention should be part of such duties. They particularly stressed that ‘if the offence is not prevented and it occurs, States would be needing multiple times the resources that they saved in the process of evidence collection, prosecution, extradition, and so on. So it’s better to prevent rather than to spend multiple times the same resources that States are trying to save in going through the whole process of criminal justice’. Russia, Kazakhstan, and Belarus supported this proposal, while the US, UK, Argentina, the EU, and Canada didn’t.

So, what’s next?


As mentioned earlier, the delegates managed to agree on one major item: to postpone the final decision. The chair’s further revised draft text of the convention is available on the AHC’s website, and new dates for further meetings should be announced soon.

Does this mean that delegations are close to reaching a consensus on a landmark cybercrime convention before the UN General Assembly? Hardly so, but these two weeks have also demonstrated that many open issues (though less fundamental than the scope of application) have been resolved behind closed doors, and there is still a chance that intense non-public negotiations between delegations could speed up the process.

We will continue to monitor the negotiations; in the meantime, discover more through our detailed reports from each session, generated by DiploAI.

The perfect cryptostorm

To fully understand the incredible story behind the cryptocurrency and blockchain craze of 2017-2021, we must explain the unique setting in which events played out, setting the course for the collision. One component amplified the other, multiplying the effect, thus creating a perfect cryptostorm. Unfortunately, that storm took a toll on trust in the industry and caused financial losses.

The cryptocurrency industry is a one-hit wonder. But what a wonder that is! Bitcoin presents a true marvel of human engineering of money. It has withstood the test of time, proving its resilience and becoming the worldwide recognised use case for digital gold. We witnessed newly coined terms such as ‘crypto-rich’. In response, a whole new payment industry emerged, forged by the desire of legacy financial organisations to stay relevant in the new era.

Moreover, alongside the new fast digital payment industry, which was delivering miracles in the financial inclusion of the unbanked, the retail investing industry brought a new form of capital inflow. The emergence of online trading companies, backed mainly by larger institutional investors, was recognised as a risk to retail users and overall consumer protection rights.

Unanswered risks, the new hype around the change in the financial industry, and the emergence of inexperienced investors were the ingredients for the perfect storm in the cryptocurrency industry. Add human greed to that mixture and it becomes the perfect cryptostorm.


The necromancers that summoned this cryptostorm are quite vividly depicted in the latest Netflix documentary drama, ‘Bitconned’, which aired this January after two years of production. In 2017, the Centra Tech company raised USD 25 million in investments in its main product: a VISA-backed credit card allowing people to spend their cryptocurrency at any retail store across the USA.
Centra Tech’s CEO, CTO, and other executives had a Harvard Business School background or an MIT engineering degree. The new headquarters in downtown Miami was full of young, bright people, and 20,000 VISA cards were produced. However, none of this was real. Everything was a (not so cleverly) staged mirage.

The court case concluded in 2021, handing jail time sentences to the people involved. The documentary is led by one of three prominent persons behind Centra Tech, Ray Trapani, who collaborated with the federal investigation on the case. In the film, he explained in detail how two young scammers working at a car rental company raised millions in an ICO, having only a one-page website. 
Once it started, the storm did not calm down for years. The story of Centra Tech from 2017 was replicated time and time again, culminating in the collapse of the then world’s second-largest company in the industry: FTX, an online cryptocurrency exchange. As we read from publicly presented pieces of court evidence in the cases against Celsius, Luna, and FTX, the crypto companies spent funds they held in custody for their investors.

Screenshot from the Netflix documentary film ‘Bitconned’

How did crypto scam companies utilise the above ingredients?

By promising the right thing at the right moment. Internet users had witnessed the financial sector’s transformation and bitcoin’s success. They could easily be convinced that a new decentralised finance infrastructure was on the verge of emerging, helped along by the lack of a regulatory framework, and that they had a fair chance to participate in the industry’s beginnings and become the new crypto millionaires, which was the main incentive for many. If the people behind the open-source cryptocurrency (bitcoin) could create the ‘internet of information’, the next generation of cryptocurrency engineers would surely deliver the ‘internet of money’. However, again, it was false. It was, in fact, a carefully worded money-grabbing experiment.

All the above ideas still stand as a goalpost for further industry developments. Moreover, we must admit that the initial takeover of the industry by scammers, fraudsters, and, in some cases, straightforward sociopaths will taint the forthcoming period of developments in this industry.

In contrast to bitcoin, the creators of almost all cryptocurrencies that came later were incentivised by the financial benefits of ‘tokenisation’ rather than by secure and trustworthy technology. The term tokenisation was supposed to describe the emergence of fast-exchanging digital information (tokens) that could help trade digital products and services, promising the possibility of a ‘creators’ economy, micropayments, or unique digital objects. But in reality, it was merely copying analogue objects to the digital world and charging money for that service. Stocks, bonds, tin cans, energy prices, cloud storage, and dental appointments were all promised to be tokenised, while the term ‘blockchain’ was the ultimate hype word. People soon realised that not all digital artefacts had value solely by being placed on a blockchain. That was the case with projects that honestly intended to build the product (token or cryptocurrency) rather than just sell vapourware and go permanently offline the moment they got busted. As with any other technology, time will show the most efficient and rational use of blockchain.

Could this happen again for online financial services? 

Chances are meagre; it is certainly unlikely to happen again on this scale. Financial agencies worldwide have prepared a set of comprehensive laws and authorities to detect such fraudulent companies much faster and more efficiently. Financial regulations are now negotiated with much more success on a global scale. Intergovernmental financial organisations and their bodies have equipped regulators with the tools to comprehend how the technology works and what can be done on the consumer protection side. Also, users have had their fair share of schooling. Once bitten, twice shy.

For any other technology developed and utilised mainly online, the chances are always there. Users can now easily be engaged directly, via a mobile app, by companies that promise the next technological innovation. All they have to do is carefully word our societal dreams into their product description.

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2


The European Union (EU) reached a historic provisional agreement in 2023 on the world’s first comprehensive set of rules to regulate AI, which will become law once adopted by the EU Parliament and Council. The legislation, known as the AI Act, sets a new global benchmark for countries seeking to harness the potential benefits of AI while trying to protect against the possible risks of using it. While much of the attention has been given to parts such as biometric categorisation and national security, among others, the AI Act will also give new guidance on AI and copyright.

The AI Act takes a nuanced stance on copyright and transparency in AI, requiring transparency regarding training data without demanding exhaustive lists of copyrighted works. Instead, a summary of data collections suffices, easing the burden on AI providers. Nonetheless, uncertainties persist about foundation model providers’ obligations under copyright laws. While the AI Act stresses compliance with existing regulations, including those in the Digital Single Market Directive, it raises concerns about applying EU rules to models globally, potentially fostering regulatory ambiguity for developers.

In one of the previous blogs, the Digital Watch Observatory elucidated the relationship between AI-generated content and copyright. The analysis showed how traditional laws struggle to address AI-generated content, raising questions of ownership and authorship. Various global approaches – denying AI copyright, attributing it to humans – highlight the complexity.

This part will delve into the influence of AI on intellectual property rights and will assess the ramifications of AI for trade secrets and trademarks, focusing on examples from the EU and US legal frameworks.

Trade Secrets and AI Algorithms

Within the realm of AI and intellectual property, trademarks and trade secrets present unique challenges and opportunities that require special attention in the evolving legal landscape. As AI systems often require extensive training datasets and proprietary algorithms, determining what constitutes a protectable trade secret becomes more complex. Companies must navigate how to safeguard their AI-related innovations, including the datasets used for training, without hindering the collaborative nature of the AI development. 

Trade secret laws may need refinement in order to address issues like reverse engineering of AI algorithms and the accidental disclosure of sensitive information by AI systems. However, given the limitations associated with patenting and copyrighting AI-related content, trade secret principles seem to present an alternative, at least in the USA. Patents necessitate a demonstrated utility disclosed in the application, while trade secrets lack this requirement. Trade secrets cover a broader range of information without the immediate need to disclose utility. In addition, trade secret law allows information created by an AI system to be protected, even if the creator is not an individual. This differs from patent law, which requires a human inventor listed on the application. 


Trade secrets, traditionally associated with formulae and confidential business information, now extend to AI algorithms and proprietary models. Safeguarding these trade secrets is critical for maintaining a competitive edge in industries in which AI plays a pivotal role. In the USA, trade secret law safeguards a broad spectrum of information, encompassing financial, business, scientific, technical, economic, or engineering data, as long as the owner has taken reasonable measures to maintain its secrecy, and the information derives value from not being widely known or easily accessible through legitimate means by others who could benefit from its disclosure or use (as defined in 18 U.S.C. §1839(3)). It is important, however, to consider that patent owners have a monopoly on the right to make, use, or sell the patented invention. In contrast, owners of AI-based trade secrets face the risk of competitors reverse engineering the trade secret, which is permitted under US trade secret law.

Requirements related to secrecy exclude trade secret protection for AI-generated outputs that are not confidential, such as those produced by systems like ChatGPT or Dall·E. Nevertheless, trade secret laws seem to be more flexible to safeguard various AI-related assets, including training data, AI software code, input parameters, and AI-generated content intended solely for internal and confidential purposes. Importantly, there is no stipulation that a trade secret must be originated by a human being, while AI-generated material is treated like any other form of information, as evident in 18 U.S.C. §1839(4), which defines trade secret ownership.

Instead of pursuing patents under traditional laws that seem to provide ambiguous guidance on AI and copyright, numerous AI innovators opt for trade secret protections to safeguard their AI advancements, as these innovations in commercial use frequently remain concealed and difficult for others to detect. With the AI Act soon to become law, there is a likelihood that the EU will require disclosure of how AI innovations operate, categorising them as limited or high risk. This could mean that trade secret safeguarding is no longer viable in some instances.

Establishing clear guidelines for what qualifies as a trade secret in the AI domain, and defining the obligations of parties involved in AI collaborations will be essential for fostering innovation while ensuring the protection of valuable business assets.

Trademarks and Branding in the AI Era


The integration of AI technologies into product and service offerings has also reshaped the landscape of trademark protection, presenting both challenges and opportunities for businesses. Traditionally associated with logos, brand names, and distinctive symbols, trademarks now extend their scope to encompass AI-generated content, virtual personalities, and unique algorithms associated with a particular brand. As companies increasingly rely on AI for customer interactions, the challenge of maintaining brand consistency in automated, AI-powered engagements becomes paramount. In the realm of AI-driven customer service and chatbots, the traditional understanding of the ’average consumer’ in trademark infringement cases undergoes transformation. When an AI application acquires a product with minimal or no human involvement, determining who, or more crucially, what constitutes the average consumer, becomes a pertinent question. Likewise, identifying responsibility for a purchase that results in trademark infringement in such scenarios becomes complex.

While there have been no known cases directly addressing the issue of AI and liability in trademark infringement, there have been several cases within the past decade adjudicated by the Court of Justice of the European Union (CJEU) that could offer insights into the matter when considering this new technology. For instance, the Louis Vuitton vs Google France decision focused on keyword advertising and the automatic selection of keywords in Google’s AdWords system. It concluded that Google wouldn’t be accountable for trademark infringement unless it actively participated in the keyword advertising system. Similarly, the L’Oréal vs eBay case, which revolved around the sale of counterfeit goods on eBay’s online platform, determined that eBay wouldn’t be liable for trademark infringement unless it had clear awareness of the infringing activity. A comparable rationale was applied in the Coty vs Amazon case. 

It would seem that if a provider of AI applications implemented adequate takedown procedures and had no prior knowledge of infringing actions, they would likely not be held responsible for such infringements. However, when the AI provider plays a more active role in potential infringing actions, the two cases indicate that the AI provider could be held accountable. 

In the case of Cosmetic Warriors Ltd and Lush Ltd vs Ltd and Amazon EU Sarl before the United Kingdom High Court in 2014, Amazon was found liable for trademark infringement. Amazon used ads on Google mentioning ‘lush’ to bring people to its UK website, where Lush claimed Amazon was breaking trademark rules by showing ‘LUSH’ in ads and search results for similar products without clarifying that Lush items were not available on Amazon. The Court explained that consumers were unable to discern whether the products being offered for sale were those of the brand owner or not, illustrating that the evolving definition of the average consumer and the delineation of responsibility in trademark infringement cases involving AI require nuanced legal considerations.



As AI continues to impact various industries, the ongoing evolution of intellectual property laws will play a pivotal role in defining and safeguarding AI innovations, underscoring the need for adaptable regulations that balance innovation and protection. The intersection of AI and intellectual property introduces novel challenges and opportunities, necessitating a thoughtful and adaptive legal framework. One crucial aspect involves the recognition and protection of AI-generated innovations. Traditional IP laws, such as patents, copyrights, and trade secrets, were designed with human inventors in mind. However, the autonomous and generative nature of AI raises questions about the attribution of authorship and inventorship. Legal systems will need to address whether AI-generated creations should be eligible for patent or copyright protection and, if so, how to attribute ownership and responsibility. This demands a forward-thinking approach from policymakers, legal scholars, and industry stakeholders to craft a legal landscape that not only accommodates the transformative potential of AI, but also safeguards the rights, responsibilities, and interests of all parties involved.

AI industry faces threat of copyright law in 2024

Copyright laws are set to pose a substantial challenge to the artificial intelligence (AI) sector in 2024, particularly after generative AI (GenAI) technologies became pervasive in 2023. At the heart of the matter lie concerns about the use of copyrighted material to train AI systems and the generation of outputs that may be substantially similar to existing copyrighted works. Legal battles are predicted to affect the future of AI innovation and may even change the industry’s economic models and overall direction.
According to tech companies, the lawsuits could create massive barriers to the expanding AI sector. The plaintiffs, on the other hand, claim that the firms used their work without authorization and owe them fair compensation.

Legal Challenges and Industry Impact

AI programs that generate outputs comparable to existing works could infringe on copyrights if they had access to those works and produced substantially similar outcomes. In late December 2023, the New York Times became the first major American news organization to file a lawsuit against OpenAI and its backer Microsoft, asking the court to order the destruction of GPT and other large language models (LLMs), including the famous chatbot ChatGPT, along with any training datasets that rely on the publication’s copyrighted content. The newspaper alleges that these AI systems engaged in ‘widescale copying’, in violation of copyright law.
This high-profile case illustrates the broader legal challenges faced by AI companies. Authors, creators, and other copyright holders have initiated lawsuits to protect their works from being used without permission or compensation.

As recently as 5 January 2024, authors Nicholas Basbanes and Nicholas Gage filed a new complaint against both OpenAI and its investor, Microsoft, alleging that their copyrighted works were used without authorization to train OpenAI’s AI models, including ChatGPT. In the proposed class action complaint, filed in federal court in Manhattan, they charge the companies with copyright infringement for including multiple of the authors’ works in the datasets used to train OpenAI’s GPT large language model (LLM).

This lawsuit is one among a series of legal cases filed by multiple writers and organizations, including well-known names like George R.R. Martin and Sarah Silverman, alleging that tech firms utilised their protected work to train AI systems without offering any payment or compensation. The results of these lawsuits could have significant implications for the growing AI industry, with tech companies openly warning that any adverse verdict could create considerable hurdles and uncertainty.

Ownership and Fair Use

Questions about who owns the outcome generated by AI systems—whether it is the companies and developers that design the systems or the end users who supply the prompts and inputs—are central to the ongoing debate. The ‘fair use’ doctrine, often cited by the United States Copyright Office (USCO), the United States Patent and Trademark Office (USPTO), and the federal courts, is a critical parameter, as it allows creators to build upon copyrighted work. However, its application to AI-generated content, with models using massive datasets for training, is still being tested in courts.

Policy and Regulation

The USCO has initiated a project to investigate the copyright legal and policy challenges brought by AI. This involves evaluating the scope of copyright in works created by AI tools and the use of copyrighted content in training foundational and LLM-powered AI systems. This endeavour is an acknowledgement of the need for clarification and future regulatory adjustments to address the pressing issues at the intersection of AI and copyright law.

Industry Perspectives

Many stakeholders in the AI industry argue that training generative AI systems, including LLMs and other foundational models, on the large and diverse content available online, most of which is copyrighted, is the only realistic and cost-effective method to build them. According to the Silicon Valley venture capital firm Andreessen Horowitz, extending copyright rules to AI models would potentially constitute an existential threat to the current AI industry.

Why does it matter?

The intersection of AI and copyright law is a complex issue with significant implications for innovation, legal liability, ownership rights, commercial interests, policy and regulation, consumer protection, and the future of the AI industry.

The AI sector in 2024 is at a crossroads with existing copyright laws, particularly in the US. The legal system’s reaction to these challenges will be critical in striking the right balance between preserving creators’ rights and promoting AI innovation and progress. As lawsuits proceed and policymakers engage with these issues, the AI industry may face significant pressure to adapt, depending on the legal interpretations and policy decisions that emerge from these ongoing processes. Ultimately, these legal fights could determine who the market winners and losers will be.

OEWG’s sixth substantive session: the highlights

The sixth substantive session of the UN Open-Ended Working Group (OEWG) on security of and the use of information and communications technologies 2021–2025 was held in December 2023, marking the midway point of the process.


The risks and challenges associated with emerging technologies, such as AI, quantum computing, and IoT, were highlighted by several countries. Numerous nations expressed concerns about the increasing frequency of ransomware attacks and their impact on various entities, including critical infrastructure, local governments, health institutions, and democratic institutions.

The need for capacity building efforts to enhance cybersecurity capabilities globally was emphasised by multiple countries, recognising the importance of preparing for and responding to cyber threats.

The Russian Federation raised concerns about the potential for interstate conflicts arising from using information and communication technologies (ICTs). It proposed discussions on a global information security system under UN auspices. El Salvador discussed evolving threats in the ICT sector, particularly during peacetime, indicating that cybersecurity challenges are not limited to times of conflict.

Delegates discussed the impact of malicious cyber activities on international trust and development, particularly in the context of state-sponsored cyber threats and cybercrime.

Several countries, including the United Kingdom, Kenya, Finland, and Ireland, focused on the intersection of AI and cybersecurity, advocating for approaches that consider the security implications of AI systems.

Some countries, including Iran and Syria, expressed concerns about threats to sovereignty in cyberspace, including issues related to internet governance and potential interference in internal affairs.

Many countries emphasised the importance of international cooperation and information sharing to address cybersecurity challenges effectively. Proposals for repositories of information on threats and incidents were discussed. The idea of a global repository of cyber threats, as advanced by Kenya, enjoys much support.

Rules, norms and principles 

Many delegations shared how they have already begun implementing national and regional norms through policies, laws and strategies. At the same time, some delegations shared the existing gaps and ongoing processes to introduce new laws, in particular, to protect critical infrastructure (CI) and implement CI-related norms. 

Clarifying the norms and providing implementation guidance

Delegations also signalled that clarifying the norms and providing implementation guidance is necessary. Singapore, for instance, supported the proposal to develop broader norm implementation guidance, such as a checklist. The Netherlands argued that such guidance should not only consider the direct impact of malicious cyber activities but also consider the cascading effects that such activities may have, including their impact on citizens. Canada stressed that a checklist would be a complementary tool, formulating voluntary and non-binding guidelines, while some delegations (e.g. China and Syria) called for translating norms as political commitments into legally binding elements. 

Australia suggested first focusing on developing norms implementation guidance for the three CI norms (F, G, and H). China, among many other delegations, expressed the same need to develop guidelines for the protection of CI. Portugal proposed focusing on clarifying and implementing the due diligence norm, including the private sector’s role in protecting CI, a proposal France supported.

Norms related to ICT supply chain security and vulnerability reporting

In response to the Chair’s query about the norms related to ICT supply chain security and vulnerability reporting, Switzerland presented the Geneva Manual on Responsible Behaviour in Cyberspace. This inaugural edition offers comprehensive guidance for non-state stakeholders, emphasising norms related to supply chain security and responsible vulnerability reporting. At the same time, the UK and France raised the issue of commercially available intrusion capabilities. The UK expressed its concerns about the growing market for software intrusion capabilities and stressed that all actors, including the private sector, are responsible for ensuring that the development, facilitation, and use of commercially available ICT capabilities do not undermine stability in cyberspace. In addition, France highlighted the need to guarantee the integrity of the supply chain by ensuring users’ trust in the safety of digital products and, in this context, cited the proposed European Cyber Resilience Act, which aims to impose cybersecurity requirements on digital products. China also addressed these norms, arguing that some states abuse them by developing their own standards for supply chain security and undermining fair competition for businesses. China further said that all states should explicitly commit not to proliferate offensive cyber technologies, and noted that the term ‘peacetime’ had never been used in the context of the 11 norms in earlier consensus documents.

New norms vs existing norms 

Delegations had divergent views on whether new norms should be developed. Some countries supported the idea of creating new norms until 2025 (the end of the OEWG mandate); in particular, China called for new norms on data security issues. Other delegations (e.g. Canada, Colombia, France, Israel, the Netherlands, and Switzerland) opposed the development of new norms and instead called for implementing existing ones.

South Africa emphasised the need to intensify implementation efforts to identify any gaps in the existing normative frameworks and if there is a need for additional norms to close that gap. Brazil stressed that the implementation of existing standards is not contradictory to discussing the possibility of adopting specifically legally binding norms and thus rejected the idea that ‘there is any dichotomy opposing both perspectives’. Brazil expressed its openness to considering the adoption of both additional voluntary norms and legally binding ones to promote peaceful cyberspace. 

International law

The discussion on international law in the use of ICTs by states was guided by four questions: whether states see convergence in perspectives on how international law applies in the use of ICTs; whether the cyber domain has unique features, compared to other domains, that would require a distinct application of international law; whether there are gaps in applicability; and what the capacity-building needs are. While some delegations had statements prepared by legal departments or had legal counsel input, others, especially developing countries, needed support in formulating their interventions.

Convergences in perspectives on how international law applies in the use of ICTs

The overwhelming majority of delegations agreed that international law, in particular the UN Charter, is applicable in cyberspace (Thailand, Denmark, Iceland, Norway, Sweden, Finland, Brazil, Estonia, El Salvador, Austria, Canada, the EU, Republic of Korea, Netherlands, Israel, Pakistan, UK, Bangladesh, India, France, Japan, Singapore, South Africa, Australia, Chile, Ukraine, and others). These states see the need to deepen a common understanding of how existing international law applies in cyberspace, alongside its possible implications and legal consequences. Most delegations also stated that cyberspace is not so unique as to require a distinct application of international law. Kenya pointed out the role of regional organisations, the African Union in particular, in clarifying how international law applies to cyberspace, and their contributions to this debate, which was supported by many.

India stated that, in their view, the dynamic nature of cyberspace creates ambiguity in the application of international law since a state, as a subject of international law, can exercise its rights and obligations through its organs or other natural and legal persons. 

Another group of states (Cuba, Nicaragua, Vietnam, and the Syrian Arab Republic) considers cyberspace unique and holds that it cannot be addressed by applying existing international law; they call for a legally binding instrument within the UN framework. Russia and Bangladesh see gaps in international law that require new legally binding regulations. According to China and the Syrian Arab Republic, the draft International Convention on International Information Security proposed by the Russian Federation would be a good starting point for such negotiations.

The delegations also discussed general international law principles enshrined in the UN Charter. There is overarching agreement that the principles of sovereignty and sovereign equality, non-intervention, peaceful settlement of disputes, and prohibition of the use of force apply in cyberspace (Malaysia, Australia, Russian Federation, Italy, the USA, India, Canada, Switzerland, Czech Republic, Estonia, Ireland, and others). The states concluded that additional work is required to understand how the principles of due diligence, attribution, invoking the right of self-defence, and assessing whether an internationally wrongful act has been committed apply in cyberspace.

Many delegations (Australia, Canada, the EU, New Zealand, Germany, Switzerland, Estonia, El Salvador, the USA, Singapore, Ireland, and others) stated that the discussions need to clarify how international law addresses violations, what rights and obligations arise in such case, and how international law of state responsibility applies in cyberspace. Mexico, Italy and Bangladesh see value in the contributions of the UN International Law Commission to this debate.

The majority of delegations see convergence in understanding that international humanitarian law applies in cyberspace in cases of armed conflict and that the states must adhere to international legal principles of humanity, necessity, proportionality and distinction (Kiribati, UK, Germany, the USA, Netherlands, El Salvador, Ukraine, Denmark, Czech Republic, Australia, others). Deeper discussions on this matter are necessary. Cuba, in line with its previous statements, disagrees with the concept of applying international humanitarian law in cyberspace.

Addressing capacity building in international law, Uganda stated that it is extremely difficult for developing countries to be equal partners and to participate effectively at the global level due to a lack of expertise and capacity. The majority of countries supported continuous capacity building efforts in international law (Thailand, Mexico, the Nordic countries, Estonia, Ireland, Kenya, the EU, Spain, Italy, Republic of Korea, Netherlands, Malaysia, Bangladesh, India, France, Japan, Singapore, Australia, Switzerland), with Canada mentioning two priority areas: national expertise to enable meaningful participation in substantive legal discussions in multilateral processes such as the OEWG, and expertise to develop national or regional positions. Almost all delegations found the recent UNIDIR workshop to be a valuable contribution to understanding international law’s applicability in cyberspace.

Several delegations have underscored the value of sharing national positions (Thailand, Brazil, Austria, the EU, Israel, the UK, India, Nigeria, Nordic countries, and Mexico) in capacity-building and confidence-building measures.

Going forward, most speakers (Estonia, the EU, Austria, Spain, Italy, El Salvador, the Republic of Korea, the UK, Malaysia, Japan, Chile, and others) have supported the proposal to hold a two-day inter-sessional meeting dedicated to international law.


Operationalisation of the Global POC Directory

Many states supported the operationalisation of the agreements to establish a global POC Directory. Australia stressed that states already positioned to nominate their diplomatic and technical POCs should do so promptly. Switzerland, however, reiterated that the POC Directory should not duplicate the work of CERTs and CSIRTs. The Netherlands stressed the need to regularly evaluate the performance of the POC Directory once it is established, and Ghana supported this proposal to develop a feedback mechanism collecting input from states on the Directory’s functionality and user experience. At the end of this agenda item, the Chair also addressed the participation of stakeholders and shared that a dedicated intersessional meeting will be convened in May to discuss stakeholders’ role in the POC Directory.

Role of regional organisations

Some delegations (e.g. the US, the EU, and Singapore) highlighted the role of regional organisations in operationalising the POC directory and CBMs. However, several delegations expressed concerns – e.g. Cuba stated that it is not in favour of ‘attempts to impose the recognition of specific organisations as regional interlocutors on the subject when they do not include the participation of all member states of the region in question’. The EU noted that not all states are members of regional organisations and added that the UN should develop global recommendations on best practices for cyber CBMs and encourage regional dialogue and exchanges.

Additional CBMs

Delegations discussed potentially adding new CBMs. Iran highlighted the need for universal terminology in ICT security to reduce the risk of misunderstanding between states. India reiterated its proposal for a global cybersecurity cooperation portal to provide cooperation channels for incident response, called for differentiating between cyberterrorism and other cyber incidents in this context, and suggested that the OEWG could focus on building mechanisms for states to cooperate in investigating cybercrimes and sharing digital forensic evidence. Closing this agenda item, the Chair highlighted that the OEWG must continue discussing potential new CBMs and identifying whether anything additional needs to be done.

Capacity building

The recent discussions on cybersecurity highlighted a consensus among participating nations regarding the urgency and cross-cutting nature of cyber threats. Delegations emphasised the importance of cyber capacity building (CB) in enabling countries to identify and address these threats while adhering to international law and norms for responsible behaviour in cyberspace. Central to the dialogue was the pursuit of equity among nations in achieving cyber resilience, with a recurring emphasis on the ‘leave no country behind’ principle. The notion of foundational capacities was at the centre of the debates: the development of legal frameworks, dedicated agencies, and incident response mechanisms, especially Computer Emergency Response Teams (CERTs) and CERT cooperation, was highlighted. However, delegations also stressed the importance of national contexts and the lack of one-size-fits-all answers to foundational capacities. Instead, efforts should be tailored to individual countries’ specific needs, legal landscapes, and infrastructure.

Other issues highlighted were the shortage of qualified cybersecurity personnel and the need to develop technical skills through sustainable and self-sufficient traineeship programs, such as train-the-trainer initiatives. Notable among these initiatives was the Western Balkans Cyber Capacity Centre (WB3C), a long-term project fostering information exchange, good practices, and training courses, developed by Slovenia and France together with Montenegro.

Concrete actions emerged in response to delegations’ past calls for them. Two critical planned exercises, the mapping exercise and the Global Roundtable on CB, were commended. The mapping exercise, scheduled for March 2024, aims to comprehensively survey global cybersecurity capacity-building initiatives, enhancing operational awareness and coordination. The Global Roundtable, scheduled for May 2024, is considered a milestone in involving the UN, showcasing ongoing initiatives, creating partnerships, and facilitating a dynamic exchange of needs and solutions. These initiatives align with the broader themes of global cooperation, encompassing south-south, north-south, and triangular collaboration in science, technology, and innovation, and emphasise needs-based approaches by matching initiatives with specific needs.

Additional points from the discussions included a presentation from India on the technical aspects of the Global Cyber Security Cooperation Portal, emphasising synergy with existing portals. Delegations also supported a voluntary checklist of mainstream cyber capacity-building principles proposed by Singapore. Furthermore, the outcomes of the Global Conference on Cyber Capacity Building, hosted by Ghana and jointly organised by the Cyber Peace Institute, the World Bank, and the World Economic Forum, garnered endorsement from many delegations. The ‘Accra Call’, as it is being termed, is a practical action framework to strengthen cyber resilience as a vital enabler for sustainable development. Switzerland announced its plan to host the follow-up conference in 2025 and urged all states to endorse the Accra Call for cyber-resilient development.

Regular institutional dialogue

The 6th substantive session of the current OEWG marks the halfway point of the mandate, and the fate of the future dialogue on international ICT security remains open. The situation is exacerbated by a new plot twist: in addition to the Program of Action (PoA), proposed by France and Egypt back in 2019 and noted by recent GA resolutions (77/37 and 78/16), Russia tabled a new concept paper introducing a permanent OEWG as an alternative.

Delegations spent more than three hours in total discussing the RID issue. Supporters of the PoA stressed the number of votes resolution 78/16 received in the GA: 161 states upheld the option to create a permanent, inclusive, and action-oriented mechanism under UN auspices upon the conclusion of the current OEWG and no later than 2026, implying the PoA. Notably, supporters of the resolution stressed that the final vision of the PoA would be defined at the OEWG in a consensus manner, considering the common elements expressed in the 2nd Annual Progress Report. Several states noted that no PoA discussions should be held outside the OEWG, to maintain consistency.

There is no consolidated view on the details of the PoA architecture. Egypt and Switzerland provided some ideas about the number and frequency of meetings and review mechanisms, while Slovakia, Germany, Switzerland, Japan, Ireland, Australia, Colombia, the Netherlands, and France suggested folding already-discussed initiatives, such as the POC directory, the cyber portal, the threat repository, and the national implementation survey, as well as future ideas, into the PoA architecture. The PoA recognises the possibility of developing new norms (beyond the agreed framework): through the future review mechanism, it may identify gaps in existing international law and, if necessary, consider new legally binding norms to fill them. As an additional common element of the RID, some states pointed to inclusivity: the PoA should allow multistakeholder participation during meetings, especially from the private sector, and allow stakeholders to submit positions, although final decision-making will remain with states alone.

The Russian proposal of a permanent OEWG after 2025 was co-sponsored by 11 states. It offers several principles for the group’s future work, stressing the consensus nature of decisions and stricter rules for stakeholder participation. It also provides detailed procedural rules and modalities of work.

The consensus issue was crucial at this substantive session, as many states, even supporters of the PoA, stressed it in their statements. The problem may lie in resolution 78/16 itself, which does not specify a consensus mode of work, stating only that the mechanism should be ‘permanent, inclusive and action-oriented’.

Another divergence between the two formats is the main scope. According to the statements by PoA supporters, PoA should focus on implementing the existing framework of responsible state behaviour in cyberspace and concentrate efforts on capacity building to enable developing countries to cope with that. There may be a place for a dialogue on new threats and norms, but this is not a primary task. On the contrary, a permanent OEWG will concentrate on drafting legally binding norms and mechanisms of its implementation as elements of a new treaty or convention on ICT security. However, other aspects, such as CBMs and capacity building, will also remain in its scope. 

For Russia, the struggle to push the permanent OEWG format may be about substance as much as about preserving its image as the pioneer and agenda-setter of cyber negotiations at the UN. If the OEWG format ends in 2025, it would end a tradition of Russian diplomacy with more than 20 years of history. Moreover, earlier this year, in its submission to the Secretary-General under resolution 77/37, Russia frankly expressed its negative attitude towards the PoA, saying that it would be ‘used by Western countries, in line with the ‘rules-based order’ concept promoted by the United States, to impose non-binding rules and standards to their advantage, instead of international law’.

The Chair plans to convene intersessional meetings on regular institutional dialogue in 2024 to deliberate this issue carefully.

The intellectual property saga: The age of AI-generated content | Part 1

The intellectual property saga: AI’s impact on trade secrets and trademarks | Part 2

The intellectual property saga: approaches for balancing AI advancements and IP protection | Part 3

As AI advances rapidly, machines are increasingly gaining human-like skills, blurring the distinction between humans and machines. Traditionally, computers were tools that assisted human creativity, with clear distinctions: humans held sole ownership and authorship. However, recent AI developments enable machines to perform creative tasks independently, from complex functions such as software development to artistic endeavours like composing music, generating artwork, and even writing novels.

This has sparked debates about whether creations produced by machines should be protected by copyright and patent laws. Furthermore, the question of ownership and authorship becomes complex: should credit go to the machine itself, to the humans who created the AI, to the creators of the works the AI was trained on, or perhaps to none of the above?

This essay initiates a three-part series that delves into the influence of AI on intellectual property rights (IPR). To start off, we will elucidate the relationship between AI-generated content and copyright. In the following essays, we will assess the ramifications of AI on trademarks and patents, as well as the strategies employed to safeguard intellectual property (IP) in the age of AI.

Understanding IP and the impact of AI 

In essence, IP encompasses a range of rights aimed at protecting human innovation and creativity. These rights include patents, copyrights, trademarks, and trade secrets. They serve as incentives for people and organisations to invest their time, resources, and intelligence in developing new ideas and inventions. Current intellectual property rules and laws focus on safeguarding the products of human intellectual effort. 

Google recently provided financial support for an AI project designed to generate local news articles. Back in 2016, a consortium of museums and researchers based in the Netherlands revealed a portrait named ‘The Next Rembrandt’, an artwork created by a computer that had meticulously analysed numerous pieces crafted by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn. In principle, such creations could be seen as ineligible for copyright protection due to the absence of a human creator. As a result, they might be used and reused without limitation by anyone. This could present a major obstacle for companies selling these creations: because the art is not protected by copyright, anyone worldwide could use it without paying for it.

Hence, when it comes to creations that involve little to no human input, the situation becomes more complex and blurred. Recent rulings have applied copyright law in two distinct ways.

One approach is to deny copyright protection to works generated by AI (computers), potentially allowing them to fall into the public domain. Most countries have adopted this approach, exemplified in the 2022 DABUS case, which centred on an AI-generated image. The US Copyright Office supported this stance by stating that AI lacks the human authorship necessary for a copyright claim. Patent offices worldwide have made comparable decisions, except in South Africa, where the AI machine Device for Autonomous Bootstrapping of Unified Sentience (DABUS) is recognised as the inventor and the machine’s owner is acknowledged as the patent holder.

In Europe, the Court of Justice of the European Union (CJEU) has made significant declarations, as seen in the influential Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening). These declarations emphasise that copyright applies exclusively to original works, requiring that originality represents the author’s own intellectual creation. This typically means that an original work must reflect the author’s personal input, highlighting the need for a human author for copyright eligibility.

The second approach involved attributing authorship to human individuals, often the programmers or developers. This is the approach followed in countries like the UK, India, Ireland, and New Zealand. UK copyright law, specifically section 9(3) of the Copyright, Designs, and Patents Act (CDPA), embodies this approach, stating:

‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’

AI-generated content and copyright


This illustrates that the laws in many countries are not equipped to handle copyright for non-human creations. One of the primary difficulties is determining authorship and ownership when it comes to AI-generated content. Many argue that it’s improbable for a copyrighted work to come into existence entirely devoid of human input. Typically, a human is likely to play a role in training an AI, and the system may acquire knowledge from copyrighted works created by humans. Furthermore, a human may guide the AI in determining the kind of work it generates, such as selecting the genre of a song and setting its tempo, etc. Nonetheless, as AI becomes more independent in producing art, music, and literature, traditional notions of authorship become unclear. Additionally, concerns have arisen about AI inadvertently replicating copyrighted material, raising questions about liability and accountability. The proliferation of open-source AI models also raises concerns about the boundaries of intellectual property.

In a recent case, US District Judge Beryl Howell ruled that art generated solely by AI cannot be granted copyright protection. This ruling underscores the need for human authorship to qualify for copyright. The case stemmed from Stephen Thaler’s attempt to secure copyright protection for AI-generated artworks. Thaler, the Chief Engineer at Imagination Engines, has been striving for legal recognition of AI-generated creations since 2018. Furthermore, the US Copyright Office has initiated a formal inquiry, called a notice of inquiry (NOI), to address copyright issues related to AI. The NOI aims to examine various aspects of copyright law and policy concerning AI technology. Microsoft is offering legal protection to users of its Copilot AI services who may face copyright infringement lawsuits. Brad Smith, Microsoft’s Chief Legal Officer, introduced the Copilot Copyright Commitment initiative, in which the company commits to assuming legal liabilities associated with copyright infringement claims arising from the use of its AI Copilot services.

On the other hand, Google has submitted a report to the Australian government, highlighting the legal uncertainty and copyright challenges that hinder the development of AI research in the country. Google suggests that there is a need for clarity regarding potential liability for the misuse or abuse of AI systems, as well as the establishment of a new copyright system to enable fair use of copyright-protected content. Google compares Australia unfavourably to other countries with more innovation-friendly legal environments, such as the USA and Singapore.

Training AI models with protected content


Clarifying the legal framework of AI and copyright also requires further guidelines on the training data of AI systems. To train AI systems like ChatGPT, a vast amount of data, comprising text, images, and other media, is indispensable. During the training process, AI platforms identify patterns to establish guidelines, make assessments, and generate predictions, enabling them to provide responses to user queries. However, this training procedure may involve infringements of IPR, as it often relies on data collected from the internet, which may include copyrighted content.

In the AI industry, it is common practice to construct datasets for AI models by indiscriminately extracting content and data from websites using software, a process known as web scraping. Data scraping is typically considered lawful, although it comes with certain restrictions. Taking legal action for violations of terms of service offers limited solutions, and the existing laws have largely proven inadequate in dealing with the issue of data scraping. In AI development, the prevailing belief is that the more training data, the better. OpenAI’s GPT-3 model, for instance, underwent training on an extensive 570 GB dataset. These methods, combined with the sheer size of the dataset, mean that tech companies often do not have a complete understanding of the data used to train their models.
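To make the scraping step concrete, here is a minimal sketch in Python (the `TextCollector` helper and the sample page are hypothetical; real pipelines fetch pages over HTTP with a crawler and use far more robust parsers): it strips markup from a fetched page so that only the visible text enters a training corpus.

```python
from html.parser import HTMLParser

class TextCollector(HTMLParser):
    """Strip tags from fetched HTML, keeping only visible text.

    A real pipeline would first download pages (e.g. with urllib or a
    crawler framework); here we feed in a local snippet so the sketch
    stays self-contained.
    """
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False  # True while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def page_to_text(html: str) -> str:
    """Reduce one HTML page to plain text for a training corpus."""
    parser = TextCollector()
    parser.feed(html)
    return " ".join(parser.chunks)

sample = "<html><body><h1>A Novel</h1><p>Chapter one.</p><script>track()</script></body></html>"
print(page_to_text(sample))  # → "A Novel Chapter one."
```

Run over millions of pages, exactly this kind of indiscriminate extraction is what sweeps copyrighted text into training datasets alongside everything else.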

An investigation conducted by the online magazine The Atlantic has uncovered that popular generative AI models, including Meta’s open-source Llama, were partially trained using unauthorised copies of books by well-known authors. This includes models like BloombergGPT and GPT-J from the nonprofit EleutherAI. The pirated books, totalling around 170,000 titles published in the last two decades, were part of a larger dataset called the Pile, which was freely available online until recently.

In specific situations, reproducing copyrighted materials may still be permissible without the consent of the copyright holder. In Europe, there are limited and specific exemptions that allow this, such as for quotation and parody. Despite growing concerns about the use of machine learning (ML) in the EU, it is only recently that EU member states have started implementing copyright exceptions for training purposes. The UK’s 2017 independent AI review, ‘Growing the artificial intelligence industry in the UK’, recommended allowing text and data mining by AI through appropriate copyright laws. In the USA, access to copyrighted training data seems to be somewhat more permissive. Although US law doesn’t include specific provisions addressing ML, it benefits from a comprehensive and adaptable fair use doctrine that has proven favourable for technological applications involving copyrighted materials.

The indiscriminate scraping of data and the unclear legal framework surrounding AI training datasets and the use of copyrighted materials without proper authorisation have prompted legal actions by content creators and authors. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have filed lawsuits against OpenAI and Meta, alleging that their works were used without permission to train AI models. The lawsuits contend that OpenAI’s ChatGPT and Meta’s LLaMA were trained on datasets obtained from ‘shadow library’ websites containing copyrighted books authored by them.

Why does it matter?

In conclusion, as AI rapidly advances, it blurs the lines between human and machine creativity, raising complex questions regarding IPR. Legislators face a challenging decision: whether or not to grant IP protection. As AI continues to advance, it poses significant legal and ethical questions by challenging traditional ideas of authorship and ownership. While navigating this new digital frontier, it’s evident that finding a balance between encouraging AI innovation and protecting IPRs is crucial.

If the stance is maintained that IP protection only applies to human-created works, it could have adverse implications for AI development. This would place AI-generated creations in the public domain, allowing anyone to use them without paying royalties or receiving financial benefits. Conversely, if lawmakers take a different approach, it could profoundly impact human creators and their creativity.

Another approach would be for AI developers to guarantee adherence to data acquisition rules, which might encompass acquiring licences for, or providing compensation for, IP used during the training process.

One thing is certain: effectively dealing with IP concerns in the AI domain requires cooperation among diverse parties, including policymakers, developers, content creators, and enterprises.

Key takeaways from the sixth UN session on cybercrime treaty negotiations

The sixth session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention is over: from 21 August until 1 September 2023, delegates from all states gathered in New York for another round of text-based negotiations, the penultimate session before the final negotiation round in February 2024.

Stalled negotiations over scope and terminology

Well, reaching a final agreement does not seem to be easy. A number of Western advocacy groups and Microsoft publicly expressed their discontent with the current draft (updated on 1 September 2023), which, they stated, could be ‘disastrous for human rights’. At the same time, some countries (e.g. Russia and China) shared concerns that the current draft does not meet the scope established by the committee’s mandate. In particular, these delegations and their like-minded colleagues believe that the current approach in the chair’s draft does not adequately address the evolving landscape of information and communication technologies (ICTs). For instance, Russia complained about the secretariat’s alleged disregard for a proposed article criminalising the use of ICTs for extremist and terrorist purposes. Russia, together with a group of states (e.g. China, Namibia, Malaysia, Saudi Arabia, and others), also supported the inclusion of digital assets under Article 16 on the laundering of proceeds of crime. The UK, Tanzania, and Australia opposed this inclusion, arguing that digital assets do not fall within the scope of the convention. Concerning other articles, Canada, the USA, the EU and its member states, and some other countries also wished to keep the scope narrower, and opposed proposals, in particular for articles on international cooperation (i.e. 37, 38, and 39), that would significantly expand the scope of the treaty.

The wording of each provision, given the legal weight specific terms carry, is yet another unresolved issue. Even though the chair emphasised that the dedicated terminology group continues working to resolve the disputes over terms and to propose ideas, many delegations have split into at least two opposing camps: whether to use ‘cybercrime’ or ‘the use of ICTs for malicious purposes’, whether to keep the verb ‘combat’ or replace it with more precise verbs such as ‘suppress’, whether to use ‘child pornography’ or ‘online child sexual abuse’, ‘digital’ or ‘electronic’ information, and so on.


For instance, in the review of Articles 6–10 on criminalisation, which cover essential cybercrime offences such as illegal access, illegal interception, data interference, systems interference, and the misuse of devices, several debates revolved around the terms ‘without right’ vs ‘unlawful’, and ‘dishonest intent’ vs ‘criminal intent’. 

Another disagreement arose over the terms ‘restitution’ and ‘compensation’ in Article 52. This provision requires states to retain the proceeds of crimes, to be disbursed to requesting states to compensate victims. India, supported by China, Russia, Syria, Egypt, and Iran, proposed that ‘compensation’ be replaced with ‘restitution’ to avoid a further financial burden on states. Additionally, India suggested that compensation should be left to the discretion of national laws rather than governed by the convention. Australia and Canada favoured retaining the word ‘compensation’ because it would ensure that the proceeds of crime delivered to requesting states are used only to compensate victims.

The bottom line is that terminology and scope, two of the most critical elements of the convention, remain unresolved and will need attention at the session in February 2024. However, if states have not been able to agree over the past six sessions, the international community will need a true diplomatic miracle in the current geopolitical climate. At the same time, the chair confirmed that she has no intention of extending her role beyond February.

Hurdles to deal with human rights and data protection-related provisions

We wrote before that states are divided when discussing human rights perspectives and safeguards: While one group is pushing for a stronger text to protect human rights and fundamental freedoms within the convention, another group disagrees, arguing that the AHC is not mandated to negotiate another human rights convention, but an international treaty to facilitate law enforcement cooperation in combating cybercrime. 

In the context of text-based negotiations, this has meant that some states suggested deleting Article 5 on human rights and merging it with Article 24, as well as removing the gender perspective-related paragraphs because of concerns over the definition of ‘gender perspective’ and the challenges of translating the phrase into other languages. Another clash happened during discussions about whether the provisions should allow the real-time collection of traffic data and the interception of content data (Articles 29 and 30, respectively). While Singapore, Switzerland, Malaysia, and Vietnam proposed removing such powers from the text, other delegations (e.g. Brazil, South Africa, the USA, Russia, Argentina, and others) favoured keeping them. The EU stressed that such measures represent a high level of intrusion and significantly interfere with the human rights and freedoms of individuals. However, the EU expressed its openness to consider keeping such provisions, provided that the conditions and safeguards outlined in Articles 24, 36 and 40(21) remain in the text.

With regard to data protection in Article 36, CARICOM proposed an amendment allowing states to impose appropriate conditions in compliance with their applicable laws to facilitate personal data transfers. The EU and its member states, New Zealand, Albania, the USA, the UK, China, Norway, Colombia, Ecuador, Pakistan, Switzerland, and some other delegations supported this proposal. India did not, while some other delegations (e.g. Russia, Malaysia, Argentina, Türkiye, Iran, Namibia and others) preferred retaining the original text.


Articles on international cooperation or international competition?

Negotiations on the international cooperation chapter have not been smooth either. During the discussions on mutual assistance, Russia, in particular, pointed out a lack of grounds for requests and suggested adding a request for ‘data identifying the person who is the subject of a crime report’ with, where possible, ‘their location and nationality or account as well as items concerned’. Australia, the USA, and Canada did not support this amendment.

Regarding the expedited preservation of stored computer data/digital information in Article 42, Russia also emphasised the need to distinguish between the location of a service provider (or any other data custodian, as defined in the text) and the locations where data flows and processing activities, such as storage and transmission, actually occur due to technologies like cloud computing. To address this ‘loss of location’ issue, Russia suggested referring to the Second Additional Protocol to the Budapest Convention, so as to incorporate the concept of data being in the possession or under the control of a service provider, or established through data processing activities, operating from within the borders of another state party. The EU and its member states, the USA, Australia, Malaysia, South Africa, Nigeria, Canada, and others preferred to retain the original draft text.

A number of delegations (e.g. Pakistan, Iran, China, Mauritania) also proposed an additional article on ‘cooperation between national authorities and service providers’ to oblige the reporting of criminal incidents to relevant law enforcement authorities, providing support to such authorities by sharing expertise, training, and knowledge, ensuring the implementation of protective measures and due diligence protocols, ensuring adequate training for their workforce, promptly preserving electronic evidence, ensuring the confidentiality of requests received from such authorities, and taking measures to render offensive and harmful content inaccessible. The USA, Georgia, Canada, Australia, the EU, and its member states, and some other delegations rejected this proposal. 

SDGs in the scope of the convention?

An interesting development was the inclusion of the word ‘sustainability’ under Article 56 on the implementation of the convention. While sustainability was not mentioned in the previous sessions, Australia, China, New Zealand, and Yemen, among other countries, proposed that Article 56 should read: ‘Implementation of the convention through sustainable development and technical assistance’. Costa Rica claimed that such inclusion would link capacity building under this convention to the achievement of the Sustainable Development Goals (SDGs). Additionally, Paraguay proposed that Article 52(1) should ensure that the implementation of the convention through international cooperation takes into account the ‘negative effects of the offences covered by this Convention on society in general and, in particular, on sustainable development, including the limited access that landlocked countries are facing’. While the USA and Tanzania acknowledged the importance of Paraguay’s proposal, they stated that they could not support this edit.

What’s next?

The committee will continue the negotiations in February 2024 at the seventh session, and if the text is adopted, states will still have to ratify it afterwards. However, ‘should a consensus prove not to be possible, the Bureau of the UN Office on Drugs and Crime (UNODC) will confirm that the decisions shall be taken by a two-thirds majority of the present voting representatives’ (from the resolution establishing the AHC). The chair must report their final decisions before the 78th session of the UN General Assembly.

5G Transformation: The power of good policy 

The global rollout of 5G networks has been met with considerable excitement, and rightly so. While the promise of faster data speeds has captured much of the spotlight, the true transformational potential of 5G extends far beyond mere internet speed enhancements. Across continents, from the bustling metropolises of North America to the vibrant landscapes of Africa, a diverse array of strategies and approaches is shaping the future of 5G connectivity. As policymakers grapple with the intricacies of crafting effective 5G spectrum policies, it’s essential to understand how these policies are intrinsically linked to achieving the wider benefits of this groundbreaking technology.

The spectrum: A valuable resource

At the heart of 5G technology is the radio spectrum, a finite and valuable resource allocated by governments to mobile network operators. These spectrum bands determine the speed, coverage, and reliability of wireless networks. In 2023, there is high demand for mid-band and millimetre-wave spectrum, both essential for delivering the anticipated 5G transformation.

Frequency bands of 5G networks [picture from]

Policy imperatives to ensure low latency

Ultra-low latency is one of 5G’s defining features, enabling real-time communication and interaction over the internet. Policy decisions that prioritise and allocate specific spectrum bands for applications that require low latency, such as telemedicine and autonomous vehicles, can have a profound impact on their effectiveness and safety. Policymakers must prioritise the allocation of spectrum for latency-sensitive applications while also accommodating the growing data demands of traditional mobile services. 

The US Federal Communications Commission (FCC) launched its 5G FAST Plan in 2018. This initiative facilitates the deployment of 5G infrastructure by streamlining regulations and accelerating spectrum availability. As part of the programme, the FCC conducted auctions for spectrum bands suitable for 5G, such as the 24 GHz and 28 GHz bands, to support high-frequency, low-latency applications. 
The EU introduced the 5G Action Plan in 2016 as part of its broader Digital Single Market strategy. The plan emphasises cooperation among EU member states to create the conditions needed for 5G deployment, including favourable spectrum policies. 
China launched its National 5G Strategy in 2019, outlining a comprehensive roadmap for 5G development. The strategy includes policies to allocate and optimise spectrum resources for 5G networks.

The Independent Communications Authority of South Africa (ICASA) is actively exploring spectrum policies to accommodate 5G. ICASA has published draft regulations for the use of high-demand spectrum, including the 3.5 GHz and 2.6 GHz bands, which are crucial for 5G deployment. ICASA’s efforts to regulate spectrum have been praised by the Wi-Fi Alliance for their role in advancing Wi-Fi technology and connectivity in Africa. ICASA aims to amend radio frequency regulations to stimulate digital development, investment, and innovation in the telecom sector for public benefit.

Enabling massive Internet of Things connectivity

The International Telecommunication Union (ITU) has classified 5G mobile network services into three categories: Enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (uRLLC), and Massive Machine-Type Communications (mMTC). The mMTC service was created specifically to enable an enormous volume of small data packets to be collected from large numbers of devices simultaneously, as is the case with internet of things (IoT) applications. Thanks to mMTC, 5G is the first network generation designed for the IoT from the ground up.
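The trade-offs between these categories can be made concrete with the headline IMT-2020 targets often cited for them: roughly 20 Gbit/s peak downlink for eMBB, 1 ms user-plane latency for uRLLC, and one million connected devices per km² for mMTC. The sketch below (a hypothetical `pick_service_class` helper with illustrative thresholds, not values from any standard) shows how an application’s needs map onto a service class:

```python
# Indicative mapping from application needs to the three ITU 5G
# service categories. Thresholds are illustrative assumptions only.
def pick_service_class(needs_low_latency: bool, devices_per_km2: int) -> str:
    if needs_low_latency:
        return "uRLLC"  # e.g. autonomous vehicles, remote surgery
    if devices_per_km2 > 100_000:
        return "mMTC"   # e.g. city-wide utility-meter networks
    return "eMBB"       # e.g. video streaming, AR/VR

print(pick_service_class(True, 50))        # uRLLC
print(pick_service_class(False, 500_000))  # mMTC
print(pick_service_class(False, 200))      # eMBB
```

In practice, of course, a network slice is chosen by the operator against many more parameters (reliability, mobility, energy budget), but the three-way split captures the design intent of each category.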

5G communication networks are important for IoT-powered ‘smart cities’

The IoT stands as a cornerstone of 5G’s transformative potential; 5G is expected to unleash a massive IoT ecosystem in which networks can serve the communication needs of billions of connected devices, with the appropriate trade-offs between speed, latency, and cost. However, this potential hinges on the availability of sufficient spectrum for the massive device connectivity that the IoT needs. The demands that the IoT places on cellular networks vary by application, often requiring remote device management. And because connectivity and speed are mission-critical for remotely operated devices (even very short network dropouts can be disruptive), uRLLC and 5G massive MIMO radio access technologies offer key ingredients for effective IoT operations.

Effective 5G spectrum policies must allocate dedicated bands for IoT devices while ensuring interference-free communication. Standards in Releases 14 and 15 of the Third Generation Partnership Project (3GPP) solve most of the commercial bottlenecks and facilitate the vision of 5G and the huge IoT market.

Diverse approaches to spectrum allocation

The USA’s spectrum allocation strategy is centred on auctions as its primary method. The FCC has been at the forefront of this approach, conducting auctions for various frequency bands. This auction-driven strategy allows network operators to bid for licences, enabling them to gain access to specific frequency ranges. Notably, the focus has been on making the mid-band spectrum available, with a significant emphasis on cybersecurity.

South Korea’s approach to spectrum allocation has been marked by a proactive stance. Among the pioneers in launching commercial 5G services, the South Korean government facilitated early spectrum auctions, allocating critical frequency bands such as 3.5 GHz and 28 GHz for 5G deployment. This forward-looking strategy not only contributed to the rapid adoption of 5G within the nation but also positioned South Korea as a global leader in the 5G revolution.

The Korea Fair Trade Commission (KFTC), South Korea’s antitrust regulator, has fined three domestic mobile carriers a total of 33.6 billion won ($25.06 million) for exaggerating 5G speeds. [link]

The EU champions spectrum harmonisation to enable seamless cross-border connectivity. The identification of the 26 GHz band for 5G in the Radio Spectrum Policy Group (RSPG) decision further supports the development of a coordinated approach. By aligning policies across member states, the EU aims to eliminate fragmentation and ensure a cohesive 5G experience.

Moreover, many African countries are in the process of identifying and allocating spectrum for 5G deployment. Governments and regulatory bodies have considered various frequency bands, such as the C-Band (around 3.5 GHz) and the millimetre-wave bands (above 24 GHz), for 5G services. Some African nations have issued trial licences to telecommunications operators to conduct 5G trials and test deployments. These help operators understand the technical challenges and opportunities associated with 5G in the African context. For example, in South Africa, ICASA is developing a framework for 5G spectrum allocation. Its approach encompasses licence conditions, coverage requirements, and the possibility of sharing spectrum resources.

Kenya is in the process of exploring opportunities to release additional spectrum to facilitate 5G deployment. The Communications Authority of Kenya is contemplating reallocating the 700 and 800 MHz bands for mobile broadband use, including 5G services.

Ookla 5G Map [link]

A well-structured spectrum management framework serves as the guiding principle for the equitable and efficient allocation of this resource. Such frameworks include regulatory approaches like exclusive licensing, lightly managed sharing, and licence-exempt usage. Sharing frameworks enable coexistence, from simple co-primary sharing to multi-tiered arrangements. Static sharing uses techniques such as frequency-division multiple access (FDMA) and code-division multiple access (CDMA), while Dynamic Spectrum Sharing (DSS) allows users to access spectrum as needed.
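The difference between static and dynamic sharing can be illustrated with a toy model (the function names and figures below are assumptions for illustration, not a standards implementation): a static FDMA-style split gives every operator a fixed slice whether or not it is used, while a DSS-style scheme follows demand.

```python
def static_share(band_mhz, operators):
    """FDMA-style static split: each operator gets an equal fixed slice,
    whether or not it currently needs it."""
    slice_mhz = band_mhz / len(operators)
    return {op: slice_mhz for op in operators}

def dynamic_share(band_mhz, demand_mhz):
    """DSS-style split: meet each operator's current demand if the band
    can cover it; otherwise scale allocations proportionally to demand."""
    total = sum(demand_mhz.values())
    if total <= band_mhz:
        return dict(demand_mhz)  # everyone gets what it asked for
    scale = band_mhz / total
    return {op: d * scale for op, d in demand_mhz.items()}

demand = {"op_a": 60.0, "op_b": 10.0, "op_c": 10.0}
print(static_share(100.0, list(demand)))  # every operator: a fixed 33.3 MHz slice
print(dynamic_share(100.0, demand))       # op_a gets its full 60 MHz
```

In the oversubscribed case the dynamic scheme scales allocations proportionally; real DSS schedulers work per time slot on shared 4G/5G carriers, but the contrast with fixed slicing is the same.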

In conclusion, the intricate world of 5G spectrum policies profoundly shapes the path of 5G’s transformative journey. Beyond speed enhancements, global strategies spotlighted here reveal the interplay of technology and governance.

From South Korea’s spectrum leadership to the EU’s harmonisation and Africa’s context-specific solutions, each of these approaches underscores the link between policies and 5G’s potential. These efforts are indispensable for shaping optimal policies for future development.

Today’s decisions will echo into the future, moulding 5G’s global impact. This intricate interweaving emphasises 5G’s capabilities and policy’s role in driving unprecedented connectivity, innovation, and societal change.

Internet shutdowns: Can we find solutions?

By Bojana Kovač

Internet shutdowns are intentional disruptions of the internet or of electronic communications, which can occur nationwide or in specific locations. They can be partial or take the form of total internet blackouts, blocking people from using the internet entirely. According to research conducted by Surfshark, 4.24 billion individuals were affected globally in the first half of 2023, during which 82 internet restrictions hit 29 countries.


It has been estimated that Iran imposed the largest number of internet restrictions, while India proved to be a world leader in internet shutdowns in 2022. While the EU member states have not experienced total or partial blackouts, the statement made on France Info a few months ago by the EU’s Internal Market Commissioner, Thierry Breton, during the riots in France, raised a few eyebrows. Breton suggested that online platforms could be suspended for failing to promptly remove illegal content, especially in case of riots and violent protests.

Author: Pietro Naj-Oleari 

This announcement led more than 60 civil rights NGOs to seek clarification, concerned that the Digital Services Act (DSA) might be misused as a means of censorship. Breton then clarified that temporary suspension should be a last resort when platforms fail to remove illegal content. Such extreme circumstances include systemic failures to address infringements related to calls for violence or killings. Significantly, Breton underscored that the courts will make the final decision on such actions, ensuring a fair and impartial review process.


On a global scale, Surfshark found that since 2015, there have been:

  • 107 disruptions in Africa,
  • 585 disruptions in Asia,
  • 15 disruptions in Europe,
  • 9 disruptions in North America, and 
  • 42 disruptions in South America.

Shutdowns as a tool for repressing fundamental human rights

Most countries, if not all, justify shutdowns as a means of maintaining national security or preventing the spread of false information, among other reasons. However, internet shutdowns have, in some cases, become a tool of digital authoritarianism, with deleterious effects on human rights, including the right to free speech, access to information, freedom of assembly, and development.

The UN Special Rapporteur on the rights to freedom of peaceful assembly and of association, Clément Nyaletsossi Voule, stated that internet shutdowns violate international human rights law and cannot be justified. On a regional level, the European Court of Human Rights (ECtHR) ruled in Cengiz and Others v. Türkiye that ‘the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general.’ The case concerned a Turkish court’s decision to block access to Google Sites because the owner of a site hosted there faced criminal proceedings for insulting the memory of Atatürk. While this was not a total blackout, the ECtHR ruled that even an internet restriction with a limited effect constitutes a violation of the freedom of expression.

The African Commission on Human and Peoples’ Rights (ACHPR) also condemned the imposition of internet shutdowns in its 2016 report, and urged African states to ensure the effective protection of internet rights.


The case of Gabon

The most recent case of an internet blackout occurred in Gabon on 26 August 2023, the day the presidential and legislative elections took place. Minister Rodrigue Mboumba Bissawou announced the internet blackout, along with a nightly 7:00 pm to 6:00 am curfew starting Sunday, on state television.

Bissawou claimed that the blackout was aimed at preventing the spread of false information and countering the spread of violence. On 30 August 2023, following the military coup, the independent and non-partisan internet monitor Netblocks reported the gradual restoration of internet connectivity. This is not the first time the country has faced an internet blackout: the same thing occurred in 2019 during an attempted coup.
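Monitors such as Netblocks infer shutdowns and restorations by probing many endpoints from within a country and classifying the failure pattern. A toy version of that classification logic is sketched below (hypothetical function and endpoint names; real monitors aggregate thousands of probes plus routing and traffic telemetry):

```python
def classify_outage(probe_results):
    """Classify connectivity from reachability probes.

    probe_results maps an endpoint (site or service) to whether it
    answered. Real monitors aggregate far richer signals; this only
    distinguishes the three coarse states discussed in the article.
    """
    reachable = sum(probe_results.values())
    total = len(probe_results)
    if reachable == 0:
        return "total blackout"
    if reachable < total:
        return "partial restriction"  # e.g. platform-level blocking
    return "normal"

probes = {"news.example": False, "social.example": False, "search.example": True}
print(classify_outage(probes))  # → partial restriction
```

The distinction matters for reporting: a total blackout and the selective blocking of a few platforms are both ‘shutdowns’ in the statistics above, but they call for different circumvention strategies.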


Responsibility of the ISPs and telecoms 

Measured against regional and international instruments, such restrictions infringe fundamental human rights. Since these restrictions are imposed by governmental authorities, the UN Human Rights Council, in its 2022 annual report, called on private companies, including internet service providers (ISPs) and telecoms, to explore all lawful measures to challenge the implementation of internet shutdowns. Namely, it called on them to ‘carry out adequate human rights due diligence to identify, prevent, mitigate, and assess the risks of ordered Internet shutdowns when they enter and leave markets.’

In 2022, the Business & Human Rights Resource Centre urged telecommunications companies to take action to protect human rights in light of the growing uncertainty of internet access worldwide. Its report also highlighted the companies’ responsibilities under the UN Guiding Principles on Business and Human Rights to adopt human rights policies to

  • uphold the rights of users or customers, 
  • enhance transparency on requests by governments for internet shutdowns,
  • negotiate human rights-compliant licensing agreements, 
  • adopt effective and efficient remedial processes.

The problem, however, lies in the fact that most telecoms are state-owned and controlled by the governments of the day. Even in the case of foreign ISPs or telecoms, there is a high chance that they will comply with government demands, because they must follow national laws even where doing so breaches the human rights standards of the jurisdictions in which they are based. An example is Telenor, which operated in Myanmar when the country adopted a draft cybersecurity bill allowing the military junta to order internet shutdowns. Despite the Norwegian telecom company’s opposition to the bill for failing to ensure effective human rights protection, Telenor complied with the military’s requests and ultimately sold its operations in Myanmar. This raised many concerns among civil society, as Telenor was accused of a lack of transparency in the sale. The digital civil rights organisation Access Now criticised this lack of transparency, accusing Telenor of making it even more difficult to develop mitigation strategies to avoid serious human rights abuses.


Is there a way out?

Internet shutdowns have intensified over the years, and urgent action is necessary to prevent further human rights violations. Governments appear unable or unwilling to act, while private actors are not yet in a position to guarantee internet access during disruptions. Therefore, until governments take more robust action to ensure internet access and end human rights violations, users should be educated on how to prepare for a shutdown. Access Now recommends downloading several virtual private networks (VPNs) in advance where there is a risk of an internet shutdown, though governments often resort to blocking access to VPN providers as well. Each VPN’s privacy policy should also be checked beforehand, as not all VPNs guarantee effective privacy protection.

The consequences of Meta’s multilingual content moderation strategies

By Alicia Shepherd-Vega

About 8 million Ethiopians use the world’s most popular social media platform, Facebook, daily. Its use, of course, is shaped by the parameters of their specific speech communities. Some 86 languages are spoken in Ethiopia by a population of 120.3 million, but two of them, Amharic and Oromo, are spoken by two-thirds of the population. Amharic is the country’s second most widely spoken language.

Like most countries across the globe, the use of social media in Ethiopia is ubiquitous. What sets Ethiopia apart, though, as with many countries in the Global South, are the issues that arise with developments designed by the Global North for the Global North context. This perspective becomes apparent when one views social media usage from the angle of linguistics. 

Content moderation and at-risk countries (ARCs)

Increased social media usage has recently engendered a proliferation of policy responses, particularly concerning content moderation. The situation is no different in Ethiopia. Increasingly, Ethiopians blame Meta and other tech giants for the speed and scale at which conflict spreads across the country. For instance, Meta faces a lawsuit filed by the son of an Ethiopian academic, Mareg Amare, who was assassinated in November 2021. The lawsuit claims that Meta failed to delete life-threatening posts, categorised as hate speech, that targeted Mareg. Meta had earlier assured the global public that a wide variety of context-sensitive strategies, tactics, and tools were used to moderate content on its platform. The strategies behind this and other such promises were never published until the leak of the so-called Facebook Files brought to the fore the results of key studies conducted by Meta, including on the harmful effects experienced by users of its platforms, Facebook and Instagram.

Meta employees have also complained of human rights violations, including overexposure to traumatic content, such as abuse, human trafficking, ethnic violence, organ selling, and pornography, without a safety net of employee mental health benefits. Earlier this year, workers at Sama, a Meta contractor in Kenya, won a ruling from a local court ordering Meta to reinstate them after they were dismissed for complaining about these working conditions and attempting to unionise. The court later ruled that the company is also responsible for their mental health, given their overexposure to violent content on the job.

The disparity in the application of content moderation strategies, tactics, and tools used by the tech giant is also a matter of concern. Crosscheck or XCheck, a quality control measure used by Facebook for high-profile accounts, for example, shields millions of VIPs, such as government officials, from the enforcement of established content moderation rules; on the flip side, inadequate safeguards on the platform have coincided with attacks on political dissidents. Hate speech is said to increase by some 300% amidst bloody riots. This is no surprise, given Facebook’s permissiveness in the sharing and recycling of fake news and plagiarised and radical content.

Flag of Ethiopia

In the case of Ethiopia, the platform has catalysed conflict. In October 2021, Dejene Assefa, a political activist with over 120,000 followers, called for supporters to take up arms against the Tigrayan ethnic group. The post was shared about 900 times and received 2,000 reactions before it was taken down. During this period, it was reported that the federal army had also waged war against the Tigrayans because of an attack on its forces. Calls to attack the group proliferated on the platform, many of which were linked to violent incidents. According to a former Google data scientist, the situation was reminiscent of what occurred in Rwanda in 1994. In another case, the deaths of 150 people and the arrest of 2,000 others coincided with the protests that followed the assassination of activist Hachalu Hundessa, who had campaigned on Facebook for better treatment of the Oromo ethnic group. The incident led to a further increase in hate speech on the platform, including from several diaspora groups. Consequently, Facebook translated its community standards into Amharic and Oromo for the first time.

In light of ongoing conflicts in Ethiopia, Facebook labelled the country a first-tier ‘at-risk country’, alongside others like the USA, India, and Brazil. ARCs are countries where platform discourse risks inciting offline violence. As a safeguard, war rooms are usually set up to monitor network activity in these countries; for developing countries like Ethiopia, however, Facebook has not extended such privileges. In fact, although the Facebook platform can support 110 languages, it can only review content in 70. At the end of 2021, Ethiopia had no misinformation or hate speech classifiers and had the lowest completion rate for user reports on the platform. User reports help Meta identify problematic content, but the interfaces used for such reports lacked local language support.

Languages are only added when a situation becomes openly and obviously untenable, as was the case in Ethiopia. It usually takes Facebook at least one year to introduce even the most basic automated tools. By 2022, amidst the outcry for better moderation in Ethiopia, Facebook partnered with the fact-checking organisations PesaCheck and AFP Fact Check and began moderating content in the two languages; however, only five people were deployed to scan content posted by the 7 million Ethiopian users. Facebook principally uses automation for analysing content in Ethiopia.

AI and low-resource languages

AI tools are principally used for automated content moderation. The company claims that generative AI in the form of large language models (LLMs) is the most scalable approach and the best suited to network-based systems like Facebook. These models are built on natural language processing (NLP), which allows them to read and produce text much as humans do. According to Meta, models such as XLM-R and its Few-Shot Learner, whether trained on one language or many, are used to moderate over 90% of the content on its platforms, including content in languages on which the models have not been trained.

These LLMs train on enormous amounts of data from one or more languages. They identify patterns in higher-resourced languages and, in a process termed cross-lingual transfer, apply these patterns to lower-resourced languages to identify and process harmful content. Languages with a resource gap are those that lack the high-quality digitised data needed to train models. However, one challenge with both monolingual and multilingual models is that they have consistently missed the mark in analysing violent content appropriately even in English. The situation has been worse for other languages, particularly low-resource languages like Amharic and other Ethiopian languages.
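The idea behind cross-lingual transfer can be sketched with a toy model. Everything below, including the words of the hypothetical low-resource language and the tiny shared ‘concept space’ standing in for a multilingual embedding space, is invented for illustration; real models like XLM-R learn such shared representations from huge corpora, not from a hand-written dictionary:

```python
# Toy sketch of cross-lingual transfer via a shared representation.
# The shared "concept space" below is a stand-in for a learned
# multilingual embedding space; all words and labels are invented.
shared_space = {
    "attack": "C_VIOLENCE", "destroy": "C_VIOLENCE", "hello": "C_GREETING",
    # hypothetical words in a lower-resourced language, mapped to the
    # same concepts as their higher-resourced counterparts:
    "meta_x": "C_VIOLENCE", "selam": "C_GREETING",
}

def to_concepts(text):
    # Unknown words fall through to a generic C_UNK concept.
    return [shared_space.get(w, "C_UNK") for w in text.lower().split()]

# "Train" on high-resource (English) examples only: count how often
# each concept co-occurs with each label.
train = [("attack destroy", "harmful"), ("hello", "benign")]
counts = {}
for text, label in train:
    for c in to_concepts(text):
        counts.setdefault(c, {}).setdefault(label, 0)
        counts[c][label] += 1

def classify(text):
    score = {"harmful": 0, "benign": 0}
    for c in to_concepts(text):
        for label, n in counts.get(c, {}).items():
            score[label] += n
    return max(score, key=score.get)

# Transfer: the classifier never saw the low-resource language during
# training, yet labels it via the shared space.
print(classify("meta_x"))  # harmful
print(classify("selam"))   # benign
```

The sketch also shows the failure mode: any word missing from the shared space silently collapses to an unknown concept and contributes nothing, which is exactly where low-resource languages suffer.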

AI models and network-based systems have the following limitations:

  1. They rely on machine-translated texts, which sometimes contain errors and lack nuance. 
  2. Network effects are complex for developers, so it is sometimes difficult to identify, diagnose, or fix the problem when models fail. 
  3. They cannot produce the same quality of work in all languages. One size does not fit all.
  4. They fail to account for the psycho-social context of local-language speakers, especially in high-risk situations.
  5. They cannot parse the peculiarities of a lingua franca and apply them to specific dialects.
  6. Machine learning (ML) models depend on previously seen features, which makes them easy to evade, as humans can couch meaning in various forms.
  7. NLP tools require clear, consistent definitions of the type of speech to be identified. This is difficult to ascertain from policy debates around content moderation and social media mining. 
  8. ML models reflect the bias in their training data.
  9. The highest-performing models accessible today achieve accuracy rates of only 70–75%, meaning roughly one in every four posts is likely to be treated inaccurately. Accuracy in ML is also subjective, as its measurement varies from developer to developer.
  10. ML tools used to make subjective predictions, like whether someone might become radicalised, can be impossible to validate.
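Limitation 6 in particular is easy to demonstrate. The toy filter below (the blocklist and example sentences are invented for illustration) flags only previously seen tokens, so a one-character change slips past it, while a benign use of a listed word is flagged anyway:

```python
# Toy illustration of a feature-based filter: it matches only tokens
# it has seen before, so it is trivially evaded by surface changes
# and blind to context. Blocklist and examples are invented.
BLOCKLIST = {"trash", "filth"}

def naive_filter(text):
    return any(tok in BLOCKLIST for tok in text.lower().split())

print(naive_filter("they are filth"))      # True: previously seen feature
print(naive_filter("they are f1lth"))      # False: one-character evasion
print(naive_filter("take out the trash"))  # True: benign use flagged anyway
```

This is, in miniature, why feature-based moderation both under-blocks (obfuscated or coded language) and over-blocks (offensive words used innocently in lyrics, taunting, or gaming).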

According to Natasha Duarte and Emma Llansó of the Center for Democracy & Technology,

Today’s tools for automating social media content analysis have limited ability to parse the nuanced meaning of human communication, or to detect the intent or motivation of the speaker… without proper safeguards these tools can facilitate overbroad censorship and a biased enforcement of laws and of platforms’ terms of service.

In essence, given that existing LLMs have proven ineffective at analysing human language on Facebook, allowing tech giants like Meta to enforce platform policies built around their use for content moderation risks stifling free speech, as well as the leakage of these ill-informed policies into national and international legal frameworks. According to Duarte and Llansó, this may lead to violations of human rights and liberties.

Human languages and hate speech detection

The use and spread of hate speech are taken seriously by UN member states, as evidenced by General Assembly resolution A/RES/59/309. Effective analysis of human language requires that the fundamental tenets governing language formation and use be considered. Except for some African languages not yet thoroughly studied, most human languages fall into six main families: Indo-European, which includes European languages like English and Spanish, now spoken across North and South America and parts of Asia; Sino-Tibetan; Niger-Congo; Afro-Asiatic; Austronesian; and Trans-New Guinea. The Ethiopian languages Oromo, Somali, and Afar fall within the Cushitic and Omotic subcategories of the Afro-Asiatic family, whereas Amharic falls within its Semitic subgroup.

This primary level of linguistic distinction is crucial to understanding the differences in language patterns, be they phonemic, phonetic, morphological, syntactic or semantic. These variations, however, are minimal when compared with the variations brought about by social context, mood, tone, audience, demographics, and environmental factors, to name a few. Analysing human language in an online setting like Facebook becomes particularly complex, given its mainly text-based nature and the moderator’s inability to observe non-linguistic cues. 

Variations in language are even more complex in the case of hate speech, given the role played by factors like intense emotion. Davidson et al. (2017) describe hate speech as ‘speech that targets disadvantaged social groups in a manner that is potentially harmful to them, … and in a way that can promote violence or social disorder’. It is intended to be derogatory, to humiliate, or to insult. To add to the complexity, hate speech and extremism are often difficult to distinguish from other types of speech, such as political activism and news reporting. Hate speech can also be mistaken for merely offensive words, and offensive words can be used in non-offensive contexts such as music lyrics, taunting, or gaming. Other factors, such as gender, audience, ethnicity, and race, also play a vital role in deciphering the meaning behind language.

On the level of dialectology, parlance such as slang can constitute offensive language or hate speech, depending partly on whether it is directed at someone. For instance, ‘life’s a bi*ch’ is considered offensive language by some models, but it can be considered hate speech when directed at a person. Yet hate speech does not always contain offensive words. Consider the words of Dejene Assefa in the case mentioned above: ‘the war is with those you grew up with, your neighbour… If you can rid your forest of these thorns… victory will be yours’. Slurs, too, whether overtly offensive or not, can convey hate: ‘they are foreign filth’ (non-offensive wording used as hate speech) and ‘white people need those weapons to defend themselves from the subhuman trash these spicks unleash on us’ are examples. Overall, hate speech reflects our subjective biases. For instance, people tend to label racist and homophobic language as hate speech but sexist language as merely offensive. This, too, has implications for analysing language accurately: who is the analyst? And, in the case of models, on whose data was the model trained?

The complexities mentioned above are further compounded when translating or interpreting between languages. The probability of transliteration (translating words on their phonemic level) increases with machine-enabled translations such as Google Translate. With translations, misunderstanding grows across language families, particularly when one language does not contain the vocabulary, characters, conceptions, or cultural traits associated with the other language, an occurrence referred to by machine-learning engineers as the UNK problem.
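The UNK problem can be sketched in a few lines. In the toy tokeniser below (the vocabulary and sentences are invented for illustration), any word outside a fixed vocabulary collapses to a single <unk> token, which is precisely what happens to culturally specific or low-resource vocabulary:

```python
# Toy sketch of the "UNK problem": a fixed vocabulary maps every
# unseen word to one <unk> token, discarding exactly the vocabulary
# that matters most. Vocabulary and sentences are invented examples.
VOCAB = {"the", "war", "is", "with", "your", "neighbour"}

def tokenize(sentence):
    return [w if w in VOCAB else "<unk>" for w in sentence.lower().split()]

print(tokenize("the war is with your neighbour"))
# every word is known, so nothing is lost
print(tokenize("rid your forest of these thorns"))
# a metaphorical call to violence collapses to mostly <unk> tokens
```

A model downstream of such a tokeniser cannot distinguish a metaphorical call to violence from any other out-of-vocabulary sentence, since both reduce to the same run of <unk> tokens.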

Yet, from all indications, Facebook and other tech giants will invariably continue to experiment with using one LLM to moderate all languages on their platforms. For instance, this year, Google announced that its new speech model will encompass the world’s 1000 most spoken languages. Innovators are also trying to develop models to bridge the gap between human language and LLMs. Lesan, a Berlin-based startup, built the first general machine translation service for Tigrinya. It partners with Tigrinya-speaking communities to scan texts and build custom character recognition tools, which can turn the texts into machine-readable forms. The company also partnered with the Distributed AI Research Institute (DAIR) to develop an open-source tool for identifying languages spoken in Ethiopia and detecting harmful speech in them.


In cases like that of Ethiopia, it is best first to understand the broader system and paradigm at play. The situation reflects the push and pull typical of a globalised world, in which changes in the developed world wittingly or unwittingly draw the rest of the world into spaces where they subsequently realise they do not fit. It is from this discomfort that the push emerges. What is now evident is that the developers of the technology, and the powers that sanctioned its global use, did not anticipate the peculiarities of this use case. Unfortunately, that is not atypical of an industry that embraces agility as its modus operandi.

It is, therefore, more critical now than ever that international mechanisms and frameworks, including a multistakeholder, cross-disciplinary approach to decision-making, be built into public and private sector technological innovation at the local level, particularly for rapidly scalable solutions emerging from the Global North. It is also essential that tech giants be held responsible for equitably distributing, within and across countries, the resources needed for the optimal implementation of safety protocols for content moderation. To this end, it would serve Facebook and other tech giants well to partner with startups like Lesan.

It is imperative that a sufficient number of qualified people, supported by on-the-job mental health benefits, be engaged to deal with the specific issue of analysing human languages, which still hold innumerable unknowns and unknown unknowns. The use of AI and network-based systems can only be as effective as the humans behind the technologies and processes. Moreover, Facebook users will continue to adapt their use of language, and it is reckless to assume that these models will be able to adjust to or predict all human adaptive strategies. Even if they eventually can, the present and interim impact, as seen in Ethiopia and other countries, is far too costly in human rights and lives.

Finally, linguistics, like all disciplines and languages, is still evolving. It is irresponsible, therefore, to pin any, let alone all, languages down to one model without the foreknowledge of dire consequences.