Copyright laws are set to provide a substantial challenge to the artificial intelligence (AI) sector in 2024, particularly in the context of generative AI (GenAI) technologies becoming pervasive in 2023. At the heart of the matter lie concerns about the use of copyrighted material to train AI systems and the generation of results that may be significantly similar to existing copyrighted works. Legal battles are predicted to affect the future of AI innovation and may even change the industry’s economic models and overall direction. According to tech companies, the lawsuits could create massive barriers to the expanding AI sector. On the other hand, the plaintiffs claim that the firms owe them payment for using their work without fair compensation or authorization.
Legal Challenges and Industry Impact
AI programs that generate outputs comparable to existing works could infringe on copyrights if they had access to those works and produced substantially similar outcomes. In late December 2023, the New York Times became the first major American news organization to file a lawsuit against OpenAI and its backer Microsoft, asking the court to order the destruction of all large language models (LLMs), including the famous chatbot ChatGPT, and all training datasets that incorporate the publication’s copyrighted content. The newspaper alleges that the companies’ AI systems engaged in ‘widescale copying’, a violation of copyright law. This high-profile case illustrates the broader legal challenges faced by AI companies. Authors, creators, and other copyright holders have initiated lawsuits to protect their works from being used without permission or compensation.
As recently as 5 January 2024, authors Nicholas Basbanes and Nicholas Gage filed a new complaint against both OpenAI and its investor, Microsoft, alleging that their copyrighted works were used without authorization to train OpenAI’s AI models, including ChatGPT. In the proposed class action complaint, filed in federal court in Manhattan, they charge the companies with copyright infringement for including multiple works by the authors in the datasets used to train OpenAI’s GPT large language model (LLM).
This lawsuit is one among a series of legal cases filed by multiple writers and organizations, including well-known names like George R.R. Martin and Sarah Silverman, alleging that tech firms utilised their protected work to train AI systems without offering any payment or compensation. The results of these lawsuits could have significant implications for the growing AI industry, with tech companies openly warning that any adverse verdict could create considerable hurdles and uncertainty.
Ownership and Fair Use
Questions about who owns the output generated by AI systems—whether it is the companies and developers that design the systems or the end users who supply the prompts and inputs—are central to the ongoing debate. The ‘fair use’ doctrine, often cited by the United States Copyright Office (USCO), the United States Patent and Trademark Office (USPTO), and the federal courts, is a critical consideration, as it allows creators to build upon copyrighted work without permission in limited circumstances. However, its application to AI-generated content, with models using massive datasets for training, is still being tested in the courts.
Policy and Regulation
The USCO has initiated a project to investigate the legal and policy challenges that AI raises for copyright. This involves evaluating the scope of copyright in works created with AI tools and the use of copyrighted content to train foundation models and LLM-powered AI systems. This endeavour acknowledges the need for clarification and future regulatory adjustments to address the pressing issues at the intersection of AI and copyright law.
Industry Perspectives
Many stakeholders in the AI industry argue that training generative AI systems, including LLMs and other foundational models, on the large and diverse content available online, most of which is copyrighted, is the only realistic and cost-effective method to build them. According to the Silicon Valley venture capital firm Andreessen Horowitz, extending copyright rules to AI models would potentially constitute an existential threat to the current AI industry.
Why does it matter?
The intersection of AI and copyright law is a complex issue with significant implications for innovation, legal liability, ownership rights, commercial interests, policy and regulation, consumer protection, and the future of the AI industry.
The AI sector in 2024 is at a crossroads with existing copyright laws, particularly in the US. The legal system’s reaction to these challenges will be critical in striking the correct balance between preserving creators’ rights and promoting AI innovation and progress. As lawsuits proceed and policymakers engage with these issues, the AI industry may face significant pressure to adapt, depending on the legal interpretations and policy decisions that will emerge from the ongoing processes. Ultimately, these legal fights could determine who the market winners and losers would be.
The sixth substantive session of the UN Open-Ended Working Group (OEWG) on security of and the use of information and communications technologies 2021–2025 was held in December 2023, marking the midway point of the process.
Threats
The risks and challenges associated with emerging technologies, such as AI, quantum computing, and IoT, were highlighted by several countries. Numerous nations expressed concerns about the increasing frequency and impact of ransomware attacks on various entities, including critical infrastructure, local governments, health institutions, and democratic institutions.
The need for capacity building efforts to enhance cybersecurity capabilities globally was emphasised by multiple countries, recognising the importance of preparing for and responding to cyber threats.
The Russian Federation raised concerns about the potential for interstate conflicts arising from the use of information and communication technologies (ICTs). It proposed discussions on a global information security system under UN auspices. El Salvador discussed evolving threats in the ICT sector, particularly during peacetime, indicating that cybersecurity challenges are not limited to times of conflict.
Delegates discussed the impact of malicious cyber activities on international trust and development, particularly in the context of state-sponsored cyber threats and cybercrime.
Several countries, including the United Kingdom, Kenya, Finland, and Ireland, focused on the intersection of AI and cybersecurity, advocating for approaches that consider the security implications of AI systems.
Some countries, including Iran and Syria, expressed concerns about threats to sovereignty in cyberspace, including issues related to internet governance and potential interference in internal affairs.
Many countries emphasised the importance of international cooperation and information sharing to address cybersecurity challenges effectively. Proposals for repositories of information on threats and incidents were discussed. The idea of a global repository of cyber threats, as advanced by Kenya, enjoyed broad support.
Rules, norms and principles
Many delegations shared how they have already begun implementing national and regional norms through policies, laws and strategies. At the same time, some delegations shared the existing gaps and ongoing processes to introduce new laws, in particular, to protect critical infrastructure (CI) and implement CI-related norms.
Clarifying the norms and providing implementation guidance
Delegations also signalled that clarifying the norms and providing implementation guidance is necessary. Singapore, for instance, supported the proposal to develop broader norm implementation guidance, such as a checklist. The Netherlands argued that such guidance should consider not only the direct impact of malicious cyber activities but also the cascading effects that such activities may have, including their impact on citizens. Canada stressed that a checklist would be a complementary tool offering voluntary and non-binding guidance, while some delegations (e.g. China and Syria) called for translating the norms from political commitments into legally binding obligations.
Australia suggested first focusing on developing norms implementation guidance for the three CI norms (F, G, and H). China, among many other delegations, expressed the same need to develop guidelines for the protection of CI. Portugal proposed focusing on clarifying and implementing the due diligence norm, including the role of the private sector in protecting CI; France supported this proposal.
Norms related to ICT supply chain security and vulnerability reporting
In response to the Chair’s query about the norms related to ICT supply chain security and vulnerability reporting, Switzerland presented the Geneva Manual on Responsible Behaviour in Cyberspace. This inaugural edition offers comprehensive guidance for non-state stakeholders, emphasising norms related to supply chain security and responsible vulnerability reporting. At the same time, the UK and France raised the issue of the use of commercially available intrusion capabilities. The UK expressed its concerns about the growing market for software intrusion capabilities. It stressed that all actors, including the private sector, are responsible for ensuring that the development, facilitation and use of commercially available ICT capabilities do not undermine stability in cyberspace. In addition, France highlighted the need to guarantee the integrity of the supply chain by ensuring users’ trust in the safety of digital products and, in this context, cited the European Cyber Resilience Act proposal, which aims to impose cybersecurity requirements on digital products. China also addressed these norms and argued that some states abuse them by developing their own standards for supply chain security, undermining fair competition for businesses. China also said all states should explicitly commit themselves to not proliferating offensive cyber technologies and noted that the term ‘peacetime’ had never been used in the context of the 11 norms in earlier consensus documents.
New norms vs existing norms
Delegations had divergent views on whether new norms should be developed. Some countries supported creating new norms before the end of the OEWG mandate in 2025; China, in particular, called for new norms on data security. Other delegations (e.g. Canada, Colombia, France, Israel, the Netherlands, and Switzerland) opposed the development of new norms and instead called for implementing existing ones.
South Africa emphasised the need to intensify implementation efforts in order to identify gaps in the existing normative frameworks and to determine whether additional norms are needed to close them. Brazil stressed that implementing existing norms does not contradict discussing the possible adoption of legally binding norms and thus rejected the idea that ‘there is any dichotomy opposing both perspectives’. Brazil expressed its openness to considering the adoption of both additional voluntary norms and legally binding ones to promote peaceful cyberspace.
International law
The discussion on international law in the use of ICTs by states was guided by four questions: whether states see convergences in perspectives on how international law applies to the use of ICTs; whether the cyber domain has unique features, compared to other domains, that would require a distinct application of international law; whether there are gaps in applicability; and what the capacity-building needs are. While some delegations had statements prepared by legal departments or had legal counsel input, others, especially developing countries, needed support in formulating their interventions.
Convergences in perspectives on how international law applies in the use of ICTs
The overwhelming majority of delegations agreed that there is convergence on the applicability of international law, in particular the UN Charter, in cyberspace (Thailand, Denmark, Iceland, Norway, Sweden, Finland, Brazil, Estonia, El Salvador, Austria, Canada, the EU, Republic of Korea, Netherlands, Israel, Pakistan, UK, Bangladesh, India, France, Japan, Singapore, South Africa, Australia, Chile, Ukraine, and others). These states see the need to deepen a common understanding of how existing international law applies in cyberspace, alongside its possible implications and legal consequences. Most delegations also stated that cyberspace is not so unique as to require a distinct application of international law. Kenya pointed out the role of regional organisations, the African Union in particular, in clarifying how international law applies to cyberspace and their contributions to this debate, which was supported by many.
India stated that, in their view, the dynamic nature of cyberspace creates ambiguity in the application of international law since a state, as a subject of international law, can exercise its rights and obligations through its organs or other natural and legal persons.
Another group of states (Cuba, Nicaragua, Vietnam, and the Syrian Arab Republic) considers cyberspace unique and argues that it cannot be addressed by applying existing international law; these states call for a legally binding instrument within the UN framework. Russia and Bangladesh see gaps in international law that require new legally binding regulations. According to China and the Syrian Arab Republic, the draft International Convention on International Information Security proposed by the Russian Federation would be a good starting point for such negotiations.
The delegations also discussed general international law principles enshrined in the UN Charter. There is an overarching agreement that the principles of sovereignty and sovereign equality, non-intervention, peaceful settlement of disputes, and prohibition of the use of force apply in cyberspace (Malaysia, Australia, Russian Federation, Italy, the USA, India, Canada, Switzerland, Czech Republic, Estonia, Ireland, others). The states concluded that the principles of due diligence, attribution, invoking the right of self-defence, and assessing whether an internationally wrongful act has been committed require additional work to understand how they apply in cyberspace.
Many delegations (Australia, Canada, the EU, New Zealand, Germany, Switzerland, Estonia, El Salvador, the USA, Singapore, Ireland, and others) stated that the discussions need to clarify how international law addresses violations, what rights and obligations arise in such case, and how international law of state responsibility applies in cyberspace. Mexico, Italy and Bangladesh see value in the contributions of the UN International Law Commission to this debate.
The majority of delegations see convergence in understanding that international humanitarian law applies in cyberspace in cases of armed conflict and that the states must adhere to international legal principles of humanity, necessity, proportionality and distinction (Kiribati, UK, Germany, the USA, Netherlands, El Salvador, Ukraine, Denmark, Czech Republic, Australia, others). Deeper discussions on this matter are necessary. Cuba, in line with its previous statements, disagrees with the concept of applying international humanitarian law in cyberspace.
Addressing capacity building in international law, Uganda stated that it is extremely difficult for developing countries to be equal partners and effectively participate globally due to a lack of expertise and capacity. The majority of countries supported continuous capacity building efforts in international law (Thailand, Mexico, Nordic countries, Estonia, Ireland, Kenya, the EU, Spain, Italy, Republic of Korea, Netherlands, Malaysia, Bangladesh, India, France, Japan, Singapore, Australia, Switzerland), with Canada mentioning two priority areas: national expertise to enable meaningful participation in substantive legal discussions in multilateral processes such as the OEWG, and expertise to develop national or regional positions. Almost all delegations found the recent UNIDIR workshop to be a valuable contribution to understanding international law’s applicability in cyberspace.
Several delegations have underscored the value of sharing national positions (Thailand, Brazil, Austria, the EU, Israel, the UK, India, Nigeria, Nordic countries, and Mexico) in capacity-building and confidence-building measures.
Going forward, most speakers (Estonia, the EU, Austria, Spain, Italy, El Salvador, the Republic of Korea, the UK, Malaysia, Japan, Chile, and others) have supported the proposal to hold a two-day inter-sessional meeting dedicated to international law.
CBMs
Operationalisation of the Global POC Directory
Many states supported the operationalisation of the agreements to establish a global POC Directory. Australia stressed that those states already positioned to nominate their diplomatic and technical POCs should do so promptly. Switzerland, however, reiterated that the POC Directory should not duplicate the work of CERT and CSIRT teams. The Netherlands stressed the need to regularly evaluate the performance of the POC Directory once it is established. Ghana supported this proposal to develop a feedback mechanism to collect input from states on the Directory’s functionality and user experience. At the end of this agenda item, the Chair also addressed the participation of stakeholders and shared that a dedicated intersessional meeting in May will be convened to discuss stakeholders’ role in the POC directory.
Role of regional organisations
Some delegations (e.g. the US, the EU, Singapore, etc.) highlighted the role of regional organisations in operationalising the POC directory and CBMs. However, several delegations expressed concerns – e.g. Cuba stated that it is not in favour of ‘attempts to impose the recognition of specific organisations as regional interlocutors on the subject when they do not include the participation of all member states of the region in question’. The EU noted that not all states are members of regional organisations and added that the UN should develop global recommendations and good practices on cyber CBMs and encourage regional dialogue and exchanges.
Additional CBMs
Delegations discussed potentially adding new CBMs. Iran highlighted the need for universal terminology in ICT security to reduce the risk of misunderstanding between states. India reiterated its proposal for a global cybersecurity cooperation portal to serve as a channel for cooperation on incident response, called for differentiating between cyberterrorism and other cyber incidents in this context, and suggested that the OEWG focus on building mechanisms for states to cooperate in investigating cybercrimes and sharing digital forensic evidence. At the end of this agenda item, the Chair highlighted that the OEWG must continue discussions on potentially adding new CBMs and on identifying whether further measures are needed.
Capacity building
The recent discussions on cybersecurity highlighted a consensus among participating nations regarding the urgency and cross-cutting nature of cyber threats. Delegations emphasised the importance of cyber capacity building (CB) in enabling countries to identify and address these threats while adhering to international law and norms for responsible behaviour in cyberspace. Central to the dialogue was the pursuit of equity among nations in achieving cyber resilience, with a recurring emphasis on the ‘leave no country behind’ principle. The notion of foundational capacities was at the centre of the debates. The development of legal frameworks, dedicated agencies, and incident response mechanisms, especially Computer Emergency Response Teams (CERTs) and CERT cooperation, was highlighted. However, delegations also stressed the importance of national contexts and the lack of one-size-fits-all answers to foundational capacities. Instead, efforts should be tailored to individual countries’ specific needs, legal landscapes and infrastructure.
Other issues highlighted were the shortage of qualified cybersecurity personnel and the need to develop technical skills through sustainable and self-sufficient traineeship programs, such as train-the-trainer initiatives. Notable among these initiatives was the Western Balkans Cyber Capacity Centre (WB3C), a long-term project fostering information exchange, good practices, and training courses, developed by Slovenia and France together with Montenegro.
Concrete actions emerged in response to past calls from delegations. Two critical planned exercises, the mapping exercise and the Global Roundtable on CB, were commended. The mapping exercise scheduled for March 2024 aims to survey global cybersecurity capacity-building initiatives comprehensively, enhancing operational awareness and coordination. The Global Roundtable, scheduled for May 2024, is considered a milestone in involving the UN, showcasing ongoing initiatives, creating partnerships, and facilitating a dynamic exchange of needs and solutions. These initiatives align with the broader themes of global cooperation, encompassing south-south, north-south, and triangular collaboration in science, technology, and innovation, emphasising needs-based approaches by matching initiatives with specific needs.
Additional points from the discussions included a presentation from India on the technical aspects of the Global Cyber Security Cooperation Portal, emphasising synergy with existing portals. Delegations also supported a voluntary checklist of mainstream cyber capacity-building principles proposed by Singapore. Furthermore, the outcomes of the Global Conference on Cyber Capacity Building, hosted by Ghana and jointly organised by the Cyber Peace Institute, the World Bank, and the World Economic Forum, garnered endorsement from many delegations. The ‘Accra call,’ as it is being termed, is a practical action framework to strengthen cyber resilience as a vital enabler for sustainable development. Switzerland announced its plan to host the follow-up conference in 2025 and urged all states to endorse the Accra Call for cyber-resilient development.
Regular institutional dialogue
The 6th substantive session of the current OEWG marks the halfway point of its mandate, and the fate of future dialogue on international ICT security remains open. The situation is further complicated by a new plot twist: in addition to the Program of Action (PoA), proposed by France and Egypt back in 2019 and recently noted in GA resolutions 77/37 and 78/16, Russia tabled a new concept paper introducing a permanent OEWG as an alternative.
Delegations spent more than three hours in total discussing the RID issue. All supporters of the PoA stressed the number of votes that resolution 78/16 received in the GA: 161 states upheld the option to create a permanent, inclusive, and action-oriented mechanism under UN auspices upon the conclusion of the current OEWG and no later than 2026, implying the PoA. Notably, supporters of the resolution stressed that the final vision of the PoA would be defined at the OEWG in a consensus manner, considering the common elements expressed in the 2nd Annual Progress Report. Several states noted that no PoA discussions should be held outside the OEWG, to maintain consistency.
There is no consolidated view of the details of the PoA architecture. Egypt and Switzerland provided some ideas about the number and frequency of meetings and review mechanisms. Slovakia, Germany, Switzerland, Japan, Ireland, Australia, Colombia, the Netherlands and France suggested incorporating already discussed initiatives – such as the POC directory, the Cyber Portal, the threat repository, and the national implementation survey – as well as future ideas into the PoA architecture. The PoA recognises the possibility of developing new norms (beyond the agreed framework); through the future review mechanism, it may identify gaps in existing international law and, if necessary, consider new legally binding norms to fill them. As an additional common element of the RID, some states pointed to inclusivity: the PoA should allow multistakeholder participation during meetings, especially by the private sector, and allow stakeholders to submit positions. However, final decision-making will remain with states only.
The Russian proposal of a permanent OEWG after 2025 was co-sponsored by 11 states. It offers several principles for the group’s future work, stressing the consensus nature of decisions and stricter rules for stakeholder participation. It also provides detailed procedural rules and modalities of work.
The consensus issue was crucial at this substantive session, as many states, even supporters of the PoA, stressed it in their statements. The problem may lie in resolution 78/16, which does not specify a consensus mode of work, stating only that the mechanism should be ‘permanent, inclusive and action-oriented’.
Another divergence between the two formats is their main scope. According to statements by PoA supporters, the PoA should focus on implementing the existing framework of responsible state behaviour in cyberspace and concentrate efforts on capacity building to enable developing countries to cope with it. There may be a place for dialogue on new threats and norms, but this is not a primary task. By contrast, a permanent OEWG would concentrate on drafting legally binding norms and their implementation mechanisms as elements of a new treaty or convention on ICT security, though other aspects, such as CBMs and capacity building, would also remain in its scope.
For Russia, the struggle to push the permanent OEWG format may be about substance as well as about preserving its image as the pioneer of cyber negotiations at the UN and an agenda-setter. If the OEWG as a format ends in 2025, it would end a tradition of Russian diplomacy with more than 20 years of history. Moreover, earlier this year, in its submission to the Secretary-General under resolution 77/37, Russia frankly expressed its negative attitude towards the PoA, saying that it will be ‘used by Western countries, in line with the ‘rules-based order’ concept promoted by the United States, to impose non-binding rules and standards to their advantage, instead of international law’.
The Chair plans to convene intersessional meetings on regular institutional dialogue in 2024 to deliberate this issue carefully.
As ChatGPT turns one, the significance of its impact cannot be overstated. What started as a pioneering step in AI has swiftly evolved into a ubiquitous presence, transforming abstract notions of AI into an everyday reality for many or at least a topic on everyone’s lips.
While ChatGPT and similar large language models (LLMs) have unveiled only glimpses of the possibilities within AI, they are the pillars of a new technological revolution. All predictions suggest these models will become increasingly personalised and context-specific, leveraging proprietary data for refined model training and industry-specific automation.
Since its public launch in November 2022, ChatGPT has undergone substantial evolution. Initially, it operated solely as a text generator, limited to responses derived from its training data gathered until September 2021. It also tended to fabricate information when lacking answers, introducing the term ‘hallucination’ into discourse about AI.
Today, the evolved iteration of ChatGPT, trained on data up to April 2023, boasts expanded capabilities. It now harnesses Microsoft’s Bing search engine and internet resources to access more current information. Moreover, it has become a product platform, enabling the integration of images or documents into queries and facilitating conversation through spoken language.
Tech race for AI dominance
In January 2023, ChatGPT reached 100 million monthly users. The sudden surge in interest in generative AI took major tech companies by surprise. In addition to ChatGPT, several other notable generative AI models, such as Midjourney, Stable Diffusion, and Google’s Bard, have been released. These developments are reshaping the technological terrain. Tech giants are pouring resources into what they perceive as a pivotal future technological infrastructure, shaping the narrative of the AI revolution. However, a significant challenge looming ahead is the potential dominance of only a select few players in this landscape.
Venture capitalists invested almost five times as much into generative AI firms in the first half of 2023 as during the same period last year. Even excluding a $10 billion investment by Microsoft unveiled in January, VC funding is still up nearly 58% compared with the first half of 2022.
The anticipated economic impact is substantial, with PwC forecasting that AI could potentially elevate the global economy by over $15 trillion by 2030. The largest economies – the US and China – are at the forefront of this new ‘AI arms race’.
According to the 2023 AI Index Report, the United States and China have consistently held the spotlight regarding AI investment, with the US taking the lead since 2013, accumulating close to $250 billion across 4,643 companies. The momentum in investment shows no signs of slowing. In 2022, the US witnessed the emergence of 524 new AI startups, drawing in an impressive $47 billion from non-government funding. Meanwhile, there were also substantial investment trends in China, with 160 newly established AI startups securing an average of $71 million each in 2022.
Many of these new startups are leveraging ChatGPT API and building specific use-case scenarios for users.
AI governance – to regulate or not to regulate
In the midst of AI’s incredible advancements, there is a shadow of concern. The worry about AI generating misleading or inappropriate content, often referred to as ‘hallucinating’, remains a significant challenge. The fear of AI also extends to broader societal implications, such as biases, job displacement, data privacy, the spread of disinformation, and AI’s impact on decision-making processes.
The meteoric rise of OpenAI was one of the main reasons for the swift action from policymakers on AI regulation. OpenAI CEO Sam Altman appeared before the US Congress and the European Commission to discuss new AI regulatory frameworks in the United States and the European Union.
The United States
The global landscape of AI regulation is gradually taking shape. On 30 October, President Biden issued an executive order requiring AI developers to provide the federal government with evaluations of the data used to train and test their AI applications, their performance measurements, and their vulnerability to cyberattacks. The Biden-Harris administration is making progress in crafting domestic AI regulation, including the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the voluntary commitments from AI companies to manage the risks posed by the technology. The latter is recognised as an industry self-regulation approach endorsed by the US government and was welcomed by the industry.
In Congress, there are several bipartisan proposals. Just last week, prominent Senators Amy Klobuchar and John Thune and their colleagues introduced the bipartisan ‘AI Research, Innovation, and Accountability Act’ to boost innovation while increasing transparency, accountability, and security for high-risk AI applications.
European Union
The tiered approach (as currently envisioned in the EU AI Act) would mean categorising AI into different risk bands, with more or less regulation depending on the risk level.
In the EU, two and a half years after the draft rules were proposed, negotiations on the final version hit a significant snag, as France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models. The EU’s largest economies appear to be moving away from stringent AI regulation and inclining towards a self-regulatory approach akin to the US model; many speculate that this shift is a consequence of intense lobbying by Big Tech. The three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to retreat from the approach. France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, proposing ‘mandatory self-regulation through codes of conduct’ for foundation models.
China
China was the first country to introduce interim measures on generative AI, which took effect in August 2023.
What is the aim? To solidify China’s role as a key player in shaping global standards for AI regulation. China also unveiled its Global AI Governance Initiative (GAIGI) during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums. This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act.
At the international level
At the international level, initiatives include the UN Secretary-General’s establishment of a High-Level Advisory Body on AI, the Group of Seven (G7) wealthy nations’ agreement on the Hiroshima guiding principles and endorsement of an AI code of conduct for companies, the AI Safety Summit at Bletchley Park, and more.
The UN Security Council on AI
The UN Security Council held its first-ever debate on AI (18 July), delving into the technology’s opportunities and risks for global peace and security. A few experts were also invited to participate in the debate, chaired by Britain’s Foreign Secretary James Cleverly. In his briefing to the 15-member council, UN Secretary-General António Guterres promoted a risk-based approach to regulating AI and backed calls for a new UN entity on AI, akin to models such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change.
G7
The G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI. A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain aspects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
UK AI Safety Summit
The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.
The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While addressing these immediate concerns, the focus shifted to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm.
Signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU. Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. The summit resulted in an agreement to form an international advisory panel on AI risk.
UN’s High-Level Advisory Body on AI
The UN has taken a unique approach by launching a High-Level Advisory Body on AI comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body plans to publish its first recommendations by the end of 2023, with final recommendations expected in 2024. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.
Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks.
What can we expect from language models in the future?
If the industry keeps its focus on research and investment, 2024 could bring some massive breakthroughs. For OpenAI, the focus is on the Q* project, which can reportedly solve certain maths problems and allegedly has a higher reasoning capacity – a potential breakthrough on the path to artificial general intelligence (AGI). If language models expand their powers into the realm of maths and reasoning, they will reach higher levels of usefulness. Many, including Elon Musk, predict that ‘digital superintelligence’ will exist within the next five to ten years.
When it comes to regulation, the spotlight will remain on ensuring the safety of AI usage and removing bias from future datasets, alongside further calls for global collaboration in AI governance and for greater transparency of these models.
As AI advances rapidly, machines are increasingly gaining human-like skills, blurring the distinction between humans and machines. Traditionally, computers were tools that assisted human creativity, with clear distinctions: humans had sole ownership and authorship. However, recent AI developments enable machines to independently perform creative tasks, from complex functions such as software development to artistic endeavours like composing music, generating artwork, and even writing novels.
This has sparked debates about whether creations produced by machines should be protected by copyright and patent laws. Furthermore, the question of ownership and authorship becomes complex: should credit be given to the machine itself, the humans who created the AI, the works the AI draws on, or perhaps none of the above?
This essay initiates a three-part series that delves into the influence of AI on intellectual property rights (IPR). To start off, we will elucidate the relationship between AI-generated content and copyright. In the following essays, we will assess the ramifications of AI on trademarks, patents, as well as the strategies employed to safeguard intellectual property (IP) in the age of AI.
Understanding IP and the impact of AI
In essence, IP encompasses a range of rights aimed at protecting human innovation and creativity. These rights include patents, copyrights, trademarks, and trade secrets. They serve as incentives for people and organisations to invest their time, resources, and intelligence in developing new ideas and inventions. Current intellectual property rules and laws focus on safeguarding the products of human intellectual effort.
Google recently provided financial support for an AI project designed to generate local news articles. Back in 2016, a consortium of museums and researchers based in the Netherlands revealed a portrait named ‘The Next Rembrandt’, created by a computer that had meticulously analysed numerous pieces by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn. In principle, such creations could be seen as ineligible for copyright protection due to the absence of a human creator. As a result, they might be used and reused without limitation by anyone. This could present a major obstacle for companies selling such creations: because the art isn’t protected by copyright laws, anyone worldwide can use it without having to pay for it.
Hence, when it comes to creations that involve little to no human involvement, the situation becomes more complex and blurred. Recent rulings in copyright law have been applied in two distinct ways.
One approach is to deny copyright protection to works generated by AI (computers), potentially allowing them to fall into the public domain. Most countries have adopted this approach, exemplified in 2022 when the US Copyright Office refused to register an AI-generated image, stating that AI lacks the human authorship necessary for a copyright claim. Patent offices worldwide have made comparable decisions, except in South Africa, where the AI machine Device for Autonomous Bootstrapping of Unified Sentience (DABUS) is recognised as the inventor and the machine’s owner is acknowledged as the patent holder.
In Europe, the Court of Justice of the European Union (CJEU) has issued significant rulings, as seen in the influential Infopaq case (C-5/08 Infopaq International A/S v Danske Dagblades Forening). These rulings emphasise that copyright applies exclusively to original works and that originality must represent the author’s own intellectual creation. This typically means that an original work must reflect the author’s personal input, highlighting the need for a human author for copyright eligibility.
The second approach involved attributing authorship to human individuals, often the programmers or developers. This is the approach followed in countries like the UK, India, Ireland, and New Zealand. UK copyright law, specifically section 9(3) of the Copyright, Designs, and Patents Act (CDPA), embodies this approach, stating:
‘In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’
AI-generated content and copyright
This illustrates that the laws in many countries are not equipped to handle copyright for non-human creations. One of the primary difficulties is determining authorship and ownership when it comes to AI-generated content. Many argue that it’s improbable for a copyrighted work to come into existence entirely devoid of human input. Typically, a human is likely to play a role in training an AI, and the system may acquire knowledge from copyrighted works created by humans. Furthermore, a human may guide the AI in determining the kind of work it generates, such as selecting the genre of a song and setting its tempo, etc. Nonetheless, as AI becomes more independent in producing art, music, and literature, traditional notions of authorship become unclear. Additionally, concerns have arisen about AI inadvertently replicating copyrighted material, raising questions about liability and accountability. The proliferation of open-source AI models also raises concerns about the boundaries of intellectual property.
In a recent case, US District Judge Beryl Howell ruled that art generated solely by AI cannot be granted copyright protection. This ruling underscores the need for human authorship to qualify for copyright. The case stemmed from Stephen Thaler’s attempt to secure copyright protection for AI-generated artworks. Thaler, the Chief Engineer at Imagination Engines, has been striving for legal recognition of AI-generated creations since 2018. Furthermore, the US Copyright Office has initiated a formal inquiry, called a notice of inquiry (NOI), to address copyright issues related to AI. The NOI aims to examine various aspects of copyright law and policy concerning AI technology. Microsoft is offering legal protection to users of its Copilot AI services who may face copyright infringement lawsuits. Brad Smith, Microsoft’s Chief Legal Officer, introduced the Copilot Copyright Commitment initiative, in which the company commits to assuming legal liabilities associated with copyright infringement claims arising from the use of its AI Copilot services.
On the other hand, Google has submitted a report to the Australian government, highlighting the legal uncertainty and copyright challenges that hinder the development of AI research in the country. Google suggests that there is a need for clarity regarding potential liability for the misuse or abuse of AI systems, as well as the establishment of a new copyright system to enable fair use of copyright-protected content. Google compares Australia unfavourably to other countries with more innovation-friendly legal environments, such as the USA and Singapore.
Training AI models with protected content
Clarifying the legal framework of AI and copyright also requires further guidelines on the training data of AI systems. To train AI systems like ChatGPT, a significant amount of data comprising text, images, and parameters is indispensable. During the training process, AI platforms identify patterns to establish guidelines, make assessments, and generate predictions, enabling them to provide responses to user queries. However, this training procedure may potentially involve infringements of IPR, as it often involves using data collected from the internet, which may include copyrighted content.
In the AI industry, it is common practice to construct datasets for AI models by indiscriminately extracting content and data from websites using software, a process known as web scraping. Data scraping is typically considered lawful, although it comes with certain restrictions. Taking legal action for violations of terms of service offers limited solutions, and the existing laws have largely proven inadequate in dealing with the issue of data scraping. In AI development, the prevailing belief is that the more training data, the better. OpenAI’s GPT-3 model, for instance, underwent training on an extensive 570 GB dataset. These methods, combined with the sheer size of the dataset, mean that tech companies often do not have a complete understanding of the data used to train their models.
An investigation conducted by the online magazine The Atlantic has uncovered that popular generative AI models, including Meta’s open-source Llama, were partially trained using unauthorised copies of books by well-known authors. The same applies to models such as BloombergGPT and the nonprofit EleutherAI’s GPT-J. The pirated books, totalling around 170,000 titles published in the last two decades, were part of a larger dataset called the Pile, which was freely available online until recently.
In specific situations, reproducing copyrighted materials may still be permissible without the consent of the copyright holder. In Europe, there are limited and specific exemptions that allow this, such as for purposes like quoting and creating parodies. Despite growing concerns about the use of machine learning (ML) in the EU, it is only recently that EU member states have started implementing copyright exceptions for training purposes. The UK’s 2017 independent AI review, ‘Growing the artificial intelligence industry in the UK’, recommended allowing text and data mining by AI through appropriate copyright laws. In the USA, access to copyrighted training data seems to be somewhat more permissive. Although US law doesn’t include specific provisions addressing ML, it benefits from a comprehensive and adaptable fair use doctrine that has proven favourable for technological applications involving copyrighted materials.
The indiscriminate scraping of data and the unclear legal framework surrounding AI training datasets and the use of copyrighted materials without proper authorisation have prompted legal actions by content creators and authors. Comedian Sarah Silverman and authors Christopher Golden and Richard Kadrey have filed lawsuits against OpenAI and Meta, alleging that their works were used without permission to train AI models. The lawsuits contend that OpenAI’s ChatGPT and Meta’s LLaMA were trained on datasets obtained from ‘shadow library’ websites containing copyrighted books authored by them.
Why does it matter?
In conclusion, as AI rapidly advances, it blurs the lines between human and machine creativity, raising complex questions regarding IPR. Legislators are facing a challenging decision – whether to grant IP protection or not. As AI continues to advance, it poses significant legal and ethical questions by challenging traditional ideas of authorship and ownership. While navigating this new digital frontier, it’s evident that finding a balance between encouraging AI innovation and protecting IPRs is crucial.
If the stance is maintained that IP protection only applies to human-created works, it could have adverse implications for AI development. This would place AI-generated creations in the public domain, allowing anyone to use them without paying royalties or receiving financial benefits. Conversely, if lawmakers take a different approach, it could profoundly impact human creators and their creativity.
Another approach could require AI developers to guarantee adherence to data acquisition regulations, which might encompass acquiring licences or providing compensation for IP utilised during the training process.
One thing is certain: effectively dealing with IP concerns in the AI domain necessitates cooperation among diverse parties, including policymakers, developers, content creators, and enterprises.
The 6th session of the Ad Hoc Committee (AHC) to elaborate a UN cybercrime convention is over: From 21 August until 1 September 2023, in New York, delegates from all states finished another round of text-based negotiations. This was a pre-final session before the final negotiation round in February 2024.
Stalled negotiations over a scope and terminology
Well, reaching a final agreement does not seem to be easy. A number of Western advocacy groups and Microsoft publicly expressed their discontent with the current draft (updated on 1 September 2023), which, they stated, could be ‘disastrous for human rights’. At the same time, some countries (e.g. Russia and China) shared concerns that the current draft does not meet the scope that was established by the mandate of the committee. In particular, these delegations and their like-minded colleagues believe that the current approach in the chair’s draft does not adequately address the evolving landscape of information and communication technologies (ICTs). For instance, Russia shared its complaint about the secretariat’s alleged disregard for a proposed article addressing the criminalisation of the use of ICTs for extremist and terrorist purposes. Russia, together with a group of states (e.g. China, Namibia, Malaysia, Saudi Arabia, and some others), also supported the inclusion of digital assets under Article 16 regarding the laundering of proceeds of crimes. The UK, Tanzania, and Australia opposed the inclusion of digital assets because they do not fall within the scope of the convention. Concerning other articles, Canada, the USA, the EU and its member states, and some other countries also wished to keep the scope narrower, and opposed proposals, in particular, for articles on international cooperation (i.e. 37, 38, and 39) that would significantly expand the scope of the treaty.
The use of specific words in each provision, considering the power behind them, is yet another issue that remains uncertain. Even though the chair emphasised that the dedicated terminology group continues working to resolve the issues over terms and propose some ideas, many delegations have split into at least two opposing camps: whether to use ‘cybercrime’ or ‘the use of ICTs for malicious purposes’, to keep the verb ‘combat’ or replace it with more precise verbs such as ‘suppress’, or whether to use ‘child pornography’ or ‘online child sexual abuse’, ‘digital’ or ‘electronic’ information, and so on.
For instance, in the review of Articles 6–10 on criminalisation, which cover essential cybercrime offences such as illegal access, illegal interception, data interference, systems interference, and the misuse of devices, several debates revolved around the terms ‘without right’ vs ‘unlawful’, and ‘dishonest intent’ vs ‘criminal intent’.
Another disagreement arose over the terms: ‘restitution’ or ‘compensation’ in Article 52. This provision requires states to retain the proceeds of crimes, to be disbursed to requesting states to compensate victims. India, however, supported by China, Russia, Syria, Egypt, and Iran proposed that the term ‘compensation’ be replaced with ‘restitution’ to avoid further financial burden for states. Additionally, India suggested that ‘compensation’ shall be at the discretion of national laws and not under the convention. Australia and Canada suggested retaining the word ‘compensation’ because it would ensure that the proceeds of the crime delivered to requesting states are only used for the compensation of victims.
The bottom line is that terminology and scope, two of the most critical elements of the convention, remain unresolved and need attention at the session in February 2024. However, if states have not been able to agree over the past six sessions, the international community needs a true diplomatic miracle to occur in the current geopolitical climate. At the same time, the chair confirmed that she has no intention of extending her role beyond February.
Hurdles to deal with human rights and data protection-related provisions
We wrote before that states are divided when discussing human rights perspectives and safeguards: While one group is pushing for a stronger text to protect human rights and fundamental freedoms within the convention, another group disagrees, arguing that the AHC is not mandated to negotiate another human rights convention, but an international treaty to facilitate law enforcement cooperation in combating cybercrime.
In the context of text-based negotiations, this has meant that some states suggested deleting Article 5 on human rights and merging it with Article 24 to remove the gender perspective-related paragraphs because of the concerns over the definition of the ‘gender perspective’ and challenges to translate the phrase into other languages. Another clash happened during discussions about whether the provisions should allow the real-time collection of traffic data and interception of content data (Articles 29 and 30, respectively). While Singapore, Switzerland, Malaysia, and Vietnam proposed removing such powers from the text, other delegations (e.g. Brazil, South Africa, the USA, Russia, Argentina and others) favoured keeping them. The EU stressed that such measures represent a high level of intrusion and significantly interfere with the human rights and freedoms of individuals. However, the EU expressed its openness to consider keeping such provisions, provided that the conditions and safeguards outlined in Articles 24, 36 and 40(21) remain in the text.
With regard to data protection in Article 36, CARICOM proposed an amendment allowing states to impose appropriate conditions in compliance with their applicable laws to facilitate personal data transfers. The EU and its member states, New Zealand, Albania, the USA, the UK, China, Norway, Colombia, Ecuador, Pakistan, Switzerland, and some other delegations supported this proposal. India did not, while some other delegations (e.g. Russia, Malaysia, Argentina, Türkiye, Iran, Namibia and others) preferred retaining the original text.
Articles on international cooperation or international competition?
Negotiations on the international cooperation chapter have not been smooth either. During the discussions on mutual assistance, Russia, in particular, pointed out a lack of grounds for requests and suggested adding a request for “data identifying the person who is the subject of a crime report” with, where possible “their location and nationality or account as well as items concerned”. Australia, the USA, and Canada did not support this amendment.
Regarding the expedited preservation of stored computer data/digital information in Article 42, Russia also emphasised the need to distinguish between the location of a service provider or any other data custodian, as defined in the text, and the necessity to specifically highlight the locations where data flows and processing activities, such as storage and transmission, occur due to technologies like cloud computing. To address this ‘loss of location’ issue, Russia suggested referring to the second protocol of the Budapest Convention. The reasoning for this inclusion was to incorporate the concept of data as being in the possession or under the control of a service provider or established through data processing activities operating from within the borders of another state party. The EU and its member states, the USA, Australia, Malaysia, South Africa, Nigeria, Canada, and others were among delegations who preferred to retain the original draft text.
A number of delegations (e.g. Pakistan, Iran, China, Mauritania) also proposed an additional article on ‘cooperation between national authorities and service providers’ to oblige the reporting of criminal incidents to relevant law enforcement authorities, providing support to such authorities by sharing expertise, training, and knowledge, ensuring the implementation of protective measures and due diligence protocols, ensuring adequate training for their workforce, promptly preserving electronic evidence, ensuring the confidentiality of requests received from such authorities, and taking measures to render offensive and harmful content inaccessible. The USA, Georgia, Canada, Australia, the EU, and its member states, and some other delegations rejected this proposal.
SDGs in the scope of the convention?
An interesting development was the inclusion of the word ‘sustainability’ under Article 56 on the implementation of the convention. While sustainability was not mentioned in the previous sessions, Australia, China, New Zealand, and Yemen, among other countries, proposed that Article 56 should read: ‘Implementation of the convention through sustainable development and technical assistance’. Costa Rica claimed that such inclusion would link the capacity building under this convention to the achievement of the Sustainable Development Goals (SDGs). Additionally, Paraguay proposed that Article 52(1) should ensure that the implementation of the convention through international cooperation takes into account the ‘negative effects of the offences covered by this Convention on society in general and, in particular, on sustainable development, including the limited access that landlocked countries are facing’. While the USA and Tanzania acknowledged the importance of Paraguay’s proposal, they stated that they could not support this edit.
What’s next?
The committee will continue the negotiations in February 2024 for the seventh session, and if the text is adopted, states will still have to ratify it afterwards. If, however, ‘should a consensus prove not to be possible, the Bureau of the UN Office on Drugs and Crime (UNODC) will confirm that the decisions shall be taken by a two-thirds majority of the present voting representatives’ (from the resolution establishing the AHC). The chair must report their final decisions before the 78th session of the UN General Assembly.
The global rollout of 5G networks has been met with considerable excitement, and rightly so. While the promise of faster data speeds has captured much of the spotlight, the true transformational potential of 5G extends far beyond mere internet speed enhancements. Across continents, from the bustling metropolises of North America to the vibrant landscapes of Africa, a diverse array of strategies and approaches is shaping the future of 5G connectivity. As policymakers grapple with the intricacies of crafting effective 5G spectrum policies, it’s essential to understand how these policies are intrinsically linked to achieving the wider benefits of this groundbreaking technology.
The spectrum: A valuable resource
At the heart of 5G technology is the radio spectrum, a finite and valuable resource allocated by governments to mobile network operators. These spectrum bands determine the speed, coverage, and reliability of wireless networks. In 2023, there is high demand for mid-band and millimetre-wave spectrum, both essential for delivering the anticipated 5G transformation.
Frequency bands of 5G networks [picture from digi.com]
Policy imperatives to ensure low latency
Ultra-low latency is one of 5G’s defining features, enabling real-time communication and interaction over the internet. Policy decisions that prioritise and allocate specific spectrum bands for applications that require low latency, such as telemedicine and autonomous vehicles, can have a profound impact on their effectiveness and safety. Policymakers must prioritise the allocation of spectrum for latency-sensitive applications while also accommodating the growing data demands of traditional mobile services.
The US Federal Communications Commission (FCC) launched its 5G FAST Plan in 2018. This initiative facilitates the deployment of 5G infrastructure by streamlining regulations and accelerating spectrum availability. As part of the programme, the FCC conducted auctions for spectrum bands suitable for 5G, such as the 24 GHz and 28 GHz bands, to support high-frequency, low-latency applications. The EU introduced the 5G Action Plan in 2016 as part of its broader Digital Single Market strategy. The plan emphasises cooperation among EU member states to create the conditions needed for 5G deployment, including favourable spectrum policies. China launched its National 5G Strategy in 2019, outlining a comprehensive roadmap for 5G development. The strategy includes policies to allocate and optimise spectrum resources for 5G networks. The Independent Communications Authority of South Africa (ICASA) is actively exploring spectrum policies to accommodate 5G. ICASA has published draft regulations for the use of high-demand spectrum, including the 3.5 GHz and 2.6 GHz bands, which are crucial for 5G deployment. ICASA’s efforts to regulate spectrum have been praised by the Wi-Fi Alliance for their role in advancing Wi-Fi technology and connectivity in Africa. ICASA aims to amend radio frequency regulations to stimulate digital development, investment, and innovation in the telecom sector for public benefit.
Enabling massive Internet of Things connectivity
The International Telecommunication Union (ITU) has classified 5G mobile network services into three categories: Enhanced Mobile Broadband (eMBB), Ultra-Reliable and Low-Latency Communications (uRLLC), and Massive Machine-Type Communications (mMTC). The mMTC service was created specifically to enable an enormous volume of small data packets to be collected from large numbers of devices simultaneously, as is the case with internet of things (IoT) applications. With mMTC, 5G is the first network designed for the IoT from the ground up.
5G networks are vital for IoT-powered ‘smart cities’
The IoT stands as a cornerstone of 5G’s transformative potential; 5G is expected to unleash a massive 5G IoT ecosystem where networks can serve the communication needs of billions of connected devices, with the appropriate trade-offs between speed, latency, and cost. However, this potential hinges on the availability of sufficient spectrum for the massive device connectivity that the IoT needs. The demands that the IoT places on cellular networks vary by application, often requiring remote device management. And as uninterrupted connectivity and speed are mission critical for remotely operated devices (even very short network dropouts matter), uRLLC and 5G massive MIMO radio access technologies offer key ingredients for effective IoT operations.
Effective 5G spectrum policies must allocate dedicated bands for IoT devices while ensuring interference-free communication. Standards in Releases 14 and 15 of the Third Generation Partnership Project (3GPP) solve most of the commercial bottlenecks, facilitating the vision of 5G and the huge IoT market.
Diverse approaches to spectrum allocation
The USA’s spectrum allocation strategy is centred on auctions as its primary mechanism. The FCC has been at the forefront of this approach, conducting auctions for various frequency bands. This auction-driven strategy allows network operators to bid for licences, enabling them to gain access to specific frequency ranges. Notably, the focus has been on making the mid-band spectrum available, with a significant emphasis on cybersecurity.
South Korea’s approach to spectrum allocation has been marked by its proactive stance. Among the pioneers in launching commercial 5G services, the South Korean government facilitated early spectrum auctions. As a result, it allocated critical frequency bands, such as 3.5 GHz and 28 GHz, for 5G deployment. This forward-looking strategy not only contributed to the rapid adoption of 5G within the nation, but also positioned South Korea as a global leader in the 5G revolution.
The Korea Fair Trade Commission (KFTC), South Korea’s antitrust regulator, has fined three domestic mobile carriers a total of 33.6 billion won ($25.06 million) for exaggerating 5G speeds.
The EU champions spectrum harmonisation to enable seamless cross-border connectivity. The identification of the 26 GHz band for 5G in the Radio Spectrum Policy Group (RSPG) decision further supports the development of a coordinated approach. By aligning policies across member states, the EU aims to eliminate fragmentation and ensure a cohesive 5G experience.
Moreover, many African countries are in the process of identifying and allocating spectrum for 5G deployment. Governments and regulatory bodies have considered various frequency bands, such as the C-band (around 3.5 GHz) and the millimetre-wave bands (above 24 GHz), for 5G services. Some African nations have issued trial licences to telecommunications operators to conduct 5G trials and test deployments. These trials help operators understand the technical challenges and opportunities associated with 5G in the African context. For example, in South Africa, ICASA is developing a framework for 5G spectrum allocation. Its approach encompasses licence conditions, coverage requirements, and the possibility of sharing spectrum resources.
Kenya is in the process of exploring opportunities to release additional spectrum to facilitate 5G deployment. The Communications Authority of Kenya is contemplating reallocating the 700 and 800 MHz bands for mobile broadband use, including 5G services.
A well-structured spectrum management framework serves as the guiding principle for equitable and efficient allocation of this resource. These frameworks include regulatory approaches like exclusive licensing, lightly managed sharing, and licence-exempt usage. Sharing frameworks enable coexistence, from simple co-primary sharing to multi-tiered arrangements. Static sharing uses techniques such as frequency-division multiple access (FDMA) and code-division multiple access (CDMA), while Dynamic Spectrum Sharing (DSS) allows users to access spectrum as needed.
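The contrast between static partitioning and dynamic sharing can be sketched with a toy allocator; the channel counts and per-user demands below are illustrative, not drawn from any real deployment:

```python
# Toy comparison of static spectrum partitioning vs dynamic spectrum
# sharing (DSS). Under static sharing, each user keeps a fixed channel even
# when idle; under DSS, idle channels are pooled and granted on demand.
def static_throughput(demands, channels_per_user=1):
    # Each user is capped at its fixed allocation.
    return sum(min(d, channels_per_user) for d in demands)

def dynamic_throughput(demands, total_channels):
    # Channels are pooled and granted to whoever needs them, until exhausted.
    served = 0
    for d in sorted(demands, reverse=True):
        served += min(d, total_channels - served)
    return served

demands = [3, 0, 0, 1]  # four users with uneven demand (illustrative)
print(static_throughput(demands))       # idle fixed slots go to waste
print(dynamic_throughput(demands, 4))   # pooling serves the same demand fully
```

With the same four channels, the static split serves only two units of demand while the pooled allocator serves all four, which is the efficiency argument behind DSS.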
In conclusion, the intricate world of 5G spectrum policies profoundly shapes the path of 5G’s transformative journey. Beyond speed enhancements, global strategies spotlighted here reveal the interplay of technology and governance.
From South Korea’s spectrum leadership to the EU’s harmonisation and Africa’s context-specific solutions to challenges, each of these approaches underscores the link between policies and 5G’s potential. These efforts are indispensable to foster optimal policies for future development.
Today’s decisions will echo into the future, moulding 5G’s global impact. This intricate interweaving emphasises 5G’s capabilities and policy’s role in driving unprecedented connectivity, innovation, and societal change.
Internet shutdowns are intentional disruptions of the internet or of electronic communications, which can occur nationwide or in specific locations. They can be partial or take the form of total internet blackouts, blocking people from using the internet entirely. According to research conducted by Surfshark, 4.24 billion individuals were affected globally in the first half of 2023, when 82 internet restrictions affected 29 countries.
It has been estimated that Iran imposed the largest number of internet restrictions, while India proved to be the world leader in internet shutdowns in 2022. While EU member states have not experienced total or partial blackouts, a statement made on France Info a few months ago by the EU’s Internal Market Commissioner, Thierry Breton, during the riots in France, raised a few eyebrows. Breton suggested that online platforms could be suspended for failing to promptly remove illegal content, especially in cases of riots and violent protests.
This announcement led more than 60 civil rights NGOs to seek clarification, because they were concerned that the Digital Services Act (DSA) might be misused as a means of censorship. Breton then clarified, in his comments, that temporary suspension should be the last resort where platforms fail to remove illegal content. These extreme circumstances include systemic failures in addressing infringements related to calls for violence or murder. Significantly, Breton underscored that the courts will make the final decision on such actions, ensuring a fair and impartial review process.
Shutdowns being used as a tool for repressing fundamental human rights
Most countries, if not all, justify shutdowns as a means of maintaining national security or preventing the spread of false information, among other reasons. However, internet shutdowns have, in some cases, become a tool for digital authoritarianism, with deleterious effects on human rights, including the right to free speech, access to information, freedom of assembly, and development.
The UN Special Rapporteur on the rights to freedom of peaceful assembly and of association, Clément Nyaletsossi Voule, stated that internet shutdowns violate international human rights law and cannot be justified. On a regional level, the European Court of Human Rights (ECtHR) ruled in Cengiz and Others v. Türkiye that ‘the Internet plays an important role in enhancing the public’s access to news and facilitating the dissemination of information in general.’ The case concerned a Turkish court’s decision to block access to Google Sites as a whole because the owner of one hosted site faced criminal proceedings for insulting the memory of Atatürk. While this was not a total blackout, the ECtHR ruled that even an internet restriction with limited effect constitutes a violation of the freedom of expression.
The African Commission on Human and Peoples’ Rights (ACHPR) also condemned the imposition of internet shutdowns in its 2016 report, and urged African states to ensure effective protection of internet rights.
The case of Gabon
The most recent internet blackout occurred in Gabon on 26 August 2023, the day the presidential and legislative elections took place. Minister Rodrigue Mboumba Bissawou announced on state television an internet blackout and a nightly 7:00 pm – 6:00 am curfew starting Sunday.
Bissawou claimed that the blackout was aimed at preventing the spread of false information and countering the spread of violence. On 30 August 2023, following the military coup, the independent and non-partisan internet monitor NetBlocks reported the gradual restoration of internet connectivity. It should be noted that this is not the first time the country has faced an internet blackout: the same thing occurred in 2019 during an attempted coup.
Internet shutdowns: Can we find solutions?
Responsibility of the ISPs and telecoms
Measured against regional and international instruments, such shutdowns infringe fundamental human rights. Since these restrictions are imposed by governmental authorities, the UN Human Rights Council, in its 2022 annual report, called on private companies, including Internet Service Providers (ISPs) and telecoms, to explore all lawful measures to challenge the implementation of internet shutdowns. Namely, it called on them to ‘carry out adequate human rights due diligence to identify, prevent, mitigate, and assess the risks of ordered Internet shutdowns when they enter and leave markets.’
In 2022, the Business & Human Rights Resource Centre urged telecommunications companies to take action to ensure human rights protection, given the growing uncertainty of internet access worldwide. The report also highlighted the companies’ responsibilities under the UN Guiding Principles on Business and Human Rights to adopt human rights policies to
uphold the rights of users or customers,
enhance transparency on requests by governments for internet shutdowns,
negotiate human rights-compliant licensing agreements,
adopt effective and efficient remedial processes.
The problem, however, lies in the fact that most telecoms are state-owned and controlled by incumbent governments. Even in cases of foreign ISPs or telecoms, there is a high chance that they would sell their services to the governments, because they must comply with national laws even where doing so breaches human rights standards in the jurisdictions where the companies are based. An example of this is Telenor, a company which operated its services in Myanmar while the country adopted a draft cybersecurity bill allowing the military junta to order internet shutdowns. Despite the Norwegian telecom company’s opposition to Myanmar’s draft cybersecurity bill for failing to ensure effective human rights protection, Telenor complied with the military’s requests and sold its operations in Myanmar. This raised many concerns among civil society, as Telenor was accused of not being transparent about the sale. Digital civil rights organisation Access Now criticised Telenor’s lack of transparency, arguing that it made it even more difficult to develop mitigation strategies to avoid serious human rights abuses.
Is there a way out?
Internet shutdowns have intensified over the years, and urgent action is necessary to prevent further human rights violations. It is evident that governments are unable or unwilling to act, while private actors are not yet in a position to guarantee internet access during disruptions. Therefore, until governments take more robust action to ensure internet access and end human rights violations, users should be educated on how to prepare for an expected shutdown. Access Now recommends downloading several Virtual Private Networks (VPNs) in advance if there is a risk of an internet shutdown, since governments often resort to blocking access to VPN providers. At the same time, the privacy policy of each VPN should be checked beforehand, as not all VPNs guarantee effective privacy protection.
About 8 million Ethiopians use the world’s most popular social media platform, Facebook, daily. Its use, of course, is confined to the parameters of their specific speech communities. In Ethiopia, some 86 languages are spoken by a population of 120.3 million, but two (Amharic and Oromo) are spoken by two-thirds of the population. Amharic is the second most widely spoken.
Like most countries across the globe, the use of social media in Ethiopia is ubiquitous. What sets Ethiopia apart, though, as with many countries in the Global South, are the issues that arise with developments designed by the Global North for the Global North context. This perspective becomes apparent when one views social media usage from the angle of linguistics.
Content moderation and at-risk countries (ARCs)
Increased social media usage has recently engendered a proliferation of policy responses, particularly concerning content moderation. The situation is no different in Ethiopia. Increasingly, Ethiopians blame Meta and other tech giants for the rate and range at which conflict spreads across the country. For instance, Meta faces a lawsuit filed by the son of Mareg Amare, an Ethiopian academic who was assassinated in November 2021. The lawsuit claims that Meta failed to delete life-threatening posts on the platform, categorised as hate speech, targeting Amare. Meta had earlier assured the global public that a wide variety of context-sensitive strategies, tactics, and tools were used to moderate content on its platform. The strategies behind this and other such promises were never published until the leak of the so-called Facebook Files brought to the fore the results of key studies conducted by Meta, such as the harmful effects experienced by users of Meta’s platforms, Facebook and Instagram.
Meta employees have also complained of human rights violations, including overexposure to traumatic content (abuse, human trafficking, ethnic violence, organ selling, and pornography) without a safety net of employee mental health benefits. Earlier this year, workers at Sama, a Meta contractor in Kenya, won a ruling from a local court ordering Meta to reinstate their jobs after they were dismissed for complaining about these conditions and attempting to unionise. The court later ruled that the company is also responsible for their mental health, given their overexposure to violent content on the job.
The disparity in the application of content moderation strategies, tactics, and tools used by the tech giant is also a matter of concern. Crosscheck or XCheck, a quality control measure used by Facebook for high-profile accounts, for example, shields millions of VIPs, such as government officials, from the enforcement of established content moderation rules; on the flip side, inadequate safeguards on the platform have coincided with attacks on political dissidents. Hate speech is said to increase by some 300% amidst bloody riots. This is no surprise, given Facebook’s permissiveness in the sharing and recycling of fake news and plagiarised and radical content.
In the case of Ethiopia, the platform has catalysed conflict. In October 2021, Dejene Assefa, a political activist with over 120,000 followers, called for supporters to take up arms against the Tigrayan ethnic group. The post was shared about 900 times and received 2,000 reactions before it was taken down. During this period, it was reported that the federal army had also waged war against the Tigrayans because of an attack on its forces. Calls for an attack against the group proliferated on the platform, many of which were linked to violent occurrences. According to a former Google data scientist, the situation was reminiscent of what occurred in Rwanda in 1994. In another case, the deaths of 150 persons and the arrest of 2,000 others coincided with the protests that ensued following the assassination of activist Hachalu Hundessa, who had campaigned on Facebook for better treatment of the Oromo ethnic group. The incident led to a further increase in hate speech on the platform, including from several diasporic groups. Consequently, Facebook translated its community standards into Amharic and Oromo for the first time.
In light of ongoing conflicts in Ethiopia, Facebook labelled the country a first-tier ‘at-risk country’, alongside others like the USA, India, and Brazil. ARCs are countries at risk of platform discourse inciting offline violence. As a safeguard, war rooms are usually set up to monitor network activities in these countries. For developing countries like Ethiopia, such privileges are not extended by Facebook. In fact, although the Facebook platform can support 110 languages, it can only review content in 70. At the end of 2021, Ethiopia had no misinformation or hate speech classifiers and had the lowest completion rate for user reports on the platform. User reports help Meta identify problematic content. The problem here was that the interfaces used for such reports lacked local language support.
Languages are only added when a situation becomes openly and obviously untenable, as was the case in Ethiopia. It usually takes Facebook at least one year to introduce the most basic automated tools. By 2022, amidst the outcry for better moderation in Ethiopia, Facebook partnered with local moderation companies PesaCheck and AFP Fact Check and began moderating content in the two languages; however, only five persons were deployed to scan content posted by the 7 million Ethiopian users. Facebook principally uses automation for analysing content in Ethiopia.
AI and low-resource languages
AI tools are principally used for automatic content moderation. The company claims Generative AI in the form of Large Language Models (LLMs) is the most scalable and best suited for network-based systems like Facebook. These LLMs are developed via natural language processing (NLP), which allows the models to read and write texts as humans do. According to Meta, models trained in one or more languages, such as XLM-R and Few-Shot Learner, are used to moderate over 90% of content on its platform, including content in languages on which the models have not been trained.
These LLMs train on enormous amounts of data from one or more languages. They identify patterns from higher-resourced languages in a process termed cross-lingual transfer, and apply these patterns to lower-resourced languages, to identify and process harmful content. Languages with a resource gap are languages that do not have high-quality digitised data available to train models. However, one challenge with monolingual and multilingual models is that they have consistently missed the mark on analysing violent content appropriately in English. The situation has been worse for other languages, particularly in the case of low-resource languages like Amharic and other Ethiopian languages.
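A minimal, self-contained sketch (with invented training words and an illustrative non-English phrase) of why patterns learned only from a high-resource language fail to transfer when no features overlap:

```python
from collections import Counter

# Minimal sketch with invented data: a word-count "classifier" trained only
# on English examples has no features that fire on low-resource-language
# text, so it can only fall back to a default. The words below are
# illustrative, not a real training set or a real moderation model.
harmful = Counter('attack them destroy them'.split())
benign = Counter('good morning dear friends'.split())

def score(text):
    words = text.lower().split()
    h = sum(harmful[w] for w in words)  # unseen words count as 0
    b = sum(benign[w] for w in words)
    if h == b:
        return 'unknown'  # no learned feature matched either class
    return 'harmful' if h > b else 'benign'

print(score('attack them now'))    # English input: learned features match
print(score('isaanitti duulaa'))   # non-English input: nothing matches
```

Real systems use multilingual embeddings rather than raw word counts, but the failure mode is analogous: without target-language training data, inputs look featureless to the model.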
Beyond the resource gap, these models suffer from further documented limitations:
They rely on machine-translated texts, which sometimes contain errors and lack nuance.
Network effects are complex for developers, so it is sometimes difficult to identify, diagnose, or fix the problem when models fail.
They cannot produce the same quality of work in all languages. One size does not fit all.
They fail to account for the psycho-social context of local-language speakers, especially in high-risk situations.
They cannot parse the peculiarities of a lingua franca and apply them to specific dialects.
Machine learning (ML) models depend on previously-seen features, which makes them easy to evade, as humans can couch meaning in various forms.
NLP tools require clear, consistent definitions of the type of speech to be identified. This is difficult to ascertain from policy debates around content moderation and social media mining.
ML models reflect the bias in their training data.
The highest-performing models accessible today only achieve between 70%-75% accuracy rates, meaning one in every four posts will likely be treated inaccurately. Accuracy in ML is also subjective, as the measurement varies from developer to developer.
ML tools used to make subjective predictions, like whether someone might become radicalised, can be impossible to validate.
‘Today’s tools for automating social media content analysis have limited ability to parse the nuanced meaning of human communication, or to detect the intent or motivation of the speaker… without proper safeguards these tools can facilitate overbroad censorship and a biased enforcement of laws and of platforms’ terms of service.’
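The 70–75% accuracy figure translates into large absolute error counts at platform scale; a quick back-of-the-envelope calculation (the daily post volume below is an assumed, illustrative number, not a reported statistic):

```python
# Rough arithmetic on classifier accuracy: at 70-75% accuracy, roughly one
# post in every three to four is mishandled. The daily volume is illustrative.
def expected_errors(daily_posts, accuracy):
    """Posts likely mishandled per day at a given accuracy rate."""
    return round(daily_posts * (1 - accuracy))

daily_posts = 1_000_000  # assumed moderation volume for illustration
for accuracy in (0.70, 0.75):
    print(accuracy, expected_errors(daily_posts, accuracy))
```

At an assumed million moderated posts a day, that is 250,000 to 300,000 posts treated inaccurately every day, which is why accuracy rates that sound high still matter.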
In essence, given that existing LLMs have proven ineffective at analysing human language on Facebook, should tech giants like Facebook be allowed to enforce platform policies around their use for content moderation, there is a risk of stifling free speech, as well as of these ill-informed policies leaking into national and international legal frameworks. According to Duarte and Llansó, this may lead to violations of human rights and liberties.
Human languages and hate speech detection
The use and spread of hate speech are taken seriously by UN countries, as evidenced by General Assembly resolution A/RES/59/309. Effective analysis of human language requires that the fundamental tenets responsible for language formation and use be considered. Except for some African languages not yet thoroughly studied, most human languages are categorised into six main families. Indo-European includes European languages like English and Spanish, along with languages spoken across North America, South America, and parts of Asia. The other categories are Sino-Tibetan, Niger-Congo, Afro-Asiatic, Austronesian, and Trans-New Guinea. The Ethiopian languages Oromo, Somali, and Afar fall within the Cushitic and Omotic subcategories of the Afro-Asiatic family, whereas Amharic falls within the Semitic subgroup of that family.
This primary level of linguistic distinction is crucial to understanding the differences in language patterns, be they phonemic, phonetic, morphological, syntactic or semantic. These variations, however, are minimal when compared with the variations brought about by social context, mood, tone, audience, demographics, and environmental factors, to name a few. Analysing human language in an online setting like Facebook becomes particularly complex, given its mainly text-based nature and the moderator’s inability to observe non-linguistic cues.
Variations in language are even more complex in the case of hate speech, given the role played by factors like intense emotions. Davidson et al. (2017) describe hate speech as ‘speech that targets disadvantaged social groups in a manner that is potentially harmful to them, … and in a way that can promote violence or social disorder’. It is intended to be derogatory, to humiliate, or to insult. To add to the complexity, hate speech and extremism are also often difficult to distinguish from other types of speech, such as political activism and news reporting. Hate speech can also be mistaken for offensive words. And offensive words can be used in non-offensive contexts such as music lyrics, taunting, or gaming. Other factors, such as gender, audience, ethnicity, and race, also play a vital role in deciphering the meaning behind language.
On the level of dialectology, parlance such as slang can be used as offensive language or hate speech, depending partly on whether it is directed at someone or not. For instance, ‘life’s a bi*ch’ is considered offensive language by some models, but it can be considered hate speech when directed at a person. Yet, hate speech does not always contain offensive words. Consider the words of Dejene Assefa in the case mentioned above: ‘the war is with those you grew up with, your neighbour… If you can rid your forest of these thorns… victory will be yours’. Slurs too, whether offensive or not, can convey hate. ‘They are foreign filth’ (non-offensive wording used as hate speech) and ‘White people need those weapons to defend themselves from the subhuman trash these spicks unleash on us’ provide examples. Overall, hate speech reflects our subjective biases. For instance, people tend to label racist and homophobic language as hate speech but sexist language as merely offensive. This also has implications for analysing language accurately. Who is the analyst? And in terms of models, whose data was the model trained on?
The complexities mentioned above are further compounded when translating or interpreting between languages. The probability of transliteration (translating words on their phonemic level) increases with machine-enabled translations such as Google Translate. With translations, misunderstanding grows across language families, particularly when one language does not contain the vocabulary, characters, conceptions, or cultural traits associated with the other language, an occurrence referred to by machine-learning engineers as the UNK problem.
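The UNK problem can be illustrated with a toy fixed vocabulary (invented for this example): any word outside it is collapsed into a single placeholder token before the model ever sees the text.

```python
# Toy illustration of the UNK (unknown token) problem: out-of-vocabulary
# words are replaced by '<unk>', erasing whatever meaning they carried.
# The vocabulary below is invented for this example.
VOCAB = {'the', 'war', 'is', 'with', 'your', 'neighbour', 'victory'}

def tokenize(sentence):
    return [w if w in VOCAB else '<unk>' for w in sentence.lower().split()]

print(tokenize('The war is with your neighbour'))   # fully in-vocabulary
print(tokenize('Rid your forest of these thorns'))  # mostly out-of-vocabulary
```

In the second sentence, the figurative threat survives only as a row of placeholders, which is roughly what happens when a model’s vocabulary lacks a language’s words, characters, or concepts.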
Yet, from all indications, Facebook and other tech giants will invariably continue to experiment with using one LLM to moderate all languages on their platforms. For instance, this year, Google announced that its new speech model will encompass the world’s 1000 most spoken languages. Innovators are also trying to develop models to bridge the gap between human language and LLMs. Lesan, a Berlin-based startup, built the first general machine translation service for Tigrinya. It partners with Tigrinya-speaking communities to scan texts and build custom character recognition tools, which can turn the texts into machine-readable forms. The company also partnered with the Distributed AI Research Institute (DAIR) to develop an open-source tool for identifying languages spoken in Ethiopia and detecting harmful speech in them.
Conclusion
In cases like that of Ethiopia, it is best first to understand the broader system and paradigm at play. The situation reflects the push and pull typical of a globalised world, where changes in the developed world wittingly or unwittingly create a pull on the rest of the world, drawing them into spaces where they subsequently realise they do not fit. It is from the consequent discomfort that the push emerges. What is now evident is that the developers of the technology and the powers that sanctioned its global use did not anticipate the peculiarities of this use case. Unfortunately, this is not atypical of an industry that embraces agility as a modus operandi.
It is, therefore, more critical now than ever that international mechanisms and frameworks, including a multistakeholder, cross-disciplinary approach to decision-making, be embedded in public and private sector technological innovations at the local level, particularly in the case of rapidly scalable solutions emerging from the Global North. It is also essential that tech giants be held responsible for equitably distributing, within and across countries, the resources needed for the optimal implementation of safety protocols concerning content moderation. To this end, it would serve Facebook and other tech giants well to partner with startups like Lesan.
It is imperative that a sufficient number of qualified persons, with on-the-job mental health benefits, be engaged to deal with the specific issue of analysing human languages, which still hold innumerable unknowns and unknown unknowns. The use of AI and network-based systems can only be as effective as the humans behind the technologies and processes. Moreover, Facebook users will continue to adapt their use of language. It is unrealistic to expect these models to adjust to or predict all human adaptive strategies. And even if these models eventually can do so, the present and interim impact, as seen in Ethiopia and other countries, is far too costly in human rights and lives.
Finally, linguistics, like all disciplines and languages, is still evolving. It is irresponsible, therefore, to pin any, let alone all, languages down to one model without courting dire consequences.
The UN Open-Ended Working Group (OEWG) on security of and the use of information and communications technologies 2021–2025 held its fifth substantive session in July 2023. On the agenda: adopting the annual progress report (APR).
As the chair astutely noted:
‘Gaps remain on a number of issues and there is no way to finesse a gap in substantive positions. Our discussions will have to continue to build understanding to find solutions to the gaps, differences in positions and these differences are deeply held. And some of the differences have been held not just this week, not just this past 12 months, but for the last 20 years or more. So it’s challenging to try and bridge differences over overnight drafting process for issues that have eluded consensus for the last 25 years’.
During this session, the crux of the issue was that Russia and like-minded countries were disappointed by the inclusion of language on human rights and international humanitarian law, and by what they saw as an overemphasis on gender issues. In their view, such contentious topics should not have been incorporated without consensus. They also noted that the concept of a UN Convention on International Information Security was not mentioned.
Amid palpable tensions, the APR was, in the end, adopted.
Threats
Most countries praised new consensus additions, including the reference to the use of ICTs in current conflicts and the inclusion of ransomware, despite the latter not having been considered relevant by some countries during previous sessions.
Regarding critical infrastructure, South Korea’s proposal to add the energy sector to the list of sectors of particular concern was supported by many states and made it into the final report, whereas the proposal to add financial institutions went unheeded. Finally, while China and Kazakhstan resisted the reference to malicious ICT activities targeting humanitarian organisations, it still made it into the APR.
Old disputes: data security
As an item listed in the OEWG mandate, China, supported by Syria, requested the group to have a more focused discussion on data security. The Netherlands, followed by several other states (e.g. Malaysia, Croatia, the UK, New Zealand, Belgium), expressed concerns regarding this reference as ‘it is not clear how this impacts international security’ and proposed referencing it in para. 14, along with the potential impact of new emerging technologies. While Australia suggested reverting to the language of the 2021 report on the issue, the USA requested the deletion of that reference as ‘it could be interpreted as elevating the issue’ above other issues perceived as more critical. Similar criticism was directed at the references to misinformation, disinformation, and deepfakes.
Outcome: These contentious references do not appear in the APR.
Were there any concrete proposals?
Most of the new proposals were watered down or did not make it into the APR. Among them, Kenya’s proposal for a threat repository received support from many delegations that expressed interest in furthering discussions on the issue. However, Austria, the UK, and Mexico recommended that this proposal be moved to the CBM section, as echoed by the USA. The latter, supported by Chile, expressed concerns related to this initiative duplicating other technical forums among practitioners (such as CERT to CERT channels). Nicaragua, on behalf of Belarus, Burundi, China, Cuba, North Korea, Iran, Russia, Syria and Venezuela, strongly opposed the proposal and described it as a tool for the politicisation of ICT security issues. At the same time, Cuba added that ‘it could be used for false attributions or accusations for political ends’.
Outcome: This proposal did not make it into the APR, in either the Threats or the CBM section.
Many delegations also expressed their concerns regarding the impact of the development of new technologies (notably AI, quantum computing and cloud technology) on cybersecurity. New Zealand, South Africa, the Netherlands, Czech Republic, Ireland, Croatia, Singapore, Vietnam, Belgium and Bangladesh also supported the proposal to hold an intersessional meeting dedicated to these emerging technologies. The USA and Russia opposed this, arguing that several UN initiatives on emerging technologies (such as the GGE on LAWS) already cover these issues. Austria recommended having a focused discussion on how these technologies affect cyber specifically. Finally, Colombia, supported by Fiji, proposed a meeting where states victims of cyberattacks could share their experiences, lessons learned, protocols and best practices.
Outcome: Any reference to these new technologies was deleted from the report. A less focused intersessional meeting ‘on existing and potential threats to security in the use of ICTs’ with relevant experts’ participation was recommended as the next step.
What did stakeholders say?
Stakeholders emphasised the crucial role of non-governmental actors in comprehending and addressing threats that disproportionately affect vulnerable groups. They also highlighted the significance of these actors in ensuring that the efforts of the OEWG encompass a gender perspective, amplify youth voices, and work towards bridging the digital divide in both low and high-income countries.
The proposal presented by Colombia and other delegations garnered widespread support for its aim to facilitate the contributions of non-governmental stakeholders to the proposed repository of threats. Furthermore, stakeholders highlighted the value of information exchange and incident response that extends beyond the state level. These stakeholders can function as trusted intermediaries, offering insights into incidents that affect common civil society targets such as human rights defenders and journalists, thereby contributing to more effective countermeasures.
Specific recommendations put forth by stakeholders included Hitachi America's proposal to add energy and water facilities as critical infrastructure in the Threats section of the APR. Additionally, Safe PC Solutions called for the inclusion of emerging security threats related to 5G broadband technologies. Moreover, Access Now stressed the need for a concrete acknowledgement of the cyber threats and capabilities directed against humanitarian actors and human rights defenders.
Rules, norms and principles
Old disputes: implementation vs development
The existing fault lines in opinions resurfaced again. In the section on norms, most member states supported the implementation of the 11 existing voluntary norms before exploring the need for additional norms. According to these member states, the development of new norms is premature. On the other hand, Russia, China, Cuba, and others consider a focus on implementing existing norms to be outside the mandate of the OEWG and think that the development of additional norms and new legally binding obligations should be the main agenda of the OEWG.
Some states were not satisfied with the level of emphasis put on implementation: for instance, Australia suggested that para 23 f) of the section on rules, norms and principles note that states stressed the need for further focused discussions on implementing the rules, norms and principles of responsible state behaviour in the use of ICTs, adding the word 'implementing' to the original phrasing.
Many states emphasised the importance of the private sector in the integrity, stability, and security of supply chains and cyberspace, which is now reflected in Art. 23e) of the APR. Other discussions related to critical infrastructure, critical information infrastructure, and the safety and integrity of supply chains (Art. 23 c), d) APR).
A group of states also resurrected the proposal to establish a voluntary glossary of national definitions of technical ICT terms, which most states declined for lack of consensus. Suggestions were made to include this glossary as part of the CBMs.
A new debate – glossary of terms
This time, states disagreed over a new topic – a glossary of terms. Some states (e.g. Switzerland, the UK, New Zealand, South Africa, etc.) did not support the proposal and asked to remove this from the progress report. They argued that states could more usefully continue to share national policies and their statement on international law and threat information. Some countries (e.g. Kazakhstan and Iran) disagreed with deleting this proposal.
A new proposal – substantiation of accusations
Russia suggested supplementing the section on norms with the provisions that accusations of wrongful acts with the use of ICTs brought against states must be substantiated, and that computer incident response must not be politically motivated.
Outcomes
The final wording of the APR (Art. 23 f)) includes a focus on implementing norms, to which the opposing states agreed in the spirit of goodwill and compromise. A mention of the possibility of future elaboration of new legally binding obligations within the OEWG found its place in Art. 29 b) i and Art. 32 of the APR, with a footnote referring to a proposal. The reference to the glossary of terms has been removed from the final draft.
What did stakeholders say?
Stakeholders highlighted the importance of developing a norms checklist with a comprehensive and coordinated approach to capacity development and the significance of regional-level implementation by leveraging regional organisations’ expertise.
International law
The statements at the session clearly reflected that over the past year, the member states have advanced in explaining their positions and clarifying their points of disagreement on both norms and international law, thus making drafting the APR language more challenging.
Discussion on international law has built upon the intersessional meeting in May 2023. There are two key opinions present.
Most states reaffirm that international law, including the UN Charter, applies in cyberspace. This group proposed to deepen the discussion on how international law applies (Art. 30 of APR) and focus on sovereignty and sovereign equality, due diligence, respect and protection of human rights. The proposals within this group of states also included a direct reference to Art. 2(3), Art. 2(4) and Art. 33 of the UN Charter (Art. 30 a)-c) APR) and international humanitarian law’s applicability (Art. 29 b) ii APR).
Another group of states insists on discussing a new legally binding instrument to regulate state behaviour in cyberspace (Art. 29 b) i APR). The proposal by Argentina and South Africa to involve the International Law Commission in the discussions on the applicability of international law to cyberspace did not find support.
There were, however, proposals that have found support from all across the board – on the need to hold dedicated inter-sessional meetings on how international law applies to cyberspace (Art. 35 APR) and on capacity building in international law (Art. 36 APR).
Which were old disputes?
Russia and Iran noted that the report needed stronger references to the formulation of a legally binding instrument, with Iran stating that para 32 contains only a weak reference, which it found insufficient. China requested that para 32 be deleted, or that additional wording be added under the section on Norms accordingly. Estonia, on behalf of Australia, Colombia, El Salvador and Uruguay, proposed alternative language for para 32 of Rev 2: states discuss the need to consider whether any gaps exist in how existing international law applies in the use of ICTs and, if appropriate, to further consider the possible development of additional legally binding obligations. The USA, New Zealand, and Switzerland supported this edit.
Para 32: Noting the possibility of future elaboration of additional binding obligations, if appropriate, States discussed the need to consider whether any gaps exist in how existing international law applies in the use of ICTs and further consider the development of additional legally-binding obligations.
Australia suggested changing the word ‘norms’ to ‘obligations’ in para 30 because the word ‘norms’ in the original text is used in the context of this OEWG, slightly differently from how it is often used in international law. Many delegations, such as South Korea, Switzerland, Japan, and Austria, supported this edit. The USA called new references to norms in the international law section ‘muddying of waters.’
Are there any new debates?
At the same time, states disagreed over human rights in the progress report: Germany first proposed adding a reference to human rights, and several countries (e.g. Switzerland, the EU and its member states, New Zealand) supported this proposal. Another group of like-minded states (Russia, Iran, China, Cuba, etc.) shared that they were 'disappointed' by the inclusion of language on human rights in the final text. These countries argued that IHL and what they saw as an overemphasis on gender issues should not have been incorporated without achieving consensus.
Were there any concrete proposals?
States discussed the proposal for conducting an intersessional on international law, and the Netherlands and Mexico proposed to broaden the list of relevant briefers (in para 33) so the OEWG can benefit from the expertise of stakeholders, including from regional and sub-regional organisations, businesses, NGOs, and academia. Some countries (e.g. the UK, Switzerland, Croatia) strongly supported this proposal.
Concerning the same para 33, South Africa proposed amending the language by replacing 'developing a common understanding of the applicability of international law' with 'better inform the OEWG's deliberations', arguing that states should not be forced and that the OEWG should let the conversation about the applicability of international law develop in a bottom-up manner.
Australia stressed that it does not support reference to the UN Secretariat compiling national views, noting this would be a duplication of existing efforts, such as those undertaken by UNIDIR.
Outcomes
Both formulations, 'norms' and 'obligations', have been removed from para 30 of Rev 2.
What did the stakeholders say?
Stakeholders reinstated the centrality of IHL and human rights in discussions on international law as applied to cyberspace and the importance of stakeholders in helping contextualise norms to their local and national contexts by developing and contributing to working papers, guidance and checklists.
ICT for Peace Foundation urged further discussion on how the principles of peaceful settlement of disputes, state responsibility for incidents, and state response options would translate to ICTs in cyberspace.
CBMs
Are there any new debates?
Regarding the PoCs, Russia expressed the view that the global intergovernmental PoC directory should become the 'centrepiece in organising interaction of countries in response to computer attacks/incidents'. In this regard, Russia considered it inappropriate to limit cooperation between PoCs to incidents with possible implications for international peace and security; instead, the interaction between PoCs should be built on an ongoing basis, regardless of the significance of a computer incident. On the other hand, Switzerland noted that the PoC network will complement the work of CERTs and CSIRTs in cases of ICT incidents with possible implications for international peace and security.
An unresolved issue is the nature of the PoCs to be nominated for the directory. India noted that states should remain flexible on having multiple technical or operational PoCs and suggested integrating the PoC Directory Module with the Global Cyber Security Cooperation Portal – a mechanism proposed earlier by the Indian delegation. Ghana recommended that nominations be made at the technical, policy, and diplomatic levels due to differences in capacities.
What did the stakeholders say?
In the context of track 2 processes, stakeholders encouraged delegations to partner with the private sector. These informal dialogues serve as a means to establish or re-establish mutual trust among the parties involved. Furthermore, these dialogues are crucial in aiding states to co-create a comprehensive set of CBMs.
Capacity building
Which were old disputes?
Iran noted that its recommendation for creating a new capacity-building mechanism under the UN had been disregarded and that the focus instead revolves solely around enhancing coordination among existing mechanisms, which Iran cannot support.
Some states (e.g. Indonesia, Vietnam, and the Netherlands) supported considering gender perspective in capacity building. In contrast, a group of like-minded states such as Russia, Cuba, China, Venezuela, Nicaragua, Iran and others have not supported adding the gender-related wording. Iran and Russia wanted gender removed from the report, and Iran specifically wanted para 43 A, which relates to preparing a survey to identify countries’ needs regarding gender equality in the field of ICT security, removed.
Are there any new debates?
Indonesia proposed connecting the mapping of capacity building programmes to the implementation of the framework's recommendations. The USA strongly supported it, while some states (e.g. Australia, Japan, and New Zealand) raised concerns about the resources required to conduct such a mapping. The USA and Japan, in particular, called for making the most of the existing capacity-building efforts undertaken by other international organisations such as the ITU. The Netherlands said that the text was missing the sub-regional aspect and proposed adding it to reflect efforts at the regional level. The EU shared the same view and suggested that the UN could encourage and serve as a platform to enhance the implementation of the UN agreements and stipulate capacity building in this context, including cooperation with the multistakeholder community. Egypt believed the progress report should not refer to specific regional or sub-regional organisations. Australia disagreed and stressed the importance of mentioning concrete organisations, such as the GFCE. Hungary shared the view that while mapping is needed to better coordinate the efforts of the growing number of donors and implementers, the UN could play a complementary role, with other stakeholders also having roles to play.
What did the stakeholders say?
Stakeholders emphasised the significance of regional and cross-regional formats for sharing best practices and identifying capacity building needs to align these with national, regional, and international conferences.
Stakeholders also underlined the importance of mainstreaming Capacity Building Principles into capacity building efforts. The organisation Developing Capacity mentioned the opportunity of doing this at the Global Conference on Capacity Building in Ghana next November.
Finally, a concrete proposal was made by the ICC in the name of 21 other stakeholders to add language that explicitly states that the OEWG should consider how cybersecurity considerations and good practices can be integrated into future digital development projects.
Regular institutional dialogue
Which were old disputes?
To PoA or not to PoA
The division among delegations was stark, pitting Cuba, Iran, Pakistan, Syria and Russia on one side against the EU, the USA, Korea, France, and other Western democracies on the other. The critical point of contention lay between those favouring the Programme of Action (PoA) and those advocating for equal consideration of all country proposals.
Cuba and Iran were proponents of inclusivity, urging the incorporation of all future-mechanism proposals into the report. Russia voiced concerns about the existing draft, arguing that the section on regular institutional dialogue was biased in favour of the PoA. Syria asserted that prioritising the PoA gave the impression of broad consensus, contrary to the working group's mandate to consider various security initiatives. Syria also noted that discussions revealed differing viewpoints on the effectiveness of the PoA and recommended evaluating it before taking any definitive steps.
Conversely, the US strongly criticised these states’ push for an authoritarian revision of the consensus framework, pointing out that their proposal lacked substantial backing and had been repeatedly dismissed over the years. They maintained that proposals should only be included in the report if they garnered significant support.
Portugal and Korea also supported the PoA, citing its considerable support under the UN's umbrella and referencing broad approval from member states through General Assembly resolution 77/37.
The EU emphasised that the PoA could enhance transparency, credibility, and sustainability in decision implementation.
Finally, China introduced a potential compromise, suggesting compiling common elements from various positions and proposals to reduce differences and find convergences. They emphasised the importance of a balanced representation of all parties’ positions in the report.
Legally binding vs deletion of 49 C (bis)
Pakistan, Iran and Russia advocated for the work of the future mechanism to be based on the recommendations of the OEWG and the possibility of crafting a legally binding ICT instrument within that framework.
However, several delegations, including Belgium, Korea, the EU, the USA, and Japan, supported France's proposal to remove paragraph 49 C bis due to concerns about incorporating language on a legally binding instrument. Korea viewed such an instrument as premature and suggested deleting 49 C bis, aligning with the view of the EU, the USA, Japan, and France that, if it were included at all, it should fall under the international law section. Vietnam also deemed paragraph 49 C inappropriate for acknowledging the diverse views and ideas discussed in the working group, echoing the language of the 2021 OEWG report.
State-led vs intergovernmental
Similarly, the bloc of Russia, Syria and China supported the proposals made by Cuba and Iran, among other countries, to change 'state-led' to 'intergovernmental' in paragraph 53. Conversely, Western democracies defended the state-led nomenclature.
Are there any new debates?
A new debate emerged about consensus in the future regular institutional dialogue: France noted that para 56 should not prejudge that decision-making in the future mechanism will be consensus-based. Australia and Austria supported France's suggestion. Iran said that paragraph 56 does not reflect the need for a step-by-step negotiation approach. The USA noted that para 56 is too prescriptive – states do not need to agree by consensus on establishing a future mechanism for regular institutional dialogue, as the General Assembly does not require it. Austria supported this view.
Were there any concrete proposals?
The IBSA forum (India, Brazil and South Africa) proposed a comprehensive institutional dialogue mechanism encompassing crucial aspects of the ICT environment, including trust-building and deeper discussion on aspects lacking common understanding. This mechanism should be intergovernmental, open, inclusive, transparent, flexible, and action-oriented, operating by consensus to prevent stagnation while avoiding a potential veto power.
Vietnam suggested that the future mechanism should build upon the efforts of GGE and OEWG, as indicated in paragraph 51. In the same paragraph, Bangladesh proposed this mechanism should have a multistakeholder approach. Numerous countries advocated for dedicated intersessional meetings to delve into specific discussions and elaborate on the modalities of the PoA.
The US proposed inserting a new paragraph 49 b, highlighting discussions on UNGA resolution 77/37. This resolution supports a new Programme of Action for responsible state behaviour in cyberspace, including a report by the Secretary-General on its scope, structure, and content, to be discussed at the OEWG after its release in July 2023.
France, the EU, and the US proposed that the APR reflect the SG report and the outcomes of the intersessional meeting regarding the PoA's modalities. Additionally, the UN Secretariat was requested to brief the OEWG during its sixth session on the PoA's scope, content, and structure.
The Philippines underlined the importance of addressing the gender digital divide in future dialogues, alongside promoting meaningful participation and leadership of women in future decision-making mechanisms. In a complementary vein, Nigeria proposed incorporating responsible state behaviour and an online child protection mechanism, aligning with efforts to combat online gender exploitation. Australia recommended embedding these proposals as a fundamental principle within paragraph 51. While not in Section G of the final APR draft, the gender perspective is mentioned in the Threats and Capacity Building sections.
Outcomes
To settle divergence on the proposals for the future mechanism, the APR reflects other proposals made for regular institutional dialogue while highlighting the progress made in discussing the PoA (para 52b). The wording on the future permanent mechanism followed the compromise suggested by China: as an initial step to building confidence and convergence, states will propose common elements that could underpin the development of any future mechanism for regular institutional dialogue (para 53). This approach aims to build consensus while maintaining discourse on the suggestions highlighted in subparagraphs 52(a) and 52(b). Other noteworthy aspects integrated into the APR included focused dialogues on the relationship between the PoA and the OEWG and acknowledgement of the relevance of previous OEWG and GGE work (paragraph 55c), both proposals made by Vietnam. Paragraph 52, 'an open, inclusive, transparent, sustainable and flexible process', and paragraph 52a, 'understanding in areas where no common understandings have yet emerged', reflected the suggestions made by the IBSA forum. Additionally, the engagement of various stakeholders, including businesses, NGOs, and academia, was recognised as pertinent, so Bangladesh's proposal was included in paragraph 57. The proposal on dedicated intersessional meetings to continue discussions on the PoA received broad support and was included in paragraph 58 with the amendment 'to further discuss proposals on regular institutional dialogue, including the PoA'. Despite the proposal by the US, there was no mention of UNGA resolution 77/37; however, the Secretariat was still requested to brief the OEWG at its sixth session on the report of the Secretary-General submitted to the General Assembly at its seventy-eighth session.
What did the stakeholders say?
Stakeholders supported the proposals that the future permanent mechanism should be multistakeholder. Access Now proposed discussions on the PoA would benefit from even further openness and planning around how stakeholders can contribute.
The digital changes in the topography of journalism have, for better or worse, resulted from two essential shifts in how information circulates in society. The first tectonic change is that the number and types of actors engaging in news reporting have massively increased, thanks to the accessibility afforded by the Fourth Industrial (or Digital) Revolution. From non-governmental private companies, such as the social media conglomerates Meta and Alphabet, to individuals with information power, like Julian Assange and Elon Musk, to everyday consumers like you and me, the new engagement paradigm is in full swing. The second, following from the first, is that the sheer abundance of information shared through different media and platforms has reinforced the plurality of discourse. Different sources, communities, filter bubbles, and political or personal biases shape what information appears, who writes it, and where and why.
A question of (anti-)social media?
In a rapidly evolving technological landscape, the internet and social media have revolutionised how information is disseminated. However, this transformation does not necessarily translate to improved journalism.
With greater accessibility and connectivity for both citizens and reporters, concerns are mounting over the proliferation of biased information. Recent pivotal examples are the COVID-19 pandemic, the ongoing conflicts in Ukraine and central Africa, and elections worldwide (particularly in the USA and Türkiye, where the interplay between governments, private media companies, and individuals has increased political tensions across society). The surge in online news outlets and social media has further exacerbated the situation, providing a platform for individuals to disseminate biased, misleading, or inaccurate information. Consequently, such content reaches a wider audience, giving rise to a host of emerging issues in the media landscape, such as disengagement and polarisation.
Recent trends in news consumption
Over the past year, trust in the news has experienced a notable decline, dropping by an additional 2 percentage points across various markets worldwide, according to the Reuters Institute's Digital News Report 2023 (Newman et al., 2023). In many countries, this setback has undone the progress made during the peak of the COVID-19 pandemic, when trust in broadcast and print news sources had witnessed an upswing.
Their research has also shown that users on rapidly popularising short-form platforms such as TikTok, Instagram, and Snapchat are notably more inclined to pay attention to updates from celebrities, influencers, and social media personalities rather than relying on professional journalists. In stark contrast, and counterintuitively, Facebook and Twitter maintain their status as platforms where news media, journalists and reporters remain central in shaping conversations.
Three improvements any journalist should institute
In the quest to tackle trust issues between sources and journalists, the crux of the matter lies in the balance of power between two approaches: selecting from a plurality of sources, or presenting one absolute 'truth'. In other words, do we recognise that there is no universal truth and seek to include diverse perspectives, or do we trust that there is a single truth that can be proved and reported on? Acknowledging this fundamental challenge, experts worldwide emphasise that journalists must take charge and adapt to the digital era. By using the online environment to their advantage, journalists can consolidate their position and credibility, thereby enhancing public trust in their work.
To achieve this end, journalists must think of information no longer as a product, but as a service that the news media should be responsible for delivering. With the abundance of interconnected information sources in today’s society, expecting reporters to single-handedly provide all-encompassing news coverage has become impractical. Instead, experts propose a shift in focus, emphasising journalists’ role as arbiters, curators, and information filters (Broersma & Graham, 2016; Beckett, 2018; Dahlgren, 2009). By becoming gatekeepers of trustworthy information, they can guide audiences through the sea of media confusion that characterises modern life.
Second, journalists might want to foster open discourse by providing contrarian opinions while removing themselves from the perceived role of absolute authority. Doing this allows journalists and reporters to communicate knowledge effectively and accessibly while leaving room for healthy debate and critical examination.
Third, the dynamic power of so-called citizen journalists should not be underestimated. Journalists should see the increased involvement of members of the public gathering and spreading news and information as a tool, not as a constraint. At a time when many news organisations face staff cutbacks, citizen journalists have emerged as valuable contributors who play a crucial role in monitoring society using online resources and social media.
Takeaway
Amidst the ongoing mis- and disinformation crises, credible sources and information filtering emerge as a potent antidote, fostering a fresh perspective on information management within the field of news journalism. Good information and good journalism empower people through knowledge and allow individuals to make informed decisions. Emphasising the pivotal role of reliable news reporting, this approach bolsters the belief that trustworthy journalism remains integral to the fabric of society.
References
Beckett, C. (2018). The Paradox of Power for Journalism: Back to the Future of News [new book]. London School of Economics. https://blogs.lse.ac.uk/polis/2018/11/23/the-paradox-of-power-for-journalism-back-to-the-future-of-news-new-book/
Broersma, M. & Graham, T. (2016). ‘Chapter 6: Tipping the Balance of Power: Social Media and the Transformation of Political Journalism’, in Burns, A. (ed.) The Routledge Companion to Social Media and Politics. New York: Routledge, pp. 89–103.
Dahlgren, P. (2009). Media and Political Engagement: Citizens, Communication and Democracy. Cambridge; New York: Cambridge University Press, pp. 172–181.
Newman, N., Fletcher, R., Eddy, K., Robertson, C. T., & Kleis Nielsen, R. (2023). Reuters Institute Digital News Report 2023. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2023-06/Digital_News_Report_2023.pdf