Dear readers,
Momentum for a new international AI governance body under the UN is growing. Last week’s UN General Assembly debate showed that it’s a matter of when, not if. In other news, OpenAI was sued for copyright infringement again, while competition authorities kept busy with new AI principles, merger reviews, and antitrust practices.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Countries worried about AI risks; UN ready to host global AI body
If you want to know which policies are top priority for any country’s leader, just go through their address during the UN General Assembly’s annual debate.
As anticipated, more than one-third of country leaders who addressed last week’s debate (which concludes tomorrow) spoke about their concerns over AI risks. Failing to tackle the risks will undoubtedly spoil any benefits that AI has to offer.
Secretary-General António Guterres, who opened the debate, is urging new forms of governance for emerging threats. He was among the first to mention AI at the UN’s general debate; now, he says, it’s on everyone’s minds. And he’s right.
A global entity on AI. Guterres believes that a new UN agency might hold the key to effectively governing AI. In July, he told the UN Security Council that he welcomed calls from member states for the creation of a new UN body for AI, inspired by entities such as the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO), or the Intergovernmental Panel on Climate Change (IPCC).
Reiterating these models as inspiration, last week he added: ‘The UN stands ready to host the global and inclusive discussions that are needed, depending on the decisions of member states.’
Which countries support a new AI body? French President Emmanuel Macron was among the first to call for an IPCC for AI; European Commission President Ursula von der Leyen has also lent her support.
After last week’s debate, the list of countries in favour of a new AI body and supporting the UN’s work has continued to grow. Spain offered to host ‘the headquarters of the future International Artificial Intelligence Agency’. South Korea said it would support the creation of an international organisation under the UN by organising a Global AI Forum. Others shared their support more generally: No one country can single-handedly address AI governance. (For more coverage, head to our UNGA 78th session reporting page).
Tech companies are also in favour. In May, ChatGPT-maker OpenAI called for a new AI watchdog akin to the IAEA (though, to be precise, they want it to handle future superintelligence rather than tackle existing AI). In August, researchers from Microsoft and a few non-profit centres called for an International AI Organization (IAIO) that could work with national and regional regulators to develop standards and certify jurisdictions (rather than directly overseeing AI tech companies).
Until then? The UN’s next step is the newly established High-Level Advisory Body on AI, whose membership will be announced in the coming weeks. In an interview, the Secretary-General’s Envoy on Technology, Amandeep Gill (who also believes that a new UN agency is the answer to help steer this disruptive technology), said that the 32-member body will be asked to present the Secretary-General with options for the international governance of AI by the end of the year. It will then be asked to develop more detailed recommendations by mid-next year. By next September’s Summit of the Future, Gill hopes that the body’s conclusions will help member states decide whether, and how, to support a new UN agency.
It’s just the beginning. It’s no secret that things at the UN take a long time to develop. The fact that so many countries are in favour bodes well, but that’s just the start.
Countries will need to determine the new body’s mandate and how far it will be allowed to go. An overly ambitious mandate could halt progress in its tracks – as the discussions on lethal autonomous weapons systems (LAWS) showed us, and as the UN negotiations on a new cybercrime treaty hint at.
A light-touch mandate could leave the body toothless, but would stand a better chance of taking off. The Council of Europe’s Committee on Artificial Intelligence and the G7’s Hiroshima process are good examples of finding common denominators and building on them.
No longer if. For those few countries who haven’t yet made up their minds about whether the world needs an AI governance framework, the UK’s address may be just what they need to hear:
‘At this frontier, we need to accept that we simply do not know the bounds of possibilities… We are as Edison before the light came on… or as Tim Berners-Lee before the first email was sent. They could not – surely – have respectively envisaged the illumination of the New York skyline at night… or the wonders of the modern internet… but they suspected the transformative power of their inventions.’
It is no longer a matter of if, but rather a matter of when and how.
// AI GOVERNANCE //
UK’s competition authority proposes new AI principles
The British antitrust regulator, the Competition and Markets Authority (CMA), has proposed seven principles to guide the development and deployment of AI foundation models (models trained on vast amounts of data that can carry out a wide range of tasks and operations).
The principles were issued as part of the CMA’s review of existing models, and will go through an iterative process based on consultations with stakeholders.
Why is this relevant? First, the principles come ahead of the UK’s AI Safety Summit in November: any measure the UK introduces now will shape the guidance it offers the nations it hosts at the summit. Second, beyond the summit, the CMA thinks AI regulation will still be needed as the technology develops further, as long as the rules remain proportionate. Although this is in line with British lawmakers’ calls for new rules, it won’t appease the parliamentarians who have been urging that new rules be introduced before November.
// COPYRIGHT //
OpenAI sued for copyright… again
Game of Thrones author George R.R. Martin is among a group of writers who filed a class-action lawsuit against OpenAI last week in a federal court in New York. The Authors Guild, a writers’ organisation, and 17 authors accuse the company of using their books without permission to train ChatGPT.
‘The core of these algorithms revolves around systematic theft on a massive scale,’ the lawsuit states. The authors are requesting a prohibition on the usage of copyrighted books, as well as damages.
Why is it relevant? New copyright lawsuits seem to be filed every week, adding to the long list of litigation: just a fortnight ago, OpenAI (and Microsoft) faced two new cases in California. But companies also look ready to fight back. In a similar case over its Llama models, Meta is arguing that the authors have failed to demonstrate that Llama’s software code or output substantially resembles their works. Plus, Meta says it used the copyrighted material under the fair use doctrine.
Case details: Authors Guild et al v OpenAI, District Court, Southern District of New York, 1:23-cv-8292
// CYBERCRIME //
International Criminal Court says it has been hacked
The International Criminal Court (ICC) said its network had been hacked after it detected unusual activity at the end of last week. A few days later, the court was still operating with disruptions to email and document sharing, the ICC confirmed in a statement sent to the press.
The court, renowned as one of the world’s most prominent international institutions, deals with sensitive information regarding war crimes. It refrained from providing any further details on the severity of the hack, the status of its resolution, or the potential perpetrators behind it.
Why is it relevant? The ICC is working on its first-ever policy on cybercrime, which it plans to release in the coming months. The ICC’s lead prosecutor, Karim Khan, also recently confirmed that his office will be collecting and reviewing evidence of cybercrimes as part of its ongoing work. He thinks that some of the states’ conduct we see today could have the attributes of grave crimes that the ICC was established to prosecute. Considering the sensitive information the court handles, criminals might have viewed these recent developments as extra fodder.
// ANTITRUST //
UK set to approve Microsoft’s Activision deal
The UK’s CMA said that Microsoft’s restructured deal to acquire Activision Blizzard appears to address the concerns the authority had raised, paving the way for approval.
As part of a new agreement, Microsoft has granted Ubisoft, the French video games publisher, the rights to stream Activision games from the cloud for 15 years.
Why is it relevant? The UK held off on its go-ahead, even after the EU approved the deal and a US court declined to block it earlier this year. The authority’s chief, Sarah Cardell, couldn’t resist telling Microsoft off: ‘It would have been far better, though, if Microsoft had put forward this restructure during our original investigation. This case illustrates the costs, uncertainty, and delay that parties can incur if a credible and effective remedy option exists but is not put on the table at the right time.’
Intel hit with EU antitrust fine in decades-old case
Intel was fined EUR376 million (USD400 million) last week in an EU antitrust case over the chipmaker’s anti-competitive practice of blocking rivals.
The European Commission had originally imposed a fine of EUR1.06 billion in 2009 after it ruled that Intel had engaged in two specific illegal practices – one known as conditional rebates (providing hidden rebates to computer manufacturers for purchasing CPUs from the company) and the other, naked restrictions (paying manufacturers to halt the release of products containing CPUs from rival companies). In 2022, the EU’s General Court overturned the 2009 Commission decision on Intel’s rebates practices, but upheld the finding on naked restrictions. Last week’s fine reinstates the penalty for this remaining breach.
Why is it relevant? The European Commission doesn’t hesitate to fine companies for anticompetitive practices, even companies operating in sectors critical to the EU, such as semiconductors. Since the reinstated fine covers only one of the two original infractions, though, it is a (much) lower fine.
Ongoing till 26 September: The high-level debate of the UN General Assembly’s 78th session comes to an end tomorrow.
Ongoing till 13 October: The digital policy issues that are being tackled during the 54th session of the Human Rights Council (HRC) include cyberbullying and digital literacy.
AI governance: An update on what’s ahead
8–12 October: AI discussions are set to take centre stage at the upcoming Internet Governance Forum (IGF2023) in Japan. As the host country and current G7 president, Japan is expected to provide updates on the G7 Hiroshima AI Process.
1–2 November: The UK’s AI Safety Summit will focus on tackling AI risks. And after weeks of rumours, the UK government has now confirmed that it extended an invitation to China.
12–14 December: The Global Partnership on Artificial Intelligence (GPAI) will hold its annual summit in India.
Some good news: Fewer people without internet access
New data from the ITU shows progress in internet connectivity. The latest statistics for 2023 indicate that approximately 2.6 billion people remain without internet access, down from 2.7 billion in 2022. The news arrives at a time of heightened attention to the role of digital technologies in advancing the 2030 Agenda.
Was this newsletter forwarded to you, and you’d like to see more?