
DW Weekly #135 – 06 November 2023


Dear readers,

Last week’s AI Safety Summit, hosted by the UK government, was on everyone’s radar. Despite coming just days after the US President’s Executive Order on AI and the G7’s guiding principles on AI, the summit managed to initiate a global process for establishing AI safety standards. The week also saw a flurry of other AI policy developments, making it one of the busiest weeks of the year for AI.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Landmark agreement on AI safety-by-design reached by UK, USA, EU, and others

The UK has secured a landmark commitment with leading AI countries and companies to test frontier AI models before releasing them for public use. That’s just one of the initiatives agreed on during last week’s AI Safety Summit, hosted by the UK at Bletchley Park.

Delicate timing. The summit came just after US President Joe Biden announced his executive order on AI, the G7 released its guiding principles, and Chinese President Xi Jinping announced China’s Global AI Governance Initiative. With such a crowded line-up of developments, there was a risk that the UK’s summit would be outshone and its initiatives overshadowed. But judging by how the UK avoided turning the summit into a marketplace (at least publicly), it managed to launch not just a product but a process.

Signing the Bletchley Declaration. The countries signing the communiqué on Day 1 of the summit included Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA – 28 countries in all, plus the EU.

Yes, China too. We’ve got to hand it to Prime Minister Rishi Sunak for bringing everyone around the table, including China: ‘Some said, we shouldn’t even invite China… others that we could never get an agreement with them. Both were wrong. A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.’ And he’s right. For his part, Wu Zhaohui, China’s vice minister of science and technology, told the opening session that Beijing was ready to increase collaboration on AI safety. ‘Countries regardless of their size and scale have equal rights to develop and use AI’, he added, possibly referring to China’s latest efforts to help developing nations build their AI capacities.

Like-minded countries testing AI models. The countries agreeing on the plan to test frontier AI models were a smaller group of like-minded countries – Australia, Canada, the EU, France, Germany, Italy, Japan, Korea, Singapore, the USA, and the UK – joined by ten leading AI companies: Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, and xAI.

No China this time. China (and others) were not part of this smaller group, even though China’s representative reportedly attended Day 2. Why China did not sign the AI testing plan remains a mystery (we do have a theory or two, though).

UK Prime Minister Rishi Sunak addressing the AI Safety Summit (1–2 November 2023)

Outcome 1: Shared consensus on AI risks

Current risks. For starters, countries agreed on the dangers of current AI, as outlined in the Bletchley Declaration, which they signed on Day 1 of the summit. Those include bias, threats to privacy and data protection, and risks arising from the ability to generate deceptive content. 

A more significant focus: Frontier AI. Though current risks need to be mitigated, the focus was predominantly on frontier AI – advanced models that exceed the capabilities of what we’re seeing today – and their ‘potential for serious, even catastrophic, harm’. It’s not difficult to see why governments have come to fear what’s around the corner: there have been plenty of stark warnings about superintelligent systems and even the risk of extinction. But as long as governments don’t let the dangers of tomorrow divert them from addressing immediate concerns, they’re on track.

Outcome 2: Governments to test AI models 

Shared responsibility. Gone are the days when AI companies were solely responsible for ensuring the safety of their models. Or, as Sunak said on Day 2, ‘we shouldn’t rely on them to mark their own homework’. Governments (the like-minded ones, at least) will soon be able to see for themselves whether next-generation AI models are safe enough to be released to the public, or whether they pose threats to national security.

How it will work. A new global hub, called the AI Safety Institute (an evolution of the existing Frontier AI Taskforce), will be established in the UK, and will be tasked with testing the safety of emerging AI technologies before and after their public release. It will work closely with the UK’s Alan Turing Institute and the USA’s AI Safety Institute, among others.

Outcome 3: An IPCC for AI 

Panel of experts. A third major highlight of the summit is that countries agreed to form an international advisory panel on AI risk. Prime Minister Sunak said the panel was ‘inspired by how the Intergovernmental Panel on Climate Change (IPCC) was set up to reach international science consensus.’

How it will work. Each country that signed the Bletchley Declaration will nominate a representative to support a larger group of leading AI academics tasked with producing State of the Science reports. Turing Award winner Yoshua Bengio will lead the first report as chair of the drafting group. The chair’s secretariat will be housed within the AI Safety Institute.

So what’s next? As far as gatherings go, it looks like the UK’s AI Safety Summit is the first of many. The second summit, co-hosted by Korea, will be held online in six months; an in-person meeting in France will follow a year later. As for the first State of the Science report, we can expect it to be published ahead of the Korea summit.


Digital policy roundup (30 October–6 November)

// AI //

Big Tech accused of exaggerating AI risks to eliminate competition

In today’s AI landscape, a few dominant Big Tech companies coexist with a vibrant open-source community that is driving significant advancements in AI. According to Google Brain founder Andrew Ng, the latter poses stiff competition to Big Tech, leading the giants to exaggerate the risks of AI in the hope of triggering strict regulation that would stymie the open-source community.

‘It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,’ Ng said.

Why is it relevant? First, this statement echoes the cautionary note in a leaked internal Google document from last May, which said that open-source AI would outcompete Google and OpenAI. Second, such regulation would paralyse the open-source community’s ability to address governance issues, given Big Tech’s control over data and knowledge.

UN advisory body to tackle gaps in AI governance initiatives  

The UN’s newly formed High-Level Advisory Body on AI, comprising 39 members, will assess governance initiatives worldwide, identify existing gaps, and find out how to bridge them, according to UN Tech Envoy Amandeep Singh Gill. He said the UN provides ‘the avenue’ for governments to discuss AI governance frameworks.

The advisory body will publish its first recommendations by the end of this year, and its final recommendations next year. These will be discussed at the UN’s Summit of the Future, to be held in September 2024.

Why is it relevant? It appears that the advisory body will not release yet another set of AI principles; instead, it will focus on closing the gaps in existing governance initiatives.


Tweet from @netblocks: ‘Confirmed: Live network data show a new collapse in connectivity in the #Gaza Strip with high impact to Paltel, the last remaining major operator serving the territory; the incident will be experienced as the third telecommunications blackout since the start of the conflict.’ An accompanying line graph shows connectivity declining over 2–30 October 2023.

// MIDDLE EAST //

Third internet blackout in Gaza

The Gaza Strip was disconnected from internet, mobile, and telephone networks over the weekend – the third time since the start of the conflict. NetBlocks, a global internet monitoring service, said: ‘We’ve tracked the gradual decline of connectivity, which has corresponded to a few different factors: power cuts, airstrikes, as well as some amount of connectivity decline due to population movement.’




// DATA PROTECTION //

Facebook and Instagram banned from running behavioural advertising in EU

The European Data Protection Board has ordered the Irish data protection authority to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram. Under the EU’s GDPR, companies need a valid reason for collecting and using someone’s personal information; Meta had none.

Ireland is where Meta’s European headquarters are located, which is why the Irish regulator takes the lead. The ban on the company, which owns Facebook and Instagram, covers all EU countries and the other members of the European Economic Area.

Why is it relevant? Under the GDPR, there are six reasons, or legal bases, a company can rely on to process personal data: consent, contract, legal obligation, vital interests, public task, and legitimate interests. Consent (meaning that a person has given their clear and specific agreement for their information to be used) is Meta’s least favourite, as the chance of users refusing consent is high. Yet it may soon be the only basis Meta can actually use – a development that will surely make the Austria-based NGO noyb quite happy.


The week ahead (6–13 November)

7–8 November: The 2023 Conference on International Cyber Security takes place in The Hague, the Netherlands, under the theme ‘War and Peace. Conflict, Behaviour and Diplomacy in Cyberspace’.

8 November: The International AI Summit, organised by ForumEurope and EuroNews in Brussels and online, will ask whether a global approach to AI regulation is possible.

10–11 November: The annual Paris Peace Forum will tackle trust and safety in the digital world, among other topics.

13–16 November: The Web Summit, dubbed Europe’s biggest tech conference, meets in Lisbon.


#ReadingCorner

A new chapter in IPR: The age of AI-generated content

Intellectual property authorities worldwide face a major challenge: how to approach inventions created not by human ingenuity, but by AI. The issue has sparked significant debate within the intellectual property community, as well as numerous lawsuits. Read part one of a three-part series that delves into the impact of AI on intellectual property rights.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor, Digital Policy, DiploFoundation