DW Weekly #137 – 20 November 2023


Dear all,

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping was momentous not so much for what was said (see outcomes further down), but for the fact that it happened at all. 

Over the weekend, the news of Sam Altman’s ousting from OpenAI caused quite a stir. He didn’t need to wait long to find a new home: Microsoft.

Lots more happened, so let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden-Xi Summit cools tensions after long tech standoff

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping, in San Francisco on the sidelines of the Asia-Pacific Economic Cooperation’s (APEC) Leaders’ Meeting, marked a significant step towards reducing tensions between the two countries. 

Implications for tech policy. Tensions, especially over technology, have been escalating for years. For instance, in August, the US government issued a new executive order banning several Chinese-owned apps and software products from its market. The order was met with some trepidation by tech companies operating in both countries, as it was unclear how it would affect their businesses. But now, after Biden and Xi’s meeting, there is hope that tensions between the two countries will ease and that this softening will extend to many areas, including tech cooperation and policy. At least, so we hope.

Responsible competition. Prior to their closed-door meeting, the two leaders pragmatically acknowledged that the USA and China have contrasting histories, cultures, and social systems. Yet, President Xi said, ‘As long as they respect each other, coexist in peace, and pursue win-win cooperation, they will be fully capable of rising above differences and find the right way for the two major countries to get along with each other’. Earlier, Biden had said, ‘We have to ensure that competition does not veer into conflict. And we also have to manage it responsibly.’

State meeting with Xi, Biden, and staff. Credit: @POTUS on X.

Cooperation on AI. Among other topics, the two presidents agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through US-China government talks,’ the post-summit White House readout said. It’s unclear what this means exactly, given that both China and the USA have already introduced the first elements of an AI framework. The fact that they brought it up, however, suggests that the USA wants to stop any AI technology theft in its tracks. But what’s in it for China?

US investment. US Ambassador to Japan Rahm Emanuel suggested to Bloomberg that Xi’s address asking US executives to invest more in China was a signal that China needs US capital because of mistakes at home that have hurt its economic growth. If Emanuel is right, that explains why cooperation is a win-win outcome.

Tech exports. There’s a significant ‘but’ to the appearance of a thaw. Cooperation will continue only as long as advanced US technologies are not used by China to undermine US national security. The readout continued: ‘The President emphasised that the United States will continue to take necessary actions’ to prevent this from happening, albeit ‘without unduly limiting trade and investment’.

Unreported? Undoubtedly, there were other undisclosed topics discussed by the two leaders during their private meeting. For instance, what happened to the ‘likely’ deal on banning AI from autonomous weapon systems, including drones, which a Chinese embassy official hinted at before the meeting and on which the USA took a new political stand just two days prior?

Although it’s too early to see any significant positive ripple effects from the meeting, we’ll let the fact that Biden and Xi met face to face sink in a little. After all, as International Monetary Fund Managing Director Kristalina Georgieva told Reuters, the meeting was a badly needed signal that the world can cooperate more.


Digital policy roundup (13–20 November)

// AI //

Sam Altman ousted from OpenAI, joins Microsoft

Sam Altman, the CEO of OpenAI, who was fired on Friday in a surprise move by the company’s board, will now be joining Microsoft. Altman will lead a new AI innovation team, Microsoft CEO Satya Nadella announced today (Monday). Fellow OpenAI co-founder Greg Brockman, who was removed from the board, will also join Microsoft.

Although Twitch co-founder Emmett Shear has been appointed interim CEO, OpenAI’s future is far from stable: a letter signed by over 700 OpenAI employees has demanded the resignation of the board and the reinstatement of Altman (which might not even be possible at this stage).

Why is it relevant? First, Altman was the driving force behind the company – and its technology – which pushed the boundaries of AI and machine learning in a remarkably short time. More than that, Altman was OpenAI’s main fundraiser; the new CEO will have big shoes to fill. Second, Microsoft has been a major player in the world of AI for many years; Altman’s move will further increase Microsoft’s already significant influence in the field. Third, tech companies can be as volatile as stock markets.

Sam Altman shows off an OpenAI badge, which he said was the last time he would ever wear one.

US Senate’s new AI bill to make risk assessments and AI labels compulsory

A group of US senators have introduced a bill to establish an AI framework for accountability and certification based on two categories of AI systems – high-impact and critical-impact ones. The AI Research, Innovation, and Accountability Act of 2023 – or AIRIA – would also require internet platforms to implement a notification mechanism to inform users when the platform is using generative AI.

Joint effort. Under the bill, introduced by members of the Senate Commerce Committee, the National Institute of Standards and Technology (NIST) would be tasked with developing risk-based guidelines for high-impact AI systems. Companies using critical-impact AI would be required to conduct detailed risk assessments and comply with a certification framework established by independent organisations and the Commerce Department.

Why is it relevant? The bipartisan AIRIA is the latest US effort to establish AI rules, closely following President Biden’s Executive Order on Safe, Secure, and Trustworthy AI. It’s also the most comprehensive AI legislation introduced in the US Congress to date.


// IPR //

Music publishers seek court order to stop Anthropic’s AI models from training on copyrighted lyrics

A group of music publishers have asked a US federal court judge to block AI company Anthropic from reproducing or distributing their copyrighted song lyrics. The publishers also want the AI company to implement effective measures that would prevent its AI models from using the copyrighted lyrics to train future AI models.

The publishers’ request is part of a lawsuit they filed on 18 October. The case continues on 29 November.

Why is it relevant? First, although the lawsuit is not new, the music publishers’ request for a preliminary injunction shows how impatient copyright holders are with AI companies allegedly using copyrighted materials. Second, the case raises once more the issue of fair use: In a letter to the US Copyright Office last month, Anthropic argued that its models use copyrighted data only for statistical purposes and not for copying creativity.

Case details: Concord Music Group, Inc. v Anthropic PBC, District Court, M.D. Tennessee, 3:23-cv-01092.



// CONNECTIVITY //

Amazon’s Project Kuiper completes successful Protoflight mission

The team behind Project Kuiper, Amazon’s satellite network, has successfully tested the prototype satellites launched on 6 October. Watch this video to see the Project Kuiper team testing a two-way video call from an Amazon site in Texas. The next step is to start mass-producing the satellites for deployment in 2024.



// DMA //

Meta and others challenge DMA gatekeeper status

A number of tech companies are challenging the European Commission’s decision to designate them as digital gatekeepers, which brings them within the scope of the new Digital Markets Act. Among the companies:

  • Meta (Case T-1078/23): The company disagrees with the Commission’s decision to designate its Messenger and Marketplace services under the new law, but does not challenge the inclusion of Facebook, WhatsApp, or Instagram.
  • Apple (Cases T-1079/23 & T-1080/23): Details aren’t public but media reports said the company was challenging the inclusion of its App Store on the list of gatekeepers.
  • TikTok (Case T-1077/23): The company said its designation risked entrenching the power of dominant tech companies.

Microsoft and Google decided not to challenge their gatekeeper status.

Why is it relevant? The introduction of the Digital Markets Act has far-reaching implications for the operations of tech giants. These legal challenges are a first attempt to block its effective implementation. The outcomes of these cases could establish a precedent for the future regulation of digital markets in the EU.


The week ahead (20–27 November)

20 November–15 December: The ITU’s World Radiocommunication Conference, which starts today (Monday) in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.

21–23 November: The 8th European Cyber Week (ECW) will be held in Rennes, France, and will bring together cybersecurity and cyber defence experts from the public and private sectors.

27–29 November: The 12th UN Forum on Business and Human Rights will be held in a hybrid format next week to discuss effective change in implementing obligations, responsibilities, and remedies.


#ReadingCorner

Copyright lawsuits: Who’s really protected?

Microsoft, OpenAI, and Adobe are all promising to defend their customers against intellectual property lawsuits, but that guarantee doesn’t apply to everyone. Plus, those indemnities are narrower than the announcements suggest. Read the article.

Guarding artistic creations by polluting data

Data poisoning is a technique used to protect copyrighted artwork from being used by generative AI models. It involves imperceptibly changing the pixels of digital artwork in a way that ‘poisons’ any AI model that ingests it for training, rendering the work functionally useless as training data. While it has been used primarily by content creators against web scrapers, it has many other uses. However, data poisoning is not straightforward: it requires a targeted approach to pollute the datasets. Read the article.
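For readers curious about the mechanics, here is a minimal, hypothetical sketch (in Python, assuming NumPy and Pillow are available) of the basic idea of nudging pixel values within a barely perceptible range. Real poisoning tools compute carefully targeted perturbations against a model’s training process rather than random noise, so treat this only as an illustration.

# Illustrative sketch only: apply a small, bounded random perturbation to every
# pixel so the change is barely visible to humans. Actual poisoning tools
# optimise the perturbation against a model's feature space instead.
import numpy as np
from PIL import Image

def perturb_image(src_path: str, dst_path: str, epsilon: int = 4) -> None:
    """Shift each RGB value by at most +/- epsilon and save the result."""
    img = np.asarray(Image.open(src_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    perturbed = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(perturbed).save(dst_path)

# Hypothetical usage (file names are placeholders):
# perturb_image('artwork.png', 'artwork_protected.png')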


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor, Digital Policy, DiploFoundation