
DW Weekly #133 – 23 October 2023


Dear all,

The spread of illegal content and fake news linked to the Middle East conflict has been worrying EU and US policymakers, who are putting more pressure on social media companies to step up their efforts. The USA–China trade war is escalating, with tighter restrictions on US chip exports to China and retaliation from Beijing. As other updates confirm, it’s been anything but blue skies as of late. But let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

China unveils Global AI Governance Initiative as part of Belt and Road

In a significant stride towards shaping the trajectory of AI on a global scale, China’s President Xi Jinping announced the Global AI Governance Initiative (GAIGI) during the opening speech of last week’s Third Belt and Road Forum. 

The initiative is expected to bring together all 155 countries that make up the Belt and Road Initiative. This will make it one of the largest global AI governance forums.

Key tenets. Releasing additional details, the Foreign Ministry’s spokesperson said the strategic initiative will focus on five aspects. It will ensure that AI development remains synonymous with human progress, which is quite a noble aim. It will promote mutual benefit, and ‘oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI’ – a clear dig at Western allies. It will establish a testing and assessment system to evaluate and mitigate AI-related risks, which reminds us of the risk-based approach the EU is taking in its upcoming AI Act. It will also support efforts to develop consensus-based frameworks, ‘with full respect for policies and practices among countries,’ and provide vital support to developing nations to build their AI capacities.

Chinese President Xi Jinping stands behind a wide podium covered in flowers.

First-mover advantage. In recent months, China has been moving swiftly to regulate its homegrown AI industry. Its interim measures on generative AI, effective since August, were a world first; it also introduced rules for the ethical application of science and tech (including AI). China is now looking at basic security requirements for generative AI. Very few acknowledge that despite its deeply ideological approach, China was the first to regulate generative AI, giving itself significant mileage in the race to influence global standards. So much so that even US experts are now suggesting that the USA and its allies should engage with China ‘to learn from its experience and explore whether any kind of global consensus on AI regulation is possible’.

China’s approach. Interestingly, the interim measures are a watered-down version – or at least, a less robust version compared to the initial draft – a signal that China was favouring a more industry-friendly approach. A few weeks after the measures came into effect, eight major Chinese tech companies obtained approval from the Cyberspace Administration of China (CAC) to deploy their conversational AI services. Between the USA’s underwhelming progress on AI regulation and the EU’s strict approach, China’s approach could easily gain appeal on the international stage.

Quasi-global. The international audience watching that stage is very large. With over 150 countries forming part of the Belt and Road Initiative, China’s Global AI Governance Initiative will be one of the largest AI governance forums. But the coalition’s size is not the only reason why the initiative will be highly influential. As the Belt and Road Initiative celebrates its 10th anniversary, China is extolling its success in stimulating nearly USD 1 trillion in investment, forming more than 3,000 cooperative projects, creating 420,000 jobs, and lifting 40 million people out of poverty. All of this gives China geopolitical clout and leverage.

Showtime. China’s Global AI Governance Initiative will undoubtedly influence other processes. Of the coalitions that have launched their own vision or process for regulating AI, the most recent is the draft guide to AI ethics, which the Association of Southeast Asian Nations (ASEAN) is working on. The unveiling of China’s initiative comes a few weeks before the UK’s AI Safety Summit (see programme), which China is set to attend (even though it’s still unclear who will represent China – the decision will indicate the level of significance China gives to the UK process). 

Xi’s speech conveys a willingness to engage: ‘We stand ready to increase exchanges and dialogue with other countries and jointly promote the sound, orderly and secure AI development in the world’. But as China’s Global Times writes, ‘China is already a very important force in global AI development… there is no way the USA and its Western allies can set up a system of AI management and regulation while squeezing China out.’


Digital policy roundup (16–23 October)

// DISINFORMATION //

EU formally asks Meta, TikTok for details on anti-disinformation measures

As the Middle East conflict unfolds, ‘the widespread dissemination of illegal content and disinformation linked to these events carries a clear risk of stigmatising certain communities and destabilising our democratic structures’, to quote European Commissioner Thierry Breton.

Last week, we wrote about how Breton personally reached out to X’s Elon Musk, TikTok’s Shou Zi Chew, Alphabet’s Sundar Pichai, and Meta’s Mark Zuckerberg, urging them to promptly remove illegal content from their platforms. Two days later, X received a formal request for information.

Now, the European Commission has sent Meta and TikTok formal requests for information about the measures they have taken to curb the spread of illegal content and disinformation (Alphabet has been spared so far, it seems). Meta has been documenting its measures publicly.

Deadlines. The companies must provide the commission with information on crisis response measures by 25 October and measures to protect the integrity of elections by 8 November (plus in TikTok’s case, how it’s protecting kids online). As we mentioned previously, we don’t think this exchange will stop with just a few polite letters.


DSA not yet fully operational? Honour it just the same

The European Commission is applying pressure on EU member states to implement parts of the DSA months ahead of its full implementation on 17 February 2024. The ongoing wars and instabilities have led to an ‘unprecedented increase in illegal and harmful content being disseminated online’, it said.

The commission is appealing to the countries’ ‘spirit of sincere cooperation’ to form, ahead of schedule, the informal network planned for when the DSA starts applying fully, to take coordinated action, and to assist it with enforcing the DSA.

Why is it relevant? It shows the commission’s (or rather, Breton’s) eagerness to see the DSA applied. It’s the kind of pressure that one can hardly choose to ignore.


US senator urges social media platforms to curb deceptive news

Disinformation is not just a concern for European policymakers. US Senator Michael Bennet has also written to the CEOs of Meta, Google, TikTok, and X to take prompt action against ‘deceptive and misleading content about the Israel-Hamas conflict’, which he says is ‘spreading like wildfire’.

Bennet’s letter was quite critical: ‘In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution… Your platforms have made particular design decisions that hamper your ability to identify and remove illegal and dangerous content.’

Why is it relevant? First, it shows that concerns about the spread of disinformation and illegal content in the context of the Middle East conflict are not limited to European policymakers (although the approach taken by the two sides hasn’t been quite the same). Second, Bennet is drawing attention to the platforms’ algorithms (something that the EU did not mention), which have arguably played a significant role in inadvertently promoting misleading content and creating filter bubbles.

Screenshot of a tweet by Senator Michael Bennet: ‘Because of social media companies’ practices, deceptive and misleading content about the Israel-Hamas conflict is spreading like wildfire. We need an independent agency able to write rules to prevent foreign disinformation and increase transparency.’ The tweet is accompanied by a blurb from The Hill: ‘Senate Democrat questions tech giants on efforts to stop false Israel-Hamas conflict content.’ The tweet links to his intervention in the US Senate: https://trib.al/PnNXOBl.



// CHIPS //

USA tightens restrictions on semiconductor exports to China

The US Department of Commerce’s (DOC) Bureau of Industry and Security (BIS) has tightened export restrictions on advanced semiconductors to China and other countries that are subject to an arms embargo. In practice, this means that China will be unable to obtain high-end chips that are used to train powerful AI models and equipment that can enable the production of tiny chips that are used for AI.

China reacted strongly to the BIS decision, calling these measures ‘unilateral bullying’ and an abuse of export control measures. The measures expand the semiconductor export restrictions implemented last year.

Why is it relevant? This latest tit-for-tat is meant to close loopholes from the 2022 measures. US Secretary of Commerce Gina Raimondo says that the objective remains unchanged: to restrict China from advancements in AI that are vital for its military applications. But the Washington-based Semiconductor Industry Association cautions that export controls ‘could potentially harm the US semiconductor ecosystem instead of advancing national security’.


The heads of US, UK, Australian, Canadian and New Zealand security agencies meeting publicly for the first time, on a stage at Stanford University. Credit: FBI

// CYBERSECURITY //

Five Eyes warn of China’s ‘innovation theft’ campaign

The heads of the Five Eyes security agencies – composed of the USA, UK, Australia, Canada and New Zealand – have warned of a sizeable Chinese espionage campaign to steal commercial secrets. The agency heads met publicly for the first time during a security summit held in Silicon Valley. Over 20,000 people in the UK have been approached online by Chinese spies, the head of the UK’s MI5 told the BBC.


// NET NEUTRALITY //

US FCC vote kicks off process to restore net neutrality rules

The US Federal Communications Commission (FCC) has voted in favour of starting the process to restore net neutrality rules in the USA. The rules were originally adopted under the Obama administration in 2015, but repealed a few years later under the Trump administration.

The steps ahead. Although net neutrality proponents will have breathed a collective sigh of relief at this renewed effort, the process involves multiple steps, including a period for public comments.

Why is it relevant? We won’t state the obvious about net neutrality, or how the FCC will broaden its reach. Rather, we’ll highlight what chairwoman Jessica Rosenworcel said last week: There are already several state-led open internet policies that providers are abiding by right now; it’s time for a national one.


// COMPETITION //

South Africa investigating competition in local news media and adtech market

South Africa’s Competition Commission has launched an investigation into the distribution of media content and the advertising technology (adtech) markets that link buyers and sellers of digital advertising. 

The investigation will also determine whether digital platforms such as Meta and Google are engaging in unfair competition with local news publishers by using their content to generate advertising revenue.

Why is it relevant? First, it shows how global investigations – most notably in Australia and Canada – are drawing attention to Big Tech’s behaviour in other markets, and are influencing the measures taken by other regulators. Second, it reflects rising concerns about the shift from print advertising to digital content and advertising – a trend that is not sparing anyone.


// DIGITAL EURO //

ECB launches prep phase for digital euro

The European Central Bank (ECB) has announced a two-year prep phase for the digital euro, which will work on its regulatory framework and the technical setup. The phase starts on 1 November, and comes after a two-year research phase. 

The ECB made it clear that the launch doesn’t mean that the digital euro is a certainty. But if there’s eventually a green light, the digital euro will function similarly to online wallets or bank accounts, and will be guaranteed by the ECB. It will only be available to EU residents.

Why is it relevant? Digital currencies issued by central banks (known as Central Bank Digital Currencies (CBDCs)) are in a rapidly developing phase worldwide. Last year, a report by the Bank for International Settlements said that two-thirds of the world’s central banks are considering introducing a CBDC in the near future. Even though only a few countries – such as China, Sweden, and a handful of Caribbean countries – have launched digital currencies or pilot projects, the EU is treading slowly but surely, expecting the digital euro to coexist alongside physical cash and to introduce measures that would safeguard its existing commercial banking sector.


The week ahead (23–30 October)

21–26 October: ICANN78, the organisation’s 25th annual general meeting, is ongoing in Hamburg, Germany and online.

24–26 October: The CEOs of some of the world’s leading telecoms operators are meeting in Paris for the 5G World Summit this week. 

25–26 October: The European Commission’s Global Gateway Forum – dubbed the European response to China’s Belt and Road Forum – is taking place in Brussels. 

25–27 October: Nashville, Tennessee, will host the 13th (ISC)2 Security Congress, convening the cybersecurity community in person and online.


#ReadingCorner

Online abuse of kids ‘escalating’

Child sexual exploitation and abuse online is escalating worldwide, in both scale and methods, the WeProtect Global Alliance’s latest threat assessment warns. To put this into numerical perspective, the 32 million reports of abuse material made in the USA in 2022 dwarf the numbers reported in 2019. It gets worse: ‘The true scale of child sexual exploitation and abuse online is likely greater than this as a lot of harm is not reported.’ Read the report, including its recommendations.

File photo of a child using a digital device.

If abuse is on the rise, why isn’t the tech industry doing more?

As the eSafety Commissioner of Australia noted last week, some of the biggest tech companies just aren’t living up to their responsibilities to halt the spread of online child sexual abuse content and livestreaming. 

‘Within online businesses much of the child safety and wider consumer agenda is marked as an overhead cost not a profit centre …’, writes John Carr, a leading UK expert in child internet safety. ‘Companies will obey clearly stated laws. But the unvarnished truth is many are also willing to exploit any and all available wiggle room or ambiguity to minimise or delay the extent of their engagement with anything which does not contribute directly to the bottom line. If it makes them money they need no further encouragement. If it doesn’t, they do.’ Read the blog post.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation