
DW Weekly #134 – 30 October 2023


Dear readers,

The stage is set for some major AI-related developments this week. Biden’s executive order on AI, and the G7’s guiding principles and code of conduct, are out. On Wednesday and Thursday, the UK will host the much-anticipated AI Safety Summit, where political leaders and CEOs will focus squarely on AI risks. In other news, the landscape for children’s online safety is changing, while antitrust lawsuits and investigations show no signs of easing up.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden issues AI executive order; G7 adopts AI principles and code of conduct

You can tell how much AI is on governments’ minds by how many developments take place in a week – or in this case, one day.

Today’s double bill – Biden’s new executive order on AI, and the G7’s guiding principles on AI and code of conduct for developers – was highly anticipated. The White House first announced plans for the executive order in July; more recently, Biden mentioned it again during a meeting with his tech advisors. As for the G7, Japan’s Prime Minister Fumio Kishida has been providing regular updates on the Hiroshima AI Process for months.

Executive order targets federal agencies’ deployment of AI

Biden’s executive order represents the US government’s most substantial effort thus far to regulate AI, providing actionable directives where it can, and calling for bipartisan legislation where needed (such as on data privacy). Three things stand out:

AI safety and security. The order places heavy emphasis on safety and security by requiring, for instance, that developers of the most powerful AI systems share their safety test results and other critical information with the US government. It also requires that AI systems used in critical infrastructure sectors be subjected to rigorous safety standards.

Sectoral approach. Apart from certain aspects that apply to all federal agencies, the order employs a somewhat sectoral approach to federal agencies’ use of AI (in contrast with other emerging laws such as the EU’s AI Act). For instance, the order directs the US Department of Health and Human Services to advance the responsible use of AI in healthcare, the Department of Commerce to develop guidelines for content authentication and watermarking to clearly label AI-generated content, and the Department of Justice to address algorithmic discrimination. 

Skills and research. The order directs authorities to make it easier for highly skilled workers to study and work in the USA, in an attempt to boost the country’s technological edge. It also promotes AI research through funding, access to AI resources and data, and new research structures.

G7’s principles place risk-based responsibility on developers

The G7 has adopted two texts: The first is a list of 11 guiding principles for advanced AI. The second – a code of conduct for organisations developing advanced AI – repeats the principles but expands on some of them with details on how to implement them. Our three main highlights:

Risk-based. One notable similarity with the EU’s AI Act is the risk-based element, which places responsibility on developers of AI to adequately assess and manage the risks associated with their systems. The EU promptly welcomed the texts, saying they will ‘complement, at an international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act’.

A step further. The texts build on the existing OECD AI Principles, but in some instances they go a few steps further. For instance, they encourage developers to develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques that enable users to identify AI-generated content (a rough sketch of the idea follows at the end of this highlight).

(Much) softer approach. Viewpoints on AI regulation differ among the G7 countries, ranging from strict enforcement to more innovation-friendly guidelines, so the documents allow jurisdictions to adopt the code in ways that align with their individual approaches. That flexibility is deliberate; the real problem is that some provisions are simply too vague. Take the provision on privacy and copyright, for instance: ‘Organisations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content.’ That’s probably not specific enough to provoke change.

Amid mounting concerns about the risks associated with AI, today’s double bill raises the question: Will these developments succeed in changing the security landscape for AI? Biden’s executive order is the stronger of the two: although it lacks enforcement teeth, it carries the constitutional weight to direct federal agencies. The G7 texts, by contrast, are voluntary, and perspectives on AI regulation vary so greatly across jurisdictions that their influence will be limited. And yet, today’s developments are just the start of what this week will bring.
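
To make the provenance idea concrete, here is a minimal sketch – our illustration, not anything prescribed by the G7 texts – of how a provider might tag AI-generated content so that others can authenticate it later. The signing key, field names, and tag format are all assumptions made up for this example; real-world schemes (statistical watermarks, C2PA-style metadata) are considerably more elaborate.

    import hashlib
    import hmac
    import json

    # Hypothetical signing key held by the AI provider; in practice this would
    # be managed by proper key infrastructure, not hard-coded.
    SIGNING_KEY = b'provider-held-secret'

    def tag_content(text: str, model: str) -> dict:
        """Wrap generated text in a provenance record and sign it."""
        record = {'content': text, 'generator': model, 'ai_generated': True}
        payload = json.dumps(record, sort_keys=True).encode()
        record['signature'] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_tag(record: dict) -> bool:
        """Recompute the signature over the payload; a mismatch means tampering."""
        claimed = record.get('signature', '')
        payload = {k: v for k, v in record.items() if k != 'signature'}
        expected = hmac.new(SIGNING_KEY, json.dumps(payload, sort_keys=True).encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(claimed, expected)

    # A verifier with access to the key can confirm origin and the AI-generated flag.
    tagged = tag_content('Sample model output.', model='example-model-1')
    assert verify_tag(tagged)

Anything along these lines only works if verifiers trust whoever holds the key – one reason the G7 texts hedge with ‘where technically feasible’ and leave the choice of mechanism to developers.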


Digital policy roundup (23–30 October)

// MIDDLE EAST //

Musk’s Starlink to provide internet access to Gaza for humanitarian purposes

Elon Musk confirmed on Saturday that SpaceX’s Starlink will provide internet connectivity to ‘internationally recognised aid organisations’ in Gaza. This prompted Israel’s communications minister, Shlomo Karhi, to voice strong opposition over Starlink’s potential exploitation by Hamas.

Responding to Karhi’s tweet, Musk replied: ‘We are not so naive. Per my post, no Starlink terminal has attempted to connect from Gaza. If one does, we will take extraordinary measures to confirm that it is used *only* for purely humanitarian reasons. Moreover, we will do a security check with both the US and Israeli governments before turning on even a single terminal.’

A telephone and internet blackout isolated people in the Gaza Strip on Saturday, adding to Israel’s weeks-long suspension of electricity and fuel supplies to Gaza.

Why is it relevant? First, it shows how internet connectivity is increasingly being weaponised during conflicts. Second, the world half-expected Starlink to intervene, given the role it played during the Ukraine conflict and in countries affected by natural disasters. But its (public) promise to seek go-aheads from both governments could expose the company to new dimensions of responsibility and risk, and could prove counterproductive for the aid organisations that so desperately need access to coordinate their relief efforts.

Screenshot of exchange on X

// KIDS ONLINE //

Meta sued by 33 US states over children’s mental health

Meta, the parent company of Instagram and Facebook, is facing a new legal battle from 33 US states, which allege that the company engaged in deceptive practices and contributed to a mental health crisis among young users of its social media platforms.

The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13. 

Why is it relevant? The concerns raised in this lawsuit have been simmering for quite some time. Two years ago, former Meta employee Frances Haugen catapulted them into the public consciousness by leaking thousands of internal documents to the press and testifying to the US Senate about the company’s practices. The issue even showed up on US President Joe Biden’s radar earlier this year, when he called for tighter regulation ‘to stop Big Tech from collecting personal data on kids and teenagers online’.

Case details: People of the State of California v. Meta Platforms, Inc. et al., District Court, Northern District of California, 4:23-cv-05448


UK implements Online Safety Act, imposing child safety obligations on companies

The UK’s Online Safety Act, which imposes new responsibilities on social media companies, became law last week after receiving royal assent.

Among other obligations, social media platforms will be required to swiftly remove illegal content, ensure that harmful content (such as adult pornography) is inaccessible to children, enforce age limits and verification measures, provide transparent information about risks to children, and offer easily accessible reporting options for users facing online difficulties. As is to be expected, there are harsh fines – up to GBP 18 million (USD 21.8 million) or 10% of global annual revenues – in store for non-compliance.

Why is it relevant? For many years, the UK relied on companies’ self-regulation to keep children safe from harmful content. The industry’s initially well-intentioned efforts gradually gave way to choices that prioritised financial interests – the self-regulation experiment is now over, as one child safety expert put it.




A robotic arm with an articulated hand hovers over a keyboard as though ready to type.

// CYBERWARFARE //

US official: North Korea and other states using AI in cyberwarfare

US Deputy National Security Advisor Anne Neuberger has confirmed that North Korea is using AI to advance its cyber capabilities. In a recent press briefing (held on the sidelines of Singapore International Cyber Week), Neuberger explained: ‘We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.’ Although experts have often spoken about the risks of AI in cyberwarfare, this is the first open acknowledgement by an official of its use in offensive cyberattacks. There will be lots to talk about at this week’s AI Safety Summit.


// ANTITRUST //

Google paid billions of dollars to be default search engine

Alphabet’s Google paid USD 26.3 billion (EUR 24.8 billion) to other companies in 2021 to ensure its search engine was the default on web browsers and mobile phones. This was revealed by a company executive testifying during the US Department of Justice’s (DOJ) antitrust trial and in a court record, which the presiding judge refused to redact.

The case, filed in 2020, concerns Google’s search business: the DOJ and state attorneys-general consider Google’s agreements ‘anticompetitive and exclusionary’ practices that sustain its monopoly over the search and search advertising markets.

Why is it relevant? First, the original complaint had already indicated that ‘Google pays billions of dollars each year to distributors… to secure default status for its general search engine’; the exact figures have now been made public. Second, the revelation will make it even more difficult for Google to play down the implications of its exclusionary agreements with other companies.

Case details: USA v. Google LLC, District Court, District of Columbia, 1:20-cv-03010


Japan’s competition authority investigating Google’s practices

The Japan Fair Trade Commission (JFTC) is seeking information on Google’s suspected anti-competitive behaviour in the Japanese market, as part of an investigation still in its early stages.

The commission will determine whether Google excluded or restricted the activities of its competitors by entering into exclusionary agreements with other companies.

Why is it relevant? If this all sounds familiar, that’s because the Japanese case is very similar to the US DOJ’s ongoing case against Google.


The week ahead (30 October–6 November)

1–2 November: The UK will host its much-anticipated AI Safety Summit at historic Bletchley Park in Milton Keynes. British Prime Minister Rishi Sunak will welcome CEOs of leading companies and political leaders, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and UN Secretary-General António Guterres. In addition to discussing AI capabilities, risks, and cross-cutting challenges, the UK government is expected to announce an AI Safety Institute, which ‘will advance the world’s knowledge of AI safety and it will carefully examine, evaluate and test new types of AI’, the Prime Minister said. Here’s the discussion paper and the two-day programme.

1–2 November: The Global Cybersecurity Forum gathers in Riyadh, Saudi Arabia, for its annual event, which will this year be dedicated to ‘charting shared priorities in cyberspace’.

3–4 November: The 4th AI Policy Summit takes place in Zurich, Switzerland (at the ETH Zurich campus) and online. Diplo (publisher of this newsletter) is a strategic partner.

4–10 November: The Internet Engineering Task Force (IETF) is gathering in Prague, Czechia, and online for its 118th meeting.

6 November: Deadline for very large online platforms and search engines to publish their first transparency reports under the EU’s Digital Services Act. A handful of platforms have already published theirs: Amazon, LinkedIn, Pinterest, Snapchat, Zalando, Bing, and yes, TikTok.


#ReadingCorner
Image of human head made up of wired connections

Exploring the state of AI in 2023

AI safety – a topic that appears for the first time in the annual State of AI report – has gained widespread attention and spurred governments and regulators worldwide into action, the 2023 edition explains. Yet beneath this flurry of activity lie significant divisions within the AI community and a lack of substantial progress towards achieving global governance, with governments pursuing conflicting approaches. Read the report.


How to manage AI risks

A group of AI experts has summed up the risks of upcoming, advanced AI systems in a seven-page open letter that urges prompt action, including regulations and safety measures by AI companies. ‘Large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems are looming’, they warn. 


AI and social media: Driving us down the rabbit hole

Harvard professor Lawrence Lessig holds a critical stance on the impact of AI and social media, and an even more critical perspective on the human capacity for critical thinking. ‘People have a naïve view: They open up their X feed or their Facebook feed, and [they think] they’re just getting stuff that’s given to them in some kind of neutral way, not recognizing that behind what’s given to them is the most extraordinary intelligence that we have ever created in AI that is extremely good at figuring out how to tweak the attitudes or emotions of the people they’re engaging with to drive them down rabbit holes of engagement.’ Read the interview.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation