
DW Weekly #128 – 18 September 2023


Dear readers,

The sense of urgency surrounding AI regulations that marked the start of the month continues unabated. The European Commission is advocating for a global framework on AI (including an IPCC-style body to govern it), while the USA is deliberating over who should take the lead in regulating AI. And we haven’t even started Q4, which will accelerate things even more. Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

European Commission calls for an IPCC for AI (the concept’s not new though)

European Commission President Ursula von der Leyen’s State of the EU speech (read or watch) last week wouldn’t have been complete without a deep dive into how to govern AI.

The EU’s way ahead of the rest in developing AI regulations: The draft AI Act has reached the final stages of its legislative journey, although it will take years before the rules come into effect. And yet, the EU is not quite finished. It wants to do more – this time, on a global scale – by transposing the effectiveness of the Intergovernmental Panel on Climate Change (IPCC) to the AI realm.

The context: A global framework for AI

The EU is acutely aware that without similar rules in other major economies, the impact of its rules remains limited. It, therefore, wants others to align their policies and collaborate towards a collective goal. That goal is a new global framework on AI built on three pillars – guardrails, governance, and guiding innovation.

In practice, this means von der Leyen wants to see the EU’s upcoming AI Act exported to other countries. She proudly asserts, ‘Our AI Act serves as a global blueprint’, positioning it as the ideal guardrail.

How to get there: An IPCC for AI

The goal of a global framework is to cultivate a shared understanding of the profound impact AI has on our societies. The way this would be achieved (von der Leyen’s second pillar) is through a body similar to the IPCC, whose reports establish scientific consensus on climate change.

Von der Leyen explains: ‘Think about the invaluable contribution of the IPCC for climate, a global panel that provides the latest science to policymakers. I believe we need a similar body for AI.’ Its aim would be to develop ‘a fast and globally coordinated response – building on the work done by the Hiroshima Process and others.’

In reality, the IPCC for AI is not a new proposal. The concept goes back to (at least) 2018, when French President Emmanuel Macron told the Internet Governance Forum (held in Paris that year) of his intention to create ‘an equivalent of the renowned IPCC for artificial intelligence’. 

Macron’s vision was ahead of its time: ‘I believe this “IPCC” should have a large scope. It should naturally work with civil society, top scientists, all the innovators here today […] there can be no artificial intelligence and no genuine “artificial intelligence IPCC” if reflection with an ethical dimension is not conducted.’

One could say that the idea of replicating the original IPCC, refined for application to AI, was sidelined with the establishment of the Global Partnership on Artificial Intelligence (GPAI), an international forum for collaborating on AI policies. Nevertheless, the current surge in interest and concerns surrounding generative AI has generated enough momentum to revive the original concept.

Who to rope in: The industry

Von der Leyen’s third pillar, ‘guiding innovation in a responsible way’, is the European Commission’s way of saying that until rules come into effect, the industry needs to agree on voluntary commitments.

Arguably, the EU is doing a good job at this through the AI Pact, a voluntary set of rules that will act as a precursor to the AI Act. European Commissioner Thierry Breton has advocated heavily among Big Tech companies for the adoption of these guidelines.

But more needs to be done locally: The EU needs to foster a homegrown AI industry, which is still lagging behind. Its latest initiative is an AI Start-Up Initiative, which, according to Breton, will give start-ups access to public high-performance computing infrastructure (and ‘help them lead the development and scale-up of AI responsibly and in line with European values’).

Reality check

The EU has several feathers in its cap (see our recent article on the Brussels effect), but its global ambitions in AI might be a tad premature. First, the AI Act is not yet law. Second, the EU knows that many other countries do not share the same willingness for binding rules (see Japan’s update below).

At most, the EU can aim to export its values of human centricity and transparency to other countries, and advocate for them to become minimum global standards for the safe and ethical use of AI. It’s worth the effort.


Digital policy roundup (11–18 September)
// AI GOVERNANCE //

US Senate judiciary hearings and closed-door meetings: AI debates continue

We’ve heard it all before: AI can be leveraged for good. But AI risks must be curbed. Governments must step in. 

US Senate hearing: All this (and more) was discussed during the US Senate Judiciary Subcommittee’s latest hearing, led by Chairman Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), with testimony from Boston University Law Professor Woodrow Hartzog, NVIDIA Chief Scientist William Dally, and Microsoft President Brad Smith. The hearing emphasised the need to curb deceptive AI-generated content during electoral campaigns, and AI’s misuse for other criminal purposes such as scams.

AI Insight Forum: Lawmakers and tech industry leaders also gathered at Senate Majority Leader Chuck Schumer’s inaugural AI Insight Forum last week. The meeting was held behind closed doors, so we’ve had to rely on reports by journalists gathered outside the building. The main topic was how to address the pressing need for AI regulation given that, according to X (formerly Twitter) owner Elon Musk, ‘AI development is potentially harmful to all humans everywhere’. Musk also floated the idea of a federal department on AI. The tech leaders unanimously agreed that the government needs to intervene.

Why is it relevant? Despite the USA’s traditional laissez-faire approach, the outcomes of these discussions suggest a bipartisan willingness to legislate. But disagreements over how to do this run (too?) deep. At least there is convergence around the need for a new regulator – either through the creation of a new agency or by mandating an existing government entity such as the National Institute of Standards and Technology (NIST). The general election is around the corner, so if any developments are to be initiated, now’s the time.


Japan publishes draft AI transparency guidelines 

Japan, the current chair of the G7, has unveiled new draft guidelines on AI transparency. The voluntary guidelines, which Tokyo will finalise by the end of the year, will urge AI platform developers to disclose vital information about the purpose of their algorithms and the risks they foresee. Additionally, companies involved in AI training will be asked to disclose the data they use to train their models.

These guidelines were outlined during a government AI strategy meeting, where it was also revealed that the government intends to earmark 164 billion yen (USD1.11 billion) for AI next year. That’s an increase of over 40% compared to this year’s allocation.

Why is it relevant? Although Japan’s AI spending shows how serious the country is about building a homegrown AI industry, it reconfirms the country’s preference for non-binding rules and, therefore, indicates that there is still a split in how G7 countries choose to approach AI governance. Japan’s preference for a softer approach, as current G7 chair, puts a damper on the EU’s efforts to establish its upcoming AI Act as a global benchmark. But it’s not all bad: At least the Japan-led G7 Hiroshima AI Process will try to find common denominators among these widely differing approaches (see more below on what to expect in the coming weeks).


// ANTITRUST //

Legal battle against Google’s search monopoly abuse kicks off 

The trial of the US Justice Department’s (DOJ) major antitrust case against Google kicked off last week, signalling the start of a months-long legal battle that could potentially reshape the entire tech industry. The DOJ had filed the civil antitrust suit against Google in late 2020 after examining the company’s business for more than a year.

The lawsuit concerns Google’s search business, and specifically conduct that the DOJ and state attorneys-general consider ‘anticompetitive and exclusionary’, sustaining Google’s monopoly over search and search advertising. The case revolves around Google’s agreements with smartphone manufacturers and other firms, which allegedly entrench its search monopoly.

Google has argued that users have plenty of choices and opt for Google due to its superior product.

Why is it relevant? It’s the first major tech antitrust trial since the Microsoft case of 1998. If Google is found to have breached antitrust law, the judge could simply order Google to refrain from these practices or, more drastically, order the company to sell assets. If the DOJ loses, it would undermine years of effort by the agency to challenge Big Tech’s power.

Case details: USA v Google LLC, District Court, District of Columbia, 1:20-cv-03010


// PRIVACY //

TikTok fined millions for breaching GDPR on children’s data

TikTok has been fined EUR345 million (USD370 million) for breaching privacy laws on the processing of children’s personal data in the EU, the Irish Data Protection Commission (DPC) confirmed. The DPC gave TikTok three months to bring all of its processing into compliance where infringements were found.

The DPC found that certain profile settings posed severe risks to underage users. For instance, some settings were set to public by default (anyone could view the child’s content), while another setting allowed any public user to pair their account with a child’s account and, therefore, to send them direct messages.

Why is it relevant? First, the DPC’s final decision adds to TikTok’s woes in Europe (there’s another ongoing case in the EU). Second, it’s among the largest fines imposed on a tech company under the GDPR.


// COPYRIGHT //

Two new lawsuits allege copyright infringement in AI-model training

A group of writers have initiated legal action against Meta and separately against OpenAI, alleging that the tech giants inappropriately used their literary creations to train their AI models.

In Meta’s case, the writers say their copyrighted books appear in the dataset that Meta has admitted to using to train LLaMA, the company’s large language model. In OpenAI’s case, ChatGPT generates in-depth analyses of the themes in the plaintiffs’ copyrighted works, which the authors say is possible only if the underlying GPT model was trained using their works.

Why is it relevant? First, the lawsuits add to the growing number of cases against AI companies over copyright infringement, broadening the legal minefield surrounding AI training. Second, they add pressure on regulators to bring intellectual property rules up to speed with developments in generative AI. The USA is already mulling new rules, pending a public call for comment.

Case details: Chabon et al v OpenAI et al, California Northern District Court, 3:2023cv04625; Chabon et al v Meta Platforms, California Northern District Court, 3:23-cv-04663


The week ahead (18–25 September)

11 September–13 October: The digital policy issues to be tackled during the 54th session of the Human Rights Council (HRC) include cyberbullying and digital literacy.

18 September: The Commonwealth Artificial Intelligence Consortium (CAIC) is meeting in New York to endorse a new AI action plan for sustainable development.

18–19 September: The SDG Summit in New York will mark ‘the beginning of a new phase of accelerated progress towards the Sustainable Development Goals’. It’s very much needed, considering that, with only seven years left to go, none of the 17 SDGs have been fully met.

19–26 September: The high-level debate of the UN General Assembly’s 78th session kicks off this week. The theme may well be about accelerating progress on the Sustainable Development Goals, but we can expect several countries to explain how they view AI developments and AI regulation. As usual, our team will analyse each and every country statement and tell you what’s weighing most on governments’ minds. Subscribe for just-in-time updates.

20–21 September: The 8th session of the WIPO Conversation, a multistakeholder forum that attracts thousands of participants, will be about generative AI and intellectual property.

21 September: The President of the UN General Assembly will convene a preparatory ministerial meeting in New York ahead of the 2024 Summit of the Future.

24 September: The EU’s Data Governance Act becomes enforceable.

PLUS: What’s ahead on the AI front

8–12 October: AI discussions will likely be a primary focus during this year’s Internet Governance Forum (IGF2023) in Japan. Expect the host country, currently at the G7’s helm, to share updates on the development of guiding principles and a code of conduct for organisations developing advanced AI systems.

1–2 November: The UK’s AI Safety Summit, scheduled to take place in Bletchley Park, Milton Keynes, is expected to build consensus on international measures to tackle AI risks, which is arguably quite a challenge. But the UK’s toughest challenge is actually back home, as it faces pressure to introduce new AI rules.

November–December: The G7 digital and tech ministers are also expected to meet to sign off on draft rules before presenting them to the G7 leaders (as per the outcomes of the recent G7 Hiroshima AI Process ministerial meeting).

12–14 December: The Global Partnership on Artificial Intelligence (GPAI) will hold its annual summit in India (which holds the current presidency).


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?