
DW Weekly #138 – 27 November 2023

DigWatch Weekly. Capturing top digital policy news worldwide

Dear all,

Negotiations on the EU AI Act face challenges as France, Germany, and Italy oppose tiered regulation for foundation AI models. OpenAI’s leadership changes and alleged project Q* raise transparency concerns. The UK, USA, and partners released global AI system development guidelines. Italy is investigating AI data collection, while Switzerland is exploring regulatory approaches. India warned social media giants about the spread of deepfakes and misinformation. The US Treasury imposed record penalties on Binance, and the Australian regulator called for regulatory reform of digital platforms.

Let’s get started.

Andrijana and the Digital Watch team


// HIGHLIGHT //

EU warring over the AI Act

Negotiations on the EU AI Act have hit a significant snag: France, Germany, and Italy have spoken out against the tiered approach initially envisioned for foundation models. The three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to retreat from that approach.

The tiered approach would mean categorising AI into different risk bands, with more or less regulation depending on the risk level. 

France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, and propose ‘mandatory self-regulation through codes of conduct’ for foundation models.

To implement the use-based approach, developers of foundation models would have to define model cards – documents that provide information about machine learning models, detailing various aspects such as their intended use, performance characteristics, limitations, and potential biases.
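For illustration, here is a minimal sketch of what the contents of such a model card could look like as structured data. The field names are hypothetical and indicative only; the non-paper does not prescribe a format.

```python
# A minimal, hypothetical sketch of a model card as structured data.
# Field names are illustrative, not a format prescribed by the non-paper.
model_card = {
    "model_name": "example-foundation-model",
    "intended_use": "General-purpose text generation; not for medical or legal advice.",
    "performance": {"benchmark": "example-eval-suite", "accuracy": 0.87},
    "limitations": [
        "May produce factually incorrect output",
        "Trained predominantly on English-language data",
    ],
    "potential_biases": [
        "Under-representation of low-resource languages",
    ],
}

# Print the card in a human-readable form.
for field, value in model_card.items():
    print(f"{field}: {value}")
```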

An EU AI governance body could help formulate guidelines and oversee the implementation of model cards, which would provide detailed contextual information.

A hard ‘no’ from the European Parliament. European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable.

A suggested compromise. The European Commission circulated a possible compromise: bring back a two-tiered approach, water down the transparency obligations, and introduce a non-binding code of conduct for models that pose a systemic risk. Further negotiations are expected to centre around this proposal.

Still a no from the European Parliament. The Parliament is not budging: It is unwilling to accept self-regulation and will accept EU codes of practice only as a complement to the horizontal transparency requirements for all foundation models.

Chart detailing the content of five AI Act proposals – the IT/FR/DE non-paper, the White House executive order, the Spanish presidency compromise proposal, Parliament’s adopted position, and the Council’s adopted position – grouped under five broad areas: required safety obligations, compute-related monitoring, governance body oversight, codes of conduct, and information sharing.
A comparison of key AI Act proposals. Source: Future of Life Institute

Why is it relevant? 

The Franco-German-Italian non-paper and the Commission’s proposed compromise have sparked concerns that the largest foundation models will remain underregulated in the EU. Add a time constraint to that: Policymakers hoped to finalise the act at a meeting scheduled for 6 December, and the chances of that are currently looking slim. If the EU doesn’t pass the AI Act in 2023, it may lose its chance to establish the gold standard of AI rules.


Digital policy roundup (20–27 November)

// AI //

OpenAI – Last week’s episode

Much has been written about what transpired at OpenAI last week. We have followed the developments, too.

Here’s the quickest recap of the situation on the internet. OpenAI CEO Sam Altman was ousted from the company because he ‘was not consistently candid in his communications’ with the board. CTO Mira Murati took over as interim CEO. Altman then agreed to join Microsoft. The OpenAI board proceeded to appoint Twitch co-founder Emmett Shear as interim CEO. Approximately 700 of OpenAI’s 750 staff sent a letter to the board threatening to resign over the debacle and join Altman at Microsoft. In the end, Altman returned as CEO and OpenAI’s board was partially reshuffled.

And here’s the most exciting part. Reuters reported that Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity. 

Q* can supposedly solve certain maths problems, suggesting a higher reasoning capacity. This could be a potential breakthrough towards artificial general intelligence (AGI), which OpenAI defines as AI systems that surpass human capabilities in most economically valuable tasks.

Why is it relevant? The news has caused quite a stir, with many wondering what exactly Q* is – if it even exists. Is this really about AGI? It’s hard to tell. On the one hand, AI surpassing human capabilities sounds like a dystopia is ahead (why does no one ever think it might be a utopia?). On the other hand, since the company hasn’t commented so far, it’s best not to buy into the hype yet.

But what this is definitely about is transparency – and not only at OpenAI. We all need to understand who (or what) it is that shapes our future. Are we mere bystanders?

Drawing of a game board with different-coloured marker pieces waiting for the three dice being tossed by a human hand to signal the next move. Some marker pieces have chat bubbles with icons indicating surprise, intellectual thought or AI implications, justice, economics, agreement, and globalisation.
The grand game of addressing AI for the future of humanity. Who holds the dice? Credit: Vladimir Veljašević

UK, USA, and 16 other partners publish guidelines for secure AI system development

In collaboration with 16 other countries, the UK and the USA have released the first global guidelines to enhance cybersecurity throughout the life cycle of an AI system.

The guidelines were developed by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) with international partners, with contributions from major companies such as Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI.

The guidelines span four key areas within the life cycle of the development of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance.

  1. The section about the secure design stage focuses on understanding risks, threat modelling, and considerations for system and model design. 
  2. The section on the secure development stage includes guidelines for supply chain security, documentation, and managing assets and technical debt. 
  3. The secure deployment stage section emphasises protecting infrastructure and models, developing incident management processes, and ensuring responsible release. 
  4. The secure operation and maintenance stage section provides guidelines for actions relevant after deployment, such as logging, monitoring, update management, and information sharing (see the illustrative sketch below).
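To make the last point concrete, here is a minimal, hypothetical sketch of the kind of audit logging the secure operation and maintenance guidance points towards. The function and field names are our own; the guidelines themselves are technology-neutral and do not prescribe any implementation.

```python
import json
import logging
from datetime import datetime, timezone

# Write an auditable trace of each model interaction to a log file,
# recording metadata (timestamps, lengths) rather than raw content
# to avoid storing personal data unnecessarily.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

def log_inference(user_id: str, prompt: str, response: str) -> None:
    """Record one model interaction for later monitoring and incident review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,             # pseudonymise before logging in production
        "prompt_length": len(prompt),   # lengths only, not raw text
        "response_length": len(response),
    }
    logging.info(json.dumps(record))

log_inference("user-123", "What is in the AI Act?", "The AI Act ...")
```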

Why is it relevant? The considerable number of signatory institutions from 18 countries indicates a growing consensus on the importance of securing AI technologies.

Graphic shows a humanoid AI in front of a half-circle world map showing various icons representing technology and networks. Outside the half circle, icons for the sun and clouds are shown, with one of the clouds representing a cloud network.
Image credit: NCSC.

Italy’s DPA launches investigation into data collection for AI training

The Italian Data Protection Authority (DPA) is initiating a fact-finding inquiry to assess whether online platforms have put in place sufficient measures to stop AI platforms from scraping personal data for training AI algorithms. The investigation will cover all public and private entities operating as data controllers, established or providing services in Italy. The DPA has invited trade associations, consumer groups, experts, and academics to offer their input on security measures currently in place and those that could be adopted to prevent the extensive collection of personal data for training purposes.

Why is it relevant? Italy’s DPA takes privacy very seriously: It even imposed a (temporary) limitation on ChatGPT earlier this year. The authority stated it would adopt the necessary measures once the investigation is concluded, and we have no doubt it won’t be pulling any punches.

Desktop with partial keyboard extending off to the right side has a paperclipped yellow note that says ‘Personal Data’

Switzerland examines regulatory approaches for AI

Switzerland’s Federal Council has tasked the Department of the Environment, Transport, Energy, and Communications (DETEC) with providing an overview of potential regulatory approaches for AI by the end of 2024.

Those approaches must align with existing Swiss law and be compatible with the upcoming EU AI Act and the Council of Europe AI Convention. The council aims to use the analysis as a foundation for an AI regulatory template in 2025.




// CONTENT POLICY //

India’s government issues warning to social media giants on deepfakes and misinformation

The Indian government has issued a warning to social media giants, including Facebook and YouTube, regarding the dissemination of content that violates local laws. The government is particularly concerned about harmful content related to children, obscenity, and impersonation, with a focus on deepfakes. 

The government emphasised the non-negotiable nature of these regulations, stressed the need for continuous user reminders about content restrictions, and warned of potential government directives in the event of non-compliance. Social media platforms have reportedly agreed to align their content policies with government regulations in response to these concerns.

Digital smartphone shows its home screen with its Social Media apps highlighted in a group that contains icons for Pinterest, YouTube, X (formerly Twitter), and other apps

// CRYPTO //

US Treasury hits Binance with record-breaking penalties for money laundering and sanctions violations

The US Department of the Treasury, alongside various enforcement agencies, took unprecedented action against Binance Holdings Ltd., the world’s largest virtual currency exchange, for violating anti-money laundering (AML) and sanctions laws. 

Binance admitted to operating as an unregistered money services business, disregarding anti-money laundering protocols, bypassing customer identity verification, failing to report suspicious transactions including those involving terrorist groups, ransomware, child sexual exploitation, and other illicit activities, and facilitating trades between US users and sanctioned jurisdictions. 

Binance reached a settlement with the US government that includes a historic $4.3 billion payment, a five-year monitoring period, and stringent compliance obligations. Binance also agreed to exit the US market entirely and comply with sanctions. Failure to meet these terms could result in further substantial penalties.

Why is it relevant? Because it sends a strong message that the cryptocurrency industry must adhere to the rules of the US financial system or face government action.

Compound digital illustration shows the Binance logo, a $50 US bill, and several bitcoin tokens.

// COMPETITION //

Australian regulator calls for new competition laws for digital platforms

The Australian Competition and Consumer Commission (ACCC) has emphasised the urgent need for regulatory reform in response to the expanding influence of major digital platforms, including Alphabet (Google), Amazon, Apple, Meta, and Microsoft. The ACCC’s seventh interim report from the Digital Platform Services Inquiry underscores the risks associated with these platforms extending into various markets and technologies, potentially harming competition and consumers. While acknowledging the benefits of digital platforms, the report highlights concerns about invasive data collection practices, consumer lock-in, and anti-competitive behaviour.

The report further explores the impact of digital platforms on emerging technologies, emphasising the need for adaptable competition laws to address evolving challenges in the digital economy. 

The ACCC suggests updating competition and consumer laws, introducing targeted consumer protections, and implementing service-specific codes to mitigate these risks and ensure effective competition in evolving digital markets. 

Why is it relevant? The concerns raised by the ACCC are not unique to Australia. Regulatory reforms in Australia could set a precedent for other jurisdictions grappling with similar issues.

Cover page of the seventh Digital platform services inquiry interim report dated September 2023. It has a dark blue isosceles triangle with a lighter bluish internal triangle at the lower left apex and has multiple chat bubbles containing icons representing digital services.
Image credit: ACCC.

The week ahead (27 November–4 December)

27–29 November: The 12th UN Forum on Business and Human Rights is taking place in a hybrid format to discuss effective change in implementing obligations, responsibilities, and remedies.

29–30 November: The inaugural Global Conference on Cyber Capacity Building (GC3B) will be held under the theme of cyber resilience for development and will culminate with the announcement of the Accra Call: a global action framework that supports countries in strengthening their cyber resilience. 

30 Nov 2023: Held in conjunction with the UN Business and Human Rights Forum, the UN B-Tech Generative AI Summit: Advancing Rights-Based Governance and Business Practice will explore practical applications of the UN Guiding Principles on Business and Human Rights and facilitate discussions on implementing these principles for generative AI and other general-purpose AI.

4–8 Dec 2023: UNCTAD eWeek 2023 will address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes? Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to read reports from the event. In addition to providing just-in-time reporting from the eWeek, Diplo will also be involved in several activities throughout the event.


#ReadingCorner
The cover page of Diplo’s AI Seasons Autumn 2023 edition highlights the article ‘How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation’ by Jovan Kurbalija. It has the word humAInism in the lower right corner.

How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation?

Jovan Kurbalija explores the implications of the 1888 Montenegrin Civil Code for the AI era. He argues that AI governance, much like the Montenegrin Civil Code, is about integrating tradition with modernity.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor – Digital Policy, DiploFoundation