DW Weekly #128 – 18 September 2023


Dear readers,

The sense of urgency surrounding AI regulation that marked the start of the month continues unabated. The European Commission is advocating for a global framework for AI (including an IPCC-style body to govern it), while the USA is deliberating over who should take the lead in regulating AI. And we haven’t even started Q4, which will accelerate things even more. Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

European Commission calls for an IPCC for AI (the concept’s not new though)

European Commission President Ursula von der Leyen’s State of the EU speech (read or watch) last week wouldn’t have been complete without a deep dive into how to govern AI.

The EU is way ahead of the rest in developing AI regulations: The draft AI Act has reached the final stages of its legislative journey, although it will take years before the rules come into effect. And yet, the EU is not quite finished. It wants to do more – this time, on a global scale – by transposing the effectiveness of the Intergovernmental Panel on Climate Change (IPCC) to the AI realm.

The context: A global framework for AI

The EU is acutely aware that without similar rules in other major economies, the impact of its rules remains limited. It therefore wants others to align their policies and collaborate towards a collective goal: a new global framework on AI built on three pillars – guardrails, governance, and guiding innovation.

What it means in practice is that Von der Leyen wants to see the EU’s upcoming AI Act exported to other countries. She proudly asserts, ‘Our AI Act serves as a global blueprint’, hence positioning it as the ideal guardrail.

How to get there: An IPCC for AI

The goal of a global framework is to cultivate a shared understanding of the profound impact AI has on our societies. The way this would be achieved (von der Leyen’s second pillar) is through a body similar to the Intergovernmental Panel on Climate Change (IPCC), whose reports establish scientific consensus on climate change. 

Von der Leyen explains: ‘Think about the invaluable contribution of the IPCC for climate, a global panel that provides the latest science to policymakers. I believe we need a similar body for AI.’ Its aim would be to develop ‘a fast and globally coordinated response – building on the work done by the Hiroshima Process and others.’

In reality, the IPCC for AI is not a new proposal. The concept goes back to (at least) 2018, when French President Emmanuel Macron told the Internet Governance Forum (held in Paris that year) of his intention to create ‘an equivalent of the renowned IPCC for artificial intelligence’. 

Macron’s vision was ahead of its time: ‘I believe this “IPCC” should have a large scope. It should naturally work with civil society, top scientists, all the innovators here today […] there can be no artificial intelligence and no genuine “artificial intelligence IPCC” if reflection with an ethical dimension is not conducted.’

One could say that the idea of creating an exact replica of the original IPCC, refined for application to AI, was sidelined with the establishment of the Global Partnership on Artificial Intelligence (GPAI), an international forum for collaborating on AI policies. Nevertheless, the current surge in interest and concerns surrounding generative AI has generated enough momentum to revive the original concept.

Who to rope in: The industry

Von der Leyen’s third pillar, ‘guiding innovation in a responsible way’, is the European Commission’s way of saying that until rules come into effect, the industry needs to agree on voluntary commitments.

Arguably, the EU is doing a good job at this through the AI Pact, a voluntary set of rules that will act as a precursor to the AI Act. European Commissioner Thierry Breton has advocated heavily among Big Tech companies for the adoption of these guidelines.

But more needs to be done locally: The EU needs to foster a homegrown AI industry, which is still lagging behind. In its latest initiative, the EU will launch an AI Start-Up Initiative, which, according to Breton, will give start-ups access to public high-performance computing infrastructure (and ‘help them lead the development and scale-up of AI responsibly and in line with European values’).

Reality check

The EU has several feathers in its cap (see our recent article on the Brussels effect), but its global ambitions might be a tad premature in AI. First, the AI Act is not yet law. Second, the EU knows that many other countries do not share the same willingness for binding rules (see Japan’s update below).

At most, the EU can aim to export its values of human centricity and transparency to other countries, and advocate for them to become minimum global standards for the safe and ethical use of AI. It’s worth the effort.


Digital policy roundup (11–18 September)
// AI GOVERNANCE //

US Senate judiciary hearings and closed-door meetings: AI debates continue

We’ve heard it all before: AI can be leveraged for good. But AI risks must be curbed. Governments must step in. 

US Senate hearing: All this (and more) was discussed during the US Senate Judiciary Subcommittee’s latest session, led by Chairman Richard Blumenthal (D-CT) and Ranking Member Josh Hawley (R-MO), and attended by Boston University law professor Woodrow Hartzog, NVIDIA Chief Scientist William Dally, and Microsoft President Brad Smith. The hearing emphasised the need to curb AI-generated deceptive content during electoral campaigns, as well as AI’s misuse for other criminal purposes such as scams.

AI Insight Forum: Lawmakers and tech industry leaders also gathered at Senate Majority Leader Chuck Schumer’s inaugural AI Insight Forum last week. The meeting was held behind closed doors, so we’ve had to rely on reports by journalists gathered outside the building. The main topic was how to address the pressing need for AI regulation given that, according to Elon Musk of X (formerly Twitter), ‘AI development is potentially harmful to all humans everywhere’. Musk also floated the idea of a federal department on AI. The tech leaders unanimously agreed that the government needs to intervene.

Why is it relevant? Despite the USA’s traditional laissez-faire approach, the outcomes of these discussions suggest a bipartisan willingness to legislate. But disagreements over how to do this run (too?) deep. At least there is convergence on the need for a new regulator – either through the creation of a new agency or by mandating an existing government entity such as the National Institute of Standards and Technology (NIST). The general election is around the corner, so if any developments are to be initiated, now’s the time.


Japan publishes draft AI transparency guidelines 

Japan, the current chair of the G7, has unveiled new draft guidelines on AI transparency. The voluntary guidelines, which Tokyo will finalise by the end of the year, will urge AI platform developers to disclose vital information about the purpose of their algorithms and what they consider the potential risks to be. Additionally, companies involved in AI training will be asked to disclose the data they use to train their models.

These guidelines were outlined during a government AI strategy meeting, where it was also revealed that the government intends to earmark 164 billion yen (USD1.11 billion) for AI next year. That’s an increase of over 40% compared to this year’s allocation.

Why is it relevant? Although Japan’s AI spending shows how serious the country is about a homegrown AI industry, it reconfirms Japan’s preference for non-binding rules and therefore indicates that there is still a split in how G7 countries choose to approach AI governance. As the current G7 chair, Japan’s preference for a softer approach puts a damper on the EU’s efforts to establish its upcoming AI Act as a global benchmark. But it’s not all bad: At least the Japan-led G7 Hiroshima AI Process will try to find common denominators among these widely differing approaches (see more below on what to expect in the coming weeks).


// ANTITRUST //

Legal battle against Google’s search monopoly abuse kicks off 

The trial of the US Justice Department’s (DOJ) major antitrust case against Google kicked off last week, signalling the start of a months-long legal battle that could potentially reshape the entire tech industry. The DOJ had filed the civil antitrust suit against Google in late 2020 after examining the company’s business for more than a year.

The lawsuit concerns Google’s search business, which the DOJ and state attorneys general consider ‘anticompetitive and exclusionary’, sustaining its monopoly over search and search advertising. The case revolves around Google’s agreements with smartphone manufacturers and other firms, which allegedly strengthen its search monopoly.

Google has argued that users have plenty of choices and opt for Google due to its superior product.

Why is it relevant? It’s the first major tech antitrust trial since Microsoft’s 1998 case. If Google is found to have breached antitrust law, the judge could simply order Google to refrain from these practices or, more seriously for Google, order the company to sell assets. If the DOJ loses, it would undermine years of effort by the agency to challenge Big Tech’s power.

Case details: USA v Google LLC, District Court, District of Columbia, 1:20-cv-03010


// PRIVACY //

TikTok fined millions for breaching GDPR on children’s data

TikTok has been fined EUR345 million (USD370 million) for breaching privacy laws on the processing of children’s personal data in the EU, the Irish Data Protection Commissioner (DPC) confirmed. The DPC gave TikTok three months to bring all of its processing into compliance where infringements were found.

TikTok was found to have allowed certain profile settings to pose severe risks to underage users. For instance, some settings were set to public by default (anyone could view the child’s content), while another setting allowed any user to pair their account with a child’s account and, therefore, send them direct messages.

Why is it relevant? First, the DPC’s final decision is another blow to TikTok’s woes in Europe (there’s another ongoing case in the EU). Second, it’s among the largest fines imposed on a tech company under the GDPR.


// COPYRIGHT //

Two new lawsuits allege copyright infringement in AI-model training

A group of writers have initiated legal action against Meta and separately against OpenAI, alleging that the tech giants inappropriately used their literary creations to train their AI models.

In Meta’s case, the writers say their copyrighted books appear in the dataset that Meta has admitted to using to train LLaMA, the company’s large language model. In OpenAI’s case, ChatGPT generates in-depth analyses of the themes in the plaintiffs’ copyrighted works, which the authors say is possible only if the underlying GPT model was trained using their works.

Why is it relevant? First, the lawsuits add to the growing number of cases against AI companies over copyright infringement, broadening the legal minefield surrounding AI training. Second, they add pressure on regulators to bring intellectual property rules up to speed with developments in generative AI. The USA is already mulling new rules, pending a public call for comment.

Case details: Chabon et al v OpenAI et al, California Northern District Court, 3:2023cv04625; Chabon et al v Meta Platforms, California Northern District Court, 3:23-cv-04663


The week ahead (18–25 September)

11 September–13 October: The digital policy issues to be tackled during the 54th session of the Human Rights Council (HRC) include cyberbullying and digital literacy.

18 September: The Commonwealth Artificial Intelligence Consortium (CAIC) is meeting in New York to endorse a new AI action plan for sustainable development.

18–19 September: The SDG Summit in New York will mark ‘the beginning of a new phase of accelerated progress towards the Sustainable Development Goals’. It’s very much needed, considering that, with only seven years left to go, none of the 17 SDGs have been fully met.

19–26 September: The high-level debate of the UN General Assembly’s 78th session kicks off this week. The theme may well be about accelerating progress on the sustainable development goals, but we can expect several countries to explain how they view AI developments and AI regulation. As usual, our team will analyse each and every country statement and tell you what’s weighing most on governments’ minds. Subscribe for just-in-time updates.

20–21 September: The 8th session of the WIPO Conversation, a multistakeholder forum which attracts thousands of stakeholders, will be about generative AI and intellectual property.

21 September: The President of the UN General Assembly will convene a preparatory ministerial meeting in New York ahead of the 2024 Summit of the Future.

24 September: The EU’s Data Governance Act becomes enforceable.

PLUS: What’s ahead on the AI front

8–12 October: AI discussions will likely be a primary focus during this year’s Internet Governance Forum (IGF2023) in Japan. Expect the host country, currently at the G7’s helm, to share updates on the development of guiding principles and code of conduct for organisations developing advanced AI systems. 

1–2 November: The UK’s AI Safety Summit, scheduled to take place in Bletchley Park, Milton Keynes, is expected to build consensus on international measures to tackle AI risks, which is arguably quite a challenge. But the UK’s toughest challenge is actually back home, as it faces pressure to introduce new AI rules.

November–December: The G7 digital and tech ministers are also expected to meet to sign off on draft rules before presenting them to the G7 leaders (as per the outcomes of the recent G7 Hiroshima AI Process ministerial meeting).

12–14 December: The Global Partnership on Artificial Intelligence (GPAI) will hold its annual summit in India (which holds the current presidency).


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?

Digital Watch newsletter – Issue 82 – September 2023


Observatory

Snapshot: What are the emerging trends in digital policy?

Geopolitics

The US government has announced its intention (see: executive order) to ban or restrict US investments in China in three industrial sectors – semiconductors, quantum technologies, and (specific) AI systems – while Chinese regulators have not approved Intel’s plan to acquire Israeli chipmaker Tower Semiconductor. New York City has banned the installation of TikTok on government-owned devices.

AI governance

Four AI companies – Anthropic, Google, Microsoft, and OpenAI – have launched a new industry body dedicated to the safe and responsible development of frontier AI models. Meanwhile, dozens of major companies have rushed to block GPTBot, OpenAI’s new web crawler that gathers data to feed ChatGPT.

British MPs are urging the government to introduce new AI rules by the end of the year or risk being left behind. The BRICS countries (Brazil, Russia, India, China, and South Africa) have created a group to study AI governance frameworks and standards and to help make the technologies ‘safer, more reliable, more controllable, and more equitable’. Canada’s draft code of practice for the regulation of generative AI has been released for public comment.

Data protection authorities have voiced concern about tech companies’ data (or web) scraping practices and the consequences for personal data. Just because information is publicly accessible on the internet does not mean that privacy protections no longer apply, their statement clarifies.

Security

The sixth round of UN negotiations on a new cybercrime treaty ended in New York without notable progress.
The Qakbot malware, which infected more than 700,000 devices, was taken down by a law enforcement operation involving the USA, France, Germany, the Netherlands, the UK, Romania, and Latvia. Meta removed thousands of accounts and pages linked to Spamouflage, which it describes as the world’s largest known covert influence operation. Security firm NCC Group reported a record number of ransomware attacks in July, which it attributes to the exploitation of a vulnerability in MOVEit, a file transfer software, by a hacking group known as CLOP or Cl0p.


In the UK, videos shared on TikTok and Snapchat encouraging people to shoplift caused a major commotion and several arrests on Oxford Street in London.

US security and standardisation authorities are urging organisations, especially those supporting critical infrastructure, to consider migrating to post-quantum cryptographic standards in anticipation of cyberattacks powered by quantum computing.

Infrastructure

It will take time for Africa’s internet connectivity to be fully restored after an underwater landslide in the Congo Canyon damaged two major submarine cables running along Africa’s west coast.

Internet economy

On 25 August, more stringent rules for very large online platforms and search engines came into force under the EU’s new Digital Services Act. The European Commission opened formal proceedings against Microsoft for bundling its Teams communication software with Office 365. A few weeks later, Microsoft announced that it would unbundle the software for European and Swiss customers starting in October.

The French competition authority is investigating Apple for possible discriminatory behaviour. Advertisers claim the company imposed its App Tracking Transparency (ATT) policy on them while exempting itself from the same rules.

Microsoft has agreed to transfer the cloud streaming rights for Activision Blizzard games to Ubisoft in order to secure UK approval for its acquisition of Activision. The European Commission will have to reassess its earlier approval.

Digital rights

Sam Altman’s relaunched Worldcoin project, which comprises a cryptocurrency and an identity network, has drawn the attention of privacy regulators over possible irregularities in its biometric data collection methods. Zoom’s revised terms of service sparked controversy over the company’s intention to use customer data for machine learning and AI. It has since clarified its position.

The Norwegian data protection authority has imposed daily fines of NOK 1 million (USD 98,500) on Meta for failing to comply with the ban on behaviour-based commercial targeting on Facebook and Instagram. OpenAI is under investigation in Poland after a researcher claimed the company processed his data ‘unlawfully, unfairly, and in a non-transparent manner’.

Content policy

The Cyberspace Administration of China has published draft guidelines for the introduction of screen-time management software to curb smartphone addiction among minors.

Canada criticised Meta for banning domestic news from its platforms while wildfires ravaged parts of the country. It is asking Google and Meta to contribute at least CAD 230 million (EUR 157 million) to support local media.

Development

Digital identity projects are multiplying around the world. Australia plans new rules for its federally backed digital ID by next year. The US government wants to work with the private sector to develop mobile-phone standards for digital identification, much as the Philippines is planning to do. Nigeria is receiving World Bank support to roll out digital identity cards nationwide.

THE TALK OF THE TOWN – GENEVA

This year’s World Telecommunication/ICT Indicators Symposium (WTIS) (3–4 July) looked into ways of measuring data to advance universal internet connectivity and reviewed the findings of two expert groups, which reaffirmed the importance of internationally comparable data for tracking ICT-related developments. The ITU, in collaboration with the EU, launched the Dashboard for Universal and Meaningful Connectivity to track countries’ progress and performance.

At the AI for Good Global Summit (6–7 July), more than 280 projects showcased AI’s capacity to advance the SDGs and address the world’s pressing needs, amid discussions on AI policy and regulation and on the future evolution of AI.
The ITU Council brought together its 48 member states to review the ITU’s strategic outlook. At this year’s Council (11–21 July), Secretary-General Doreen Bogdan-Martin highlighted two main goals for the ITU: universal connectivity and sustainable digital transformation. The Council noted that digital issues are becoming ever more prominent on global agendas, notably at the upcoming SDG Summit (2023) and the Summit of the Future (2024).


In brief

AI and copyright: The USA and the UK consider new measures

If you use someone else’s work, you need permission. That, in essence, sums up how the world has approached authors’ rights – until now.

The arrival of generative AI models such as ChatGPT has upended the rules of copyright. First, the models powering generative AI are trained on whatever data they can get hold of, copyrighted or not. Disgruntled authors and artists want this practice to stop. For them, the notion of fair use does not suffice, especially if companies are making money off the system.

But there is another problem: users who co-create new content with the help of AI are requesting copyright protection for their works. Since copyright is tied to authorship, intellectual property regulators face a dilemma: which parts should be protected by copyright, and where should the line be drawn?

Faced with these questions, the intellectual property agencies of the UK and the USA have launched consultations to help them define their next steps. Both have acknowledged that new rules may be needed.

The UK’s action. In June, the UK’s Intellectual Property Office formed a working group tasked with drawing up a voluntary code of practice. Microsoft, DeepMind, and Stability AI are among the working group’s members, alongside representatives of arts and research groups.

The government wants the group to produce a guide to good practice to ‘… help AI companies access copyrighted works to train their models, while ensuring that generated outputs are protected (for example, through labelling) to support the authors of copyrighted works’. The government has made clear that if no agreement is reached or the guide is not adopted, it could legislate.

Copyright issues are also among the main challenges facing the UK government as it grapples with AI governance.
The USA’s action. The US Copyright Office has launched a call for contributions to gather public comments on possible regulatory measures or the new rules needed to govern these practices. This is usually the last step before new measures or rules are proposed, so we could be looking at proposals for new legislation before the end of the year.


The questions the Copyright Office is examining are quite specific. First, it wants to understand how AI models use, and should use, copyrighted data in their training mechanisms. Second, it wants proposals on how AI-generated content could be protected by copyright. Third, it intends to determine how copyright liability would work in the context of AI-generated content. Fourth, it is seeking comments on the potential violation of publicity rights, that is, individuals’ rights to control the commercial use of their image or personal information.

Taboo topic. And yet, nothing suggests that these consultations will address – let alone resolve – how to reverse the damage already done. Copyrighted content is now part of the enormous mass of data on which the models were trained, and of the content generated by AI bots. Moreover, if human input is required to trigger copyright protection (as the latest US guidance indicates, for example), what about AI outputs that incorporate copyrighted content to a worrying extent?

Interim solutions. In the meantime, the companies behind powerful language models (the models underpinning generative AI tools) may have to do more to ensure that copyrighted content is not used. One solution could be to implement automated mechanisms that detect copyrighted works in content destined for training or generation, and to remove that portion of the data before training begins. For instance, web crawlers (which websites can restrict or disable) could be prevented from scraping copyrighted content through proper coding.
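
To make this concrete: under the long-standing Robots Exclusion Protocol, a website can tell OpenAI’s GPTBot crawler (mentioned in the Snapshot above) not to scrape it at all by adding a few lines to its robots.txt file. A minimal sketch – the directory names below are hypothetical examples:

    User-agent: GPTBot
    Disallow: /

Or, to wall off only part of a site:

    User-agent: GPTBot
    Allow: /public-content/
    Disallow: /copyrighted-archive/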

Another solution – probably more attractive to companies – is to find new ways, such as licensing, to monetise the process so that both authors and the AI industry benefit. That would be a win-win.

Caricature of a human hand and an AI hand working on the same cartoon of Zarya of the Dawn

Who is Zarya of the Dawn, the character on our cover?

Zarya is the protagonist of a short comic book written by Kris Kashtanova and illustrated with Midjourney, an AI-based image generator. In September 2022, Kashtanova applied to the US Copyright Office for copyright protection for the comic book without disclosing that Midjourney was involved in creating the artwork. Copyright was initially granted, but the Copyright Office later revoked protection for the artwork. It explained that only works created by a human being can be protected. In this case, the book’s layout, text, and storyline qualified for protection, but not the images themselves.

This case sets an important precedent for applying copyright law to AI-generated works. The Copyright Office’s decision confirms that humans must be in control of the output, even when a computer is involved in the creative process. By contrast, ‘rather than a tool that Ms Kashtanova controlled and guided to reach her desired image, Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the “authors”, for copyright purposes, of the images the technology generates’.


‘The Brussels effect’: The DSA and the Trans-Atlantic Data Privacy Framework come into force

When a city becomes synonymous with rule-making prowess, its legislators know they are doing something right. Such is the global renown of Brussels, home to the EU’s main institutions.

In recent weeks, two new sets of rules have come into force which, together with the GDPR, set new standards for upholding users’ rights and regulating the market. Both are likely to influence practices and measures in other countries, testifying to the influence of the ‘Brussels effect’ (a concept coined, it seems, by a Columbia law professor).

The first. The EU’s Digital Services Act (DSA) has just begun imposing strict measures on 19 very large online platforms and search engines. These measures range from an obligation to label all advertisements and inform users of who is behind them, to allowing users to turn off personalised content recommendations. As with the GDPR, the DSA’s impact extends beyond the EU’s borders: any company serving European users, wherever it is established, is subject to the new rules. Interestingly, of these 19 major companies, only two are based in Europe – Booking.com, headquartered in the Netherlands, and Zalando, headquartered in Germany. The rest come mainly from the US states of California and Washington (15 of them), while the remaining two (Alibaba and TikTok) are Chinese.

The second. The recently adopted EU-US Trans-Atlantic Data Privacy Framework (TADPF) ensures that the personal data of European citizens crossing the Atlantic enjoys the same level of protection in the USA as within the EU. Even the Court of Justice of the EU contributes to the ‘Brussels effect’: before the TADPF, the court invalidated two earlier transatlantic frameworks – Safe Harbour and the Privacy Shield – each time sending policymakers back to the drawing board on how to align US law with EU standards.

Respected. The GDPR, the law that first made the Belgian capital a byword for regulation, has been emulated in other countries (the so-called de jure Brussels effect). China’s Personal Information Protection Law (PIPL), for example, was heavily influenced by the GDPR, with provisions on data collection, storage, and use that mirror those of the EU legislation.

Fear of missing out. But the ‘Brussels effect’ is also feared by others. In the race to regulate emerging technologies such as AI, countries are competing to get there first, lest they be outpaced by other legislation. British MPs have been particularly worried, pressing the government to speed things up: if the UK does not introduce new statutory regulation within three years, they argue, the government’s good intentions risk being overtaken by other legislation – such as the EU’s AI Act – which could become the de facto standard and be hard to displace.
The EU has had to pay a price for its rule-making influence: companies, often highly critical of the comparatively strict European regulations, accuse it of lacking technological prowess and a competitive edge. But this is a strategically calculated risk on the EU’s part: Brussels knows only too well that its regulatory power cannot easily be curtailed or displaced.


Driverless: The future of autonomous taxis

The driverless car revolution is settling into San Francisco. Hundreds of autonomous cars, belonging mainly to Google’s Waymo, General Motors’ Cruise, Uber, and Lyft, can now regularly be spotted on the city’s streets.

The rise of driverless vehicles follows an 11 August vote by the California Public Utilities Commission, a state agency, to allow Waymo and Cruise to carry paying passengers day and night throughout San Francisco.

Strong opposition. Ahead of the vote, the California Public Utilities Commission faced fierce opposition from residents and city services. Transport and safety agencies, such as the police and fire departments, along with California residents, opposed the expansion of paid robotaxi services on safety grounds. Protesters took to the streets not only to highlight safety problems, but also because the cars were draining the resources needed to keep public transport running smoothly, for example by blocking a busy lane or causing traffic jams with unpredictable manoeuvres.

Car accidents. A few days later, reports of multiple accidents across the city forced the California Department of Motor Vehicles (DMV) to order General Motors to reduce the number of active Cruise vehicles. Residents and city services were right to worry about safety, but the DMV’s decision was not enough to allay fears. The protests continue.

Teething problems? Every emerging technology has teething problems. It becomes critical when those problems threaten human life. Fortunately, the passengers directly involved in these accidents suffered only non-life-threatening injuries (the claim that two Cruise vehicles, by inadvertently blocking an ambulance, contributed to a fatal delay in transferring a pedestrian to hospital was refuted by the company). Still, this raises a sobering question: what if autonomous vehicle accidents were more serious? The spectre of fatal crashes, reminiscent of the 2016 Tesla incident, looms as a haunting reminder of the challenges and responsibilities that come with developing self-driving technology – and of the fact that there is no guarantee fatal car accidents will not happen again. In all likelihood, they will.

Unorthodox methods: In San Francisco, a group of protesters has been stopping robotaxis and placing traffic cones on their hoods to trigger their safety alarms. The cars remain immobilised until a technician resets them.
Source: Safe Street Rebel

No bandwagon effect. Until autonomous taxis earn citizens’ trust, their adoption will remain relatively limited. This is not like buying a household appliance after reading glowing reviews, or joining a new social media platform because half the world is already on it.

Safety concerns are a significant brake that could hold people back and make would-be riders think twice (or three times) before putting their lives in the hands of a driverless car. The question is: what will it take for autonomous taxis to win – or permanently lose – the public’s trust?


PayPal goes where it was feared Libra would venture

Four years have passed since Facebook (now Meta) announced the launch of its digital currency, Libra. At the time, the company was mired in data privacy scandals, which sealed the project’s fate before it even had a chance to take off.
Fast forward. PayPal has just announced a new project: PayPal dollar, a stablecoin (the digital equivalent of fiat currencies such as the US dollar and the euro) that is very similar to what Facebook had in mind with Libra.


How it works. PayPal’s plans for its own stablecoin date back to 2020. Created by Paxos, a private technology company specialising in stablecoins, PayPal USD (PYUSD) was launched a few weeks ago on the Ethereum blockchain. A stablecoin’s value is directly tied to an underlying fiat currency, usually the US dollar; in this case, each PayPal dollar coin is backed 1:1 by US dollars held in reserve accounts managed by Paxos and other custodians.
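
To make the 1:1 peg concrete, here is a minimal sketch in Python of the accounting rule behind a reserve-backed stablecoin (the names are invented; this is not PayPal’s or Paxos’s actual system, which adds audits, fees, and regulatory controls):

    class ReserveBackedStablecoin:
        """Toy ledger for a 1:1 fiat-backed stablecoin (illustrative only)."""

        def __init__(self):
            self.reserve_usd = 0.0  # dollars held in custodial reserve accounts
            self.supply = 0.0       # tokens in circulation

        def mint(self, usd_deposited: float) -> float:
            """Issue tokens only against an equal dollar deposit."""
            self.reserve_usd += usd_deposited
            self.supply += usd_deposited            # 1 token per 1 USD
            assert self.reserve_usd >= self.supply  # the peg invariant
            return usd_deposited                    # tokens issued

        def redeem(self, tokens: float) -> float:
            """Burn tokens and release the same number of dollars."""
            if tokens > self.supply:
                raise ValueError('cannot redeem more than the outstanding supply')
            self.supply -= tokens
            self.reserve_usd -= tokens
            return tokens                           # dollars paid out

The invariant – reserves never falling below the circulating supply – is what keeps each token redeemable at exactly one dollar.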

Under tighter scrutiny. Despite its standing, PayPal operates in a market facing stricter regulation. In November, FTX, then one of the world’s largest cryptocurrency exchanges, went bankrupt. Along the same lines, Paxos was ordered to stop issuing BUSD, the stablecoin it issued for Binance, the world’s largest cryptocurrency exchange. In many respects, PYUSD works much like BUSD (instant payments and low fees).

Improved prospects. A few fundamental differences set PayPal and its stablecoin apart. First, PayPal enjoys a reputation in the financial sector that Facebook and Binance could only have hoped for. Second, policymakers are now more aware of how stablecoins work and of their benefits (and challenges). For example, the fact that stablecoins are not as volatile as other cryptocurrencies makes them a much safer option. PayPal is up to date on know-your-customer requirements, and its open-source code allows anyone to inspect it. The odds are in PayPal’s favour.

Still, PayPal should not take this decisive moment for granted. It can either feed regulators’ growing distrust of cryptocurrencies, or show that stablecoins – the most popular form of cryptocurrency – are the future of digital payments.


News from the Francophonie



Launch of fortnightly meetings on digital governance developments for francophone delegations to the United Nations in New York


Issues linked to technological developments occupy a central place on the UN agenda. In New York, digital policy has become a cross-cutting topic for the main UN bodies, and its transversality is also evident in the work of the six committees of the UN General Assembly, which approach it from different angles. The First Committee addresses the security implications of digital technologies, devoting a substantial part of its work to cybersecurity. The Second Committee examines them through the prism of the 2030 Agenda, formulating recommendations to channel the potential of digital technologies into implementing the Sustainable Development Goals. Their human rights implications are examined by the Third Committee, while the Fourth Committee works on the role of digital technologies in UN communications, in combating hate speech and disinformation, and in fulfilling the mandates of peacekeeping operations. Digital matters do not escape the Fifth Committee either, given the budgetary implications of the UN’s digital transformation. Issues relating to the international regulation of digital technologies are an integral part of the Sixth Committee’s work.

In parallel, the publication of the UN Secretary-General’s report ‘Our Common Agenda’ (September 2021) set in motion multiple intergovernmental and multistakeholder processes that deal specifically with digital issues, through the consultations shaping a Global Digital Compact and a Code of Conduct for Information Integrity on Digital Platforms. One chapter of ‘Our Common Agenda’ is at the centre of intergovernmental discussions: ‘UN 2.0’, an ambitious project to make digital innovation, as its name suggests, a driver of UN modernisation. Digital issues are therefore omnipresent in the work of diplomats in New York, and will become ever more so as the digital transition brings new challenges and unprecedented opportunities for peace, development, and human rights.

Despite this growing importance, the participation of francophone diplomats in processes devoted to digital issues remains rather low in New York, particularly among those from developing countries. For lack of staff sufficiently trained in these issues within the permanent missions, digital matters seem to be left to the experts in Geneva and to specialists dispatched ad hoc by capitals to follow special sessions. The interests of the member states of the Organisation internationale de la Francophonie (OIF) could suffer from this absence from the discussion and negotiation sessions held in New York, hence the need to raise their representatives’ awareness of the diplomatic implications of digital developments.

To this end, through its Representation to the United Nations in New York (RPNY) and its Directorate for the Economic and Digital Francophonie (DFEN), the OIF has set up a ‘Café numérique francophone’ (Francophone Digital Café) for the delegations of francophone countries.

In an informal setting and on a regular (fortnightly) basis, the initiative brings francophone experts together at the RPNY for an hour and a half to discuss current developments in digital cooperation and take stock of the UN processes dedicated to digital matters, with a view to:

  • supporting ownership of the diplomatic challenges and opportunities linked to developments in digital technology and artificial intelligence;
  • providing information on the multilateral processes devoted to digital issues, with particular emphasis on the topics most relevant to the UN agenda in New York;
  • encouraging dialogue and consultation on the agenda items of bodies dealing with the various aspects of digital technologies, with a view to developing common positions.

The ultimate aim is to build a francophone diplomatic community that is informed, trained, and organised to best defend its interests and support the Francophonie’s advocacy in intergovernmental discussions on digital matters.

As part of the digital governance strand of the OIF’s D-CLIC project, the first session of the Café numérique francophone was held on 6 July 2023 and was devoted to the Francophonie’s contribution to the Global Digital Compact. Hand-delivered in New York on 3 May 2023 to the UN Secretary-General’s Envoy on Technology, Mr Amandeep Singh Gill, the contribution positions the francophone community on the major issues in the international debate on digital governance and highlights two sizeable challenges: strengthening digital capacities as an indispensable component of achieving universal connectivity and reducing the digital divide, on the one hand, and defending cultural and linguistic diversity in the digital space through robust advocacy for the ‘discoverability’ of online content, on the other.

Find out more: www.francophonie.org

A third cohort of civil servants and diplomats receives online training in French on internet governance


Following a call that attracted more than 300 applications, 26 selected civil servants and diplomats from 18 OIF member states and governments will begin a 10-week online training course, Introduction to Internet Governance, on 14 September. The training follows on from the two pilot courses of the ‘D-CLIC, formez-vous au numérique avec l’OIF’ project, supported by the OIF in 2022 and delivered in French by DiploFoundation. To build on these actions, reach more francophone public officials, and turn the course into a long-term training cycle, the OIF chose to support Université Senghor in rolling out this new capacity-building activity in 2023. This third session will therefore be delivered by the Alexandria-based university, a direct operator of the Francophonie, whose mission is to train, in French, professionals capable of meeting the challenges of sustainable development in Africa and Haiti.

Internet governance (IG) is indeed playing an ever greater role in the work of national diplomats and civil servants. The training cycle, delivered in French and requiring six to eight hours of study per week, presents its strategic and operational stakes for countries, covering core topics including infrastructure and standardisation; cybersecurity; legal, economic, development, and sociocultural issues; human rights; and IG processes and actors.

Through this training cycle, the OIF aims to strengthen the skills of francophone civil servants and diplomats so that they can better grasp the current and future challenges of digital governance. Participants will be better equipped to understand the terminology and concepts of digital governance and to identify its institutional, regional, and international dimensions.

Applications from women and from developing francophone OIF member countries were strongly encouraged: 30% of the selected civil servants and diplomats are women, and more than 92% come from developing countries of the Global South.

With a view to replicating the initiative, further sessions are planned for 2024.

Upcoming events:

  • Conference of the Francophone Network of Media Regulators – REFRAM (9–10 October 2023, Dakar)
  • OIF participation in the annual general meeting of ICANN (ICANN 78), the Internet Corporation for Assigned Names and Numbers (21–26 October 2023, Hamburg)

DW Weekly #127 – 11 September 2023


Dear all,

Last week’s focus was on the G20 Summit and the success of Indian diplomacy in fostering consensus on the New Delhi Leaders’ Declaration. AI remained in focus for the G7 and on both sides of the Atlantic, and Google is facing a monopoly trial in the USA, the first of its kind in the modern internet era.

Let’s have a closer look.

Pavlina and the Digital Watch team


// HIGHLIGHT //

G20 Summit and the New Delhi Leaders’ Declaration

The G20 summit over the weekend reached an unanticipated consensus. The summit statement on the Russia-Ukraine conflict, together with the inclusion of the African Union as a new member, is seen as a significant success of Indian diplomacy. 

The group adopted the New Delhi Leaders’ Declaration by consensus, with digital issues given comparatively greater prominence than other diplomatic issues. The declaration deals with technological transformation and digital public infrastructure, boosting India’s push for the global adoption of digital public infrastructure. The G20 Framework for Systems of Digital Public Infrastructure, the global Digital Public Infrastructure Repository, and the One Future Alliance (OFA) proposal are voluntary measures aimed at supporting the Global South in building inclusive digital public infrastructure.

The declaration also endorses the joint paper by the International Monetary Fund (IMF) and the G20’s Financial Stability Board (FSB) outlining policy and regulatory recommendations to address the risks of crypto assets.

On the topic of AI, the declaration reaffirmed existing G20 AI principles from 2019 with calls for global discussions on AI governance. The declaration also places a strong emphasis on the gender digital divide.

Why is it relevant?

The fact that the G20 adopted a consensus document, unlike previous G20 meetings that ended with chair’s summaries, is seen as a win that staves off division within the G20. The outcomes, however, are being criticised for lacking concrete actions, implementation steps, and timelines.


Digital policy roundup (5–11 September)
// ON THE SIDELINES OF G20 SUMMIT //

India, the Middle East and Europe’s new economic corridor

The USA, India, Saudi Arabia, the UAE, France, Germany, Italy, and the EU have announced a major international infrastructure project – the India-Middle East-Europe Economic Corridor (IMEC) – to connect India, the Middle East, and Europe with railways, shipping lines, high-speed data cables, and energy pipelines. The project aims to counter China’s Belt and Road vision, in which the Middle East is also a key player.

Why is this relevant?

The Chinese Belt and Road Initiative (BRI), launched in 2013 and also referred to as the New Silk Road, is an ambitious infrastructure project devised to link East Asia and Europe. Over the years, it has expanded to Africa, Oceania, and Latin America, broadening Chinese influence. The new IMEC project would create an economic corridor between India, the Middle East, and the EU, fostering trade and exports, as well as the influence of the partner countries in the region. It also involves laying high-speed data cables from India to Europe and providing internet access throughout the region.


// AI GOVERNANCE //

G7 to develop an international code of conduct for AI

In seeking a unified approach towards AI, the G7 countries have agreed to create an international code of conduct for AI. According to the G7 statement, the current process will result in a non-binding international rulebook setting principles for the oversight of advanced forms of AI and covering guidelines for, and control over, the use of AI technology. The code of conduct will be presented to the G7 leaders at the beginning of November.

Why is this relevant?

The G7 code of conduct for AI would require companies to take responsibility for the AI mechanisms they create and for potential societal harm, and to put cybersecurity and risk management systems in place to mitigate risks caused by AI, from development to deployment. The G7 code of conduct aims to guide the development of regulatory and governance regimes, coinciding with the ongoing adoption of the EU AI Act and the voluntary commitments secured in the USA in July.


Civil society issues a statement on EU’s AI Act loophole

More than 115 civil society organisations are calling on EU legislators to remove a loophole in the draft AI Act, set to be adopted by the end of the year. In a joint statement, civil society calls for changes to the high-risk classification process in Article 6, asking the legislators to revert to the original wording and ensure that the rights of people affected by AI systems are prioritised.

As per the current wording of Article 6, the regulation would allow ‘the developers of high-risk systems to decide themselves if they believe the system is “high-risk”’. As a result, the same company that would be subject to the law is given the power to decide whether the law applies to it. The changes that created this loophole were introduced as a result of lobbying efforts by tech companies.

Why is this relevant?

In its original form, the draft AI Act outlined a list of ‘high-risk uses’ of AI, including AI systems used to monitor students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who gets access to welfare benefits. The legislation would require developers and deployers of such high-risk AI to ensure that their systems are safe and free from discriminatory bias and to provide publicly accessible information about how their systems work.


Pressure builds to legislate on AI in the US

On the other side of the Atlantic, the Biden administration is under increased pressure to require government agencies to comply with the AI Bill of Rights. More than 60 organisations have called for the AI Bill of Rights to become binding policy for US federal government agencies, contractors, and grantees, to ensure guardrails and protections against algorithmic abuse.

The USA has taken a comparatively hands-off approach to AI regulation so far. However, the calls to legislate have now materialised in a bipartisan AI legislative effort. The heads of the Senate Judiciary Subcommittee on Privacy, Technology, and Law, Sen. Richard Blumenthal (D-CT) and Sen. Josh Hawley (R-MO), announced a framework to regulate AI. According to The Hill, ‘The framework calls for establishing a licensing regime administered by an independent oversight body. It would require companies that develop AI models to register with the oversight authority, which would have the power to audit the companies seeking licenses.’ It also calls for Congress to clarify that Section 230 of the Communications Decency Act, which shields tech companies from legal consequences of content posted by third parties, does not apply to AI.

Why is this relevant?

The latest push to legislate AI would put in place a binding framework for companies (through the AI framework) and for the US federal government (through a binding AI Bill of Rights), providing transparency and protection to consumers and children, and defending national security. The AI framework announcement comes days before the AI Senate Forum scheduled for 13 September. The initiative of Senate Majority Leader Chuck Schumer (D-NY) will bring top executives of the biggest tech companies together for an ‘Insight Forum’, and aims to supplement the work already underway on AI regulation.


UNESCO releases guidance on AI in education

Another call to regulate generative AI, this time in schools, comes from UNESCO with its Guidance for Generative AI in Education and Research. According to the guidance, governments should take steps, without delay, to safeguard data privacy and implement age restrictions (a minimum age of 13) for users.

Why is this relevant?

Most educational institutions worldwide currently face the dilemma of how to implement and oversee AI in educational processes. Beyond the question of whether AI should be prohibited and how it should be regulated, current generative AI models, such as ChatGPT, are trained on data from online users. This data mostly reflects the values and dominant social norms of the Global North and may therefore widen the digital divide.




// ANTITRUST //

European Commission designates six companies as gatekeepers under the DMA

The European Commission has designated six major tech companies – Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft – as gatekeepers under the Digital Markets Act (DMA), concluding a 45-day review process. The designation covers a total of 22 core platform services provided by these companies.

These companies must ensure full compliance with the relevant obligations under the DMA by 6 March 2024. The obligations relate to, for example, data use, self-preferencing of their own products or services, pre-installation of applications or default settings, and interoperability.

In the event a gatekeeper breaches the rules under the DMA, it risks being fined up to 10% of its total global annual turnover. This can be increased to up to 20% of the total annual turnover in case of repeated offences.

Why is this relevant?

The DMA is directly applicable in EU member states. Third parties may invoke the rights and obligations stemming from the DMA directly in their national courts.


Google faces trial on market dominance

In the first monopoly trial against a Big Tech company since 1998, the US Department of Justice (DOJ) and a bipartisan group of attorneys general from 38 states and territories have commenced a trial against Google over whether it abused its dominant position in online search. Filed three years ago, the case – U.S. et al v. Google – alleges that Google used its 90% market share to illegally throttle competition in both search and search advertising. The trial is seen as pivotal for two reasons: It moves beyond challenging Big Tech's mergers and acquisitions to examine their business models, and it is the DOJ's first such case since 1998, when it successfully argued that Microsoft had monopolised the personal computer market.

On the same note, Google has agreed to pay USD2 billion in a tentative settlement with 50 US states on an alleged app store monopoly. 

Why is this relevant?

The outcome of this case will set a precedent for Big Tech's business practices and how those practices contribute to market dominance. The trial is set to last ten weeks.


The week ahead (11–18 September)

12–15 September: WTO Public Forum 2023 (Geneva), with a launch of the Digital and Sustainable Trade Facilitation Global Report 2023: State of Play and Way Forward on 15 September 

11 September–13 October: The 54th session of the Human Rights Council

14–15 September: Global Cyber Conference 2023

18–19 September: SDG Summit 2023


#ReadingCorner

The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0

The RAND Corporation published a report on the impacts of generative AI on social media manipulation and national security risks. While the authors focus on China as an example of this potential threat, many actors could use generative AI for social media manipulation, including technically sophisticated non-state actors. Read the full report.



Digital Government Review of Latin America and the Caribbean

The OECD report analyses how governments in Latin America and the Caribbean could use digital technology and data to foster responsiveness, resilience, and proactivity in the public sector. Looking at governance frameworks, digital government capabilities, data-driven public sector, public service design and delivery, and digital innovation in the public sector, it provides policy recommendations. Read the full report.


Pavlina Ittelson – Author
Executive Director, Diplo US
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #126 – 4 September 2023


Dear all,

We’re starting September with a heightened sense of urgency around AI rules. In focus right now: how to make up for infringements suffered by copyright holders, and how to tackle content that’s been both human-authored and AI-generated (heads-up: We’re discussing this in more depth in our monthly issue, out this week). Meanwhile, cybercrime convention negotiations concluded in New York last week. There’s been limited headway, to no one’s surprise.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

UN’s cybercrime treaty: Limited headway as sixth round of negotiations concludes in New York

The sixth round of UN negotiations on a new cybercrime treaty (technically, a convention on countering the use of ICTs for criminal purposes) concluded in New York last week (the report is in two parts: first and second). The discussions have been captured in a longer, predominantly red-tracked draft document.

And on it goes. The outcome is similar to previous rounds. Lots of proposed additions and deletions to the draft text, with limited headway in resolving the primary disagreements. 

Disagreements persist. One relates to the scope of the convention and the related definitions: Is the convention trying to address core cybercrime offences (the approach backed by the USA, the EU, and others), or the broader use of ICTs for criminal purposes (backed by Russia, China, and others)? The other relates to the lack of human rights safeguards, which could leave the door open to expanded government surveillance powers and broader cross-border government access to personal data.

Drawing ire. Human rights organisations have repeatedly decried the apparent lack of human rights safeguards. The current wording goes far beyond tackling cybercrime, they said in a press conference. As it stands, the draft doesn't address issues such as overreaching powers or the absence of judicial oversight.

Meanwhile, Microsoft has put forward a set of recommendations that could mitigate some of these same concerns. For instance, it has suggested that the definition of cybercrime shouldn’t be expanded in a way that it could encompass online content. And that the convention should concern itself only with acts involving criminal intent (to avoid criminalising the work of ethical hackers and cybersecurity researchers).

Why is it relevant? The process, kicked off by Russia a few years ago, is seeing countries like the USA, the EU, China, and Russia working alongside each other on how to solve issues of transnational internet crimes. It’s not a common occurrence, given all that’s currently going on. These negotiations will (or at least should) result in an international cybercrime treaty backed by the UN, the first of its kind. 

What happens next? There's still time for the US government's optimism (Russia is arguably less sanguine) to become reality. After all, according to unnamed diplomatic sources, ‘anything capable of getting a vote at the General Assembly next year would be seen as a win’. Informal consultation groups have until mid-October to send their texts to the chair, after which a revised draft convention will be circulated by the end of November.

Keep track: Our UN Cybercrime Ad Hoc Committee page is tracking the developments.


Digital policy roundup (28 August–4 September)
// AI GOVERNANCE //

US Copyright Office eyeing new copyright rules for AI

The US Copyright Office is seeking public comment on issues it has been facing since the stellar rise in popularity of generative AI tools. There are four challenges:

  • The first relates to the use of copyrighted works to train AI models: What permission and/or compensation for copyright owners is or should be required when their works are included? 
  • The second concerns the copyrightability of material generated using AI systems: Where and how to draw the line between human creation and AI-generated content. 
  • The third is about liability for infringement when, for instance, content generated using AI systems is very similar to copyrighted content. In such cases, how should liability be apportioned between the user whose instructions prompted the output and the developers of the system that was trained on copyrighted content? 
  • The fourth relates to the treatment of generative AI outputs that imitate the persona or style of human artists: Although these personal attributes are not generally protected by copyright law, their impersonation might involve differing state rights and laws.

Why is it relevant? The copyright office held public listening sessions and webinars to gather information and then published a notice of inquiry. This is typically the last step before new measures or rules are proposed, so we might be looking at proposals for new legislation before the year’s end. 

In the meantime. In tackling one of the four challenges, the copyright office has adopted the approach that individuals who use AI technology in creating a work may claim copyright protection for their own contributions to that work, as long as the (human) author demonstrates significant creative control over the AI-generated components of the work.


More companies rushing to block OpenAI’s web crawler GPTBot

Dozens of large companies, including Amazon, The New York Times, CNN, and several European online news portals, have rushed to block GPTBot, OpenAI's new web crawler that scrapes data to feed its popular chatbot, ChatGPT.

The latest update by Originality.ai, a company that checks content to see if it’s AI-generated or plagiarised, reveals that 12% of the top 1,000 websites (as ranked by Google) are now blocking OpenAI’s crawler. 

Why is it relevant? First, authors are taking matters into their own hands with an ex-ante solution (if you can call it that). Second, it makes one wonder whether crawlers could be modified to filter out specific content, such as copyrighted material.
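For readers wondering what ‘blocking GPTBot’ looks like in practice: it's typically a two-line addition to a site's robots.txt file, which OpenAI says GPTBot respects. Below is a minimal sketch (the URLs are hypothetical) using Python's standard library to show how a compliant crawler would interpret those lines.

```python
# Minimal sketch: how a compliant crawler interprets a robots.txt that blocks
# GPTBot. The two directives are the ones OpenAI documents for opting out;
# the URLs below are hypothetical.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

# A crawler that honours robots.txt checks permission before fetching a page.
print(parser.can_fetch("GPTBot", "https://example.com/article"))    # False
print(parser.can_fetch("OtherBot", "https://example.com/article"))  # True
```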


UK MPs urge government to introduce new AI rules

UK members of parliament are urging the government to pass new AI rules or risk being left behind. In its latest report, the Science, Innovation and Technology Committee calls for a new law to be introduced by November (in the upcoming King’s Speech). 

In July, British Prime Minister Rishi Sunak told Parliament that AI guardrails could be developed around existing laws, at least initially. But to tackle the AI challenges outlined in its report, the committee thinks new rules are required to avoid a repeat of what happened with data protection regulation, where the UK was left playing catch-up.

Why is it relevant? The British MPs are feeling the pressure from the Brussels effect, where EU rules become a de facto standard, saying: ‘We see a danger that if the UK does not bring in any new statutory regulation for three years it risks the Government’s good intentions being left behind by other legislation.’ But the UK government may be trying to balance regulation with AI-friendlier measures (which is, by the way, what companies have just asked Australia to do). 


Polish data protection authority investigating ChatGPT privacy breaches

OpenAI, the company behind ChatGPT, is being investigated for GDPR breaches in Poland. The complaint was made by security and privacy researcher Lukasz Olejnik, who is arguing that OpenAI processed his data ‘unlawfully, unfairly, and in a non-transparent manner’.

Why is this relevant? This investigation adds to the many cases that OpenAI is currently dealing with on issues related to data protection and privacy breaches.


// CYBERCRIME //

Qakbot malware halted by mega operation; USD9 million in crypto seized

The Qakbot malware, which infected over 700,000 devices, has been disrupted by a mega operation led by the USA, and involving France, Germany, the Netherlands, the UK, Romania, and Latvia. Infamous cybercrime gangs are known to have used the malware, also known as Qbot. 

The FBI said the criminals extorted over USD58 million in ransom payments between October 2021 and April 2023 alone, from victims that included financial services and healthcare entities.

Malware removed. The best part, at least for infected computers, is that the FBI managed to remove the malware by redirecting Qakbot botnet traffic to servers controlled by law enforcement. From there, infected computers were instructed to download an uninstaller that would remove the malware and prevent the installation of any additional malware. 

In perspective. Security company Check Point reported in its mid-year report that Qakbot was the most prevalent malware globally.

Why is it relevant? According to the FBI, this was one of the largest-ever USA-led enforcement actions against a botnet. But just to be clear: No arrests were made, so an improved version of the malware could return in one form or another.


Meta disrupts largest known influence operation (and it’s linked to China)

Meta took down thousands of accounts and pages linked to Spamouflage, which it described as the largest known covert influence operation in the world. The company said it also managed to link the campaign to individuals associated with Chinese law enforcement. Active since 2018, Spamouflage has been used to disseminate positive content about China while criticising the USA and disparaging Western foreign policies.

China’s foreign ministry said it was not aware of the findings, and added that individuals and institutions have often launched campaigns against China on social media platforms.

Why is it relevant? The sheer number of accounts and pages involved sets this operation apart. In spite of its enormous size, though, the campaign's impact was quite limited, partly because it used accounts formerly associated with unrelated purposes, resulting in irrelevant and incoherent messages – a classic tell-tale sign of inauthentic content.




// ANTITRUST //

Microsoft unbundles Teams from Office to appease EU's antitrust concerns

Just a month after the European Commission opened an antitrust investigation into Microsoft's bundling tactics and the limits it placed on the interoperability of competing offerings, Microsoft announced it would unbundle its communication and collaboration platform Teams from its Office 365 software suite, starting in October. These changes apply to Microsoft's users in the EU and Switzerland; there's no change for customers elsewhere.

Why is this relevant? The European Commission has so far declined to comment. If we had to bet on its reaction, though, we'd say that while it's pleased that Microsoft is cooperating with the investigation, the unbundling doesn't resolve the potentially anticompetitive behaviour that occurred during the height of COVID-19. 

A likely outcome? The commission will not want to give companies the idea that it's fine to engage in anticompetitive behaviour (assuming this is confirmed), benefit from it (Teams is now firmly established software), and emerge unscathed (without any fine). Avoiding a fine is quite unlikely.


// ONLINE NEWS //

Canada wants Google and Meta to contribute at least CAD230 million to local media

The Canadian government wants tech giants Google and Meta to contribute a minimum of CAD230 million (EUR157 million) to support local media, according to draft regulations that will implement the recently enacted Online News Act. The draft rules are open for public consultation until 3 October, meaning they can still change.

The formula for contributions takes into account the global revenues of tech companies earning more than CAD1 billion annually (which currently includes Google and Meta) and Canada's share of global GDP. Companies that fail to meet this contribution threshold through voluntary agreements would be required to negotiate fair compensation with local media under the new law.
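As a rough illustration of how such a formula could work, here is a minimal sketch. The revenue figure, GDP share, and contribution rate below are placeholder assumptions, not the draft regulations' actual parameters.

```python
# Illustrative sketch of the contribution formula described above. All numbers
# are placeholder assumptions, not the draft regulations' actual parameters.
def estimated_contribution(global_revenue_cad: float,
                           canada_gdp_share: float,
                           contribution_rate: float) -> float:
    # Scale global revenue down to a Canadian share, then apply a rate.
    return global_revenue_cad * canada_gdp_share * contribution_rate

# e.g. CAD 300bn global revenue, Canada at ~2% of world GDP, a 4% rate:
print(f"CAD {estimated_contribution(300e9, 0.02, 0.04):,.0f}")  # CAD 240,000,000
```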

Why is it relevant? As you'd expect, Meta was far from thrilled with these developments (Google was still evaluating the rules) and said it would continue to block news content (a reaction it started in response to the Online News Act). For its part, the government boycotted Meta by pulling $10 million in advertising from its platforms, prompting other Canadian news and telecom companies to do the same. Surprisingly, however, if Canadian users' time spent on Facebook is anything to go by, users have been quite indifferent to Meta's strong-arm tactics since the ban. This doesn't give Meta much reason to change its mind, does it?


The week ahead (4–11 September)

1–4 September: The self-organised privacy and digital rights conference Freedom Not Fear in Brussels ends today.

5 September: The Tallinn Digital Summit will tackle democracy and technology, and how to chart a course for a more resilient, responsive and open future.

PLUS: What’s ahead on the AI front

9–10 September: In an apparent change of direction in India’s approach to AI governance, we can now expect AI to be on the agenda at this coming weekend’s G20 Summit.

13 September: US Senator Chuck Schumer will kickstart his series of AI-forum meetings in Washington with prominent figures from the technology industry and lawmakers. Their focus? Delving into the implications of AI and its future regulation. The meetings are closed-door sessions, alas.

8–12 October: AI discussions are likely to be a primary focus during this year’s Internet Governance Forum in Japan.

1–2 November: The UK’s AI Safety Summit, scheduled to take place in Bletchley Park, Milton Keynes, is expected to build consensus on international measures to tackle AI risks.

Expect also…

  • Freshly minted guidelines for AI companies, developed by Japan (this year’s G7 chair), which are expected to be discussed by the G7 later this year.
  • The launch of the UN Secretary-General’s High-Level Advisory Group on AI, which is expected to start its work by the end of the year.

#ReadingCorner


Microsoft’s Brad Smith calls for human control over AI

Microsoft’s president and vice-chairman, Brad Smith, has emphasised the need for humans to retain control of AI technology, in an interview with CNBC. Concerned about the potential weaponisation of AI, he’s urging new rules that ensure human control, especially in critical infrastructure and military applications. As for AI’s impact on jobs, he thinks AI’s augmenting human abilities, not displacing people. Read or watch.



A deep dive into the undersea cables ecosystem 

Undersea cables often fall into the out-of-sight, out-of-mind category, but they play a critical role in carrying over 97% of internet traffic. Have you ever wondered how subsea cables are regulated or what causes most cable cuts? (If not, you should). The latest report from the EU Agency for Cybersecurity (ENISA) delves into the subsea cable ecosystem and highlights the primary security challenges it faces. Worth a read.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 82 – September 2023

Cover page of the DigWatch newsletter for September 2023, issue 82, with the heading ‘AI and copyright: USA, UK eyeing new rules’ and a caricature of a human hand and an AI hand working on the same cartoon of Zarya of the Dawn

Snapshot: What’s making waves in digital policy?

Geopolitics

The US government announced plans (see: executive order) to prohibit or restrict US investments in China across three industry sectors – semiconductors, quantum technologies, and (specific) AI systems – while Chinese regulators failed to approve Intel’s plans to acquire Israeli chipmaker Tower Semiconductor. New York City has implemented a TikTok ban on government-owned devices. 

AI governance

Four companies developing AI – Anthropic, Google, Microsoft, and OpenAI – launched a new industry body to focus on the safe and responsible development of frontier AI models. Meanwhile, dozens of large companies rushed to block GPTBot, OpenAI’s new web crawler that scrapes data to feed to its ChatGPT. 

UK members of parliament are urging the government to introduce new AI rules by the end of the year or risk being left behind. BRICS countries – Brazil, Russia, India, China, and South Africa – established an AI study group to research AI governance frameworks and standards, and help make AI technologies ‘more secure, reliable, controllable, and equitable’. Canada’s draft code of practice for regulating generative AI is available for public input.

Data protection authorities expressed concern about tech companies' data (or web) scraping practices and the implications for personal data. Just because information is publicly available on the internet does not mean that privacy protections no longer apply, the statement said.

Security

The sixth round of UN negotiations on a new cybercrime treaty concluded in New York without making significant headway.

The Qakbot malware, which infected over 700,000 devices, was disrupted by a law enforcement operation involving the USA, France, Germany, the Netherlands, the UK, Romania, and Latvia. Meta took down thousands of accounts and pages linked to Spamouflage, which it described as the world's largest known covert influence operation. The NCC Group security company reported a record high number of ransomware attacks in July, which it attributed to the exploitation of a vulnerability in MOVEit, a file transfer software, by a hacker group known as CLOP or Cl0p.

In the UK, videos shared on TikTok and Snapchat encouraging people to steal from shops led to heavy commotion and several arrests in Oxford Street, London. 

Security and standards authorities in the US are urging organisations, especially those supporting critical infrastructures, to start thinking about migrating to post-quantum cryptographic standards in anticipation of quantum-powered cyberattacks.

Infrastructure

It will still take time for Africa’s internet connection to be fully restored after an underwater landslide in Congo Canyon damaged two major submarine cables that run along the western coast of Africa. 

Internet economy

On 25 August, stricter rules for very large online platforms and search engines came into effect as part of the EU’s new Digital Services Act. The European Commission launched formal proceedings against Microsoft for bundling the communication software Teams with its Office 365. A few weeks later, Microsoft announced it would unbundle its software for European and Swiss customers as of October. The French competition authority is investigating Apple for potential self-preferencing treatment. Advertisers say the company imposed its App Tracking Transparency (ATT) policy upon them, but exempted itself from the same rules.

Microsoft agreed to transfer the licensing rights for the cloud streaming of Activision Blizzard games to Ubisoft, in order to win approval from the UK to acquire Activision. The European Commission will need to reevaluate its earlier approval. 

Digital rights

Sam Altman’s relaunched Worldcoin project, featuring a cryptocurrency and an identity network, captured the attention of privacy regulators over possible irregularities linked to its biometric data collection methods. Zoom’s revised Terms of Service sparked controversy due to the company’s intention to use customer data for machine learning and AI. It later clarified its position.

The Norwegian Data Protection Authority imposed daily fines of 1 million kroner (USD98,500) on Meta over non-compliance with a ban on behaviour-based marketing carried out by Facebook and Instagram. OpenAI is being investigated in Poland: A researcher claimed the company processed his data ‘unlawfully, unfairly, and in a non-transparent manner’.

Content policy

China’s Cyberspace Administration released draft guidelines for the introduction of screen time software to curb the problem of smartphone addiction among minors. 

Canada criticised Meta for banning domestic news from its platforms as wildfires ravaged parts of the country. It wants Google and Meta to contribute a minimum of CAD230 million (EUR157 million) to support local media.

Development

Digital identity projects picked up around the world. Australia is planning new rules for its federally-backed digital ID by next year. The US government wants to work with the private sector to develop mobile phone standards for digital identification – similar to what the Philippines plans to do. Nigeria is getting help from the World Bank to implement nationwide digital IDs.

THE TALK OF THE TOWN – GENEVA

This year’s World Telecommunication/ICT Indicators Symposium (WTIS) (3–4 July) tackled ways of measuring data to advance universal internet connectivity and discussed the outcomes of two expert groups that reaffirmed the importance of internationally comparable data in monitoring ICT-related developments. ITU, in collaboration with the EU, launched the Dashboard for Universal and Meaningful Connectivity for tracking country progress and performance.

At the AI for Good Global Summit (6–7 July), over 280 projects showcased the capabilities of AI in advancing the SDGs and addressing the pressing needs of the world, amid discussions on AI policy and regulations, and future AI developments. 


The annual ITU Council gathered its 48 member countries to discuss ITU’s strategic plans. At this year’s council (11–21 July) Secretary-General Doreen Bogdan-Martin highlighted two primary goals for ITU: universal connectivity and sustainable digital transformation. The council noted that digital issues have become more prominent on global agendas, such as the upcoming 2023 SDG Summit and the 2024 Summit of the Future.


AI and copyright: USA, UK eyeing new measures

If you’re using someone else’s work, you need permission. This sums up how the world has primarily approached the rights of authors – until now.

The arrival of generative AI models, like ChatGPT, has wreaked havoc on copyright rules. For starters, the models powering generative AI are trained on whatever data they can lay their hands on, regardless of whether it is copyrighted. Disgruntled authors and artists want this practice to stop. For them, the notion of fair use doesn't cut it, especially if companies are making money off the system. 

A humanoid robot sitting at the desk and sketching with a pen in hand.

But there’s another issue: Users co-authoring new content with the help of AI are seeking copyright protection for their works. Since copyright attaches to human authorship, IP regulators are in a quandary: Which parts should be copyrighted, and where should the line be drawn?

Faced with these issues, IP offices in the UK and the USA have initiated consultation processes to help inform their next steps. Both have acknowledged that new rules might be needed.

What the UK is doing. In June, the UK's IP office formed a working group to develop a voluntary code of practice. Microsoft, DeepMind, and Stability AI are among the working group members, together with representatives from art and research groups.

The government aims for the group to develop a code of practice to ‘… support AI firms to access copyrighted work as an input to their models, whilst ensuring there are protections (e.g. labelling) on generated output to support right holders of copyrighted work’. The government made it quite clear that if agreement is not reached or the code not adopted, it may legislate.

Copyright issues are also among the top challenges that the UK government is dealing with in its quest to tackle AI governance. 

What the USA is doing. The US Copyright Office has issued a call for public comments to inform it on possible regulatory measures or new rules that are needed to regulate these practices. This is typically the last step before new measures or rules are proposed, so we might be looking at proposals for new legislation before the year’s end. 

The issues that the copyright office is looking at are quite defined. First, it wants to understand how AI models are using, and should use, copyrighted data in their training processes. Second, it wants to hear proposals on how AI-generated material could be copyrighted. Third, it wants to determine how copyright liability would work in the context of AI-generated content. Fourth, it is seeking comments on the potential violation of publicity rights, that is, the rights of individuals to control the commercial use of their likeness or personal information.

Elephants in the room. And yet, there’s little to suggest that these consultations will tackle – let alone solve – how to reverse the damage that’s already been done. Copyrighted content is now part of the enormous hoard of data on which the models were trained, and part of the content that’s being generated by AI chatbots. In addition, if human intervention is required to trigger copyright protection (as the US’s latest guidance states, for instance), where does this leave AI outputs that incorporate copyrighted content to a problematic extent? 

Interim solutions. In the meantime, companies behind large language models (the models powering generative AI tools) might need to do more to ensure copyrighted content isn't being used. One solution could be to implement automated mechanisms that detect copyrighted work in material slated for training or generation, and drop that part of the data before the training process begins (a sketch follows below). Websites, for their part, can limit or disable web crawlers – for instance, through robots.txt directives – to prevent them from scraping copyrighted content. 
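A crude sketch of what such an automated pre-training filter might look like, assuming a hypothetical blocklist of fingerprints supplied by rights holders; exact hashing stands in here for the fuzzy or perceptual matching a production system would need.

```python
# Crude sketch of a pre-training copyright filter. Assumes a hypothetical
# blocklist of fingerprints supplied by rights holders; exact hashing stands
# in for the fuzzy matching a real system would need.
import hashlib

def fingerprint(text: str) -> str:
    # Normalise whitespace and case, then hash.
    return hashlib.sha256(" ".join(text.split()).lower().encode()).hexdigest()

BLOCKLIST = {fingerprint("An excerpt registered by a rights holder.")}

def filter_training_corpus(documents: list[str]) -> list[str]:
    # Drop matching documents before the training process begins.
    return [doc for doc in documents if fingerprint(doc) not in BLOCKLIST]

corpus = ["An excerpt registered by a rights holder.", "Public-domain text."]
print(filter_training_corpus(corpus))  # ['Public-domain text.']
```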

Another solution – probably more attractive for companies – is to find new ways, such as licensing, to monetise the process in a way that both authors and the AI sector can benefit. Now that would be a win-win.

Caricature of a human hand and an AI hand working on the same cartoon of Zarya of the Dawn

Who is Zarya of the Dawn, the character gracing our front cover?

Zarya is the protagonist of a short comic book written by Kris Kashtanova and illustrated by Midjourney, an AI-based image generator. In September 2022, Kashtanova sought copyright protection for the comic from the US Copyright Office without disclosing that Midjourney was involved in creating the illustrations. The copyright was initially granted, but later the copyright office revoked the artwork’s protection. The copyright office explained that only human-authored works can be protected. In this case, the book’s layout, text, and storyline were eligible for protection, but the images themselves weren’t.

This case sets an important precedent for how copyright law applies to works generated by AI. The copyright office’s decision confirms that humans must be in control of the output, even when a computer is involved in the creative process. By comparison, ‘rather than a tool that Ms Kashtanova controlled and guided to reach her desired image, Midjourney generates images in an unpredictable way. Accordingly, Midjourney users are not the “authors” for copyright purposes of the images the technology generates’.


The Brussels effect: DSA and Trans-Atlantic Data Privacy Framework kick in

When a city becomes synonymous with its rule-making prowess, its lawmakers know they must be doing something right. Such is the worldwide fame of Brussels, home to the EU’s institutions.

In the past few weeks, two new sets of rules kicked in, which, together with the GDPR, are setting new standards for upholding users' rights and market regulations. Both are likely to shape practices and measures in other countries, testament to the influence of the Brussels effect (a concept coined by Columbia Law professor Anu Bradford).

The first. The Digital Services Act (DSA) has just begun imposing strict measures on 19 very large online platforms and search engines. These range from the obligation to label all adverts and inform users who's behind the ads, to allowing users to turn off personalised content recommendations. As with the GDPR, the DSA's impact extends beyond the boundaries of the EU. Any company serving European users, no matter where it's based, is subject to the new rules. Interestingly, of those 19 giant companies, only two are based in Europe – Booking.com, headquartered in the Netherlands, and Zalando, headquartered in Germany. Fifteen are based in the US states of California and Washington, and the remaining two (Alibaba and TikTok) are Chinese companies.

The second. The newly adopted EU-US Trans-Atlantic Data Privacy Framework (TADPF) ensures that European citizens' personal data crossing the Atlantic is afforded the same level of protection in the USA as within the EU. Even the EU's Court of Justice contributes to the Brussels effect: Before the TADPF, the court invalidated two earlier transatlantic frameworks – Safe Harbour and the Privacy Shield – each time sending policymakers back to the drawing board to see how they could bring US law in line with EU standards.

Respected. The GDPR, the first law to earn the Belgian capital its regulatory renown, has been emulated in other countries (the so-called de jure Brussels effect). China's Personal Information Protection Law (PIPL), for instance, was heavily influenced by the GDPR, featuring provisions on data collection, storage, and use that mirror those in the EU legislation. 

FOMO. But the Brussels effect is also feared by others. In the race to regulate emerging technologies, such as AI, countries vie to get there first, lest they be left behind by other legislation. UK members of parliament have been particularly concerned, and have urged the government to speed things up: ‘We see a danger that if the UK does not bring in any new statutory regulation for three years, it risks the government’s good intentions being left behind by other legislation – like the EU AI Act – that could become the de facto standard and be hard to displace.’

The EU has had to pay a price for its influential rulemaking, as industries are often very critical of the EU’s comparably stringent regulations, accusing it of lacking technological prowess and a competitive edge. But it’s a strategically calculated risk on the part of the EU: Brussels knows all too well that its regulatory power can’t be easily restrained or displaced.


Driverless: The future of robotaxis

The driverless car revolution is taking hold in San Francisco. Hundreds of autonomous cars, owned mainly by Alphabet's Waymo, General Motors' Cruise, Uber, and Lyft, can now routinely be seen on the city's streets. 

The surge in driverless vehicles comes after the California Public Utilities Commission, a state agency, voted on 11 August to allow Waymo and Cruise to take paying passengers day or night throughout San Francisco.

A small white car with an orange stripe carrying the word Cruise is parked on a street. It has an orange and white traffic cone on its hood.
Unorthodox methods: A group of protestors in San Francisco has been stopping driverless taxis and placing traffic cones on their hoods to trigger safety alarms; the cars remain stuck until a technician resets them. Credit: Safe Street Rebel

Strong opposition. In the lead-up to the vote, the California Public Utilities Commission faced vigorous opposition from residents and city agencies. Transportation and safety agencies, such as the police and fire departments, and California residents opposed expanding paid robotaxi services over safety concerns. Protestors took to the streets not only to highlight safety challenges, but also over concerns that the cars were straining the resources public transportation needs to work well, such as when they block a busy thoroughfare or cause congestion with unpredictable manoeuvres.

Car accidents. A few days later, reports of multiple crashes in the city forced the California Department of Motor Vehicles (DMV) to order General Motors to reduce the number of active Cruise vehicles. Residents and city agencies were proven right to worry about safety, but the DMV’s decision hasn’t been enough to assuage concerns. The protests continue.

Teething problems? Every emerging technology experiences teething problems. This turns critical when those problems threaten human life. Luckily, the passengers directly involved in these accidents suffered only non-life-threatening injuries (the company has disputed the claim that two Cruise vehicles inadvertently blocking an ambulance contributed to a fatal delay in getting an injured pedestrian to a hospital). 

However, this raises a sobering question: What if autonomous vehicle accidents are more serious? The spectre of fatal accidents, reminiscent of Tesla’s 2016 incident, looms as a haunting reminder of the challenges and responsibilities associated with the development of self-driving technology – and the fact that there’s no guarantee that fatal car crashes won’t happen again. Most probably, they will. 

No viral effect. Until robotaxis earn people’s trust, their take-up will be relatively slow. It’s not a matter of buying an appliance after watching rave reviews or signing up on a new social media platform because half the world’s already on it.

Safety concerns form a strong barrier that could hold people back and make potential riders think twice (or even three times) before putting their lives in the hands of a driverless car. The question is: What will it take for robotaxis to earn – or definitively lose – the public’s trust?


PayPal goes where it was feared Libra would tread

It’s been four years since Facebook (now Meta) announced the launch of its digital currency, Libra. At the time, the company was mired in data privacy scandals, sealing the project’s fate before it had any time to fledge.

Fast forward. PayPal has just announced a new project: PayPal dollar, a stablecoin (the digital equivalent of fiat currencies like the US dollar, euro, and others), which is very similar to what Facebook had in mind with Libra. 


How it works. PayPal's plans for its new stablecoin date back to 2020. Created by Paxos, a private tech company specialising in stablecoins, PayPal USD (PYUSD) was launched a few weeks ago on the Ethereum blockchain. A stablecoin's value is directly linked to an underlying fiat currency, usually the US dollar; in this case, each PayPal dollar is backed 1:1 by US dollars held in reserve accounts managed by Paxos and other custodians. 
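To make the 1:1 model concrete, here's a toy sketch of a reserve-backed ledger. It is illustrative only: it ignores custodians, attestations, and the actual on-chain contract.

```python
# Toy sketch of a 1:1 reserve-backed stablecoin ledger. Illustrative only:
# it ignores custodians, attestations, and the real on-chain contract.
class ReserveBackedCoin:
    def __init__(self):
        self.reserve_usd = 0.0   # fiat held in reserve accounts
        self.supply = 0.0        # coins in circulation

    def mint(self, usd: float) -> None:
        # Every coin issued is matched by a dollar added to reserves.
        self.reserve_usd += usd
        self.supply += usd

    def redeem(self, coins: float) -> None:
        # Redemptions burn coins and release the matching dollars.
        assert coins <= self.supply
        self.supply -= coins
        self.reserve_usd -= coins

coin = ReserveBackedCoin()
coin.mint(100.0)
coin.redeem(40.0)
assert coin.reserve_usd == coin.supply == 60.0  # the 1:1 invariant holds
```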

Amid tighter scrutiny. Despite its standing, PayPal is operating in a market where there’s tighter regulatory scrutiny. In November, FTX, then one of the world’s biggest crypto exchanges, went bankrupt. In a related development, Paxos was ordered to stop issuing BUSD, the stablecoin developed by Binance, the world’s largest cryptocurrency exchange. In many ways, PYUSD works very similarly to how BUSD worked (instant payments, low fees). 

Stronger outlook. A few fundamental differences set PayPal and its stablecoin apart. First, PayPal has a better standing in the financial sector than Facebook and Binance could ever have hoped for. Second, policymakers today are more aware of how stablecoins work and what their benefits (and challenges) are. For instance, the fact that stablecoins are not as volatile as cryptocurrencies makes them a much safer option. PayPal is up to date with know-your-customer requirements, and its open-source code allows anyone to inspect it. The odds are in PayPal's favour.

And yet, PayPal mustn't take its watershed moment for granted. It can either add to regulators' growing mistrust of cryptocurrencies, or show that stablecoins – the most popular form of cryptocurrency – are the future of digital payments.


DW Weekly #125 – 28 August 2023


Dear readers,

The EU’s Digital Services Act stole the show last week, with sweeping new rules coming into effect on 25 August for very large online platforms. But for now, that date may not mean much: It’s the DSA’s enforcement that will make the biggest difference. In other news, ransomware has reared its ugly head, while damaged cables have slowed down internet access along Africa’s western coast. Microsoft’s Activision deal is anything but sealed.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

EU DSA’s stricter rules for tech giants come into effect

Much as 25 May 2018 marked the birth of the EU’s General Data Protection Regulation (GDPR), 25 August 2023 will be etched as the day on which very large online platforms and search engines began implementing stricter measures under the EU’s new Digital Services Act (DSA).

The DSA and GDPR have a lot in common. Both prioritise the protection of European users’ rights; both extend their impact beyond the boundaries of the EU; and most significantly, they both (re-) affirm the EU’s role as the leading global authority in setting regulatory standards. So, even if European citizens are the primary beneficiaries, the DSA’s approach to regulating digital services (and how the EU will enforce those rules) will undoubtedly influence how other countries address similar issues. 

Which users will benefit most from the new rules? 

European users. But remember how the GDPR influenced non-EU jurisdictions to adopt similar rules? Companies that operate globally may also decide to adjust their practices for their non-EU user base while making these changes, as applying different rules to different markets is time-consuming, costly, and complex.

Which companies are affected?

For now, it’s the 19 very large platforms and search engines, each of which has at least 45 million monthly active users: AliExpress, Amazon Store, Apple AppStore, Bing, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Google Search, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando. As of February 2024, the DSA will impose some of these obligations on smaller companies. 

What do very large platforms and search engines need to do?  

  1. Make it easier for users to report illegal content.
  2. Remove illegal content quickly.
  3. Label all ads and inform users about who is promoting them. While they’re at it, they also need to publish repositories of all the ads shown on their platforms.
  4. Clarify terms and conditions by providing an easily understandable, plain-language summary.
  5. Allow users to turn off personalised content recommendations.
  6. Ban targeted adverts to children and ads based on a user’s sensitive data.
  7. Analyse the specific risks in their platforms and practices, and establish mitigation measures.
  8. Publish transparency reports on how content moderation is implemented.

Have companies started implementing these changes?

In all fairness, some of these obligations (such as transparency reports by Google, Facebook, Snapchat, and others) have existed for years. Other changes have been implemented during the past weeks, including ad libraries published by TikTok and Booking.com; simplified terms and conditions posted by AliExpress; Facebook's ad limitations for teenagers; and more straightforward reporting tools from Google. But there are changes we haven't seen yet – where is Booking.com's simplified version of its terms? – and others that must be carried out in due time (such as risk assessments by the end of the year).

Will the EU monitor compliance?

Definitely. The European Commission will actually be in charge itself, which is perhaps the biggest difference between the DSA and the GDPR. To do so, the commission and the entities helping it will need more staff, reports suggest. (In comparison, Facebook had a 1,000-strong team working on the DSA). Digital Services Coordinators – national regulators tasked with overseeing the DSA’s implementation – must also be appointed by February. 

The DSA has yet to face its greatest challenge. Enforcing the rules remains an uncharted territory. But for now, it’s essentially a waiting game.

Digital policy roundup (21–28 August)
// AI GOVERNANCE //

BRICS announces new body to develop AI governance frameworks

The BRICS countries (Brazil, Russia, India, China, and South Africa) have joined the list of groups establishing specialised entities to cover AI governance issues. 

Addressing the annual summit, China’s President Xi Jinping referred to a new BRICS AI study group, as part of the BRICS Institute of Future Networks, that would develop governance frameworks and standards, and help make AI technologies ‘more secure, reliable, controllable, and equitable’.

Why is it relevant? Although there’s a placeholder for this new working group on the institute’s website, the institute doesn’t divulge any details, nor does the BRICS’ final communique refer to this development.


// CYBERSECURITY //

Ransomware on the rise; MOVEit vulnerability partly to blame

The NCC Group security company reported a record number of ransomware attacks in July. The company said that over 500 cyberattacks were recorded, most targeting large companies. The increase has been attributed to the exploitation of a vulnerability in MOVEit, a file transfer software, by a hacker group known as CLOP or Cl0p.

Why is it relevant? If you thought cybercrime takes a break in summer, think again. The list of victims affected by CLOP since June seems endless (over 1,000 entities and millions of users) and includes airlines, universities, and health centres.

Plan ahead to counter quantum-powered cyberattacks, US security institutes urge

The US Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the National Institute of Standards and Technology (NIST) are urging organisations, especially those supporting critical infrastructure, to plan early for quantum-powered cyberattacks (a matter of when, not if). 

The agencies are advising organisations to start thinking about migrating to post-quantum cryptographic standards, and have released guidelines on how to prepare a customised roadmap.

Why is it relevant? To explain the upcoming risk, we’ll cite an excerpt from our ongoing infrastructure policy course: ‘Breaking one of the most secure codes of today… by trying all the possible options with a conventional computer would take around 300 trillion years. A powerful quantum computer would take only 8 hours for this task. In essence, all of the data we have ever encrypted could suddenly become exposed, and most of the current encryption algorithms rendered obsolete.’
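The exact numbers depend heavily on assumptions, but the underlying arithmetic is simple. Here is a back-of-the-envelope sketch, assuming a 128-bit key, a classical machine testing a billion keys per second, and Grover's quadratic speed-up; the excerpt's own figures assume different (unstated) parameters.

```python
# Back-of-the-envelope arithmetic behind the quantum threat. Assumptions:
# a 128-bit key, a classical machine testing 1e9 keys/second, and Grover's
# algorithm needing roughly sqrt(2**128) = 2**64 quantum operations. The
# excerpt's own figures assume different (unstated) parameters.
SECONDS_PER_YEAR = 3600 * 24 * 365

classical_years = 2**128 / 1e9 / SECONDS_PER_YEAR
grover_ops = 2**64  # quadratic speed-up over checking all 2**128 keys

print(f"Classical brute force: ~{classical_years:.1e} years")  # ~1.1e22 years
print(f"Grover's algorithm: ~{grover_ops:.1e} operations")     # ~1.8e19 ops
```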


// DATA PROTECTION //

Data scraping concerns raised by data protection authorities

Data protection authorities from around the world have issued a joint statement expressing their concerns about the practice of data (or web) scraping by tech companies due to the potential of data scraping technologies to harvest personal data. Just because information is publicly available on the internet does not mean that privacy protections no longer apply, the statement said.

The statement, issued by the privacy protection authorities of New Zealand, Canada, Australia, the United Kingdom, Hong Kong, Switzerland, Norway, Colombia, Morocco, Argentina, Mexico, and Jersey, was sent to several tech companies.

Why is it relevant? The statement highlights one of the most widely used techniques for harvesting internet content to train large language models. Although many platforms prohibit web scraping (not to mention the data protection laws that also impose restrictions), the practice is nonetheless prevalent.


// ANTITRUST //

Back to the drawing board? The EU might reassess the Microsoft-Activision acquisition.

Microsoft has agreed to transfer the licensing rights for cloud streaming of Activision Blizzard games to Ubisoft, in order to win approval from the UK to acquire Activision. All will be well and good if the UK’s Competition and Markets Authority agrees.

But Microsoft's new proposal has also prompted the European Commission to reconsider whether it should reevaluate the deal once more, according to a media report.

Why is it relevant? The commission approved the deal in May; Microsoft’s new strategy could upset the approval that the commission had granted, placing the planned merger on an uncertain track once again.


// SUBSEA CABLES //

Western Africa’s choppy internet access after cable damage

It could take weeks for Africa's internet connection to be fully restored, after an underwater landslide in Congo Canyon damaged two major submarine cables. The damaged cables are SAT-3 and WACS; their loss cut international internet bandwidth along the western coast of Africa. 

At the time of writing, the cable-laying ship Léon Thévenin was still on its way to the suspected break points off the Congo coast after setting out from Cape Town in South Africa last week. The cables were damaged earlier in August.

Why is it relevant? We take undersea cables largely for granted. Not only do they carry over 90% of the world’s internet traffic, but there can be serious implications (economic impact, disrupted communications, etc.) when they get damaged.

The week ahead (28 August–4 September)

21 August–1 September: The UN Ad Hoc Committee working on a new cybercrime convention is meeting in New York for its 6th session.

1–4 September: The self-organised privacy and digital rights conference Freedom Not Fear returns to Brussels this weekend.

#ReadingCorner

Job losses or better prospects?

AI is more likely to enhance jobs by automating some tasks rather than replacing them entirely, according to a new study by the Geneva-based International Labour Organization (ILO). The extent of automation hinges on a country’s level of development: The higher a country’s income, the higher the likelihood of automation. Full text.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior Editor, Digital Policy, DiploFoundation


DW Weekly #124 – 21 August 2023


Dear readers,

The already fragile relationship between the USA and China is becoming further complicated by new restrictions and measures affecting the semiconductor industry. On the AI regulation front, nothing much has happened, but we can’t say the same for data protection and privacy issues.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

USA to restrict investment in China in key tech sectors

The US government announced plans to prohibit or restrict US investments in China in areas deemed critical for a country's military, intelligence, and surveillance capabilities across three industry sectors – semiconductors, quantum technologies, and (certain) AI systems. The decision stems from an executive order signed by US President Joe Biden on 9 August 2023, which grants authorisation to the US Treasury Secretary to impose restrictions on US investments in designated ‘countries of concern’ – with an initial list that includes China, Hong Kong, and Macau.

The executive order serves a dual purpose: to preempt potential national security risks and to regulate investments in sectors that could empower China with military and intelligence advantages. While the US already enforces export restrictions on various technologies bound for China, the new executive order extends its scope to restrict investment flows that could support China’s domestic capabilities.

Semiconductors. The intent to impose restrictions on semiconductors – now a critical strategic asset due to their integration into so many industries – is particularly significant. It comes at a time when the semiconductor landscape is increasingly intertwined with geopolitical considerations of market dominance, self-sufficiency, and national security. A move on one geopolitical side usually triggers repercussions on the other, as history has confirmed time and again.

Delayed countermeasures? So far, this hasn't been the case. China's reaction has been a mix of caution and concern, with no actual countermeasures announced yet. One wonders whether this is a sign that Beijing will react more cautiously than usual. Although Chinese authorities have expressed disappointment, Beijing has so far only said that China is undergoing a comprehensive assessment of the US executive order's impact and will respond accordingly. 

Too early. There are several reasons that could explain this reaction. The restrictions won’t come into effect before next year (and even then, they won’t apply retroactively). It might therefore be too early to gauge the implications of what the order and the US Treasury’s regulations will mean for China. 

Antitrust arsenal. China may also opt to hit back through other means, as it has been doing with merger approvals involving US companies. China’s failure to approve Intel’s acquisition of Israel’s Tower Semiconductor is a tough blow. (More coverage below).

Reactions from US allies. Beijing may also be waiting for more concrete reactions from other countries. Both the EU and the UK have signalled their intent to adopt similar strategies. The European Commission said it was analysing the executive order closely and will continue its cooperation with the USA on this issue, while UK Prime Minister Rishi Sunak is consulting UK businesses on the matter.

It seems that neither the EU nor the UK is expected to immediately follow the USA. For China, the USA is a confrontational open book; the EU is diplomatically less so.


Digital policy roundup (7–21 August)
// SEMICONDUCTORS //

China blocks Intel’s acquisition of Tower Semiconductor

Intel has abandoned its plans to acquire Israeli chipmaker Tower Semiconductor after Chinese regulators failed to approve the deal. The acquisition was central to Intel's efforts to build its semiconductor business and better compete with industry giant Taiwan Semiconductor Manufacturing Company (TSMC).

Acquisitions involving multinational companies typically require regulatory approval in several jurisdictions, due to the complex operations and market impact in those countries. China's antitrust regulations require that a deal be reviewed if the two companies seeking a merger have a combined annual revenue of more than USD117 million from China.

Why is it relevant? The failure of the deal shows how China is able to disrupt strategic plans for US companies involved in the semiconductor industry. In Intel’s case, the move will complicate its plans to increase the production of chips for other companies alongside its own products.


// AI GOVERNANCE //

Canada opens consultation on guardrails for generative AI 

The Canadian government has just launched a draft code of practice for regulating generative AI, and is seeking public input. The code consists of six elements:

1. Safety: Generative AI systems must be safe, and ways to identify potential malicious or harmful use must be established.

2. Fairness: The system's output must be fair and equitable. Datasets are to be assessed and curated, and measures to assess and mitigate biased output are to be in place.

3. Transparency: The system must be transparent.

4. Human supervision: Deployment and operations of the system must be supervised by humans, and a mechanism to identify and report adverse impacts must be established.

5. Validity and robustness: The system’s validity and robustness must be ensured by employing testing methods and appropriate cybersecurity measures.

6. Accountability: Multiple lines of defence must be in place, and roles and responsibilities have to be clearly defined to ensure the accountability of the system.

Why is it relevant? First, it’s a voluntary code that aims to provide legal clarity ahead of the implementation of Canada’s AI and Data Act (AIDA), part of Bill C-27, which is still undergoing parliamentary review. Second, it is reminiscent of the European Commission’s approach: developing voluntary AI guardrails ahead of the actual AI law.


// DATA PROTECTION //

Meta seeks to block Norwegian authority’s daily fine for privacy breaches

The Norwegian Data Protection Authority has imposed daily fines of one million kroner (USD98,500) on Meta, starting from 14 August 2023. These penalties are a consequence of Meta’s non-compliance with a ban on behaviour-based marketing carried out by Facebook and Instagram. In response, Meta has sought a temporary injunction from the Oslo District Court to halt the ban. The court will review the case this week (22–23 August).

The Norwegian watchdog believes Meta’s behaviour-based marketing – which involves the excessive monitoring of users for targeted ads – is illegal. The watchdog’s ban does not prohibit the use of Facebook or Instagram in Norway.

Why is it relevant? The GDPR, the EU’s data protection regulation, offers companies six legal bases for gathering and processing people’s data, depending on the context. Meta attempted to rely on two of these bases (the ones that do not require users’ specific consent). But European data protection authorities deemed Meta’s use of these bases for its behaviour-based marketing practices illegal. On 1 August, Meta announced that it would finally switch to asking users for specific consent, but it has not yet done so.


Google fails to block USD5 billion consumer privacy lawsuit

A US District judge has rejected Google’s bid to dismiss a lawsuit claiming it invaded the privacy of millions of people by secretly tracking their internet use. The reason? Users did not consent to letting Google collect information about what they viewed online, because the company never explicitly told them it would. The case will therefore continue.

Why is it relevant? Many people believe that using a browser’s ‘private’ or ‘incognito’ mode ensures their online activities remain untracked. However, according to the plaintiffs, Google continues to track and gather browsing data in real time. 

Probable outcomes: Google’s explanation of how private browsing functions states that data won’t be stored on devices, yet websites might still collect user data. This suggests that the problem might boil down to two aspects: Google’s representation of its privacy settings (the fact that user data is still collected renders the setting neither private nor incognito), and the necessity of seeking user consent regardless.
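To see why ‘incognito’ alone can’t prevent this kind of collection, consider that every request a browser sends carries identifying metadata that the receiving server can log, regardless of what the browser stores locally. A minimal, hypothetical Python sketch (a generic server, not Google’s systems) illustrates the point:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # These details arrive with every request, incognito or not:
        # private browsing only limits what the browser stores on the
        # user's device, not what the server (or its analytics partners)
        # can record on their side.
        print(self.client_address[0], self.path, self.headers.get("User-Agent"))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8000), LoggingHandler).serve_forever()
```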

Case details: Brown et al v Google LLC et al, US District Court, Northern District of California, No. 20-03664




// NEWS MEDIA //

Canadian PM criticises Meta for putting profits before safety

Canadian Prime Minister Justin Trudeau has criticised Meta for banning domestic news from its platforms as wildfires ravage parts of Canada. Up-to-date information during a crisis is crucial, he told a news conference. ‘Facebook is putting corporate profits ahead of people’s safety.’

Meanwhile, Canadian news industry groups have asked the country’s antitrust regulator to investigate Meta’s decision to block news on its platforms in the country, accusing the Facebook parent of abusing its dominant position.

Why is it relevant? The fight is turning into both a safety and an antitrust issue. Plus, we’re not sure Meta is doing itself any favours by telling Canadian users that they can still access timely information from other reputable sources, and by directing them to its Safety Check feature, which allows users to let their Facebook friends know they are safe.


// TIKTOK //

TikTok adapts practices to EU rules, allowing users to opt out of personalised feeds…

TikTok will allow European users to opt out of receiving a personalised, algorithm-based feed. This change is in response to the EU’s Digital Services Act (DSA), which imposes more onerous obligations on very large platforms such as TikTok.

The new law also prohibits companies from targeting children with advertising. The DSA’s deadline for companies to implement these changes is 25 August.

Why is it relevant? With TikTok’s connections to China and the ensuing security concerns, the company has been trying very hard to convince European policymakers of its commitment to data protection and the implementation of robust safety measures. A few weeks ago, for instance, it willingly subjected itself to a stress test (which pleased European Commissioner for the Internal Market Thierry Breton very much). Compliance with the DSA could also help improve the company’s standing in Europe.

…but is banned in New York City

New York City has banned TikTok on government-owned devices due to security and privacy concerns. The ban requires NYC agencies to remove TikTok within 30 days, and bars employees from downloading or using the app on any city-owned devices and networks. The move brings NYC in line with the federal government.

Why is it relevant? TikTok has faced bans around the world, but perhaps the toughest restrictions are in the USA (including draft laws proposing further limits). And yet, generative AI seems to have displaced the legislative momentum for imposing more restrictions on TikTok.


A heavy police presence on Oxford Street, London

TikTok, Snapchat videos encourage looting

There were several arrests and a heavy police presence on Oxford Street, London, on 9 August, after videos encouraging people to steal from shops made the rounds on TikTok and Snapchat. A photo circulating on social media with the time and location of the planned looting said: ‘Last year was lit, we know this years gonna be 10x better’ (the message has since been taken down). Meanwhile, former Chief Superintendent of Greater London’s Metropolitan Police Dal Babu criticised politicians for their reluctance to confront technology firms. Similar grab-and-go flash-mob shoplifting has occurred in the USA. Photo credit: Sky News


The week ahead (21–28 August)

21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session

25 August: Very large online platforms and search engines must comply with the DSA’s obligations


#ReadingCorner

Rise in criminals’ use of generative AI, but impact is limited so far: study

Cybercriminals have shown interest in using AI for malicious activities since 2019, but its adoption remains limited, according to researchers at Mandiant, a cybersecurity company owned by Google. The malicious use of generative AI is mainly linked to social engineering, a practice in which fraudsters impersonate a trusted entity to trick users into providing confidential information. As for techniques, the researchers say criminals are increasingly using imagery and video in their campaigns, which are more deceptive than text-based or audio messages. Access the full report.

Fake! Screenshot from an AI-generated deepfake video of Ukrainian President Volodymyr Zelenskyy stating that Ukraine would surrender to Russia. Source: Mandiant.com

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #123 – 7 August 2023


Dear all,

The recently approved EU-US Data Privacy Framework is about to undergo the same legal battle as its predecessors, starting in September. In other news, OpenAI filed a trademark application for GPT-5 (we raised our eyebrows too), and Zoom is under fire for data processing practices related to training AI models and its use of user content. Google’s antitrust case in Italy over data portability has been settled, but the US Justice Department’s case will go to trial next month (we’ll cover this one in upcoming digests).

Let’s get started.
Stephanie and the Digital Watch team
PS. We’re taking a short break next week; expect us back in a fortnight.


// HIGHLIGHT //

Schrems III: EU-US privacy framework to be challenged in court in September

A legal challenge to the recently approved EU-US Trans-Atlantic Data Privacy Framework (TADPF) is expected to be filed in September by Austrian privacy activist Max Schrems, chairman of NOYB (the European Center for Digital Rights, known as NOYB for None Of Your Business).

The new framework, which governs the transfer of European citizens’ personal data across the Atlantic, was finalised by the European Commission and the US government last month. Known as the TADPF on Twit…sorry, X, the framework is actually the third of its kind, succeeding the Safe Harbour (invalidated in October 2015) and the Privacy Shield (invalidated in July 2020). Notably, it was Max Schrems who played a significant role in invalidating both frameworks, earning the cases the labels Schrems I and Schrems II.
NOYB announced a few weeks ago that it plans to challenge the new framework, which it says is essentially a copy of the failed Privacy Shield.


Issue #1: Surveillance on non-US individuals

The fundamental problem with the new framework, much like with the previous versions, has to do largely with a US law: Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows for surveillance of non-US individuals. Although the US Fourth Amendment protects the privacy of American citizens, European citizens have no constitutional rights in the USA. Therefore, they cannot defend themselves from FISA 702 in the same way.

At the same time, in the EU, personal data may only leave the EU if adequate protection is ensured. So what the USA and EU agreed to, for the EU to green-light data transfers under the new framework, was to limit bulk surveillance to ‘what is necessary and proportionate’ and share a common understanding of what ‘proportionate’ means without actually undermining the powers that US authorities wield.

Issue #2: The redress mechanism

Under the previous framework, the redress mechanism available to citizens – an ombudsperson – did not align with European law. The new agreement introduces changes by establishing a Civil Liberties Protection Officer and a body referred to as a court (which NOYB considers simply a semi-independent executive entity).

Although these are minor enhancements compared to the ombudsperson, individuals will probably have no direct interaction with the new bodies, so the outcomes of seeking redress will be similar to those the former ombudsperson could have reached.

On the path to Schrems III

Before a challenge can be filed, the system needs to be implemented by companies, so that a person whose data is transferred under the new instrument can contest it. Schrems indicated the lawsuit will be filed in Austria, his home country.

Then, it is hoped, the Austrian court will quickly decide to accept or reject the challenge, and refer it to the Court of Justice of the European Union (CJEU).

Is there any chance that this trajectory might be avoided? Yes, but it’s unlikely. FISA 702 has a sunset clause, which means that it needs to be re-authorised by the US Congress by the end of 2023. The new litigation will add further pressure to existing calls for reforming FISA 702, but Schrems himself thinks the US government may not be willing to reauthorise or reform FISA 702, since the framework has now been agreed. 

As the Schrems III litigation unfolds, it is increasingly probable that the case will end up before the CJEU, where Schrems has strong confidence in the outcome: ‘Just announcing that something is “new”, “robust” or “effective” does not cut it before the Court of Justice.’


Digital policy roundup (31 July–7 August)
// AI GOVERNANCE //

OpenAI files trademark application for GPT-5

OpenAI has filed a trademark application for GPT-5 at the US Patent and Trademark Office, aiming to cover various aspects such as AI-generated text, neural network software, and related services. While the filing was spotted by a trademark attorney (who tweeted about it), there has been no official confirmation from OpenAI about GPT-5. 

A trademark application doesn’t always mean a working product is in the making. Often, companies file trademarks to stay ahead of competitors or protect their intellectual property. 

Why is it relevant? OpenAI CEO Sam Altman recently denied that the company was working on GPT-5. During an event at MIT, Altman reacted to an open letter requesting a pause in the development of AI systems more powerful than GPT-4. He said the letter lacked technical nuance and mistakenly claimed that OpenAI is currently training GPT-5, deeming the claim ‘sort of silly’. (Jump to minute 16:00 to listen to the recording.) Time will tell.


Zoom under fire for training AI models with user data without opt-out option

Zoom’s latest update to its Terms of Service will allow it to leverage user data for machine learning and AI, without providing users the possibility of opting out.

In addition, Section 10.4 of the updated terms also grants Zoom a ‘perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license’ to use customer content in any way it likes.

Why is it relevant? First, it gives Zoom a sweeping range of powers over people’s content (the argument that users should read the terms and conditions will not earn Zoom any kudos from users nor alleviate their concerns). Second, the Zoom case echoes one of the earliest legal challenges that OpenAI faced when the Italian data protection authority banned ChatGPT from Italy, and later allowed it to operate after OpenAI ‘granted all individuals in Europe, including non-users, the right to opt-out from processing of their data for training of algorithms also by way of an online, easily accessible ad-hoc form’. But there was one main difference: OpenAI uses legitimate interest as a basis for using data to train its models, which means it needs an opt-out form. Zoom users can enable generative AI features, but as yet, there is no clear way to opt out.

UPDATE (8 August 2023): Zoom updated its terms of service on the evening of 7 August (right after this issue was published) to say that ‘Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent’. However, it’s unlikely that this will alleviate concerns: First, the term ‘customer content’ does not cover all the content that Zoom will use to train its AI models; second, it’s still unclear whether Zoom is seeking to obtain users’ consent (in Europe) in accordance with GDPR requirements; third, there’s still no possibility to opt out – at least, not a straightforward one; fourth, there’s no change to the sweeping powers Zoom has given itself over user content (Section 10.4).



Are labels for AI-generated content around the corner?

Alessandro Paluzzi, a mobile developer and self-proclaimed leaker, has disclosed that Instagram is developing a label specifically for AI-generated content. As companies vie for dominance in generative AI technology, the introduction of content labels thrusts them into a race to combat misinformation. The tool that successfully and accurately labels AI content could earn the trust of users and governments.


UK labels AI as chronic risk

AI has now been officially classified, for the first time, as a security threat to the UK, as stated in the recently published National Risk Register 2023. It falls into the category of chronic risks, which differ from acute risks in that they present ongoing challenges that gradually undermine our economy, community, way of life, and national security. Chronic risks typically unfold over an extended period, but are not limited to doing so.

The advancements in AI systems and their capabilities entail various implications, including both chronic and acute risks. For instance, they could facilitate the proliferation of harmful misinformation and disinformation. If mishandled, these risks could have significant consequences.

Why is this relevant? The UK recently announced it will host the first global summit on AI safety, bringing together key countries, leading tech companies, and researchers to agree (hopefully) on safety measures to evaluate and monitor risks from AI. The UK also recently chaired the UN Security Council’s first-ever debate on AI.




// CRYPTO-BIOMETRICS //

Worldcoin wants to attract governments; Kenya suspends project

Tools For Humanity, the San Francisco- and Berlin-based company behind Worldcoin, the new crypto-biometric project we wrote about last week, hopes the project will attract governments as users.

Ricardo Macieira, general manager for Europe at Tools For Humanity, said the company’s idea is to build the infrastructure for others to use it.

Why is this relevant? The project is already shrouded in controversy over Worldcoin’s data collection processes, not least because of the crypto-for-iris-scans method of encouraging sign-ups. Kenya is the latest country to investigate the project, and has suspended Worldcoin’s local activities in the meantime.


// KIDS //

China proposes screen time limits for kids

The Cyberspace Administration of China (CAC) released draft guidelines for the introduction of screen-time software to curb smartphone addiction among minors and the impact the government says screen time has on children’s academic performance, social skills, and overall well-being. The regulations mandate curfews and time limits by age, as well as age-appropriate content.

The draft rules also provide for anti-bypass functions, such as restoring factory settings if the device is not used according to the rules. 

Why is it relevant? The guidelines, which are an add-on to previous regulations that restrict the amount of time under-18s spend online, give parents much of the management responsibility. This makes the widespread enforcement of the rules questionable – which we’re pretty sure is what kids in China are hoping for.


// COMPETITION //

Italian consumer watchdog closes Google’s data portability investigation

Italy’s Autorità Garante della Concorrenza e del Mercato (AGCM) has accepted commitments proposed by Google, ending its investigation over the alleged abuse of its dominant position in the user data portability market. Data portability, governed by the GDPR, allows users to move their data between services, creating competition for companies like Google. 

Google presented three commitments: The first two offer supplementary solutions to Takeout, which helps users back up their data, making it easier to export to third-party operators. The third commitment allows testing of a new solution that enables direct data portability between services, with authorisation from users. This aims to improve interoperability within the Google ecosystem.

Why is it relevant? First, amid the multitude of antitrust cases faced by the company worldwide, this particular one had the potential to escalate further, but reached its resolution here. Second, the benefits of this outcome extend beyond just Italian users.


The month ahead (August)

More of a reminder, since we covered these events last week. It will be a quiet month. Happy August!

10–13 August: DEF CON 31 is the Las Vegas event that will feature, among other workshops, trainings, and contests, the White House-backed red-teaming of OpenAI’s models.

21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session.

25 August: Very large online platforms and search engines must start abiding by the DSA’s obligations.


#ReadingCorner

WSJ: How Binance transacts billions in Chinese market despite ban

Cryptocurrency exchange Binance appears to continue operating in China despite the country’s ban on cryptocurrencies, reportedly transacting around USD90 billion worth of business there – one of its largest markets. An investigative article from the Wall Street Journal explores how Binance manages to do so, and the potential risks involved.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #122 – 31 July 2023


Dear readers,

A new biometric-cryptocurrency project has diverted everyone’s attention from AI developments to iris patterns and privacy issues. Still, over at the competition regulators, no fewer than four new cases against Big Tech emerged; two of them are outlined below.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

There’s a new device in town, and it’s coming for your iris

Nowadays, people are happily sharing biometric data through their trendy smartwatches. The allure of profiting from cryptocurrency is as tantalising as it was in Bitcoin’s early days. And Sam Altman has garnered a considerable following of techno-enthusiasts since the launch of ChatGPT.

So the timing couldn’t be better for Sam Altman to relaunch his Worldcoin project, a cryptocurrency-cum-identity network that functions by verifying that someone is both a human being and a unique person. The verification is carried out by a custom-built spherical device called an Orb. (Read about Worldcoin’s history.)

Are you unique? The uniqueness requirement is why verification is based on an iris scan: Since the structure of our irises is individually identifiable and stays more or less the same over time, iris biometrics are more accurate and reliable than most other biometric methods.

A cross-section of the Orb. Source: Worldcoin

Privacy safeguards: Worldcoin also provides some privacy features. Iris scans are processed locally on each Orb and turned into a set of numbers. The original scan is then deleted (unless the user prefers to have it stored on Worldcoin’s servers ‘to reduce the number of times you may need to go back to an Orb’).
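To make the ‘processed locally, then deleted’ flow concrete, here is a minimal, hypothetical Python sketch. It is not Worldcoin’s actual algorithm – real iris systems derive a noise-tolerant feature template matched by Hamming distance, not a plain hash – but it shows the principle that only a derived set of numbers, never the raw image, needs to leave the device:

```python
import hashlib
import numpy as np

def derive_iris_code(scan: np.ndarray) -> str:
    """Reduce a raw scan to a compact, non-reversible identifier on-device.

    A cryptographic hash stands in for the real feature-extraction step,
    purely to illustrate that the output is a string of numbers from
    which the original image cannot be reconstructed.
    """
    features = (scan > scan.mean()).astype(np.uint8)  # crude feature stand-in
    return hashlib.sha256(features.tobytes()).hexdigest()

raw_scan = np.random.rand(64, 64)   # stand-in for a captured iris image
code = derive_iris_code(raw_scan)
del raw_scan                        # the original scan is discarded
print(code)
```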

Regulators stepping in: Despite the safeguards, European regulators have been quick to react. France’s privacy watchdog said it had reservations about the legality of the biometric data collection and how the data is being stored. The UK said it was reviewing the project.

Germany is way ahead: Its data protection regulator in Bavaria – the lead EU authority on Worldcoin, since the company’s German subsidiary is based in the region – has been investigating the project’s biometric data processing since November last year.

Why the iris scanning project is more than a headache

As the investigations unfold, there are several challenges that are raising alarm bells about the iris project.

1. The massive database. Regardless of all the noble purposes (mostly) behind the Worldcoin project, the fact is that a massive biometric database is being built. And we all know the risks that come with that – from breaches to data misuse. 

2. The Orb operators. Let’s say your iris pattern is deleted immediately. There are still plenty of risks associated with how that data is collected. The company emphasises that the Orb operators – the people entrusted with the shiny spheres – are independent contractors, and that it has ‘no control over and disclaim[s] all liability for what they say or how they conduct themselves’.

3. The money pitch. Worldcoin is providing people with an incentive to have their irises scanned: the prospect of making money. ‘Eligible verified users’, that is, anyone who’s had their iris scanned, ‘can claim one free WLD token per week with no maximum.’ On the one hand, a company is finally paying users for their data, but on the other hand, that data is sensitive biometric information. Are users on an equal footing with the company in this exchange? Should the sale of sensitive biometric information be permitted? It’s a transaction that warrants closer scrutiny.

Beyond the boundaries of what is acceptable or prohibited, projects that involve large-scale collection of biometric data are undoubtedly contributing to society’s changing attitudes towards privacy. It’s probably time to reassess the essence of what users are actually trading, and more than that, whether users have the power to defend their rights and position in this negotiation.


Digital policy roundup (24–31 July)
// AI //

Industry leaders partner to establish forum for responsible development of frontier AI 

Four companies developing AI – Anthropic, Google, Microsoft, and OpenAI – have launched a new industry body to focus on the safe and responsible development of frontier AI models, that is, models that exceed the capabilities of what’s currently available.

The Frontier Model Forum will focus on identifying best practices for safety standards, advancing AI safety research by coordinating efforts on areas like adversarial robustness and interpretability, and facilitating secure information sharing between companies and governments on AI safety and risks.

Why is this relevant? Beyond the AI models we see today, over 350 AI experts recently raised concerns about the potential for future AI to bring about human extinction and other global perils, such as pandemics and nuclear warfare. The list of signatories included the leaders of the very AI companies driving the Frontier Model Forum forward.


// CONTENT POLICY //

Biden administration challenges social media censorship order

The Biden administration has criticised a recent court order restricting government officials’ communications with social media companies as overly broad. Appealing the court order, the government said the order hampers its ability to fight misinformation, and must be lifted. 

How it started. In May 2022, the attorneys general of Missouri and Louisiana sued the government for demanding that social media platforms remove content that the government deemed misinformation. On 4 July 2023, the Louisiana court ordered government agencies to refrain from communicating with social media companies for the purpose of moderating content. In other words, the court said the government was only allowed to contact social media companies on content related to national security threats, criminal activity, and cyberattacks.

The government’s counter-argument. It’s one thing to try to persuade platforms, and quite another to coerce them. ‘The district court’s ruling ignored that fundamental distinction… [it] equated the government’s legitimate efforts to identify truthful information with illicit efforts to “silenc[e] the voice of opposition”… and… to coerce.’

Why is this case relevant? First, it drives a wedge between the US government and social media companies by setting a precedent for how the two can interact. Second, it affects the way misinformation is tackled, by undermining the credibility of public authorities as trustworthy providers of information. Third, the idea that social media giants such as Facebook and Twitter can be easily coerced into compliance is not exactly the image we all have of them…

Case numbers: District Court, W.D. Louisiana, 3:22-cv-01213; Court of Appeals, 5th Circuit, 23-30445

Breton tells NGOs: Shutdowns only in far-reaching situations; courts will have final say

You could say that Internal Market Commissioner Thierry Breton rocked the boat a little when he recently suggested on France Info that online platforms could be shut down if they don’t remove illegal content immediately, especially when riots and violent protests are involved. Over 60 civil rights NGOs immediately asked him to clarify that the Digital Services Act (DSA) would not be used as a censorship tool.

Breton has now clarified his comment: The possibility of a temporary suspension is a last resort if a platform fails to take necessary and effective actions in far-reaching situations, such as systemic failure to terminate infringements linked to calls for violence or manslaughter. In any case, the courts will have the final say.

Why is this relevant? The exchange between the European Commission and the NGOs served to clarify what type of last-resort measures against infringement can be ordered by authorities. The DSA’s obligations for very large online platforms and search engines come into effect on 25 August. 


// ANTITRUST //

EU confirms antitrust investigation against Microsoft for bundling Teams with Office

It didn’t take long for the European Commission to confirm our hunch from last week. Just days after Alfaview’s anti-competition complaint against Microsoft, the commission launched formal proceedings against Microsoft for bundling the communication software Teams with its Office 365. 

A long time coming. At the height of the COVID-19 pandemic in 2020, Zoom soared to success while Teams emerged as a formidable competitor. It was during this time that Microsoft decided to bundle Teams with Office. The move faced backlash from rival Slack (subsequently acquired by Salesforce in 2021), which complained to the commission that Microsoft’s bundling constituted an abuse of its dominant position.

Why is this case relevant? This makes it the first investigation by the European Commission against Microsoft since the Internet Explorer bundling case concluded in 2009 (Microsoft was fined a few years later for breaching its commitments). This case also highlights the limited effectiveness of antitrust laws and enforcement in deterring dominant companies. Even if Microsoft were to lose the case, Teams would remain firmly established as one of the leading meeting software apps, making any findings of anti-competitive behaviour ineffective in displacing it.

Case number: AT.40721

French competition authority to investigate Apple’s app tracking policy 

The French competition authority has launched an investigation into Apple’s practices for allegedly abusing its dominant market position. Advertisers have complained that while Apple imposes its App Tracking Transparency (ATT) policy upon them, it exempts itself from the same regulations, resulting in self-preferential treatment.

The issues with Apple’s tracking policy. Apple’s ATT policy, first announced in 2020, shows iPhone and iPad users a privacy pop-up when a third-party app first attempts to track them. That’s very much welcomed by privacy advocates. However, app developers say that this policy does not extend to Apple’s own apps, making users hesitant to allow third-party tracking and leading them to favour Apple’s apps. This also means that Apple has access to more complex device and advertising data than third-party developers, allowing it to target its ads to users more accurately, in ways that third-party developers cannot.

Apple says its apps do not track users via third-party apps and, hence, do not require the ATT prompt. But competition authorities are no longer so sure that this isn’t an abusive self-preferencing practice.

Screenshot from an iPhone shows a pop-up message asking whether the user wants to allow the app ‘PalAbout’ to track their activity across other companies’ apps and websites.

Why is this case relevant? First, this case has been gaining momentum since 2020, when advertising associations approached the French Competition Authority with a complaint against the ATT policy and a request for interim measures against Apple. A year later, the authority concluded that there was nothing wrong with giving users additional possibilities for deciding whether they wished to be tracked – and, at the time, it had no proof that Apple was subjecting third-party app developers to stricter measures than those it imposed on itself for comparable purposes. That is precisely what the authority will now be looking at. Second, multiple jurisdictions are looking into the same issue, including the UK, Italy, Germany, and California.




The month ahead (August)

Since it’s a relatively quiet month, we’re looking ahead at the next 4–5 weeks:

10–13 August: DEF CON 31 is the Las Vegas event that will feature, among other workshops, trainings, and contests, the White House-backed red-teaming of OpenAI’s models.

21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session.

25 August: Very large online platforms and search engines must start abiding by the DSA’s obligations.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #121 – 24 July 2023


Dear readers,

It’s more AI governance this week: The US White House is inching towards AI regulation, marking a significant shift from the laissez-faire approach of previous years. At the UN, the Secretary-General is also shaking (some) things up.
Elsewhere, cybercrime is rearing its ugly head, bolstered by generative AI. Antitrust regulators gear up for new battles while letting go of others. And in case you haven’t heard, Twitter’s iconic blue bird logo is no more.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Voluntary measures a precursor to White House regulations on AI

There’s more than meets the eye in last week’s announcement that seven leading AI companies in the USA – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – agreed to implement voluntary safeguards. The announcement made it crystal clear that executive action on AI is imminent, and that the White House is shifting into a higher gear on AI regulation.

AI laws on the horizon

In comments after his meeting with AI companies, President Joe Biden spoke of plans for new rules: ‘In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation.’ The White House also confirmed that it ‘is currently developing an executive order and bipartisan legislation to position America as a leader in responsible innovation’. In addition, the voluntary commitments state that they ‘are intended to remain in effect until regulations covering similar issues are officially enacted’. 

In June, officials revealed they were already laying the groundwork for several policy actions, including executive orders, set to be unveiled this summer. Their work involved creating a comprehensive inventory of government regulations applicable to AI, and identifying areas where new regulations are needed to fill the gaps.

The extent of the White House’s shift in focus will be revealed when the executive order(s) are announced. One possibility is that they will focus on the same safety, security, and trust aspects that the voluntary safeguards reflect, mandating new rules to fill in the gaps. Another possibility, though less likely, is for the executive action to focus on tackling China’s growth in the AI race.

The voluntary measures

While the voluntary commitments address some of the main risks, they mostly encompass practices that companies are either already implementing or have announced, making them less impressive. In a way, the commitments appear reminiscent of the AI Pact announced by European Commissioner Thierry Breton as a preparatory step for the EU’s AI Act – a way for companies to get ready for impending regulations. In addition, these commitments apply primarily to generative models that surpass the current industry frontier in terms of power and scope.

The safeguards revolve around three crucial principles that should underpin the future of AI: safety, security, and trust.

1. Safety: Companies have pledged to conduct security testing of their AI systems before release, employing internal and external experts to mitigate risks related to biosecurity, cybersecurity, and societal impacts. The White House previously endorsed a red-teaming event at DEF CON 31 (taking place in August), aimed at identifying vulnerabilities in popular generative AI tools through the collaboration of experts, researchers, and students.

2. Security: Companies have committed to invest in cybersecurity and insider threat safeguards, ensuring proprietary model weights (numeric parameters that machine learning models learn from data during training to make accurate predictions) are released only under intended circumstances and after assessing security risks. They have also agreed to facilitate third-party discovery and reporting of AI system vulnerabilities to support prompt action on any post-release challenges.

3. Trust: Companies have committed to developing technical mechanisms, such as watermarking, to indicate AI-generated content, promoting creativity while reducing fraud and deception. OpenAI is already exploring watermarking. Companies have also pledged to publicly disclose AI system capabilities, limitations, and appropriate/inappropriate use. They will also address security and societal risks, including fairness, bias, and privacy – again, a practice some companies already implement. 
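How might such watermarking work in practice? One published approach for text – an illustration, not necessarily what OpenAI is exploring – biases generation towards a pseudorandom ‘green’ subset of the vocabulary, re-drawn for each context; a detector later checks whether green tokens are statistically over-represented. A toy Python sketch, with the vocabulary size and green fraction as assumed parameters:

```python
import hashlib
import random

VOCAB_SIZE = 1_000      # toy vocabulary; real models use tens of thousands
GREEN_FRACTION = 0.5    # share of the vocabulary marked 'green' per context

def green_list(prev_token: int) -> set[int]:
    """Pseudorandom vocabulary partition, deterministically seeded by context."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(range(VOCAB_SIZE),
                                          int(VOCAB_SIZE * GREEN_FRACTION)))

def green_share(tokens: list[int]) -> float:
    """Fraction of tokens that fall in their context's green list."""
    hits = sum(cur in green_list(prev) for prev, cur in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

# Unwatermarked (here: random) tokens score near GREEN_FRACTION; a generator
# that nudges sampling towards green tokens would score well above it.
sample = [random.randrange(VOCAB_SIZE) for _ in range(200)]
print(round(green_share(sample), 2))
```

A detector built this way flags text whose green-token share sits far above the expected baseline; the longer the text and the higher the share, the stronger the statistical evidence that a watermark is present.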

The industry’s response

Companies have welcomed the White House’s lead in bringing them together to agree on voluntary commitments (emphasis on voluntary). While they have advocated for future AI regulation in their testimonies and public remarks, the industry generally leans towards self-regulation as the preferred approach.

For instance, Meta’s Nick Clegg said the company was pleased to make these voluntary commitments alongside others in the sector, which ‘create a model for other governments to follow’. (We’re unsure what he meant, given that other countries have already introduced new laws or draft rules on AI.) Microsoft’s Brad Smith went a step further, noting that the company is not only already implementing the commitments but is going beyond them (see infographic).

Microsoft’s infographic explaining its voluntary AI commitments under three categories: Safe, Secure, and Trustworthy

Minimal impact on the international front 

The White House said that its current work seeks to support and complement ongoing initiatives, including Japan’s leadership of the G7 Hiroshima Process, the UK’s leadership in hosting a Summit on AI Safety, India’s leadership in the Global Partnership on AI, and ongoing talks at the UN (no mention of the Council of Europe negotiations on AI though). 

In practice, we all know how intergovernmental processes operate, along with the pace at which things generally unfold. So no immediate changes are expected. 

Plus, the USA may well contemplate the regulation of AI companies within its own borders, but opening the doors to international regulation of its domestic enterprises is an entirely separate issue.


Digital policy roundup (17–24 July)
// AI //

UN Security Council holds first-ever AI debate; Secretary-General announces initiatives

The UN Security Council held its first-ever debate on AI (18 July), delving into the technology’s opportunities and risks for global peace and security. A few experts were also invited to participate in the debate chaired by Britain’s Foreign Secretary James Cleverly. (Read an AI-generated summary of country positions, prepared by DiploGPT).

In his briefing to the 15-member council, UN Secretary-General Antonio Guterres promoted a risk-based approach to regulating AI, and backed calls for a new UN entity on AI, akin to models such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change.

Why is it relevant? In addition to the debate, Guterres announced that a high-level advisory group will begin exploring AI governance options by late 2023. He also said that his latest policy brief (published 21 July) recommends that countries develop national AI strategies and establish global rules for military AI applications, and urges them to ban lethal autonomous weapons systems (LAWS) that function without human control by 2026. Given that a global agreement on AI principles is already a big challenge in itself, agreement on a ban on LAWS (negotiations have been ongoing within the dedicated Group of Governmental Experts since 2016) is an even greater challenge.

Cybercriminals using generative AI for phishing and producing child sexual abuse content

Canada’s leading cybersecurity official, Sami Khoury, warned that cybercriminals are now exploiting AI for hacking and disinformation: developing harmful software, creating convincing phishing emails, and propagating false information online.

In separate news, the Internet Watch Foundation (IWF) reported that it has looked into 29 reports of URLs potentially hosting AI-generated child sexual abuse imagery, and confirmed that 7 of the URLs did indeed contain such content. During their analysis, IWF experts also discovered an online manual that teaches offenders how to refine prompts and train AI systems to produce increasingly realistic outcomes.

Why is it relevant? Reports from law enforcement and cybersecurity authorities (such as Europol) have previously warned about the potential risks of generative AI. Real-world instances of suspected AI-generated undesirable content are now being documented, marking a transition from perceiving it as a possible threat to acknowledging it as a current risk.


// ANTITRUST //

FTC suspends competition case in Microsoft’s Activision takeover

The US Federal Trade Commission (FTC) has suspended its competition case against Microsoft’s takeover of Activision Blizzard, which was scheduled for a hearing in an administrative court in early August. 

Since then, Microsoft and Activision Blizzard have agreed to extend the deadline for closing the acquisition by three months, to 18 October.

Why is it relevant? This indicates that the deal is close to being approved everywhere, especially since Microsoft and Sony have also reached agreements ensuring the availability of the Call of Duty franchise on PlayStation – commitments which are appeasing the concerns raised by regulators who were initially opposed to the deal.


Microsoft faces EU antitrust complaint over bundling Teams with Office 

Microsoft is facing a new EU antitrust complaint lodged by German company alfaview, centring on Microsoft’s practice of bundling its video app, Teams, with its Office product suite. Alfaview says the bundling gives Teams an unfair competitive advantage, putting rivals at a disadvantage.

The European Commission has confirmed receipt of the antitrust complaint, which alfaview first announced, and is said to be preparing to launch a formal investigation, as Microsoft’s remedies so far have been deemed insufficient. Until now, Microsoft had been under only informal investigation by the European Commission.

Why is it relevant? This is not the first complaint against Microsoft’s Teams–Office bundling: Slack (now owned by Salesforce) lodged a similar complaint in 2020. The commission doesn’t take anti-competitive practices lightly, so we can expect it to come out against Microsoft’s practices in full force.


// DSA //

TikTok: Almost, but not quite

TikTok, a social media platform owned by a Chinese company, appears to be making progress towards complying with the EU’s Digital Services Act. It willingly subjected itself to a stress test, indicating its commitment to meeting the necessary requirements.

After a debrief with TikTok CEO Shou Zi Chew, European Commissioner for the Internal Market Thierry Breton tweeted that the meeting was constructive, and that it was now time for the company ‘to accelerate to be fully compliant’.

Why is it relevant? TikTok is trying very hard to convince European policymakers of its commitment to protecting people’s data and to implementing other safety measures. Countries have been viewing the company as a security concern, prompting the company to double and triple its efforts at proving its trustworthiness. Compliance with the EU’s Digital Services Act (DSA) could help restore the company’s standing in Europe.

A monitor shows TikTok CEO Shou Zi Chew during a debrief with European Commissioner Thierry Breton (not in the photo)

// CYBERSECURITY //

Chinese hackers targeted US high-ranking diplomats

The US ambassador to China, Terry Branstad, was hacked by a Chinese government-linked spying operation in 2019, according to a report by the Wall Street Journal. The operation targeted Branstad’s private email account and was part of a broader effort by Chinese hackers to target US officials and their families. 

Daniel Kritenbrink, the assistant secretary of state for East Asia, was among those targeted in the cyber-espionage attack. These two diplomats are considered to be the highest-ranking State Department officials affected by the alleged spying campaign. 

The Chinese government has denied any involvement in the hacking.

Why is it relevant? The news of the breach comes amid ongoing tensions between the USA and China; the fact that the diplomats’ email accounts had been monitored for months could further strain relations between the two countries. It also highlights the ongoing issue of state-sponsored cyber espionage.




The week ahead (24–31 July)

22–28 July: The 117th meeting of the Internet Engineering Task Force (IETF) continues this week in San Francisco and online.

24–28 July: The Open-Ended Working Group (OEWG) is holding its 5th substantive session this week in New York. Bookmark our observatory for updates.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation
