DW Weekly #138 – 27 November 2023

DigWatch Weekly. Capturing top digital policy news worldwide

Dear all,

Negotiations on the EU AI Act face challenges as France, Germany, and Italy oppose tiered regulation for foundation AI models. OpenAI’s leadership changes and alleged project Q* raise transparency concerns. The UK, USA, and partners released global AI system development guidelines. Italy is investigating AI data collection, while Switzerland is exploring regulatory approaches. India warned social media giants about the spread of deepfakes and misinformation. The US Treasury imposed record penalties on Binance, and the Australian regulator called for regulatory reform of digital platforms.

Let’s get started.

Andrijana and the Digital Watch team


// HIGHLIGHT //

EU warring over the AI Act

The negotiations on the EU AI Act have hit a significant snag, as France, Germany, and Italy spoke out against the tiered approach initially envisioned in the EU AI Act for foundation models. These three countries asked the Spanish presidency of the EU Council, which negotiates on behalf of member states in the trilogues, to abandon the approach.

The tiered approach would mean categorising AI into different risk bands, with more or less regulation depending on the risk level. 

France, Germany, and Italy want to regulate only the use of AI rather than the technology itself, and propose ‘mandatory self-regulation through codes of conduct’ for foundation models.

To implement the use-based approach, developers of foundation models would have to define model cards – documents that provide information about machine learning models, detailing various aspects such as their intended use, performance characteristics, limitations, and potential biases.
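
For illustration only, here is what such a model card might look like as structured data: a minimal, hypothetical sketch in Python. The field names and the required-field check are our own illustrative assumptions, not a schema from the non-paper.

  # A minimal, hypothetical model card for a foundation model,
  # expressed as a plain Python dictionary. Field names are
  # illustrative; the non-paper does not prescribe a schema.
  model_card = {
      "model_name": "example-foundation-model",
      "intended_use": "general-purpose text generation and summarisation",
      "out_of_scope_uses": ["medical diagnosis", "legal advice"],
      "training_data": "description of training data sources",
      "performance": {"benchmark": "example-benchmark", "accuracy": 0.87},
      "limitations": ["may produce factually incorrect output"],
      "known_biases": ["underrepresentation of low-resource languages"],
  }

  # A governance body could then verify that mandatory fields are present.
  REQUIRED_FIELDS = {"intended_use", "performance", "limitations", "known_biases"}
  missing = REQUIRED_FIELDS - set(model_card)
  print("Missing fields: " + ", ".join(sorted(missing)) if missing else "Model card complete.")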

An EU AI governance body could contribute to formulating guidelines and overseeing the implementation of model cards that provide detailed contextual information.

A hard ‘no’ from the European Parliament. European Parliament officials walked out of a meeting to signal that leaving foundation models out of the law was not politically acceptable.

A suggested compromise. The European Commission circulated a possible compromise: Bring back a two-tiered approach, watering down the transparency obligations and introducing a non-binding code of conduct for the models that pose a systemic risk. Further negotiations are expected to centre around this proposal.

Still a no from the European Parliament. The Parliament is not budging: It is not willing to accept self-regulation and only accepts the idea of EU codes of practice as a complementary element to the horizontal transparency requirements for all foundation models.

Chart details the content of five different AI Act proposals – the IT/FR/DE Non-Paper, the White House Executive Order, the Spanish Presidency Compromise Proposal, Parliament’s Adopted Position, and the Council’s Adopted Position – grouped under five broad areas: required safety obligations, compute-related monitoring, governance body oversight, code of conduct, and information sharing.
A comparison of key AI Act proposals. Source: Future of Life Institute

Why is it relevant? 

The Franco-German-Italian non-paper and the Commission’s proposed compromise have sparked concerns that the largest foundation models will remain underregulated in the EU. Add a time constraint to that: Policymakers hoped to finalise the act at a meeting scheduled for 6 December. The chances of that are currently looking slim. If the EU doesn’t pass the AI Act in 2023, it may lose its chance to establish the gold standard of AI rules.


Digital policy roundup (20–27 November)

// AI //

OpenAI – Last week’s episode

Much has been written about what transpired at OpenAI last week. We have followed the developments, too.

Here’s the quickest recap of the situation on the internet. OpenAI CEO Sam Altman was ousted from the company because he ‘was not consistently candid in his communications’ with the board. Mira Murati took over as interim CEO. Altman then joined Microsoft. The OpenAI board proceeded to appoint Twitch co-founder Emmett Shear as interim CEO. Approximately 700 of 750 OpenAI staff sent a letter to the board stating they would resign from the company over the debacle and join Altman at Microsoft. Altman came back as CEO, and OpenAI’s board changed some of its members.

And here’s the most exciting part. Reuters reported that Altman was dismissed partly because of Q*, an AI project allegedly so powerful that it could threaten humanity. 

Q* can supposedly solve certain math problems, suggesting a higher reasoning capacity. This could be a potential breakthrough in artificial general intelligence (AGI), which OpenAI describes as AI that surpasses human capabilities in most economically valuable tasks.

Why is it relevant? The news has caused quite a stir, with many wondering what exactly Q* is, if it even exists. Is this really about AGI? Well, it’s hard to tell. On the one hand, AI surpassing human capabilities sounds like a dystopia ahead (why does no one ever think it might be a utopia?). On the other hand, since the company hasn’t commented so far, it’s best not to buy into the hype yet.

But what this is definitely about is transparency – and not only at OpenAI. We all need to understand who (or what) it is that shapes our future. Are we mere bystanders?

Drawing of a game board with different-coloured marker pieces waiting for the three dice being tossed by a human hand to signal the next move. Some marker pieces have chat bubbles with icons indicating surprise, intellectual thought or AI implications, justice, economics, agreement, and globalisation.
The grand game of addressing AI for the future of humanity. Who holds the dice? Credit: Vladimir Veljašević

UK, USA, and 16 other partners publish guidelines for secure AI system development

In collaboration with 16 other countries, the UK and the USA have released the first global guidelines to enhance cybersecurity throughout the life cycle of an AI system.

The guidelines were developed by the US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) with international partners, while major companies such as Amazon, Anthropic, Google, IBM, Microsoft, and OpenAI contributed.

The guidelines span four key areas within the life cycle of the development of an AI system: secure design, secure development, secure deployment, and secure operation and maintenance (a small illustrative code sketch follows the list below).

  1. The section about the secure design stage focuses on understanding risks, threat modelling, and considerations for system and model design. 
  2. The section on the secure development stage includes guidelines for supply chain security, documentation, and managing assets and technical debt. 
  3. The secure deployment stage section emphasises protecting infrastructure and models, developing incident management processes, and ensuring responsible release. 
  4. The secure operation and maintenance stage section provides guidelines for actions relevant after deployment, such as logging, monitoring, update management, and information sharing.
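
As a small, concrete illustration of the secure deployment idea – protecting models and releasing them responsibly – a deployment script might verify a model artefact’s integrity against a published digest before loading it. This sketch is our own example under that assumption, not code from the guidelines; the file name and digest are placeholders.

  import hashlib
  from pathlib import Path

  # Hypothetical known-good digest, e.g. published alongside the model artefact.
  EXPECTED_SHA256 = "0123456789abcdef"  # placeholder, not a real digest

  def sha256_of(path: str) -> str:
      """Compute the SHA-256 digest of a file's contents."""
      return hashlib.sha256(Path(path).read_bytes()).hexdigest()

  # Refuse to deploy a model artefact whose digest does not match.
  if sha256_of("model.bin") != EXPECTED_SHA256:
      raise RuntimeError("Model artefact failed integrity check; refusing to deploy.")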

Why is it relevant? The considerable number of signatory institutions from different countries indicates a growing consensus on the importance of securing AI technologies.

Graphic shows a humanoid AI in front of a half-circle world map showing various icons representing technology and networks. Outside the half circle, icons for the sun and clouds are shown, with one of the clouds representing a cloud network.
Image credit: NCSC.

Italy’s DPA launches investigation into data collection for AI training

The Italian Data Protection Authority (DPA) is initiating a fact-finding inquiry to assess whether online platforms have put in place sufficient measures to stop AI platforms from scraping personal data for training AI algorithms. The investigation will cover all public and private entities operating as data controllers, established or providing services in Italy. The DPA has invited trade associations, consumer groups, experts, and academics to offer their input on security measures currently in place and those that could be adopted to prevent the extensive collection of personal data for training purposes.

Why is it relevant? Italy’s DPA is taking privacy very seriously: It even imposed a (temporary) limitation on ChatGPT earlier this year. The authority stated it would adopt necessary measures after the investigation concludes, and we have no doubt it won’t be pulling its punches.

Desktop with partial keyboard extending off to the right side has a paperclipped yellow note that says ‘Personal Data’

Switzerland examines regulatory approaches for AI

Switzerland’s Federal Council has tasked the Department of the Environment, Transport, Energy, and Communications (DETEC) with providing an overview of potential regulatory approaches for AI by the end of 2024.

Those approaches must align with existing Swiss law and be compatible with the upcoming EU AI Act and the Council of Europe AI Convention. The council aims to use the analysis as a foundation for an AI regulatory template in 2025.




// CONTENT POLICY //

India’s government issues warning to social media giants on deepfakes and misinformation

The Indian government has issued a warning to social media giants, including Facebook and YouTube, regarding the dissemination of content that violates local laws. The government is particularly concerned about harmful content related to children, obscenity, and impersonation, with a focus on deepfakes. 

The government emphasised the non-negotiable nature of these regulations, stressed the need for continuous user reminders about content restrictions, and warned of potential government directives in case of non-compliance. Social media platforms have reportedly agreed to align their content policies with government regulations in response to these concerns.

Digital smartphone shows its home screen with its Social Media apps highlighted in a group that contains icons for Pinterest, YouTube, X (formerly Twitter), and other apps

// CRYPTO //

US Treasury hits Binance with record-breaking penalties for money laundering and sanctions violations

The US Department of the Treasury, alongside various enforcement agencies, took unprecedented action against Binance Holdings Ltd., the world’s largest virtual currency exchange, for violating anti-money laundering (AML) and sanctions laws. 

Binance admitted to operating as an unregistered money services business, disregarding anti-money laundering protocols, bypassing customer identity verification, failing to report suspicious transactions including those involving terrorist groups, ransomware, child sexual exploitation, and other illicit activities, and facilitating trades between US users and sanctioned jurisdictions. 

Binance reached a settlement with the US government, including a historic $4.2 billion payment, a five-year monitoring period, and stringent compliance obligations. Binance agreed to exit the US market entirely and comply with sanctions. Failure to meet these terms could result in further substantial penalties.

Why is it relevant? Because it sends a strong message that the cryptocurrency industry must adhere to the rules of the US financial system or face government action.

Compound digital illustration shows the Binance logo, a $50 US bill, and several bitcoin tokens.

// COMPETITION //

Australian regulator calls for new competition laws for digital platforms

The Australian Competition and Consumer Commission (ACCC) has emphasised the urgent need for regulatory reform in response to the expanding influence of major digital platforms, including Alphabet (Google), Amazon, Apple, Meta, and Microsoft. The ACCC’s seventh interim report from the Digital Platform Services Inquiry underscores the risks associated with these platforms extending into various markets and technologies, potentially harming competition and consumers. While acknowledging the benefits of digital platforms, the report highlights concerns about invasive data collection practices, consumer lock-in, and anti-competitive behaviour.

The report further explores the impact of digital platforms on emerging technologies, emphasising the need for adaptable competition laws to address evolving challenges in the digital economy. 

The ACCC suggests updating competition and consumer laws, introducing targeted consumer protections, and implementing service-specific codes to mitigate these risks and ensure effective competition in evolving digital markets. 

Why is it relevant? The concerns raised by the ACCC are not unique to Australia. Regulatory reforms in Australia could set a precedent for other jurisdictions grappling with similar issues.

Cover page of the seventh Digital platform services inquiry interim report dated September 2023. It has a dark blue isosceles triangle with a lighter bluish internal triangle at the lower left apex and has multiple chat bubbles containing icons representing digital services.
Image credit: ACCC.

The week ahead (27 November–4 December)

27–29 November: The 12th UN Forum on Business and Human Rights is taking place in a hybrid format to discuss effective change in implementing obligations, responsibilities, and remedies.

29–30 November: The inaugural Global Conference on Cyber Capacity Building (GC3B) will be held under the theme of cyber resilience for development and will culminate with the announcement of the Accra Call: a global action framework that supports countries in strengthening their cyber resilience. 

30 November: Held in conjunction with the UN Business and Human Rights Forum, the UN B-Tech Generative AI Summit: Advancing Rights-Based Governance and Business Practice will explore practical applications of the UN Guiding Principles on Business and Human Rights and facilitate discussions on implementing these principles for generative AI and other general-purpose AI.

4–8 December: UNCTAD eWeek 2023 will address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes? Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to read reports from the event. In addition to providing just-in-time reporting from the eWeek, Diplo will also be involved in several activities throughout the event.


#ReadingCorner
The cover page of the Diplo blog’s AI Seasons Autumn 2023 edition highlights the article ‘How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation’ by Jovan Kurbalija. It has the word humAInism in the lower right corner.

How can legal wisdom from 19th-century Montenegro and Valtazar Bogišić help AI regulation?

Jovan Kurbalija explores the implications of the 1888 Montenegrin Civil Code for the AI era. He argues that AI governance, much like the Montenegrin Civil Code, is about integrating tradition with modernity.


Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 84 – November 2023

Cover of the November 2023 newsletter in French, with an illustration of a tree representing artificial intelligence, planted in a pot, being watered and having its branches shaped by one woman and two men.

Observatory

At a glance: What’s making waves in digital policy

Geopolitics

The Bureau of Industry and Security (BIS) of the US Department of Commerce (DoC) announced tighter restrictions on exports of advanced semiconductors to China and other countries under arms embargoes. The decision prompted a sharp reaction from China, which called the measures unilateral intimidation and an abuse of export control mechanisms.

Adding to the strain on the China–US tech landscape, the US government is considering restricting Chinese companies’ access to cloud computing services. If implemented, the measure could have significant consequences for both countries, particularly for major players such as Amazon Web Services and Microsoft. Finally, citing security concerns, Canada has banned Chinese and Russian software from government-issued devices.

AI governance

Meanwhile, a leaked draft suggests that Southeast Asian countries, under the aegis of the Association of Southeast Asian Nations (ASEAN), are taking a business-friendly approach to AI regulation. The draft guide on AI ethics and governance asks companies to take cultural differences into account and does not prescribe categories of unacceptable risk. For its part, Germany has launched an AI action plan to boost its progress at the national and European levels and compete with the predominant AI powers, the USA and China.

Security

The heads of the security agencies of the USA, the UK, Australia, Canada, and New Zealand, collectively known as the Five Eyes, have publicly warned against China’s vast espionage campaign to obtain trade secrets. The European Commission announced a comprehensive review of security risks in critical technology areas, including semiconductors, AI, quantum technologies, and biotechnology. On 8 November, ChatGPT suffered outages, reportedly caused by a distributed denial-of-service (DDoS) attack for which the hacking group Anonymous Sudan claimed responsibility. Finally, Microsoft’s latest Digital Defense Report revealed an overall increase in cyberattacks, with a rise in government-sponsored espionage and influence operations.

Infrastructure

The US Federal Communications Commission (FCC) voted to restore net neutrality rules. Initially adopted in 2015, the rules were repealed under the previous administration but are now on track to be reinstated.

Access Now, the Internet Society, and 22 other organisations and experts jointly sent a letter to the Telecom Regulatory Authority of India (TRAI) opposing the imposition of discriminatory network costs or licensing regimes on online platforms.

Internet economy

Alphabet-owned Google reportedly paid a substantial $26.3 billion to other companies in 2021 to ensure that its search engine remained the default option on web browsers and mobile phones. This came to light during the antitrust trial brought by the US Department of Justice (DoJ). Over similar anticompetitive conduct, the Japan Fair Trade Commission (JFTC) has opened an antimonopoly investigation into Google’s dominance in web search.

The European Central Bank (ECB) decided to begin a two-year preparation phase on 1 November 2023 to finalise rules and select private-sector partners ahead of the possible launch of a digital version of the euro. The next stage would be eventual implementation, pending the green light from policymakers. In parallel, the European Data Protection Board (EDPB) called for the digital euro legislation proposed by the European Commission to strengthen privacy safeguards.

Digital rights

The Council presidency and the European Parliament reached a provisional agreement on a new framework for a European digital identity (eID) to provide all Europeans with a trusted and secure digital identity. Under the new agreement, member states will provide citizens and businesses with digital wallets that link their national digital identity to other personal attributes, such as driving licences and diplomas.

The European Parliament’s Committee on the Internal Market and Consumer Protection published a report warning of the addictive nature of certain digital services and calling for stricter regulation to combat the addictive design of digital platforms. In a similar vein, the European Data Protection Board ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising on Facebook and Instagram.

The main political groups in the European Parliament reached a consensus on draft legislation requiring internet platforms to detect and report child sexual abuse material in order to prevent its dissemination online.

Content policy

Meta, the parent company of Facebook and Instagram, faces a legal battle launched by more than 30 US states. The plaintiffs claim that Meta intentionally and knowingly deployed addictive features while concealing the potential risks of social media use, thereby violating consumer protection laws and rules on the privacy of children under 13.

The EU formally asked Meta and TikTok for details of their measures against disinformation. In the context of the conflict in the Middle East, the EU highlighted the risks of the large-scale dissemination of illegal content and disinformation.

The UK’s Online Safety Act, which imposes new responsibilities on social media companies, has come into force. The law aims to strengthen online safety and holds social media platforms accountable for their content moderation practices.

Development

The Gaza Strip has suffered three internet blackouts since the start of the conflict, prompting Starlink, Elon Musk’s SpaceX satellite service, to offer internet access to internationally recognised aid organisations in Gaza. Separately, environmental NGOs are urging the EU to take action on e-waste, calling for a revision of the Waste Electrical and Electronic Equipment (WEEE) Directive, according to a communication from the European Environmental Bureau.

THE TALK OF THE TOWN – GENEVA

As agreed at the ordinary session of the ITU Council in July 2023, an additional session dedicated to confirming logistical matters and organisational planning for 2024–2026 was held in October 2023. It was preceded by the cluster of meetings of the Council Working Groups (CWGs) and Expert Groups (EGs), during which the list of chairs and vice-chairs was established through to the 2026 Plenipotentiary Conference. The next cluster of CWG and EG meetings will take place from 24 January to 2 February 2024.

The third summit of the Geneva Science and Diplomacy Anticipator (GESDA) saw the launch of the Open Quantum Institute (OQI), a partnership between the Swiss Federal Department of Foreign Affairs (FDFA), CERN, and UBS. The OQI aims to make high-performance quantum computers accessible to all users dedicated to finding solutions and accelerating progress towards the sustainable development goals (SDGs). The OQI will be hosted at CERN from March 2024 and will facilitate the exploration of use cases for the technology in health, energy, climate protection, and beyond.

In brief

Defining the global AI landscape

We spent most of 2023 reading and writing about AI governance, month after month, and October was no exception. As the world grapples with the complexities of this technology, the following initiatives illustrate the efforts under way to address ethical, security, and regulatory challenges, both nationally and internationally.

Biden’s executive order on AI. The executive order represents the US government’s most significant effort to regulate AI to date. It provides directives that are applicable where possible and calls for bipartisan legislation where necessary, notably on data privacy.

One of its most striking features is its focus on AI safety and security. Developers of the most powerful AI systems are now required to share safety test results and critical information with the US government. In addition, AI systems used in critical infrastructure are subject to rigorous security standards, reflecting a proactive approach to mitigating the potential risks associated with AI deployment.

Unlike some emerging AI laws, such as the EU’s AI Act, Biden’s executive order takes a sectoral approach, directing specific federal agencies to focus on AI applications in their fields. For example, the Department of Health and Human Services is tasked with promoting the responsible use of AI in healthcare, while the Department of Commerce is tasked with developing guidelines for content authentication and watermarking to clearly label AI-generated content. The DoJ is tasked with tackling algorithmic discrimination, reflecting a nuanced and tailored approach to AI governance.

Beyond regulation, the executive order aims to strengthen the USA’s technological lead. It facilitates the entry of highly skilled workers into the country, recognising their essential role in advancing AI capabilities. The order also prioritises AI research through funding initiatives, increased access to AI resources and data, and the establishment of new research structures.

The G7’s guiding principles. At the same time, the G7 countries published their guiding principles for advanced AI, accompanied by a detailed code of conduct for the organisations developing it.

G7 leaders. Photo source: Politico

The 11 principles are structured around risk-based responsibility. The G7 encourages developers to implement reliable content authentication mechanisms, reflecting a commitment to ensuring the transparency of AI-generated content.

A notable similarity with the EU’s AI Act is the risk-based approach, which places responsibility on AI developers to assess and manage the risks associated with their systems. The EU was quick to welcome the principles, seeing them as a potential international complement to the legally binding rules of the EU AI Act.

While building on the AI principles of the Organisation for Economic Co-operation and Development (OECD), the G7 principles go further in some respects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in a way that aligns with their individual approaches.

The differing views on AI regulation among G7 countries are acknowledged, ranging from strict enforcement to more innovation-friendly guidelines. However, some provisions, such as those on privacy and copyright, have been criticised as vague, raising questions about their capacity to drive meaningful change.

China’s Global AI Governance Initiative (GAIGI). China unveiled its GAIGI at the third Belt and Road Forum, marking an important step in shaping the global trajectory of AI. China’s GAIGI is expected to bring together the 155 countries participating in the Belt and Road Initiative, creating one of the world’s largest AI governance forums.

This strategic initiative focuses on five aspects, including aligning AI development with human progress, promoting mutual benefit, and opposing ideological divides. It also establishes a testing and assessment system to evaluate and mitigate AI risks, similar to the risk-based approach of the upcoming EU AI Act. In addition, GAIGI supports consensus-based frameworks and provides essential support to developing countries in building their AI capacity.

China’s proactive approach to regulating its AI industry gives it a first-mover advantage. Despite their deeply ideological approach, China’s interim measures on generative AI, in force since August 2023, were a world first. This advantage positions China as an influential player in shaping global norms for AI regulation.

The AI Safety Summit at Bletchley Park. The UK’s much-anticipated summit resulted in a historic commitment by the leading AI countries and companies: to test the most advanced AI models before releasing them to the public.

The Bletchley Declaration identifies the dangers of today’s AI, including bias, threats to privacy, and the production of misleading content. While addressing these immediate concerns, the emphasis was on frontier AI – advanced models that exceed current capabilities – and its serious potential for harm. The signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA, for a total of 28 countries plus the EU.

Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will work with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. This is a significant shift from the traditional model, in which AI companies were solely responsible for the safety of their models.

The summit also produced an agreement to create an international advisory panel on AI risks, inspired by the Intergovernmental Panel on Climate Change (IPCC). Each signatory country will appoint a representative to support a wider group of eminent AI academics in producing state-of-the-science reports. This collaborative approach aims to foster international consensus on AI risks.

The UN’s high-level advisory body on AI. The UN has taken a unique approach by creating a 39-member high-level advisory body on AI. Led by UN Tech Envoy Amandeep Singh Gill, the body will publish its first recommendations by the end of the year, with final recommendations expected next year. The recommendations will be considered at the UN Summit of the Future in September 2024.

Rather than introducing new principles, as previous initiatives have done, the UN advisory body focuses on assessing existing governance initiatives around the world, identifying gaps, and proposing solutions. The tech envoy envisions the UN as a platform for governments to discuss and refine AI governance frameworks.

The OECD’s updated definition of AI. The OECD has officially revised its definition of AI: An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment. The definition is expected to be incorporated into the upcoming EU AI Act.

Disinformation threatens to obscure the truth in the Middle East

The quote ‘A lie can travel around the world while the truth is putting on its shoes’, attributed to Mark Twain, is – ironically – apocryphal.

Disinformation is as old as humanity, and decades old in its currently known form, but social media has amplified its speed and reach. A 2018 MIT study found that lies spread six times faster than the truth, on Twitter at least. Platforms amplify disinformation to different degrees, depending on how many features they have in place that make posts go viral.

Yet all social media platforms have faced disinformation in recent days, as populations grappled with the violence in Israel and Gaza. The platforms have been flooded with graphic images and videos of the conflict, as well as images and videos that had nothing to do with it.

What’s happening? Misattributed images, doctored documents, and old videos taken out of context are circulating online, making it difficult for anyone seeking information about the conflict to separate the false from the true.


Shaping perceptions. Misleading claims are not limited to the conflict zone; they also affect global perceptions and contribute to divided opinions. Individuals, swayed by bias and emotion, take sides on the basis of information that often lacks accuracy or context.

False narratives on platforms like X (formerly known as Twitter) can affect political agendas, with examples of false memos circulating about military aid and alleged money transfers. Even supposedly reliable verified accounts contribute significantly to the spread of false information.

What tech companies are doing. Meta has set up a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers. It is working with fact-checkers, using their assessments to downrank false content in the feed and reduce its visibility. TikTok’s measures are somewhat similar: The company has set up a command centre for its safety team, added moderators fluent in Arabic and Hebrew, and improved its automated detection systems. X has removed hundreds of Hamas-affiliated accounts and removed or flagged thousands of pieces of content. Google and Apple have reportedly disabled live traffic data on online maps of Israel and Gaza. The social messaging platform Telegram has blocked Hamas channels on Android over violations of Google’s app store guidelines.

The EU reacts. The EU ordered X, Alphabet, Meta, and TikTok to remove false content. European Commissioner Thierry Breton reminded them of their obligations under the new Digital Services Act (DSA) and gave X, Meta, and TikTok 24 hours to respond. X confirmed it had removed Hamas-linked accounts, but the EU sent a formal request for information, marking the start of an investigation into compliance with the DSA.

A complicating factor. Earlier this year, however, Meta, Amazon, Alphabet, and Twitter laid off many members of their disinformation teams, part of a post-COVID-19 restructuring aimed at improving their financial performance.

The situation underscores the need for strong measures, including effective fact-checking, regulatory oversight, and platform accountability, to mitigate the impact of disinformation on public perception and global discourse.

IGF 2023

The 2023 Internet Governance Forum (IGF) tackled key issues against a backdrop of global tensions, notably the conflict in the Middle East. With a record 300 sessions, 15 days’ worth of video content, and 1,240 speakers, discussions ranged from the Global Digital Compact (GDC) and AI policy to data governance and bridging the digital divide.

The following ten questions are drawn from the detailed reports of the hundreds of sessions and workshops held at IGF 2023.

1. How do we govern AI? Sessions explored national and international solutions for AI governance, emphasising transparency and questioning whether to regulate AI applications or AI capabilities.

2. What is the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 review process? The IGF’s future is closely tied to the GDC and the WSIS+20 review. The 2025 review could decide the IGF’s fate, and the GDC negotiations, expected in 2024, will also shape the IGF’s trajectory.

3. How can we use the IGF’s wealth of data for an AI-supported, human-centred future? The IGF’s 18 years of data are considered a public good. Discussions focused on using AI to extract insights, improve stakeholder participation, and visually represent discussions through knowledge graphs.

4. How can we mitigate the risks of internet fragmentation? Multidimensional approaches and inclusive dialogue were proposed to prevent unintended consequences.

5. What is at stake in the consultations on the UN cybercrime treaty? Concerns were raised about the treaty’s scope, human rights safeguards, vague definitions of cybercrime, and the role of the private sector in the UN cybercrime treaty negotiations. The emphasis was on clarity, on separating cyber-dependent from cyber-enabled crimes, and on international cooperation.

6. Will the new global tax rules be as effective as we all hope? The IGF debated the potential effectiveness of the OECD/G20 two-pillar solution for global tax rules. Concerns remain about profit shifting, tax havens, and power imbalances between the Global North and the Global South.

7. How do we address disinformation and protect digital communications in wartime? Collaboration between humanitarian organisations, tech companies, and international bodies was deemed essential.

8. How do we strengthen data governance? The conference highlighted the importance of organised and transparent data governance, including clear standards, an enabling environment, and public-private partnerships. The Data Free Flow with Trust (DFFT) concept, introduced by Japan, was discussed as a framework for facilitating global data flows while ensuring security and privacy.

9. How do we bridge the digital divide? The digital divide requires detailed strategies that go beyond connectivity, involving regional initiatives, LEO satellite deployment, and digital literacy efforts. Public-private partnerships, particularly with the RIRs, were highlighted as essential for fostering trust and collaboration.

10. What is the environmental impact of digital technologies? The IGF examined the environmental impact of digital technologies, noting that the sector could cut its emissions by 20% by 2050. Immediate action, collaboration, awareness campaigns, and sustainable policies were advocated to minimise the environmental footprint of digitalisation.

To find out more, read our final report on IGF 2023.

Coming up: UNCTAD eWeek 2023

Organised by the UN Conference on Trade and Development (UNCTAD) in collaboration with eTrade for all partners, UNCTAD eWeek 2023 is scheduled for 4–8 December at the prestigious International Conference Centre Geneva (CICG). The central theme of this groundbreaking event is ‘Shaping the future of the digital economy’.

Ministers, senior officials, CEOs, international organisations, academics, and civil society representatives will gather to address key questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes?

Over the course of the week, participants will take part in more than 150 sessions on themes such as platform governance, the impact of AI on the digital economy, environmentally friendly digital practices, empowering women through digital entrepreneurship, and accelerating digital readiness in developing countries.

The event will explore the key policy areas for building inclusive and sustainable digitalisation at different levels, focusing on innovation, scalable good practices, concrete actions, and achievable measures.

For young people aged 15 to 24, an online consultation has been set up to ensure that their voices are heard in shaping a digital future for all.

The GIP’s just-in-time reporting and Diplo’s sessions at UNCTAD

The GIP will actively participate in eWeek 2023 by reporting from the event. Our human experts will be joined by DiploAI, which will generate reports from all eWeek sessions. Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to follow the reports.

Diplo, the organisation behind the GIP, will also co-organise a session on future scenarios with youth, together with UNCTAD and the Friedrich-Ebert-Stiftung (FES), and a session on digital economy agreements and the future of digital trade rule-making, with CUTS International. Diplo’s own session will be entitled ‘Bottom-up AI and the Right to be Humanly Imperfect’. For more details, visit our Diplo @ UNCTAD eWeek page.


News from the Francophonie

Fratel’s francophone telecoms regulators meet in Rabat to strengthen users’ interests

The National Telecommunications Regulatory Agency (ANRT) of the Kingdom of Morocco, chair of the Francophone Telecommunications Regulation Network (Fratel) in 2023, hosted the network’s 21st annual meeting on 25 and 26 October 2023 in Rabat, on the theme ‘How can the goal of user satisfaction be strengthened in regulation?’. More than 140 participants took part in the meeting, representing 18 regulatory authorities that are members of Fratel, international institutions (the World Bank), consumer associations from various countries, and sector players.

For its 20th anniversary, Fratel focused this year on taking users’ interests into account. Following the network’s 20th seminar, held in May in Lausanne on the theme ‘Why and how should users be involved in regulation?’, the round tables at the annual meeting allowed speakers to discuss the different types of users on whose behalf regulation is exercised and what is being done to meet their needs and even protect them. Discussions also covered how to make information campaigns aimed at these different categories of users more effective and how to support the general public in the face of technological change.

Photo credit: Fratel

During the annual meeting, the network’s new coordination committee for 2024 was elected. It is composed of Mr Marc Sakala, Director General of the ARPCE of the Republic of the Congo (chair), Ms Laure de La Raudière, Chair of Arcep France, and Mr Az-El-Arabe Hassibi, Director General of Morocco’s ANRT (vice-chairs).

The members warmly thanked Luc Tapella, Director of Luxembourg’s ILR, for serving on the coordination committee for the past three years.

The network’s action plan for 2024 was adopted by its members. The themes addressed in 2024 will revolve around the future of networks and regulation on the one hand, and the regulatory challenges of the data and digital services markets on the other.

The next seminar will take place in the first half of 2024 in Togo, on the theme ‘The data economy and digital services: Which technical and economic regulatory challenges?’. The network’s annual meeting will be held in the second half of 2024 and will address ‘Which business models and strategies for telecoms operators in the future?’.

Find out more: www.fratel.org

The OIF continues to mobilise francophone representatives at ICANN

Following the ICANN77 conference in Washington last June, which marked its return to that forum, the OIF took part in the Annual General Meeting of ICANN (the Internet Corporation for Assigned Names and Numbers) in Hamburg from 21 to 26 October 2023. The meeting was an opportunity to continue mobilising the francophone community in order to strengthen the coordination and voice of francophone actors in this internet governance body. ICANN’s main missions are to administer the internet’s numbering resources, such as allocating internet protocol address space and managing the system of top-level domain names (generic and country-code TLDs), and to coordinate technical actors. The OIF has observer status in the GAC (Governmental Advisory Committee), one of ICANN’s four advisory committees, which represents the voice of governments and intergovernmental organisations (IGOs) in this multistakeholder structure.

Photo credit: OIF

Through its Directorate for the Economic and Digital Francophonie, the OIF contributes to coordinating and mobilising francophone actors within the GAC so that common francophone priorities are voiced more strongly and with greater impact.

On the sidelines of the sessions, the OIF organised a coordination meeting of the representatives of Francophonie member states within the Governmental Advisory Committee on Tuesday 24 October 2023. The meeting brought together 18 participants and allowed common priorities to emerge, notably on the domain name system (DNS) and the redelegation of national top-level domains (ccTLDs, or country code top-level domains, which form the last part of an internet address and refer to a specific country or region) to state authorities or national internet civil society actors. Country-code top-level domains remain a recurring issue for some states, particularly in francophone Africa. Beyond the technical questions, redelegation is a genuine matter of national actors’ sovereignty over their national domain name. The issue was illustrated by the Minister of Posts, Telecommunications and the Digital Economy of the Republic of Guinea, Mr Ousmane Gaoual Diallo, who announced at the opening of the GAC sessions that, after many years of work and procedures, his country is at last the manager of ‘.gn’ (the domain name having previously been managed by private actors).

Sharing good practices on redelegation, on the management and operationalisation of ccTLDs, and on capacity building are the priority areas of attention and modes of intervention for resolving similar situations.

The next ICANN meetings will take place in San Juan, Puerto Rico, from 2 to 7 March 2024 (ICANN79 Community Forum) and, above all, in Kigali, Rwanda, for the ICANN80 Policy Forum from 10 to 13 June 2024, which will be a great opportunity for the francophone community to mobilise.

Find out more: www.francophonie.org

Upcoming events:

  • Symposium of the Francophone Network of Media Regulators (REFRAM) in Nouakchott (Mauritania, 16–17 November 2023): Organised by Mauritania’s High Authority for the Press and Audiovisual Media, the symposium’s theme is ‘Broadcasting in the digital age: Achievements and challenges’.
  • eWeek 2023 conference (Geneva, 4–8 December 2023): The OIF is contributing to the programme of this international event through three sessions, on ‘Towards a digital vulnerability index’, ‘How to meet the need for digital skills in francophone Africa?’, and ‘The discoverability of digital content: An imperative for safeguarding cultural diversity’.
  • Training on the negotiation challenges of the Global Digital Compact (7–8 December 2023, online): Organised by the OIF in partnership with ISOC, the training is aimed at the francophone experts in charge of digital affairs within the Permanent Missions to the United Nations in New York.


DW Weekly #137 – 20 November 2023


Dear all,

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping was momentous not so much for what was said (see outcomes further down), but for the fact that it happened at all. 

Over the weekend, the news of Sam Altman’s ousting from OpenAI caused quite a stir. He didn’t need to wait long to find a new home: Microsoft.

Lots more happened, so let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden-Xi Summit cools tensions after long tech standoff

Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping, in San Francisco on the sidelines of the Asia-Pacific Economic Cooperation’s (APEC) Leaders’ Meeting, marked a significant step towards reducing tensions between the two countries. 

Implications for tech policy. Tensions, especially over technology, have been escalating for years. For instance, in August, the US government issued a new executive order banning several Chinese-owned apps and software products from its market. The order was met with some trepidation by tech companies operating in both countries, as it was unclear how it would affect their businesses. But now, after Biden and Xi’s meeting, there is hope that tensions between the two countries will ease and that this softening will cover many aspects, including tech cooperation and policy. At least, so we hope.

Responsible competition. Prior to their closed-door meeting, the two leaders pragmatically acknowledged that the USA and China have contrasting histories, cultures, and social systems. Yet, President Xi said, ‘As long as they respect each other, coexist in peace, and pursue win-win cooperation, they will be fully capable of rising above differences and find the right way for the two major countries to get along with each other’. Biden earlier had said, ‘We have to ensure that competition does not veer into conflict. And we also have to manage it responsibly.’

State meeting with Xi, Biden, and staff. Credit: @POTUS on X.

Cooperation on AI. Among other topics, the two presidents agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through US-China government talks,’ the post-summit White House readout said. It’s unclear what this means exactly, given that both China and the USA have already introduced the first elements of an AI framework. The fact that they brought this up, however, means that the USA certainly wants to stop any trace of AI technology theft in its tracks. But what’s in it for China?

US investment. A high-level diplomat suggested to Bloomberg that Xi’s address asking US executives to invest more in China was a signal that China needs US capital because of mistakes at home that have hurt China’s economic growth. If US Ambassador to Japan Rahm Emanuel is right, that explains why cooperation is a win-win outcome.

Tech exports. There’s a significant ‘but’ to the appearance of a thaw. Cooperation will continue as long as advanced US technologies are not used by China to undermine US national security. The readout continued: ‘The President emphasised that the United States will continue to take necessary actions’ to prevent this from happening, at the same time ‘without unduly limiting trade and investment’.

Unreported? Undoubtedly, there were other undisclosed topics discussed by the two leaders during their private meeting. For instance, what happened to the ‘likely’ deal on banning AI from autonomous weapon systems, including drones, which a Chinese embassy official hinted at before the meeting and on which the USA took a new political stand just two days prior?

Although it’s too early to see any significant positive ripple effects from the meeting, we’ll let the fact that Biden and Xi met face to face sink in a little bit. After all, as International Monetary Fund Managing Director Kristalina Georgieva told Reuters, the meeting was a badly needed signal that the world can cooperate more.


Digital policy roundup (13–20 November)

// AI //

Sam Altman ousted from OpenAI, joins Microsoft

Sam Altman, the CEO of OpenAI, who was fired on Friday in a surprise move by the company’s board, will now be joining Microsoft. Altman will lead a new AI innovation team at Microsoft, CEO Satya Nadella announced today (Monday). Fellow OpenAI co-founder Greg Brockman, who was removed from the board, will also join Microsoft.

Although Twitch co-founder Emmett Shear has been appointed as interim CEO, OpenAI’s future is far from stable: A letter signed by over 700 OpenAI employees has demanded the resignation of the board and the reinstatement of Altman (which might not even be possible at this stage).

Why is it relevant? First, Altman was the driving force behind the company – and its technology – which pushed the boundaries in AI and machine learning in such a short and impactful time. More than that, Altman was OpenAI’s main fundraiser; the new CEO will have big shoes to fill. Second, Microsoft has been a major player in the world of AI for many years; Altman’s move will further increase Microsoft’s already significant influence in this field. Third, tech companies can be as volatile as stock markets.

Sam Altman shows off an OpenAI badge which, he said, he was wearing for the last time.

US Senate’s new AI bill to make risk assessments and AI labels compulsory

A group of US senators have introduced a bill to establish an AI framework for accountability and certification based on two categories of AI systems – high-impact and critical-impact ones. The AI Research, Innovation, and Accountability Act of 2023 – or AIRIA – would also require internet platforms to implement a notification mechanism to inform users when the platform is using generative AI.
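
To make that notification requirement concrete, here is a minimal sketch – our own hedged illustration, not language from the bill – of how a platform might disclose generative-AI content to users. Every name in it (PlatformContent, ai_generated, model_attribution) is invented:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlatformContent:
    body: str
    ai_generated: bool               # did a generative model produce this?
    model_attribution: Optional[str] = None

def render_with_notice(content: PlatformContent) -> str:
    """Append a user-facing notice whenever content is AI-generated."""
    if not content.ai_generated:
        return content.body
    source = content.model_attribution or "a generative AI system"
    return f"{content.body}\n[Notice: generated by {source}]"

print(render_with_notice(PlatformContent("Here is a suggested reply.", True, "assistant-v1")))
```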

Joint effort. Under the bill, introduced by members of the Senate Commerce Committee, the National Institute of Standards and Technology (NIST) will be tasked with developing risk-based guidelines for high-impact AI systems. Companies using critical-impact AI will be required to conduct detailed risk assessments and comply with a certification framework established by independent organisations and the Commerce Department.

Why is it relevant? The bipartisan AIRIA is the latest US effort to establish AI rules, closely following President Biden’s Executive Order on Safe, Secure, and Trustworthy AI. It’s also the most comprehensive AI legislation introduced in the US Congress to date.


// IPR //

Music publishers seek court order to stop Anthropic’s AI models from training on copyrighted lyrics

A group of music publishers have requested a US federal court judge to block AI company Anthropic from reproducing or distributing their copyrighted song lyrics. The publishers also want the AI company to implement effective measures that would prevent its AI models from using the copyrighted lyrics to train future AI models. 

The publishers’ request is part of a lawsuit they filed on 18 October. The case continues on 29 November.

Why is it relevant? First, although the lawsuit is not new, the music publishers’ request for a preliminary injunction shows how impatient copyright holders are with AI companies allegedly using copyrighted materials. Second, the case raises once more the issue of fair use: In a letter to the US Copyright Office last month, Anthropic argued that its models use copyrighted data only for statistical purposes and not for copying creativity.

Case details: Concord Music Group, Inc. v Anthropic PBC, District Court, M.D. Tennessee, 3:23-cv-01092.



// CONNECTIVITY //

Amazon’s Project Kuiper’s successful Protoflight mission

The team behind Amazon’s Project Kuiper, the company’s satellite internet network, has successfully tested the prototype satellites launched on 6 October. Watch this video to see the Project Kuiper team testing a two-way video call from an Amazon site in Texas. The next step is to start mass-producing the satellites for deployment in 2024.




// DMA //

Meta and others challenge DMA gatekeeper status

A number of tech companies are challenging the European Commission’s gatekeeper designations, which place them within the scope of the new Digital Markets Act. Among the companies:

  • Meta (Case T-1078/23): The company disagrees with the Commission’s decision to designate its Messenger and Marketplace services under the new law, but does not challenge the inclusion of Facebook, WhatsApp, or Instagram.
  • Apple (Cases T-1079/23 & T-1080/23): Details aren’t public but media reports said the company was challenging the inclusion of its App Store on the list of gatekeepers.
  • TikTok (Case T-1077/23): The company said its designation risked entrenching the power of dominant tech companies.

Microsoft and Google decided not to challenge their gatekeeper status.

Why is it relevant? The introduction of the Digital Markets Act has far-reaching implications for the operations of tech giants. These legal challenges are a first attempt to block its effective implementation, and the outcomes could set a precedent for the future regulation of digital markets in the EU.


The week ahead (20–27 November)

20 November–15 December: The ITU’s World Radiocommunication Conference, which starts today (Monday) in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.

21–23 November: The 8th European Cyber Week (ECW) will be held in Rennes, France, and will bring together cybersecurity and cyber defence experts from the public and private sectors.

27–29 November: The 12th UN Forum on Business and Human Rights will be held in a hybrid format next week to discuss effective change in implementing obligations, responsibilities, and remedies.


#ReadingCorner

Copyright lawsuits: Who’s really protected?

Microsoft, OpenAI, and Adobe are all promising to defend their customers against intellectual property lawsuits, but that guarantee doesn’t apply to everyone. Plus, those indemnities are narrower than the announcements suggest. Read the article.

Guarding artistic creations by polluting data

Data poisoning is a technique for protecting copyrighted artwork from being used to train generative AI models. It involves imperceptibly changing the pixels of digital artwork in a way that ‘poisons’ any AI model that ingests it for training, degrading the model until it is functionally useless. While it has primarily been used by content creators against web scrapers, it has many other uses. Data poisoning is not as straightforward as it sounds, however: effective attacks require a targeted approach to polluting the datasets. Read the article.
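
For illustration only, here is a toy sketch of the underlying mechanical idea – changing pixels below the threshold of human perception. As the article stresses, real poisoning tools rely on carefully targeted perturbations, so this random-noise example would not by itself defeat a model; it merely shows how little an image needs to change to look identical to a human:

```python
import numpy as np
from PIL import Image

def perturb_pixels(path_in: str, path_out: str, epsilon: int = 2) -> None:
    """Shift every pixel by at most `epsilon` intensity levels per channel –
    far below what the human eye notices at 8-bit colour depth."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(path_out)

perturb_pixels("artwork.png", "artwork_protected.png")  # hypothetical filenames
```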


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 84 – November 2023


Snapshot: What’s making waves in digital policy?

Geopolitics

The US Department of Commerce (DoC) Bureau of Industry and Security (BIS) announced a tightening of export restrictions on advanced semiconductors to China and other nations subject to arms embargoes. The decision elicited a strong reaction from China, which labelled the measures ‘unilateral bullying’ and an abuse of export control mechanisms.

Further complicating the US-China tech landscape, there are discussions within the US government about restricting Chinese companies’ access to cloud services. If implemented, this move could have significant consequences for both nations, particularly for major players like Amazon Web Services and Microsoft. Finally, Canada has banned Chinese and Russian software from government-issued devices, citing security concerns.

AI governance

A leaked draft text suggests that Southeast Asian countries, under the umbrella of the Association of Southeast Asian Nations (ASEAN), are adopting a business-friendly approach to AI regulation: The draft guide to AI ethics and governance asks companies to consider cultural differences and does not prescribe categories of unacceptable risk. Meanwhile, Germany has introduced an AI action plan intended to advance AI development at the national and European levels, in competition with the predominant AI powers, the USA and China.

Read more on AI governance below.

Security

The heads of security agencies from the USA, the UK, Australia, Canada, and New Zealand, collectively known as the Five Eyes, have publicly cautioned about China’s widespread espionage campaign to steal commercial secrets. The European Commission has announced a comprehensive review of security risks in vital technology domains, including semiconductors, AI, quantum technologies, and biotechnologies. ChatGPT faced outages on 8 November, believed to be a result of a distributed denial-of-service (DDoS) attack. Hacktivist group Anonymous Sudan claimed responsibility. Finally, Microsoft’s latest Digital Defense Report revealed a global increase in cyberattacks, with government-sponsored spying and influence operations on the rise. 

Infrastructure

The US Federal Communications Commission (FCC) voted to initiate the process of restoring net neutrality rules. Initially adopted in 2015, these rules were repealed under the previous administration but are now poised for reinstatement. 

Access Now, the Internet Society, and 22 other organisations and experts have jointly sent a letter to the Telecom Regulatory Authority of India (TRAI) opposing the enforcement of discriminatory network costs or licensing regimes for online platforms.

Internet economy

Alphabet’s Google reportedly paid a substantial USD 26.3 billion to other companies in 2021 to ensure its search engine remained the default on web browsers and mobile phones, as revealed during the US Department of Justice’s (DoJ) antitrust trial. Over similar anticompetitive concerns, the Japan Fair Trade Commission (JFTC) has opened an antimonopoly investigation into Google’s web search dominance.

The European Central Bank (ECB) has decided to commence a two-year preparation phase, starting 1 November 2023, to finalise regulations and select private-sector partners ahead of a possible launch of a digital version of the euro; implementation would follow only after a green light from policymakers. In parallel, the European Data Protection Board (EDPB) has called for enhanced privacy safeguards in the European Commission’s proposed digital euro legislation.

Digital rights

The Council presidency and the European Parliament have reached a provisional agreement on a new framework for a European digital identity (eID) to provide all Europeans with a trusted and secure digital identity. Under the new agreement, member states will provide citizens and businesses with digital wallets that link their national digital identities with other personal attributes, such as driver’s licences and diplomas.

The European Parliament’s Internal Market and Consumer Protection Committee has passed a report warning of the addictive nature of certain digital services and advocating tighter regulation to combat addictive design on digital platforms. On a similar note, the European data regulator has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram.

Key political groups in the European Parliament have reached a consensus on draft legislation compelling internet platforms to detect and report child sexual abuse material (CSAM) to prevent its dissemination on the internet.

Content policy

Meta, the parent company of Facebook and Instagram, is confronting a legal battle initiated by over 30 US states. The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13. 

The EU has formally requested details on anti-disinformation measures from Meta and TikTok. Against the backdrop of the Middle East conflict, the EU emphasises the risks associated with the widespread dissemination of illegal content and disinformation.

The UK’s Online Safety Act, imposing new responsibilities on social media companies, has come into effect. This law aims to enhance online safety and holds social media platforms accountable for their content moderation practices.

Development

The Gaza Strip has faced three internet blackouts since the start of the conflict, prompting Elon Musk’s SpaceX’s Starlink to offer internet access to internationally recognised aid organisations in Gaza. Meanwhile, environmental NGOs are urging the EU to take action on electronic waste, calling for a revision of the Waste Electrical and Electronic Equipment Directive (WEEE Directive), per the European Environmental Bureau’s communication.

THE TALK OF THE TOWN – GENEVA

As agreed during the regular session of the ITU Council in July 2023, an additional session dedicated to confirming logistical and organisational planning for 2024–2026 was held in October 2023. It was preceded by a cluster of Council Working Group (CWG) and Expert Group (EG) meetings, where chairs and vice-chairs were appointed for the period until the 2026 Plenipotentiary Conference. The next cluster of CWG and EG meetings will take place from 24 January to 2 February 2024.

The 3rd Geneva Science and Diplomacy Anticipator (GESDA) Summit saw the launch of the Open Quantum Institute (OQI), a partnership among the Swiss Federal Department of Foreign Affairs (FDFA), CERN, and UBS. The OQI aims to make high-performance quantum computers accessible to all users devoted to finding solutions that accelerate progress towards the sustainable development goals (SDGs). The OQI will be hosted at CERN beginning in March 2024 and will facilitate the exploration of the technology’s use cases in health, energy, climate protection, and more.


Shaping the global AI landscape

Month in, month out, we spent most of 2023 reading and writing about AI governance, and October was no exception. As the world grapples with the complexities of this technology, the following initiatives showcase efforts to navigate its ethical, safety, and regulatory challenges on both national and international fronts.

Biden’s executive order on AI. The order represents the most substantial effort by the US government to regulate AI to date. Unveiled to much anticipation, the order provides actionable directives where possible and calls for bipartisan legislation where necessary, particularly on data privacy.

Image credit: CNBC

One standout feature is the emphasis on AI safety and security. Developers of the most potent AI systems are now mandated to share safety test results and critical information with the US government. Additionally, AI systems utilised in critical infrastructure sectors are subjected to rigorous safety standards, reflecting a proactive approach to mitigating potential risks associated with AI deployment.

Unlike some emerging AI laws, such as the EU’s AI Act, Biden’s order takes a sectoral approach. It directs specific federal agencies to focus on AI applications within their domains. For instance, the Department of Health and Human Services is tasked with advancing responsible AI use in healthcare, while the DoC is directed to develop guidelines for content authentication and watermarking to label AI-generated content clearly. The DoJ is instructed to address algorithmic discrimination, showcasing a nuanced and tailored approach to AI governance.

Beyond regulations, the executive order aims to bolster the US’s technological edge. It facilitates the entry of highly skilled workers into the country, recognising their pivotal role in advancing AI capabilities. The order also prioritises AI research through funding initiatives, increased access to AI resources and data, and the establishment of new research structures.

G7’s guiding principles. Simultaneously, the G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI.

Image credit: Politico

The 11 guiding principles centre on risk-based responsibility. The G7 encourages developers to implement reliable content authentication mechanisms, signalling a commitment to transparency in AI-generated content.

A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. The EU promptly welcomed these principles, citing their potential to complement the legally binding rules under the EU AI Act internationally.

While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain respects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
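
As a hedged illustration of the provenance idea (real standards, such as C2PA, and in-model watermarking are far more sophisticated), the sketch below shows the basic principle of a cryptographically verifiable tag binding a piece of generated content to the model that produced it. The key and field names are our assumptions:

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # assumed: held privately by the AI provider

def tag_content(content: bytes, model_id: str) -> dict:
    """Bind generated content to its generator with a keyed hash."""
    digest = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"model": model_id, "hmac_sha256": digest}

def verify_tag(content: bytes, record: dict) -> bool:
    """Check that the content still matches its provenance record."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac_sha256"])

record = tag_content(b"generated image bytes", "model-x")
assert verify_tag(b"generated image bytes", record)
```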

Differing viewpoints on AI regulation among G7 countries are acknowledged, ranging from strict enforcement to more innovation-friendly guidelines. However, some provisions, such as those related to privacy and copyright, are criticised for their vagueness, raising questions about their potential to drive tangible change.

China’s Global AI Governance Initiative (GAIGI). China unveiled its GAIGI during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums.

This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act. Additionally, the GAIGI supports consensus-based frameworks and provides vital support to developing nations in building their AI capacities.

China’s proactive approach to regulating its homegrown AI industry has granted it a first-mover advantage. Despite its deeply ideological approach, China’s interim measures on generative AI, effective since August this year, were a world first. This advantage positions China as a significant influencer in shaping global standards for AI regulation.


AI Safety Summit at Bletchley Park. The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.

The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While addressing these immediate concerns, the focus shifted to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm. Signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU.


Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. This marks a significant departure from the traditional model, where AI companies were solely responsible for ensuring the safety of their models.

The summit resulted in an agreement to form an international advisory panel on AI risk, inspired by the Intergovernmental Panel on Climate Change (IPCC). Each signatory country will nominate a representative to support a larger group of leading AI academics, producing State of the Science reports. This collaborative approach aims to foster international consensus on AI risk.

UN’s High-Level Advisory Body on AI. The UN has taken a unique approach by launching a High-Level Advisory Body on AI, comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body will publish its first recommendations by the end of this year, with final recommendations expected next year. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.

Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks. 

OECD’s updated AI definition. The OECD has officially revised its definition of AI, which now reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’ It is anticipated that this definition will be incorporated into the EU’s upcoming AI regulation.


Misinformation crowding out the truth in the Middle East

It is said that a lie can travel halfway around the world while the truth is still putting on its shoes. It is also said it was Mark Twain who coined it – which, ironically, is untrue.

Misinformation is as old as humanity, and decades old in its current recognisable form, but social media has amplified its speed and scale. An MIT report from 2018 found that lies spread six times faster than the truth – on Twitter, that is. Platforms amplify misinformation to different degrees, depending on how many virality mechanisms they have in place.

Yet all social media platforms have struggled with misinformation in recent days. As people grapple with the violence unfolding in Israel and Gaza, platforms have become inundated with graphic images and videos of the conflict – and with images and videos that have nothing to do with it.

What’s happening? Miscaptioned imagery, altered documents, and old videos taken out of context are circulating online. This makes it hard for anyone looking for information about the conflict to parse falsehood from truth.

Two tiles form FA and then bifurcate to two possible endings: FACT and FAKE.

Shaping perceptions. Misleading claims are not confined to the conflict zone; they also impact global perceptions and contribute to the polarisation of opinions. Individuals, influenced by biases and emotions, take sides based on information that often lacks accuracy or context. 

False narratives on platforms like X (formerly known as Twitter) can influence political agendas, with instances of fake memos circulating about military aid and allegations of fund transfers. Even supposedly reliable verified accounts contribute significantly to the dissemination of misinformation.

What tech companies are doing. Meta has established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers. It is working with fact-checkers, using their ratings to downrank false content in the feed and reduce its visibility. TikTok’s measures are somewhat similar: The company established a command centre for its safety team, added moderators proficient in Arabic and Hebrew, and enhanced automated detection systems. X removed hundreds of Hamas-linked accounts and took down or flagged thousands of pieces of content. Google and Apple reportedly disabled live traffic data on their online maps for Israel and Gaza. Social messaging platform Telegram blocked Hamas channels on Android due to violations of Google’s app store guidelines.
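
A minimal sketch of the downranking idea – an assumed toy scoring model for illustration, not Meta’s actual system – could look like this: content flagged by fact-checkers keeps circulating but has its feed-ranking score scaled down so it surfaces less often.

```python
from typing import Optional

# Penalty weights are invented for illustration.
PENALTIES = {"false": 0.1, "altered": 0.2, "partly_false": 0.5}

def downranked_score(base_score: float, fact_check_rating: Optional[str]) -> float:
    """Scale a feed-ranking score down according to a fact-check rating;
    unrated content keeps its original score."""
    return base_score * PENALTIES.get(fact_check_rating, 1.0)

assert downranked_score(0.8, "false") < downranked_score(0.8, None)
```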

The EU reacts. The EU ordered X, Alphabet, Meta, and TikTok to remove fake content. European Commissioner Thierry Breton reminded them of their obligations under the new Digital Services Act (DSA), giving X, Meta, and TikTok 24 hours to respond. X confirmed removing Hamas-linked accounts, but the EU sent a formal request for information, marking the beginning of an investigation into compliance with the DSA.

Complicating matters. However, earlier this year, Meta, Amazon, Alphabet, and Twitter laid off many of the team members focused on misinformation, as part of a post-pandemic restructuring aimed at improving financial efficiency.

The situation underscores the need for robust measures, including effective fact-checking, regulatory oversight, and platform accountability, to mitigate the impact of misinformation on public perception and global discourse.


IGF 2023

The Internet Governance Forum (IGF) 2023 addressed pressing issues amid global tensions, including the Middle East conflict. With a record-breaking 300 sessions, 15 days of video content, and 1,240 speakers, debates covered topics from the Global Digital Compact (GDC) and AI policy to data governance and narrowing the digital divide.

The following ten questions are derived from detailed reports on hundreds of workshops and sessions at the IGF 2023:


1. How can AI be governed? Sessions explored national and international AI governance options, emphasising transparency and questioning the regulation of AI applications or capabilities.

2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process? The future of the IGF is closely tied to the GDC and the WSIS+20 Review Process. The 2025 review may decide the IGF’s fate, and negotiations on the GDC, expected in 2024, will also impact the IGF’s trajectory.

3. How can we use the IGF’s wealth of data for an AI-supported, human-centred future? The IGF’s 18 years of data is considered a public good. Discussions explored using AI to gain insights, enhance multistakeholder participation, and visually represent discussions through knowledge graphs.

4. How can risks of internet fragmentation be mitigated? Multidimensional approaches and inclusive dialogue were proposed to prevent unintended consequences.

5. What challenges arise from the negotiations on the UN treaty on cybercrime? Concerns were raised about the treaty’s scope, human rights safeguards, vague definitions of cybercrime, and the role of the private sector in the negotiations. Clarity, the separation of cyber-dependent and cyber-enabled crimes, and international cooperation were emphasised.

6. Will the new global tax rules be as effective as everyone hopes for? The IGF discussed the potential effectiveness of the OECD/G20’s two-pillar solution for global tax rules. Concerns lingered about profit-shifting, tax havens, and power imbalances between Global North and South nations.

7. How can misinformation and protection of digital communication be addressed during times of war? Collaborative efforts between humanitarian organisations, tech companies, and international bodies were deemed essential.

8. How can data governance be strengthened? The discussion emphasised the importance of organised and transparent data governance, including clear standards, an enabling environment, and public-private partnerships. The Data Free Flow with Trust (DFFT) concept, introduced by Japan, was discussed as a framework to facilitate global data flows while ensuring security and privacy.

9. How can the digital divide be bridged? Bridging the digital divide requires comprehensive strategies that go beyond connectivity, involving regional initiatives, LEO satellite deployment, and digital literacy efforts. Public-private partnerships, especially with regional internet registries (RIRs), were highlighted as crucial for fostering trust and collaboration.

10. How do digital technologies impact the environment? The IGF explored the environmental impact of digital technologies, highlighting the potential to cut emissions by 20% by 2050. Immediate actions, collaborative efforts, awareness campaigns, and sustainable policies were advocated to minimise the environmental footprint of digitalisation.
Read more in our IGF 2023 Final report.


Upcoming: UNCTAD eWeek 2023

Organised by the UN Conference on Trade and Development (UNCTAD) in collaboration with eTrade for all partners, the UNCTAD eWeek 2023 is scheduled from 4 to 8 December at the prestigious International Conference Center Geneva (CICG). The central theme of this transformative event is ‘Shaping the future of the digital economy’.

Ministers, senior government officials, CEOs, international organisations, academia, and civil society will convene to address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes?

Over the week, participants will join more than 150 sessions addressing themes including platform governance, the impact of AI on the digital economy, eco-friendly digital practices, the empowerment of women through digital entrepreneurship, and the acceleration of digital readiness in developing countries. 

The event will explore key policy areas for building inclusive and sustainable digitalisation at various levels, focusing on innovation, scalable good practices, and concrete, actionable steps.

For youth aged 15–24, there’s a dedicated online consultation to ensure their voices are heard in shaping the digital future for all.

Stay up-to-date with GIP reporting!

The GIP will be actively involved in eWeek 2023 by providing reports from the event. Our human experts will be joined by DiploAI, which will generate reports from all eWeek sessions. Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to follow the reports.

Diplo, the organisation behind the GIP, will also co-organise a session entitled ‘Scenario of the Future with the Youth’ with UNCTAD and Friedrich-Ebert-Stiftung (FES), and a session entitled ‘Digital Economy Agreements and the Future of Digital Trade Rulemaking’ with CUTS International. Diplo’s own session will be titled ‘Bottom-up AI and the Right to be Humanly Imperfect’. For more details, visit our Diplo @ UNCTAD eWeek page.



DW Weekly #136 – 13 November 2023


Dear all,

The ongoing Middle East conflict has made us realise how dangerous and divisive hate speech can be. With illegal content on the rise, governments are putting on pressure and launching new initiatives to help curb the spread. But can these initiatives truly succeed, or are they just another drop in the ocean?

In other news, policymakers are working towards semantic alignment in AI rules, while tech companies are offering indemnity for legal expenses related to copyright infringement claims originating from AI technology.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governments ramp up pressure on tech companies to tackle fake news and hate speech

Rarely have we witnessed a week quite like the last one, where so much scrutiny was levelled at social media platforms over the rampant spread of disinformation and hate speech. You can tell that leaders are worried about AI’s misuse by terrorists and violent extremists for propaganda, recruitment, and the orchestration of attacks. The fact that so many elections are around the corner raises the stakes even more.

Christchurch Call. In a week dominated by high-stakes discussions, global leaders, including French President Emmanuel Macron and former New Zealand leader Jacinda Ardern, gathered in Paris for the annual Christchurch Call meeting. The focal point was a more concerted effort to combat online extremism and hate speech, a battle that has gained momentum since the far-right shooting at a New Zealand mosque in 2019.

Moderation mismatch. In Paris, Macron seized the opportunity to criticise social media giants. In an interview with the BBC, he slammed Meta and Google for what he termed a failure to moderate terrorist content online. The revelation that Elon Musk’s X platform had only 2,294 content moderators, significantly fewer than its counterparts, fuelled concerns about the platforms’ efficacy.

UNESCO’s battle cry. Meanwhile, UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling it a ‘major threat to stability and social cohesion’. UNESCO unveiled an action plan (in the form of guidelines), backed by global consultations and a public opinion survey, emphasising the urgent need for coordinated action against this digital scourge. But while the plan is ambitious, its success hinges on adherence to non-binding recommendations. 

Political ads. On another front, EU co-legislators reached a deal on the transparency and targeting of political advertising. Stricter rules will now prohibit targeted ad-delivery techniques involving the processing of personal data in political communications. A public repository for all online political advertising in the EU is set to be managed by an EU Commission-established authority. ‘The new rules will make it harder for foreign actors to spread disinformation and interfere in our free and democratic processes. We also secured a favourable environment for transnational campaigning in time for the next European Parliament elections,’ lead MEP Sandro Gozi said. In the EU’s case, success hinges not on adherence, but on effective enforcement. 

Use of AI. Simultaneously, Meta, the parent company of Facebook and Instagram, published a new policy in response to the growing impact of AI on political advertising (after the press disclosed the policy’s existence). Starting next year, Meta will require organisations placing political ads to disclose when they use AI software to generate part or all of those ads. Meta will also prohibit advertisers from using AI tools built into Meta’s ad platform to generate ads in a variety of categories, including housing, credit, financial services, and employment. Although we’ve come to look at self-regulation with mixed feelings, the new policy – which will apply globally – is ‘one of the industry’s most significant AI policy choices to come to light to date’, to quote Reuters.

Crack-down in India. Even India joined the fray, with its Ministry of Electronics and Information Technology issuing a stern statement on the handling of misinformation. Significant social media platforms with over 5 million users must comply with strict timeframes for identifying and deleting false content.

As policymakers and tech giants grapple with the surge of online extremism and disinformation, it’s clear that much more needs to happen. The scale of the problem demands a tectonic change, one that goes beyond incremental measures. The much-needed epiphany could lie in the shared understanding and acknowledgement of the severity of the problem. While it might not bring about an instant solution, collective recognition of the problem could serve as a catalyst for a significant breakthrough.


Digital policy roundup (6–13 November)

// AI //

OECD updates its definition of AI system

The OECD’s council has agreed to a new definition of AI system, which reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’

Compared with the 2019 version, the new definition adds content as a possible output, a nod to generative AI systems.

Why is it relevant? First, the EU, which aligned its AI Act with the OECD’s 2019 definition, is expected to integrate the revised definition into its draft law, presently at trilogue stage. As yet, no documents reflecting the new definition have been published. Second, the EU’s push towards semantic alignment extends further. The EU and USA are currently working on a common taxonomy, or classification system, for key concepts, as part of the EU-US Trade and Technology Council’s work. The council is seeking public input on the draft taxonomy and other work areas until 24 November.


Hollywood actors and studios reach agreement over use of AI 

Hollywood actors have finally reached a (tentative) deal with studios, bringing an end to a months-old strike. One of the disagreements was on the use of AI: Under the new deal, producers will be required to get consent and compensate actors for the creation and use of digital replicas of actors, whether created on set or licensed for use. 

The film and television industry faced significant disruptions due to a strike that began in May. The underlying rationale was this: While it’s impossible to halt the progress of AI, actors and writers could fight for more equitable compensation and fairer terms. Hollywood’s film and television writers reached an agreement in October, but negotiations between studios and actors were at an impasse until last week’s deal.

Why is it relevant? First, it’s a prime example of how AI has been disrupting creative industries and drawing concerns from actors and writers, despite earlier scepticism. Second, The Economist argues that AI could make a handful of actors omnipresent – and hence, eventually, boring for audiences. But we think fans just want a good storyline, regardless of whether the well-loved artist is merely a product of AI.


OpenAI’s ChatGPT hit by DDoS attack

OpenAI was hit by a cyberattack last week, resulting in a major outage of ChatGPT and its API. The attack was suspected to be a distributed denial-of-service (DDoS) attack, which is meant to disrupt access to an online service by flooding it with traffic. When the outage first occurred, OpenAI reported that the problem had been identified and a fix deployed. But the outage continued the next day, with the company confirming that it was ‘dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack’.
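
By way of illustration, one common building block of DDoS mitigation is rate limiting at the edge of a service. The sketch below shows a basic fixed-window limiter – our own example with assumed limits, not OpenAI’s actual defence (absorbing a large-scale attack additionally requires upstream filtering across many machines):

```python
import time
from collections import defaultdict
from typing import Dict, Tuple

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 100  # assumed per-client budget

# client id -> (window index, request count in that window)
_counters: Dict[str, Tuple[int, int]] = defaultdict(lambda: (-1, 0))

def allow_request(client_id: str) -> bool:
    """Return True while the client stays within its per-window budget."""
    window = int(time.time()) // WINDOW_SECONDS
    last_window, count = _counters[client_id]
    if window != last_window:
        last_window, count = window, 0
    count += 1
    _counters[client_id] = (last_window, count)
    return count <= MAX_REQUESTS_PER_WINDOW
```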

Responsible. Anonymous Sudan claimed responsibility for the attack, which the group said was in response to OpenAI’s collaboration with Israel and the OpenAI CEO’s willingness to invest more in the country.

Screenshot of a message from Anonymous Sudan entitled ‘Some reasons why we targeted OpenAI and ChatGPT’ lists four reasons: (1) OpenAI’s cooperation with the state of Israel, (2) use of AI for weapons and oppression, (3) it is an American company, and (4) it has a bias toward Israel (summary of the list).



// COMPETITION //

G7 ready to tackle AI-driven competition risks; more discussion on genAI needed

Competition authorities from G7 countries believe they already have the legal authority to address AI-driven competitive harm – authority that could be further complemented by AI-specific policies, according to a communiqué published at the end of last week’s summit in Tokyo.

When it comes to emerging technologies such as generative AI, however, the G7 competition authorities say that ‘further discussions among us are needed on competition and contestability issues raised by those technologies and how current and new tools can address these adequately.’

Why is it relevant? Unlike other areas of AI governance, competition issues are not a matter of which new laws to enact, but rather how to interpret existing legal frameworks. How could this be done? Competition authorities have suggested that government departments, authorities, and regulators should (a) give proper consideration to the role of effective competition alongside other issues and (b) collaborate closely with each other to tackle systemic problems consistently.


// COPYRIGHT //

OpenAI launches Copyright Shield to cover customers’ legal fees for copyright infringement claims

Sam Altman, the CEO of OpenAI, has announced that the company will cover the legal expenses of business customers faced with copyright infringement claims stemming from using OpenAI’s AI technology. The decision responds to escalating concerns that AI technology across the industry is being trained on protected content without the authors’ consent.

This initiative, called Copyright Shield, was announced together with a host of other improvements to ChatGPT. Here’s the announcement: ‘OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield – we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.’

Why is it relevant? Offering to cover legal costs has become a trend: Microsoft announced legal protection in September for users of its Copilot AI services facing copyright infringement lawsuits, and Google followed suit a month later, adding a second layer of indemnity to also cover AI-generated output. Details of how these indemnities will be implemented are not yet entirely clear.


Screenshot of Meta’s info sheet presenting European users with a new choice: subscribe to use Instagram without ads, starting at €12.99/month (with their info not used for ads), or continue to use it for free with personalised ads, agreeing to Meta using their account information and cookies for advertising.

// PRIVACY //

Meta tells Europeans: Pay or Okay

Meta has rolled out a new policy for European users: Allow Facebook and Instagram to show personalised ads based on user data, or pay a subscription fee to remove ads. But there’s a catch – even if subscribers sign up to remove ads, the company will still gather their data – it just won’t use that data to show them ads. Privacy experts have seen this coming. A legal fight is definitely on the horizon.


// TAXATION //

Apple suffers setback over sweetheart tax case involving Ireland

The Apple-Ireland state aid case, ongoing for almost a decade, is set to be decided by the EU’s Court of Justice, and things don’t look too good for Apple. The current chapter of the case involves a European Commission decision which found that Apple owed Ireland EUR 13 billion (USD 13.8 billion) in unpaid taxes over a tax arrangement that, in the Commission’s view, amounted to illegal state aid. In 2020, the General Court annulled that decision, and the European Commission appealed.

Last week, the Court of Justice’s advocate general said the General Court made legal errors, and the annulment should be set aside. Advocate General Giovanni Pitruzzella advises the court to refer the case back to the lower court for a new decision.

Why is it relevant? First, the new opinion confirms the initial reaction of the European Commission, which at the time had said that the General Court made legal errors. Second, although the advocate general’s opinion is non-binding, it is usually given considerable weight by the court. 

Case details: Commission v Ireland and Others, C-465/20 P


The week ahead (13–20 November)

13–16 November: Cape Town, South Africa, will host the Africa Tech Festival, a four-day event expected to bring together around 12,000 participants from the policy and technology sectors. There are three tracks: AfricaCom is dedicated to telecoms, connectivity, and digital infrastructure; AfricaTech explores innovative and disruptive technologies; AfricaIgnite is dedicated to entrepreneurs.

15 November: The much-anticipated meeting between US President Joe Biden and Chinese President Xi Jinping will take place on the sidelines of the Asia-Pacific Economic Cooperation (APEC) leaders’ meeting in San Francisco. Both sides will be looking for a way to smooth relations, not least on technology issues.

20 November–15 December: The ITU’s World Radiocommunication Conference, taking place in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.


#ReadingCorner

The scourge of disinformation and hate speech during elections

There is no doubt that the use of social media as a daily source of information has grown a lot over the past 15 years. But did you know that it has now surpassed print media, radio, and TV? This leaves citizens particularly exposed to disinformation and hate speech, which are highly prevalent on social media. The Ipsos UNESCO survey on the impact of online disinformation and hate speech sheds light on the growing problem, especially during elections.


Screenshot of a Telegeography submarine cable map

One world, two networks? Not yet…

One of the biggest fears among experts is that the tensions between the USA and China could fragment the internet. Telegeography research director Alan Mauldin assesses the impact on the submarine cable industry. If you’re into slide decks, download Mauldin’s presentation.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #135 – 06 November 2023


Dear readers,

Last week’s AI Safety Summit, hosted by the UK government, was on everyone’s radar. Despite coming just days after the US President’s Executive Order on AI and the G7’s guiding principles on AI, the summit served to initiate a global process on establishing AI safety standards. The week saw a flurry of other AI policy developments, making it one of the busiest weeks of the year for AI.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Landmark agreement on AI safety-by-design reached by UK, USA, EU, and others

The UK has secured a landmark commitment with leading AI countries and companies to test frontier AI models before releasing them for public use. That’s just one of the initiatives agreed on during last week’s AI Safety Summit, hosted by the UK at Bletchley Park.

Delicate timing. The summit came just after US President Joe Biden announced his executive order on AI, the G7 released its guiding principles, and China’s President Xi Jinping announced its Global AI Governance Initiative. With such a diverse line-up of developments, there was a risk that the UK’s summit would be outshone and its initiatives overshadowed. But judging by how the UK avoided turning the summit into a marketplace (at least publicly), it managed to launch not just a product but a process.

Signing the Bletchley Declaration. The group of countries signing the communiqué on Day 1 of the summit included Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU.

Yes, China too. We’ve got to hand it to Prime Minister Rishi Sunak for bringing everyone around the table, including China: ‘Some said, we shouldn’t even invite China… others that we could never get an agreement with them. Both were wrong. A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.’ And he’s right. On his part, Wu Zhaohui, China’s vice minister of science and technology, told the opening session that Beijing was ready to increase collaboration on AI safety. ‘Countries regardless of their size and scale have equal rights to develop and use AI’, he added, possibly referring to China’s latest efforts to help developing nations build their AI capacities.

Like-minded countries testing AI models. The countries agreeing on the plan to test frontier AI models were actually a smaller group of like-minded countries – Australia, Canada, the EU, France, Germany, Italy, Japan, Korea, Singapore, the USA, and the UK – and ten leading AI companies – Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, and xAI.

No China this time. China (and others) were not part of this smaller group, even though China’s representative reportedly attended Day 2. Why China did not sign the AI testing plan remains a mystery (we do have a theory or two, though).

UK Prime Minister Rishi Sunak addressing the AI Safety Summit (1–2 November 2023)

Outcome 1: Shared consensus on AI risks

Current risks. For starters, countries agreed on the dangers of current AI, as outlined in the Bletchley Declaration, which they signed on Day 1 of the summit. Those include bias, threats to privacy and data protection, and risks arising from the ability to generate deceptive content. 

A more significant focus: Frontier AI. Though current risks need to be mitigated, the focus was predominantly on frontier AI – advanced models that exceed the capabilities of what we’re seeing today – and their ‘potential for serious, even catastrophic, harm’. It’s not difficult to see why governments have come to fear what’s around the corner: There have been plenty of stark warnings about superintelligent systems and the risk of extinction. But as long as governments don’t let the dangers of tomorrow divert them from addressing the immediate concerns, they’re on track.

Outcome 2: Governments to test AI models 

Shared responsibility. Gone are the days when AI companies were solely responsible for ensuring the safety of their models. Or as Sunak said on Day 2, ‘we shouldn’t rely on them to mark their own homework’. Governments (the like-minded ones) will soon be able to see for themselves whether next-generation AI models are safe enough to be released to the public, or whether they pose threats to critical national security.

How it will work. A new global hub, called the AI Safety Institute (an evolution of the existing Frontier AI Taskforce), will be established in the UK, and will be tasked with testing the safety of emerging AI technologies before and after their public release. It will work closely with the UK’s Alan Turing Institute and the USA’s AI Safety Institute, among others.

Outcome 3: An IPCC for AI 

Panel of experts. A third major highlight of the summit is that countries agreed to form an international advisory panel on AI risk. Prime Minister Sunak said the panel was ‘inspired by how the Intergovernmental Panel on Climate Change (IPCC) was set up to reach international science consensus.’

How it will work. Each country that signed the Bletchley Declaration will nominate a representative to support a larger group of leading AI academics, tasked with producing State of the Science reports. Turing Award winner Yoshua Bengio will lead the first report as chair of the drafting group. The chair’s secretariat will be housed within the AI Safety Institute.

So what’s next? As far as gatherings go, it looks like the UK’s AI Safety Summit is the first of many. The second summit will be held online, co-hosted by Korea, in six months; an in-person meeting in France will follow a year later. As for the first report, we can expect it to be published ahead of the Korea summit.


Digital policy roundup (30 October–6 November)

// AI //

Big Tech accused of exaggerating AI risks to eliminate competition

In today’s AI landscape, a few dominant Big Tech companies coexist with a vibrant open-source community that is driving significant advancements in AI. According to Google Brain founder Andrew Ng, the latter poses a serious competitive challenge to Big Tech, leading the giants to exaggerate the risks of AI in the hope of triggering strict regulation that would stymie the open-source community.

‘It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,’ Ng said.

Why is it relevant? First, this statement echoes the cautionary note expressed in a leaked internal Google document from last May, which said that open-source AI would outcompete Google and OpenAI. Second, Big Tech’s control over data and knowledge hampers the open-source community’s ability to resolve governance issues.

UN advisory body to tackle gaps in AI governance initiatives  

The UN’s newly formed High-Level Advisory Body on AI, comprising 39 members, will assess governance initiatives worldwide, identify existing gaps, and find out how to bridge them, according to UN Tech Envoy Amandeep Singh Gill. He said the UN provides ‘the avenue’ for governments to discuss AI governance frameworks.

The advisory body will publish its first recommendations by the end of this year, and final recommendations next year. They will be discussed during the UN’s Summit of the Future, to be held in September next year.

Why is it relevant? It appears that the advisory body will not release yet another set of AI principles; instead, it will focus on closing gaps in existing initiatives rather than adding to their growing number.


Tweet from @netblocks says: Confirmed: Live network data show a new collapse in connectivity in the #Gaza Strip with high impact to Paltel, the last remaining major operator serving the territory; the incident will be experienced as the third telecommunications blackout since the start of the conflict. A line graph shows connectivity declining in percentages from October 2–30 in 2023.

// MIDDLE EAST //

Third internet blackout in Gaza

The Gaza Strip was disconnected from internet, mobile, and telephone networks over the weekend – the third time since the start of the conflict. NetBlocks, a global internet monitoring service, said: ‘We’ve tracked the gradual decline of connectivity, which has corresponded to a few different factors: power cuts, airstrikes, as well as some amount of connectivity decline due to population movement.’




// DATA PROTECTION //

Facebook and Instagram banned from running behavioural advertising in EU

The European data regulator has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram. According to the EU’s GDPR, companies need to have a good reason for collecting and using someone’s personal information; Meta had none.

Ireland is where Meta’s headquarters are located. The ban imposed on the company, which owns Facebook and Instagram, covers all EU countries and those in the European Economic Area.

Why is it relevant? There are six different reasons, or legal bases, that a company can use to process data. One of them, based on consent (meaning that a person has given their clear and specific agreement for their information to be used), is Meta’s least favourite, as the chance of users refusing consent is high. Yet, it may soon be the only basis Meta can actually use – a development which will surely make Austria-based NGO noyb quite happy.


The week ahead (6–13 November)

7–8 November: The 2023 Conference on International Cyber Security takes place in The Hague, the Netherlands. The theme is ‘War and Peace. Conflict, Behaviour and Diplomacy in Cyberspace’.

8 November: The International AI Summit, organised by ForumEurope and EuroNews in Brussels and online, will ask whether a global approach to AI regulation is possible.

10–11 November: The annual Paris Peace Forum will tackle trust and safety in the digital world, among other topics.

13–16 November: The Web Summit, dubbed Europe’s biggest tech conference, meets in Lisbon.


#ReadingCorner

A new chapter in IPR: The age of AI-generated content

Intellectual property authorities worldwide face a major challenge: How to approach inventions created not by human ingenuity, but by AI. This issue has sparked significant debate within the intellectual property community, and many lawsuits. Read part one of a three-part series that delves into the impact of AI on intellectual property rights.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #134 – 30 October 2023


Dear readers,

The stage is set for some major AI-related developments this week. Biden’s executive order on AI, and the G7’s guiding principles and code of conduct, are out. On Wednesday and Thursday, the UK will host the much-anticipated AI Safety Summit, where political leaders and CEOs will focus squarely on AI risks. In other news, the landscape for children’s online safety is changing, while antitrust lawsuits and investigations show no signs of easing up.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden issues AI executive order; G7 adopts AI principles and code of conduct

You can tell how much AI is on governments’ minds by how many developments take place in a week – or in this case, one day.

Today’s double bill – Biden’s new executive order on AI, and the G7’s guiding principles on AI and code of conduct for developers – was highly anticipated. The White House first announced plans for the executive order in July; more recently, Biden mentioned it again during a tech advisors’ meeting. As for the G7, Japan’s Prime Minister Fumio Kishida has been providing regular updates on the Hiroshima AI Process for months.

Executive order targets federal agencies’ deployment of AI

Biden’s executive order represents the government’s most substantial effort thus far to regulate AI, providing actionable directives where it can, and calling for bipartisan legislation where needed (such as data privacy). There are three things that stand out:

AI safety and security. The order places heavy emphasis on safety and security by requiring, for instance, that developers of the most powerful AI systems share their safety test results and other critical information with the US government. It also requires that AI systems used in critical infrastructure sectors be subjected to rigorous safety standards.

Sectoral approach. Apart from certain aspects that apply to all federal agencies, the order employs a somewhat sectoral approach to federal agencies’ use of AI (in contrast with other emerging laws such as the EU’s AI Act). For instance, the order directs the US Department of Health and Human Services to advance the responsible use of AI in healthcare, the Department of Commerce to develop guidelines for content authentication and watermarking to clearly label AI-generated content, and the Department of Justice to address algorithmic discrimination. 

Skills and research. The order directs authorities to make it easier for highly skilled workers to study and work in the country, an attempt to boost the USA’s technological edge. It will also heavily promote AI research through funding, access to AI resources and data, and new research structures.

G7’s principles place risk-based responsibility on developers

The G7 has adopted two texts: The first is a list of 11 guiding principles for advanced AI. The second – a code of conduct for organisations developing advanced AI – repeats the principles but expands on some of them with details on how to implement them. Our three main highlights:

Risk-based. One notable similarity with the EU’s AI Act is the risk-based element, which places responsibility on developers of AI to adequately assess and manage the risks associated with their systems. The EU promptly welcomed the texts, saying they will ‘complement, at an international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act’.

A step further. The texts build on the existing OECD AI Principles, but in some instances they go a few steps further. For instance, they encourage developers to develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques that enable users to identify AI-generated content (a minimal sketch of the provenance idea follows after these highlights).

(Much) softer approach. Differing viewpoints on AI regulation exist among the G7 countries, ranging from strict enforcement to more innovation-friendly guidelines. The documents allow jurisdictions to adopt the code in ways that align with their individual approaches. But despite this flexibility, a few other provisions are overly vague. Take the provision on privacy and copyright, for instance: ‘Organisations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content.’ That’s probably not specific enough to provoke change.
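
To make the provenance idea concrete, here is a minimal, hypothetical sketch in Python: it binds a hash of a piece of content to metadata declaring its AI origin, so that a recipient can later check whether a file still matches its declared record. Real mechanisms – C2PA-style signed manifests or statistical watermarks embedded in the content itself – are far more sophisticated; all names and fields below are illustrative.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, model_name: str) -> dict:
    """Bind content to a declaration of its AI origin via a SHA-256 hash.

    Illustrative only: real provenance standards (e.g. C2PA) add
    cryptographic signatures, certificate chains, and edit histories.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,  # hypothetical metadata fields
        "ai_generated": True,
        "created": datetime.now(timezone.utc).isoformat(),
    }

def matches_record(content: bytes, record: dict) -> bool:
    """Check that the content still matches the hash in its provenance record."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]

output = b"...model output bytes..."
record = make_provenance_record(output, "example-model-v1")
print(json.dumps(record, indent=2))
print("matches:", matches_record(output, record))  # False once the content is altered
```

A record like this only proves integrity, not origin – which is why the G7 texts speak of authentication and provenance together, and why signing and key management do the heavy lifting in real schemes.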

Amid mounting concerns about the risks associated with AI, today’s double bill raises the question: Will these developments succeed in changing the security landscape for AI? Biden’s executive order is the stronger of the two: Although it lacks enforcement teeth, it carries the constitutional weight to direct federal agencies. On a global scale, however, perspectives vary so greatly that the texts’ influence is limited. And yet, today’s developments are just the beginning this week.


Digital policy roundup (23–30 October)

// MIDDLE EAST //

Musk’s Starlink to provide internet access to Gaza for humanitarian purposes

Elon Musk confirmed on Saturday that SpaceX’s Starlink will provide internet connectivity to ‘internationally recognised aid organisations’ in Gaza. This prompted Israel’s communication minister, Shlomo Karhi, to voice strong opposition over Starlink’s potential exploitation by Hamas.

Responding to Karhi’s tweet, Musk replied: ‘We are not so naive. Per my post, no Starlink terminal has attempted to connect from Gaza. If one does, we will take extraordinary measures to confirm that it is used *only* for purely humanitarian reasons. Moreover, we will do a security check with both the US and Israeli governments before turning on even a single terminal.’

A telephone and internet blackout isolated people in the Gaza Strip on Saturday, adding to Israel’s weeks-long suspension of electricity and fuel supplies to Gaza.

Why is it relevant? First, it shows how internet connectivity is increasingly being weaponised during conflicts. Second, the world half-expected Starlink to intervene, given the role it played during the Ukraine conflict and in other countries affected by natural disasters. But its (public) promise to get go-aheads from both governments could expose the company to new dimensions of responsibility and risk, and could prove counterproductive for the aid organisations that so desperately need access to coordinate their relief efforts.

Screenshot of exchange on X

// KIDS ONLINE //

Meta sued by 33 US states over children’s mental health

Meta, the parent company of Instagram and Facebook, is facing a new legal battle with 33 US states, which allege that the company engaged in deceptive practices and contributed to a mental health crisis among young users of its social media platforms.

The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13. 

Why is it relevant? The concerns raised in this lawsuit have been simmering for quite some time. Two years ago, former Meta employee Frances Haugen catapulted them into the public consciousness by leaking thousands of internal documents to the press and testifying to the US Senate about the company’s practices. Since then, the issue has stayed on policymakers’ radars: Earlier this year, US President Joe Biden called for tighter regulation ‘to stop Big Tech from collecting personal data on kids and teenagers online’.

Case details: People of the State of California v. Meta Platforms, Inc. et al., District Court, Northern District of California, 4:23-cv-05448


UK implements Online Safety Act, imposing child safety obligations on companies

The UK’s Online Safety Act, which imposes new responsibilities on social media companies, came into effect last week after the law received royal assent. 

Among other obligations, social media platforms will be required to swiftly remove illegal content, ensure that harmful content (such as adult pornography) is inaccessible to children, enforce age limits and verification measures, provide transparent information about risks to children, and offer easily accessible reporting options for users facing online difficulties. As is to be expected, there are harsh fines – up to GBP 18 million (USD 21.8 million) or 10% of global annual revenues, whichever is greater – in store for non-compliance.
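
To make the ceiling concrete – the Act takes whichever figure is greater – here is a trivial illustration with a hypothetical revenue figure:

```python
def max_osa_fine(global_annual_revenue_gbp: float) -> float:
    """Ceiling of an Online Safety Act fine: GBP 18 million or 10% of global
    annual revenue, whichever is greater. Revenue figure below is hypothetical."""
    return max(18_000_000, 0.10 * global_annual_revenue_gbp)

# A platform with GBP 5 billion in global revenue faces a ceiling of
# GBP 500 million, far above the GBP 18 million floor.
print(f"GBP {max_osa_fine(5_000_000_000):,.0f}")
```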

Why is it relevant? For many years, the UK relied on companies’ self-regulatory efforts to keep children safe from harmful content. The industry’s initially well-intentioned efforts gradually yielded to choices that prioritised financial interests – the self-regulation experiment is now over, as one child safety expert put it.


A robotic arm with an articulated hand hovers over a keyboard as though ready to type.

// CYBERWARFARE //

US official: North Korea and other states using AI in cyberwarfare

US Deputy National Security Advisor Anne Neuberger has confirmed that North Korea is using AI to escalate its cyber capabilities. In a recent press briefing (held on the sidelines of Singapore International Cyber Week), Neuberger explained: ‘We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.’ Although experts have often spoken about the risks of AI in cyberwarfare, it’s the first time there’s been an open acknowledgement of its use in offensive cyberattacks. There will be lots to talk about at this week’s AI Safety Summit in the UK.


// ANTITRUST //

Google paid billions of dollars to be default search engine

Alphabet’s Google paid USD 26.3 billion (EUR 24.8 billion) to other companies in 2021 to ensure its search engine was the default on web browsers and mobile phones. This was revealed by a company executive testifying during the US Department of Justice’s (DOJ) antitrust trial and in a court record, which the presiding judge refused to redact.

The case, filed in 2020, concerns Google’s search business practices, which the DOJ and state attorneys-general consider ‘anticompetitive and exclusionary’, sustaining its monopoly in search and search advertising.

Why is it relevant? First, the original complaint had already indicated that ‘Google pays billions of dollars each year to distributors… to secure default status for its general search engine’. The exact figures have now been made known. Second, this will make it even more difficult for Google to argue against the implications of its exclusionary agreements with other companies.

Case details: USA v. Google LLC, District Court, District of Columbia, 1:20-cv-03010


Japan’s competition authority investigating Google’s practices

The Japan Fair Trade Commission (JFTC) is seeking information on Google’s suspected anti-competitive behaviour in the Japanese market, as part of an investigation still in its early stages.

The commission will determine whether Google excluded or restricted the activities of its competitors by entering into exclusionary agreements with other companies.

Why is this relevant? If it all sounds too familiar, that’s because the Japan case is very similar to the US DOJ’s ongoing case against Google.


The week ahead (30 October–6 November)

1–2 November: The UK will host its much-anticipated AI Safety Summit at historic Bletchley Park in Milton Keynes. British Prime Minister Rishi Sunak will welcome CEOs of leading companies and political leaders, including US Vice President Kamala Harris, European Commission President Ursula von der Leyen, and UN Secretary-General Antonio Guterres. In addition to discussing AI capabilities, risks, and cross-cutting challenges, the UK government is expected to announce an AI Safety Institute, which ‘will advance the world’s knowledge of AI safety and it will carefully examine, evaluate and test new types of AI’, the Prime Minister said. Here’s the discussion paper and the two-day programme.

1–2 November: The Global Cybersecurity Forum gathers in Riyadh, Saudi Arabia, for its annual event, which will this year be dedicated to ‘charting shared priorities in cyberspace’.

3–4 November: The 4th AI Policy Summit takes place in Zurich, Switzerland (at the ETH Zurich campus) and online. Diplo (publisher of this newsletter) is a strategic partner.

4–10 November: The Internet Engineering Task Force (IETF) is gathering in Prague, Czechia, and online for its 118th meeting.

6 November: Deadline for very large online platforms and search engines to publish their first transparency reports under the EU’s Digital Services Act. A handful of platforms have already published theirs: Amazon, LinkedIn, Pinterest, Snapchat, Zalando, Bing, and yes, TikTok.


#ReadingCorner
Image of human head made up of wired connections

Exploring the state of AI in 2023

The topic of AI safety, which appears for the first time in the annual State of AI report, has gained widespread attention and spurred governments and regulators worldwide into action, the 2023 report explains. Yet, beneath this flurry of activity lie significant divisions within the AI community and a lack of substantial progress towards achieving global governance, with governments pursuing conflicting approaches. Read the report.


How to manage AI risks

A group of AI experts has summed up the risks of upcoming advanced AI systems in a seven-page open letter that urges prompt action, including regulations and safety measures by AI companies. ‘Large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems are looming’, they warn.


AI and social media: Driving us down the rabbit hole

Harvard professor Lawrence Lessig holds a critical stance on the impact of AI and social media, and an even more critical perspective on the human capacity for critical thinking. ‘People have a naïve view: They open up their X feed or their Facebook feed, and [they think] they’re just getting stuff that’s given to them in some kind of neutral way, not recognizing that behind what’s given to them is the most extraordinary intelligence that we have ever created in AI that is extremely good at figuring out how to tweak the attitudes or emotions of the people they’re engaging with to drive them down rabbit holes of engagement.’ Read the interview.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

DW Weekly #133 – 23 October 2023


Dear all,

The spread of illegal content and fake news linked to the Middle East conflict has been worrying EU and US policymakers, who are putting more pressure on social media companies to step up their efforts. The USA-China trade war is escalating with tighter restrictions on US chip exports to China and retaliation by China. As other updates confirm, it’s been anything but blue skies as of late. But let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

China unveils Global AI Governance Initiative as part of Belt and Road

In a significant stride towards shaping the trajectory of AI on a global scale, China’s President Xi Jinping announced the Global AI Governance Initiative (GAIGI) during the opening speech of last week’s Third Belt and Road Forum. 

The initiative is expected to bring together all 155 countries that make up the Belt and Road Initiative. This will make it one of the largest global AI governance forums.

Key tenets. Releasing additional details, the Foreign Ministry’s spokesperson said the strategic initiative will focus on five aspects. It will ensure that AI development remains synonymous with human progress – quite a noble aim. It will promote mutual benefit and ‘oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI’ – a clear dig at Western allies. It will establish a testing and assessment system to evaluate and mitigate AI-related risks, which recalls the risk-based approach the EU is taking in its upcoming AI Act. It will also support efforts to develop consensus-based frameworks ‘with full respect for policies and practices among countries’, and provide vital support to developing nations to build their AI capacities.

Chinese President Xi Jinping stands behind a wide podium covered in flowers.

First-mover advantage. In recent months, China has been moving swiftly to regulate its homegrown AI industry. Its interim measures on generative AI, effective since August, were a world first, and it has introduced rules for the ethical application of science and tech (including AI). China is now looking at basic security requirements for generative AI. Few acknowledge that, despite its deeply ideological approach, China was the first to regulate generative AI, giving itself a significant head start in the race to influence global standards – so much so that even US experts are now suggesting that the USA and its allies should engage with China ‘to learn from its experience and explore whether any kind of global consensus on AI regulation is possible’.

China’s approach. Interestingly, the interim measures are a watered-down – or at least less robust – version of the initial draft, a signal that China favours a more industry-friendly approach. A few weeks after the measures came into effect, eight major Chinese tech companies obtained approval from the Cyberspace Administration of China (CAC) to deploy their conversational AI services. Between the USA’s underwhelming progress on AI regulation and the EU’s strict approach, China’s approach could easily gain appeal on the international stage.

Quasi-global. The international audience watching that stage is very large. With over 150 countries forming part of the Belt and Road Initiative, China’s Global AI Governance Initiative will be one of the largest AI governance forums. But the coalition’s size is not the only reason why the initiative will be highly influential. As the Belt and Road Initiative celebrates its 10th anniversary, China is extolling its success in stimulating nearly USD 1 trillion in investment, forming more than 3,000 cooperative projects, creating 420,000 jobs, and lifting 40 million people out of poverty. All of this gives China geopolitical clout and leverage.

Showtime. China’s Global AI Governance Initiative will undoubtedly influence other processes. Of the coalitions that have launched their own vision or process for regulating AI, the most recent is the draft guide to AI ethics, which the Association of Southeast Asian Nations (ASEAN) is working on. The unveiling of China’s initiative comes a few weeks before the UK’s AI Safety Summit (see programme), which China is set to attend (even though it’s still unclear who will represent China – the decision will indicate the level of significance China gives to the UK process). 

Xi’s speech conveys a willingness to engage: ‘We stand ready to increase exchanges and dialogue with other countries and jointly promote the sound, orderly and secure AI development in the world’. But as China’s Global Times writes, ‘China is already a very important force in global AI development… there is no way the USA and its Western allies can set up a system of AI management and regulation while squeezing China out.’


Digital policy roundup (16–23 October)

// DISINFORMATION //

EU formally asks Meta, TikTok for details on anti-disinformation measures

As the Middle East conflict unfolds, ‘the widespread dissemination of illegal content and disinformation linked to these events carries a clear risk of stigmatising certain communities and destabilising our democratic structures’, to quote European Commissioner Thierry Breton.

Last week, we wrote about how Breton personally reached out to X’s Elon Musk, TikTok’s Shou Zi Chew, Alphabet’s Sundar Pichai, and Meta’s Mark Zuckerberg, urging them to promptly remove illegal content from their platforms. Two days later, X received a formal request for information.

Now, the European Commission has sent formal requests for information about the measures they have taken to curb the spread of illegal content and disinformation to Meta and TikTok (Alphabet has been spared so far, it seems). Meta has been documenting the measures publicly.

Deadlines. The companies must provide the commission with information on crisis response measures by 25 October and measures to protect the integrity of elections by 8 November (plus in TikTok’s case, how it’s protecting kids online). As we mentioned previously, we don’t think this exchange will stop with just a few polite letters.


DSA not yet fully operational? Honour it just the same

The European Commission is applying pressure on EU member states to implement parts of the DSA months ahead of its full implementation on 17 February 2024. The ongoing wars and instabilities have led to an ‘unprecedented increase in illegal and harmful content being disseminated online’, it said.

The commission is appealing to the countries’ ‘spirit of sincere cooperation’ to form, ahead of schedule, the informal network planned for when the DSA starts applying fully, to take coordinated action, and to assist it with enforcing the DSA.

Why is it relevant? It shows the commission’s (or rather, Breton’s) eagerness to see the DSA applied. It’s the kind of pressure that one can hardly choose to ignore.


US senator urges social media platforms to curb deceptive news

Disinformation is not just a concern for European policymakers. US Senator Michael Bennet has also written to the CEOs of Meta, Google, TikTok, and X, urging them to take prompt action against ‘deceptive and misleading content about the Israel-Hamas conflict’, which he says is ‘spreading like wildfire’.

Bennet’s letter was quite critical: ‘In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution… Your platforms have made particular design decisions that hamper your ability to identify and remove illegal and dangerous content.’

Why is it relevant? First, it shows that concerns about the spread of disinformation and illegal content in the context of the Middle East conflict are not limited to European policymakers alone (although the approach taken by both sides hasn’t been quite the same). Second, Bennet is drawing attention to the platforms’ algorithms (something the EU did not mention), which have arguably played a significant role in inadvertently promoting misleading content and creating filter bubbles.

Screenshot: Senator Michael Bennet tweets: ‘Because of social media companies’ practices, deceptive and misleading content about the Israel-Hamas conflict is spreading like wildfire. We need an independent agency able to write rules to prevent foreign disinformation and increase transparency.’ The tweet is accompanied by a blurb from The Hill: ‘Senate Democrat questions tech giants on efforts to stop false Israel-Hamas conflict content.’ It links to his intervention in the US Senate: https://trib.al/PnNXOBl


// CHIPS //

USA tightens restrictions on semiconductor exports to China

The US Department of Commerce’s (DOC) Bureau of Industry and Security (BIS) has tightened export restrictions on advanced semiconductors to China and other countries that are subject to an arms embargo. In practice, this means that China will be unable to obtain the high-end chips used to train powerful AI models, or the equipment needed to produce the advanced chips used for AI.

China reacted strongly to the BIS decision, calling the measures ‘unilateral bullying’ and an abuse of export controls. The measures are an expansion of the semiconductor export restrictions implemented last year.

Why is it relevant? This latest tit-for-tat is meant to close loopholes from the 2022 measures. US Secretary of Commerce Gina Raimondo says that the objective remains unchanged: to restrict China from advancements in AI that are vital for its military applications. But the Washington-based Semiconductor Industry Association cautions that export controls ‘could potentially harm the US semiconductor ecosystem instead of advancing national security’.


The heads of US, UK, Australian, Canadian and New Zealand security agencies meeting publicly for the first time, on a stage at Stanford University. Credit: FBI

// CYBERSECURITY //

Five Eyes warn of China’s ‘innovation theft’ campaign

The heads of the Five Eyes security agencies – composed of the USA, UK, Australia, Canada and New Zealand – have warned of a sizeable Chinese espionage campaign to steal commercial secrets. The agency heads met publicly for the first time during a security summit held in Silicon Valley. Over 20,000 people in the UK have been approached online by Chinese spies, the head of the UK’s MI5 told the BBC.


// NET NEUTRALITY //

US FCC vote kicks off process to restore net neutrality rules

The US Federal Communications Commission (FCC) has voted in favour of starting the process to restore net neutrality rules in the USA. The rules were originally adopted by the Obama administration in 2015, but repealed a few years later under the Trump administration.

The steps ahead. Although net neutrality proponents will have uttered a collective sigh of relief at this renewal, the process involves multiple steps, including a period for public comments. 

Why is it relevant? We won’t state the obvious about net neutrality, or how the FCC will broaden its reach. Rather, we’ll highlight what chairwoman Jessica Rosenworcel said last week: There are already several state-led open internet policies that providers are abiding by right now; it’s time for a national one.


// COMPETITION //

South Africa investigating competition in local news media and adtech market

South Africa’s Competition Commission has launched an investigation into the distribution of media content and the advertising technology (adtech) markets that link buyers and sellers of digital advertising. 

The investigation will also determine whether digital platforms such as Meta and Google are engaging in unfair competition with local news publishers by using their content to generate advertising revenue.

Why is it relevant? First, it shows how global investigations – most notably in Australia and Canada – are drawing attention to Big Tech’s behaviour in other markets, and are influencing the measures taken by other regulators. Second, it reflects rising concerns about the shift from print advertising to digital content and advertising – a trend that is not sparing anyone.


// DIGITAL EURO //

ECB launches prep phase for digital euro

The European Central Bank (ECB) has announced a two-year prep phase for the digital euro, which will work on its regulatory framework and the technical setup. The phase starts on 1 November, and comes after a two-year research phase. 

The ECB made it clear that the launch doesn’t mean that the digital euro is a certainty. But if there’s eventually a green light, the digital euro will function similarly to online wallets or bank accounts, and will be guaranteed by the ECB. It will only be available to EU residents.

Why is it relevant? Digital currencies issued by central banks – known as central bank digital currencies (CBDCs) – are in a rapidly developing phase worldwide. Last year, a report by the Bank for International Settlements said that two-thirds of the world’s central banks are considering introducing a CBDC in the near future. Even though only a few countries – such as China, Sweden, and a handful of Caribbean states – have launched digital currencies or pilot projects, the EU is treading slowly but surely, expecting the digital euro to coexist alongside physical cash and introducing measures that would safeguard its existing commercial banking sector.


The week ahead (23–30 October)

21–26 October: ICANN78, the organisation’s 25th annual general meeting, is ongoing in Hamburg, Germany and online.

24–26 October: The CEOs of some of the world’s leading telecoms operators are meeting in Paris for the 5G World Summit this week. 

25–26 October: The European Commission’s Global Gateway Forum – dubbed the European response to China’s Belt and Road Forum – is taking place in Brussels. 

25–27 October: Nashville, Tennessee, will host the 13th (ISC)2 Security Congress, convening the cybersecurity community in person and online.


#ReadingCorner

Online abuse of kids ‘escalating’

Child sexual exploitation and abuse online is escalating worldwide, in both scale and methods, the WeProtect Global Alliance’s latest threat assessment warns. To put this into numerical perspective: the 32 million reports of abuse material made in the USA in 2022 dwarf the number made in 2019. It gets worse: ‘The true scale of child sexual exploitation and abuse online is likely greater than this as a lot of harm is not reported.’ Read the report, including its recommendations.

File photo of a child using a digital device.

If abuse is on the rise, why isn’t the tech industry doing more?

As the eSafety Commissioner of Australia noted last week, some of the biggest tech companies just aren’t living up to their responsibilities to halt the spread of online child sexual abuse content and livestreaming. 

‘Within online businesses much of the child safety and wider consumer agenda is marked as an overhead cost not a profit centre…’, writes John Carr, a leading UK expert in child internet safety. ‘Companies will obey clearly stated laws. But the unvarnished truth is many are also willing to exploit any and all available wiggle room or ambiguity to minimise or delay the extent of their engagement with anything which does not contribute directly to the bottom line. If it makes them money they need no further encouragement. If it doesn’t, they do.’ Read the blog post.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

IGF 2023 – Final Report

Kyoto, 8 – 12 October 2023

This year’s IGF came at a time of heightened global tension. As the Middle East conflict unfolded, aspects related to internet fragmentation, cybersecurity during times of war, and mis- and disinformation entered prominently into the IGF 2023 debates.

During the discussions at this year’s record-breaking IGF (with 300 sessions, 15 days of video content, and 1,240 speakers), participants also debated other topics at length – from the Global Digital Compact (GDC) and other processes to AI policy (such as the Hiroshima AI Process – more further down), data governance dilemmas, and narrowing the digital divide.

The following 10 questions are derived from detailed reports from hundreds of workshops and sessions at the IGF 2023.

10 questions debated at IGF 2023


1. How can AI be governed?


There seems to be some form of general consensus among stakeholders – both public and private – that we need to govern AI if we are to leverage it for the benefit of humanity. But what exactly to govern, and, even more importantly, how to do so, remains open for debate.

And so it is no surprise that the IGF featured quite a few such debates, as sessions explored national and international AI governance options, highlighted the need for transparency in both the technical development of AI systems and in the governance processes themselves, and questioned whether to regulate AI applications/uses or capabilities.

Highlights 

Just as was the case with the internet, AI is set to impact the entire world, albeit in different ways and at different speeds. Setting up some form of international governance mechanism to guide the development and deployment of human-centric, safe, and trustworthy AI is therefore essential. The jury is still out on whether this should take the form of international guiding principles, stronger regulations, new agencies, or something else.

But there is already a body of work to build upon, from the OECD’s AI principles and the UNESCO recommendation on AI ethics to the G7 Hiroshima AI Process and the EU’s approach to developing voluntary AI guardrails ahead of the AI Act coming into force. Japan’s Prime Minister announced at the start of the IGF that a draft set of guiding principles and a code of conduct for developers of advanced AI is to be put on the table for approval at the upcoming G7 Summit. The texts form part of the Hiroshima AI Process, kickstarted during last May’s G7 Summit.

If the world is to move ahead with some form of global AI governance approach, then this approach needs to be defined in an inclusive manner. There is a tendency for countries and regional blocs with more robust regulatory frameworks to shape governance practices globally, but the voices and interests of smaller and developing countries must be more meaningfully represented and considered.

Take Latin America and Africa, for example: They provide significant raw materials, resources, data, and labour for AI development, but their participation in global processes does not strongly reflect this. Moreover, the discussion on AI harms is still predominantly framed through the Global North lens. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and capacity development are essential.

The Brussels effect – where EU regulations made in Brussels become influential worldwide – featured in some discussions. The EU’s AI Act will likely influence regulatory approaches in other jurisdictions globally. However, countries must consider their unique local contexts when designing their regulations and policies to ensure they respond to and reflect local needs and realities. And, of course, this so-called AI localism should also apply when integrating local knowledge systems into AI models. By incorporating this local knowledge, AI models can better address distinct local and regional challenges.

Multistakeholder cooperation in shaping AI governance mechanisms was highlighted as essential. With the private sector driving AI innovation, its involvement in AI governance is inevitable and indispensable. Such involvement also needs to be transparent, open, and trustworthy.

But it is not all about laws and regulations. Technical standards also have a role to play in advancing trustworthy AI. Different technical standards are necessary within the AI ecosystem at different levels, encompassing certifications for evaluating quality management systems and ensuring product-level adherence to specific benchmarks and requirements. These standards aim to maintain efficient operations, promote reliability, and foster trust in AI products and services.

It was argued that a balanced mix of voluntary standards and legal frameworks could be the way forward. Here, too, there is a need for actors in developing countries to actively engage in shaping AI standards rather than merely adapting to standards set by external entities.

While we wait for new international regulations to be developed, a wide range of actors could adopt or adapt new or existing voluntary standards for AI. For instance, the Institute of Electrical and Electronics Engineers (IEEE) developed a value-based design approach that UNICEF uses. The implementation of AI also requires a deep understanding of established ethical guidelines. To this end, UNESCO has published the first-ever global Guidance on Generative AI in Education and Research, which aims to support countries in implementing immediate actions and planning long-term policies to properly use generative AI tools. 

Aside from laws, regulations, and technical standards, what else could help achieve a human-centric and inclusive approach to AI? Forums and initiatives such as the Global Partnership on AI (GPAI), the Frontier Model Forum, the Partnership on AI, and the MLCommons have a role to play. They can promote the secure and ethical advancement of cutting-edge AI models – by establishing common definitions and understandings of AI system life cycles, creating best practices and standards, and fostering information sharing between policymakers and industry. And states should look into allocating resources to the development of publicly accessible AI technology as a way to ensure wider access to AI technology and its benefits.


2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process?


From 2003 to 2005, the World Summit on the Information Society (WSIS) produced several outcome documents meant, among other goals, to advance a more inclusive information society and establish key principles for what was then a brand-new term: internet governance. The IGF itself was an outcome of WSIS.

In 2025, a WSIS+20 review process will look at the progress made in implementing WSIS outcomes and will, most likely, decide on the future of the IGF (as its current mandate expires in 2025). In parallel with preparing for WSIS+20, UN member states will also have to negotiate on the Global Digital Compact, expected to be adopted in 2024 as a pact for an ‘open, free, and secure digital future for all’.  

So, the next two years are set to be intensive. New forums are under consideration. Some existing structures may be strengthened. International organisations are gearing up for an ‘AI mandate race’ that will shape their future and, in some cases, question their very existence.

The IGF’s future will be significantly influenced by the rapidly changing policy environment, as discussed in Kyoto.  

Highlights 

The Global Digital Compact (GDC) sparked a lot of interest in official sessions, including one main session, bilateral meetings, and corridor chats, with two underlying issues:

IGF input into GDC drafting: The IGF community would like to see more multistakeholder participation throughout the GDC drafting process. Mimicking the IGF mode of operation is unrealistic, as the GDC will be negotiated under UN General Assembly rules. However, while following the UNGA rules of procedure, the GDC should continue to make every effort to include all stakeholders’ perspectives, as it has in the past. Stakeholders were also encouraged to communicate with their national representatives in order to contribute more to the GDC process. (Bookmark our GDC tracker)

Participation of the IGF in GDC implementation: Several speakers stressed that the IGF should play a prominent role in implementing the GDC. The IGF Leadership Panel, for example, argued that the IGF should play a central role in GDC follow-up processes. The relationship between the IGF and a potential Digital Cooperation Forum, suggested in the UN Secretary-General’s policy brief, was the ‘elephant in the room’ during the IGF in Kyoto.

Inclusion in governance was in focus during the session on the participation of Small Island Developing States (SIDS) in digital governance processes. The debate brought up an interesting paradox. Although SIDS have the formal possibility of participating in the IGF, they often lack the resources to do so effectively. Other small groups from civil society, business, and academia encounter a similar participatory paradox.

Changes in the global architecture may have a two-fold impact on SIDS. Firstly, the proliferation of digital forums could further strain their already stretched participation capacity. Secondly, the GDC may propose new forms of participation reflecting the specificities of small actors with limited resources. For any future digital governance architecture to work, it will be important for SIDS and other small actors, from businesses to civil society, to be able to have stronger voices.

 Electronics, Hardware, Diagram

The IGF debates indicated the renewed relevance of the WSIS process ahead of its review in 2025. The G77 is particularly keen to base GDC negotiations on the WSIS Tunis Agenda and the Geneva Declaration of Principles, as stated in the recently adopted G77 Havana Declaration. The G77 argued for a triangulation of digital governance structures among Agenda 2030, WSIS, and the GDC.

Whatever policy outcomes will be reflected in the GDC and the WSIS+20 review, the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments. More attention should also be given to involving missing communities in IGF debates. The IGF Plus approach was mentioned in discussions in Kyoto. 

In Kyoto, international organisations fuelled the race for AI mandates to secure a place in the developing frameworks for handling AI. According to Diplo’s analysis of AI in IOs, almost every UN organisation has some AI initiative in place.

In the emerging AI era, many organisations are faced with existential questions about their future and how to manage new policy issues. The primary task facing the UN system and its member states in the upcoming years will be managing the race to put an AI mechanism in place. Duplication of effort, overlapping mandates, and the inevitable confusion when addressing the impact of AI could impede effective multilateralism.


3. How to use IGF’s wealth of data for an AI-supported, human-centred future?


The immense amount of data accumulated through the IGF over the past 18 years is a public good that belongs to all stakeholders. It presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process.

Highlights 

The IGF has accumulated a vast repository of knowledge generated by the discussions at the annual forum and its communities over the years (e.g. session recordings and reports; documents submitted for public consultation; IGF messages and annual reports; outputs of youth and parliamentary tracks, best practice forums, policy networks, and dynamic coalitions; summaries of MAG meetings; reports from national, regional and youth IGF initiatives). But this is an underutilised resource that could be used to build a sustainable, inclusive, and human-centric digital future.

Diplo and the GIP supported the IGF Secretariat in organising a side session to discuss how to unlock the IGF’s knowledge to gain AI-driven insights for our digital future.

Jovan Kurbalija smiles as he sets down the microphone on a panel at IGF2023.

AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them. 

Moreover, AI can assist in managing the busy schedule of IGF sessions by linking them to similar discussions from previous years, aiding in coordinating related themes over time. It can visually represent hours of discussions and extensive content as a knowledge graph, as demonstrated by Diplo’s experiment with AI-enhanced reporting at IGF2023.

An intricate multicoloured lace network of lines and nexuses representing a knowledge graph of Day 0 of IGF2023.
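
As a toy illustration of the knowledge-graph idea (and not Diplo’s actual pipeline), session reports can be linked to the topics they mention; topics shared by several sessions then act as bridges between discussions. The sessions and topics below are invented:

```python
# Toy knowledge graph: nodes are sessions and topics; an edge links a
# session to each topic its report mentions. Requires networkx.
import networkx as nx

session_topics = {
    "Main Session: AI Governance": ["AI", "transparency", "regulation"],
    "Cybersecurity, Trust & Safety Online": ["cybercrime", "human rights"],
    "Safeguarding information amidst conflict": ["misinformation", "human rights"],
}

G = nx.Graph()
for session, topics in session_topics.items():
    G.add_node(session, kind="session")
    for topic in topics:
        G.add_node(topic, kind="topic")
        G.add_edge(session, topic)

# Node degree is a crude proxy for how central a theme was across the forum.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3])
```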

Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not automatic in our time, the IGF’s impact could extend beyond internet governance and have a more profound effect on the methodology of global meetings.


4. How can risks of internet fragmentation be mitigated?


The escalating threat of fragmentation challenges the internet’s global nature. Geopolitical tensions, misinformation, and digital protectionism reshape internet governance, potentially compromising its openness. A multidimensional approach is crucial to understanding and mitigating fragmentation. Inclusive dialogue and international norms play a vital role in reducing these risks. 

Highlights 

Internet fragmentation would pose significant challenges to the global and interconnected nature of the internet. It would hinder communication, stifle innovation, and undermine the intended functioning of the internet. Throughout the week, different sessions tackled these issues and how to reduce the risks of internet fragmentation.

The internet, as we know it, cannot be taken for granted any more. Geopolitical tensions, the weaponisation of the internet, dis- and misinformation, and the pursuit of digital sovereignty through protectionism could potentially fracture the open nature of the internet. The same can be said for restrictions on access to certain services, internet shutdowns, and censorship.

One way of examining the risks is to look at the different dimensions of fragmentation – fragmentation of the user experience, of the internet’s technical layer, and of internet governance and coordination (explained in detail in this background paper) – and the consequences each of them carries.

Policymakers can also use this approach to create a cohesive and comprehensive regulatory approach that does not lead to internet fragmentation (for instance, a layered approach to sanctions can help prevent unintended consequences like hampering internet access). In fact, state control over the public core of the internet and its application layer is a major concern. Different technologies operate at several layers of the internet, and different entities manage those distinct layers.

Disruptions in the application layer could lead to disruptions in the entire internet. Therefore, governance of the public core calls for careful consideration, a clear understanding of these distinctions, and deep technical knowledge. 

International norms are critical to reducing the risk of fragmentation. International dialogue in forums like the IGF is invaluable for inclusive discussions and contributions from diverse stakeholders, including different perspectives about fragmentation between the Global North and Global South.

Countries pursue their policies at the national level, but they also need to be mindful of harmonisation with regulatory frameworks that have extraterritorial reach. In developing national and regional regulatory frameworks, it is indispensable to elicit multistakeholder input, particularly considering the perspectives of marginalised and vulnerable communities. Public policy functions cannot be entrusted entirely to private corporations (or even governments). The involvement of technical stakeholders in public policy processes is essential for sound, logical, informed decision-making and improved governance that protects the technical infrastructure.


5. What challenges arise from the negotiations on the UN treaty on cybercrime?


Negotiations on the new UN cybercrime treaty are entering the last mile, and they were among the most prominent topics at IGF2023. The broad scope of the current draft treaty, the lack of adequate human rights safeguards, the absence of a commonly agreed definition of cybercrime, and the uncertain role of the private sector in combating cybercrime were some of the crucial challenges addressed during the sessions.

Highlights

As the Main Session: Cybersecurity, Trust & Safety Online, and the session Risks and Opportunities of a new UN Cybercrime Treaty noted, provisions to ensure human rights protection seem blurred. The wide discretion left to states in adopting the provisions related to online content, among others, could leave plenty of wiggle room for authoritarian regimes to target and arbitrarily prosecute activists, journalists, and political opponents. Additionally, retaining personal data from individuals accused of an alleged cybercrime offence could open the door for the misuse and infringement of their right to privacy.

Provisions regarding cybercrime offences need to be clarified, too, as there is no commonly agreed-upon definition of cybercrime. For now, it is clear that we need to separate cyber-dependent serious crimes (like terrorist attacks using autonomous cyberweapons) from cyber-enabled actions (like online speech) that help commit crimes and violate human rights. Additionally, there is a need to overcome cybercrime impunity, especially in cases where states are unwilling or unable to combat it.

International cooperation between states and the private sector is yet another aspect that negotiating states have to agree on. Essentially, there is a need to ensure more robust and comprehensive provisions to address capacity development and technical assistance. It was noted that these provisions should facilitate cooperation across different legal jurisdictions and promote relationships with law enforcement agencies.

The role of the private sector is another stumbling block in the negotiations. The proposed provisions put the private sector in a rather challenging position, as companies would have to comply with the laws of different jurisdictions. This means that conflicts of laws, including with existing international instruments such as the Budapest Convention, would be inevitable and would need to be harmonised somehow.

What if states cannot agree on an international treaty? Well, there are still ways to strengthen the fight against cybercrime. Options include establishing a database of cybersecurity experts for knowledge sharing, pooling knowledge for capacity development, expanding the role of organisations like INTERPOL, and encouraging states and businesses to allocate more resources to strengthen their cybersecurity posture.

Has the UN Cybercrime Treaty draft opened Pandora’s box? That depends on whom you ask. What is clear from the sessions is that many challenges need to be addressed as the ‘deadline’ for the treaty approaches.


6. Will the new global tax rules be as effective as everyone is hoping for?


Over the years, the growth of the digital economy – and the question of how to tax it – has led to major concerns over the adequacy of tax rules. The IGF discussion focused on the necessity of clear and open dialogue on digital taxation and of a just and equitable distribution of tax revenue. There are hurdles to implementing effective taxation measures, and the involvement of a wider range of stakeholders could be pivotal in shaping workable solutions for taxing the businesses of tech titans.

Highlights

Global tax rules could ameliorate the unfair consequences of tax havens, provide consistent approaches to allocating profits, and reduce uncertainty for multinational companies. The OECD and G20 have made significant steps in this direction: In 2021, over 130 countries came together to support a new two-pillar solution. It will introduce a 15% effective minimum tax rate in most jurisdictions and will oblige multinationals to pay tax in the countries where their users are located (rather than where they have a physical presence). In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.
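
For readers who want the arithmetic, here is a minimal sketch of the Pillar Two top-up calculation, using hypothetical figures; the actual rules add substance-based carve-outs and many other adjustments:

```python
def top_up_tax(profit: float, taxes_paid: float, minimum_rate: float = 0.15) -> float:
    """Simplified Pillar Two top-up: lift the effective tax rate in a
    jurisdiction to the 15% minimum. Figures used below are hypothetical."""
    effective_rate = taxes_paid / profit
    return max(0.0, (minimum_rate - effective_rate) * profit)

# A multinational books EUR 1bn of profit in a jurisdiction taxing it at 9%:
# the top-up owed is (15% - 9%) x EUR 1bn = EUR 60m.
print(top_up_tax(profit=1_000_000_000, taxes_paid=90_000_000))
```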

For these models to be effective, they need to fully counter the scenarios that have, in the past, allowed multinationals to reduce their tax bills. First, multinational corporations have traditionally shifted profits to low-tax jurisdictions, which has deprived countries in the Global South of their fair share of tax revenue. Second, neither of the two frameworks addresses the issue of tax havens directly (although the minimum tax will help mitigate this issue). Third, the OECD and UN models do not fully take into account the power dynamics between countries in the Global North (which has historically been in the lead in international tax policymaking) and the Global South. 

Until recently, countries in the Global South felt these measures alone were insufficient to ensure tax justice. They, therefore, opted to adopt various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services.

Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious, opining that these countries should carefully consider all implications before signing international tax treaties and perhaps even sign these treaties only after they see their effects play out.


7. How to address misinformation and protection of digital communication during times of war?


In the midst of ongoing conflicts, new concerns about the impact of misinformation have arisen. The primary concern is how this impacts civilians residing in volatile regions. Misinformation adds to confusion, division, and physical and psychological distress, especially for civilians caught in the middle. 

Digital communication also has a decidedly operational role in conflict situations, completely different from any military use. It should provide secure communication to reach and inform those in need. The security and robustness of digital networks therefore become critical in ensuring humanitarian assistance. 

Highlights

The old wisdom that the truth is the first victim of war has been amplified by digital technology. The session Safeguarding the free flow of information amidst conflict explained how disseminating harmful information can exacerbate pre-existing social tensions and grievances, leading to increased violence and violations of humanitarian law. 

The spread of misinformation can cause distress and psychological burdens among individuals living in conflict-affected areas. Misinformation hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence on beliefs and behaviours as a consequence of disseminating harmful information also raise tensions in conflict zones.

In times of peace, experts advocate for a multi-faceted approach to addressing misinformation in conflict zones. In times of war, the immediate concerns focus primarily on ensuring the safety and well-being of civilians. If communication channels are disrupted, the spread of misinformation can be even more dangerous.

In these situations, humanitarian organisations and tech companies must work together to establish secure channels and provide accurate information to those in need. Additionally, efforts should be made to counter cyber threats and protect critical infrastructure. In fact, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted through error or tech failure. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians who are not directly involved in the conflict zone. 

The involvement of international organisations and governments is essential in coordinating these efforts and ensuring that humanitarian principles are upheld. Special consideration should also be given to the safety and protection of those working in the digital infrastructure sector during times of conflict.


8. How can data governance be strengthened?


Organised, transparent data governance is crucial in today’s digital landscape. It requires clear standards for coherence and consistency; an enabling environment built on effort, trust, and adaptability across all sectors; and public-private partnerships to address critical issues. Intermediaries play a key role in bridging gaps. The Data Free Flow with Trust (DFFT) concept, introduced by Japan in 2019, also promises to strengthen data governance by enabling global data flows while ensuring security and privacy. 

Highlights

Data governance plays a critical role in ensuring the effective and responsible use of data, especially in today’s digital age. Discussions during an open forum on public-private partnerships served to identify important measures that can help improve or expand upon existing data governance approaches.

First, clear standards and operating procedures can promote coherence and consistency in data governance. The lack of coherence is one of the main reasons for underwhelming private sector contributions. By defining and implementing robust standards, both the public and private sectors would have a common framework to build upon, facilitating collaboration and maximising the potential of data-driven initiatives.

Second, an enabling environment is essential for effective data governance. This environment requires time, effort, proof-of-concept, trust, and adaptability. Creating such an environment necessitates the involvement of all sectors – public, private, and civil society. 

Third, public-private initiatives are crucial to helping bridge data gaps related to critical issues like climate change, poverty, and inequality. Collaboration between the public and private sectors allows for the pooling of resources, expertise, and knowledge, enabling a more holistic approach to addressing these challenges.

Successful public-private partnerships require investment, time, and trust-building efforts. Parties involved must dedicate time to cultivating relationships and fostering mutual understanding. This may include the participation of dedicated individuals from both the private sector and governmental organisations. Their active presence can facilitate effective communication, coordination, and alignment of goals, leading to fruitful collaborations.

Related to public-private initiatives is the role of intermediaries or brokers, who help bridge the skills and capacity gaps between sectors by combining expertise and resources to drive collaboration and support the achievement of the sustainable development goals.

The sustainability of public-private partnerships also depends on the size and global reach of the involved entities. For instance, large firms with global reach are well-positioned to enable such partnerships. They possess the necessary resources, capabilities, and networks to maintain and nourish relationships, ensuring long-term viability and impact in driving sustainable development. 

Much was also said about Data Free Flow with Trust (DFFT) – a concept first championed by Japan during the G20 summit in 2019 – which aims to strengthen data governance by facilitating the smooth flow of data worldwide while ensuring data security and privacy for users. 

Speakers in High-Level Leaders Session I: Understanding Data Free Flow with Trust (DFFT) emphasised how the DFFT concept can help strengthen data governance in additional ways. It provides a framework for harmonising and aligning different national or regional perspectives, encourages public-private data partnerships, and promotes the use of regulatory and operational sandboxes as practical solutions to foster good governance among stakeholders.


9. How can the digital divide be bridged?


Although discussions on bridging the digital divide might seem repetitive, the persistence of this topic is warranted by the stark reality revealed in the latest data from the International Telecommunication Union (ITU): approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access.

Highlights

In the pursuit of universal and meaningful connectivity, precise data tracking emerges as a cornerstone for informed decision-making. Data tracking equips stakeholders with the insights needed to identify areas requiring attention and improvement. Through a blend of quantitative indicators (numerical data and statistical analysis) and a qualitative approach (subjective assessments, such as in-depth case studies), a comprehensive connectivity assessment is achieved, facilitating effective individual country evaluations. 


What needs to be improved? While the data collection efforts of international organisations, especially the ITU and UNESCO, are complementary, they are often not perfectly coordinated. Other areas for improvement include a lack of quality data on how communities use the internet; a lack of reliable indicators for safety, security, and speed; and the reality that rural regions may not be fully reflected in the data collected.

There are several solutions, from regional collaboration and initiatives to the utilisation of emerging technologies. 

One proposed approach to expanding internet access involves utilising Low Earth Orbit (LEO) satellites. LEO satellites offer the potential to deliver real-time and reliable internet connectivity to remote or hard-to-reach regions worldwide. Nevertheless, several concerns have surfaced, primarily concerning the cost of accessing such services, their environmental impact, and the technical challenges associated with large-scale LEO satellite deployment.

For LEO satellites to be used and deployed effectively, countries need to review their laws to ensure alignment with international space law, and engage with international decision-making bodies such as the ITU and COPUOS to help shape supportive policies and rules.

To bridge the digital divide, it is essential to address various factors and develop comprehensive strategies that go beyond connectivity. There is a need for digital solutions customised to fit specific local environments. These strategies must address issues regarding the affordability and availability of devices and technologies and the availability of content and digital skills, as these deficiencies still pose barriers to full internet access.

In the broader context of the digital divide, AI and large language models (LLMs) were highlighted as having the potential to redefine and expand digital skills and literacy. Moreover, including native languages in these models can enable digital interactions, particularly for individuals with lower literacy skills.  

The goal of bridging the digital divide can only be achieved through partnerships and collaborations embodied in regional initiatives. Thus, Regional Internet Registries (RIRs) have an important role, particularly in regions that are underserved or have limited access to internet resources.

RIRs often go beyond their narrow mandate of allocating and registering internet number resources within a specific region of the world. By adopting a multistakeholder and regional approach, they have facilitated collaboration and knowledge sharing, leading to a more connected and equitable internet landscape.

One of the RIRs’ main strengths is building community trust. This trust has been established through their work on regional and local issues such as connectivity and support for community networks and Internet Exchange Points (IXPs). 


The EU’s Global Gateway initiative was identified as a good example of a collaborative effort to bridge the digital divide. Notable efforts under the project involve forging alliances with countries in Latin America and the Caribbean, implementing the Building the Europe Link with Latin America (BELLA) programme for fibre optic cables, establishing regional cybersecurity hubs, and strengthening the overall digital ecosystem.


10. How do digital technologies impact the environment?


We’ve broken too many environmental records this year. June, July, and August 2023 were the hottest three months ever documented; September 2023 was the hottest September ever recorded; and 2023 is firmly set to be the warmest year on record. Global temperatures will likely surge to record levels in the next five years. The discussion of the overall impact of digital technologies on the environment at the IGF was therefore particularly critical.

Highlights

Data show that digital technologies contribute 1% to 5% of greenhouse gas emissions and consume 5% to 10% of global energy.

Internet use comes with a hefty energy bill, even for seemingly small things like sending texts – it gobbles up data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions.

AI, the coolest kid on the block, leaves a significant carbon footprint too: For instance, the training of GPT-3 resulted in 552 metric tons of carbon emissions, equivalent to driving a passenger vehicle over 2 million kilometres. ChatGPT ‘drinks’ a 500ml bottle of fresh water for every simple conversation of about 20 to 50 questions.
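
The car-driving equivalence is easy to sanity-check. Assuming a typical passenger vehicle emits roughly 250 g of CO2 per kilometre (our illustrative figure; real-world estimates vary), a quick back-of-the-envelope calculation reproduces the ‘over 2 million kilometres’ claim:

```python
# Rough sanity check of the GPT-3 training figure quoted above.
# The per-km car emissions value is an assumption; estimates vary.
TRAINING_EMISSIONS_TONNES = 552   # metric tons of CO2 (from the text)
CAR_EMISSIONS_G_PER_KM = 250      # assumed typical passenger vehicle

grams = TRAINING_EMISSIONS_TONNES * 1_000_000
print(f"Equivalent to driving about {grams / CAR_EMISSIONS_G_PER_KM:,.0f} km")
# -> Equivalent to driving about 2,208,000 km
```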

The staggering number of devices globally (over 6.2 billion) needs frequent charging, contributing to significant energy consumption. Some of these devices also perform demanding computational tasks that require substantial power, pushing consumption higher still. Moreover, the rapid pace of electronic device advancement and increasingly shorter device lifespans have exacerbated the e-waste problem. 

In contrast, digital technologies also have the potential to cut emissions by 20% by 2050 in the three highest-emitting sectors – energy, mobility, and materials. 2050 is a bit far away, though, and immediate actions are critically needed to hit the 2030 Agenda targets.

What can we do? To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about available sustainable energy sources and establish standards for their use. If we craft and implement policies right from the inception of a new technological direction, we can make innovators and start-up stakeholders aware of its carbon footprint and ensure environmentally conscious design.

Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) in setting technology standards and promoting ethical practices, particularly concerning AI and its environmental impact, as well as collaboration among organisations like GIZ, the World Bank, and the ITU on standards for green data centres, highlight how imperative global cooperation is for sustainable practices. 

We can also harmonise measurement standards to track the environmental impacts of digital technologies. This will enable policymakers and stakeholders to develop more effective strategies for mitigating the negative impacts.

We can use satellites and high-altitude connectivity devices to make the internet more sustainable, taking the internet to far-off places using renewable energy sources like solar power. 

We can also leverage digital technologies to generate positive impacts. For instance, AI can be used to optimise electricity supply and demand, reduce energy waste and greenhouse gas emissions, and revolutionise the generation and management of renewable energy.



Data analysis of IGF 2023

To analyse the discussions at the IGF, we first recorded them. The total footage runs to almost 15 days: 14 days, 21 hours, 22 minutes, and 30 seconds, to be precise. Talk about a packed programme!

Then we used DiploAI to transcribe IGF2023 discussions verbatim and counted 3,242,715 words spoken. That is nearly three times the length of the longest book in the world – Marcel Proust’s À la recherche du temps perdu. If an IGF 2023 book of transcripts were published, an average reader, who reads 218 words per minute, would need roughly 248 hours – more than ten days! – to read it cover to cover.

Using DiploAI, we analysed this text corpus and extracted key points totalling 288,364 words, along with the most important words spoken. The 10 most mentioned words were: AI, internet, data, support, government, importance, technology, issue, regulation, and global. It is interesting to note that the 11th most mentioned word was digital. 

Word cloud shows the relative frequency of words in the IGF2023 text corpus. AI stands out clearly as the most prominent term, followed by internet, data, support, government, importance, technology, issue, regulation, and global.
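
For readers who want to reproduce numbers like these, here is a minimal sketch in Python of how such corpus statistics can be computed from a plain-text transcript. The file name, stop-word list, and helper code are our own illustrative assumptions, not DiploAI’s actual pipeline:

```python
import re
from collections import Counter

WORDS_PER_MINUTE = 218  # assumed average reading speed, as in the text

# Hypothetical transcript file; DiploAI's real tooling works differently.
with open("igf2023_transcripts.txt", encoding="utf-8") as f:
    text = f.read().lower()

words = re.findall(r"[a-z']+", text)
print(f"Total words: {len(words):,}")

# 3,242,715 words / 218 wpm ≈ 14,875 minutes ≈ 248 hours
hours = len(words) / WORDS_PER_MINUTE / 60
print(f"Reading time: {hours:,.0f} hours ({hours / 24:.1f} days)")

# Top terms, ignoring common function words (a tiny illustrative stop list).
STOPWORDS = {"the", "and", "of", "to", "a", "in", "that", "is", "we", "for"}
for word, count in Counter(w for w in words if w not in STOPWORDS).most_common(10):
    print(word, count)
```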

Prefix monitor

Prefix use followed a similar pattern to the previous three years. 

Digital was still the most used prefix, with a total of 8,661 references. This is a 62% increase in frequency compared to IGF 2022, when it was referenced 5,346 times.

Online and cyber took 2nd and 3rd places, respectively, with 3,682 and 3,532 mentions. While cyber remained in third place, it saw a 97% increase on last year, when it was mentioned 1,789 times. 

The word tech came in 4th place, as it did last year – a significant drop compared to 2021, when it held the 2nd spot.

Finally, virtual remained in 5th place, accounting for 2.5% of the analysed prefixes.

Chart shows the relative frequency of prefixes in the IGF2023 corpus: digital leads, followed by online, cyber, tech, and virtual.
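
As a quick check on those year-on-year figures (the mention counts are taken from the text above; the snippet itself is our own illustration):

```python
# Year-on-year change in prefix mentions at the IGF (counts from the text).
mentions = {
    "digital": (5_346, 8_661),  # (IGF 2022, IGF 2023)
    "cyber": (1_789, 3_532),
}

for prefix, (last_year, this_year) in mentions.items():
    change = (this_year - last_year) / last_year * 100
    print(f"{prefix}: {last_year:,} -> {this_year:,} ({change:+.0f}%)")
# digital: 5,346 -> 8,661 (+62%)
# cyber: 1,789 -> 3,532 (+97%)
```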

Diplo and GIP at IGF 2023

Reporting from the IGF: AI and human expertise combined

With 300+ sessions and 15 days’ worth of video footage featuring 1,240 speakers and 16,000 key points, IGF2023 was the largest and most dynamic IGF gathering so far. For the 9th consecutive year, the GIP and Diplo provided just-in-time reports and analyses from the discussions. This year, we added our new AI reporting tool to the mix. Diplo’s human experts and AI tool worked together in this hybrid system to deliver a more comprehensive reporting experience.

This hybrid approach consists of several stages:

  1. Online real-time recording of IGF sessions. First, our recording team set up an online recording system that captured all sessions at the IGF. 
  2. Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker contributed is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
  3. AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session and individual speakers. These graphical representations mapped the intricate connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis.
  4. Writing dailies. Our team of analysts used AI-generated reports to craft comprehensive daily analyses. 
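
For the technically curious, the four stages above can be pictured as a simple pipeline. The sketch below is purely schematic: the function names and types are our own assumptions and do not reflect DiploAI’s actual code.

```python
from dataclasses import dataclass

@dataclass
class SessionReport:
    session: str
    key_points: list[str]  # arguments, topics, and trends per speaker

def record(session: str) -> str:
    """Stage 1: capture the session online; returns a recording path."""
    ...

def transcribe(recording_path: str) -> dict[str, str]:
    """Stage 2: speech-to-text, split by speaker (speaker -> transcript)."""
    ...

def analyse(transcripts: dict[str, str]) -> SessionReport:
    """Stage 3: AI-generated report with key points and knowledge graphs."""
    ...

def write_daily(reports: list[SessionReport]) -> str:
    """Stage 4: human analysts turn the AI reports into daily analyses."""
    ...

# Hybrid loop: AI handles stages 1-3 at scale; humans own stage 4, e.g.
# daily = write_daily([analyse(transcribe(record(s))) for s in sessions])
```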

You can see the results of that approach – session reports and dailies – on our IGF2023 Report page.

You are presently reading the culmination of our efforts: the top highlights from the discussions at IGF2023. These debates are presented in a Q&A format, tackling the Global Digital Compact (GDC), AI, concerns about internet fragmentation, negotiations on cybercrime, digital taxation, misinformation, data governance, the digital divide, and climate change.


Diplo crew in Kyoto

Diplo and the GIP were actively engaged at IGF2023, organising and participating in various sessions.


8–12 October

Diplo and GIP booth at IGF 2023 village


IGF panellists Sorina Teleanu and Jovan Kurbalija in front of a screen projecting the same view of the panel.

Sunday, 8 October

Bottom-up AI and the right to be humanly imperfect (organised by Diplo) | Read more


Pavlina Ittleson sits on an IGF panel in front of a table with a laptop and monitor.

Tuesday, 10 October

How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums (co-organised by Diplo) | Read more


Anastasiya Kazakova speaks into a microphone at an IGF session.

Wednesday, 11 October

Ethical principles for the use of AI in cybersecurity (participation by Anastasiya Kazakova) | Read more


Sorina Teleanu speaks into a microphone on an IGF panel.

Thursday, 12 October

IGF to GDC – An Equitable Framework for Developing Countries (participation by Sorina Teleanu) | Read more


A panel moderator watches Vladimir Radunović on a projection screen as he speaks remotely at the session.

Thursday, 12 October

ICT vulnerabilities: Who is responsible for minimising risks? (co-hosted by Diplo) | Read more


Next Steps?

Line drawing depicts a busy street with cars and pedestrians. Many signposts and billboards congest the view with announcements for different IGF meetings.

Start preparing for IGF 2024 by following Digital Watch coverage of governance topics, actors, and processes.

DW Weekly #132 – 16 October 2023


Dear all,

As the conflict in the Middle East unfolds, and the world watches closely, those relying on social media for updates are left confused over what’s real and what’s not. This may be just the beginning of an age dominated by mis- and disinformation. In other news, there are new AI guidelines in the pipeline, while the EU has unveiled plans for a Digital Networks Act (which we’ll cover when things solidify a bit more).

Let’s get started.

Stephanie and the Digital Watch team

PS. Due to a technical glitch, this issue has been published a bit later than usual. Our apologies.


// HIGHLIGHT //

How the Middle East crisis is being (mis-)reported online

In recent days, as people have been grappling with the violence unfolding in Israel and Gaza, social media platforms have become inundated with graphic images and videos of the conflict. Without diminishing the gravity of what’s happening in the Middle East and the need to make it known, the problem with such social media content is that some of it is fake.

What’s fake, exactly? There’s a distinction between reporting something that didn’t happen and repurposing visuals from other conflicts for stronger impact. From a production point of view, there’s something sinister and malicious in fabricating a lie; during wartime, this is meant to raise alarm and stir up animosity. Reporting the truth but attaching a fake image is theoretically less sinister – although it is still a lie, and can fuel confusion, hostility, risks to public safety, and a corrosive civil discourse among those who consume it.


Additionally complex. In some cases, the issue is more complex still. Perpetrators go to the trouble of creating fake accounts and circulating uncaptioned imagery, leaving readers to draw their own conclusions. In this way, they can tap into biases and powerful emotions, such as fear, without taking responsibility for the truthfulness of the content.

The worst part. Most parts of the world have taken sides. Polarisation has reached unprecedented heights. When individuals decide to condone a violent action (or not) based on whether an image really originates from their adversaries rather than their favoured faction, that brings out the worst in people. We won’t go into the gory details: Killing innocent children is an atrocity, regardless of who’s behind it – or whether a report has attached the correct image to it. 

Where it’s happening. Misinformation is as old as humanity and decades old in its current recognisable form, but social media has amplified its speed and scale. To say that online misinformation spreads like wildfire is an understatement. The challenge is compounded when shared by people with large followings. This could also happen if the press falls victim to the misinformation that’s flowing into newsrooms at a staggering scale.

Deprioritised. Earlier this year, Meta, Amazon, Alphabet, and Twitter laid off many of the team members focusing on misinformation and hate speech, as part of a post-COVID-19 restructuring aimed at improving financial efficiency. 

The EU takes action: X, Google, Meta, TikTok ordered to remove fake content   

It didn’t take long for European Commissioner Thierry Breton to request that X, YouTube (Google), Facebook (Meta), and TikTok take down fake content.

In letters sent to X’s Elon Musk and to TikTok’s Shou Zi Chew, Breton wrote how their platforms had been used to disseminate fake content related to ‘the terrorist attacks carried out by Hamas against Israel’ (in his letters to Google’s Sundar Pichai and Meta’s Mark Zuckerberg, Breton simply wrote about a surge in such content ‘via certain platforms’).

Each letter reminded the platforms of their obligations under the new Digital Services Act (DSA), including prompt responses to take-down requests by law enforcement. In X, Facebook, and TikTok’s case, the Commissioner gave the platforms 24 hours to respond.

The case of X. In the other platforms’ case, things went more or less quiet. In X’s case, CEO Linda Yaccarino responded to the complaints, confirming the removal of hundreds of Hamas-linked accounts and the removal or flagging of thousands of pieces of content. Whether this response was deemed unsatisfactory, or a formal follow-up was always on the cards, just a day later the European Commission sent X a formal request for information. Breton tweeted that this was ‘a first step in our investigation to determine compliance with the DSA’, hinting that resolution will require more than a handful of exchanged letters. 

Elections in sight. The immediate worry may well be the Middle East conflict, but the longer-term worry is the numerous elections in 2024 – from the EU’s parliament and those in European countries, to the US presidential elections. It’s a concern that affects many countries.

The restructuring may yet prove costly for the platforms that laid off disinformation teams to save money.


Digital policy roundup (9–16 October)

// AI GOVERNANCE //

G7 to agree on AI guidelines by year’s end, Japan PM confirms

Japan confirmed that G7 leaders will agree, by the end of the year, on international guidelines for users of AI systems, as well as non-binding rules and a code of conduct for developers. This was announced by Prime Minister Fumio Kishida during last week’s Internet Governance Forum (IGF) in Kyoto. 

The texts form part of the Hiroshima AI Process, which was kickstarted during May’s G7 summit, held in Hiroshima. The upcoming summit will take place online.

Why is it relevant? There has been a lot of anticipation for the G7 rules on AI, even though they are non-binding. Japan, the current G7 president, will want to see its plans through by the end of the year, before it passes the baton to Italy.


ASEAN eyeing business-friendly AI rules

Southeast Asian countries are taking a business-friendly approach to AI regulation, according to a leaked draft text. The Association of Southeast Asian Nations (ASEAN) draft guide to AI ethics and governance asks companies to consider cultural differences and doesn’t prescribe categories of unacceptable risk. 

The guide is voluntary and meant to guide domestic regulations. ASEAN’s hands-off approach is seen as more business-friendly, as it limits the compliance burden and allows for more innovation. 

Why is it relevant? The EU has been discussing AI rules with countries in the region in a bid to convince them to follow its approach. But ASEAN’s approach clearly goes against the EU’s push for globally harmonised binding rules and is more aligned with other business-friendly frameworks.


// ANTITRUST //

Done deal: Microsoft’s acquisition of Activision Blizzard is approved

Microsoft has completed its USD68.7 billion acquisition of video games producer Activision Blizzard after the UK’s Competition and Markets Authority (CMA) approved the deal. The approval was granted after Microsoft presented the CMA with a restructured agreement under which it would transfer cloud streaming rights to Ubisoft – an offer the CMA had already said addressed its previous concerns.

The EU had already given the green light to the merger in May, but media reports said the European Commission was deciding whether it would look further into the restructured deal. Now it seems the European Commission won’t pursue this after all. However, the US Federal Trade Commission (FTC) intends to look into the licensing agreement Microsoft signed with Ubisoft.

Why is this relevant? Microsoft’s acquisition of Activision has been controversial. It’s the most expensive acquisition yet by Big Tech, and due to its scale, regulators feared it could hurt competition and give Microsoft too much power in the gaming market. European regulators are satisfied, but will these approvals solve the bigger problem of Big Tech accumulating ever more power? 



// TAXATION //

IRS audit: Microsoft faces potential USD28.9 billion tax bill

The US Internal Revenue Service (IRS) has notified Microsoft that it owes USD28.9 billion in back taxes, penalties, and interest, covering the period 2004–2013 (nowhere near the USD160 million (GBP136 million) the company just paid the UK’s tax authority). The audit, which has been ongoing for over a decade, focuses on a deal in which Microsoft transferred intellectual property to a factory in Puerto Rico for more favourable tax treatment. 

Microsoft says the taxes it has already paid could decrease the final tax owed under the audit by up to USD10 billion. The company plans to appeal the IRS’ conclusions, and the case is expected to continue for several more years.

Why is it relevant?

First, it’s the largest audit in US history. The IRS may be looking at the Microsoft case as a chance to prove the agency’s effectiveness in being more aggressive against corporations with endless resources. 

Second, it’s yet another example of Big Tech shifting income to low-tax jurisdictions specifically to lower their tax bill. 

Third, it coincides with the latest step in the OECD’s overhaul of global tax rules: The OECD has just published the text of a multilateral convention to implement the so-called Amount A of Pillar One. In simpler terms, this part of the new global rules will oblige some of the largest tech companies in the world to pay tax where their users are located, rather than where their corporate offices are based. 


The week ahead (16–23 October)

16–17 October: This year’s International Regulators Forum is being hosted in Cologne, Germany. The Small Nations Regulators Forum takes place tomorrow.

16–20 October: UNCTAD’s 8th World Investment Forum has returned as an in-person event hosted in Abu Dhabi, UAE.  

18 October: The US Federal Communications Commission meets on Wednesday to decide whether to kickstart the legislative process to restore the net neutrality rules it had introduced in 2015 (reversed in 2017).

18–19 October: The Organization of American States (OAS) Cyber Symposium 2023 takes place in The Bahamas. It’s organised in partnership with the National CIRT of The Bahamas.

20 October: The 27th EU-US Summit, in Washington DC, will bring together US President Joe Biden, European Council President Charles Michel, and European Commission President Ursula von der Leyen to talk about cooperation in areas including AI and digital infrastructure. 

21–26 October: ICANN78 takes place in Hamburg, Germany, starting Saturday. It will be the organisation’s 25th Annual General Meeting.


#ReadingCorner
Little boy with a mobile phone on the street

Google proposes framework for protecting kids online 
‘Appropriate safeguards can empower young people and help them learn, connect, grow, and prepare for the future.’ This is how Google introduces its new framework for child safety, which tells policymakers how the company views existing and proposed rules concerning, for instance, age verification, parental consent, and personalised content. Read the blog and framework, published earlier today.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation