Digital Watch newsletter – Issue 83 – October 2023

Cover image of the Digital Watch newsletter, titled ‘Digital at UNGA 78’, with a drawing of the speakers at the UNGA and an artificial brain, representing AI, writing a report.

Observatory

Snapshot: What’s new in digital policy?

Geopolitics

The European Commission has published a preliminary list of four technology areas deemed high-risk because of their potential misuse by autocratic regimes and for human rights violations; experts believe the list is aimed at China. Across the Atlantic, while Washington is considering additional chip export restrictions, US companies will continue to sell chips to China, but not the most advanced ones. China’s trade council has called on the USA to reconsider the rules limiting US investments in the Chinese tech sector, arguing that the restrictions are vague and do not distinguish between military and civilian applications.

AI governance

The G7 countries have agreed to create an international code of conduct for AI that would establish principles for overseeing and controlling advanced forms of AI. In the same vein, Japan (which currently holds the G7 presidency) and Canada have published voluntary codes of conduct for companies developing AI.

This initiative is part of a recent trend of relying on optional guidelines until binding regulations are adopted.

The UK’s competition regulator, the Competition and Markets Authority (CMA), has proposed seven principles to guide the development and deployment of AI foundation models (technology trained on vast amounts of data to perform a wide range of tasks and operations). Finally, the USA has announced that it will soon present a proposal for global norms on the use of military AI to the UN.

Security

The International Committee of the Red Cross (ICRC) has issued eight rules of engagement for hackers taking part in conflicts, warning them that their actions can put lives at risk. Among other things, the rules prohibit cyberattacks against civilians, hospitals, and humanitarian facilities, as well as the use of malware or similar tools that could harm both military and civilian targets.

Infrastructure

The US Federal Communications Commission (FCC) plans to reinstate the net neutrality rules that were repealed in 2017. FCC Chairwoman Jessica Rosenworcel announced that the FCC is proposing to reclassify broadband under Title II of the US Communications Act. This would give the FCC greater authority to regulate internet service providers, including the ability to prevent operators from slowing down or speeding up internet traffic to certain websites.

Huawei, the Chinese tech giant, has filed a lawsuit before a Lisbon court against a resolution of Portugal’s Cybersecurity Council (CSSC) that bars operators from using its equipment in high-speed 5G mobile networks.

Internet economy

The European Commission has designated six companies – namely Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft – as gatekeepers under the Digital Markets Act (DMA), following a 45-day review process. The designation covers a total of 22 core platform services provided by these companies.

In another development, Amazon secured a temporary victory in a case concerning its classification as a very large online platform (VLOP). In response to Amazon’s request, the Court of Justice of the European Union (CJEU) in Luxembourg granted interim measures, resulting in the postponement of certain obligations under the Digital Services Act (DSA). The decision comes as strict measures under the EU’s DSA take effect, affecting 19 large online platforms and search engines.

The (alleged) anticompetitive practices of big tech companies were in the spotlight last month. The US Federal Trade Commission (FTC) and 17 state attorneys general sued Amazon for allegedly anticompetitive behaviour. The US Department of Justice’s lawsuit against Google, one of the biggest antitrust cases in decades, began on 12 September 2023. The trial focuses on Google’s search business, which is alleged to be ‘anticompetitive and discriminatory’, allowing the company to maintain a monopoly in the digital advertising market. In another case also involving Google, the company announced a tentative settlement in the USA over monopoly allegations concerning the Play Store app platform.

The European Commission has informally gathered views on potentially abusive practices by Nvidia, Bloomberg revealed. This comes after the French competition authority carried out an ‘unannounced inspection […] in the graphics cards sector’, which was later revealed to involve Nvidia.

Digital rights

The Irish Data Protection Commissioner confirmed a EUR 345 million (USD 370 million) fine against TikTok for breaching European privacy laws in its handling of children’s personal data. The US National Telecommunications and Information Administration (NTIA) is seeking public input on the risks the internet poses to children and on how to mitigate them.

The Data Governance Act, a cornerstone of the European data strategy, entered into application on 24 September 2023. Its main objective is to facilitate the secure exchange of data across sectors and EU member states, notably by improving the reuse of public-sector data.

Reporters Without Borders (RSF) has called on the public to take part in drafting its AI Charter, which aims to clarify the journalism community’s position on the extensive use of AI technologies in the field.

Norway’s data protection authority hopes to extend its daily fines of NOK 1 million (USD 93,000) against Meta for privacy violations to the whole of the EU and the European Economic Area (EEA). It is now up to the European Data Protection Board (EDPB) to assess the situation.

Content policy

A US federal appeals court extended the limits on the Biden administration’s communication with social media platforms to cover the US Cybersecurity and Infrastructure Security Agency (CISA). The ruling significantly curtails the ability of the White House and government agencies to engage with social media platforms on content moderation issues.

The EU has warned major social media platforms against failing to comply with the recently adopted Digital Services Act (DSA) in the fight against disinformation.

Development

The EU published its Digital Decade report, which recommends measures for reaching the Digital Decade targets by 2030.

New ITU data shows that global internet access improved in 2023, with more than 100 million new users worldwide.
The G77 summit adopted the Havana Declaration, which emphasises science, technology, and innovation, and outlines the G77’s future actions.

THE TALK OF THE TOWN – GENEVA

At the 54th session of the UN Human Rights Council (HRC), a panel discussed cyberbullying against children, examining the role of states, the private sector, and other stakeholders in combating cyberbullying and empowering children in the digital sphere. The Council also took up the summary report on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression, presented at its 53rd session, and examined a report on the impact of new technologies intended for climate protection.

The WTO Public Forum 2023 focused on the role of trade in promoting an environmentally friendly future, notably under the theme ‘Digitalisation as a tool for greening supply chains’. More than 20 sessions were devoted to digital tools and their impact.

The 8th session of the WIPO Conversation looked at generative AI and intellectual property. Over two days, six panels addressed generative AI use cases, the regulatory landscape, ethical concerns around training data, authorship, ownership of creative work, and strategies for navigating intellectual property in the age of generative AI.


In brief

Digital at UNGA 78

The general debate of the UN General Assembly (UNGA) is a global platform where world leaders gather to address some of the most pressing issues facing humanity. One of these crucial topics is the impact of digital technologies.

During the general debate at UNGA 2023, 94 speakers, including the UN Secretary-General as well as representatives of the Holy See and the EU, touched on digital themes.

This figure (94) represents a significant increase over our first analysis in 2017, when 47 countries spoke about digital topics. Six years on, that number has doubled to 94. The sharp rise underscores the growing recognition of the paramount importance of digital technologies at the highest levels of diplomatic discourse.

To put this in a broader context, discussions related to digital technology accounted for 2.51% of the entire text corpus produced in the UNGA 2023 speeches.

Chart: Overall number of speakers mentioning digital issues

The 2023 general debate saw a substantial increase in mentions of AI in national statements. Of the 467,130 words delivered during the debate, 6,279 related to AI, confirming its position as the most frequently addressed digital topic. This surge in interest can be attributed, in part, to the widespread attention generated by the launch of ChatGPT.
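As a rough illustration of how corpus figures like these can be produced, here is a minimal keyword-counting sketch. The folder name and keyword list are hypothetical placeholders, not Diplo’s actual methodology.

```python
# Minimal sketch: count AI-related words across a folder of speech transcripts.
# "speeches/" and AI_KEYWORDS are illustrative assumptions, not the real pipeline.
import re
from pathlib import Path

AI_KEYWORDS = {"ai", "artificial", "intelligence", "chatgpt"}  # hypothetical list

total_words = 0
ai_words = 0

for transcript in Path("speeches").glob("*.txt"):
    words = re.findall(r"[a-z']+", transcript.read_text(encoding="utf-8").lower())
    total_words += len(words)
    ai_words += sum(1 for word in words if word in AI_KEYWORDS)

share = 100 * ai_words / total_words if total_words else 0.0
print(f"{ai_words} of {total_words} words ({share:.2f}%) relate to AI")
```

A real analysis would of course need multilingual handling and a much richer term list, but a proportion like the ones reported above boils down to keyword hits divided by total words.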

AI featured in 39 speeches at UNGA 78, testifying to its growing importance. However, leaders also explored other digital topics, including digital development (44), cybersecurity (23), content policy (7), economic considerations (4), and human rights (6).

AI. The rapid evolution of AI has raised concerns about its potential risks, from job displacement to cyberthreats. While some speakers highlighted AI’s transformative potential in healthcare and education, many stressed the need for ethical governance and international cooperation. A consensus emerged on the urgency of regulating AI, addressing its military applications, and establishing global standards. The UN’s role in facilitating these discussions and promoting the responsible use of AI was a recurring theme, with calls for a Global Digital Compact and the creation of an international AI agency.

Digital development. Leaders stressed the need to bridge the digital divide, reduce inequalities, and ensure inclusive digital development. Many nations advocated international cooperation, through initiatives such as the Global Digital Compact, to tackle these challenges collectively. The importance of digital technologies for achieving the sustainable development goals and fostering global solidarity was a theme shared by many leaders.

Cybersecurity. The evolving landscape of non-traditional security threats, particularly cybersecurity and cybercrime, was debated. Leaders underlined the need for international cooperation and governance frameworks to address cross-border cyberthreats, protect critical infrastructure, and combat cybercrime.

Content policy. Leaders addressed the worrying spread of disinformation and misinformation, amplified by AI and social media platforms. They highlighted the risks to democracy, as well as the rise in real-world violence and conflict triggered by online hate speech and disinformation. Efforts to counter disinformation include proposals for a digital bill of rights and a code of conduct for information integrity on digital platforms.

Economy. The importance of embracing digital technology and encouraging innovation to strengthen economies was emphasised. Efforts to reduce trade barriers, pursue free trade agreements, and transition to digital and green economies were highlighted.
Human rights. Leaders expressed concerns about online surveillance, data collection, and human rights violations. They called for human-centred, human-rights-based approaches to the development and deployment of technologies.


Should we let AI hallucinate?

This year, Diplo’s human experts were joined by DiploAI in analysing the speeches. They distilled key points and spotted patterns in the speeches, including cases where the AI hallucinated, made up false information, or distorted reality. Diplo’s Jovan Kurbalija suggests that perhaps we should let it, in his latest blog post, ‘Diplomatic and AI hallucinations: How to think outside the box to solve global problems?’

The world map highlights the countries that addressed digital issues at UNGA 78.

The EU’s vision for digital and AI in 2023: von der Leyen’s address

In her 2023 State of the Union address, European Commission President Ursula von der Leyen set out her vision for Europe’s digital future, with a particular emphasis on the role of AI. The speech highlighted Europe’s achievements in the digital field, as well as the steps taken to address the challenges and seize the opportunities presented by AI and digital technologies.

Von der Leyen delivering her address. Source: European Commission

Europe’s investment in digital transformation

President von der Leyen began by highlighting the importance of digital technology in simplifying business activities and everyday life. She stressed that Europe had exceeded its target for investment in digital projects under NextGenerationEU, with member states using this funding to digitalise key sectors such as healthcare, justice, and transport.

Managing digital risks and protecting fundamental rights

The president also acknowledged, however, the challenges posed by the digital world, including disinformation, harmful content, and risks to privacy. She noted that these problems erode trust and violate fundamental rights. To counter these threats, Europe has taken the lead in protecting citizens’ rights through legislative frameworks such as the DSA and the DMA, which aim to create a safer digital space and hold tech giants accountable.

The role of AI

President von der Leyen emphasised AI’s potential to revolutionise healthcare, boost productivity, and combat climate change. But she also warned against underestimating the very real threats posed by AI. Citing the concerns of leading AI developers and experts, she stressed the importance of mitigating the risks of AI on a global scale.

The three pillars of a responsible AI framework

The president set out three key pillars to guide Europe’s work on a global framework for AI: guardrails, governance, and guiding innovation.

  1. Guardrails: ensuring that AI development remains human-centred, transparent, and accountable. The AI Act, a comprehensive, innovation-friendly AI law, was presented as a blueprint for the whole world. The priority now is to adopt the rules quickly and move on to implementation.
  2. Governance: establishing a single governance system in Europe and working with international partners to create a global panel of experts for AI, similar to the Intergovernmental Panel on Climate Change (IPCC). This body would provide insights into AI’s impact on society and ensure globally coordinated responses.
  3. Guiding innovation: leveraging Europe’s leading role in supercomputing by giving AI start-ups access to high-performance computers to train their models. In addition, it is essential to foster an open dialogue with AI developers and companies, along the lines of the voluntary safety, security, and trust rules adopted by major tech companies in the USA.


Ad Hoc Committee on Cybercrime: Key takeaways from the 6th session

The 6th session of the Ad Hoc Committee on Cybercrime has wrapped up its work, but many questions remain open. Although the final round is scheduled for February 2024, states have yet to agree on whether the convention should refer to ‘cybercrime’ or to ‘the use of ICTs for malicious purposes’.

The latest draft (updated on 1 September 2023) also sparked debate among states on the convention’s scope, with China and Russia concerned that the evolving landscape of information and communication technologies (ICTs) had not been adequately taken into account. On the criminalisation of offences, Russia stressed the need to penalise the use of ICTs for extremist and terrorist purposes and, together with Namibia and Malaysia, among other countries, supported including digital assets in the laundering of proceeds of crime. At the same time, some countries, including the UK and Australia, opposed their inclusion, arguing that they fall outside the scope of the convention.

The human rights provisions raised concerns not only among states but also among other stakeholders. Microsoft notably stated that the provisions in the latest draft could be ‘disastrous for human rights’. On data protection measures, South Africa, the USA, and Russia proposed the collection of traffic data and the interception of content data. Singapore and Switzerland, meanwhile, opposed the proposal, with the EU stressing that such measures pose a threat to human rights and fundamental freedoms.

Negotiations on international cooperation also ran into difficulties, with Russia recalling the importance of distinguishing between the place where data is kept and the places where it is processed, stored, and transmitted, particularly in the context of cloud computing. To address the problem of the ‘loss of location’ of data, Russia proposed referring to the Second Additional Protocol to the Budapest Convention. For their part, countries such as Pakistan, Iran, China, and Mauritania proposed including article 47 bis on cooperation between national authorities and service providers. In essence, this cooperation would cover the reporting of cybercrime offences as established by the convention, the sharing of expertise, training, the preservation of electronic evidence, and ensuring the confidentiality of requests received from law enforcement authorities.

An interesting proposal came from Costa Rica and Paraguay: to include the word ‘sustainability’ in articles 52 and 56, so as to provide effective assistance and address the societal impact of cybercrime.
So the question remains open: Is there a treaty yet? Have states agreed on the provisions? No. Will states hold their final round in February 2024? Yes. What happens if there is no consensus? The Bureau of the UN Office on Drugs and Crime (UNODC) will step in and confirm that decisions will be taken by a two-thirds majority of the representatives present and voting.



Coming up: IGF 2023

The 2023 edition of the Internet Governance Forum (IGF) will take place in Kyoto, Japan, on 8–12 October, under the theme ‘The Internet We Want – Empowering All People’.

The programme is built around eight sub-themes:

  • AI and emerging technologies;
  • avoiding internet fragmentation;
  • cybersecurity, cybercrime, and online safety;
  • data governance and trust;
  • digital divides and inclusion;
  • global digital governance and cooperation;
  • human rights and freedoms;
  • sustainability and the environment.

The Forum will feature some 300 sessions in a plethora of formats, including high-level sessions, main sessions, workshops, open forums, lightning talks, launches and awards, networking sessions, Day 0 events, Dynamic Coalition sessions, and sessions of National and Regional Initiatives (NRIs).

In addition, the IGF Village, where 76 exhibitors will present their work, will be open to visitors.

Stay up to date with the GIP’s reporting!

The Geneva Internet Platform will once again be actively involved in IGF 2023, reporting from IGF sessions for the 9th consecutive year. This year, our human experts will be joined by DiploAI, which will generate reports on all IGF sessions.

We will also publish daily IGF reports throughout the week, and a final report will be released once the IGF wraps up.

Bookmark our dedicated IGF 2023 page on the Digital Watch Observatory or download the app to receive the reports. Subscribe to receive the daily newsletters.


If you are attending the IGF in Kyoto, stop by the Diplo and GIP booth. If you are joining online, visit our space in the virtual village.


Francophonie news


The OIF holds a Francophone Digital Café on ‘Discoverability and cultural and linguistic diversity in the digital space’ for francophone delegations to the United Nations in New York


Following the publication of its Contribution to the Global Digital Compact (GDC) in April 2023, handed to the UN Secretary-General’s Envoy on Technology on 3 May 2023, the Organisation internationale de la Francophonie (OIF) has launched the ‘Francophone Digital Cafés’. The aim of this twice-monthly gathering is to raise awareness among francophone diplomats and digital experts at the permanent missions to the United Nations of the diplomatic implications of digital developments, to take regular stock of ongoing processes, to encourage francophone consultation in New York and, ultimately, to foster better coordination of positions. This awareness-raising work also falls under the ‘D-CLIC, formez-vous au numérique’ programme and its third component on digital governance awareness.

The second ‘Francophone Digital Café’ thus covers the theme of ‘Discoverability and cultural and linguistic diversity in the digital space’ and will take place on 26 October 2023. The choice of theme is no coincidence: it is one of the proposals made by the OIF in its Contribution to the GDC, complementing the themes drawn up by the United Nations: ‘Promoting cultural and linguistic diversity in the digital realm’. In it, the OIF champions cultural and linguistic diversity in the digital space through strong advocacy for the ‘discoverability’ of online content.

Indeed, the digital environment does not sufficiently address the challenges of multilingualism, and the risk that the ‘platformisation’ of how content is consumed and distributed will exclude a large share of cultural expressions must be taken into account. This risk must be mitigated by measures that ensure the discoverability of all content on the web. The digital universe must therefore reflect this diversity by creating an ecosystem conducive to the affirmation and promotion of cultural and linguistic pluralism, excluding any monopoly of thought or any form of cultural hegemony. This is all the more timely given the rise of artificial intelligence (AI) and the way algorithms generate content in different languages, which affects the visibility and discoverability of francophone content online. At stake is the governance of algorithms in the service of diversity and discoverability in cyberspace. Tomorrow’s cultural richness and diversity will be promoted largely on the internet, and it is essential to start building now the environment that will safeguard them. These are the issues that will be explored under the guidance of Mr Destiny Tchehouali, Professor of International Communication in the Department of Social and Public Communication at the Université du Québec à Montréal (UQAM) and co-holder of the UNESCO Chair in Communication and Technologies for Development.

Beyond raising awareness and building capacity on topics linked to digital development, this dialogue will help foster convergence and common positions among francophone diplomats and delegations in the various forums in New York, notably during the intergovernmental negotiations on the GDC that will open in December 2023.

The Executive Working Group on Digital Affairs (GTEN) delivers its report and recommendations for strengthening the Francophonie’s action in the digital field

The Secretary-General of the Francophonie, Louise Mushikiwabo, received the report of the executive working group on digital governance from the hands of Swiss Ambassador Martin Dahinden.

The report, mandated by the Djerba Summit of Heads of State and Government of countries sharing the French language, aims to clarify the added value of the Francophonie in general, and of the OIF in particular, in digital governance. It was drawn up by a working group, the Executive Working Group on Digital Affairs (GTEN), made up of a small number of high-level members from countries representative of the territories that form the francophone space. Chaired by Swiss Ambassador Martin Dahinden, it comprises experts from Benin, Canada/Quebec, the Democratic Republic of the Congo, France, Morocco, Romania, Vietnam, and the Wallonia-Brussels Federation.

From June to September 2023, the group met seven times and drew in particular on several Francophonie reference documents (the record of decisions of the 18th Conference of Heads of State and Government of OIF member countries, held on 19–20 November 2022; the Digital Francophonie Strategy 2022–2026; and the Francophonie’s Contribution to the Global Digital Compact) to reflect, in a collaborative and iterative manner, on the digital challenges facing the francophone space.

The report therefore sets out priority areas and recommendations for strengthening the action of the Francophonie and its members in the digital field. It puts forward operational proposals on each of the following themes: reducing the digital divide and ensuring digital access for the populations of the francophone space; building the capacities of national and regional actors, with particular attention to women and young people; amplifying francophone voices in digital governance, notably by consolidating initiatives between francophone countries on digital regulation; improving the discoverability of francophone content by helping to increase its visibility online; and, finally, promoting digital innovation that is responsible, inclusive, and respectful of human rights.

The Secretary-General of the Francophonie, who has placed digital issues at the heart of her mandate, has undertaken to bring these proposals before the foreign ministers of the francophone space, who will meet at the Ministerial Conference of the Francophonie in Yaoundé on 4–5 November.

Photo credit: OIF

The OIF speaks at the 2023 ITU Regional Development Forum for Africa (Addis Ababa, 3–5 October 2023)

The Organisation internationale de la Francophonie, represented by its Permanent Representative to the African Union, Ms Néfertiti Tshibanda, took part in the ITU Regional Development Forum for Africa (RDF-AFR) in Addis Ababa, held under the theme ‘Digital transformation for a sustainable and equitable digital future: Accelerating the implementation of the SDGs in Africa’. The high-level session in which the Permanent Representative took part focused on Africa’s digital development and transformation, with people at the heart of the process. In her remarks, she retraced the history of the Francophonie’s commitment to and vision for digital development, the OIF’s work on strengthening the digital skills of francophone populations, notably through the D-CLIC programme, and its engagement in digital governance. Cultural and linguistic diversity in the digital space, together with the discoverability of online content, is one of the OIF’s priority topics in the field of digital governance. It is also one of the two themes (strengthening digital capacities and promoting cultural and linguistic diversity in the digital realm) that the OIF added to the seven initial topics proposed by the United Nations for the Global Digital Compact. As a reminder, the OIF submitted its full contribution to the Global Digital Compact to the UN Secretary-General’s Envoy on Technology and presented it to the francophone delegations in New York on 3 May 2023.


Data protection authorities from the francophone space meet in Morocco for the 14th AFAPDP conference

Morocco’s Commission Nationale de Contrôle de la Protection des Données à Caractère Personnel (CNDP) hosted the 14th conference of the Association Francophone des Autorités de Protection des Données Personnelles (AFAPDP) on 2 October 2023 in Tangier, Morocco. The main theme of the conference was ‘The challenges of DPA–GAMMA relations: The example of web scraping’.

Web scraping – the automated extraction of data from the web – can pose major challenges for privacy protection. Scraping can sweep up personal data, notably from social networks, and may therefore run counter to the principles and rules governing personal data.
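For readers unfamiliar with the technique, the sketch below shows, in a few lines, how automated extraction works and why personal data so easily ends up in a scraper’s output. The URL and page structure are hypothetical, and any real-world scraping must respect robots.txt, terms of service, and data protection law.

```python
# Illustrative only: fetch a page and look for email-like strings in its text,
# showing how easily personal data can end up in a scraper's output.
# The URL is a placeholder; real scraping must respect robots.txt, terms of
# service, and applicable data protection rules.
import re

import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.org/public-profiles", timeout=10)
soup = BeautifulSoup(response.text, "html.parser")

page_text = soup.get_text(" ", strip=True)
emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", page_text))

print(f"Scraped {len(page_text)} characters; found {len(emails)} email-like strings")
```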

It is worth recalling that, together with 11 other data protection authorities around the world (those of Australia, Canada, the UK, Hong Kong, Switzerland, Norway, New Zealand, Colombia, Jersey, Argentina, and Mexico), the CNDP had already signed, in August 2023, a letter addressed to the GAMMAs (Google, Apple, Meta, Microsoft, and Amazon) and to other social media companies such as X Corp (formerly Twitter) and ByteDance Ltd (TikTok), inviting them to take measures to minimise the privacy risks to their users.

Find out more: https://www.afapdp.org

Upcoming events:

  • Conference of the Réseau francophone des régulateurs des médias – REFRAM (2023, Dakar), date to be confirmed (https://www.refram.org)
  • Conference of the Réseau francophone de la régulation des télécommunications – FRATEL (25–26 October 2023, Rabat, Morocco): How can user satisfaction be strengthened as a goal of regulation? (https://www.fratel.org/)
  • OIF participation in the annual general meeting of ICANN (ICANN 78), the Internet Corporation for Assigned Names and Numbers (21–26 October 2023, Hamburg)

IGF 2023 – Daily 4


IGF Daily Summary for
Wednesday, 11 October 2023

Dear reader, 

The third day always brings a peak in IGF dynamics, as happened yesterday in Kyoto. The buzz in the corridors, bilateral meetings, and dozens of workshops bring into focus the small and large ‘elephants in the room’. One of these was the future of the IGF in the context of the fast-changing AI and digital governance landscape.

What will the future role of the IGF be? Can the IGF fulfil the demand for more robust AI governance? What will the position of the IGF be in the context of the architecture proposed by the Global Digital Compact, to be adopted in 2024?

These and other questions were addressed in two sessions yesterday. Formally speaking, decisions about the future of the IGF will most likely happen in 2025. The main policy dilemma will be about the role of the IGF in the context of the Global Digital Compact, which will be adopted in 2024. 

While governance frameworks featured prominently in the debates, a few IGF discussions dived deeper into the specificities of AI governance. 

Yesterday’s sessions provided intriguing reflections and insights on cybersecurity, digital and the environment, human rights online, disinformation, and much more, as you can read below.

You can also read how we did our reporting from IGF 2023. Next week, Diplo’s AI and team of experts will provide an overall report with the gist of the debates and many useful (and interesting) statistics.

Stay tuned!

The Digital Watch team


Do you like what you’re reading? Bookmark us at https://dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


Highlights from yesterday’s sessions
Kinkaku-ji Temple in Kyoto. Credit: Sasa VK

The day’s top picks

  • The future of the IGF
  • Ethical principles for the use of AI in cybersecurity
  • Inclusion (every kind of inclusion)

Digital Governance Processes

What is the future of the IGF? 

It could be a counter-intuitive question given the success of IGF 2023 in Kyoto. But continuous revisiting of the purpose of the IGF is built into its foundations. The next review of the future of the IGF will most likely happen in 2025, on the occasion of the 20th anniversary of the World Summit on the Information Society (WSIS), when the decision to establish the IGF was made.

In this context, over the last few days in Kyoto, the future of the IGF has featured highly in corridors, bilateral meetings, and yesterday’s sessions. One of the main questions has been what the future position of the IGF will be in the context of the Global Digital Compact (GDC), to be adopted during the Summit of the Future in September 2024. For instance, what will be the role of the IGF if the GDC establishes a Digital Cooperation Forum, as suggested in the UN Secretary-General’s policy brief?

Debates in Kyoto reflected the view that fast developments, especially in the realm of AI, require more robust AI and digital governance. Many in the IGF community argue for a prominent role for the IGF in the emerging governance architecture. For example, the IGF Leadership Panel believes that it is the IGF that should participate in overseeing the implementation of the GDC. Creating a new forum would incur significant costs in finances, time, and effort. There is also a view that the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments in order to, among other things, involve communities currently missing from IGF debates. This view is supported by the IGF’s capacity to change and evolve, as it has done since its inception in 2006.


The Digital Watch team and Diplo will follow the debate on the future of the IGF in the context of the GDC negotiations and the WSIS+20 Review Process.


AI

AI and governance

AI will be a critical segment of the emerging digital governance architecture. In the Evolving AI, evolving governance: From principles to action session, we learned that we could benefit from two things. First, we need a balanced mix of voluntary standards and legal frameworks for AI. It’s not about just treating AI as a tool, but regulating it based on its real-world use. Second, we need a bottom-up approach to global AI governance, integrating input from diverse stakeholders and factoring in geopolitical contexts. IEEE and its 400,000 members were applauded for their bottom-up engagement with regulatory bodies to develop socio-technical standards beyond technology specifications. The UK’s Online Safety Bill, complemented by an IEEE standard on age-appropriate design, is one encouraging example.

The open forum discussed one international initiative specifically – the Global Partnership on Artificial Intelligence (GPAI). The GPAI operates via a multi-tiered governance structure, ensuring decisions are made collectively, through a spectrum of perspectives. It currently boasts 29 member states, and others like Peru and Slovenia are looking to join. At the end of the year, India will be taking over the GPAI chair from Japan and plans to focus on bridging the gender gap in AI. It’s all about inclusion, from gender and linguistic diversity to educational programmes to teach AI-related skills. 

AI and cybersecurity

AI could introduce more uncertainty into the security landscape. For instance, malicious actors might use AI to craft more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. AI is also making it easier to develop bioweapons and to proliferate autonomous weapons, raising concerns about modern warfare. National security strategies might shift towards preemptive strikes, as commanders fear that failing to strike the right balance between ethical criteria and a swift military response could put them at a disadvantage in combat.

On the flip side, AI can play a role in safeguarding critical infrastructure and sensitive data. AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues, by assisting in age verification and disrupting suspicious behaviours and patterns that may indicate child exploitation. AI could be a game-changer in reducing harm to civilians during conflicts: It could reduce the likelihood of civilian hits by identifying and directing target strikes more accurately, thus enhancing precision and protecting humanitarian elements in military operations. One of yesterday’s sessions, Ethical principles for the use of AI in cybersecurity, highlighted the need for robust ethical and regulatory guidelines in the development and deployment of AI systems in the cybersecurity domain. Transparency, safety, human control, privacy, and defence against cyberattacks were identified as key ethical principles in AI cybersecurity. The session also argued that existing national cybercrime legislation could cover attacks using AI without requiring AI-specific regulation.

Diplo’s Anastasiya Kazakova at the workshop: Ethical principles for the use of AI in cybersecurity.

The question going forward is: Do we need separate AI guidelines specifically designed for the military? The workshop on AI and Emerging and Disruptive Technologies in warfare called for the development of a comprehensive global ethical framework led by the UN. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the need for a unified approach and compliance through intergovernmental processes persists.

The military context often presents unique challenges and ethical dilemmas, and the first attempts at guidelines for the military are those from the RE-AIM Summit and the UN Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons.


Cybersecurity

Global negotiations for a UN cybercrime convention

Instruments and tools to combat cybercrime were high on the agenda of discussions. The negotiations about the possible UN cybercrime convention in the Ad Hoc Committee (AHC) are nearing the end, yet many open issues remain. While the mandate is clearly limited to cybercrime (broader mandate proposals, like the regulation of ISPs, were removed from the text), there is a need to precisely define the scope of the treaty. There is no commonly agreed-upon definition of cybercrime yet, and a focus on well-defined crimes that are universally understood across jurisdictions might be needed. 

There are calls to distinguish between cyber-dependent serious crimes (those that depend on a cyber element for their execution), such as terrorist attacks using autonomous cyberweapons, and cyber-enabled actions (conduct traditionally carried out in other environments but now also possible with the use of computers), such as online speech that may harm human rights. The treaty should also address safe havens for cybercriminals, since certain countries turn a blind eye to cybercrime within their borders, whether because of limited capacity to combat it or because of political or other incentives to ignore it.

Another major stumbling stone of the negotiations is how to introduce clear safeguards for human rights and privacy. Concerns are present over the potential misuse of the provision related to online content by authoritarian countries to prosecute activists, journalists, and political opponents. Yet the very decision-making process for adopting the convention – which requires consensus or, alternatively, a two-thirds majority vote – makes it unlikely that any provision curtailing human rights will be included in the final text.

The current draft includes explicit references to human rights and thus goes far beyond existing crime treaties (e.g. UNTOC and UNCAC). A highly regarded example of an instrument that safeguards human rights is the Cybercrime Convention (known as the Budapest Convention) of the Council of Europe, which requires parties to uphold the principles of the rule of law and human rights; in practice, judicial authorities effectively oversee the work of the law enforcement authorities (LEA).

One possible safeguard to mitigate the risks of misuse of the convention is the principle of dual criminality, which is crucial for evidence sharing and cooperation in serious crimes. The requirement of dual criminality for electronic evidence sharing is still under discussion in the AHC.

Other concerns related to the negotiations on the new cybercrime convention include the information-sharing provisions (whether voluntary or compulsory), how chapters in the convention will interact with each other, and how the agreed text will manage to overcome jurisdictional challenges to avoid conflicting interpretations of the treaty. Discussions about the means and timing of information sharing about cybersecurity vulnerabilities, as well as reporting and disclosure, are ongoing.

It also appears that a more robust capacity-building chapter and provisions for technical assistance are needed. Among other things, those provisions should enable collaborative capacities across jurisdictions and relationships with law enforcement agencies. The capacity-building initiative of the Council of Europe under the Budapest Convention can serve as an example (e.g. training judges on cybercrime).

The process of drafting the convention benefited from the deep involvement of expert organisations like the United Nations Office on Drugs and Crime (UNODC), the private sector, and civil society. It is widely accepted that strong cooperation among stakeholders is needed to combat cybercrime. 

The current draft introduces certain challenges for the private sector. Takedown demands, as well as placing the responsibility for defining and enforcing rules on freedom of speech on companies, generate controversy and debate within the private sector: Putting companies in an undefined space confronts them with jurisdictional issues and conflicts of law. Inconsistencies in approaches across jurisdictions and broad expectations regarding data disclosure without clear safeguards pose particular challenges; clear limitations on data access obligations are also essential.

What comes next for the negotiations? The new draft of the convention is expected to be published in mid-November, and one final negotiation session is ahead in 2024. After deliberations and approval by the AHC (by consensus or two-thirds voting), the text of the convention will need to be adopted by the UN General Assembly and opened for ratification. For the treaty to be effective, accession by most, if not all, countries is necessary. 

The success or failure of the convention depends on the usefulness of the procedural provisions in the convention text (particularly those relating to investigation, which are currently well-developed) and the number of states that ratify the treaty. Importantly, the success of the treaty implementation is also conditioned that it doesn’t impede existing functioning systems, such as the Budapest Convention, which has been ratified by 68 countries worldwide. An extended effect of a treaty would be the support given to cybercrime efforts by UN member states in passing related national bills. 

Digital evidence for investigating war crimes

A related debate developed around cyber-enabled war crimes, due to the recent decision by the International Criminal Court (ICC) prosecutor to investigate such cases. The Budapest Convention applies to any crime involving digital evidence, including war crimes (in particular Article 14 on war crime investigations, Article 18 on the acquisition of evidence from any service provider, and Article 26 on the sharing of information among law enforcement authorities). 

Of particular relevance is the development of tools and support to capture digital evidence, which could aid in the investigation and prosecution of war crimes. Some tech companies have partnered with the ICC to create a platform that serves as an objective system for creating a digital chain of custody and a tamper-proof record of evidence, which is critical for ensuring neutrality and preserving the integrity of digital evidence. The private sector also plays a role in collecting evidence: There are reports from multiple technology companies providing evidence of malicious cyber activities during conflicts. The Second Additional Protocol to the Budapest Convention offers a legal basis for disclosing domain name registration information and direct cooperation with service providers. At the same time, Article 32 of the Budapest Convention addresses the issue of cross-border access to data, but this access is only available to state parties.
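The tamper-evidence that such a platform provides can be illustrated with a very simple hash chain, in which each evidence record commits to the hash of the previous one, so that altering any earlier entry invalidates every later link. The sketch below is a conceptual toy, not the design of the ICC platform or of any vendor's product.

```python
# Conceptual sketch of a tamper-evident evidence log (hash chain).
# Each record stores the hash of the previous record, so any later edit
# to an earlier entry is detectable. Not an actual chain-of-custody product.
import hashlib
import json
from datetime import datetime, timezone

def add_record(chain: list, description: str, evidence_sha256: str) -> None:
    prev_hash = chain[-1]["record_hash"] if chain else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "evidence_sha256": evidence_sha256,  # hash of the evidence file itself
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or recomputed != record["record_hash"]:
            return False
        prev_hash = record["record_hash"]
    return True

chain = []
add_record(chain, "Satellite image of site A", "a1" * 32)   # placeholder file hashes
add_record(chain, "Intercepted network logs", "b2" * 32)
print("Chain intact:", verify(chain))   # True
chain[0]["description"] = "tampered"
print("Chain intact:", verify(chain))   # False: tampering detected
```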

Other significant sources of evidence are investigative journalism and open source intelligence (OSINT) – like the Bellingcat organisation – which uncover war crimes and gross human rights violations using new tools, such as the latest high-resolution satellite imagery. OSINT should be considered an integral part of the overall chain of evidence in criminal investigations, yet such sources should be integrated within a comprehensive legal framework. Article 32 of the Budapest Convention, for example, is already a powerful tool for member states to access OSINT from both public and private domains, with consent. Investigative journalism plays a role in combating disinformation and holding those responsible for war crimes accountable.

Yet, the credibility and authenticity of such sources’ evidence can be questioned. Technological advancements, such as AI, have enabled individuals, states, and regimes to easily manipulate electronic data and develop deepfakes and disinformation. When prosecuting cybercrime, it is imperative that evidence be reliable, authentic, complete, and believable. Related data must be preserved, securely guarded, protected, authenticated, verified, and available for review to ensure its admissibility in trials. The cooperation of state authorities could lead to the development of methodologies for verifying digital evidence (e.g. the work of the Global Legal Action Network).


Human rights

Uniting for human rights

‘As the kinetic physical world in which we exist recedes and the digital world in which we increasingly live and work takes up more space in our lives, we must begin thinking about how that digital existence should evolve.’ This quotation, published in a background paper to the session on Internet Human Rights: Mapping the UDHR to cyberspace, succinctly captures one of the central issues of our age. 

The world today is witnessing a concerning trend of increasing division and isolationism among nations. Ironically, global cooperation and governance, the very reasons for IGF 2023, are precisely what we need to promote and safeguard human rights. 

At the heart of yesterday’s main session on Upholding human rights in the digital age was the recognition that human rights should serve as an ethical compass in all aspects of internet governance and the design of digital technologies. But this won’t happen on its own: We need collective commitment to ensure that human rights are at the forefront of the evolving digital landscape, and we need to be deliberate and considerate in shaping the rules and norms that govern it.

The Global Digital Compact framework could promote human rights as an ethical compass by providing a structured and collaborative platform for stakeholders to align their efforts towards upholding human rights in the digital realm. 

The IGF also plays a crucial role in prioritising human rights in the digital age by providing a platform for diverse perspectives, grounding AI governance in human rights, addressing issues of digital inclusion, and actively engaging with challenges like censorship and internet resilience.

Capitalist surveillance

In an era dominated by technological advancements, the presence of surveillance in our daily lives is pervasive, particularly in public spaces. Driven by a need for heightened security measures, governments have increasingly deployed sophisticated technologies, such as facial recognition systems. 

As yesterday’s discussion on private surveillance showed, citizens also contribute to our intricate web of interconnected surveillance networks: Who can blame the neighbours if they want to monitor their street to keep it safe from criminal activity? After all, surveillance technologies are affordable and accessible. And that’s the thing: A parallel development that’s been quietly unfolding is the proliferation of private surveillance tools in public spaces. 

These developments require a critical examination of their impact on privacy and civil liberties, and on issues related to consent, data security, and the potential for misuse. Most of us are aware of these issues, but the involvement of private companies in surveillance introduces a new layer of complexity. 

Unlike government agencies, private companies are often not subject to the same regulations and transparency requirements. This can lead to a lack of oversight and transparency regarding how data is collected, stored, and used. 

Additionally, the potential for profit-driven motives may incentivise companies to push the boundaries of surveillance practices, potentially infringing on individuals’ privacy rights. It’s not like we haven’t seen this before.


Ensuring ethical data practices

The exploitation of personal data without consent is ubiquitous. Experts in the session Decolonise digital rights: For a globally inclusive future drew parallels to colonial practices, highlighting how data is used to control and profit. This issue is not only a matter of privacy but also an issue of social justice and rights. 

When it comes to children, privacy is not just about keeping data secure and confidential but also about questioning the need to collect and store their data in the first place. This means that the best way to check whether a user accessing an online service is underage is to use pseudonymous credentials and pseudonymised data. Given the wave of new legislation requiring more stringent age verification measures, there’s no doubt that we will be discussing this issue much more in the coming weeks and months.
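To make the idea of pseudonymised data concrete, here is a deliberately simplified sketch: the service keeps only a keyed hash of the user identifier and the minimal age claim it needs, rather than the raw identity or date of birth. The key handling and the age check itself are hypothetical simplifications, not a description of any real age-assurance system.

```python
# Simplified illustration of pseudonymisation for an age gate: the service
# stores a keyed hash of the user identifier plus a minimal over/under-age
# claim, not the raw identity or date of birth. Key management and the actual
# age verification step are out of scope here.
import hashlib
import hmac
import secrets

SERVICE_KEY = secrets.token_bytes(32)  # in practice, a securely managed secret

def pseudonymise(user_id: str) -> str:
    return hmac.new(SERVICE_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# What the service retains: pseudonym -> minimal claim needed for the age gate.
records = {
    pseudonymise("alice@example.org"): {"is_over_13": True},
    pseudonymise("bob@example.org"): {"is_over_13": False},
}

def may_access(user_id: str) -> bool:
    claim = records.get(pseudonymise(user_id))
    return bool(claim and claim["is_over_13"])

print(may_access("alice@example.org"))  # True
print(may_access("bob@example.org"))    # False
```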

Civil society is perhaps best placed to hold companies accountable for their data protection measures and to keep governments in check over their efforts to keep children safe. Yet we sometimes forget to involve the children themselves in shaping policies related to data governance and their digital lives.

Hence, the suggestion of involving children in activities such as data subject access requests. This can help them comprehend the implications of data processing. It can also empower them to participate in decision-making processes and contribute to ensuring ethical and responsible data practices. After all, the experts argue, many children’s level of awareness and concern about their privacy is comparable to that of adults.


Development

Digital technologies and the environment

The pandemic clearly showed the intricate connection between digital technologies and the environment. Although the lower use of gasoline-powered vehicles led to a decrease in CO2 emissions during lockdowns, isolation also triggered a substantial increase in internet use due to remote work and online activities, giving rise to concerns about heightened carbon emissions from increased digital activity. 

Data doesn’t lie (statisticians do), and the data has confirmed the dual impact of digital technologies: While these technologies contribute 1% to 5% of greenhouse gas emissions and consume 5% to 10% of global energy, they also have the potential to cut emissions by 20% by 2030.

To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about what sustainable sources we have available and establish standards for their use.

While progress is being made, there’s a pressing need for consistent international standards that consider environmental factors for digital resources. Initiatives such as the work of the Institute of Electrical and Electronics Engineers (IEEE) on technology standards and ethical practices, particularly in relation to AI and its environmental impact, and the collaboration between GIZ, the World Bank, and the ITU on standards for green data centres highlight how crucial global cooperation is for sustainable practices. 

Currently, over 90% of global enterprises are small or medium-sized, contributing over 50% of the world’s GDP, yet they lack the necessary frameworks to measure their carbon footprint, which is a key step in enabling their participation in the carbon economy in a real and verifiable way. 

Inclusion of people with disabilities

There’s no one-size-fits-all solution when it comes to meeting the needs of people with disabilities (PWD) in the digital space. First of all, the enduring slogan ‘nothing about us without us’ must be respected. Accessibility-by-design standards like the Web Content Accessibility Guidelines (WCAG) 2 are easily available through the W3C Accessibility Standards Overview. Although accessibility accommodations require tailored approaches to address the specific needs of both groups and individuals, standards offer a solid foundation to start with. 

The inclusion of people with disabilities should extend beyond technical accessibility to include the content, functionality, and social aspects of digital platforms. The stigma PWD face in online spaces needs to be addressed by implementing policies that create a safe and inclusive online environment. 

Importantly, we must take advantage of the internet governance ecosystem to ensure that

  • We support substantial representation from the disability community in internet governance discussions, beyond discussions on disabilities.
  • We stress the importance of making digital platforms accessible to everyone, no matter their abilities or disabilities, using technology and human empowerment.
  • We provide awareness-raising workshops for those unaware of the physical, mental, and cognitive challenges others might be facing, including those of us who live with one disability but may not understand the challenges others face.
  • We provide skills and training to effectively use available accommodations to overcome our challenges and disabilities.
  • We make training and educational opportunities available for persons with disabilities to be involved in the policymaking processes that affect us, so that the resulting improvements make the internet and the digital world better for everyone.
  • We support research to continue the valuable scientific improvements made possible by emerging technologies and digital opportunities.
A person in a black suit sits in a wheelchair in front of a computer desk. They are wearing a virtual reality headset and gesturing with their arms and hands.

Sociocultural

The public interest and the internet 

The internet is widely regarded as a public good with a multitude of benefits. Its potential to empower communities by enabling communication, information sharing, and access to valuable resources was appreciated. However, while community-driven innovation coexists with corporate platforms, the digital landscape is primarily dominated by private, for-profit giants like Meta and X. 

This dominance is concerning, particularly because it risks exacerbating pre-existing wealth and knowledge disparities, compromising privacy, and fostering the proliferation of misinformation.

This duality in the internet’s role demonstrates its ability to both facilitate globalisation and centralise control, possibly undermining its democratic essence. The challenge is even greater when considering that efforts to create a public good internet often lack inclusivity, limiting the diversity of voices and perspectives in shaping the internet. Furthermore, digital regulations tend to focus on big tech companies, often overlooking the diverse landscape of internet services. 

To foster a public good internet and democratise access, there is a need to prioritise sustainable models that serve the public interest. This requires a strong emphasis on co-creation and community engagement. This effort will necessitate not only tailoring rules for both big tech and small startup companies but also substantial investments in initiatives that address the digital divide and promote digital literacy, particularly among young men and women in grassroots communities, all while preserving cultural diversity. Additionally, communities should have agency in determining their level of interaction with the internet. This includes enabling certain communities to meaningfully use the internet according to their needs and preferences.

Disinformation and democratic processes 

In the realm of disinformation, we are witnessing new dynamics: an expanded cast of individual and group actors responsible for misleading the public, and the increasing involvement of politics and politicians. 

Addressing misinformation in this fast-paced digital era is surely challenging, but not impossible. For instance, Switzerland’s resilient multi-party system was cited to illustrate how it can resist the sway of disinformation in elections. And while solutions can be found to limit the spread of mis- and dis-information online, they need to be put in place with due consideration to issues such as freedom of expression and proportionality. The Digital Services Act (DSA) – adopted in the EU – is taking this approach, although concerns were voiced about its complexity.

A UN Code of Conduct for information integrity on digital platforms could contribute to ensuring a more inclusive and safe digital space, contributing to the overall efforts against harmful online content. However, questions arose about its practical implementation and the potential impacts on freedom of expression and privacy due to the absence of shared definitions.

Recognising the complexity of entirely eradicating disinformation, some argued for a more pragmatic approach, focusing on curbing its dissemination and minimising the harm caused, rather than seeking complete elimination. A multifaceted approach that goes beyond digital platforms and involves fact-checking initiatives and nuanced regulations was recommended. Equally vital are efforts in education and media literacy, alongside the collection of empirical evidence on a global scale, to gain a deeper understanding of the issue.

Tiles with random letters surround five tiles lined up in a row to spell the word ‘FACTS’ on a pink background.

Infrastructure

Fragmented consensus

Yesterday’s discussions on internet fragmentation built on those of the previous days. Delving into diverse perspectives on how to prevent the fragmentation of the internet is inherently valuable. But when there’s an obvious lack of consensus on even the most fundamental principles, it underlines just how critical the debate is.

For instance, should we focus mostly on the technical aspects, or should we also consider content-related fragmentation – and which of these are the most pressing to address? If misguided political decisions pose an immediate threat, should policymakers take a backseat on matters directly impacting the internet’s infrastructure?

Pearls of wisdom shared by experts in both workshops – Scramble for internet: You snooze, you lose and Internet fragmentation: Perspectives & collaboration – offer a promising bridge to close this gap in strategy.

One of these insights emphasised the need to distinguish content limitations from internet fragmentation. Content restrictions, like parental controls or constraints on specific types of content, primarily pertain to the user experience rather than the actual fragmentation of the internet. Labelling content-level limitations as internet fragmentation could be misleading and potentially detrimental. Such a misinterpretation might catalyse a self-fulfilling prophecy of a genuinely fragmented internet.

Another revolved around the role of governments, in some ways overlapping with content concerns. There’s apprehension that politicians might opt to establish alternate namespaces or a second internet root, thereby eroding the internet’s singularity and coherence. If political interests start shaping the internet’s architecture, it could culminate in fragmentation and potentially impede global connectivity. And yet, governments have been (and still are) essential in establishing obligatory rules affecting online behaviour when other voluntary measures proved insufficient. 

A third referred to the elusive nature of the concept of sovereignty. Although a state holds the right to establish its own rules, should this extend to something inherently global like the internet? The question of sovereignty in the digital age, especially in the context of internet fragmentation, prompts us to reevaluate our traditional understanding of state authority in a world where boundaries are increasingly blurred by the whirlwinds of silt raised as governments search for survey markers in the digital realm.

Network with pins

Economic

Tax rules and economic challenges for the Global South

Over the years, the growth of the digital economy – and how to tax it – has led to major concerns over the adequacy of tax rules. In 2021, over 130 countries came together to support the OECD’s new two-pillar solution. In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.

Despite significant improvements in tax rules, developing countries feel that these measures alone are insufficient to ensure tax justice for the Global South. First, these models are based on the principle that taxes are paid where profits are generated. This principle does not consider the fact that many multinational corporations shift profits to low-tax jurisdictions, depriving countries in the Global South of their fair share of tax revenue. Second, the two frameworks do not directly address the issue of tax havens, which are often located in the Global North. Third, the OECD and UN models do not take into account the power dynamics between countries in the Global North (which has historically led international tax policymaking) and the Global South. 

Yesterday’s workshop on Taxing Tech Titans: Policy options for the Global South discussed policy options accessible to developing countries. 

Countries in the Global South have adopted various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services. That’s not to say that they’ve all been effective: Uganda’s experience with taxing digital services, for instance, had unintended negative consequences. In addition, unilateral measures without a global consensus-based solution can lead to trade conflicts.

So what would the experts advise their countries to do? Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious: ‘Wait and see, and sign up later,’ a concluding remark suggested.

Tax word on wooden cubes on the background of dollar banknotes. Tax payment reminder
Diplo/GIP at the IGF

Reporting from the IGF: AI and human expertise combined

We’ve been hard at work following the IGF and providing just-in-time reports and analyses. This year, we leveraged both human expertise and DiploAI in a hybrid approach that consists of several stages:

  1. Online real-time recording of IGF sessions. Initially, our recording team set up an online recording system that captured all sessions at the IGF. 
  2. Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker made which contribution is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
  3. AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session as well as for individual speakers. These graphical representations intricately mapped the connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis (see the knowledge graph from Day 1 at IGF2023).
Line drawing of an intricate web of fine, coloured lines and nexuses.
  4. Writing dailies. To conclude the reporting process, our team of analysts used AI-generated reports to craft comprehensive daily analyses. 

You can see the results of that approach on our dedicated page.
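For the technically curious, here is a minimal Python sketch of what stage 2 – splitting a raw transcript by speaker before any AI analysis – might look like. The transcript snippet, the speaker names, and the split_by_speaker helper are invented for illustration; they are not DiploAI’s actual tooling.

```python
from collections import defaultdict

# Hypothetical raw transcript: each line is prefixed with 'Speaker name:'.
raw_transcript = """\
Moderator: Welcome to the session on internet fragmentation.
Panellist A: Content restrictions are a user-experience issue, not fragmentation.
Panellist B: Alternate internet roots would erode the network's coherence.
Panellist A: Labelling content limits as fragmentation risks a self-fulfilling prophecy."""

def split_by_speaker(transcript: str) -> dict[str, list[str]]:
    """Group transcript lines by speaker so each contribution can be analysed separately."""
    contributions = defaultdict(list)
    for line in transcript.splitlines():
        if ":" in line:
            speaker, text = line.split(":", 1)
            contributions[speaker.strip()].append(text.strip())
    return dict(contributions)

for speaker, statements in split_by_speaker(raw_transcript).items():
    print(f"{speaker}: {len(statements)} contribution(s)")
```

Grouping contributions this way is what makes the later, speaker-level analysis possible.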

One part of Diplo’s Belgrade team at work. Does that clock say 2:30 a.m.? Yes, it does.
A photo collage of tourist sights around Tokyo frames a photo of the DiploTeam eating at a restaurant, with the comment: ‘Diplo Crew at IGF2023’.
A part of our team attended the IGF in situ and participated in sessions as organisers, moderators and speakers. Here they are, on their last evening in Kyoto (above).

IGF 2023 – Daily 3


IGF Daily Summary for

Tuesday, 10 October 2023

Dear reader, 

On Day 2, the IGF got into full swing with intense debates in conference rooms and an invigorating buzz in the corridors. Inspiring local flavours permeated the debates, from the impact of digitalisation on Japan’s treasured manga culture to Jovan Kurbalija’s parallels between the Kyoto School of Philosophy and AI governance.


After the formalities and protocol of the first day, there was less of the usual diplomatic ‘language’ and more in the way of new insights and controversies. AI remains a prominent topic, and while the familiar AI lingo continues, it was refreshing to see increased clarity in the thinking about AI, away from the prevailing hype and fear-mongering in the media.

The quality of debate was increased by viewing AI from various perspectives, from technology and standardisation to human rights and cybersecurity. While acknowledging the reality of different interests and powers, the debate on AI brought the often-missing insight that all countries face similar challenges in governing AI. For this reason, focusing on human-centred AI may help reduce geopolitical tensions in this field.

The Global Digital Compact (GDC) triggered intense and constructive debate. While there is overwhelming support for the GDC as the next step in developing inclusive digital governance, the focus on details is increasing, including the role of the IGF in implementing the GDC and preserving the delicate balance between the multilateral negotiations of the GDC and multistakeholder participation. This year, the IGF also intensified academic debate with policy implications on the difference between internet and ‘digital’. 

Further down in this summary, you can find more on, among other things, internet fragmentation, cybersecurity, content moderation, and digital development. 

You can also read more on an exciting initiative using AI to preserve the rich knowledge that the IGF has generated since its first edition in Athens in 2006.

We wish you inspiring discussions and interesting exchanges on the third day of the IGF in Kyoto!

The Digital Watch Team

A rapporteur writes a report on a laptop while observing a dynamic panel discussion

Do you like what you’re reading? Bookmark us at https://dig.watch/igf2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


The highlights of the discussions

The day’s top picks

  • AI: Increasing clarity in debates
  • Japan: Manga, Kyoto philosophers and digital governance
  • The GDC: Overall support and discovering the ‘devils in the details’
  • The IGF itself: Using AI to preserve the rich knowledge treasure of the IGF

Artificial Intelligence

Refreshing clarity in AI debates


We want AI, but what does that mean? Today’s main session brought refreshing clarity to the discussion on the impact of AI on society. It moved from the cliché of AI ‘providing opportunities while triggering threats’ to more substantial insights. The spirit of this debate starkly contrasted with the rather alarming hype about existential human threats that AI has triggered.

AI was compared to electricity, with the suggestion that AI is becoming similarly pervasive and requires global standards and regulations to ensure its responsible implementation. The discussion recognised AI as a critical infrastructure. 


A trendy analogy comparing AI governance to the International Atomic Energy Agency (IAEA) was criticised on the grounds that there are more differences than similarities between AI and nuclear energy.

While we wait for new international regulations to be developed, a wide range of actors could adopt voluntary standards for AI. For instance, UNICEF uses a value-based design approach developed by the Institute of Electrical and Electronics Engineers (IEEE).

The private sector’s involvement in AI governance is both inevitable and indispensable. It must therefore be transparent, open, and trustworthy. Currently, this is not the case. However, the representative of OpenAI noted the recent launch of the Red Teaming Network as an industry attempt to be more open and inclusive. Other examples are the LEGO Group’s implementation of measures to protect children in their online and virtual environments and the UK Children’s Act.

Calls were made for national and regional attempts to advance local-context AI governance, as in Mauritius, Kenya and Egypt, which are taking steps towards national policies. In Latin America, challenges also arise from unique regional contexts, global power dynamics, and the intangible nature of AI. 

AI and human rights 

Children’s safety and rights were the focus of a workshop organised by UNICEF. AI has already entered classrooms, but without clearly defined criteria for responsible integration. It is already clear that there are benefits: The innovative use of AI for bridging cultural gaps heralds a new era of global connectedness, and it can support fair assessments. Going further, according to Honda, its HARU robot can provide emotional support to vulnerable children, and AI can fill gaps in severely under-resourced mental healthcare systems, such as in Nigeria, where an ‘Autism VR’ project is increasing awareness, promoting inclusion, and supporting neurodiverse children. 

However, a note of caution was also sounded: the future of education lies in harnessing technology’s potential while championing inclusivity, fairness, and global collaboration. Some solutions are: integrating educators in the research process, adopting a participatory action approach, involving children from various cultural and economic backgrounds, and recognising global disparities, given that AI datasets insufficiently capture the experiences of children from the Global South.

Human rights approaches carried weight today, echoing in the workshop on a Global human rights approach to responsible AI governance. The discussion highlighted the ‘Brussels effect’, wherein EU regulations become influential worldwide. Countries with more robust regulatory frameworks tend to shape AI governance practices globally, emphasising the implications of rules beyond national borders. In contrast, as some observers noted, in Latin America, the regional history of weak democratic systems has generated a general mistrust towards participation in global processes, hindering the region’s engagement in AI governance. Yet, Latin America provides raw materials, resources, data, and labour for AI development, while the tech industry aggressively pushes for regional AI deployment in spite of human rights violations. The same can be said for Africa. 

To address these challenges, it is necessary to strengthen democratic institutions and reduce regional asymmetries, keeping in mind that human rights should represent the voice of the majority. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and capacity development are essential. 

AI and standardisation

It appears that regional disparities plague standardisation efforts as well. Standardisation is indispensable for linkages between technology developers, policy professionals, and users. Yet, as the workshop Searching for standards: The global competition to govern AI noted, silos remain problematic, isolating developers and policymakers or providers and users of the technology. The dominance of advanced economies as providers of AI tech, their heavy guarding of intellectual property rights, and early standard-setting have led to situations where harms are predominantly framed through a Global North lens, at the cost of impacts on users, usually in the Global South. 

As a potential way of opening the early standard-setting game, open-source AI models support developing countries by offering immediate opportunities for local development and involvement in the evolving ecosystem. There is, however, a need for technical standards for AI content, with watermarking proposed as a potential standard. 
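To give a flavour of what a watermarking standard for AI-generated text could involve, here is a deliberately simplified Python sketch of the ‘green-list’ family of statistical watermarks; the hashing scheme, toy vocabulary, and threshold are illustrative assumptions, not a description of any deployed or standardised system. The idea: a generator keyed on the previous token biases its choices towards a ‘green’ subset of the vocabulary, and a detector later checks whether suspiciously many tokens fall into that subset.

```python
import hashlib
import math

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically pick a 'green' subset of the vocabulary, keyed on the previous token."""
    def rank(token: str) -> str:
        return hashlib.sha256(f"{prev_token}|{token}".encode()).hexdigest()
    return set(sorted(vocab, key=rank)[: int(len(vocab) * fraction)])

def watermark_zscore(tokens: list[str], vocab: list[str], fraction: float = 0.5) -> float:
    """Count how many tokens fall in their 'green' set and compare the tally with chance."""
    n = len(tokens) - 1
    if n <= 0:
        return 0.0
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_set(prev, vocab, fraction))
    expected = n * fraction
    std = math.sqrt(n * fraction * (1 - fraction))
    return (hits - expected) / std

# A consistently high z-score (say, above 4) would suggest watermarked, machine-generated text.
vocab = ["the", "internet", "governance", "forum", "ai", "policy", "data", "rights"]
sample = ["the", "internet", "governance", "forum", "ai", "policy"]
print(round(watermark_zscore(sample, vocab), 2))
```

Any real standard would, of course, have to settle far harder questions, from robustness to paraphrasing to interoperability across providers.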

AI and cybersecurity

The use of AI in cybersecurity provides numerous opportunities, as noted during the workshop on AI-driven cyber defence: Empowering developing nations. The discussion centred on the positive role of AI in cybersecurity, emphasising its potential to enhance protection rather than pose threats. One example is AI’s effectiveness in identifying fake accounts and inauthentic behaviour.  
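As a rough illustration of the kind of technique behind such claims – and not a description of how any platform actually does it – the sketch below uses scikit-learn’s IsolationForest to flag accounts whose behavioural features look anomalous. The features and the contamination threshold are invented for the example.

```python
# Flagging potentially inauthentic accounts as behavioural outliers (requires scikit-learn).
from sklearn.ensemble import IsolationForest

accounts = [
    # [posts_per_day, follower_following_ratio, account_age_days] - illustrative features only
    [3, 1.2, 900],
    [5, 0.8, 420],
    [2, 1.5, 1300],
    [250, 0.01, 3],   # bursty, brand-new account with almost no followers
    [4, 1.1, 700],
]

model = IsolationForest(contamination=0.2, random_state=0).fit(accounts)
labels = model.predict(accounts)  # 1 = looks normal, -1 = flagged as anomalous

for features, label in zip(accounts, labels):
    print(features, "->", "flag for review" if label == -1 else "ok")
```

In practice, flagged accounts would go to human review rather than being removed automatically.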

Collaboration and open innovation were emphasised as critical factors for AI cybersecurity. Keeping AI accessible to experts in other areas helps prevent misuse, and policymakers should incentivise open innovation. 

A person’s finger touches a digital fingerprint icon on an interlocking network of digital functions represented by icons connected to AI.

Unlocking the IGF’s knowledge using AI

Yesterday, Diplo and the GIP supported the IGF Secretariat in organising a side session on how to unlock the IGF’s knowledge to gain AI-driven insights for our digital future. The immense amount of data accumulated through the IGF over 18 years – a public good that belongs to all stakeholders – presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process. In this way, this wealth of knowledge can be utilised more effectively to contribute to the SDGs.

Jovan Kurbalija smiles and reaches for the microphone on a panel at the IGF.

AI can enhance human capabilities in support of the IGF’s mission. It can generate interactive reports from sessions (as it does at IGF2023), with detailed breakdowns by speaker and topic, narrative summaries, and discussion points. Such a tool can codify and translate the arguments presented during sessions, identify and label key topics, and develop a comprehensive knowledge graph. It can connect and compare discussions across different IGF sessions, identify commonalities, link related topics, and facilitate a more comprehensive understanding of the subject matter, as well as associate relevant SDGs with the discussions. 

AI can mitigate the challenge of the crowded schedule of IGF sessions by establishing links to similar discussions and sessions from past years, which enables better coordination and consolidation of related themes over the course of years and meetings. Ultimately, AI can help us visualise hours of discussions and thousands of pages of transcripts in the format of a knowledge graph, as done in Diplo’s experiment with daily debates at this year’s IGF (see below).

An intricate multicoloured lace network of lines and nexuses representing a knowledge graph of Day 0 of IGF2023
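For readers wondering what building such a knowledge graph involves in practice, here is a minimal, self-contained Python sketch – not DiploAI’s actual implementation – that links speakers to the topics they addressed and surfaces topics shared across sessions. The (speaker, topic, session) triples are invented and would, in a real pipeline, come from an AI tagging step.

```python
from collections import defaultdict

# Hypothetical extracted (speaker, topic, session) triples.
triples = [
    ("Speaker A", "internet fragmentation", "Day 2 main session"),
    ("Speaker B", "AI governance", "Day 2 main session"),
    ("Speaker C", "AI governance", "UNICEF workshop"),
    ("Speaker C", "children's rights", "UNICEF workshop"),
]

# Build a simple bipartite graph: topic -> speakers and topic -> sessions.
topic_speakers = defaultdict(set)
topic_sessions = defaultdict(set)
for speaker, topic, session in triples:
    topic_speakers[topic].add(speaker)
    topic_sessions[topic].add(session)

# Topics discussed in more than one session are natural links across the IGF programme.
for topic, sessions in topic_sessions.items():
    if len(sessions) > 1:
        print(f"'{topic}' connects {sorted(sessions)} via {sorted(topic_speakers[topic])}")
```

Scaled up to many thousands of such triples, a structure like this is what a session-level knowledge graph visualises.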

AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them. 

Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not a ‘given’ in our time, the IGF’s contribution could be much broader, far beyond the focus on internet governance per se.

Digital governance processes

GDC in the spotlight

While it may look like AI is the one and only most popular topic at this year’s IGF, there is at least one more thing on many participants’ minds: the much anticipated Global Digital Compact (GDC). It’s no surprise, then, that a main session was dedicated to it. If this is the first time you are reading about the GDC (we strongly doubt it), we invite you to familiarise yourself with it on this process page before moving on. 

If you know what the GDC is, then you most likely also know that one sore point in discussions so far has concerned the process itself: While the GDC is expected to be an outcome of the multilateral 2024 Summit of the Future, many state and non-state actors argue that there should be multistakeholder engagement throughout the GDC development process. But, as highlighted during yesterday’s main session, balancing multilateral processes and multistakeholder engagement is indeed a challenge. How to address this challenge seems to remain unclear, but for the time being, stakeholders are encouraged to engage with their member states to generate greater involvement. 

And speaking of multistakeholderism, one expectation (or rather a wish) that some actors have for the GDC is that it will recognise, reinforce, and support the multistakeholder model of internet governance. Another expectation is that the GDC will establish clear linkages with existing processes while avoiding duplication of efforts and competition for resources. For instance, it was said during the session that the IGF itself should have a role in implementing the GDC principles and commitments and in the overall GDC follow-up process. 

Beyond issues of process and focus, one particularly interesting debate has picked up momentum within the GDC process: whether and to what extent internet governance and digital governance/digital cooperation are distinct issues. Right now, there are arguments on both sides of the debate. Please contribute your views to the survey on the internet vs. digital debate.


Multistakeholder voices in cyber diplomacy                                               

The IGF is, by nature, a multistakeholder space, but many other digital governance spaces struggle with how to define stakeholder engagement. This was highlighted in the session Stronger together: Multistakeholder voices in cyber diplomacy, where many participants called for enhanced stakeholder participation in policy-making and decision-making processes related, in particular, to cybersecurity, cybercrime, and international commerce negotiations.

The non-governmental stakeholders’ perspective is essential for impactful outcomes, transparency, and credibility. The absence of their input not only results in the loss of valuable perspectives and expertise, but also undermines the legitimacy and effectiveness of the policies and decisions made. Moreover, collaboration between state and non-state stakeholders can also be seen as mutually beneficial. Multistakeholder involvement could aid governments in the gathering of diverse ideas during negotiations and decision-making processes related to digital issues. 

However, as the session on enhancing the participation and cooperation of CSOs in/with multistakeholder IG forums noted, civil society organisations, especially from the Global South, face barriers to entry into global multistakeholder internet governance spaces, and need increased capacity building, transparency in policy processes, and spaces that allow for network building and coordination if they are to engage impactfully in global multistakeholder internet governance processes.

One approach to solving the conundrum of multistakeholder engagement in intergovernmental processes was proposed: implementing a policy on stakeholder participation. Such a policy, it was said, would transform stakeholder involvement into an administrative process, ensuring that all perspectives are consistently considered and incorporated into policy-making.

People in business dress and holding laptop computers converse in a hallway

Infrastructure

Turning back the tide on internet fragmentation

The concerned words of Secretary-General Antonio Guterres at the start of the 78th UN General Debate still echo in the minds of many of us. ‘We are inching ever closer to a great fracture in economic and financial systems and trade relations,’ he told world leaders, ‘one that threatens a single, open internet with diverging strategies on technology and AI, and potentially clashing security frameworks.’

Those same concerns were raised within the halls of the IGF yesterday. In one of the workshops, experts tried to foresee the internet in 20 years’ time: The path we’re on today, mired with risks, does not bode well for the internet’s future. In a second workshop, experts looked at the different dimensions of fragmentation – fragmentation of the user experience, that of the internet’s technical layer, and fragmentation of internet governance and coordination (explained in detail in this background paper) – and the consequences they all carry. In a third workshop, experts looked at the technical community’s key role in the evolution of the internet and how they can best help shape the future of the internet.

The way we imagine the future of the internet might vary in detail. Still, the core issue is the same: If we don’t act fast, the currently unified internet will come precariously closer to fragmenting into blocs. 

It could be the beginning of the end of the founding vision of the free, open, and decentralised internet, which shaped its development for decades. We need to get back to the values and principles that shaped the internet in its early days if we are to recover those possibilities. These values and principles underpin the technical functioning of the internet, and ensure that the different parts of the internet are interconnected and interoperable. Protecting the internet’s critical properties is crucial for global connectivity.

As risks increase, we shouldn’t lose sight of the lasting positive aspects either. The internet has been transformational; it has opened the doors for instantaneous communication; its global nature has enabled the free flow of ideas and information and created major opportunities for trade. 

A swift change to the current state of affairs (undoubtedly affecting the online space) is forthcoming, The Economist argued recently. But if we want to be more proactive, there are plenty of spaces that can help us understand and mitigate the risks (one of which is the Global Digital Compact). Perhaps this will also give us the space to renew our optimism in technology and its future.

Debate on ‘fair share’ heats up

Internet traffic has increased exponentially, prompting telecom operators to request that tech companies contribute their fair share to maintain the infrastructure. In Europe, this issue is at the centre of a heated debate. Just a few days ago, 20 CEOs from most of Europe’s largest telecom companies called on lawmakers to introduce new rules. 

Yesterday, one lively discussion during an IGF workshop tackled this very question: whether over-the-top (OTT) service providers (e.g. Netflix, Disney Plus) should contribute to the costs associated with managing and improving the infrastructure. While the debate isn’t new, there were at least two points raised by experts that are worth highlighting:

  • Instead of charging network fees, ISPs could partner with OTT providers in profit-sharing agreements. 
  • It might be better if governments are left out of this debate. Instead of imposing new regulations, governments could encourage cooperation between companies. This seems to be an approach actively embraced by the Republic of Korea.

Development

Digital and the environment

The hottest summer ever recorded on Earth is behind us: June, July, and August 2023 were the hottest three months ever documented, World Meteorological Organisation (WMO) data shows. The discussion of the overall impact of digital technologies on the environment at the IGF, therefore, came as no surprise.

Internet use comes with a hefty energy bill, even for seemingly small things like sending texts – it gobbles up data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions. The staggering number of devices in use globally (over 6.2 billion) means frequent charging and significant energy consumption. Some of these devices also perform demanding computational tasks that require substantial power, further compounding the issue. Moreover, the rapid pace of electronic device advancement and devices’ increasingly shorter lifespans have exacerbated the problem of electronic waste (e-waste).

There are, however, a few things we can do. For instance, we can use satellites and high-altitude connectivity devices to make the internet more sustainable. We can take the internet to far-off places using renewable energy sources, like solar power. And crucially, if we craft and implement policies right from the inception of a technology, we can create awareness among start-up stakeholders about its carbon footprint. We can also leverage AI to optimise electrical supply and demand and reduce energy waste and greenhouse gas emissions, which, together, might even generate more reliable and optimistic projections of climate change.

A modern, white, three-bladed windmill stands in a field of green plants, against a blue sky.

Broadband from space

The latest data from ITU shows that approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access. One of the ways to bring more people online is by using Low Earth Orbit (LEO) satellites – think Starlink – to provide high-speed, low-latency internet connectivity. Another important element here is libraries, which often incorporate robotics, 3D printing, and Starlink connections, enabling individuals to engage with cutting-edge innovations.

There are, however, areas of concern regarding LEO satellites. It could be (is) technically challenging to launch LEO satellites on a large scale. Their environmental impact, both during their launch and eventual disposal in the upper atmosphere, is unclear. For some communities, the cost of using such services might be too high. Additionally, satellites are said to cause issues for astronomical and scientific observations. 

To fully harness the potential of these technologies, countries must re-evaluate and update their domestic regulations related to licensing and authorising satellite broadband services. Additionally, countries must be aware of international space law and its implications to make informed decisions. Active participation in international decision-making bodies, such as ITU and the UN Committee on Peaceful Uses of Outer Space (COPUOS), is crucial for shaping policies and regulations that support the effective deployment of these technologies. 

By doing so, countries can unlock the benefits of space-based technologies and promote the uninterrupted provision of wireless services on a global scale.

Starlink satellite dish on the roof of residential building

Accessible e-learning for persons with disabilities (PWD)

The accessibility challenges in e-learning platforms pose substantial hardships for people with disabilities, both physical and cognitive. Unfortunately, schools frequently fail to acknowledge or address the difficulties associated with online resource access with the immediacy they need and deserve. Those uninformed about and inexperienced with the obstacles of cognitive impairments often regard these issues as insignificant. This lack of awareness compounds the problem, leaving students with disabilities, especially those with cognitive impairments, to silently wrestle with these issues, a workshop on accessible e-learning experience noted.

Some solutions identified are: 

  • Involving users with disabilities in the development process of e-learning platforms
  • Integrating inclusion into everyday practice in educational institutions 
  • Implementing proactive measures and proper benchmarking and assessment tools to effectively address digital inclusion
  • Collaborating globally to make e-learning more accessible

Human rights

Digital threats in conflict zones

With all that’s going on in the Middle East, we can’t help but wonder how digital threats and misinformation are negatively affecting the lives of civilians residing in conflict zones. This issue was tackled in three workshops yesterday – Encryption’s role in safeguarding human rights, Safeguarding the free flow of information amidst conflict, and Current developments in DNS privacy.

In modern conflicts, digital attacks are not limited to traditional military targets. Civilians and civilian infrastructures, such as hospitals, power grids, and communications networks, are also at risk. In addition, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians not directly involved in the conflict.

The blurred lines between civilian and military targets in the digital realm have other far-reaching implications for trust and safety. They affect the credibility of humanitarian organisations, the provision of life-saving services, the psychological well-being of civilians, and their access to essential information.

Experts advocated a multi-faceted approach to address digital threats and misinformation in conflict zones. This included building community resilience, collaborating with stakeholders, enforcing policies, considering legal and ethical implications, and conducting thorough due diligence.

Connected paper cutout dolls in red, yellow, green, and blue hold hands, filling a white surface.

Sociocultural

Multilingualism, cultural diversity, and local content

As in previous years, the discussion on digital inclusion touched on the need to foster multilingualism and access to digital content and tech in native languages. This is particularly challenging in the case of lesser-spoken languages such as Furlan, Sardo, and Arberesh, and these challenges need to be addressed if we want to truly empower individuals and communities to meaningfully engage in and take advantage of the digital world. The Tribal Broadband Connectivity Programme was highlighted as an example of an initiative that works to preserve indigenous languages, thereby adding tribal languages and cultural resources to the internet ecosystem. 

Universal acceptance (UA) was brought up as a way to enable a more inclusive digital space. UA is not only a technical issue (i.e. making sure that domain names and email addresses can be used by (are compatible with) all internet applications, devices, and systems irrespective of script and language), but also one of digital inclusion. It fosters inclusivity and accessibility in the digital realm. And while core technical issues have mostly been resolved, more needs to be done to drive substantive progress on UA. Approaches include more efforts to raise awareness within the technical community about UA readiness; economic incentives (e.g. government preference in public procurement for vendors who demonstrate UA); government support and involvement in the uptake of UA; and policy coordination among different stakeholders.
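To illustrate the technical side of UA, the short Python sketch below converts an internationalised domain name into the ASCII-compatible (Punycode) form that older systems expect. It relies on Python’s built-in IDNA codec, which implements the older IDNA 2003 rules (full IDNA 2008 support would need the third-party idna package), and the example domain is made up.

```python
# Universal acceptance in miniature: an application should handle a domain name in any script.
# One common step is converting it to its ASCII-compatible encoding for ASCII-only systems.
unicode_domain = "пример.example"  # hypothetical internationalised domain name

ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)                                  # the Punycode form, e.g. xn--e1afmkfd.example
print(ascii_domain.encode("ascii").decode("idna"))   # round-trips back to the Unicode form
```

UA readiness means going further than this: accepting such names (and internationalised email addresses) in every form field, database, and API that handles them.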

Multilingualism is not only about accessing content in local languages and in native scripts but also about developing such content. It was noted in a dedicated session that local content creation in minority languages contributes significantly to cultural and linguistic diversity. But challenges remain here as well. 

But in order to create content, users need to be able to access the internet. Yet, digital divides remain a reality, as do the lack of robust infrastructure, affordability issues (e.g. some households can only afford one device, while in many, even this one device is seen as a luxury), and gender inequalities, which prevent many from creating content. In addition, the mismatch between broadband pricing and the spending power of individuals hinders digital inclusion. Continued efforts are required to deploy reliable infrastructure with affordable pricing options.

Nonetheless, there is hope for universal access to the internet in the future. Advancements in technology are gradually making access less expensive with more options, potentially enabling broader internet access. And initiatives such as Starlink and Project Kuiper, which aim to provide connectivity to remote areas via satellites, are helping to bridge the digital divide.

One interesting point in the discussion was that the internet has not evolved into the egalitarian platform initially envisioned for content creation and access to information. Despite the TikTok phenomenon, instead of empowering individuals to become content publishers – it was said – the internet has given rise to powerful intermediaries who aggregate, license, and distribute content. These intermediaries dominate the industry by delivering uniform content to a global market. And so, challenges remain regarding content distribution and ensuring equal access for all. 
When considering local content contributions, platform and content neutrality should also be taken into account to ensure a fair and diverse content ecosystem.

Cybersecurity and Digital Safety

The development of offensive cyber capabilities by states, impactful ransomware attacks, and the high risks of sexual abuse and exploitation of minors online, have all raised the profile of cybersecurity and the importance of protecting against new and existing threats, the Main Session on Cybersecurity, Trust & Safety Online noted.

Offensive cyber capabilities and the legitimacy of using force in response to cyberattacks were outlined as important challenges, along with fighting the use of social networks as tools for interventionism, the promotion of hate speech, incitement to violence, destabilisation, and the dissemination of false information and fake news. 

Given the long list and complexity of issues, some feel a legally binding international instrument is needed to complement existing international law and encompass cyberspace adequately. Others underline the need to involve different stakeholders – the technical community, civil society, and companies, including law firms – in shaping any such instrument. The fast pace of tech development is another challenge in this endeavour. The limitations of a comprehensive solution to which we aspire should be acknowledged, and we should prioritise actions that could have the greatest near-future impact for mitigating risks.

Cybercrime negotiations

The debate on a UN treaty to combat cybercrime identified the following challenges: 

  • The current draft of the cybercrime treaty aims to extend the power of law enforcement but offers weak safeguards for privacy and human rights; treaty-specific safeguards may be necessary. 
  • Geopolitics dominates negotiations, and expert input is often needed (but not available) to understand the reality and shape of current cybercrime policies. 
  • Companies must play a crucial role in international cooperation against cybercrime.

Some concrete suggestions to foster increased cooperation and efficiency to combat cybercrime beyond international treaty provisions include the creation of a database of cybersecurity and cybercrime experts for knowledge and information sharing (the efforts of the UN OEWG and the OAS were outlined), developing a pool of existing knowledge to support capacity development for combating cybercrime (not least because policymakers often feel intimidated by technical topics), and focusing on expanding the role of existing organisations such as Interpol. Importantly, states and businesses should become more aware of the economic benefits and potential increase in GDP due to investments in cybersecurity.

International negotiations should also focus more on strengthening systems and networks at a technical level. This includes measures to ensure the development of more secure software, devices, and networks through security-by-design and security-by-default; providing legal protections for security researchers when identifying vulnerabilities; enhancing education and information sharing; and using AI in cybersecurity to identify vulnerabilities in real time, among other tasks. The risks of emerging technologies have come to the forefront of cybersecurity; however, international discussions should not lose sight of the broader cybersecurity landscape.

Cybersecurity and development

In emerging cybersecurity governance frameworks, developing countries’ specificities should be considered. Taking West Africa as an example, challenges include the lack of national and regional coordination to effectively combat cyber threats; resource limitations on technical, financial, and human fronts; insufficient allocation of resources to the cybersecurity sector; a shortage of qualified personnel in the region; and weak critical infrastructure, which is particularly susceptible to cyber attacks (where frequent power outages and telecommunication disruptions are already commonplace). 

Cybersecurity frameworks developed in such an environment should be based on peer-to-peer cooperation between the states of the region, cooperation and information sharing with the private sector, and local adaptation of global best practices, considering the local context and challenges. Notable initiatives are the Joint Platform for Advancing Cybersecurity in West Africa, launched under the G7 German presidency, which aims to establish an Information Sharing and Analysis Center (ISAC), and the work of the Global Forum on Cyber Expertise (GFCE) with the Economic Community of West African States (ECOWAS) to enhance capacities through partnerships.

A sturdy padlock sits on a black table in front of a computer keyboard.

Legal

The potential of regulatory sandboxes

What happens when you toss traditional regulation into a sandbox and hand innovators a shovel? You get a regulatory playground where creativity flourishes, rules adapt, and the future takes shape one daring experiment at a time. 

The workshop Sandboxes for data governance highlighted the growing interest in this tool for the development of new regulatory solutions. Regions like Africa and the Middle East are in the early stages of adopting fintech-related sandboxes; Singapore has gained more experience and has fostered collaboration between industry and regulators. GovTech sandboxes, as seen in Lithuania, have become integral to the regulatory process, where (un)controlled environments facilitate the testing and implementation of mature technologies in the government sector.

A common main challenge is the significant resources and time required to implement sandboxes, and how taxing this is for developing countries. It helps to learn from established sandboxes and tailor them to specific contexts. But more than that, collaborative efforts are needed between government authorities, industry players, civil society organisations, and regulatory bodies to make the process work.

The content creation revolution

The tectonic shift in content creation over the past decade has been internet-shaking. Content creation is no longer limited to an elite group of professionals, thanks to the widespread availability of user-friendly and inexpensive tools. Users are now generating vast amounts of unique and dynamic new content and sharing it on social media platforms and wherever the latest trend thrives.

Yesterday’s workshops on intellectual property discussed this shift, and the efforts to support the accessibility and availability of content through digital platforms. One workshop that looked at content creation and access to open information recognised that the industry is adapting to new technological advancements, while the workshop that looked at the manga culture (a cultural treasure of Japan, the host country) examined how the global manga market enjoyed rapid growth during the COVID-19 pandemic.

Both discussions explained how this transformation has its own challenges. The surge in user-generated content has raised important questions about intellectual property rights (IPR) and the ethical consumption of creative output, including the complexities of identifying and thwarting pirate operators, whose elusive tactics threaten creators’ livelihoods. The need for multistakeholder cooperation involving government bodies, internet communities, and industry players to effectively combat this threat goes without saying.

As the discussions unfolded, a common thread emerged: the need for innovation to meet the evolving demands of the digital age. But the discussions also demonstrated that the digital age demands not only legal frameworks and technological fortification but a nuanced understanding of the evolving dynamics between creators and consumers, and the content they develop and consume. 

Scales and a computer laptop form the background for a judge’s gavel on a desk.
Diplo/GIP at the IGF

Don’t miss our sessions today! 

Sorina Teleanu will speak at the open forum From IGF to GDC: A new era of global digital governance: A SIDS perspective. The session will examine the challenges developing countries face when engaging in global digital governance processes and explore ways to address such challenges. It will also discuss expectations from the ongoing GDC process and the relationship between the GDC process and the IGF. When and where? Wednesday, 11 October, at 09:45–11:15 local time (00:45–02:15 UTC), in Room I.

Anastasiya Kazakova will speak at a workshop on ethical principles for using AI in cybersecurity. The session will discuss what concrete measures stakeholders must take to implement ethical principles in practice and make them verifiable. It will also gather input and positions on how a permanent multistakeholder dialogue and exchange could be stimulated on this topic. When and where? Wednesday, 11 October, starting at 15:15 local time (06:15 UTC), in Annex Hall 1.

IGF 2023 – Daily 2

Decorative banner announcing the IGF2023 Daily#2 Highlights.

IGF Daily Summary for

Monday, 9 October 2023

Dear reader, 

Welcome to the IGF2023 Daily #2, your daily newspaper dedicated to the 18th Internet Governance Forum (IGF) discussions. 

AI and data were two keywords echoing during the kick-off day of IGF2023. Parliamentarians gathered for their now-traditional roundtable, and dozens of workshops discussed development, human rights, and other pillar themes. We noticed three main trends in yesterday’s debates.

First, many traditional narratives have been rehearsed, including the need ‘to manage both the opportunities and the risks that digital technologies bring’. Less repetition of common points could free more space for fostering new ideas through critical and engaging debates.

Second, existing initiatives were amplified in the debate, including a fresh focus on the G7 Hiroshima AI process and the G20 New Delhi initiative of digital public infrastructure.

Third, some new insights were brought into the debate, including a call for a ‘fourth way’ – beyond the EU, China, and US approaches – which would help developing countries leverage data as a strategic asset for socio-economic development, amplified by cross-border exchanges. 

As you can read below, our reporting aims to identify new insights, ideas, and initiatives in the IGF debates. You can also dive deeper into summary reports generated just in time by DiploAI.

The Digital Watch team, with support from DiploAI

Drawing of a rapporteur taking notes at the back of the room as panelists discuss dynamically in front of a projection screen.

Do you like what you’re reading? Bookmark us at https://dig.watch/event/internet-governance-forum-2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


The summary of the discussions
Kyoto mixes the modern and the new, Oct 2023

The day’s top picks

  • The need to move from general debates on data as a public good for development to operational use 
  • Calls for a ‘4th way’ in data governance, in addition to the EU, China, and USA approaches
  • The role of parliamentarians in shaping a trusted internet 
  • Call for a judiciary track at the IGF

Leveraging the multistakeholder approach to build the Internet We Want

It is trite to speak, or indeed, in this case, write, about the impact that digital technologies have had on our everyday lives. However, it’s worth noting that these technologies now occupy a prominent position on the global stage, evident in G7, G20, G77, and UNGA discussions. Moreover, there’s a growing realisation that their potential extends far beyond what we’ve witnessed so far: They could help us achieve the SDGs, address climate change, and create a better world.

For instance, AI has emerged as a technology with the potential to enhance the impact of digital technology on the SDGs. Data show that 70% of the SDGs can benefit directly from digital technologies, highlighting their potential to positively impact global development.

UN Secretary-General Guterres outlined three key areas where we need to act:

  1. Bridging the connectivity gap by bringing the last 2.6 billion people online, especially the women and girls in underdeveloped regions
  2. Addressing the governance gap by improving the coordination and alignment of the IGF and other digital policy/governance entities within and beyond the UN system
  3. Prioritising human rights and a human-centred approach to digital cooperation
UN SG Antonio Guterres speaks during the opening of IGF2023. Credit: IGF Flickr

The internet must remain open, secure, and accessible. This requires increased support for long-established multistakeholder institutions. Guterres emphasised: ‘We cannot afford another retreat into silos.’ Following this approach, he said, we can maximise the benefits of the internet while reducing its risks, and build the internet we want.

The IGF has a role to play: It should strengthen its position as a global digital policy forum in finding points of convergence and consensus.

But as the IGF crosses the threshold of adulthood, the community can look back and ask: has it delivered on its mandate and purpose? And the community can look forward and ask: How can the IGF better support preparations for and the follow-up to the Global Digital Compact and Summit of the Future?

The role of parliamentarians in shaping a trusted internet 

In the headlines, dwindling trust in politics, underscored by compelling polling data, has raised alarm bells. In the background, this negative trend threatens the legitimacy and effectiveness of political institutions, necessitating concerted efforts to rebuild trust. According to the Day 1 discussions, the integrity of democratic elections is in danger, in light of the widespread interference seen globally. It was noted that 70 democracies are scheduled to hold elections in 2024, and these elections are at a higher risk than ever before, given the growing misuse of digital technology for disinformation and election interference. 

Trust, a key phrase in this session, is further eroded as online violence, particularly against women in politics, remains a serious challenge. It not only jeopardises individual well-being but also undermines democratic processes. And then, there’s the enigma of AI – holding the promise of unprecedented opportunities while posing new challenges such as the micro-targeting of voting audiences, bias, or new, old, and changing privacy concerns. 

Amid these substantial concerns, the role of parliamentarians becomes pivotal. They are the cornerstones of our political system, entrusted with crafting the legal framework that governs our digital lives. Some speakers in this year’s parliamentary track pointed out that parliaments are the only branch of government that remains in touch with the individual daily digital lives of citizens. Thus, their active involvement in establishing robust frameworks for the governance of digital technologies rooted in transparency, accountability, and fairness is urgently needed. Contextualisation of global frameworks is another key point emphasised by the speakers. They argued that countries should adapt global frameworks to their specific needs and local requirements.

The need for agile governance was reiterated throughout the discussion: Sustainable, innovative, and future-proof regulations should be used to effectively and efficiently respond to the ever-changing digital technology landscape. 

The session was a call to action to reestablish trust, combat online violence, safeguard electoral integrity, and navigate the complex realm of AI. Above all, it underscored the indispensable role of parliamentarians in global digital governance.

Almost-empty plenary hall at the IGF2023.
The plenary hall of IGF2023 in Kyoto | Credit: SasaVK

AI

In September, the G7 Hiroshima Leaders agreed to develop an international code of conduct for AI. This mirrors the European Commission’s approach to developing voluntary AI guardrails ahead of its actual AI law and the approaches adopted or drafted in the USA and Canada. The High-Level Leaders Session V: Artificial Intelligence reiterated these calls and the need for international guiding principles, research and investment, awareness of the local context, and stakeholder engagement in achieving safe and trustworthy AI.  

The trajectory of AI systems is anticipated to evolve towards multimodal capabilities, seamlessly integrating text and visual content with fluency in multiple languages, extending its global impact beyond English. Generative AI, expected to be as transformative as the internet was, emerged as a central discussion point, poised to etch its mark on history. However, consumers need to know what is AI-generated content and what is human-generated content, particularly ahead of global elections.

The spotlight was on the alarming proliferation of AI-amplified misinformation and disinformation and the profound impact of technology on human emotions and rationality. In this context, there emerged a resounding call for truth, trust, and shared reality, reaffirming the pivotal role of journalism in upholding democratic values. Simultaneously, it was also recognised that the deployment of AI can help address pressing global challenges –  highlighting disaster management, climate crises, global health, and education as high-risk domains.

Transparency and collaboration emerged as linchpins for solutions. Transparency was analysed in the context of technical development and the governance of AI systems. Singapore’s effort in launching the open-source AI Verify Foundation was mentioned as an example of the commitment to open discourse and robust governance. Collaboration, particularly in a multistakeholder fashion, was highlighted, and the private sector was recognised as a force driving AI innovation and, thereby, a necessary partner to governments in governing AI.

Looking ahead, the session heralded the Hiroshima AI Process and the plans for an AI expert support centre under the Global Partnership on AI (GPAI) as signifiers of a proactive approach to addressing AI challenges. Forums such as the Frontier Model Forum, the Partnership on AI, and MLCommons also represent similar forward-looking efforts. The UN, the ITU, and the OECD were asked to be more prominent in advancing AI initiatives.

An AI shield icon hovers over a person’s hand

Internet fragmentation

As expected, internet fragmentation was on the agenda of the IGF. As a network of networks, the internet is inherently fragmented; yet concerns are looming about harmful fragmentation, which would hinder the intended functioning of the internet.

Geopolitical developments are changing internet governance. States increasingly seek to achieve digital sovereignty to exert control over their respective internet spheres. This comes as a response to the adverse effects of internet weaponisation, digital interference, disinformation, misinformation, and campaigns promoting violence beyond national borders. Such regulatory tendencies, however, can lead to internet fragmentation with negative consequences, including restrictions on access to certain services, internet shutdowns and censorship, and exacerbation of the digital divide in underdeveloped regions. The internet as we know it cannot be taken for granted any more.

International norms are critical to reduce the risks of fragmentation. International dialogue in forums like the IGF is a valuable tool for inclusive discussions and contributions from diverse stakeholders. It is important to acknowledge different perspectives about fragmentation between the Global North and Global South. National regulations must, therefore, consider different contexts and allow countries to pursue their own policies. However, they should maintain a comprehensive approach to internet governance. Of particular relevance are the regulatory frameworks with extraterritorial implications – like those of the EU, China, and India – due to their economic powers and the global nature of the internet.

In developing national and regional regulatory frameworks, it is important to consider multistakeholder input, because the internet – and its critical resources – are not used, owned, or managed solely by states. It can be difficult to establish a central authority responsible for shaping internet policy requirements. Inclusivity and user empowerment are also important, particularly considering the perspectives of marginalised and vulnerable communities. At the same time, there is a significant risk in leaving public policy functions in the hands of private corporations. The industry should accept that it is not exempt from regulations.

A particular concern about harmful fragmentation is related to state control over the public core of the internet and its application layer. Different technologies operate at several layers of the internet, and those distinct layers are managed by different entities. Disruptions in the application layer could lead to disruptions in the entire internet. Therefore, governance of the public core calls for careful consideration, a clear understanding of these distinctions, and deep technical knowledge. 

Accountability for the governance of the public core of the internet should be dealt with on an international level. Regulations related to the technical layers should follow a layered policy approach, in which different regulations may be required for each layer (following approaches embraced in Japan and The Netherlands, for instance). By considering the specificities of different layers, policymakers can create a cohesive and comprehensive regulatory approach that does not lead to internet fragmentation (for instance, a layered approach to sanctions can help prevent unintended consequences like hampering internet access).

Subsea internet cables.
Subsea internet cables. Credit: Airtel Business

Human rights

Looking specifically at the intersection of gender and youth online to achieve a safer and more inclusive digital environment, the workshop BeingDigital Me: Being youth, women, and/or gender-diverse online presented different perspectives in addressing this matter. The speakers highlighted several initiatives that address gender and gender-based violence online, such as the Global Partnership for Action on Gender-Based Online Harassment and Abuse, the work of the Internet Society’s Gender Standing Group, and recent initiatives in Colombia. Speaking about the need for inclusivity and collaboration to combat tech-facilitated gender-based violence and gendered disinformation, the speakers pointed out the role of education and skill development in fostering increased youth participation. In addition, they spoke about the positive impacts of online platforms that promote gender-related initiatives and the need for a specific framework to address digital violence. 

The session on advocacy with Big Tech in restrictive regimes discussed a complex set of issues related to advocating and implementing human rights policies in countries with regimes restricting digital rights. From the perspective of civil society, in addition to engaging with governments on these issues, a challenge lies in understanding the ecosystem, the complexity of regulation and policies, and the fast-paced changes taking place. In restrictive regimes, tech companies that have human rights policies in place must address the dilemma of whether to comply with restrictive national rules or uphold their human rights policies and, as a consequence, limit their business activities within those jurisdictions. Also discussed was the specific role of platforms and their responsibility when it comes to content moderation (particularly AI-moderated content), addressing disinformation, and enhancing transparency. 

In addition, the speakers addressed the imbalance in capacity between civil society and tech companies, the challenges of sudden structural changes in tech companies that impact human rights corporate policies, and the imperative that civil society advocates for implementing human rights policies and contingency strategies by tech companies.

The participants discussed the examples of Russia, Vietnam, Türkiye, Syria, and Pakistan.

top view of group holding wooden cubes with rights lettering

Data governance

The Data Free Flow with Trust (DFFT), one of the pillar themes of IGF2023, was the focus of the session on the development aspects of free data flows, Opportunities of cross-border data flow – DFFT for development. Data is a recurrent topic at the IGF, with many narratives based on the need to balance the flow of data with trust and privacy, the extraction of value from data, data transparency, private-public partnerships, and others. 

A few novel highlights in this year’s discussions included: 

  • a call for a ‘fourth way’ for data governance (in addition to the approaches taken by the USA, the EU, and China) in which data would feature as a strategic asset of developing countries, used for socio-economic development 
  • the centrality of digital public infrastructure (DPI) as an infrastructure for inclusive, open, effective use of data 
  • a more operational and practical concept of data as a public good (see the session on African AI: Digital Public Goods for Inclusive Development)
  • strengthened voices of developing and least-developed countries in emerging global data governance frameworks
  • mainstreamed Data Free-Flow with Trust in development assistance projects and initiatives

Tackling the operationalisation of Data Free Flow with Trust, the speakers in a dedicated session discussed the main challenges in ensuring that privacy, security, and intellectual property are safeguarded in the promotion of the free flow of data. The speakers highlighted the challenges related to access to data, the need for redress mechanisms, and the impacts of restricted data flows on the fragmentation of the internet. They also addressed the responsibilities of different stakeholders – including governments and the private sector (be it internet companies or the telecom sector, for instance) – in safeguarding the privacy and security of data. A human rights-based approach to data and the involvement of civil society in the relevant policy processes were mentioned as a must for ethical and responsible data governance.

It was also emphasised that applying the rule of law in the digital space is as crucial as in the physical world. A proposal was made to establish a judiciary track at the IGF to include judges and other professionals in the judiciary field in discussions related to digital governance. This would provide them with a specific platform to engage with experts, share insights, and gather more knowledge about digital governance.

Visualizing data - abstract purple background with motion blur, digital data analysis concept

Digital and environment

Green and digital are two pillars of many policy approaches and strategies worldwide. The workshop Cooperation for a green digital future highlighted the potential of AI and the internet of things in reporting and gathering accurate information about climate change. Yet, without common measurement standards, the impact of new technologies will be limited. The new dynamism of youth in the climate field holds much potential for accelerating the multistakeholder approach and advancing the interplay between digital and green policies.

Because Japan, the host of IGF 2023, has been a leader in robotics for decades, it is not surprising that robots are featured in the IGF debate. This was the case, for instance, in the session Robot symbiosis cafe, where several examples of using robots to assist people with disabilities were given. But beyond highlighting the potential for good, the debate also raised significant concerns, including the need to deal with the hype surrounding the use of robots in society and the risk of new forms of divides emerging because developing countries might not have the resources and know-how to develop robotics. One solution for making robots more affordable is to foster agile, innovative enterprises to streamline the process of robot design and production, ultimately lowering costs and reducing development time.

 Blue digital circuits form a landscape of a large tree and smaller plants beneath clouds on a black background.
Diplo/GIP at IGF2023

Follow our just-in-time reporting!

Unable to attend all the sessions you’re interested in? DiploAI and the team of experts have you covered with just-in-time reporting from IGF2023. Read summaries of the sessions and the main arguments raised during discussions, available only a few hours after the sessions conclude. View knowledge graphs as visual mapping of debates. Bookmark our dedicated IGF2023 page on the Digital Watch observatory, or download the app to read the reports.

Decorative banner ‘Follow IGF2023 Just-in-Time Reporting by DiploAI and Diplo’s Team of Experts’, with a QR code for access.

We’re also present at the IGF2023 Village! 

If you’re attending IGF2023 in Kyoto in person, come visit us at booth 56! If you’re joining the meeting online, we have a virtual booth you can swing by!

Diplo’s director Jovan Kurbalija with Indonesia’s delegation at Diplo/GIP booth at the IGF | Credit: SasaVK

Don’t miss our sessions today! 

We supported the IGF Secretariat in organising a session on unlocking the IGF’s knowledge, where Jovan Kurbalija and Sorina Teleanu will discuss the power of epistemic communities, organising data, and harnessing AI insights for our digital future. When and where? Tuesday, 10 October, at 12:30 – 13:15 local time (03:30 – 04:15 UTC), in Room K.

Pavlina Ittelson will moderate an open forum on ways to enhance in-depth long-term participation and efficient cooperation of CSOs in multilateral and multistakeholder internet governance fora. When and where? Tuesday, 10 October, at 14:45 – 16:15 local time (05:45 – 07:15 UTC), in Room K.

DW Weekly #131 – 9 October 2023


Dear all,

The EU is in the spotlight this week: It has just published its list of critical technology areas, similar to the lists which other countries have drawn up, which it will assess for risks to its economic security. In other news, Kenyan lawmakers want to halt Worldcoin’s operations in the country, whereas Microsoft’s testimony as part of the ongoing US trial against Google shows how intense the race to data is.

Let’s get started.

Stephanie and the Digital Watch team

PS. If you’re reading this from Kyoto (IGF 2023), join us for discussions and drop by our booth.


// HIGHLIGHT //

The four critical technologies the EU will assess for risks:
AI, advanced chips, quantum tech, and biotech

The European Commission announced on Tuesday that it will review the security and leakage risks of four vital technology domains – semiconductors, AI, quantum technologies, and biotechnologies – among the 10 technology areas most critical to the EU’s economic security.

What it means. The EU wants to make sure that these technologies do not fall into the wrong hands. If they do, they could be exploited to hurt others. For instance, biotechnologies used for medical treatment can be exploited for potential biowarfare applications. If quantum cryptography designed to safeguard a country’s critical infrastructure is misused, it could potentially undermine or disrupt the critical operations of that same country. We can only imagine what lies in store for AI if it’s used for hostile purposes.

Dual-use. These four technologies were prioritised due to their transformative nature, their potential to breach human rights, or the risk they carry if they’re used for military purposes. In fact, they all share dual-use capabilities, that is, they all have the potential for both civilian (healthcare, communications, etc.) and military (weapons, etc.) applications.

Other risks. In addition to tech security and leakage, the EU thinks there are other critical risks that will eventually also warrant attention: those linked to the resilience of supply chains; those affecting the physical and cybersecurity of critical infrastructure; and those with implications for the weaponisation of economic dependencies and economic coercion. 

Countries of concern. The recommendation does not mention any specific country that would be targeted, but there’s one term that gives it away. The concept of ‘de-risking’ (in contrast with decoupling), mentioned several times in the recommendation, forms part of the EU’s policy of reducing reliance on China. It is, therefore, quite clear that China will be one of the main targets of the risk assessments.

Issue #1: Divergences. The risk assessments will be carried out in collaboration between the European Commission and the member states (with input from the private sector). They are the first steps towards implementing the new European Economic Security Strategy, published in June. As with all things new, competing interests and diverging geopolitical concerns are a main challenge: European countries are divided, with France and Germany favouring an investment-first approach, while central European countries adopt a more critical approach towards China.

Issue #2: Protectionism. The EU is set to make crucial decisions next year on the measures it will implement, and on whether it will carry out collective risk assessments of the remaining six technology areas. A potential challenge is that, in China’s eyes, these measures could portray the EU as increasingly protectionist. If this perception takes hold, it has the potential to significantly harm trade relations between the EU and China. EU Commissioner Thierry Breton’s assertion that ‘protection does not mean protectionism – again, I insist on this’ is unlikely to assuage concerns.

A geopolitical trend. Though the EU is the latest actor to move ahead with its plan to reduce risks, it’s by no means the first. Other countries, notably the USA and Australia, have published similar lists of technologies they are assessing for the risks they pose.

Yet, there’s a notable difference: The foundation of Europe’s approach is to de-risk, not decouple, supporting the economic security strategy’s tripartite approach of protecting, promoting, and partnering. What remains to be seen is whether the last of these – partnering – will amount to more than a theoretical construct.


Digital policy roundup (2–9 October)

// AI GOVERNANCE //

In USA v Google, Microsoft says companies are competing for data to train AI

Testifying in the ongoing US trial against Google, Microsoft CEO Satya Nadella (appearing as a plaintiff’s witness) said that tech giants were competing for vast troves of content needed to train AI. Companies are entering into exclusive deals with large content makers, which are locking out rivals, Nadella said.

The lawsuit concerns Google’s search business, which the US Department of Justice and state attorneys-general consider ‘anticompetitive and exclusionary’. They are arguing that Google’s agreements with smartphone manufacturers and other firms have strengthened its search monopoly. Google has counterargued that users have plenty of choices and opt for Google due to its superior product.

Why is it relevant? First, Nadella’s comments highlight the resources required by AI technology: computing power, and large troves of data. Second, Nadella said these exclusionary data agreements reminded him of ‘the early phases of distribution deals’ – which is to say that agreements with content providers are monopolising valuable content just as Google allegedly did with smartphone manufacturers and other companies.

Case details: USA v Google LLC, District Court, District of Columbia, 1:20-cv-03010




Hollywood strike 2023

The writers’ strike is over. After a historic strike lasting almost five months, the Writers Guild of America – which represents over 11,500 screenwriters – struck a deal with Hollywood companies on the use of AI: AI-generated material may not be used to undermine or split a writer’s credit, or to adapt literary material; companies can’t force writers to use AI tools; and companies have to disclose whether material given to writers is AI-generated.


// ANTITRUST //

Korean communications authority fines Google, Apple

The Korea Communications Commission (KCC) is fining Google and Apple for abusing their dominant position in the app market. The fine can go up to KRW68 billion (USD50 million).

Google and Apple were found to have forced app developers to use specific payment methods and to have delayed app reviews unfairly. In addition, Apple was found to have charged domestic app developers discriminatory fees.

Why is it relevant? South Korean authorities have taken aim at Big Tech companies’ practices in recent years. In April, the country’s Fair Trade Commission (FTC) fined Google USD32 million for blocking the growth of One Store Co., a local rival app marketplace. In 2021, the FTC fined the company around USD1 million ‘for obstructing other companies from developing rival versions of the Android operating system.’


// WORLDCOIN //

Kenyan lawmakers want Worldcoin to cease operations in the country

A Kenyan parliamentary panel called on the country’s information technology regulator on Monday to shut down the operations of cryptocurrency project Worldcoin within the country until more stringent regulations are put in place. 

The lawmakers’ report concluded that Sam Altman’s Worldcoin project constituted an ‘act of espionage’. The panel also urged the government to launch criminal probes into Tools for Humanity Corp, the company behind Worldcoin’s infrastructure, for operating in Kenya illegally.

Why is it relevant? First, Kenya could set a precedent on how countries could deal with Worldcoin, even though the operations are being scrutinised in other countries as well. Second, it shows the speed at which new technologies can enter a market, leaving regulators to grapple with the policy implications.


// INFRASTRUCTURE //

Amazon launches first test satellites for Kuiper internet network

Amazon launched its initial pair of prototype satellites from Florida last week, the company’s first step before it deploys thousands more satellites into orbit.

However, Amazon is up against pressing schedules on multiple fronts. First, the Federal Communications Commission mandates that at least half of the proposed 3,236 satellites in Project Kuiper’s constellation must be launched by mid-2026. Second, Amazon faces the challenge of catching up with SpaceX, which already boasts over 2 million customers for its satellite internet service.

Why is it relevant? Low-orbit satellites, like the ones launched by Amazon, can expand global connectivity significantly. By operating closer to the Earth’s surface, these satellites enable faster communication speeds, lower latency, and wider coverage.


The week ahead (9–16 October)

Ongoing till 12 October: The annual Internet Governance Forum (IGF2023) is taking place in Kyoto, Japan and online this week. Follow our dedicated space on Dig.Watch for reports. Expect a round-up in next week’s edition.

11–12 October: With so many elections around the corner, the EU DisinfoLab’s 2023 conference will have plenty to discuss.

12–15 October: The 13th IEEE Global Humanitarian Technology Conference in Pennsylvania, USA, will address critical issues for resource-constrained and vulnerable people.

16–17 October: This year’s International Regulators’ Forum will be hosted in Cologne, Germany. The Small Nations Regulators’ Forum takes place on the second day.

16–20 October: UNCTAD’s 8th World Investment Forum returns as an in-person event hosted in Abu Dhabi, UAE. 


#ReadingCorner
Webpage banner of report

More countries, sectors under attack – Report

Cyberattacks have increased globally, with government-sponsored spying and influence operations on the rise. The primary motives? Stealing information, monitoring communications, and manipulating information. These insights are from Microsoft’s latest Digital Defense Report, covering trends from July 2022 to June 2023.

Webpage banner of report

Empowering everything with AI

This Wall Street Journal article (paywalled) talks about the growing role of AI in practically every aspect of our lives – from virtual assistants like Siri and Alexa to automated systems in the workplace. We’ll soon be unable to escape it.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

IGF 2023 – Daily 1


IGF Daily Summary for

Sunday, 8 October 2023

Dear reader, 

If you’ve arrived in Kyoto, good morning.

If you’re on your way to Kyoto, happy travels.

If you’re following the IGF online, we hope you’re settled in comfortably.

Welcome to the IGF2023 Daily #1, your daily newspaper dedicated to the 18th Internet Governance Forum (IGF) discussions. 

As per tradition, Diplo and the Geneva Internet Platform (GIP) are providing just-in-time reporting from IGF2023, bringing you session summaries, data analysis and more. What’s not traditional is the addition of DiploAI, our new AI reporting tool, to the mix. In this hybrid system, Diplo’s human experts and AI tool work together to deliver a more comprehensive reporting experience.

Our AI will prepare session reports for all the events, while our dedicated human team is curating daily highlights from these reports, which will be delivered to your inbox each day. This issue covers the highlights of Day 0 at IGF2023 on 8 October 2023.

Let’s begin, 

The Digital Watch team

Drawing of a rapporteur taking notes at the back of the room as panelists discuss dynamically in front of a projection screen.

Do you like what you’re reading? Bookmark us at https://dig.watch/event/internet-governance-forum-2023 and tweet us @DigWatchWorld

Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at digitalwatch@diplomacy.edu


The summary of the discussions

The day’s top picks

  • Importance of Data Free Flow with Trust (DFFT)
  • Establishment of a permanent data policy forum
  • Appointing a goodwill ambassador for digitalisation and the SDGs 
  • Call for bottom-up AI
Kiosk poster with a typical Japanese garden scene announcing the 18th Annual Meeting of the Internet Governance Forum.

High-Level Leaders Session I: Understanding Data Free Flow with Trust (DFFT)

The centrality of data marked the kick-off of IGF’s high-level debate. Speakers during the High-Level Leaders Session I: Understanding ‘Data Free Flow with Trust’ (DFFT) anchored data in the wider context of tackling climate change and dealing with health challenges, among others. Thus, cross-border data transfers become critical for our shared digital future. 

The Data Free Flow with Trust (DFFT) concept was introduced to ensure these transfers go smoothly. DFFT aims to enable the flow of data worldwide while ensuring data security and privacy for users.

However, the concept is not without challenges: The global data landscape is fragmented with diverse data security and privacy perspectives. Some of the concerns mentioned at this session were: potential privacy threats from third-party access and government surveillance, as well as the credibility and reliability of data sources. These concerns underscored the importance of implementing explicit principles on governmental access, as demonstrated by the OECD’s Trusted Government Access Program. It was also highlighted that building trust in institutions responsible for data collection is imperative, as is effective policy oversight.

Cross-sector collaboration is also needed to strengthen data governance. Promoting regulatory and operational sandboxes has also been proposed as a practical solution to foster good governance among stakeholders.

New initiatives for data governance are needed to establish a robust global framework capable of efficiently managing data to ensure secure and effective data flow. The idea to create a permanent international forum for data policy dialogue has been gaining wider acceptance, including support from G7 nations. Such a forum should avoid the risk of fragmented data laws and regulations.

Abstract digital waves with flowing particles

High-Level Leaders Session II: Evolving trends in mis- & dis- information

An MIT report from 2018 found that lies spread six times faster than the truth. The situation is worsened by rapid advancements in generative AI, which can create synthetic content nearly indistinguishable from authentic content. This problem is even more critical in the Global South, where weaker institutional structures make populations more susceptible to misinformation.

What can be done to tackle misinformation and disinformation? A holistic approach needs to be taken, and it must be multistakeholder in nature, the High-Level Leaders Session II: Evolving trends in mis- and dis-information noted. One part of the solution is enhancing media literacy to proactively tackle the strength and attraction of disinformation, an approach known as pre-bunking. Users also need to be aware of the health and accuracy of the information they consume, and consume it in a balanced and unbiased manner (this was likened to maintaining nutritional balance in your diet).

Another part of the solution is heightening the responsibility, transparency, and accountability of tech platforms, which should advance responsible innovation, boost fact-checking capabilities and comply with a Code of Practice against disinformation.

Strengthening regulation also forms a crucial part of the solution, but the introduction of new regulations unleashes its own set of challenges, topmost being their slow emergence, always lagging behind new technologies for content generation, including fake content. A revised governance structure that rewards the sharing of accurate information to combat the appeal of false news should also be in place. Emerging as a novel regulatory approach is the concept of ‘digital constitutionalism,’ offering a promising way to control this amplified influence of tech companies. This involves crafting collaborative global legislation and international frameworks capable of effectively confronting and regulating these platform companies.

Tiles form a wordplay illustration of FAct and FAke by using the same F and A for the beginning of both words

High-Level Leaders Session III: Looking ahead to WSIS+20: Accelerating the multistakeholder process 

2025 will mark 20 years since the first World Summit on the Information Society (WSIS) was held, and it is time for a review. The principles established during WSIS are still relevant, and the multistakeholder approach is crucial for effective internet governance, as highlighted by High-Level Leaders Session III: Looking ahead to WSIS+20: Accelerating the multistakeholder process. WSIS has made significant progress in establishing a human-centric, digitally connected global society through the use of ICTs. 

WSIS+20 is at a turning point, building on what worked well and adjusting to what is ahead of us, especially related to challenges triggered by AI. In this new context, the following issues remain high on the agenda of WSIS discussions: inclusion, bridging the digital divide, and putting human values at the centre of AI and digital developments.

Iconic logo of the World Summit on the Information Society Geneva 2003 - Tunis 2005

High-Level Leaders Session IV: Access & innovation for revitalising the SDGs

As the clock ticks towards the 2030 SDG deadline, digital technologies are increasingly seen as a way to rescue the SDGs. This was clear during the SDG debates at the UN General Assembly in September. This call to add a digital element to the SDGs echoed during the High-Level Leaders Session IV: Access & Innovation for Revitalising the SDGs. The session listed numerous areas where digital can support the 2030 Agenda, including poverty, inequality, climate change, and the digital divide.

One of the examples from the session noted that initiatives utilising technologies such as AI, cloud computing, sensors, drones, and blockchain in smart agriculture are being implemented globally to tackle SDG2 and eradicate hunger. 

The session debate listed the following key issues in the nexus between the SDGs and digitalisation: ethical considerations, privacy concerns, digital literacy and equitable access to technology, responsible governance, and education. 

The discussions highlighted the interconnectedness of these goals and emphasised the need for a comprehensive, cooperative approach. Collaboration and partnerships among all stakeholders, including governments, the private sector, and civil society, are deemed essential for the successful implementation of digital solutions in advancing the SDGs. Creating the position of goodwill ambassador for digitalisation and SDGs and initiating digital enlightenment movements can further help spread awareness and knowledge.

Icons for each of the 17 SDGs form a circle around the title: Sustainable Development Goals, with a line linking each one to the centre.

IGF Leadership Panel paper: The Internet We Want

The IGF Leadership Panel, a body established by the UN Secretary-General to support and strengthen the IGF, presented its paper The Internet We Want in a Day 0 session. According to the paper, the internet needs to be:

1. Whole and open. The potential fragmentation of the internet threatens social and economic development benefits, while also harming human rights.

2. Universal and inclusive. Data show that 2.7 billion people remain offline. Connecting them not only requires infrastructure, but also digital skills, and applications and content relevant for users. Frameworks that enable internet connectivity should be based on light-touch ICT policy and regulations, and encourage universal access, competition, innovation, and the development of new technologies. 

3. Free-flowing and trustworthy. Trust is strengthened when governments adopt robust and comprehensive commitments to protect the rights and freedoms of individuals. Cooperation between governments and stakeholders, including business and multilateral organisations, is needed to advocate for interoperable policy frameworks that facilitate cross-border data flows, enabling data to be exchanged, shared, and used in a trusted manner, thereby fostering high privacy standards.

4. Safe and secure. Robust frameworks for high levels of cybersecurity should be established, along with strong recommendations for legal structures, practices, and cross-border cooperation to combat cybercrime.

5. Rights-respecting. Human rights must be respected online and offline, and a human rights-based approach to internet governance is required to realise the full benefits of the internet for all.

What lies ahead?
IGF2023 virtual reception desk attended by an avatar

300+ sessions! That’s what’s in store for IGF2023. We will be with you throughout it all: the high-level leaders track, parliamentary track, youth track, main sessions, workshops, dynamic coalition sessions, open forums, town halls, lightning talks, award launches, and networking sessions. Bookmark our dedicated IGF2023 page on the Digital Watch observatory, or download the app to read the session reports.

Diplo/GIP at IGF2023

Diplo and the GIP are actively engaged at IGF2023, organising and participating in various sessions.


Diplo’s Director of Knowledge Sorina Teleanu (left) and Diplo’s Executive Director Jovan Kurbalija (right) during Diplo’s Day 0 event.

We kicked off Day 0 with a session on bottom-up AI and the right to be humanly imperfect, where we discussed how AI models should reflect more diversity, relying on different communities’ distinct traditions and practices. Such an approach will contribute to a more authentic, bottom-up AI model that does not limit itself to predominantly European philosophical traditions. We also noted that the uniqueness and imperfection of human traits are invaluable characteristics and essential considerations in the development of AI.

We’re also at the IGF2023 Village! If you are on the ground at IGF2023 in Kyoto, drop by our Diplo/GIP booth. If you’re joining the meeting online, check out our space in the virtual Village.

Jovan Kurbalija, Sorina Teleanu, and Pavlina Ittelson at the Diplo/GIP booth at IGF2023.

Digital Watch newsletter – Issue 83 – October 2023


Snapshot: What’s making waves in digital policy?

Geopolitics

The European Commission has released a preliminary list of four high-risk technology areas for potential misuse by autocratic regimes and human rights violations; experts say this is aimed at China. On the other side of the pond, as Washington weighs additional restrictions on chip exports, US companies will continue to sell chips to China, but not the most advanced ones. China’s trade council has called on the US to reconsider rules restricting American investments in China’s tech sector. The council argues that the restrictions are vague and do not differentiate between military and civilian applications.

AI governance

G7 countries have agreed to create an international code of conduct for AI that would establish principles for the oversight and control of advanced forms of AI. In a similar development, Japan (the current chair of the G7) and Canada have released voluntary codes of conduct for companies developing AI – this follows the recent trend of using voluntary guidelines until regulations are enacted. 
The British antitrust regulator, the Competition and Markets Authority (CMA), proposed seven principles to guide the development and deployment of AI foundation models (technology trained on vast amounts of data to carry out a wide range of tasks and operations). Finally, the USA announced plans to present a proposal of global standards for the use of military AI at the UN in the near future.

Security

The International Committee of the Red Cross (ICRC) has issued eight rules of engagement for hacktivists who are involved in conflicts, warning them that their actions can endanger lives. The rules include a prohibition on cyberattacks targeting civilians, hospitals, and humanitarian facilities, as well as the use of malware or similar tools that can harm both military and civilian targets.

Infrastructure

The US Federal Communications Commission (FCC) is planning to restore the net neutrality rules that were repealed in 2017. FCC Chairperson Jessica Rosenworcel announced that the FCC proposes to reclassify broadband under Title II of the US Communications Act. That would give the FCC more authority to regulate internet providers, including the ability to prevent carriers from slowing down or speeding up internet traffic to certain websites.
Huawei, the Chinese tech giant, has taken legal action in a Lisbon court against a resolution by Portugal’s cybersecurity council (CSSC), which effectively restricts operators from employing its equipment in high-speed 5G mobile networks.

Internet economy

The European Commission has designated six major tech companies – Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft – as gatekeepers under the Digital Markets Act (DMA), concluding a 45-day review process. The designation includes a total of 22 core platform services provided by these companies.

In another area, Amazon has temporarily secured a victory in a case concerning its classification as a Very Large Online Platform (VLOP). The General Court of the Court of Justice of the EU (CJEU) in Luxembourg has, in response to Amazon’s plea, granted interim measures, resulting in the postponement of certain DSA obligations. This takes place amid the initiation of stringent measures under the EU Digital Services Act (DSA), affecting 19 major online platforms and search engines.

(Alleged) Anti-competitive practices by major companies were in the spotlight last month. The US Federal Trade Commission (FTC) and 17 state attorneys general sued Amazon for alleged anti-competitive behaviour. The US Justice Department’s case against Google, one of the biggest antitrust cases in decades, commenced on 12 September 2023. This lawsuit focuses on Google’s search business, which is alleged to be ‘anti-competitive and exclusionary’, enabling the company to maintain a monopoly in the search and search advertising markets. In a different case also concerning Google, the company announced a provisional settlement in the USA on monopoly allegations concerning the Play Store application platform.

The European Commission has been informally collecting views on potentially abusive practices by Nvidia, Bloomberg revealed. This comes after France’s competition authority carried out an ‘unannounced inspection […] in the graphics cards sector’, which was revealed to involve Nvidia.

Digital rights

Reporters Without Borders (RSF) has called for public input in drafting the AI Charter to clarify the journalism community’s position on the extensive use of AI technologies in the field. 

Norway’s data watchdog hopes to extend its daily fines of NOK 1 million (USD93,000), imposed on Meta for privacy breaches, across the EU and European Economic Area (EEA). Now, it is up to the European Data Protection Board (EDPB) to evaluate the situation.

Content policy

A federal appeals court in the USA has extended limits on the Biden administration’s communication with social media platforms, to also encompass the US Cybersecurity and Infrastructure Security Agency (CISA). This ruling significantly trims the ability of the White House and government agencies to engage with social media platforms on matters of content moderation.

The EU warned major social media platforms about not complying with the newly enacted Digital Services Act (DSA) targeting fake news.

Development

The EU has released its Digital Decade report, urging actions to achieve Digital Decade targets by 2030.

New ITU data shows global internet access improved in 2023, with over 100 million new users worldwide.

The G77 Summit adopted the Havana Declaration, focusing on science, technology, and innovation and outlining the G77’s future actions.

THE TALK OF THE TOWN – GENEVA

During the 54th session of the UN Human Rights Council (UNHRC), a panel discussed cyberbullying against children, examining the roles of states, the private sector, and stakeholders in addressing cyberbullying and empowering children in the digital sphere. Additionally, the council presented a summary report, from the 53rd session, on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression. The council also heard a report on the impact of new technologies intended for climate protection.

The WTO Public Forum 2023 focussed on the role of trade in fostering an eco-friendly future, including the theme ‘Digitalisation as a tool for the greening of supply chains’. Over 20 sessions delved into digital tools and their impacts.

The 8th session of the WIPO Conversation delved into generative AI and IP. Over two days, 6 panels covered generative AI’s use cases, regulatory landscape, ethical concerns regarding training data, authorship, ownership of creative work, and strategies for navigating IP in generative AI.


Digital at UNGA78

The General Debate of the UN General Assembly (UNGA) serves as a global platform where world leaders come together to address some of the most pressing issues confronting humanity. One of these critical topics is the impact of digital technologies. 

During the 2023 UNGA General Debate, 94 speakers, including the Secretary-General of the UN, and representatives of the Holy See and the EU, delved into digital themes. 

This number (94) represents a significant increase compared to our first analysis in 2017, when 47 countries spoke on digital topics. Fast forward six years, and this number has doubled to 94. This sharp rise underscores the growing recognition of the paramount importance of digital technologies at the highest levels of diplomatic discourse.

In the broader context, discussions related to digital technology accounted for 2.51% of all the text corpus produced during 2023 UNGA’s speeches.
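How might a figure like 2.51% be computed? The details of the underlying text analysis are not spelled out here, so the snippet below is a rough illustration only (not DiploAI’s actual methodology): one simple way to approximate such a share is to split each statement into sentences, flag sentences that mention digital-related terms, and divide the flagged word count by the total word count. The folder name unga78_speeches and the keyword list are hypothetical placeholders.

```python
# Illustrative sketch only: not DiploAI's actual methodology.
# Assumes one plain-text transcript per national statement in a folder
# named 'unga78_speeches' (hypothetical); the keyword list is also illustrative.

import re
from pathlib import Path

# Word-boundary matching avoids counting e.g. 'said' as a hit for 'AI'.
KEYWORD_RE = re.compile(
    r"\b(digital|internet|cyber\w*|artificial intelligence|AI|data|technolog\w+|connectivity)\b",
    re.IGNORECASE,
)


def digital_share(corpus_dir: str) -> float:
    """Percentage of words that sit in sentences mentioning a digital-related term."""
    total_words = 0
    flagged_words = 0
    for path in Path(corpus_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        # Naive sentence split; a real pipeline would use a proper tokeniser.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            words = sentence.split()
            total_words += len(words)
            if KEYWORD_RE.search(sentence):
                flagged_words += len(words)
    return 100 * flagged_words / total_words if total_words else 0.0


if __name__ == "__main__":
    print(f"Digital share of the corpus: {digital_share('unga78_speeches'):.2f}%")
```

A keyword heuristic like this both over- and under-counts (the word ‘data’, for instance, appears in many non-digital contexts), so a real analysis would rely on more careful topic classification and human review.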

Bar graph shows the overall number of speakers mentioning digital issues from 2017 (47), 2018 (63), 2019 (84), 2020 (76), 2021 (83), 2022 (92), and 2023 (94).

The General Debate in 2023 saw a substantial surge in mentions of AI in national statements. Out of the 467,130 words spoken during the debate, 6,279 were about AI, solidifying its position as the most frequently discussed digital topic. This surge in interest can be attributed, in part, to the widespread attention garnered by the launch of ChatGPT.

AI featured prominently in 39 speeches during UNGA 78, reflecting its growing significance. However, leaders also explored other digital-related subjects, including digital development (44), cybersecurity (23), content policy (7), economic considerations (4), and human rights (6).

AI. The rapid evolution of AI prompted concerns about its potential risks, from job displacement to cyber threats. While some speakers highlighted AI’s transformative potential in healthcare and education, many emphasised the need for ethical governance and international cooperation. There was consensus on the urgency of regulating AI, addressing its military applications, and establishing global norms. The role of the UN in facilitating these discussions and promoting responsible AI use was a recurring theme, with calls for a Global Digital Compact and the creation of an international AI agency.

Digital development. Leaders emphasised the need to bridge the digital divide, reduce inequalities, and ensure inclusive digital development. Many nations advocated for international cooperation through initiatives like the Global Digital Compact to address these challenges collectively. The importance of digital technologies in achieving sustainable development goals and fostering global solidarity was a common theme among leaders.

Cybersecurity. The evolving landscape of non-traditional security threats, focusing on cybersecurity and cybercrime, was discussed. Leaders emphasised the need for international cooperation and governance frameworks to address cross-border cyber threats, protect critical infrastructure, and combat cybercrime.

Content policy. Leaders addressed the concerning spread of disinformation and fake news amplified by AI and social media platforms. They highlighted the threats posed to democracy, and an increase in real-world violence and conflict caused by online hate speech and misinformation. Efforts to combat disinformation included proposals for a digital bill of rights and a code of conduct for information integrity on digital platforms.

Economic. The importance of embracing digital technology and fostering innovation to enhance economies was emphasised. Efforts to reduce trade barriers, seek free trade agreements, and transition into digital and green economies were highlighted.

Human rights. Leaders expressed concerns about online surveillance, data harvesting, and human rights abuses. They called for human-centred and human-rights-based approaches to the development and deployment of technology.

Digital image of an AI brain with circuits and wiring.

Should we let AI hallucinate?

This year, Diplo’s human experts were joined by DiploAI in analysing speeches. They distilled key points and spotted patterns in speeches, including instances where AI hallucinated – created false information or distorted reality. Diplo’s Jovan Kurbalija suggests we just might want to let it do this in his newest blog post Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems?

Global map highlights countries that addressed digital topics at UNGA78.

The EU’s Digital and AI Vision in 2023: Von der Leyen’s Address

In the 2023 State of the Union Address, European Commission President Ursula von der Leyen laid out her vision for the digital future of Europe, with a particular emphasis on the role of AI. The speech highlighted Europe’s achievements in the digital realm and the steps being taken to address the challenges and opportunities presented by AI and digital technologies.


Von der Leyen delivering her address. Credit: European Commission.

Europe’s Investment in digital transformation

President von der Leyen began by acknowledging the importance of digital technology in simplifying both business and everyday life. She pointed out that Europe had exceeded its investment target in digital projects under NextGenerationEU, with member states using this funding to digitise key sectors such as healthcare, justice, and transportation.

Managing digital risks and protecting fundamental rights

However, the president also acknowledged the challenges posed by the digital world, including disinformation, harmful content, and privacy risks. She stressed that these issues eroded trust and violated fundamental rights. To counter these threats, Europe has taken the lead in safeguarding citizens’ rights through legislative frameworks like the DSA and the DMA, which aim to create a safer digital space and hold tech giants accountable.

The role of AI

President von der Leyen highlighted the potential of AI to revolutionise healthcare, increase productivity, and address climate change. But she also warned against underestimating the real threats posed by AI. Citing the concerns of leading AI developers and experts, she emphasised the importance of mitigating AI-related risks on a global scale.

Three Pillars for a Responsible AI Framework

The president outlined three key pillars for Europe’s leadership in shaping a global AI framework: Guardrails, governance, and guiding innovation.

  1. Guardrails: Ensuring AI development remains human-centric, transparent, and responsible. The AI Act, a comprehensive pro-innovation AI law, was presented as a blueprint for the world. The focus now is on adopting the rules promptly and moving towards implementation.
  2. Governance: Establishing a single governance system in Europe and collaborating with international partners to create a global panel similar to the Intergovernmental Panel on Climate Change (IPCC) for AI. This body would provide insights into the impact of AI on society and ensure coordinated global responses.
  3. Guiding Innovation: Leveraging Europe’s leadership in supercomputing by opening up high-performance computers to AI start-ups for training their models. Additionally, fostering an open dialogue with AI developers and companies, akin to the voluntary safety, security, and trust rules adopted by major tech companies in the USA, is a crucial step.

Ad Hoc Committee on Cybercrime: Key takeaways from the 6th session

The 6th session of the UN Ad Hoc Committee on Cybercrime finished its work, yet many issues remain open. With the final round scheduled for February 2024, states have not yet agreed on whether to use the term cybercrime or ICTs for malicious purposes in the convention.

The latest draft (updated on 1 September 2023) had states debating over the scope of the convention, with China and Russia expressing their concern that the evolving landscape of information and communication technologies (ICTs) has not been adequately addressed. Regarding the criminalisation of offences, Russia emphasised the need to criminalise the use of ICTs for extremist and terrorist purposes and, together with Namibia and Malaysia, among other countries, supported including digital assets in provisions on the laundering of proceeds of crime. At the same time, countries including the UK and Australia opposed their inclusion, claiming that it does not fall within the convention’s scope.

Human rights provisions have raised concerns not only among the states but stakeholders as well. Namely, Microsoft stated that the current provisions enshrined under the latest draft could be ‘disastrous for human rights’. With regards to data protection measures, South Africa, the USA, and Russia proposed the collection of traffic data and interception of content data. At the same time, Singapore and Switzerland opposed this proposal, with the EU stressing that such measures threaten human rights and fundamental freedoms. 

Negotiations on international cooperation also faced challenges, with Russia highlighting the importance of distinguishing between the location of data custodians and the places where data processing, storage, and transmission occur, especially in cloud computing. Additionally, countries including Pakistan, Iran, China, and Mauritania proposed the inclusion of Article 47 bis on cooperation between national authorities and service providers. Essentially, the cooperation should include reporting cybercrime offences as established under the convention, sharing expertise, training, preserving electronic evidence, and ensuring the confidentiality of requests received from law enforcement authorities. 

An interesting proposal came from Costa Rica and Paraguay, which suggested including the word ‘sustainability’ in Articles 52 and 56 to ensure effective assistance and to address cybercrime’s societal impact.

So the questions remain: Have states agreed on the provisions? No. Will states hold a final round of negotiations in February 2024? Yes. What will happen if there is no consensus? Once the Bureau confirms that all efforts to reach consensus have been exhausted, decisions will be taken by a two-thirds majority of the representatives present and voting.
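
For readers unfamiliar with that threshold, here is a minimal sketch (our illustration, not part of any committee document) of how a two-thirds majority of the representatives present and voting works out in practice, assuming abstentions do not count as votes cast:

```python
import math

def passes_two_thirds(yes: int, no: int) -> bool:
    """Two-thirds majority of representatives 'present and voting'.
    Assumption: abstentions are not counted as votes cast."""
    votes_cast = yes + no
    return votes_cast > 0 and yes >= math.ceil(2 * votes_cast / 3)

# Example: with 120 votes cast, at least 80 'yes' votes are needed.
assert passes_two_thirds(80, 40) is True
assert passes_two_thirds(79, 41) is False
```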

Flag of the United Nations

Upcoming: IGF 2023

The 2023 edition of the Internet Governance Forum (IGF) will be held in Kyoto, Japan, 8–12 October, under the theme ‘The internet we want – empowering all people’. 

The programme is developed around eight sub-themes: 

  • AI and emerging technologies 
  • Avoiding internet fragmentation 
  • Cybersecurity, cybercrime and online safety 
  • Data governance and trust 
  • Digital divides and inclusion 
  • Global digital governance and cooperation 
  • Human rights and freedoms
  • Sustainability and environment

The forum will feature approximately 300 sessions, with a plethora of formats, including high-level sessions, main sessions, workshops, open forums, town halls, lightning talks, launches and awards, networking sessions, day 0 events, dynamic coalition sessions, and national and regional initiatives (NRIs) sessions. 

Additionally, the IGF village, where 76 exhibitors will showcase their work, will be open for visitors. 

Stay up-to-date with GIP reporting!
The Geneva Internet Platform will be actively involved in IGF 2023 by providing reports from IGF sessions for the 9th year in a row. This year, our human experts will be joined by DiploAI, which will generate reports from all IGF sessions.

We’ll also publish IGF daily reports throughout the week, and a final report will be published after the IGF.

Bookmark our dedicated IGF 2023 page on the Digital Watch Observatory or download the app to follow the reports. Subscribe to receive daily newsletters.

Banner highlighting the hybrid reporting by experts and Diplo AI at the IGF 2023

If you are attending the IGF in Kyoto, drop by our Diplo and GIP booth. If you’re joining the meeting online, check out our space in the virtual village.


DW Weekly #130 – 2 October 2023


Dear all,

It’s back to AI regulation today – from a US executive order in sight to new AI voluntary rules and industry pleas for regulation. In other news, net neutrality is making a comeback in the USA, with the communications commission’s proposal to restore the 2015 rules that were repealed in 2017. Amazon has been sued for antitrust violations in the USA, but managed to obtain a temporary suspension of the EU’s Digital Services Act obligations (we’ll report on this once the court delivers its judgement).

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Biden confirms AI executive order is imminent

US President Joe Biden’s executive order on AI will be issued in the coming weeks, he confirmed during a meeting of the President’s Council of Advisors on Science and Technology in San Francisco last week.

The first time we heard about this was in July, when Biden spoke of plans for new rules right after a meeting with AI companies. At the time, the president said an executive order was coming in summer, which was obviously delayed. It’s coming this autumn, he now said. But more than that, last week’s remarks offer new clues on what to expect.

Leveraging AI’s potential. Biden, a self-proclaimed AI enthusiast, said it would be a major failure if future generations looked back at our time and thought that we had the potential tools to explore and significantly increase our ability to help, and ‘we somehow messed it up’. One of the main focuses, therefore, is on harnessing the potential of AI for diverse areas such as research, healthcare, science, education, and more.

Protecting people from profound risk. Without clear rules of the road, Biden is wary that AI innovation could create serious dangers, hence the executive order’s strong emphasis on risk. But the fact that the leading 10 to 12 companies are developing AI tools with vast differences in their potential and risks creates a significant challenge for legislators: Which measures can tackle threats across the board in a way that doesn’t water down any regulatory action?

A good starting point. The AI Bill of Rights and other voluntary commitments already focus on safety, security, and trust, so the executive order will most likely build on them. In his address, Biden specifically referenced three voluntary measures companies are following, making it likely that these would be repeated in his executive action. The first would oblige companies to make sure their AI is watertight before it is released to the public. The second is the need for independent product testing: The White House has already supported red-teaming at DEF CON, confirming this preference. The third is watermarking, which would address the ubiquitous issue of disinformation.
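
As a purely illustrative aside (neither the companies’ voluntary commitments nor the forthcoming executive order prescribes any particular technique, and production systems do not work this way), here is a toy sketch of the basic watermarking idea: embedding an invisible marker in generated text so it can later be detected. Real provenance schemes, such as statistical token watermarks or cryptographically signed metadata, are far more robust.

```python
# Toy illustration only: a naive text watermark based on zero-width characters.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed(text: str, tag: str = "AI") -> str:
    """Append the tag to the text as an invisible bit pattern."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def detect(text: str, tag: str = "AI") -> bool:
    """Check whether the invisible tag is embedded in the text."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    if not bits or len(bits) % 8:
        return False
    decoded = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return decoded == tag.encode()

print(detect(embed("An AI-generated caption")))       # True
print(detect("An ordinary, human-written caption"))   # False
```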

The path to bipartisan legislation. Biden also said his administration will ‘continue to work with bipartisan legislation’. Though the intention is there, a serious challenge here is time: Legislators are still working on fundamental questions, and the most we can expect is some narrow pieces of an AI regulatory regime in the current session of Congress. 

A close ally: The UK. On the international front, Biden said the USA would work with international partners, singling out the UK, probably because it’s one of the few countries that is friendly to US-based AI companies. The UK’s upcoming AI Summit in November will focus primarily on managing the risks of frontier AI, that is, ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’, according to the UK government’s pre-summit description released last week. If that sounds familiar, that’s precisely what OpenAI’s Sam Altman told US lawmakers recently.

A shift in focus. With AI companies calling for guardrails, legislators offering bipartisan support, and an executive order in sight, there’s a clear shift in focus in the USA. At home, the USA is shifting away from its traditional laissez-faire approach. The main question is, to what extent?  


Digital policy roundup (25 September–2 October)

// AI GOVERNANCE //

UNGA78: Concerns over AI risks and use of lethal autonomous systems

Our coverage of the first week of the UN General Assembly’s general debate showed clearly that countries were worried about AI risks. What emerged during the second week was along the same lines: There’s an urgent need for global cooperation on how to navigate AI challenges.

A particularly worrying issue raised on the last day was the use of lethal autonomous weapons systems (LAWS) in armed conflict. A few countries said that until an international legal framework is put in place, LAWS should be banned.

Why is it relevant? International negotiations have been going on for years without much progress. Different countries have varying positions on LAWS: Some advocate for a complete ban, while others argue for strict regulations and safeguards. With AI risks on world leaders’ minds, there might be a better chance for the debate to move forward and reach a compromise.

New proposals. In parallel, the USA plans to propose international norms for the responsible military use of AI at the UN’s First Committee meeting in October. Costa Rica also said it was working on proposing a joint resolution with Austria and Mexico on autonomous weapons systems.

More resources. Read who said what at UNGA 78, and what they said about AI.


Canada launches voluntary AI code of conduct

Canada launched a new voluntary code of conduct for companies developing generative AI. The code includes measures for accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. Signatories also commit to supporting the development of a responsible AI ecosystem in Canada and using AI to drive inclusive and sustainable growth while prioritising human rights, accessibility, and environmental sustainability.

Even though the code is voluntary and has been signed by OpenText, BlackBerry, and TELUS, among others, it has received some sharp criticism from Shopify CEO Tobi Lütke, who argued that Canada needs to focus on encouraging more innovation, not on regulating the sector.

Why is it relevant? Canada is following the recent trend of launching voluntary guidelines until legislation (in Canada’s case, the Artificial Intelligence and Data Act (AIDA), part of Bill C-27) is enacted.

Shopify CEO Tobi Lütke tweets: ‘Canadian government is announcing a code of conduct on AI today, another case of EFRAID. I won’t support it. We don’t need more referees in Canada. We need more builders. Let other countries regulate while we take the more courageous path and say “come build here”.’



// NET NEUTRALITY //

US FCC proposes restoration of net neutrality rules

The US Federal Communications Commission (FCC) is planning to restore the net neutrality rules that it introduced in 2015 (and which were repealed in 2017 under the previous FCC administration).

The announcement was made by FCC Chairperson Jessica Rosenworcel last week (watch or read), who said the FCC proposes to reclassify broadband under the so-called Title II of the US Communications Act. This would also reinstate the FCC’s authority to serve as a watchdog over the communications marketplace.

Why is it relevant? Net neutrality rules, which prevent ISPs from restricting or throttling internet access, have long been a major bone of contention. The 2015 Open Internet Order was a win for net neutrality proponents, but it didn’t last long. With a new majority of Democrat-appointed members on the commission, the current administration hopes to reinstate the original rules. The FCC will meet on 18 October: A vote in favour of this plan will kick-start the rulemaking process.

Screenshot of FCC Chairperson Rosenworcel’s remarks at the National Press Club links to the video at https://www.youtube.com/watch?v=E_kVkxQ5DCA

// ANTITRUST //

FTC sues Amazon over alleged anti-competitive practices

The US Federal Trade Commission (FTC) and 17 state attorneys general sued Amazon for alleged anti-competitive behaviour.

The claims. The FTC and the states say that Amazon abuses its monopoly power (in the online superstore market and the online marketplace services market) through tactics such as forcing sellers to use Amazon’s logistics services, often at inflated prices, and anti-discounting measures that punish sellers and deter other online retailers from offering prices lower than Amazon’s, keeping prices higher for products across the internet.

‘Amazon is now exploiting its monopoly power to enrich itself while raising prices and degrading service for the tens of millions of American families who shop on its platform and the hundreds of thousands of businesses that rely on Amazon to reach them’, said FTC chairperson Lina Khan.

Why is it relevant? First, it’s a sweeping lawsuit that goes beyond what Amazon faces in other cases. Second, judging by the FTC’s track record (it recently failed to block Microsoft’s acquisition of Activision, though, to be fair, it hasn’t given up), there’s a chance that Amazon could emerge largely unscathed. What’s more, Amazon has a compelling argument that it offers customers a huge selection of products and gives retailers access to a global market.

Case details: Federal Trade Commission et al v. Amazon.com Inc, US District Court for the Western District of Washington, 2:23-cv-01495


EU launches preliminary investigation into Nvidia’s possible AI chip market abuse

The European Commission has been informally collecting views on potentially abusive practices by Nvidia, which produces chips used for AI and gaming, Bloomberg revealed. This comes after France’s competition authority carried out an ‘unannounced inspection […] in the graphics cards sector’, which was revealed to involve Nvidia.

Nvidia, a major player in the sector for graphics processing units (GPUs), is the only trillion-dollar semiconductor firm in the world.

Why is it relevant? This marks the first antitrust probe linked to the hardware that underpins AI services. But it’s still premature to say what will happen next. The aim of early-stage investigations is for the European Commission to understand whether it needs to intervene with more formal procedures, so there’s no certainty that this will escalate.


The week ahead (2–9 October)

Ongoing till 31 October: The European Commission, together with other EU institutions, kicked off the annual European Cybersecurity Month campaign, aimed at raising cyber awareness amid increasing concern about online safety.

8–12 October: The annual Internet Governance Forum (IGF2023) takes place in Kyoto, Japan, and online starting next Sunday. As usual, we’ll be on the ground and actively engaged with reporting, analysis, and workshops. Sign up for daily newsletters and download our app to stay up-to-date with session reports and other updates.

Decorative banner for www.digwatch NEWS

#ReadingCorner

Tech Policy Press has created an online tracker for updates on Senator Chuck Schumer’s ongoing AI Insight Forum series of events, including who’s attending and what topics were discussed.

Originality.ai, a site that provides detection services for AI-generated content and plagiarism, has another online tracker: It lists the ongoing copyright and trademark lawsuits against OpenAI and ChatGPT.

The 6th session of the Ad Hoc Committee on Cybercrime left many questions unanswered. Two of the most critical issues of contention, related to terminology and the proposed convention’s scope, remained unresolved. With only one session to go – in February 2024 – the pressure’s on. Read our key takeaways from the penultimate session.

Caricature of people talking over each other and disagreeing.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation

AI and Digital @ UNGA 78


AI and Human Reporting from UN General Assembly 78

Last week, Diplo’s experts and AI system followed UN General Assembly 78 (UNGA 78). In this hybrid approach, we produced an overall summary of the UNGA General Debate and an in-depth analysis of inputs on AI and digital issues. Transcripts and analyses of national statements delivered at the UNGA are also available.


AI and Digital Issues @ UNGA 78

Compared to 2022, this year’s UNGA brought a sharp rise in AI coverage in national statements, reflecting growing public interest in AI, triggered mainly by the launch of ChatGPT.

In total, 94 speakers covered digital themes, including the Secretary-General of the UN, the Holy See, and the EU. Overall, discussions of digital technology made up 2.51% of the total corpus of UNGA 78 speeches.

AI was featured in 39 speeches made during UNGA 78. Leaders also explored topics such as digital development (44), cybersecurity (23), content policy (7), economic considerations (4), and human rights (6).  
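
To make these figures more concrete, here is a minimal sketch of how topic mentions and the digital share of a speech corpus could be estimated from plain-text transcripts. This is a simplified illustration, not DiploAI’s actual pipeline; the keyword lists and the transcript_paths variable are hypothetical.

```python
from collections import Counter

# Hypothetical keyword lists; the real hybrid AI-and-expert analysis is far richer.
TOPIC_KEYWORDS = {
    "AI": ["artificial intelligence"],
    "cybersecurity": ["cybersecurity", "cybercrime", "cyberattack"],
    "digital development": ["digital divide", "digital infrastructure", "connectivity"],
}

def topic_mentions(speeches):
    """Count how many speeches mention each topic at least once."""
    counts = Counter()
    for text in speeches:
        lowered = text.lower()
        for topic, keywords in TOPIC_KEYWORDS.items():
            if any(kw in lowered for kw in keywords):
                counts[topic] += 1
    return counts

def digital_share(speeches):
    """Rough share of the corpus (by word count) in sentences touching on digital topics."""
    digital_words = total_words = 0
    all_keywords = [kw for kws in TOPIC_KEYWORDS.values() for kw in kws]
    for text in speeches:
        for sentence in text.lower().split("."):
            words = sentence.split()
            total_words += len(words)
            if any(kw in sentence for kw in all_keywords):
                digital_words += len(words)
    return 100 * digital_words / total_words if total_words else 0.0

# Usage with hypothetical transcript files:
# speeches = [open(path, encoding="utf-8").read() for path in transcript_paths]
# print(topic_mentions(speeches), f"{digital_share(speeches):.2f}% digital-related")
```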


 

In addition to the gist of digital coverage in national statements, you can consult full texts of statements at the Digital Watch Observatory’s dedicated page.


AI and Expert Analysis

AI at UNGA 78: The topic on everyone’s lips
Focus on the transformative potential of AI and the urgent need for global cooperation in dealing with AI risks, including lethal autonomous weapons systems (LAWS). Read more.

Technologies at UNGA 78: Caution and optimism blended in discussions
Calls for responsible advancement and international cooperation to tackle tech-related challenges resonated throughout discussions. Read more.

Digital economy at UNGA 78: Growth and development
Discussion on digitalisation as a driver of economic growth and prosperity… Read more.

Cybersecurity at UNGA 78: Leaders addressed the evolving threat landscape
Focus on cyber threats amplified by AI and other new technologies… Read more.

Digital development at UNGA 78: Shaping the future through inclusion and capacity building
Highlighting the digital divide, inclusion, impact on climate change and supporting rights of future generations… Read more.

Content governance at UNGA 78: Misinformation, disinformation and hate speech
The global surge of misinformation, disinformation, and hate speech, often amplified by AI and social media, poses a dire threat to social stability, democracy, and overall well-being… Read more.

Human rights at UNGA 78: Calls for a human-centric digital future
Concerns over surveillance, calls for humanist traditions, and pleas for a human-centric tech approach resonated. Read more.

More on UNGA 78

Governing AI for Humanity: The role of UN Secretary-General’s Advisory Board on AI
The event on Governing AI for Humanity focused on the role of AI in accelerating progress towards the UN sustainable development goals (SDGs). Here is our AI-generated report. Read more.
Diplomatic and AI hallucinations: How can thinking outside the box help solve global problems? – Diplo
The article examines the use of AI “hallucinations” in diplomacy, showing how AI analysis of UN speeches can reveal unique insights. It argues that the unexpected outputs of AI could lead… Read more.


Digital on Day 6 of UNGA78


Digital on Day 6 of UNGA78: A digital revolution for development

Welcome to our daily coverage of the General Debate of the 78th UN General Assembly (UNGA). This summary provides a comprehensive overview of how digital issues were tackled during day six of discussions, on 26 September 2023. For real-time updates and in-depth reports on UNGA78, follow our live coverage on the Digital Watch Observatory’s dedicated page through DiploAI reports, written by our AI reporting tool. Stay tuned for the final summary and data analysis from the entire General Debate!


Development: Digital revolution to achieve SDGs

In his speech, India’s External Affairs Minister, S. Jaishankar, emphasised the transformative role of digital public infrastructure and the democratisation of technology as national objectives. Jaishankar highlighted the importance of digitally-enabled governance and delivery.

Secretary of Relations with States of the Holy See, Archbishop Paul Richard Gallagher, emphasised that alongside technological advancement, there should be a parallel commitment to safeguarding our common home. He advocated for the responsible use of new technologies to combat the global crisis of climate change, pollution, and biodiversity loss. Gallagher pointed to the injustice that those contributing the least to pollution, i.e. developing countries, often bear the brunt of climate change’s adverse effects. Hence, he stressed the urgency of taking action to protect the world we inhabit.

Tandi Dorji, Bhutan’s minister for foreign affairs, underscored countries’ willingness to engage constructively in preparing for the Summit of the Future. He also advocated for work towards the elaboration of a Global Digital Compact aimed at accelerating the implementation of the 2030 Agenda for Sustainable Development. A notable achievement highlighted by Bhutan was the enactment of the National Digital Identity Act, making it the first nation worldwide to establish a legal framework for Self-Sovereign Identity, which serves as a cornerstone for delivering digital services to its citizens.

Omar Hilale, chair of the delegation of Morocco, emphasised the necessity for international solidarity and cooperation in scientific research, particularly in areas such as AI, healthcare, energy transformation, and disaster management. Morocco called for the promotion of resilient societies through equity and social justice, underlining the importance of a multilateral system centred around the UN.

Damiano Beleffi, chair of the delegation of San Marino, focused on the significance of digital education and highlighted San Marino’s support for the outcomes of the 2022 UN Transforming Education Summit. San Marino called upon member states to ensure the global spread of digitalisation, particularly in developing countries.

Stanley Kakubo, minister for foreign affairs of Zambia, drew attention to the potential of digital technology, particularly AI, to enhance citizens’ quality of life. He envisioned AI applications in healthcare and agriculture to bridge gaps. Zambia stressed the importance of forging alliances for technology development, sharing digital resources, and establishing regulations to promote social and economic development, and called for the responsible and ethical use of digital technologies to ensure information security and integrity. Zambia also urged support and investment in digital infrastructure and the provision of affordable devices and internet services, particularly in least developed countries.

Maldives’ Minister of State for Foreign Affairs, Ahmed Khaleel, provided an update on the country’s progress toward the 2030 Agenda, emphasising the pivotal role of physical and digital connectivity in achieving these goals. He noted that the country is undergoing a digital revolution with the proliferation of online education, telemedicine and e-payment systems, with the aim of bringing services closer to its citizens.


AI: Addressing ethical dilemmas

Many people are concerned about AI, noted Chair of the Delegation of Canada, Robert Rae, adding that Canadians are no exception. Minister for External Affairs of Cameroon, Lejeune Mbella Mbella, emphasised the need to confront this challenge, while Denis Ronaldo Moncada Colindres, minister for foreign affairs of Nicaragua, underscored the universal right for all people to benefit from the advancements in science and technology like AI, as technologies are fruits of human intelligence. Subrahmanyam Jaishankar, minister for external affairs of India, further highlighted that the New Delhi G20 outcomes prioritise issues related to the responsible harnessing of AI.

Secretary of Relations with States of the Holy See, Archbishop Paul Richard Gallagher, expressed the pressing need for serious ethical contemplation regarding the integration of supercomputer systems into daily life. Entrusting decisions about an individual’s life and future to algorithms is unacceptable, he stressed. This also applies to the development and use of lethal autonomous weapons systems (LAWS), Gallagher noted.

The use of LAWS in armed conflicts must align with international humanitarian law, Gallagher stated, and he advocated for negotiations on a legally binding instrument to govern their use. Until such negotiations are concluded, the Holy See called for a moratorium on their deployment. Gallagher underscored the importance of ensuring meaningful human oversight of weapon systems, citing the unique capability of human beings to assess their ethical implications and responsibilities.

In the pursuit of addressing these challenges, the Holy See extended support for the establishment of an International Organization for Artificial Intelligence. Its mission would be to facilitate the exchange of scientific and technological information for peaceful purposes, promoting the common good and integral human development.


Security: Enhancing security in the digital world

Jamaica recognises the threat posed to peace and security in the digital space, noted Kamina Johnson Smith, minister for foreign affairs and foreign trade of Jamaica. The country is actively working to enhance its domestic cybersecurity capabilities and is also involved in multilateral efforts to address cybersecurity issues. Additionally, she said Jamaica was honoured to lead the Caribbean Community’s (CARICOM) efforts towards the development of a UN Convention on Cybercrime.

Marc Hermanne Gninadoou Araba, chair of the delegation of Benin, acknowledged that addressing modern challenges, including cybersecurity, requires reforms supported by clear and resolute political determination.


Follow live updates from UNGA 78 in New York on our dedicated page, powered by DiploAI and experts!


New to Diplo and GIP?

Subscribe to our mailing lists, and follow us on all the socials!