DW Weekly #117 – 26 June 2023


Dear all,

Last week’s European Commission visit to San Francisco was more than a formality: it cemented a commitment by US tech giants to adhere to Europe’s new rules. In other news, new AI rulebooks and investigations are coming soon, while Google and Apple stores are undergoing antitrust reviews in India.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

Twitter, Meta ‘not dragging their feet’ on DSA compliance, says Breton after Silicon Valley meeting

If last week’s San Francisco meeting was aimed at reminding Big Tech that the Digital Services Act (DSA) clock is ticking, European Commissioner Thierry Breton achieved his goal: ‘They are not dragging their feet,’ he said this morning (26 June) on France Inter, referring to the 25 August cut-off date for implementing the complete set of DSA obligations. ‘I want to make that very clear. They have committed themselves.’ (Here’s the transcript in French, and a translation to English.)

Meta CEO Mark Zuckerberg, whose company owns Facebook and Instagram; Twitter chairman Elon Musk; and OpenAI CEO Sam Altman, whose startup is behind ChatGPT, were among those Breton met during the two-day trip, which also included the launch of the EU’s San Francisco office.

The DSA is about preserving innovation while protecting individual freedoms, in Breton’s words. But in truth, Big Tech wants to continue offering services in Europe – a digital market which is too large to ignore. After the 25 August deadline, non-compliance can result in hefty fines, which the EU will not hesitate to impose. Overall, Big Tech gets the big picture.

Twitter will comply with the DSA – Musk

Of the two giant social media companies, Meta was not Breton’s biggest worry. In fact, the commissioner was impressed with Mark Zuckerberg last week, saying the Meta CEO knew all the articles of the law and was enthusiastic about it.

Rather, it’s Twitter that was causing headaches: The company ditched the EU’s disinformation charter last month, leaving European leaders worried that Twitter wouldn’t be willing to comply with the rules. The fact that Twitter has committed to the obligations of the DSA is positive news for regulators.

We actually heard it from Elon Musk himself in a France 2 interview last week. In a somewhat humorous tone, he said it wasn’t the first time he had reiterated that Twitter will comply with the law and adhere to regulations. He added a cautious note, though: adhering to the law is the limit of Twitter’s intentions, as going beyond what is lawfully required would mean going ‘beyond the will of the people as expressed by the law’. Beyond the DSA, whether intentionally or not, Twitter is signalling that it values binding rules much more than voluntary ones – a sentiment that many companies do not share.

Of course, the real proof of Big Tech’s adherence to the DSA will come after the deadline. So what European regulators can do right now is to continue their stress tests to assess the readiness of the industry, which is what Commissioner Breton’s team did, in fact, prior to his Twitter HQ visit (Meta’s stress test is in July). The outcome hasn’t been disclosed, but Commissioner Breton’s upbeat remarks this morning are another indication that Twitter is on track to implement Europe’s new rules. 


Friends again
Breton’s conversation with OpenAI’s Sam Altman was about the EU’s upcoming AI Act, and the AI pact – a set of voluntary guidelines which Breton devised to help companies prepare for implementing the AI Act. What stood out wasn’t what the two said during the meeting, but what they tweeted right after. The two have come a long way since their recent misunderstanding.


Digital policy roundup (19–26 June)
// AI GOVERNANCE //

US lawmaker releases AI framework

US Senate Majority Leader Chuck Schumer, who in April announced the need for AI rules, has now released his SAFE Innovation Framework.

He has also announced a series of AI Insight Forums, starting in September, which will serve as building blocks for new US AI policy. The experts convened at the forums will be part of what Schumer describes as ‘a new and unique approach to developing AI legislation’.

Why is it relevant? First, the one-pager mentions China (twice) as a cause for concern. The lawmaker thinks the Chinese Communist Party may be able to set AI standards and write the rules of the road for AI ahead of anyone else. Interestingly, there’s no mention of the EU, whose proposed AI Act is moving ahead quickly. 

Second, Schumer thinks it will take (only) months for Congress to pass AI legislation. It’s ‘exceedingly ambitious’, to quote Schumer himself.

Consumer groups call for more ChatGPT investigations

Consumer groups across 13 European countries are urging their national authorities to investigate the risks posed by generative AI such as ChatGPT. They’re also asking them to enforce existing laws to safeguard consumers. 

The statement, timed to coincide with the publication of a Norwegian consumer group’s report on the consumer harms of generative AI, says the new technology carries many risks, including privacy and security issues, and outputs that can be inaccurate, manipulative, or misleading. The organisations also say that consumer groups in both the USA and the EU are writing to US President Joe Biden on behalf of the Trans-Atlantic Consumer Dialogue (TACD) on this issue.

Why is it relevant? The call adds more pressure on regulators, especially data protection authorities, to investigate OpenAI, the company behind ChatGPT. Several investigations have already been launched, and the list continues to grow.

New AI guidebook in the making in the ASEAN region

ASEAN countries are planning a regional AI guide that will tackle governance and ethics, Reuters has reported exclusively. The agreement was made in February, but the development became known only a few days ago.

Discussions are in their early stages; the guidebook is expected to be released at the end of this year, or early next year.

Why is it relevant? More countries and regions are developing AI rules, which means that unless there’s a concerted effort to build on each other’s work, the world will end up with unharmonised – albeit broadly similar – rules at best. At worst? A patchwork of rules built on a conflicting set of values and priorities.


// SEMICONDUCTORS //

Intel invests in chip fabs in Germany; EU woos Nvidia

Intel has expanded its investment to build two new semiconductor facilities, known as fabs, in Germany. The company will invest EUR 30 billion (USD 32.8 billion), and will receive subsidies worth nearly EUR 10 billion (USD 10.9 billion) from Germany. German Chancellor Olaf Scholz hailed the new agreement as the country’s biggest-ever foreign investment. 

Intel is unlikely to experience a shortage of skills: Around 20,000 technology students live in Magdeburg, where the fabs will be built. The company expects the first plant to start operating four to five years after the European Commission approves the subsidies.

Across the pond, European Commissioner Thierry Breton took the opportunity of his San Francisco trip to visit Nvidia CEO Jensen Huang. The CEO said Breton encouraged him to invest ‘a great deal more’ in Europe, which is going to be a ‘wonderful place to build a future for Nvidia’.

Why is it relevant? Both the USA and Europe are vying to attract semiconductor companies to their shores. But right now, it’s Europe that is beckoning.


Photo: European Commissioner Thierry Breton with Nvidia CEO Jensen Huang (Jen-Hsun Huang).



// ANTITRUST //

Google asks Indian court to quash regulator’s antitrust ruling 

This morning (26 June), Google requested India’s Supreme Court to quash antitrust directives which the country’s competition regulator imposed on the company for allegedly exploiting its dominant position in India’s Android mobile operating system market. 

In March, a tribunal modified four of the ten directives imposed by the Competition Commission of India (CCI) in October, which allowed the company to sustain its current business model. Google is now asking that the remaining directives be stopped and that the court revoke the regulator’s earlier antitrust ruling.

Why is it relevant? The tug of war has already been partially won by the tech giant. The rest could play out in one of two ways: If the court confirms the March ruling, it’s status quo for the company; if the court rules that Google did not abuse its position, it’s a significant win for Google, which could influence other cases involving other tech giants.

Indian competition regulator set to rule on Apple’s app store policies 

The CCI is set to rule soon on Apple’s app store billing and policies. The regulator launched its investigation in 2021, but the process stalled after the commission’s chairman retired in October 2022.

Why is it relevant? On the one hand, the case is similar to Google’s, which could prompt the regulator to go down the same path. On the other, the regulator’s ruling in Google’s case was revised on appeal and is now the subject of another lawsuit, which may influence the regulator’s final decision.


// E-VOTING //

Switzerland positive after e-voting trial

Swiss voters are casting a favourable light on a recent e-voting trial, which saw participation rates higher than the national average for Swiss voters abroad as a whole. The e-voting software, developed by Swiss Post, was reviewed after the flaws reported in 2019, and approved for trial in three cantons by the Federal Chancellery in March.

Why is it relevant? Despite warnings from some Swiss parliamentarians, the outcome of this trial could open the door for Swiss voters living abroad to use the e-voting system in parallel with traditional mail-in ballots. It could also encourage countries where e-voting has either been abandoned (such as recently in Latvia) or never been explored. (For reference, here’s where the world stands on e-voting right now.)


The week ahead (26 June–3 July)

19 June–14 July: The four-week 53rd session of the Human Rights Council is ongoing in Geneva and online. What to watch for:

  • 3 July: A panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression (HRC res. 50/15)
  • 6 July: A discussion on the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights (A/HRC/53/42)

Dates may change. Consult the agenda and the latest programme of work. Refer also to the Universal Rights Group’s The Inside Track covering HRC53.

29 June: The EU’s new Regulation on Markets in Crypto-assets, also known as the MiCA regulation, enters into force today (and will start applying from December 2024). It will regulate crypto-asset issuers and service providers at the EU level for the first time.

1 July: Spain takes up the presidency of the EU Council; a new trio of rotating presidencies (Spain-Belgium-Hungary) runs from today until the end of 2024. The next European Parliament elections will be held during Belgium’s presidency.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner

An atlas on SDGs

Where do countries stand in their goals to achieve the 2030 Agenda? The World Bank’s 2023 Atlas of Sustainable Development Goals tells us how far countries have come – and what more needs to be done. It draws from the World Bank’s database of indicators and multiple other sources. 

SDG practitioners will be happy to learn that the visualisations, together with all the data and code, can be downloaded and used for similar purposes.
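If you want to explore the underlying numbers yourself, here’s a minimal sketch (in Python) of pulling a single SDG-related indicator from the World Bank’s public Indicators API at api.worldbank.org/v2, one of the sources behind datasets like the Atlas. The indicator code and parameters below are illustrative examples, not a prescription from the report:

# Minimal sketch: fetch one SDG-related indicator from the World Bank
# Indicators API. EG.ELC.ACCS.ZS is access to electricity (% of
# population), an indicator tied to SDG 7 (affordable and clean energy).
import requests

BASE = "https://api.worldbank.org/v2/country/{country}/indicator/{indicator}"

def fetch_indicator(country="WLD", indicator="EG.ELC.ACCS.ZS", years="2000:2022"):
    """Return a {year: value} dict for one country (or aggregate) and indicator."""
    url = BASE.format(country=country, indicator=indicator)
    resp = requests.get(url, params={"format": "json", "date": years, "per_page": 200})
    resp.raise_for_status()
    metadata, rows = resp.json()  # the API returns a [metadata, data] pair
    return {row["date"]: row["value"] for row in rows if row["value"] is not None}

if __name__ == "__main__":
    series = fetch_indicator()  # world aggregate, 2000–2022
    for year in sorted(series):
        print(year, round(series[year], 1))

The same pattern works for any indicator code in the World Bank catalogue.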


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation

Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

DW Weekly #116 – 19 June 2023


Dear all,

Antitrust regulators in the EU and USA have Big Tech firms in their crosshairs. The EU wants to curb Google’s dominance in the adtech market, while the US Federal Trade Commission has managed to halt – so far temporarily – Microsoft’s plans to acquire Activision Blizzard. Meanwhile, trilogues on the proposed AI Act have begun in Europe.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

EU takes aim at Google:
Company could be forced to sell off adtech arm

The European Commission is quite unhappy with Google’s role in the adtech business. The company, at the centre of an ongoing antitrust investigation, is dominant on both sides of the market (see the explainer below), making it a prime target for the Commission to order tough remedies if suspicions of abusive patterns are confirmed. 

Not just tough, but tougher. When the European Commission announced last week that it couldn’t see any other solution, we knew it was quite serious for Google. 

What stood out in last week’s press conference was that European Commissioner for Competition Margrethe Vestager described Google’s dominance as pervasive, singling it out as the company with the most ubiquitous presence across markets. In practice, this means it’s not just a matter of one company being dominant in a particular market, but of one company’s dominance extending across several markets. It’s a situation that will test the Commission’s resolve in correcting market distortions, and will determine whether existing rules fit this situation.

It also appears that every time the Commission flagged undesired behaviour, Google swiftly tweaked its actions in subtle ways to comply with the letter of the law, while still accomplishing the same results. The creature stayed the same; only its spots changed. This certainly does not help its cause. As Vestager said, the ‘learning curve is somewhat flat’ for dominant companies: Big Tech has failed to recognise that it’s acceptable to lead a market, but unacceptable to exploit it.

How it started. The EU investigation into Google’s behaviour in the adtech business started a few years ago. Since Google is dominant across the whole value chain, it can be pretty tough to detect specific behaviours. So throughout its preliminary investigation, the Commission roped in other competition authorities to investigate practices in arguably one of the most complex technical markets out there. That includes cooperating with the US Department of Justice, itself investigating Google on multiple counts of alleged antitrust violations.

There’s no other solution. The Commission has touted the notion of divestiture before in other cases, but there were less intrusive measures available. It now appears that when it comes to the adtech market, the market distortions from Google’s dominance in a two-sided market can’t be solved in any other way: You just can’t have ownership of the entire value chain. 

How this could affect Google. There’s no other way to describe it: A request for the company to sell off its adtech arm would be a strong blow to the company. It would mean a major shake-up of the company’s digital advertising empire, and, potentially, a significant restructuring of Google’s business model. 

The formal investigation, launched last week, will determine whether Google violated EU antitrust rules. But the million-dollar question is: Is it a matter of arguing that all other alternatives have been exhausted, or that no other alternative can adequately address this market distortion? 

If the Commission believes that the time for behavioural remedies is up, Google’s only solution is to persuade the Commission that the situation can be remedied by alternative methods. But if the Commission thinks that Google’s dominance in a two-sided market can’t be fixed in any other way, the Commission has a complex task ahead to justify an adtech break-up.

How does adtech work? 

Behind the scenes, complex algorithms race to decide which ad to display to each of us as we browse. There are three parts to this process:

First, advertisers want to place their ads in the hope that they attract our attention. 

Second, publishers want to sell online space (think of it as digital real estate) to display those ads. 

Third, an intermediary applies their (theoretically unbiased) algorithms to determine the best match between each user and the ads they might be interested in – all of this taking place in real-time.

Google offers services in all these three parts of the process. It’s on both the buy-side and the sell-side of a two-sided market, and it’s also in the middle, where the buyer and seller meet. The problem is that in acting as intermediary, it appears to be abusing its dominant position by favouring its own services over those of competing intermediaries.
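To make these three roles concrete, here’s a minimal, illustrative Python sketch of a simplified ad exchange auction. It’s a toy model only – real-time bidding systems are far more elaborate – and every name and number below is invented for illustration:

# Toy model of the three-sided process described above: advertisers bid
# for a publisher's ad slot, and an intermediary (the ad exchange)
# matches the winning ad to the user in real time.
from dataclasses import dataclass

@dataclass
class Bid:
    advertiser: str   # who wants to show an ad
    amount: float     # what they offer for this single impression

def run_auction(bids):
    """Second-price auction, a common textbook model for ad exchanges:
    the highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner, price

# One impression becomes available as a user loads a publisher's page.
bids = [Bid("shoe_brand", 2.10), Bid("car_maker", 1.75), Bid("airline", 1.40)]
winner, price = run_auction(bids)
print(f"{winner.advertiser} wins the impression and pays {price:.2f}")

The Commission’s concern, in these terms, is that the party running the auction also competes in it on both sides, and can tilt the match in favour of its own services.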

Alberto Bacchiega, director of Digital Platforms at the Commission’s Directorate-General for Competition, explains it well in the video ‘Statement of Objections to Google Over Abusive Practices in Online Adtech’.


Digital policy roundup (13–19 June)
// AI GOVERNANCE //

EU Parliament advances AI Act, trilogues start

All eyes were on the European Parliament last week in anticipation of the lawmakers’ crucial vote on the EU’s proposed AI Act. Receiving a resounding 499 votes in favour, with 28 against, and 93 abstentions, the Parliament’s text made it through plenary and will now be used as a foundation for negotiations with the EU Council.

The trilogues – tripartite negotiations among the EU’s lawmaking institutions – have already started.

Why is it relevant? For a second, we thought there might be hiccups. But there was too much at stake for European lawmakers to jeopardise the process. So there’s one less hurdle for the AI Act to reach its destination: the entry into force of new rules that regulate AI systems according to their risk level. Yet, it won’t be smooth sailing: biometric-related risks will be a major bone of contention.

Here’s where we are now: Parliament has approved its draft text in plenary, advancing the AI Act to the trilogue talks (the dotted red line). The EU Council’s negotiating text was approved in December. Source: Based on a diagram from artificialintelligenceact.eu

UN Secretary-General calls on the international community to act now on digital technologies

The recent warnings on AI’s threat to humanity did not fall on deaf ears. Last week, UN Secretary-General António Guterres urged governments to heed these warnings. ‘Alarm bells over the latest form of artificial intelligence – generative AI – are deafening. And they are loudest from the developers who designed it. These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war. We must take those warnings seriously.’




// ANTITRUST //

US judge temporarily blocks Microsoft’s Activision acquisition

It’s not just Google having antitrust issues of late. Microsoft’s takeover of video game maker Activision Blizzard has suffered a setback after a US judge granted the Federal Trade Commission’s (FTC) request to temporarily block the acquisition. The judge has scheduled a two-day evidentiary hearing for this week on the FTC’s request for a preliminary injunction.  

The FTC is arguing that the transaction would give Microsoft’s Xbox video game console exclusive access to Activision games, leaving Nintendo and Sony Group’s PlayStation out in the cold.

Why is it relevant? First, without a court order, Microsoft could have closed the USD 69 billion (EUR 63 billion) deal as early as last week. The company will now have to wait for this week’s hearing. Second, the merger faces a legal battle in August, when an FTC judge is set to hear the case.

The FTC’s actions stand in contrast with the European Commission’s approval of the merger in May. No doubt the UK’s Competition and Markets Authority, which denied approval of the deal citing concerns over the potential impact on competition in the video game industry, is watching closely.


// CYBERSECURITY //

Amid soaring tensions, US official warns of cyber sabotage from China

Chinese hackers are all but sure to disrupt US critical infrastructures, such as pipelines and railways, in the event of a conflict with the USA, a senior US cybersecurity official has warned. Tensions between the two countries have soared. 

The Director of the Cybersecurity and Infrastructure Security Agency, Jen Easterly, emphasised during a recent event that China is investing significantly in its capability to sabotage US infrastructure. Given that Chinese state-sponsored hackers could compromise current security measures (witness the recent cyberattacks by the Chinese state-sponsored hacking group Volt Typhoon), Easterly warned that prioritising resilience and strengthening defences is paramount to being prepared.


// DATA PROTECTION //

Swedish data protection watchdog fines Spotify over GDPR breaches 

The Swedish Data Protection Authority has imposed a EUR 5 million (USD 5.5 million) fine on digital music service company Spotify for breaching several GDPR provisions a few years ago. According to the authority, Spotify failed to provide clear information to users regarding the purposes of its data processing, the categories of personal data involved, and data storage periods, among other infringements.

As with many European legal actions concerning data protection, the complaint against Spotify was filed by the Austrian non-profit NOYB, which also took the Swedish Data Protection Authority to court for initially refusing to investigate the case.

Why is it relevant? The ruling serves as a reminder for organisations operating in the EU to provide clear and transparent information about their data processing practices (and for data protection authorities to investigate every complaint).


The week ahead (19–26 June)

19–21 June: The 2023 edition of Europe’s foremost regional internet governance gathering – EuroDIG – is underway in Finland and online. The theme – Internet in troubled times: Risks, resilience, hope. We’ll use our latest AI tool, DiploGPT, to draft reports that reflect the discussions throughout the meeting. (By the way, here’s DiploGPT in action).

19 June–14 July: The four-week 53rd session of the Human Rights Council also starts today in Geneva and online. What to watch for:

  • 22 June: A discussion on the report Digital Innovation, Technologies, and the Right to Health (A/HRC/53/65)
  • 3 July: A panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression (HRC res. 50/15)
  • 6 July: A discussion on the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights (A/HRC/53/42)

Dates may change. Consult the agenda and the latest programme of work. Refer also to the Universal Rights Group’s The Inside Track covering HRC53.

20–21 June: The Ad Hoc Committee on Cybercrime, tasked with advancing a new cybercrime convention, is holding the fifth intersessional stakeholder consultation in Vienna and online.

21–22 June: The annual Cybersec Forum and Expo features discussions among policymakers and the industry on cybersecurity and resilience, across four streams: state, defence, business, and future technologies. Takes place in Poland, onsite only.

23 June: It’s the last day to propose a session for UNCTAD’s eWeek (known until recently as eCommerce Week). It will take place in December in Geneva and online.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner
 Nature, Night, Outdoors, Art, Graphics, Pattern, Astronomy, Moon

Do we trust algorithms to choose our news?

Not really. According to the Reuters Institute’s annual Digital News Report 2023, which surveyed around 94,000 adults across 46 markets, people are sceptical of algorithms selecting news based on what our friends have read or seen (19% agree, 42% disagree) and based on what we ourselves have read or seen in the past (30% agree, with equal numbers disagreeing).

How about the sources we choose to stay informed? Facebook, which was once dominant as a primary network for news, has been surpassed by rivals such as YouTube and TikTok. Read the full report.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 80 – June 2023

Trends

AI is the buzzword

In the spirit of the cover illustration of the gigantic stakes AI represents for the future of humanity, an essential question arises: Who holds the cards? Is it mere coincidence, the divine, or vested interests?

In May, AI was at the forefront of global discussions and media coverage, and featured on the agenda of meetings and parliamentary debates. Why all the hype?

First, very loud alarms have been raised that AI threatens the very survival of humanity.

Second, warnings about the risks inherent in AI’s existence are generally coupled with a call to regulate AI’s future development.

In an entirely new dynamic, companies want to be regulated. Sam Altman, CEO of OpenAI, underlined the crucial role of governments in regulating AI and advocated the creation of a governmental or global AI agency to oversee the technology. This regulatory body would require companies to obtain licences before developing powerful AI models or operating data centres that facilitate AI development. Developers would thus be bound by safety rules and by a consensus on standards and on the risks that must be kept under control.

In parallel, Microsoft published a comprehensive blueprint for AI governance. In its foreword, Microsoft President Brad Smith likewise argues for the creation of a new government agency to enforce the new AI rules.

Third, the governments of developed countries are favourable to the idea of regulating the future deployment of AI.

Fourth, a growing number of voices argue that regulation, propped up by talk of an existential threat, aims to block open-source AI development and to hand AI’s power to a small number of players, chiefly OpenAI/Microsoft and Google.

Whatever the motivation behind AI regulation, a few recurring themes echo through it: privacy violations, bias, the proliferation of scams, disinformation, and the protection of intellectual property, among others. However, not all regulators share the same views. Here is a sample of what jurisdictions around the world said in May 2023 about their willingness to regulate AI, and the approaches they propose.

The EU. The world’s first AI regulation is, unsurprisingly, being drawn up by the regulatory heavyweight that is the EU. It takes a risk-based approach to AI, setting obligations for AI providers and users according to the level of risk posed by AI systems. It also introduces a tiered procedure for regulating general-purpose AI, as well as foundation and generative AI models. The draft legislation must be approved by Parliament in plenary, expected during the 12–15 June session. Negotiations with the Council on the final shape of the law can then begin.

The USA. US government officials met the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI, and raised three essential points: the transparency, evaluation, and security of AI systems. The White House and leading AI developers will collaborate on evaluating generative AI systems to detect potential flaws and vulnerabilities, such as confabulations, privacy violations, and bias. The USA is also assessing AI’s impact on the workforce, education, and users, and the risks of biometric data misuse.

The UK. Another government that will collaborate with industry: Prime Minister Rishi Sunak met the CEOs of OpenAI, Google DeepMind, and Anthropic to discuss the risks AI can pose, such as disinformation, national security, and existential threats. The CEOs agreed to work closely with the UK’s Foundation Model Taskforce to advance AI safety. The UK is also focusing on AI-related electoral risks and on the impact of AI development on competition and consumer protection. The UK will apparently keep its sectoral approach to AI, with no blanket AI regulation planned.

China. The Cyberspace Administration of China (CAC) has voiced concerns about advanced technologies such as generative AI, noting that they could seriously challenge governance, regulation, and the labour market. The country has also called for improved AI security governance. In April, the CAC proposed measures for regulating generative AI services, which specify that providers of such services must ensure their content reflects China’s core values. Prohibited content includes discrimination, false information, and the infringement of intellectual property rights (IPR). Tools used in generative AI services must undergo a security assessment before launch. The measures were open for comment until 2 June, which means we will soon see the outcome.

Australia. It is worried about AI-related risks such as deepfakes, disinformation, incitement to self-harm, and algorithmic abuse. The country is currently consulting on whether to support the development of responsible AI through voluntary approaches, such as tools, frameworks, and principles, or through enforceable regulatory approaches, such as laws and mandatory standards.

South Korea. The country’s AI law is just a stone’s throw from a final vote in the National Assembly. It will allow AI development without prior government approval, classify high-risk AI and set trustworthiness standards, support innovation in the AI industry, establish ethical guidelines, and create a basic AI plan and an AI committee overseen by the Prime Minister. The government has also announced that it will create new guidelines and standards for the copyright of AI-generated content by September 2023.

Japan. The Japanese government aims to promote and strengthen national capabilities for developing generative AI while tackling AI-related risks such as copyright infringement, the disclosure of confidential information, false information, and cyberattacks, among others.

Italy. Italy temporarily banned ChatGPT over GDPR violations in March. ChatGPT is back in Italy after OpenAI revised its privacy notices and controls, but Garante, Italy’s data protection authority, is stepping up its scrutiny of AI systems to make sure they comply with privacy laws.

France. The Commission nationale de l’informatique et des libertés (CNIL) has launched an AI action plan to promote a framework for developing generative AI that respects personal data protection and human rights. The framework rests on four pillars: (a) understanding AI’s impact on fairness, transparency, data protection, bias, and security; (b) developing privacy-friendly AI through education and guidelines; (c) working with AI innovators to ensure data protection compliance; and (d) auditing and monitoring AI systems to safeguard individuals’ rights, notably with regard to surveillance, fraud, and complaints.

India. The government is considering a regulatory framework for AI-based platforms over concerns such as intellectual property rights, copyright, and algorithmic bias, but it is seeking to do so in collaboration with other countries.

International efforts. Ahead of the EU’s AI Act, the European Commission and Google plan to join forces ‘with all AI developers’ to draw up a voluntary AI pact. OpenAI’s Mr Altman is also due to meet EU officials about the pact.

The EU and the USA will jointly establish an AI code of conduct to bolster public trust in the technology. The voluntary code ‘would be open to all like-minded countries’, said US Secretary of State Antony Blinken.

In addition, the G7 agreed to launch a dialogue on generative AI – including on issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Ministers will examine AI under the ‘Hiroshima AI process’ and report on the results by the end of 2023. G7 leaders also called for the development and adoption of technical standards to ensure trustworthy AI.

So, who holds the cards? It is not yet clear. We hope that citizens will have the final say. Regulatory efforts aside, one way to ensure that individuals keep control of their knowledge, even when it is codified by AI, is bottom-up AI. This would mitigate the risk of the centralisation of power inherent in large generative AI platforms. Indeed, bottom-up AI is generally based on an open and transparent approach that can mitigate most of the safety and security risks of centralised AI platforms. Many initiatives, including Diplo’s AI development strategies, have shown that bottom-up AI is technically feasible and economically viable. There are many reasons to embrace bottom-up AI as a practical way of fostering a new societal operating system based on human centrality, dignity, free will, and the realisation of people’s creative potential.

Barometer

Digital policy developments that made headlines

The digital policy landscape changes daily. Here are the main developments from May. Each update is covered in more detail on the Digital Watch Observatory.

Increasing

Global digital governance architecture

World Telecommunication and Information Society Day was celebrated on 17 May. UN representatives called for bridging the digital divide, supporting digital public goods, and establishing a Global Digital Compact (GDC).

The fourth EU-US ministerial meeting of the Trade and Technology Council (TTC) covered AI risks, content regulation, digital identities, semiconductors, quantum technologies, and connectivity projects.


Neutral

Sustainable development

According to a GSMA report, closing the digital gender gap by 2030 would require an additional 100 million women to adopt mobile internet every year.

The European Commission and WHO launched a landmark digital health initiative to establish a comprehensive global network for digital health certification.

Papua New Guinea rolled out a digital ID management platform, while the Maldives introduced a digital ID mobile app to simplify access to government services. Evrotrust’s eID scheme became Bulgaria’s official digital identification system.


Increasing

Security

A Chinese report claims to have identified five techniques used by the CIA to launch colour revolutions abroad and nine techniques weaponised for cyberattacks.

The Five Eyes cyber agencies attributed cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon, which China denied. The FBI disrupted a Russian cyberespionage operation dubbed Snake. The governments of Colombia, Senegal, Italy, and the territorial collectivity of Martinique suffered cyberattacks.

The USA and South Korea issued a joint advisory warning that North Korea uses social engineering tactics in its cyberattacks.

NATO warned of a potential Russian threat to internet cables and gas pipelines in Europe and North America.


Neutral

Infrastructure

The Body of European Regulators for Electronic Communications (BEREC) and a majority of EU countries oppose the push by telecom providers for Big Tech companies to contribute to the cost of rolling out 5G and broadband in Europe.

Tanzania signed agreements to extend telecommunications services to 8.5 million people in rural areas.


Neutral

E-commerce and the internet economy

The European Commission approved Microsoft’s acquisition of Activision Blizzard on condition that Microsoft’s licences allow consumers to use any streaming service.


Increasing

Digital rights

South Korea proposed amendments to its Personal Information Protection Act to strengthen consent requirements, unify standards for processing data online and offline, and establish criteria for assessing violations.

The 2023 World Press Freedom Index reveals that journalism is threatened by the fake content industry and the rapid development of AI.

Internet disruptions were reported in Pakistan following the arrest of the former prime minister, and in Sudan amid protests over the sentencing of an opposition leader, while social media was restricted in Guinea following protests.


Increasing

Content policy

The US Supreme Court’s rulings in Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh upheld Section 230 protections for online platforms.

Google and Meta threatened to block links to Canadian news sites if a bill requiring internet platforms to pay publishers for their news is adopted.

Austria banned the use of TikTok on federal civil servants’ work phones.

The Digital Public Goods Alliance (DPGA) and UNDP announced nine innovative open-source solutions to tackle the global information crisis. The EU called for clear labelling of AI-generated content to combat disinformation. Although Twitter has withdrawn from the code of practice on disinformation, it must still comply with the Digital Services Act when operating in the EU.


Neutral

Jurisdiction and legal issues

Apple is under investigation in France following complaints that it intentionally makes its devices obsolete to force users to buy new ones.

Meta was fined EUR 1.2 billion in Ireland for mishandling user data and for continuing to transfer data to the USA in breach of a ruling by the Court of Justice of the European Union.


Increasing

Technologies

A Chinese representative at the WTO criticised US subsidies for the semiconductor industry, viewing them as an attempt to hinder China’s technological progress. South Korea asked the USA to review its rule barring China and Russia from using US funds for chip manufacturing and research.

The USA is considering restricting investments in Chinese chips, AI, and quantum computing to curb flows of capital and expertise.

Australia published a new national quantum computing strategy. China launched a quantum computing cloud platform for researchers and the public.

In brief

The UN Secretary-General’s policy brief on the GDC

The UN Secretary-General has published a policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation. The GDC is due to be adopted at the Summit of the Future in 2024 and is expected to ‘outline shared principles for an open, free and secure digital future for all’. Here is a summary of the brief’s main points.

The brief highlights areas where ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing the Sustainable Development Goals (SDGs), making the online space open and safe for everyone, and governing AI for humanity. It also proposes objectives and actions to advance digital cooperation, structured around eight themes proposed to be covered by the GDC.

Digital connectivity and capacity development. The goal is to close the digital divide and empower people to participate fully in the digital economy. Proposed actions include setting universal connectivity targets and improving public digital literacy education.

Digital cooperation to achieve the Sustainable Development Goals. These objectives involve targeted investments in digital infrastructure and services, ensuring representative and interoperable data, and establishing globally harmonised digital sustainability standards. Proposed actions include designing safe and inclusive digital infrastructure, fostering open and accessible data ecosystems, and drawing up a common blueprint for digital transformation.

Upholding human rights. This means placing human rights at the heart of the digital future, tackling the digital gender divide, and protecting workers’ rights. A key proposed action is the creation of a digital human rights advisory mechanism under the aegis of the Office of the UN High Commissioner for Human Rights.

An inclusive, open, secure, and shared internet. Objectives include preserving the free and shared nature of the internet and strengthening accountable multistakeholder governance. Proposed actions involve commitments by governments to avoid blanket internet shutdowns and disruptions of critical infrastructure.

Digital trust and security. Objectives range from strengthening multistakeholder cooperation to developing norms, guidelines, and principles for the responsible use of digital technologies. Proposed actions include creating common standards and industry codes of conduct to tackle harmful content on digital platforms.

Data protection and empowerment. Objectives include governing data for the benefit of all, empowering individuals to control their personal data, and establishing interoperable standards for data quality. Proposed actions include encouraging countries to adopt a declaration on data rights and seeking convergence on data governance principles through a Global Data Compact.

Agile governance of AI and other emerging technologies. Objectives include ensuring transparency, reliability, safety, and human control in the design and use of AI, and prioritising transparency, fairness, and accountability in AI governance. Proposed actions range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.

Global digital commons. Objectives include inclusive digital cooperation, sustained exchanges between states and sectors, and responsible technology development for sustainable development and empowerment.

The policy brief proposes numerous implementation mechanisms. The most notable is an annual Digital Cooperation Forum (DCF), convened by the Secretary-General, to facilitate collaboration across digital multistakeholder frameworks and reduce duplication, promote cross-border learning in digital governance, and identify policy solutions for emerging digital challenges and governance gaps. The brief also notes that ‘the success of the GDC will depend on its implementation’ at the national, regional, and sectoral levels, with the support of platforms such as the Internet Governance Forum (IGF) and the World Summit on the Information Society (WSIS) Forum. It suggests creating a trust fund to finance a digital cooperation fellowship programme to broaden the participation of different stakeholders.

Geneva

Policy updates from International Geneva

Numerous policy discussions take place in Geneva every month. Here is what happened in May.

Intergovernmental Group of Experts on E-commerce and the Digital Economy, sixth session | 10–12 May

The main objective of this intergovernmental group of experts is to strengthen UNCTAD’s work on information and communications technologies, e-commerce, and the digital economy, so that developing countries can participate in, and benefit from, the constantly evolving digital economy. The group also works to reduce the digital divide and promote the development of inclusive knowledge societies. The sixth session focused on two main agenda items: how to put data at the service of the 2030 Agenda for Sustainable Development, and the working group on measuring e-commerce and the digital economy.

Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), second 2023 session | 15–19 May

The second session of the GGE on LAWS was held in Geneva to ‘intensify the consideration of proposals and elaborate, by consensus, possible measures’ within the framework of the Convention on Certain Conventional Weapons (CCW), while drawing on legal, military, and technological expertise.

In the draft final report (CCW/GGE.1/2023/2), the group of experts concluded that, in characterising weapons systems built on emerging technologies in the area of LAWS, it is essential to consider the potential future development of these technologies. The group also affirmed that states must take particular care to ensure compliance with international humanitarian law throughout the life cycle of such weapons systems. States should limit the types of targets, and the duration and scope of the operations, in which such weapons systems can engage; adequate training must be provided to human operators. If a weapons system based on technologies in the area of LAWS cannot comply with international law, it must not be deployed.

The 76th World Health Assembly | 21–30 May

The 76th World Health Assembly (WHA) invited delegates from its 194 member states to Geneva to discuss the organisation’s priorities and policies under the theme ‘WHO at 75: saving lives, driving health for all’. A series of round tables enabled delegates, partner agencies, civil society representatives, and WHO experts to debate current and future public health issues of global importance. On 23 May, Committee B reviewed the progress reports (A76/37), which highlight the implementation of the ‘global strategies on digital health’ agreed at the 73rd World Health Assembly. Since the adoption of these strategies in 2020, the WHA secretariat, in collaboration with development partners and other UN agencies, has trained more than 1,600 officials from over 100 member states in digital health and artificial intelligence. The secretariat has also launched numerous initiatives for knowledge dissemination and national developments linked to digital health strategies. From 2023 to 2025, the secretariat will continue to facilitate the coordinated actions set out in the global strategies while prioritising the needs of member states.

Upcoming

The main digital policy events in June

5–8 June 2023 | RightsCon (San José, Costa Rica, and online)

The 12th edition of RightsCon was set to address global developments in digital rights across its programme tracks: access and inclusion; AI; business, labour, and trade; conflict and humanitarian action; content governance and disinformation; cyber norms and encryption; data protection; digital security for communities; emerging technologies; media freedom; futures, fictions, and creativity; governance, politics, and elections; human rights-centred design; justice, litigation, and documentation; online hate and violence; philanthropy and organisational development; privacy and surveillance; shutdowns and censorship; and tactics for activists.

12–15 June 2023 | ICANN77 Policy Forum (Washington, D.C., USA)

The policy forum is the second meeting in ICANN’s annual cycle of three. It focuses on the policy development work of the supporting organisations and advisory committees, as well as on regional outreach activities. ICANN aims to ensure an inclusive dialogue that offers equal opportunities for everyone to engage on important policy issues.

13 June 2023 | Swiss Internet Governance Forum 2023 (Bern, Switzerland, and online)

This one-day event focused on topics such as the use and regulation of AI, particularly in the context of education; the protection of fundamental rights in the digital age; responsible data management; the influence of platforms; democratic practices; the responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics.

15–16 June 2023 | Digital Assembly 2023 (Arlanda, Sweden, and online)

Organised by the European Commission and the Swedish presidency of the Council of the European Union, the assembly’s theme was ‘A digital, open and secure Europe’. The conference programme featured five plenary sessions, six breakout sessions, and three side events. The main discussion topics were digital innovation, cybersecurity, digital infrastructure, digital transformation, AI, and quantum computing.

19–21 June 2023 | EuroDIG 2023 (Tampere, Finland, and online)

EuroDIG 2023 will be held under the overarching theme Internet in troubled times: Risks, resilience, hope. Alongside the conference, EuroDIG hosts YOUthDIG, an annual pre-event that encourages the active participation of young people (aged 18–30) in internet governance. The GIP is once again partnering with EuroDIG to provide updates and reports from the conference using DiploGPT.

DiploGPT reported from the UN Security Council meeting

In May, Diplo used AI to report from the UN Security Council session on futureproofing trust for sustaining peace. DiploGPT produced an automated report, including a summary, an analysis of individual submissions, and answers to questions posed by the meeting’s chair. DiploGPT combines various AI algorithms and tools tailored to the needs of the UN and diplomatic communications.

DW Weekly #115 – 12 June 2023


Dear all,

The USA and the UK signed the Atlantic Declaration to strengthen their economic, technological, commercial and trade relations. The EU’s AI Act might be in jeopardy. Meta is in trouble with the EU over content moderation, namely failure to remove child sexual abuse material from Instagram, and Google has published its Secure AI Framework.

Let’s get started.
Andrijana and the Digital Watch team


// HIGHLIGHT //

The US-UK Atlantic Declaration signed

The UK and the USA signed the Atlantic Declaration for a Twenty-First Century US-UK Economic Partnership, touted as a first of its kind as it spans their economic, technological, commercial, and trade relations. Here’s what the declaration says about digital policy (not all of it is digital) and how it impacts the USA, the UK… and the EU.

The first pillar focuses on ensuring US-UK leadership in critical and emerging technologies. Under this pillar, the two nations have established a range of collaborative activities:

  • They will prioritise research and development efforts, particularly in quantum technologies, by facilitating increased mobility of researchers and students and fostering workforce development to promote knowledge exchange. 
  • They will work together to strengthen their positions in cutting-edge telecommunications by collaborating on 5G and 6G solutions, accelerating the adoption of Open RAN, and enhancing supply chain diversity and resilience. 
  • Deepening cooperation in synthetic biology is also a priority, aiming to drive joint research, develop novel applications, and enhance economic security through improved biomanufacturing pathways. 
  • Researchers will conduct collaborative research in advanced semiconductor technologies, such as advanced materials and compound semiconductors.
  • Additionally, the countries will accelerate cooperation on AI, with a specific emphasis on safety and responsibility. 

This will involve deepening public-private dialogue, mobilising private capital towards strategic technologies, and establishing a US-UK Strategic Technologies Investor Council within the next twelve months. The council will include investors and national security experts who will identify funding gaps and facilitate private investment in critical and emerging technologies. Lastly, efforts will be made to improve talent flows between the USA and the UK, ensuring a robust exchange of skilled professionals.

What this means for digital policy: The UK and US investments in quantum technology are dwarfed by, for instance, the USD15.2-billion public investment in quantum technology announced by the Chinese government. The UK’s investments in semiconductors–USD1.2 billion–are modest. On the other hand, US companies have pledged nearly USD200 billion. In biotech, the UK is behind the USA. The UK currently, by its own estimate, ranks 3rd in AI behind the USA and China. Overall, China has a stunning lead in research in 37 out of 44 critical and emerging technologies with the USA often second-ranked. Perhaps this partnership with the UK will give both the UK and the USA a leg up.

The second pillar of the partnership centres on advancing cooperation on economic security and technology protection toolkits and supply chains. This involves addressing national security risks associated with some types of outbound investment and preventing their companies’ capital and expertise from fueling technological advances that could enhance the military and intelligence capabilities of countries of concern. Additionally, the countries will work towards flexible and coordinated export controls related to sensitive technologies, enabling the complementarity of their respective toolkits. Strengthening their partnership across sanctions strategy, design, targeting, implementation, and enforcement is another objective. Lastly, the countries aim to reduce vulnerabilities across critical technology supply chains by sharing analysis, developing channels for coordination and consultation during disruptions and crises, and ensuring resilience.

What this means for digital policy: Judging by previous comments from Paul Rosen, the US Treasury's investment security chief, this is about preventing know-how and investments in advanced semiconductors, AI, and quantum computing from reaching China, which would allegedly use them to bolster its military and intelligence capabilities. The UK, which already shares a special relationship with the USA in intelligence, just might be joining the US-led export controls on semiconductors. Reminder: chip giant Arm is headquartered in the UK.

Pillar 3 of the partnership focuses on an inclusive and responsible digital transformation. The countries aim to enhance cooperation on data by establishing a US-UK Data Bridge, ensuring data privacy protections, and supporting the Global Cross-Border Privacy Rules (CBPR) Forum and the OECD's Declaration on Government Access to Personal Data Held by Private Sector Entities.

The countries will accelerate cooperation on AI, and the USA welcomed the planned launch of a Global Summit on AI Safety by the UK Prime Minister in the autumn of 2023. Collaboration on Privacy Enhancing Technologies (PETs) is also planned to enable responsible AI models and protect privacy while leveraging data for economic and societal benefits.
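
PETs is an umbrella term covering techniques such as differential privacy, federated learning, and homomorphic encryption. As a flavour of what these look like in practice, here is a minimal sketch of one of them, differential privacy, where calibrated noise is added to an aggregate statistic so that no individual's contribution can be singled out (our own illustrative example, not drawn from the declaration):

    import numpy as np

    def dp_mean(values, lower, upper, epsilon):
        """Differentially private mean via the Laplace mechanism. Values are
        clipped to [lower, upper], so one person can shift the mean by at
        most (upper - lower) / n; the noise is scaled to hide that much."""
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)
        noise = np.random.laplace(0.0, sensitivity / epsilon)
        return clipped.mean() + noise

    ages = [34, 29, 41, 52, 38, 45, 27, 60]
    print(dp_mean(ages, lower=18, upper=90, epsilon=1.0))  # true mean, plus noise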

What this means for digital policy: The establishment of a US-UK data bridge was first agreed upon in January, at the Inaugural Meeting of the US-UK Comprehensive Dialogue on Technology and Data. Now we know that the data bridge will support a UK extension to the EU-US Data Privacy Framework.

Why is it relevant? First, interestingly, there was no mention of the EU in this equation, although the declaration affects the EU and US-UK-EU relations. Second, the USA and the UK are stressing collaboration on AI; it's clear that AI is a priority for both. Third, the USA is looking to Britain to help lead efforts on AI safety and regulation, hoping that AI companies find more fertile ground in the UK than in the EU's stricter environment. Fourth, the UK wants to put its EU membership firmly behind it. Sunak stated: 'I know some people have wondered what kind of partner Britain would be after we left the EU. […] And we now have the freedom to regulate the new technologies that will shape our economic future, like AI, more quickly and flexibly.'

Beyond what they said, we’ll see how this impacts what they did not mention: Microsoft’s Activision Blizzard takeover.


Digital policy roundup (6–12 June)
// AI GOVERNANCE //

Is the EU’s AI Act in jeopardy?

The political deal behind the AI Act may be crumbling, and this might affect the Parliament’s endorsement of the text. 

In April, a deal struck between the four main political groups at the European Parliament stipulated they would not table alternative amendments to the AI Act. However, the European People’s Party (EPP) was given flexibility on the issue of remote biometric identification (RBI). On 7 June, the final deadline for amendments, the EPP tabled a separate amendment on RBI. There are two problems with that.  

  1. Other groups claim that the EPP broke the deal, and they might feel legitimised to vote for amendments that were tabled outside of the deal. If they do, there's no telling how the Parliament's plenary vote on 14 June will go. 
  2. Not everyone likes what's in the actual amendment. The EPP's proposed text stipulates that member states may authorise the use of real-time RBI systems in public spaces, subject to prior judicial authorisation, for '(1) the targeted search of missing persons, including children; (2) the prevention of a terrorist attack; (3) the identification of perpetrators of criminal offences punishable in the Member State concerned for a maximum period of at least three years.' MEPs from four political groups (liberals, socialists, greens, and left) firmly oppose the EPP's amendment on biometric identification. They are asking for a ban on such systems, claiming that AI systems that perform behavioural analysis are prone to error, falsely report law-abiding citizens, and are discriminatory and ineffective for law enforcement.

Why is it relevant? If the Parliament doesn’t endorse the text, it will slow down the world’s first playbook on AI–it might take longer than the projected end of 2023 to reach a political deal among EU Institutions. This threatens the EU’s plans to be a leader in AI rule-making, and plenty of others are willing to step up.


// CHILD SAFETY ONLINE //

Instagram’s algorithms recommend child-sex content to paedophiles, research finds

An investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts at Amherst uncovered that Instagram has been hosting large networks of accounts posting child sexual abuse material (CSAM). The platform's recommendation algorithms play a key role in making Instagram the most valuable platform for sellers of self-generated CSAM (SG-CSAM): 'Instagram connects paedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests, the Journal and the academic researchers found.' Even viewing one such account led to new CSAM-selling accounts being recommended to the user, thus helping to build the network.

This is where it gets worse. At the time of the research, Instagram enabled searching for explicit hashtags such as #pedowhore and #preteensex. When researchers searched for a paedophilia-related hashtag, a pop-up informed them: 'These results may contain images of child sexual abuse'. Below the text, two options were given: 'Get resources' and 'See results anyway'. Instagram has since removed the option to view the content, but that doesn't stop us from wondering why this option was available in the first place.

Why is it relevant? First, it tells us something about the speed at which the platform reacted to reports of CSAM. Perhaps it will now change, since the platform said it would form a task force to investigate the problem.

Second, it attracted the ire of European Commissioner Thierry Breton. 


Third, Meta will have to demonstrate the measures it plans to take to comply with the EU's Digital Services Act (DSA) after 25 August or face heavy sanctions, the bloc's Thierry Breton said. Meta, designated a Very Large Online Platform (VLOP), has stringent obligations under the DSA, and fines for breaches can go as high as 6% of a company's global turnover. While Breton didn't put Twitter on blast, the company has also been designated a VLOP, meaning it also runs the risk of being fined.

(And finally, it seems the media did not get the memo: ‘child sexual abuse material’ is the preferred terminology.)
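
For a sense of the sums at stake under that 6% ceiling, here is a back-of-the-envelope sketch (the turnover figure is hypothetical, purely for illustration):

    def max_dsa_fine(global_turnover_eur: float, rate: float = 0.06) -> float:
        """Ceiling for a DSA fine: up to 6% of global annual turnover."""
        return global_turnover_eur * rate

    # Hypothetical platform with EUR 100 billion in global annual turnover
    print(f"EUR {max_dsa_fine(100e9):,.0f}")  # EUR 6,000,000,000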




// PRIVACY //

French Senate approves surveillance of suspects using cameras and microphones

The French Senate approved a contentious provision of a justice bill that allows the remote activation of computers and connected devices without the owner's knowledge. The provision serves two purposes: (1) real-time geolocation for certain offences, and (2) the activation of microphones and cameras to capture audio and images, limited to cases of terrorism, delinquency, and organised crime. The Senate also adopted an amendment that limits the use of geolocation to the investigation of offences punishable by at least ten years' imprisonment. However, the implementation of this provision will still require judicial approval.

Why is it relevant? Surveillance tactics are never a favourite with privacy advocates, who typically argue that such privacy breaches cannot be justified by national security concerns. In this instance, the safeguards are unclear, as are mechanisms for redress.


// CYBERSECURITY //

Google introduces Secure AI Framework

Google has introduced its Secure AI Framework (SAIF), which aims to reduce overall risk when developing and deploying AI systems. It is based on six elements organisations should be mindful of:

  1. Expand strong security foundations to the AI ecosystem by leveraging secure-by-default infrastructure protections and scaling and adapting infrastructure protections as AI threats advance
  2. Bring AI into an organisation’s threat universe by extending detection and response to AI-related cyber incidents
  3. Automate defences to keep pace with existing and new threats, including harnessing the latest AI innovations to improve response efforts
  4. Harmonise platform-level controls to ensure consistent security of AI applications across the organisation
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment via reinforcement learning based on incidents and user feedback
  6. Contextualise AI-system risks in surrounding business processes by conducting end-to-end risk assessments on AI deployment

Google has committed to fostering industry support for SAIF, working directly with organisations to help them understand how to assess and mitigate AI security risks, sharing threat intelligence, expanding its bug hunter programs to incentivise research around AI safety and security and delivering secure AI offerings.

Why is it relevant? As more AI products are integrated into digital services, the security of the supply chain will benefit from secure-by-default AI products.
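
SAIF is a set of practices rather than code, but its second and third elements, extending detection to AI-related incidents and automating defences, can be made concrete with a toy example. The sketch below screens incoming prompts for obvious injection patterns before they reach a model; the patterns and function names are our own illustration, not part of Google's framework:

    import re

    # Toy deny-list of prompt-injection markers (illustrative, not exhaustive)
    INJECTION_PATTERNS = [
        r"ignore (all |any )?(previous|prior) instructions",
        r"disregard your system prompt",
        r"reveal your (hidden|system) prompt",
    ]

    def screen_prompt(prompt: str) -> bool:
        """Return True if the prompt looks safe; False means it should be
        logged and routed to incident response, mirroring SAIF's idea of
        bringing AI into the organisation's threat-detection loop."""
        lowered = prompt.lower()
        return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

    print(screen_prompt("Summarise this report, please"))         # True
    print(screen_prompt("Ignore previous instructions and ..."))  # False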


NATO to enhance military cyber defences in peacetime, integrate private sector capabilities

NATO member states are preparing to approve an expanded role for military cyber defenders during peacetime, as well as the permanent integration of private sector capabilities, revealed NATO’s assistant secretary general for emerging security challenges David van Weel. Furthermore, NATO plans to establish a mechanism to facilitate assistance among allies during crises when national response capabilities become overwhelmed.

The endorsement is expected at the upcoming Vilnius summit in Lithuania, scheduled for July.

Image source: NATO CCDCOE Twitter account

Why is it relevant? Van Weel stated: ‘We need to move beyond naming and shaming bad actors in response to isolated cyber incidents, and be clear what norms are being broken.’ The norms he referred to are agreed-upon norms of responsible state behaviour in cyberspace, confirmed in the reports of the GGEs and the first OEWG on ICTs. His remarks come just two weeks after UN member states met under the auspices of the OEWG in New York to discuss responsible state behaviour in cyberspace. We’ll have more on that towards the end of this week on the Digital Watch Observatory–keep an eye out.


China issues draft guidelines to tackle cyber violence 

China’s Supreme People’s Court, Supreme People’s Procuratorate, and Ministry of Public Security issued draft Guiding Opinions on Legally Punishing Cyber Violence and Crimes (Chinese). 

The guidelines propose the punishment of online defamation, insults, privacy violations, and offline nuisance behaviour, such as intercepting and insulting victims of cyber violence and their relatives and friends, causing disturbances, intimidating others, and destroying property. They also address using violent online methods for malicious marketing and hype, as well as protecting civil rights and identifying illegal acts. 

The guidelines also note that network service providers can be convicted and punished for the offence if they neglect their legal obligations to manage information network security regarding identified instances of cyber violence and fail to rectify the situation after being instructed by regulatory authorities to take corrective measures. This applies to cases where such neglect results in the widespread dissemination of illegal information or other serious consequences.

The draft is open for public comments until 25 June.


// CRYPTOCURRENCIES //

US SEC launches lawsuits against Binance and Coinbase

The world’s biggest cryptocurrency exchanges–Binance and Coinbase–were hit by a wave of lawsuits from the US Securities and Exchange Commission (SEC). 

Why is it relevant? Because down the line, these actors might leave the USA for greener pastures. Diplo’s Arvin Kamberi has more on that in the video below.


The week ahead (13–19 June)

13 June: The Swiss Internet Governance Forum 2023 will discuss the use and regulation of AI, especially in the context of education; protecting fundamental rights in the digital age; responsible data management; platform influence; democratic practices; responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics. Our Director of Digital Policy, Stephanie Borg Psaila, will speak at the session: Digital governance and the multistakeholder approach in 2023.

14 June: The last two GDC thematic deep dives will focus on global digital commons and accelerating progress on the SDGs. The discussion on the global digital commons will explore the principles, values, and ideas associated with this approach, and consider how it can enhance the safety and inclusivity of the global ecosystem of digital public infrastructure and goods. The discussion on accelerating progress on the SDGs will examine the role of digital technology in achieving the SDGs and addressing future challenges, and the potential for generalising principles and approaches based on shared experiences. For more information on the GDC, visit our dedicated web page on the Digital Watch Observatory.

15–16 June: This year's Digital Assembly, organised by the European Commission and the Swedish Presidency of the Council of the EU, will be held under the theme A digital, open and secure Europe, with a focus on openness, competition, digitalisation, and cybersecurity.

19–21 June: The 2023 edition of Europe's regional internet governance gathering–EuroDIG–will be themed Internet in troubled times: Risks, resilience, hope. The GIP will once again partner with EuroDIG to deliver messages and reports from the conference using DiploGPT. The reports and messages will be available on our dedicated Digital Watch page.

19 June–14 July: The 53rd session of the Human Rights Council (HRC) will feature a panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression. The council will also consider the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights, as well as the report on Digital innovation, technologies, and the right to health.

For more events, bookmark the Digital Watch Observatory’s calendar of global policy events.


#ReadingCorner

The June issue of the Digital Watch Monthly newsletter is out! 

We asked: who holds the dice in the grand game of addressing AI for the future of humanity? A brief summary of the UN Secretary-General’s policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation, May’s barometer of updates, and the leading global digital policy events ahead in June also feature.


Andrijana Gavrilović – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, Diplo
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Digital Watch newsletter – Issue 80 – June 2023

AI is the name of the game

In the spirit of the front page illustration of the grand game of addressing AI for the future of humanity, an essential question arises: Who holds the dice? Is it mere coincidence, the divine, or vested interests?


In May, AI dominated global discussions and media coverage, with AI on the agendas of meetings and parliamentary debates. What’s the hype?

First, there are very loud warnings that AI threatens the very survival of humanity. 

Second, the warning of existential risks is typically associated with a call to regulate future AI development. In a new dynamic, businesses are asking to be regulated. OpenAI CEO Sam Altman emphasised the crucial role of government in regulating AI and advocated for establishing a governmental or global AI agency to oversee the technology. This regulatory body would require companies to obtain licences before training powerful AI models or operating data centres facilitating AI development. Doing so would hold developers to safety standards and establish consensus on the standards and risks that require mitigation. In parallel, Microsoft has published a comprehensive blueprint for governing AI, with Microsoft President Brad Smith also advocating for creating a new government agency to enforce new AI rules in his foreword.

Third, governments from developed countries are responding positively to the idea of regulating the future development of AI.

Fourth, there are growing voices saying that regulation supported by an existential threat narrative aims to block open-source AI developments and concentrate AI power in the hands of just a few leaders, mainly OpenAI/Microsoft and Google.

Regardless of the motivation behind AI regulation, we can identify a few echoing topics for regulation: privacy violations, bias, the proliferation of scams, misinformation, and the protection of intellectual property, among others. However, not all regulators share the same focal points. Here’s a snapshot of what jurisdictions worldwide expressed in May 2023 regarding their desire to regulate AI and their proposed approaches.

The EU. The world’s first rulebook for AI is, unsurprisingly, being shaped by the regulatory behemoth that is the EU. The bloc is taking a risk-based approach to AI, establishing obligations for AI providers and users based on the level of risk posed by the AI systems. It also introduces a tiered approach for regulating general-purpose AI, and foundation and generative AI models. The draft rules need to be endorsed in the Parliament’s plenary, which is expected to happen during the 12–15 June session. Then, negotiations with the Council on the law’s final form can begin. 

The USA. US government officials met with Alphabet, Anthropic, Microsoft, and OpenAI CEOs and discussed three key areas: the transparency, evaluation, and security of AI systems. The White House and top AI developers will collaborate to evaluate generative AI systems for potential flaws and vulnerabilities, such as confabulations, jailbreaks, and biases. The USA is also evaluating AI’s impact on the workforce, education, consumers, and the risks of biometric data misuse.

The UK. Another government that will collaborate with the industry is the UK: Prime Minister Rishi Sunak has met with the CEOs of OpenAI, Google DeepMind, and Anthropic to discuss the risks AI can pose, such as disinformation, national security, and existential threats. The CEOs agreed to work closely with the UK's Foundation Model Taskforce to advance AI safety. The UK also focuses on AI-related election risks and the impact of AI foundation models on competition and consumer protection. The UK will seemingly keep its sectoral approach to AI, with no general AI regulation planned.

China. The Cyberspace Administration of China (CAC) raised concerns over advanced technologies such as generative AI, noting that they could seriously challenge governance, regulation and the labour market. The country has also called for improving the security governance of AI. In April, the CAC proposed measures for regulating generative AI services, which specify that providers of such services must ensure that their content aligns with China’s core values. Prohibited content includes discrimination, false information, and infringement of intellectual property rights (IPR). Tools utilised in generative AI services must undergo a security assessment before launch. The measures were open for comments until 2 June, meaning we will see the outcome soon.

Australia. Australia is concerned with AI risks such as deepfakes, misinformation, disinformation, self-harm encouragement, and algorithmic bias. The country is currently seeking opinions on whether it should support the development of responsible AI through voluntary approaches, like tools, frameworks, and principles, or enforceable regulatory approaches, like laws and mandatory standards.

South Korea. The country’s AI Act is only a few steps away from the National Assembly’s final vote. It would allow AI development without government pre-approval, categorise high-risk AI and set trustworthiness standards, support innovation in the AI industry, establish ethical guidelines, and create a Basic Plan for AI and an AI Committee overseen by the prime minister. The government also announced it would create new guidelines and standards for copyrights of AI-generated content by September 2023.

Japan. The Japanese government aims to promote and strengthen domestic capabilities to develop generative AI while addressing AI risks such as copyright infringement, exposure of confidential information, false information, and cyberattacks, among other concerns.

Italy. Italy temporarily banned ChatGPT over GDPR violations in March. ChatGPT has returned to Italy after OpenAI revised its privacy disclosures and controls, but Garante, the data protection authority of Italy, is intensifying its scrutiny of AI systems for adherence to privacy laws.

France. French privacy watchdog CNIL launched an AI Action Plan to promote a framework for developing generative AI, which upholds personal data protection and human rights. The framework is based on four pillars: (a) understanding AI’s impact on fairness, transparency, data protection, bias, and security; (b) developing privacy-friendly AI through education and guidelines; (c) collaborating with AI innovators for data protection compliance; and (d) auditing and controlling AI systems to safeguard individuals’ rights, including addressing surveillance, fraud, and complaints.

India. The government is considering a regulatory framework for AI-enabled platforms due to concerns such as IPR, copyright, and algorithm bias, but is looking to do so in conjunction with other countries.

International efforts. Ahead of the EU's planned AI Act, the European Commission and Google plan to join forces 'with all AI developers' to develop a voluntary AI pact. OpenAI's Altman is also set to meet EU officials about the pact.

The EU and the USA will jointly prepare an AI code of conduct to foster public trust in the technology. The voluntary code 'would be open to all like-minded countries,' US Secretary of State Antony Blinken stated.

Additionally, the G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). The ministers will take these discussions forward under the 'Hiroshima AI process' and report the results by the end of 2023. The G7 leaders have also called for developing and adopting technical standards to ensure the trustworthiness of AI.

So, who holds the dice?
It’s not clear yet. We hope it will be citizens who hold the dice. Regulatory efforts aside, one way to ensure that individuals remain in charge of their knowledge, even when codified by AI, is through bottom-up AI. This would mitigate the risk of centralisation of power inherent in large generative AI platforms. In addition, bottom-up AI is typically based on an open-source and transparent approach that can mitigate most safety and security risks related to centralised AI platforms. Many initiatives, including the development strategies of Diplo’s AI, have proven that bottom-up AI is technically feasible and economically viable. There are many reasons to adopt bottom-up AI as a practical way to foster a new societal operating system built around the centrality, dignity, free will, and the achievement of the creative potential of human beings.

Dr Jovan Kurbalija, Director of DiploFoundation, explains why bottom-up AI is critical for our future.

Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from May. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture


World Telecommunication and Information Society Day was observed on 17 May with calls by UN officials to bridge the digital divide, support digital public goods, and establish a Global Digital Compact (GDC).
The Fourth EU-US ministerial meeting of the Trade and Technology Council (TTC) covered AI risks, content regulation, digital identities, semiconductors, quantum technologies, and connectivity projects.

Sustainable development


To bridge the digital gender gap by 2030, 100 million more women must embrace mobile internet annually, a GSMA report found.

The EU Commission and WHO launched a landmark digital health initiative to establish a comprehensive global network for digital health certification.
Papua New Guinea rolled out a platform for managing digital IDs.
The Maldives introduced a digital ID mobile app for streamlined access to government services.
Evrotrust's eID program became Bulgaria's official digital ID system.

 

Security


A Chinese report claims to have identified five methods the CIA uses to launch colour revolutions abroad and nine methods used as weapons for cyberattacks.

The Five Eyes cyber agencies attributed cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon, which China has denied. The FBI disrupted a Russian cyberespionage operation dubbed Snake. The governments of Colombia, Senegal, Italy and Martinique suffered cyberattacks.

The USA and South Korea issued a joint advisory warning that North Korea is using social engineering tactics in cyberattacks.
NATO has warned of a potential Russian threat to internet cables and gas pipelines in Europe or North America.

Infrastructure


The Body of European Regulators for Electronic Communications (BEREC) and the majority of EU countries are against a push by telecom providers to get Big Tech to contribute to the cost of the rollout of 5G and broadband in Europe.
Tanzania has signed agreements to extend telecommunications services to 8.5 million individuals in rural areas.

 


Digital rights


South Korea proposed changes to its Personal Information Protection Act to strengthen consent requirements, unify online/offline data processing standards, and establish criteria for assessing violations.

The 2023 World Press Freedom Index reveals that journalism is threatened by the fake content industry and rapid AI development.
Internet shutdowns were reported in Pakistan in the wake of the arrest of the former prime minister, and in Sudan amid protests over the sentencing of an opposition leader, while social media was restricted in Guinea over protests.

 

Content policy


US Supreme Court rulings in Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh maintained Section 230 protections for online platforms.

Google and Meta threatened to block links to Canadian news sites if a bill requiring internet platforms to pay publishers for their news is passed. 

Austria banned the use of TikTok on federal government officials’ work phones.
The Digital Public Goods Alliance (DPGA) and UNDP announced nine innovative open-source solutions to address the global information crisis.
The EU called for clear labelling of AI-generated content to combat disinformation.
While Twitter pulled out of the EU's voluntary code of practice on disinformation, it must still comply with the Digital Services Act when operating in the EU.

Jurisdiction and legal issues


Apple faces investigation in France over complaints that it intentionally causes its devices to become obsolete to compel users to purchase new ones. 
Meta was fined €1.2bn in Ireland for mishandling user data and its continued transfer of data to the USA in violation of an EU court ruling.

 

Technologies


A Chinese WTO representative has criticised the USA’s semiconductor industry subsidies, calling them an attempt to stymie China’s technological progress. South Korea asked the USA to review its rule barring China and Russia from using US funds for chip manufacturing and research.

The USA is considering investment restrictions on Chinese chips, AI, and quantum computing to curb the flow of capital and expertise. 
Australia has released a new National Quantum Strategy. China has launched a quantum computing cloud platform for researchers and the public.


UN Secretary-General’s policy brief for GDC

The UN Secretary-General has issued a policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation. The GDC is to be agreed upon in the context of the Summit of the Future in 2024 and is expected to ‘outline shared principles for an open, free and secure digital future for all’. Here is a summary of the brief’s main points.

The brief outlines areas where ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing SDGs, making the online space open and safe for everyone, and governing AI for humanity. It also suggests objectives and actions for advancing digital cooperation, structured around eight topics proposed to be covered by the GDC.

Digital connectivity and capacity building. The aim is to bridge the digital divide and empower individuals to participate fully in the digital economy. Proposed actions include setting universal connectivity targets and enhancing public education for digital literacy.

Digital cooperation for SDG progress. Objectives involve targeted investments in digital infrastructure and services, ensuring representative and interoperable data, and establishing globally harmonised digital sustainability standards. Proposed actions include defining safe and inclusive digital infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint for digital transformation.

Upholding human rights. The focus is on placing human rights at the core of the digital future, addressing the gender digital divide, and protecting workers’ rights. A key proposed action is establishing a digital human rights advisory mechanism facilitated by the Office of the UN High Commissioner for Human Rights.

Inclusive, open, secure, and shared internet. Objectives include preserving the free and shared nature of the internet and reinforcing accountable multistakeholder governance. Proposed actions involve commitments from governments to avoid blanket internet shutdowns and disruptions to critical infrastructures.

Digital trust and security. Objectives range from strengthening multistakeholder cooperation to developing norms, guidelines, and principles for responsible digital technology use. Proposed actions include creating common standards and industry codes of conduct to address harmful content on digital platforms.

Data protection and empowerment. Objectives include governing data for the benefit of all, empowering individuals to control their personal data, and establishing interoperable standards for data quality. Proposed actions include encouraging countries to adopt a declaration on data rights and seeking convergence on principles for data governance through a Global Data Compact.

Agile governance of AI and emerging technologies. Objectives involve ensuring transparency, reliability, safety, and human control in AI design and use, and prioritising transparency, fairness, and accountability in AI governance. Proposed actions range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.

Global digital commons. Objectives include inclusive digital cooperation, sustained exchanges across states and sectors, and responsible development of technologies for sustainable development and empowerment.

Implementation mechanisms

The policy brief proposes numerous implementation mechanisms. The most notable is an annual Digital Cooperation Forum (DCF) to be convened by the Secretary-General to facilitate collaboration across digital multistakeholder frameworks and reduce duplication, promote cross-border learning in digital governance, and identify policy solutions for emerging digital challenges and governance gaps. The document further notes that ‘the success of a GDC will rest on its implementation’ at national, regional, and sectoral levels, supported by platforms like the Internet Governance Forum (IGF) and the World Summit on the Information Society Forum (WSIS). The brief suggests establishing a trust fund to sponsor a Digital Cooperation Fellowship Programme to enhance multistakeholder participation.

Read more about the Global Digital Compact.

Policy updates from International Geneva

Intergovernmental Group of Experts on E-commerce and the Digital Economy, sixth session | 10–12 May

The main objective of this intergovernmental group of experts is to enhance UNCTAD's efforts in the fields of information and communications technologies, e-commerce, and the digital economy, and to empower developing nations to participate in and gain advantages from the ever-changing digital economy. The group also works to bridge the digital divide and promote the development of inclusive knowledge societies. The sixth session focused on two main agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the work of the Working Group on Measuring E-commerce and the Digital Economy.


2023 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), second session | 15–19 May

The second session of the GGE on LAWS convened in Geneva to ‘intensify the consideration of proposals and elaborate, by consensus, possible measures’ in the context of the Convention on Certain Conventional Weapons (CCW) while bringing in legal, military, and technological expertise.

In the advance version of the final report (CCW/GGE.1/2023/2), the GGE concluded that when characterising weapon systems built on emerging technologies in the area of LAWS, it is crucial to consider the potential future developments of these technologies. The group also affirmed that states must ensure compliance with international humanitarian law throughout the life cycle of such weapon systems. States should limit the types of targets, as well as the duration and scope of operations, with which such weapon systems can engage, and human operators must receive adequate training. Where a weapon system based on technologies in the area of LAWS cannot comply with international law, it must not be deployed.


The 76th World Health Assembly | 21–30 May

The 76th World Health Assembly (WHA) invited delegates of its 194 member states to Geneva to confer on the organisation's priorities and policies under the theme 'WHO at 75: Saving lives, driving health for all'. A series of roundtables took place where delegates, partner agencies, representatives of civil society, and WHO experts deliberated on current and future public health issues of global importance. On 23 May, Committee B elaborated on the progress reports (A76/37) that highlighted the implementation of the 'global strategies on digital health' agreed at the 73rd WHA. Since the endorsement of the strategies in 2020, the WHO Secretariat, together with development partners and other UN agencies, has trained over 1,600 government officials in more than 100 member states in digital health and AI. The secretariat has also launched numerous initiatives for knowledge dissemination and national developments related to digital health strategies. From 2023 to 2025, the secretariat will continue facilitating the coordinated actions set out in the global strategies while prioritising member states' needs.

What to watch for: Global digital policy events in June

5–8 June | RightsCon (San José, Costa Rica and online)

The 12th annual RightsCon will discuss global developments related to digital rights in 19 tracks: access and inclusion; AI; business, labour, and trade; conflict and humanitarian action; content governance and disinformation; cyber norms and encryption; data protection; digital security for communities; emerging tech; freedom of the media; futures, fictions, and creativity; governance, politics, and elections; human rights-centred design; justice, litigation, and documentation; online hate and violence; philanthropy and organisational development; privacy and surveillance; shutdowns and censorship; and tactics for activists.


12–15 June 2023 | ICANN 77 Policy Forum (Washington, DC, the USA)

The Policy Forum is the second meeting in the three-meeting annual cycle. The focus of this meeting is the policy development work of the Supporting Organizations and Advisory Committees and regional outreach activities. ICANN aims to ensure an inclusive dialogue that provides equal opportunities for all to engage on important policy matters.


13 June 2023  | The Swiss Internet Governance Forum 2023 (Bern, Switzerland and online)

This one-day event will focus on topics such as the use and regulation of AI, especially in the context of education; protecting fundamental rights in the digital age; responsible data management; platform influence; democratic practices; responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics.


15–16 June 2023 | Digital Assembly 2023 (Arlanda, Sweden and online)

Organised by the European Commission and the Swedish Presidency of the Council of the European Union, this assembly will be themed: A Digital, Open and Secure Europe. The conference program includes five plenary sessions, six breakout sessions, and three side events. The main discussion topics will be digital innovation, cybersecurity, digital infrastructure, digital transformation, AI, and quantum computing.


19–21 June 2023 | EuroDIG 2023 (Tampere, Finland and online)

EuroDIG 2023 will be held under the overarching theme of Internet in troubled times: Risks, resilience, hope. In addition to the conference, EuroDIG hosts YOUthDIG, a yearly pre-event that fosters the active participation of young people (ages 18–30) in internet governance. The GIP will once again partner with EuroDIG to deliver updates and reports from the conference using DiploGPT.


DiploGPT reported from the UN Security Council meeting

In May, Diplo used AI to report from the UN Security Council session: Futureproofing trust for sustaining peace. DiploGPT provided automatic reporting that produced a summary report, an analysis of individual submissions, and answers to the questions posed by the chair of the meeting. DiploGPT combines various algorithms and AI tools customised to the needs of the UN and diplomatic communications.
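
DiploGPT's internals aren't public, but the general pattern described here, summarising each submission and answering the chair's questions over the transcript, can be sketched in a few lines against an OpenAI-style chat API (the model name, prompts, and data below are placeholders, not Diplo's actual configuration):

    import openai  # openai-python, 2023-era API

    openai.api_key = "YOUR_KEY"  # placeholder

    def summarise(statement: str) -> str:
        """Summarise one delegation's submission. A real pipeline would
        also chunk long transcripts and merge the partial summaries."""
        resp = openai.ChatCompletion.create(
            model="gpt-4",  # placeholder model name
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Summarise this UN Security Council statement "
                            "neutrally and accurately, in three sentences."},
                {"role": "user", "content": statement},
            ],
        )
        return resp["choices"][0]["message"]["content"]

    transcripts = ["Statement by delegation A ...", "Statement by delegation B ..."]
    summaries = [summarise(s) for s in transcripts]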


The Digital Watch Observatory maintains a live calendar of upcoming and past events.


DW Weekly #114 – 5 June 2023


Dear readers,

It's more AI governance this week, not in the form of binding rules, but rather, voluntary codes of conduct – and lots of them. In other news, there's a new warning about AI-caused extinction and a multimillion-dollar settlement in the children's privacy lawsuit against Amazon.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governing AI: Two codes of conduct announced

What’s the best course of action when legislation takes too long to materialise? If your idea is to simply wait patiently, you won’t find a kindred spirit in European Commissioner Thierry Breton.

The AI Pact: Eager to see companies get ready for the EU’s AI Act, last week Breton announced the AI Pact, a voluntary set of rules that will act as a precursor to the regulation. In an interview on TV5Monde on Saturday, he explained that the AI Pact aims to help companies get a head start on rules that will become binding and obligatory in around two or three years.

The commissioner hopes companies will warm to the proactive initiative, which is why he's been doing the rounds, starting with Google CEO Sundar Pichai, followed by Anthropic CEO Dario Amodei. He's also met EU digital ministers, who, we presume, have expressed support for an initiative that could ward off regulatory headaches down the line.

The EU-US joint voluntary AI code: Fellow European Commissioner Margrethe Vestager appears to share a similar sense of impatience. Last week, she announced a voluntary code of conduct that will be developed by policymakers from Washington and Brussels in the coming weeks. The announcement came at the start of the bi-annual EU-US Trade and Tech Council (TTC) summit, which took place in Luleå, Sweden (the code did not make it into the joint statement, though).

The code announced by Vestager has a different objective than Breton’s: Although it’s still in the form of a two-page briefing, it will aim to set basic non-binding principles, or standards, around transparency requirements and risk assessments. 

Breton was more blunt: ‘It is for all the countries that are lagging behind, and the Americans are lagging behind on these issues, I’m not afraid to say it, well, they should also start doing the work that we have done, to establish basic principles. These principles underlie the legislative act that we have built.’ (Machine-translated from this original text: C’est qui que pour tous les pays qui sont en retard et les Américains sont en retard sur ces questions je n’ai pas peur de le dire et bien il faut aussi commencer à faire peut-être le travail que l’on a fait arrêter des principes de base qui sont les principes qui sont les principes sous-jacents à ceux qui nous ont permis d’avoir de bâtir cet acte législatif.)   

In a way, the joint initiative is an attempt to bridge the gap between the laissez-faire approach of the USA and the more stringent approach of the EU – an intermediate step before US companies will be obliged to follow EU rules. It’s probably what should have preceded the GDPR but didn’t.

AI labels to be added to EU’s disinformation code

Yes, there’s a third code that will be impacted by the need to set guardrails for generative AI. And yes, it comes from another European Commissioner. 

Values and transparency chief Věra Jourová announced this morning (Monday 5 June) that AI services should introduce labels for content generated by AI, such as text, images, and videos. This measure will be added to the voluntary Code of Practice on Disinformation, which counts Microsoft, Google, Meta, and TikTok among its signatories (Twitter left the group of code adherents).


‘Signatories of the EU Code of Practice against disinformation should put in place technology to recognise AI content and clearly label it to users,’ she said, with reference to services with ‘a potential to disseminate AI-generated disinformation’. It’s uncertain if this will be applicable to all generative AI services offered by participating companies.

Why is it relevant? First, all of these initiatives place the EU at the forefront of AI regulation. The EU clearly wants to set a global standard – and a high one at that – for AI, especially generative AI. Second, this will codify the emerging practice of labelling content generated by AI (here’s an example).
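
What a machine-readable label should look like is still an open question. A minimal sketch of the idea, attaching provenance metadata at generation time, could be as simple as the following (the schema is invented for illustration; standardisation efforts such as C2PA point in this general direction):

    import datetime
    import hashlib
    import json

    def label_ai_content(text: str, model: str) -> dict:
        """Wrap generated text in provenance metadata so downstream platforms
        can recognise it and show users an 'AI-generated' label. The schema
        is hypothetical, not an EU specification."""
        return {
            "content": text,
            "provenance": {
                "ai_generated": True,
                "model": model,
                "generated_at": datetime.datetime.utcnow().isoformat() + "Z",
                "content_sha256": hashlib.sha256(text.encode()).hexdigest(),
            },
        }

    print(json.dumps(label_ai_content("Example output", "demo-model-1"), indent=2))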


Digital policy roundup (29 May–5 June)
// AI GOVERNANCE //

Australia plans AI rules

Speaking of disinformation and deceptive content: Australia wants to introduce AI rules and is seeking public comment on how to mitigate the risks, which include algorithmic bias, lack of transparency, and reliability of data.

The request for comment highlights how other countries have approached AI rules – from voluntary approaches in Singapore to stricter regulation in the EU and Canada. 

Why is it relevant? The discussion paper attached to the call for comment extensively references the EU’s proposed AI Act. It includes elements of what a potential risk-based approach (the hallmark of the EU’s AI Act) could include. Breton will be happy.

OpenAI gets warning from Japan’s data protection watchdog

The Japanese data protection authority has issued administrative guidance to OpenAI, the operator of ChatGPT, in response to concerns over the protection of personal data. 

The guidance highlights the potential for ChatGPT to obtain sensitive personal data without proper consent, potentially infringing on privacy. No specific violations of the country’s privacy rules have been confirmed yet by Japan’s Personal Information Protection Commission.

OpenAI may face an onsite inspection or fines if it fails to take sufficient measures in response to the guidance.

Why is it relevant? Japan took a keen interest in ChatGPT: OpenAI CEO Sam Altman met Japan’s Prime Minister to discuss plans to open an office in the country, and government officials and financial sectors rushed in where others feared to tread. Although there’s been no breach, it seems the country’s data protection watchdog is treading more cautiously than the rest.

AI scientists warn about AI-caused extinction

Tech company chiefs and AI scientists have issued another stark warning: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.’

Almost 900 signatories endorsed the one-sentence open letter, spearheaded by the nonprofit organisation Center for AI Safety.

Why is it relevant? First, it's yet another warning (see last week's issue) that we should be much more concerned about the potentially catastrophic effects of future AI on humanity. Compared to what's happening now, the coming ramifications could be far more dire. Second, some of the signatories are behind companies that are pushing the boundaries of AI.


// CONTENT POLICY //

Meta threatens to pull news content from California over proposed rules

Meta has threatened to remove news content in California if state legislation were passed that required tech companies to pay publishers. The recently proposed California Journalism Preservation Act calls for platforms to pay a fee to news providers whose work appears on their services. 

In a tweet, Meta spokesman Andy Stone said the bill would predominantly benefit large, out-of-state media companies using the pretext of supporting California publishers.

Why is it relevant? This is taking place in parallel with Canada’s attempt to introduce similar legislation. Meta (and Google) told the Canadian Senate’s Standing Committee on Transport and Communications that it would have to withdraw from the country should the proposed bill pass as it stands.


// PRIVACY //

Amazon to pay settlement over children’s privacy lawsuit

Amazon will be required to pay USD25 million (EUR23.3 million) to the US Federal Trade Commission (FTC) to settle allegations that it violated children's privacy rights by failing to delete Alexa recordings as requested by parents. The FTC's order must still be approved by the federal court.

The FTC’s investigation determined that Amazon had unlawfully used voice recordings to improve its Alexa algorithm for years. 

Why is it relevant? Although the technology is different, it reminds us of something similar: Companies training their models with personal data retrieved without consent. Amazon’s denial of the accusations will do little to appease parents after the FTC determined that the company deceived parents about its data deletion practices. 




// SHUTDOWNS //

Internet access disrupted in Africa: Authorities in Mauritania cut off the mobile internet last week, MENA-based non-profit SMEX reported. The Senegalese government imposed restrictions on mobile data and social media platforms, both actions following protests over the sentencing of opposition leader Ousmane Sonko. Senegal’s restrictions were confirmed by Netblocks, a global internet monitoring service, which said that authorities placed restrictions to prevent the ‘dissemination of hateful and subversive messages in the context of public order disturbances’.


The week ahead (5–12 June)

5–7 June: Re:publica returns to Berlin this week for its annual digital society festival. This year’s theme is money.

5–8 June: Another major meet-up, RightsCon 23, is taking place in Costa Rica and online. On the sidelines: The GFCE's Regional Meeting for the Americas and Caribbean 2023.

5–8 June: If you’re a regulator: ITU’s Global Symposium for Regulators 2023 is taking place in Sharm el-Sheikh, Egypt, and online. 

7 June: ENISA’s AI Cybersecurity Conference takes place in Brussels and online. AI is set to take centre stage.

12 June: It's the last day to contribute to the US National Telecommunications and Information Administration (NTIA) request for comment on algorithmic accountability.

12–15 June: The week-long ICANN77 is taking place in Washington, DC, and online.

#WebDebate on Tech Diplomacy

Join us online tomorrow, Tuesday, 6 June for Why and how should countries engage in tech diplomacy? starting at 13:00 UTC, with quite a line-up of special guests.


#ReadingCorner

Children and the metaverse

Meta’s release – and Apple’s planned release – of mixed-reality headsets may reignite people’s interest in the metaverse. This means more users might start spending their time in the metaverse. Which probably means that more kids will give it a go.

UNICEF and Diplo’s latest report, The Metaverse, Extended Reality and Children, considers the potential effects – both good and bad – that the metaverse has on children; the drivers of and predictions for the growth of the metaverse; and the regulatory and policy challenges posed by the metaverse.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #113 – 29 May 2023


Dear all,

OpenAI CEO Sam Altman was in the news again, not only because of the European tour he’s embarked on, but over things he said and wrote last week. In other news, Microsoft president Brad Smith joined in the private sector’s call for regulating AI. Meta was hit with a historic fine over data mishandling, while the Five Eyes have attributed a recent spate of cyberattacks to China.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

OpenAI’s Sam Altman says forget about existing AI, it’s future AI that should worry us

There were two reasons why OpenAI CEO Sam Altman made headlines last week. The first concerns a threat he made to European lawmakers (which he then took back) about regulating AI. That’s about regulating existing AI.

The second is his warning on the existential risks which AI could pose to humanity. That’s about regulating future AI. Let’s start with this one.

Regulating future AI… now   

Doomsday theories abound these days. We just don’t know if we’ll see AI take over the world in our lifetime, or that of our children – or if it will ever even come to that at all. 

Sam Altman, the man behind OpenAI's ChatGPT, which took the world by storm in the space of a few weeks, is probably one of the few people who could predict what AI might be capable of in ten years' time, within acceptable levels of miscalculation. (That's also why he's in our Highlight section again this week.)

In case he felt he wasn't vocal enough during the recent hearing before the US Senate Judiciary Committee, he's now written about it again: 'Given the picture as we see it now, it's conceivable that within the next ten years, AI systems will exceed expert skill level in most domains… superintelligence will be more powerful than other technologies humanity has had to contend with in the past.' That would give us ten years before it could all go awry. Considering the time it takes for an EU regulation to see the light of day, ten years is not a long time.

So how should we regulate future AI? Altman sees a three-pronged approach to what he calls superintelligence. The first is a government-backed project where companies agree to safety guardrails based on the rate of growth in AI capability (however this will be measured). This reminds us of what economist Samuel Hammond wrote recently on the need for a Manhattan Project for AI Safety.

The second is to form an international authority, similar to the International Atomic Energy Agency, with authority over AI above a certain capability, to ‘inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc’. 

The third is more research on safety and alignment issues, but we won’t go into this for now.

What is interesting here is the emphasis on regulations based on capabilities. It’s along the same lines as what he argued before US lawmakers the week before: In his view, the stronger or more powerful the algorithm or resource, the stricter the rules should be. By comparison, the EU’s upcoming AI Act takes on a risk-based approach: the higher the risk, the stricter the rules.

By this reasoning, models that fall below Altman’s proposed capability threshold would not be included under this (or any?) regulation. Why? He thinks that (a) today’s models are not as risky as future versions will be, and (b) that companies and open-source projects shouldn’t have to face burdensome mechanisms like licences or audits. ‘By contrast’, he writes, ‘the systems we are concerned about will have power beyond any technology yet created, and we should be careful not to water down the focus on them by applying similar standards to technology far below this bar.’ 
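
The capability-vs-risk distinction is easier to see in code. Below is a minimal, hypothetical sketch (ours, not Altman’s or the EU’s) of the two regulatory triggers; the threshold value, tier names, and obligations are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of the two regulatory triggers discussed
# above. The threshold, tiers, and obligation names are illustrative
# assumptions, not anything proposed by Altman or found in the AI Act.

def obligations_capability_based(capability_score: float,
                                 threshold: float = 0.8) -> list[str]:
    """Altman-style: only models above a capability threshold face rules."""
    if capability_score >= threshold:
        return ["licence required", "independent audit", "deployment limits"]
    return []  # below the bar: no extra obligations


def obligations_risk_based(risk_tier: str) -> list[str]:
    """EU AI Act-style: obligations scale with the assessed risk tier."""
    tiers = {
        "unacceptable": ["prohibited"],
        "high": ["conformity assessment", "data governance", "human oversight"],
        "limited": ["transparency notice"],
        "minimal": [],
    }
    return tiers[risk_tier]


# The same model can land in very different places under the two regimes:
print(obligations_capability_based(0.4))  # -> [] (unregulated today)
print(obligations_risk_based("high"))     # -> ['conformity assessment', ...]
```

The sketch shows Altman’s point in miniature: under a capability threshold, today’s models fall outside the rules entirely, while under a risk-based approach the same model could still attract obligations if deployed in a high-risk setting.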

(Over-) regulating existing AI 

He does have a point. If more powerful models are misused, their power to cause harm is significantly higher. The rules that would apply to riskier models should therefore be more onerous. And he’s probably right that a moratorium wouldn’t stop the advancements from continuing in secret.

But there are also major flaws with Altman’s logic. First, it’s not an either/or scenario, as he suggests. Shifting the focus to tomorrow’s AI, just because it will be more powerful, won’t make today’s issues go away. Today’s issues still need to be tackled, and soon.

This logic explains why he felt compelled to criticise the EU’s upcoming AI Act as a case of over-regulation. Licences and regulations, to him, are an unnecessary burden on companies whose systems carry more or less the same risks as other internet technologies (risks he presumably also considers insignificant compared to those that more powerful AI systems will pose in the next ten years).

Second, existing models are the basis for more powerful ones (unless he knows something that we don’t). Hence, the project and authority that Altman envisions should start addressing the issues we see today, based on the capabilities we have today. Guardrails need to be in place today. 

And yet, it’s not Altman’s criticism that angered European Commissioner Thierry Breton, but rather his threat of pulling out of Europe over the proposed rules. If there’s one action that a threat could trigger, it would be the immediate implementation of guardrails.

Screenshot: Thierry Breton’s tweet of 25 May: ‘There is no point in attempting blackmail – claiming that by crafting a clear framework, Europe is holding up the rollout of generative #AI. To the contrary! With the “AI Pact” I proposed, we aim to assist companies in their preparations for the EU AI Act.’ The words ‘Is that a threat?’ appear over a photo of a woman with blond hair.

Digital policy roundup (22–29 May)
// AI GOVERNANCE //

Microsoft proposes five-point blueprint for AI regulation

Microsoft has published a blueprint for governing AI, which includes placing tight rules (or safety brakes) on high-risk AI systems that are being deployed to control critical infrastructure, and creating new rules to govern highly capable AI foundation models. In his foreword to the blueprint, Microsoft president Brad Smith also called for a new government agency to implement these new rules. (A rough illustration in code follows the blueprint below.)

A five-point blueprint for governing AI

  1. Implement and build upon new government-led AI safety frameworks
  2. Require effective safety brakes for AI systems that control critical infrastructure
  3. Develop a broader legal and regulatory framework based on the technology architecture for AI
  4. Promote transparency and ensure academic and public access to AI
  5. Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology

Source: Microsoft
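
To make point 2 concrete, here is a minimal sketch of what a ‘safety brake’ might look like in code, assuming a human operator who can always halt the AI controller. The class and method names are our own illustration; Microsoft’s blueprint does not prescribe a design.

```python
# A minimal, hypothetical sketch of a 'safety brake' in the spirit of
# point 2 above: an AI controller for critical infrastructure that a
# human operator can always halt. Names are ours, not Microsoft's.

class SafetyBrake:
    """Wraps an AI controller so that a human override always wins."""

    def __init__(self, ai_controller):
        self.ai_controller = ai_controller
        self.engaged = False  # becomes True once an operator pulls the brake

    def engage(self):
        """Human operator halts AI control (e.g. of a power grid)."""
        self.engaged = True

    def next_action(self, sensor_data: dict) -> str:
        if self.engaged:
            return "fail-safe: hold last known safe state"
        return self.ai_controller(sensor_data)


# Illustrative use with a stand-in controller:
brake = SafetyBrake(lambda data: f"adjust load to {data['demand']} MW")
print(brake.next_action({"demand": 420}))  # AI in control
brake.engage()
print(brake.next_action({"demand": 999}))  # the brake wins
```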

Why is it relevant? First, some of the proposals in Microsoft’s blueprint are similar to OpenAI CEO Sam Altman’s proposals. For instance:

  • Microsoft proposes ways of governing highly capable AI foundation models – more or less what Altman describes as superintelligent systems. These powerful new AI models are at the frontier of research and development, and are emerging from advanced data centres, trained on internet-scale datasets.
  • Like Altman, Smith is not thinking about ‘the rich ecosystem of AI models that exists today’, but rather the small class of cutting-edge AI models that are redefining the frontier.
  • And, again just like Altman, Smith believes in a framework consisting of rules, licensing requirements, and testing.

Second, Microsoft’s blueprint goes a step further (and is closer to the EU’s risk-based approach) in calling for safety brakes on AI systems used within critical infrastructure sectors. Not all AI systems used in these sectors are high-risk, but those that manage or control infrastructure systems for electricity grids or water systems, for instance, require tighter controls.


EU, Google to develop voluntary AI pact ahead of new AI rules

Thierry Breton, the commissioner in charge of the EU’s digital affairs, and Google chief executive Sundar Pichai agreed last week to work on a voluntary AI pact ahead of new regulations. The agreement will help companies develop and implement responsible AI practices. 

Why is it relevant? First, Breton said companies can’t afford to wait until the AI regulation is in place to start complying with the rules. Second, the commissioner used his meeting with Pichai to call out other companies that pick and choose the regulations they’ll implement. We’re assuming he’s referring to OpenAI and Twitter.


// DATA PROTECTION //

Meta’s record fine puts pressure on EU, USA to conclude data transfer framework  

Meta has been fined a record-breaking EUR 1.2 billion (USD 1.29 billion) and given six months to stop transferring European Facebook users’ data from the EU to the USA.

The fine was imposed by the Irish Data Protection Commissioner (DPC) after the company continued to transfer data despite the EU Court of Justice’s ruling of 2020 invalidating the EU-USA Privacy Shield framework. The data protection regulator concluded that the legal basis that Meta used to continue transferring data did not afford European citizens adequate protection of their rights. 

Why is it relevant? The company will appeal, so there’s still a long way to go before the fine is confirmed. But the pressure’s on for EU and US officials negotiating the new data protection framework. The new Trans-Atlantic Data Privacy Framework, announced in March 2022, has not yet been finalised.




// CYBERCRIME //

Five Eyes attribute cyberattacks to China

The intelligence agencies of the USA, Australia, Canada, New Zealand, and the UK – called the Five Eyes – have attributed recent cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon.

Responding to the joint cybersecurity advisory issued by the intelligence agencies, China’s foreign ministry spokesperson Mao Ning dismissed the advisory as disinformation. ‘No matter how the tactics change, it does not change the fact that the US is the empire of hacking,’ she said.


// GDC //

Public institutions ‘ill-equipped to assess and respond to digital challenges’ – UN Secretary-General

Most governments do not have sufficient skills to respond to digital challenges, a result of decades of underinvestment in state capacities. The UN Secretary-General’s latest policy brief says that government capacities should therefore be a priority for cooperation on digital issues. 

In case you’re not following the process already: The Global Digital Compact is an initiative of the Secretary-General for promoting international digital cooperation. UN member states are expected to agree on the principles forming part of the Global Digital Compact during next year’s Summit of the Future. 

If you’ve already read the brief: We wouldn’t blame you for thinking that the brief proposes quite a few mechanisms at a time when there are already hundreds of them in place. After all, the Secretary-General’s initiative followed a report which recommended that we ‘make existing intergovernmental forums and mechanisms fit for the digital age rather than rush to create new mechanisms’.

If you haven’t done so already: Consider contributing to the informal consultations. The next two deep dives are in two weeks’ time.


The week ahead (29 May–4 June)

30 May: The G7 AI working group’s first meeting effectively kickstarts the Hiroshima AI process.

30–31 May: The EU-US Trade and Technology Council (TTC) meets in Sweden. No, they won’t tackle data flows. Yes, they will tackle a host of other issues – from AI to green tech.

31 May–2 June: The Council of Europe’s Committee on AI will hold its 6th meeting in Strasbourg, under the chairmanship of Ambassador Thomas Schneider, who was re-elected during the 5th meeting.

31 May–2 June: The 15th International Conference on Cyber Conflict (CyCon), organised by the NATO Cooperative Cyber Defence Centre of Excellence, takes place in Tallinn, Estonia.

31 May: Join us online or in Geneva for our conference on Building Trust in Digital Identities. As more governments around the world are exploring and implementing digital identity (or e-ID) solutions, we look at safety, security, and interoperability issues.

For more events, bookmark the DW observatory’s calendar of global policy events.


#ReadingCorner
A human hand sketching a robot on graph paper

How do we avoid becoming knowledge slaves? By developing bottom-up AI

‘If you’ve ever used these [ChatGPT or similar] tools, you might have realised that you’re revealing your thoughts (and possibly emotions) through your questions and interactions with the AI platforms,’ writes Diplo’s Executive Director Dr Jovan Kurbalija. ‘You can therefore imagine the huge amount of data these AI tools are gathering and the patterns that they’re able to extract from the way we think.’ 

The consequence is that we “risk becoming victims of ‘knowledge slavery’ where corporate and/or government AI monopolies control our access to our knowledge.” There’s a solution: developing bottom-up AI. Read the full article.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #112 – 22 May 2023


Dear readers,

The search for ways to govern AI reached the US Senate Judiciary halls last week, with a hearing involving OpenAI’s Sam Altman, among others. The G7 made negligible progress on tackling AI issues, but significant progress on operationalising the Data Free Flow with Trust approach.   

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

US Senate hearing: 10 key messages from OpenAI’s CEO Sam Altman 

If OpenAI CEO Sam Altman’s hearing before the US Congress last week reminded you of Mark Zuckerberg’s testimony a few years ago, you’re not alone. Both CEOs testified before the Senate Judiciary Committee (albeit before different subcommittees), and both called for regulation of their respective industries.

However, there’s a significant distinction between the two. Zuckerberg was asked to testify in 2018 primarily due to concerns surrounding data privacy and the Cambridge Analytica scandal. In Altman’s case, there was no scandal: lawmakers are trying to figure out how to navigate the uncharted territory of AI. And with Altman’s hearing coming several years later, lawmakers now have more familiarity with policies and approaches that proved effective, and those that failed. 

Here are ten key messages Altman delivered to lawmakers during last week’s subcommittee hearing.

1. We need regulations that employ a capabilities-based approach…

Amid discussions around the EU’s forthcoming AI Act, which will take on a risk-based approach (the higher the risk, the stricter the rules), Altman argued that US lawmakers should favour a power- or capabilities-based strategy (the stronger or more powerful the algorithm, the stricter the rules). 

He suggested that lawmakers consider ‘a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities’.

What would these capabilities look like? According to Altman, the benchmark would be determined by what the models can accomplish. So presumably, one would take AI’s abilities at the time of regulation as a starting point, and gradually increase the benchmarks as AI improves its abilities.

2. Regulations that will tackle more powerful models…

We know it takes time for legislation to be developed. But let’s say lawmakers were to introduce new legislation tomorrow: Altman thinks that the starting point should be more powerful models, rather than what exists right now.

‘Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and refuse harmful requests, than any other widely deployed model of similar capability… We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.’

3. Regulations that acknowledge that users are just as responsible…

Altman did not mince words: ‘Certainly companies like ours bear a lot of responsibility for the tools that we put out in the world, but tool users do as well.’

Hence the need for a new liability framework, Altman restated.

4. …And regulations that place the burden on larger companies.

Altman notes that regulation comes at the risk of slowing down the American industry ‘in such a way that China or somebody else makes faster progress.’ 

So how should lawmakers deal with this risk? Altman suggests that the regulatory pressure should be on the larger companies that have the resources to handle the burden, unlike smaller companies. ‘We don’t wanna slow down smaller startups. We don’t wanna slow down open source efforts.’ 

5. Independent scorecards are a great idea, as long as they recognise that it’s ‘early stages’

When a Senator asked Altman whether there should be independent testing labs to provide scorecards that indicate ‘whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out’, Altman’s positive response was followed by a caveat.

‘These models are getting more accurate over time… (but) this technology is in its early stages. It definitely still makes mistakes… Users are pretty sophisticated and understand where the mistakes are… that they need to be responsible for verifying what the models say, that they go off and check it.’

The question is, when will (it be convenient to say that) the technology has outgrown its early stages? 

6. Labels are another great idea for telling fact from fiction

Altman points out that to help people understand what they’re reading and viewing, it helps if there are labels telling them what they’re looking at. ‘People need to know if they’re talking to an AI, if content that they’re looking at might be generated or might not’.

The generated content will still be out there, but at least creators of generated content can be transparent with their viewers, and viewers can make informed choices, he said.

7. It takes three to tango: the combined effort of government, the private sector, and users to tackle AI governance

Neither regulation nor scorecards or labels will be sufficient on their own. Altman referred to the advent of photoshopped images, highlighting how quickly people learned that images might be photoshopped and the tool misused.

The same applies to AI: ‘It’s going to require a combination of companies doing the right thing, regulation and public education.’

8. Generative AI won’t be the downfall of news organisations

The reason is simple, according to Altman: ‘The current version of GPT-4 ended training in 2021. It’s not a good way to find recent news.’

He acknowledges that other generative tools built on top of ChatGPT can pose a risk for news organisations (presumably referring to the ongoing battle in Canada, and previously in Australia, on media bargaining), but also thinks that it was the internet that let news organisations down.

9. AI won’t be the downfall of jobs, either

Altman reassured lawmakers that ‘GPT-4 and other systems like it are good at doing tasks, not jobs’. We reckon jobs are made up of tasks, and that’s why Altman might have chosen different words later in his testimony.

‘GPT-4 will entirely automate away some jobs, and it will create new ones that we believe will be much better… This has been continually happening… So there will be an impact on jobs. We try to be very clear about that, and I think it will require partnership between the industry and government, but mostly action by the government to figure out how we want to mitigate that.’

10. Stay calm and carry on: GPT is ‘a tool, not a creature’

We had little doubt about that, but what Altman said next might have been aimed at reassuring those who said they’re worried about humanity’s future: GPT-4 is a tool ‘that people have a great deal of control over and how they use it.’ 
The question for Altman is: how far are we from losing control over AI? It’s a question no one asked him.


Digital policy roundup (15–22 May)
// AI & DATA //

G7 launches Hiroshima AI dialogue process

The G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Sunday’s announcement, which came at the end of the three-day summit in Hiroshima, Japan, provides the details of what the G7 digital ministers agreed to in April. The working group tasked with the Hiroshima AI process is expected to start its work this year. 

The G7 also agreed to support the development of AI standards. (Refresher: Here’s the G7 digital ministers’ Action Plan on AI interoperability.)  

Why is this relevant? On the home front, with the exception of a few legislative hotspots working on AI rules, most governments are worrying about generative AI (including ChatGPT) but are not yet ready to take legislative action. On the global front, while the G7’s Hiroshima AI process is at the forefront of tackling generative AI, the group acknowledges that there’s a serious discrepancy among the G7 member states’ approaches to policy. The challenges are different, but the results are similar. 

G7 greenlights plans for Data Free Flow with Trust concept

The G7 had firmer plans in place for data flows. As anticipated, the G7 endorsed the plan for operationalising the Data Free Flow with Trust (DFFT) concept, outlined last month by the G7 digital ministers.

The leaders’ joint statement draws attention to the difference between unjustifiable data localisation regulations and those that serve the public interests of individual countries. The practical application of this disparate treatment remains uncertain; the new Institutional Arrangement for Partnership (IAP), which will be led by the OECD, has a lot of work ahead.

Why is this relevant? The IAP’s work won’t be easy. As the G7 digital ministers acknowledged, there are significant differences in how G7 states (read: the USA and EU countries) approach cross-border data flows. But as any good negotiator will say, identifying commonalities offers a solid foundation, so the G7 communiqué’s language (also found in previous G7 and G20 declarations) remains promising. Expect accelerated progress on this initiative in the months to come. 


Ex-Google CEO says AI regulation should be left to companies 

Former Google CEO Eric Schmidt believes that governments should leave AI regulation to companies since no one outside the tech industry has the necessary expertise. Watch the report or read the transcript (excerpt):

NBC: You’ve described the need for guardrails, and what I’ve heard from you is, we should not put restrictive regulations from the outside, certainly from policymakers who don’t understand it. I have to say I don’t hear a lot of guardrails around the industry in that. It really just, as I’m understanding it from you, comes down to what the industry decides for itself.

Eric Schmidt: When this technology becomes more broadly available, which it will and very quickly, the problem is going to be much worse. I would much rather have the current companies define reasonable boundaries. 

NBC: It shouldn’t be a regulatory framework. It maybe shouldn’t even be a sort of a democratic vote. It should be the expertise within the industry that helps to sort that out. 

Eric Schmidt: The industry will first do that because there’s no way a non-industry person can understand what is possible. It’s just too new, too hard, there’s not the expertise. There’s no one in the government who can get it right, but the industry can roughly get it right and then the government can put a regulatory structure around it.


// SECTION 230 //

Section 230 unaffected by two US Supreme Court judgements

As anticipated, the US Supreme Court left Section 230 untouched in two judgements involving families of people killed by Islamist extremists overseas. The families tried to hold social media platforms liable for allowing extremists on their platforms or recommending such content to users, arguing that Section 230 (a rule that protects internet platforms from liability for third-party content posted on the platforms) should not shield the platforms.

What the Twitter vs Taamneh (21-1496) judgement says: US Supreme Court justices agreed unanimously to reverse a lower court’s judgement against Twitter, in a case initiated by the US relatives of Nawras Alassaf, who was killed in Istanbul in 2017. The Supreme Court struck down claims that Twitter aided extremist groups: Twitter’s ‘algorithms appear agnostic as to the nature of the content, matching any content (including ISIS’ content) with any user who is more likely to view that content. The fact that these algorithms matched some ISIS content with some users, therefore, does not convert defendants’ passive assistance into active abetting.’

What the Gonzalez vs Google (21-1333) judgement says: In its judgement in a parallel case, the US Supreme Court sent back the lawsuit brought by the family of Nohemi Gonzalez, who was fatally shot in Paris in 2015, to the lower court. The Supreme Court declined to even address the scope of Section 230, as the family’s claims were likely to fail in the light of the Twitter case.


// TIKTOK //

EU unfazed by TikTok’s cultural diplomacy at Cannes

TikTok’s partnership with the Festival de Cannes was the talk of the French town last week. But TikTok’s cultural diplomacy efforts, which appeared at Cannes for the second year, failed to impress the European Commission.

Referring to TikTok’s appearance at Cannes in an interview on France TV’s Télématin (jump to 1’17’), European Commissioner Thierry Breton said that the company ‘still (has) a lot of room for improvement’, especially when it comes to safeguarding children’s data. Breton also confirmed that he was in talks with TikTok’s CEO recently, presumably about the Digital Services Act commitments, which very large platforms need to deliver on by 25 August.


The week ahead (22–28 May)

23–25 May: The 3rd edition of the Quantum Matter International Conference – QUANTUMatter 2023 – takes place in Madrid, Spain. Experts will tackle the latest in quantum technologies, emerging quantum materials and novel generations of quantum communication protocols, quantum sensing, and quantum simulations.

23–26 May: The Open-Ended Working Group (OEWG) will hold informal intersessional meetings that will comprise the Chair’s informal roundtable discussion on capacity-building and discussions on topics under the OEWG’s mandate.

24 May: If you haven’t yet proposed a session for this year’s Internet Governance Forum (IGF) (to be held on 8–12 October in Kyoto, Japan), you still have a couple of days left until the extended deadline.

24–25 May: The 69th meeting of TF-CSIRT, the task force that coordinates Computer Security and Incident Response Teams in Europe, takes place in Bucharest, Romania.

24–26 May: The 16th international Computers, Privacy, and Data Protection (CPDP) conference, taking place in Brussels and online, will deal with ‘Ideas that drive our digital world’, mostly related to AI governance, and, well, data protection.

25 May: There will be lots to talk about during the Global Digital Compact’s next thematic deep dive, on AI and emerging technologies (UTC’s afternoon) and digital trust and security (UTC’s evening). Register to participate.

For more events, bookmark the DW observatory’s calendar of global policy events.


Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation


DW Weekly #111 – 15 May 2023


Dear readers,

Once again we’re starting with an AI-related highlight: There’s been progress in the AI Act’s legislative journey, and the EU could well see the proposed rules come into effect very soon. In other news, ChatGPT is now being monitored in Latin America, while TikTok got kicked off the Austrian government’s phones. 

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Europe’s AI Act moves ahead: Why, how, and what’s next

The EU’s proposed AI Act moved ahead last week after a key vote at the committee level. In practice, this means that the proposed law could very well be passed by the end of this year. Let’s break all of this down and see what this means for companies and consumers.

Who voted. Last week, members of the European Parliament in two committees – the Internal Market Committee and the Civil Liberties Committee – voted on hundreds of amendments made to the European Commission’s original draft rules.

The AI Act proposes a sliding scale of rules based on risk. Practices with an unacceptable level of risk will be prohibited; those considered high-risk will carry a strict set of obligations; less risky ones will have more relaxed rules, and so on.  

Why the vote is relevant. First, lawmakers wanted to ensure that general-purpose AI – like ChatGPT – is captured by the new draft rules. Second, in the grand scheme of things, when one of the principal EU entities agrees on a text, that marks a significant milestone in the EU’s multi-stage legislative process.

Infographic: The EU’s ordinary legislative procedure, from the European Commission’s initiative and first draft, through separate reviews by the Council of the EU and the European Parliament, to joint discussions and the final text. To give you a better idea of where we are: As soon as Parliament approves its draft text in plenary during the upcoming 12–15 June session, the AI Act advances to the trilogue talks. The EU Council’s negotiating text was approved in December. Source: Based on a diagram from artificialintelligenceact.eu

What Parliament’s version of the AI Act says. There are many new amendments, so we’ve rounded up the most important:

  • Tough rules for ChatGPT-like tools. Parliament’s amendments regulate ChatGPT and similar tools. The proposed rules define generative AI systems under a new category called Foundation Models (this is what underpins generative AI tools). Providers would have to abide by similar obligations as high-risk systems. This means applying safety checks, data governance measures, and risk mitigations before introducing models into the market. The proposed rules would also oblige them to consider foreseeable risks to health, safety, and democracy.
  • Copyright tackled, too (sort of). Providers would also need to be transparent: They would need to inform people that content was machine-generated and provide a summary of (any) copyrighted materials used to train their AIs. It’s then up to the rights holders to sue for copyright infringement if they so decide. (See the sketch after this list.)
  • New prohibited practices introduced. Parliament wants to see intrusive and discriminatory uses of AI systems banned. These would include real-time remote biometric identification systems in public spaces, the use of emotion recognition systems by law enforcement and employers, and the indiscriminate scraping of biometric data from social media or CCTV footage for creating databases (reminds us of Clearview AI’s practices – coincidentally, last week Austria’s Data Protection Authority ruled against the company).
  • Consumers may complain. The proposal boosts people’s right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that impact their rights.
  • Research activities excluded. Research activities, as well as AI components provided under open-source licences, are excluded. So if a company says it is experimenting with a system, it might be able to avoid the rules. But if it then implements that system? The rules would apply.
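
As flagged in the copyright bullet above, here is a minimal, hypothetical sketch of what such a disclosure could look like as a data record. The field names are our own assumptions; Parliament’s text prescribes the duties, not a format.

```python
# A minimal, hypothetical sketch of the transparency duties in
# Parliament's text: label machine-generated content and summarise the
# copyrighted material used in training. Field names are assumptions.

from dataclasses import dataclass, field


@dataclass
class GeneratedContentDisclosure:
    content: str
    machine_generated: bool = True  # duty: tell people the content is AI-made
    copyrighted_sources_summary: list[str] = field(default_factory=list)

    def notice(self) -> str:
        sources = ", ".join(self.copyrighted_sources_summary) or "none declared"
        return ("This content was machine-generated. "
                f"Copyrighted training material (summary): {sources}")


disclosure = GeneratedContentDisclosure(
    content="An AI-written paragraph…",
    copyrighted_sources_summary=["news archive X", "book corpus Y"],
)
print(disclosure.notice())
```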

Compare and contrast with China’s plans. Interestingly, the draft rules on generative AI, which China published last month, contain similar provisions on transparency, accountability, data protection, and risk management. The Chinese version does, however, go much further on copyright (tools can’t be trained on data that infringes intellectual property rights) and the accuracy of information (whatever is generated by AI needs to be true and accurate) – two major concerns for governments around the world.

What to expect in the next stage. Once the discussions between the EU Council and European Parliament (on the Commission’s proposal) start – the so-called trilogues – there’s a risk that the rules could get watered down. It’s not necessarily a matter of diluting the stringent rules for providers – six months on from the introduction of ChatGPT, there’s a pretty clear understanding of what these tools can, cannot, and shouldn’t be allowed to do. 

Rather, it’s more a matter of governments wanting to ensure their own freedom to use AI tools in ways they deem essential for people’s safety (including some practices that Parliament wants banned) and to address national security concerns when needed. 

As for timelines, there’s pressure from all sides to see this through by the end of the year. Providers want legal certainty; users want protection, and the Spanish Presidency (which takes the helm of the EU Council from June to December this year) will want to be remembered for seeing the law through.

Digital policy roundup (8–15 May)
// AI //

Data protection authorities in Latin America monitoring ChatGPT

Latin American data protection watchdogs forming part of the Ibero-American Data Protection Network (RIPD) are monitoring OpenAI’s ChatGPT for potential privacy breaches. The network, comprising 16 authorities from 12 countries, is also coordinating action around children’s access to the AI tool and other risks, such as misinformation.

Why is it relevant? ChatGPT has become a global concern, far beyond the investigative action we’d expect from the usual regulatory hotspots (USA, Europe, China).


// COMPETITION //

European Commission approves Microsoft’s acquisition of Activision

Microsoft’s acquisition of Activision, the creator of the widely popular Call of Duty video game franchise, has received the European Commission’s seal of approval. The approval is conditional on Microsoft fully adhering to the commitments it offered.

Since the EU’s antitrust regulators believe that Microsoft could harm competition if it made Activision’s games exclusive to its own cloud game streaming service, the company will now have to give consumers a licence to stream anywhere they like. 

Why is it relevant? This contrasts with last month’s decision by the UK’s Competition and Markets Authority (CMA) to block the acquisition over concerns that the merger would negatively affect the cloud gaming industry. This decision will be confirmed or rejected on appeal. In the USA, the Federal Trade Commission’s case is scheduled for a hearing on 2 August. 

Infographic: Microsoft’s commitments to avoid anti-competitive effects – no harm to the distribution of Activision Blizzard console games, and free licence access to Activision Blizzard’s games for cloud game streaming providers and users, creating opportunities for innovation and removing barriers for competitors.

// TIKTOK //

Austria blocks TikTok from government phones

The Austrian government has joined other countries in banning Chinese-owned TikTok from federal government officials’ work phones.

The announcement was made by Austria’s Federal Chancellor, together with his vice-chancellor (and minister of culture and arts), and the ministers for finance and home affairs. Citing the ban by the European Commission in February, Austria is concerned with three issues:

  1. Foreign authorities (read: China) potentially having technical access to official devices through the app’s functions and exploiting any vulnerabilities to access sensitive information.
  2. The potential for data protection and security breaches through the collection of a large amount of personal information and sensor data.
  3. The risk of influencing the opinion-forming process of public officials, such as through the manipulation of search results.

Why is it relevant? This adds momentum to the ongoing anti-TikTok wave in Europe and the USA and adds pressure on the company to prove its trustworthiness and security measures to avoid being blocked.


// ANTITRUST //

Italy investigating Apple for alleged abuse of dominant position in app market

The Italian competition authority (the Autorità Garante della Concorrenza e del Mercato – AGCM) has launched an investigation into Apple’s uneven application of its own app tracking policies, which the agency says is a potential abuse of the company’s dominant position in the online app market. 

What’s the issue about? If you’re an iPhone or iPad user, you might have noticed a privacy pop-up (such as the one below) when installing third-party apps that try to track you. That’s a feature of Apple’s App Tracking Transparency (ATT) policy introduced two years ago. The problem is that the same interruption doesn’t apply to Apple itself when its own apps try to track you, so users are more likely to think twice before allowing a third-party app to track their activity. In addition, the advertising data passed on to third-party developers is inferior to the data that Apple possesses, putting third-party developers at a disadvantage.

Screenshot: An iPhone privacy pop-up reads: Allow ‘PalAbout’ to track your activity across other companies’ apps and websites? Your data will be used to deliver personalised ads to you – followed by two options: 1. Ask App Not to Track, and 2. Allow.

Why is it relevant? Not only are Apple’s App Store practices being probed by the EU’s competition authority in at least three separate cases (there’s a fourth concerning mobile app payments, which continued last week), but the ATT policy itself is being investigated elsewhere, including the UK, Germany, and California.


// DATA PROTECTION //

GSMA gets GDPR fine for use of facial recognition during annual event 

The Spanish data protection authority has confirmed that GSMA, the organiser of the annual Mobile World Congress (MWC), violated the GDPR. The fine of EUR 200,000 (USD 218,000) was also confirmed on appeal. 

The authority found that GSMA failed to conduct the necessary impact assessments before deciding to collect biometric data on almost 20,000 participants for the MWC’s 2021 edition. Worse, providing sensitive biometric information was a mandatory step of the registration procedure, with no possibility to opt out.

Why is it relevant? If you’re an event organiser and you’re thinking of using facial recognition to automate participants’ entry to your event, think again. Despite our tendency to focus mainly on Big Tech’s use of our data, the GDPR covers more than that. Every person or organisation handling personal data, including sensitive details like biometrics, falls under the regulation.


// SHUTDOWNS //

Internet shutdowns amid unrest

Internet access was cut off in several regions of Pakistan last week, while access to Twitter, Facebook, and YouTube has been entirely restricted in the wake of the arrest of Pakistan’s former Prime Minister Imran Khan.

In Sudan, ongoing conflict has led to energy shortages, which, in turn, led to prolonged internet outages.

Both shutdowns were confirmed by NetBlocks, a global internet monitoring service.


Video still: Trudeau, with the closed caption ‘still saying that it doesn’t want to pay journalists for the work they do’.

Trudeau slams Meta. Canada’s Prime Minister Justin Trudeau rebuked Meta for refusing to compensate publishers for news articles that appear on its platform, calling it ‘deeply irresponsible’. The day before, Google and Meta testified at the Senate’s Standing Committee on Transport and Communications hearing, urging revisions to the proposed online news bill (C-18) to avoid their departure from Canada.

The week ahead (15–21 May)

16 May: OpenAI CEO Sam Altman and IBM Chief Privacy and Trust Officer Christina Montgomery appear at a hearing of the US Senate Judiciary Subcommittee on Privacy, Technology and the Law to discuss AI governance and oversight of rules. Watch live at 10:00 EDT (14:00 UTC).

16 May: The first ministerial meeting of the new EU-India Trade and Technology Council (TTC), launched in February, takes place in Brussels today.

16–17 May: The agenda of heads of government attending the 4th Council of Europe Summit in Reykjavik, Iceland, includes AI governance.

17 May: Happy World Telecommunication and Information Society Day! The celebration marks the signing of the first International Telegraph Convention in 1865 and the creation of the Geneva-based International Telecommunication Union (ITU).

19–21 May: The G7 Summit takes place in Hiroshima, Japan, this week. As Japan’s Prime Minister announced recently, AI will also be on the agenda. (As a refresher, read our coverage in Weekly #108: ChatGPT to debut on multilateral agenda at G7 Summit).

For more events, bookmark the observatory’s calendar of global policy events.

#ReadingCorner
Sophos State of Ransomware 2023

Ransomware attacks: A sobering reality check
The latest edition of Sophos’ annual report, State of Ransomware 2023, confirms that ransomware remains a major threat, especially for target-rich, resource-poor organisations. Although ransomware attacks haven’t increased compared to the previous year, a higher number of attacks now involve encrypting data (and then stealing it). Read the report.

Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation



Digital Watch newsletter – Issue 79 – May 2023

In brief

Pentagon: The Discord leak is more serious than it seems

Every so often, intelligence from US agencies and their allies is exposed in serious leaks. April’s disclosure of some 50 top-secret documents on the gaming chat service Discord is one of the most flagrant.

WikiLeaks’ publication of diplomatic cables in 2010, Edward Snowden’s revelations in 2013, and the disclosure of the National Security Agency’s and the CIA’s hacking tools in 2016 and 2017 rank among the biggest leaks of the modern era.

Outrage or a shrug? Diminishing reactions

Each new leak seems to spark less and less indignation worldwide. So when another US intelligence leak surfaced on Discord (a relatively little-known social platform) in April, it attracted scant interest. Sensationalism can hamper law enforcement efforts, but indifference is not much help either.

The Discord leak was revealed on 6 April by the New York Times. Jack Teixeira, a 21-year-old airman first class in the Massachusetts Air National Guard, was behind the leak.

The FBI had little trouble identifying him. He had uploaded the documents to an online Discord chat (a server) that he unofficially administered, and followed the FBI’s investigation into his own leak as it unfolded. He was charged a few days later.

Mistaken for fake news

Before long, the leaked documents were being shared on other social media platforms by users who believed they were fake. The possibility that the documents were top secret did not seem to register.
As CNN reported, ‘most [Discord] users circulated the files initially thinking they were fake’, one Discord user said. By the time they were confirmed as authentic, they were already on Twitter and other platforms.


Terrible timing

Although there is never a good time, this leak came at a particularly sensitive point in the conflict between Russia and Ukraine.

Even though the data was not as detailed as in previous leaks, this latest breach provided intimate details about the current situation in Ukraine, as well as intelligence on two of the USA’s closest allies: South Korea and Israel.

While Europe was largely spared, the leaked information revealed that Ukraine has European special forces on the ground, and that nearly half of the tanks en route to Kyiv come from Poland and Slovenia. The leak’s collateral consequences extend to many countries.

Still in circulation

Days after the Pentagon announced its investigation, the leaked documents were still accessible on Twitter and other platforms, reviving the debate over social media companies’ responsibility in matters of national security. There is no one-size-fits-all solution to content moderation on social media, which makes follow-up all the harder.

Unfortunately, but predictably, leaks are inevitable, especially when classified intelligence is accessible to so many people. In 2019, 1.25 million US citizens held clearance to access the USA’s most closely guarded information.

One option, then, would be for social media platforms to tighten their content policies on leaked intelligence. If the former Twitter employee interviewed by CNN is right, ‘posting classified US military documents would probably not violate Twitter’s hacked materials policy’. Another would be for companies to strengthen their content moderation capabilities. To avoid placing excessive burdens on startups and smaller platforms, capabilities should be scaled to a platform’s user base (the framework used by the EU’s Digital Services Act is a good example).

The problem becomes more complex when illegal content is shared on platforms that use end-to-end encryption. As law enforcement agencies have repeatedly pointed out, while encryption undoubtedly plays an important role in protecting privacy, it also hampers their ability to identify, prosecute, and punish offences.

For now, we should focus on the fact that the latest leak was uploaded by a user to a public social media forum, despite the potential damage to the national security of his own country (the USA) and the risk to the citizens of a war-torn country (Ukraine). That is without doubt the greatest concern.


Barometer

Digital policy developments that made international headlines

The digital policy landscape changes daily. Here are the main developments from April. You’ll find more details in each update on the Digital Watch Observatory.

neutral

Global digital governance architecture

The G7 digital ministers will begin implementing Japan’s Data Free Flow with Trust (DFFT) plan through a new body, the Institutional Arrangement for Partnership (IAP), led by the Organisation for Economic Co-operation and Development (OECD). They also discussed AI, digital infrastructure, and competition.


neutral

Sustainable development

The UN World Data Forum, held in Hangzhou, China, called for better data governance and greater collaboration among governments to achieve a sustainable future. UN Secretary-General António Guterres said that data remains an essential element of development and progress in the 21st century.


increasing

Security

The Pentagon has opened an investigation into the leak of more than 50 classified documents that ended up on the social media platform Discord (see our story above). A joint international law enforcement operation seized Genesis Market, a dark web marketplace.

The European Commission announced a EUR 1.1 billion (USD 1.2 billion) plan to strengthen the EU’s capacity to counter cyberattacks and foster better coordination among member states.

TikTok was banned on government devices in Australia; Ireland’s National Cyber Security Centre also recommended that public officials refrain from using TikTok on their devices.
The Internet Watch Foundation’s (IWF) annual report found that imagery of the most severe forms of child sexual abuse is on the rise.


neutral

E-commerce and the internet economy

The UK’s Competition and Markets Authority (CMA) blocked Microsoft’s acquisition of Activision Blizzard, fearing a negative impact on the cloud gaming sector. Microsoft will appeal.

The European Commission designated 19 tech companies as very large online platforms (17) and very large online search engines (2), which will have to comply with stricter rules under the new Digital Services Act.
South Korea’s Fair Trade Commission (FTC) fined Google for unfair business practices. A group of Indian startups asked a local court to suspend Google’s new in-app billing system. In the UK, Google will allow Android developers to use alternative payment options.


neutral

Infrastructure

The EU Council and Parliament reached a provisional political agreement on the Chips Act, which aims to double the EU’s share of global chip production to 20% by 2030.


increasing

Digital rights

Governments around the world have launched investigations into OpenAI’s ChatGPT, mainly over company practices that violate people’s privacy and data protection rights (see our article below).

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities for authenticating users’ identities.
Members of the European Parliament voted against a proposal to allow transfers of EU citizens’ personal data to the USA under the new EU–US data protection framework.


neutral

Content policy

The Cyberspace Administration of China will run a three-month nationwide campaign to purge online circulation of fake news about Chinese companies. The aim is to let companies and entrepreneurs work in a healthy online public opinion environment.


neutral

Jurisdiction and legal issues

Brazil’s Supreme Court blocked – then reinstated – the messaging app Telegram for users in the country after the company failed to hand over data linked to a group of neo-Nazi organisations using the platform.
A Los Angeles court rejected a damages claim filed by a Tesla driver, after the company successfully argued that its partially automated driving software was not a self-driving system.


increasing

New technologies

In the USA, the Biden administration is studying potential accountability measures for AI systems. The National Telecommunications and Information Administration’s (NTIA) call for comments runs until 10 June. A US Democratic senator has introduced a bill to create a task force to review AI policy. The US Department of Homeland Security also announced a new task force charged with ‘leading the responsible use of AI to secure the homeland’ while defending against malicious uses of AI.

A group of 11 members of the European Parliament is urging the US president and the head of the European Commission to co-organise a high-level global summit on AI governance. The Cyberspace Administration of China (CAC) has proposed new measures to regulate generative AI services. The draft is open for public comment until 10 May.
Dozens of child safety advocacy organisations and experts have called on Meta to abandon its plans to let children into its virtual reality world, Horizon Worlds, over the potential harassment and privacy risks for young users.

In brief

Why authorities are investigating ChatGPT:
the three main reasons

With its ability to reproduce human-like responses in text-based interactions, OpenAI’s ChatGPT has been hailed as a breakthrough in AI technology. But governments are not quite convinced. What worries them?

Privacy and data protection

First, there is the central issue of allegedly unlawful data collection – the all-too-common practice of gathering personal data without users’ consent or knowledge.

That is one of the reasons why Italy’s data protection authority, the Garante per la Protezione dei Dati Personali, imposed a temporary ban on ChatGPT. The company has addressed most of the authority’s concerns and the software is available again in Italy, but that does not resolve every issue.

Other data protection authorities are looking into the matter, notably France’s Commission nationale de l’informatique et des libertés (CNIL), which has received at least two complaints, and Spain’s Agencia Española de Protección de Datos (AEPD). The European Data Protection Board (EDPB) has also just created a task force whose ChatGPT-related work will involve coordinating the positions of the other European authorities.

Data protection concerns are not limited to Europe, however. The complaint filed by the Center for Artificial Intelligence and Digital Policy (CAIDP) with the US Federal Trade Commission (FTC) argues that OpenAI’s practices pose numerous privacy risks. The Office of the Privacy Commissioner of Canada is also investigating.

Unreliable

Then there is the problem of inaccurate results. OpenAI’s ChatGPT model has been used by several companies, including Microsoft Bing, to generate text. However, as OpenAI itself confirms, the tool is not always accurate. Reliability was one of the factors behind Italy’s decision to ban ChatGPT, and featured in one of the complaints received by France’s CNIL. In its complaint to the FTC, the CAIDP also argued that OpenAI’s practices are deceptive, since the tool is ‘highly persuasive’ even when its content is unreliable.


In Italy’s case, OpenAI told the authority that it was ‘technically impossible, at present, to rectify inaccuracies’. That is not very reassuring, given that these AI tools can be used in sensitive contexts, such as healthcare and education. The only recourse, for now, is to give users better ways to report inaccurate information.

Child safety

Third, there is the issue of child safety and the absence of an age verification system. Both Italy and the CAIDP argued that, as things stand, children can be exposed to content unsuited to their age or maturity.

Although OpenAI is back in Italy after adding an age question to ChatGPT’s sign-up form, the authority’s request for an age-gating system still stands. OpenAI was to submit its plans by May and implement them by September. The request coincides with the EU’s efforts to improve how platforms confirm the age of their users.

As long as new AI tools keep appearing, we expect AI technologies to face close scrutiny, particularly over potential privacy and data protection risks. OpenAI’s response to the various requests and investigations could set a precedent for how AI companies are held accountable for their practices in the future. At the same time, there is a growing need for stronger regulation and oversight of AI technologies, particularly machine learning algorithms.

Genève

Mise à jour des politiques de la Genève internationale

WSIS Action Line C4: Understanding AI learning: Implications for developing countries | 17 April

An event organised by the ITU and the ILO examined the impact of AI technologies on the global education ecosystem.

Focusing mainly on the challenges faced by countries in the Global South, experts discussed the use of these technologies in areas such as exam proctoring, lecture transcription, analysis of student performance, teachers’ administrative tasks, and real-time feedback on students’ questions.

They also discussed the extra workload for teachers, who must ensure that both they and their learners can master the necessary tools, as well as the use and storage of personal data by AI technology providers and other actors in the education system.

Solutions to these challenges must also take into account the digital skills gap and connectivity problems.

70th session of the UNECE Commission: Digital and green transformations for sustainable development in the region | 18–19 April

The 70th session of the UN Economic Commission for Europe (UNECE) brought together ministerial-level representatives of UNECE member states for a two-day event covering digital and green transformation for sustainable development in Europe, the circular economy, transport, energy, climate finance, and critical raw materials.

The event allowed participants to exchange experiences and success stories, take stock of the Commission’s activities, and examine issues related to economic integration and cooperation among the countries of the region. The session emphasised the need for a green transformation to address the urgent challenges of climate change, biodiversity loss, and environmental pressures, and highlighted the potential of digital technologies for economic development, policy implementation, and natural resource management.

Girls in ICT Day 2023 | 27 April

International Girls in ICT Day, an annual event promoting gender equality and diversity in the tech industry, was held under the theme ‘Digital skills for life’.

The global celebration took place in Zimbabwe as part of the Transform Africa Summit 2023, while other regions organised their own events and celebrations.

The event was established by the ITU in 2011 and is now celebrated worldwide. Governments, companies, academic institutions, UN agencies, and NGOs support the event, offering girls and young women the opportunity to learn about ICTs, meet role models and mentors, and explore different careers in the industry.

To date, the event has hosted more than 11,400 activities in 171 countries, with the participation of more than 377,000 girls and young women.

Upcoming

What to watch:
Global digital policy events in May

10–12 May 2023 | Intergovernmental Group of Experts on E-commerce and the Digital Economy (Geneva and online)

UNCTAD’s group of experts on e-commerce and the digital economy meets annually to examine ways of helping developing countries engage in and benefit from the fast-evolving digital economy, and to narrow the digital divide. The meeting has two substantive agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the work of the Working Group on Measuring E-commerce and the Digital Economy.

19–21 May 2023 | G7 Hiroshima Summit 2023 (Hiroshima, Japan)

The leaders of the Group of Seven advanced economies, together with the presidents of the European Council and the European Commission, meet annually to discuss crucial global policy issues. During Japan’s 2023 presidency, Japanese Prime Minister Fumio Kishida identified several priorities for the summit, including the global economy, energy and food security, nuclear disarmament, economic security, climate change, global health, and development. AI tools will also be on the agenda.

24–26 May 2023 | 16th international CPDP conference (Brussels and online)

The upcoming Computers, Privacy and Data Protection (CPDP) conference, under the theme ‘Ideas that drive our digital world’, will focus on emerging issues such as AI governance and ethics, safeguarding children’s rights in the age of algorithms, and developing a sustainable EU–US data transfer framework. Each year, the conference brings together experts from various fields, including academia, law, industry, and civil society, to foster debate on privacy and data protection.

29–31 May 2023 | GLOBSEC 2023 Bratislava Forum (Bratislava, Slovakia)

The 18th edition of the Bratislava forum will bring together high-level representatives from various sectors to tackle the challenges shaping the evolving global landscape in four main areas: defence and security, geopolitics, democracy and resilience, and economy and business. The three-day forum will feature more than 100 speakers and over 40 sessions.

30 May–2 June 2023 | CyCon 2023 (Tallinn, Estonia)

The NATO Cooperative Cyber Defence Centre of Excellence will host CyCon 2023, an annual conference tackling pressing cybersecurity issues from legal, technological, strategic, and military perspectives. Under the theme ‘Meeting Reality’, this year’s event will bring together government, military, and industry experts to examine policy and legal frameworks, game-changing technologies, cyberconflict scenarios, the Russia–Ukraine conflict, and use cases for AI in cybersecurity.