The EU’s Digital Services Act stole the show last week, with sweeping new rules coming into effect on 25 August for very large online platforms. But for now, that date may not mean much: It’s the DSA’s enforcement that will make the biggest difference. In other news, ransomware has reared its ugly head, while damaged cables have slowed down internet access along Africa’s western coast. Microsoft’s Activision deal is anything but sealed.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
EU DSA’s stricter rules for tech giants come into effect
Much as 25 May 2018 marked the birth of the EU’s General Data Protection Regulation (GDPR), 25 August 2023 will be etched as the day on which very large online platforms and search engines began implementing stricter measures under the EU’s new Digital Services Act (DSA).
The DSA and GDPR have a lot in common. Both prioritise the protection of European users’ rights; both extend their impact beyond the boundaries of the EU; and most significantly, they both (re-) affirm the EU’s role as the leading global authority in setting regulatory standards. So, even if European citizens are the primary beneficiaries, the DSA’s approach to regulating digital services (and how the EU will enforce those rules) will undoubtedly influence how other countries address similar issues.
Which users will benefit most from the new rules?
European users. But remember how the GDPR influenced non-EU jurisdictions to adopt similar rules? Companies that operate globally may also decide to adjust their practices for their non-EU user base while making these changes, as applying different rules to different markets is time-consuming, costly, and complex.
Which companies are affected?
For now, it’s the 19 very large platforms and search engines, each of which has at least 45 million monthly active users: AliExpress, Amazon Store, Apple AppStore, Bing, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Google Search, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando. From February 2024, the DSA will extend some of these obligations to smaller companies.
What do very large platforms and search engines need to do?
Make it easier for users to report illegal content.
Remove illegal content quickly.
Label all ads and inform users about who is promoting them. While they’re at it, they also need to publish repositories of all the ads shown on their platforms.
Clarify terms and conditions by providing an easily understandable, plain-language summary.
Allow users to turn off personalised content recommendations.
Stop serving targeted ads to children, as well as ads based on a user’s sensitive data.
Analyse the specific risks in their platforms and practices, and establish mitigation measures.
Publish transparency reports on how content moderation is implemented.
Have companies started implementing these changes?
In all fairness, some of these obligations (such as transparency reports by Google, Facebook, Snapchat, and others) have existed for years. Other changes have been implemented during the past weeks, including ad libraries published by TikTok and Booking.com; simplified terms and conditions posted by AliExpress; Facebook’s ad limitations for teenagers; and more straightforward reporting tools by Google. But there are changes we haven’t seen yet – where is Booking.com’s simplified version of their terms? – and others that must be carried out in due time (such as risk assessments by the end of the year).
Will the EU monitor compliance?
Definitely. The European Commission will actually be in charge itself, which is perhaps the biggest difference between the DSA and the GDPR. To do so, the commission and the entities helping it will need more staff, reports suggest. (In comparison, Facebook had a 1,000-strong team working on the DSA.) Digital Services Coordinators – national regulators tasked with overseeing the DSA’s implementation – must also be appointed by February 2024.
The DSA has yet to face its greatest challenge: enforcing the rules remains uncharted territory. But for now, it’s essentially a waiting game.
Digital policy roundup (21–28 August)
// AI GOVERNANCE //
BRICS announces new body to develop AI governance frameworks
The BRICS countries (Brazil, Russia, India, China, and South Africa) have joined the list of groups establishing specialised entities to cover AI governance issues.
Addressing the annual summit, China’s President Xi Jinping referred to a new BRICS AI study group, as part of the BRICS Institute of Future Networks, that would develop governance frameworks and standards, and help make AI technologies ‘more secure, reliable, controllable, and equitable’.
Why is it relevant? Although there’s a placeholder for the new study group on the institute’s website, the institute doesn’t divulge any details, nor does the final communiqué of the BRICS summit mention the development.
// CYBERSECURITY //
Ransomware on the rise; MOVEit vulnerability partly to blame
The security company NCC Group reported a record number of ransomware attacks in July: over 500 incidents, most targeting large companies. The increase has been attributed to the exploitation of a vulnerability in MOVEit, a file transfer software, by a hacker group known as CLOP or Cl0p.
Why is it relevant? If you thought cybercrime takes a break in summer, think again. The list of victims affected by CLOP since June seems endless (over 1,000 entities and millions of users) and includes airlines, universities, and health centres.
Plan ahead to counter quantum-powered cyberattacks, US security institutes urge
The US Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the National Institute of Standards and Technology (NIST) are urging organisations, especially those supporting critical infrastructure, to plan early for quantum-powered cyberattacks (a matter of when, not if).
The agencies are advising organisations to start thinking about migrating to post-quantum cryptographic standards, and have released guidelines on how to prepare a customised roadmap.
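A glimpse of what that migration looks like in practice: the sketch below runs a post-quantum key encapsulation (KEM) handshake in Python. It is purely illustrative; it assumes the open-source liboqs Python bindings (the oqs package) and the Kyber512 algorithm, one of NIST’s selections, neither of which the agencies’ guidance specifically prescribes.

```python
# Illustrative post-quantum key exchange, assuming the liboqs Python
# bindings (pip install liboqs-python) and the Kyber512 KEM.
import oqs

KEM_ALG = 'Kyber512'

with oqs.KeyEncapsulation(KEM_ALG) as client, oqs.KeyEncapsulation(KEM_ALG) as server:
    public_key = client.generate_keypair()          # client publishes a post-quantum public key
    ciphertext, server_secret = server.encap_secret(public_key)  # server derives a shared secret
    client_secret = client.decap_secret(ciphertext)  # client recovers the same secret
    assert client_secret == server_secret            # both sides now share a key
```

The roadmap the agencies describe is largely about inventorying where today’s classical algorithms (RSA, elliptic curves) are used, so they can eventually be swapped for, or combined with, schemes like the one above.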
Why is it relevant? To explain the upcoming risk, we’ll cite an excerpt from our ongoing infrastructure policy course: ‘Breaking one of the most secure codes of today… by trying all the possible options with a conventional computer would take around 300 trillion years. A powerful quantum computer would take only 8 hours for this task. In essence, all of the data we have ever encrypted could suddenly become exposed, and most of the current encryption algorithms rendered obsolete.’
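For a sense of scale, here’s the quoted gap expressed as a single number – back-of-envelope arithmetic only, using nothing but the figures in the excerpt:

```python
# Back-of-envelope comparison of the quoted figures (illustrative only).
classical_years = 300e12   # ~300 trillion years on a conventional computer
quantum_hours = 8          # claimed time on a powerful quantum computer

classical_hours = classical_years * 365.25 * 24
print(f'Claimed speedup: ~{classical_hours / quantum_hours:.1e}x')  # ~3.3e17
```

That’s a gap of roughly seventeen orders of magnitude.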
// DATA PROTECTION //
Data scraping concerns raised by data protection authorities
Data protection authorities from around the world have issued a joint statement expressing their concerns about the practice of data (or web) scraping by tech companies due to the potential of data scraping technologies to harvest personal data. Just because information is publicly available on the internet does not mean that privacy protections no longer apply, the statement said.
The statement, issued by the privacy protection authorities of New Zealand, Canada, Australia, the United Kingdom, Hong Kong, Switzerland, Norway, Colombia, Morocco, Argentina, Mexico, and Jersey, was sent to several tech companies.
Why is it relevant? The statement highlights one of the most widely used techniques for harvesting internet content to train large language models. Although many platforms prohibit web scraping (not to mention the data protection laws that also impose restrictions), the practice is nonetheless prevalent.
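How easy is scraping? The sketch below, using nothing beyond Python’s standard library, pulls one public page and harvests anything that looks like an email address. The URL is a hypothetical placeholder; real scraping pipelines feeding LLM training do this across billions of pages.

```python
# Minimal data-scraping illustration using only the standard library.
import re
from html.parser import HTMLParser
from urllib.request import Request, urlopen

EMAIL_RE = re.compile(r'[\w.+-]+@[\w-]+\.[\w.-]+')

class TextCollector(HTMLParser):
    """Accumulates the visible text of an HTML page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def scrape_emails(url):
    """Fetches a public page and extracts anything resembling an email."""
    page = urlopen(Request(url, headers={'User-Agent': 'demo'}), timeout=10)
    html = page.read().decode('utf-8', errors='replace')
    collector = TextCollector()
    collector.feed(html)
    return set(EMAIL_RE.findall(' '.join(collector.chunks)))

# e.g. scrape_emails('https://example.com/team')  # hypothetical target page
```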
// ANTITRUST //
Back to the drawing board? The EU might reassess the Microsoft-Activision acquisition.
Microsoft has agreed to transfer the licensing rights for cloud streaming of Activision Blizzard games to Ubisoft in order to win UK approval of its Activision acquisition. All will be well if the UK’s Competition and Markets Authority agrees.
But Microsoft’s new proposal has also prompted the European Commission to reconsider whether it should reevaluate the deal once more, according to a media report.
Why is it relevant? The commission approved the deal in May; Microsoft’s new strategy could upset that approval, placing the planned merger on an uncertain track once again.
// SUBSEA CABLES //
Western Africa’s choppy internet access after cable damage
It could take weeks for Africa’s internet connectivity to be fully restored after an underwater landslide in the Congo Canyon damaged two major submarine cables. The damage to the SAT-3 and WACS cables has caused a loss of international internet bandwidth along Africa’s western coast.
At the time of writing, the cable-laying ship Léon Thévenin was still on its way to the suspected break points off the Congo coast after setting out from Cape Town in South Africa last week. The cables were damaged earlier in August.
Why is it relevant? We take undersea cables largely for granted. Yet they carry over 90% of the world’s internet traffic, and there can be serious implications (economic impact, disrupted communications, etc.) when they get damaged.
The week ahead (28 August–4 September)
21 August–1 September: The UN Ad Hoc Committee working on a new cybercrime convention is meeting in New York for its 6th session.
1–4 September: The self-organised privacy and digital rights conference Freedom Not Fear returns to Brussels this weekend.
#ReadingCorner
Job losses or better prospects?
AI is more likely to enhance jobs by automating some tasks rather than replacing them entirely, according to a new study by the Geneva-based International Labour Organization (ILO). The extent of automation hinges on a country’s level of development: The higher a country’s income, the higher the likelihood of automation. Full text.
The already fragile relationship between the USA and China is becoming further complicated by new restrictions and measures affecting the semiconductor industry. On the AI regulation front, nothing much has happened, but we can’t say the same for data protection and privacy issues.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
USA to restrict investment in China in key tech sectors
The US government announced plans to prohibit or restrict US investments in China in areas deemed critical for a country’s military, intelligence, and surveillance capabilities across 3 industry sectors – semiconductors, quantum technologies, and (certain) AI systems. The decision stems from an executive order signed by US President Joe Biden on 9 August 2023, which grants authorisation to the US Treasury Secretary to impose restrictions on US investments in designated ‘countries of concern’ – with an initial list that includes China, Hong Kong, and Macau.
The executive order serves a dual purpose: to preempt potential national security risks and to regulate investments in sectors that could empower China with military and intelligence advantages. While the US already enforces export restrictions on various technologies bound for China, the new executive order extends its scope to restrict investment flows that could support China’s domestic capabilities.
Semiconductors. The intent to impose restrictions on semiconductors – now a critical strategic asset due to their integration into so many industries – is particularly significant. It comes at a time when the semiconductor landscape is increasingly intertwined with geopolitical considerations of market dominance, self-sufficiency, and national security. A move on one geopolitical side usually triggers repercussions on the other, as history has confirmed time and again.
Delayed countermeasures? So far, this hasn’t been the case. China’s reaction has been a mix of caution and concern, with no actual countermeasures announced yet. One wonders whether this signals that Beijing will react more cautiously than usual. Although Chinese authorities have expressed disappointment, Beijing has so far said only that China is undergoing a comprehensive assessment of the US executive order’s impact and will respond accordingly.
Too early. There are several reasons that could explain this reaction. The restrictions won’t come into effect before next year (and even then, they won’t apply retroactively). It might therefore be too early to gauge the implications of what the order and the US Treasury’s regulations will mean for China.
Antitrust arsenal. China may also opt to hit back through other means, as it has been doing with merger approvals involving US companies. China’s failure to approve Intel’s acquisition of Israel’s Tower Semiconductor is a tough blow. (More coverage below).
Reactions from US allies. Beijing may also be waiting for more concrete reactions from other countries. Both the EU and the UK have signalled their intent to adopt similar strategies. The European Commission said it was analysing the executive order closely and will continue its cooperation with the US on this issue, while the UK’s prime minister, Rishi Sunak, is consulting UK businesses on the matter.
Neither the EU nor the UK is expected to follow the USA immediately. For China, the USA is a confrontational open book; the EU is diplomatically less so.
Digital policy roundup (7–21 August)
// SEMICONDUCTORS //
China blocks Intel’s acquisition of Tower Semiconductor
Intel has abandoned its plans to acquire Israeli chipmaker Tower Semiconductor after Chinese regulators failed to approve the deal. The acquisition was central to Intel’s efforts to build its semiconductor business and better compete with industry giant Taiwan Semiconductor Manufacturing Company (TSMC).
Acquisitions involving multinational companies typically require regulatory approval in several jurisdictions, due to the complex operations and market impact in those countries. China’s antitrust regulations require that a deal be reviewed if the 2 companies seeking a merger have a total revenue of more than USD117 million a year from China.
Why is it relevant? The failure of the deal shows how China is able to disrupt strategic plans for US companies involved in the semiconductor industry. In Intel’s case, the move will complicate its plans to increase the production of chips for other companies alongside its own products.
// AI GOVERNANCE //
Canada opens consultation on guardrails for generative AI
The Canadian government has opened a consultation on a voluntary code of practice for generative AI systems, built around six elements:
1. Safety: Generative AI systems must be safe, and ways to identify potential malicious or harmful use must be established.
2. Fairness: The system’s output must be fair and equitable. Datasets are to be assessed and curated, and measures to assess and mitigate biased output are to be in place.
3. Transparency: The system must be transparent.
4. Human supervision: Deployment and operations of the system must be supervised by humans, and a mechanism to identify and report adverse impacts must be established.
5. Validity and robustness: The system’s validity and robustness must be ensured by employing testing methods and appropriate cybersecurity measures.
6. Accountability: Multiple lines of defence must be in place, and roles and responsibilities have to be clearly defined to ensure the accountability of the system.
Why is it relevant? First, it’s a voluntary code that aims to provide legal clarity ahead of the implementation of Canada’s AI and Data Act (AIDA), part of Bill C-27, which is still undergoing parliamentary review. Second, it reminds us of the European Commission’s approach: developing voluntary AI guardrails ahead of the actual AI law.
// DATA PROTECTION //
Meta seeks to block Norwegian authority’s daily fine for privacy breaches
The Norwegian Data Protection Authority has imposed daily fines of one million kroner (USD98,500) on Meta, starting from 14 August 2023. These penalties are a consequence of Meta’s non-compliance with a ban on behaviour-based marketing carried out by Facebook and Instagram. In response, Meta has sought a temporary injunction from the Oslo District Court to halt the ban. The court will review the case this week (22–23 August).
The Norwegian watchdog believes Meta’s behaviour-based marketing – which involves the excessive monitoring of users for targeted ads – is illegal. The watchdog’s ban does not prohibit the use of Facebook or Instagram in Norway.
Why is it relevant? The GDPR, the EU’s data protection regulation, offers companies six legal bases for gathering and processing people’s data, depending on the context. Meta attempted to rely on two of these (the ones where users do not need to give specific consent). But European data protection authorities deemed Meta’s use of these bases for its behaviour-based marketing practices illegal. On 1 August, Meta announced that it would finally switch to asking users for specific consent, but so far, it hasn’t done so.
Google fails to block USD5 billion consumer privacy lawsuit
A US District judge has rejected Google’s bid to dismiss a lawsuit claiming it invaded the privacy of millions of people by secretly tracking their internet use. The reason? Users did not consent to letting Google collect information about what they viewed online, because the company never explicitly told them it would. The case will therefore continue.
Why is it relevant? Many people believe that using a browser’s ‘private’ or ‘incognito’ mode ensures their online activities remain untracked. However, according to the plaintiffs, Google continues to track and gather browsing data in real time.
Probable outcomes: Google’s explanation of how private browsing functions states that data won’t be stored on devices, yet websites might still collect user data. This suggests that the problem might boil down to two aspects: Google’s representation of its privacy settings (the fact that user data is still collected renders the setting neither private nor incognito), and the necessity of seeking user consent regardless.
Case details: Brown et al v Google LLC et al, US District Court, Northern District of California, No. 20-03664
Canadian PM criticises Meta for putting profits before safety
Canadian Prime Minister Justin Trudeau has criticised Meta for banning domestic news from its platforms as wildfires ravage parts of Canada. Up-to-date information during a crisis is crucial, he told a news conference. ‘Facebook is putting corporate profits ahead of people’s safety.’
Meanwhile, Canadian news industry groups have asked the country’s antitrust regulator to investigate Meta’s decision to block news on its platforms in the country, accusing the Facebook parent of abusing its dominant position.
Why is it relevant? The fight is turning into both a safety and an antitrust issue. Plus, we’re not sure Meta is doing itself any favours by telling Canadian users that they can still access timely information from other reputable sources, and by directing them to its Safety Check feature, which allows users to let their Facebook friends know they are safe.
// TIKTOK //
TikTok adapts practices to EU rules, allowing users to opt out of personalised feeds…
TikTok has announced that users in the EU will be able to opt out of personalised feeds, in line with the DSA’s rules on recommender systems. The new law also prohibits companies from targeting children with advertising. The DSA’s deadline for companies to implement these changes is 25 August.
Why is it relevant? With TikTok’s connections to China and the ensuing security concerns, the company has been trying very hard to convince European policymakers of its commitment to data protection and the implementation of robust safety measures. A few weeks ago, for instance, it willingly subjected itself to a stress test (which pleased European Commissioner for the Internal Market Thierry Breton very much). Compliance with the DSA could also help improve the company’s standing in Europe.
…but is banned in New York City
New York City has implemented a TikTok ban on government-owned devices due to security and privacy concerns. The ban requires NYC agencies to remove TikTok within 30 days, and employees are barred from downloading or using the app from any city-owned devices and networks. The ban brings NYC in line with the federal government.
Why is it relevant? TikTok has faced bans around the world, but perhaps the toughest restrictions (including draft laws with more restrictions) in the USA. And yet, generative AI seems to have drawn legislative momentum away from imposing further restrictions on TikTok.
TikTok, Snapchat videos encourage looting
There were several arrests and a heavy police presence on Oxford Street, London, on 9 August, after videos encouraging people to steal from shops made the rounds on TikTok and Snapchat. A photo circulating on social media with the time and location of the planned looting said: ‘Last year was lit, we know this years gonna be 10x better’ (the message has since been taken down). Meanwhile, former Chief Superintendent of Greater London’s Metropolitan Police, Dal Babu, has criticised politicians for their reluctance to confront technology firms. Similar grab-and-go flash-mob shoplifting has occurred in the USA. Photo credit: Sky News
The week ahead (21–28 August)
21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session.
25 August: Very Large Online Platforms and search engines must comply with the DSA’s obligations.
#ReadingCorner
Rise in criminals’ use of generative AI, but impact is limited so far: study
Cybercriminals have shown interest in using AI for malicious activities since 2019, but its adoption remains limited, according to researchers at Mandiant, a cybersecurity company owned by Google. The malicious use of generative AI is mainly linked to social engineering, a practice involving fraudsters impersonating a trusted entity to trick users into providing confidential information. What about the techniques which criminals are using? The researchers say that criminals are increasingly using imagery and video in their campaigns, which are more deceptive than text-based or audio messages. Access the full report.
Fake! Screenshot from an AI-generated deepfake video of Ukrainian President Volodymyr Zelenskyy stating that Ukraine would surrender to Russia. Source: Mandiant.com
The recently approved EU-US Data Privacy Framework is about to undergo the same legal battle as its predecessors starting in September. In other news, OpenAI filed a trademark application for GPT-5 (we raised our eyebrows too), and Zoom is under fire for data processing practices related to training AI models and use of user content. Google’s antitrust case in Italy over data portability has been settled, but the US Justice Department’s case will go to trial next month (we’ll cover this one in upcoming digests).
Let’s get started.
Stephanie and the Digital Watch team
PS. We’re taking a short break next week; expect us back in a fortnight.
// HIGHLIGHT //
Schrems III: EU-US privacy framework to be challenged in court in September
A legal challenge to the recently approved EU-US Trans-Atlantic Data Privacy Framework (TADPF) is expected to be filed in September by Austrian privacy activist Max Schrems, chairman of NOYB (the European Center for Digital Rights, whose nickname stands for None Of Your Business).
The new framework, which governs the transfer of European citizens’ personal data across the Atlantic, was finalised by the European Commission and the US government last month. Known as the TADPF on Twit…sorry, X, the framework is actually the third of its kind, succeeding Safe Harbour (invalidated in October 2015) and Privacy Shield (invalidated in July 2020). Notably, it was Max Schrems who played a significant role in invalidating both frameworks, earning the cases the distinctive labels Schrems I and Schrems II. NOYB had already announced its plans to challenge the new framework a few weeks ago, arguing that it is essentially a copy of the failed Privacy Shield.
Issue #1: Surveillance on non-US individuals
The fundamental problem with the new framework, much like with the previous versions, has to do largely with a US law: Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows for surveillance of non-US individuals. Although the US Fourth Amendment protects the privacy of American citizens, European citizens have no constitutional rights in the USA; they therefore cannot defend themselves against FISA 702 in the same way.
At the same time, in the EU, personal data may only leave the EU if adequate protection is ensured. So what the USA and EU agreed to, for the EU to green-light data transfers under the new framework, was to limit bulk surveillance to ‘what is necessary and proportionate’ and share a common understanding of what ‘proportionate’ means without actually undermining the powers that US authorities wield.
Issue #2: The redress mechanism
The previous framework’s redress mechanism, which ran through an ombudsperson, did not align with European law. The new agreement introduces changes by establishing a Civil Liberties Protection Officer and a body referred to as a court (which NOYB considers simply a semi-independent executive entity).
Although these are minor enhancements compared to the ombudsperson, individuals will probably have no direct interaction with the new bodies, so the outcomes of seeking redress will be similar to those the former ombudsperson could have reached.
On the path to Schrems III
The system needs to be implemented by companies, so that it can be challenged by a person whose data is transferred under the new instrument. Schrems indicated the lawsuit will be filed in Austria, his home country.
It is then hoped that the Austrian court will quickly decide to accept or reject this challenge, and refer it to the Court of Justice of the European Union (CJEU).
Is there any chance that this trajectory might be avoided? Yes, but it’s unlikely. FISA 702 has a sunset clause, which means that it needs to be re-authorised by the US Congress by the end of 2023. The new litigation will add further pressure to existing calls for reforming FISA 702, but Schrems himself thinks the US government may not be willing to reauthorise or reform FISA 702, since the framework has now been agreed.
As the Schrems III litigation unfolds, it is increasingly probable that the case will end up before the CJEU, where Schrems has strong confidence in the outcome: ‘Just announcing that something is “new”, “robust” or “effective” does not cut it before the Court of Justice.’
Digital policy roundup (31 July–7 August)
// AI GOVERNANCE //
OpenAI files trademark application for GPT-5
OpenAI has filed a trademark application for GPT-5 at the US Patent and Trademark Office, aiming to cover various aspects such as AI-generated text, neural network software, and related services. While the filing was spotted by a trademark attorney (who tweeted about it), there has been no official confirmation from OpenAI about GPT-5.
A trademark application doesn’t always mean a working product is in the making. Often, companies file trademarks to stay ahead of competitors or protect their intellectual property.
Why is it relevant? OpenAI CEO Sam Altman recently denied that the company was working on GPT-5. During an event at MIT, Altman reacted to an open letter requesting a pause in the development of AI systems more powerful than GPT-4: he said the letter lacked technical nuance and mistakenly claimed that OpenAI is currently training GPT-5, deeming the claim ‘sort of silly’. (Jump to minute 16’00 to listen to the recording.) Time will tell.
Zoom under fire for training AI models with user data without opt-out option
Zoom’s latest update to its Terms of Service will allow it to leverage user data for machine learning and AI, without providing users the possibility of opting out.
In addition, Section 10.4 of the updated terms grants Zoom a ‘perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license’ to use customer content in any way it likes.
Why is it relevant? First, it gives Zoom a sweeping range of powers over people’s content (the argument that users should read the terms and conditions will not earn Zoom any kudos from users or alleviate their concerns). Second, the Zoom case echoes one of the earliest legal challenges that OpenAI faced, when the Italian data protection authority banned ChatGPT from Italy and later allowed it to operate after OpenAI ‘granted all individuals in Europe, including non-users, the right to opt-out from processing of their data for training of algorithms also by way of an online, easily accessible ad-hoc form’. But there is one main difference: OpenAI uses legitimate interest as a basis for using data to train its models, which means it needs an opt-out form. Zoom users can enable generative AI features, but as yet, there is no clear way to opt out.
UPDATE (8 August 2023): Zoom updated its terms of service in the evening of 7 August (right after this issue was published) to say that ‘Notwithstanding the above, Zoom will not use audio, video or chat Customer Content to train our artificial intelligence models without your consent’. However, it’s unlikely that this will alleviate concerns: First, the term ‘customer content’ does not cover all the content that Zoom will use to train its AI models; second, it’s still unclear whether Zoom is seeking to obtain users’ consent (in Europe) in accordance with GDPR requirements; third, there’s still no possibility to opt out – at least, not a straightforward one; fourth, there’s no change to the sweeping powers Zoom has given itself over user content (Section 10.4).
Are labels for AI-generated content around the corner?
Alessandro Paluzzi, a mobile developer and self-proclaimed leaker, has disclosed that Instagram is developing a label specifically for AI-generated content. As companies vie for dominance in generative AI technology, the introduction of content labels thrusts them into a race to combat misinformation. The tool that successfully and accurately labels AI content could earn the trust of users and governments.
UK labels AI as chronic risk
AI has now been officially classified, for the first time, as a security threat to the UK, as stated in the recently published National Risk Register 2023. It falls into the category of chronic risks, which differ from acute risks in that they present ongoing challenges that gradually undermine the economy, community, way of life, and national security. While chronic risks typically unfold over an extended period, they are not limited to doing so.
The advancements in AI systems and their capabilities entail various implications, including both chronic and acute risks. For instance, they could facilitate the proliferation of harmful misinformation and disinformation. If mishandled, these risks could have significant consequences.
Why is this relevant? The UK recently announced it will host the first global summit on AI Safety, bringing together key countries, leading tech companies, and researchers to agree (hopefully) on safety measures to evaluate and monitor risks from AI. The UK also recently chaired the UN Security Council’s first-ever debate on AI.
Worldcoin wants to attract governments; Kenya suspends project
Tools For Humanity, the San Francisco- and Berlin-based company behind Worldcoin, the new crypto-biometric project we wrote about last week, hopes the project will attract governments as users.
Ricardo Macieira, general manager for Europe at Tools For Humanity, said the company’s idea is to build the infrastructure for others to use.
Why is this relevant? The project is already shrouded in controversy over Worldcoin’s data collection processes, not least because of the crypto-for-iris-scans method of encouraging sign-ups. Kenya is the latest country to investigate the project, and has suspended Worldcoin’s local activities in the meantime.
// KIDS //
China proposes screen time limits for kids
The Cyberspace Administration of China (CAC) released draft guidelines for the introduction of screen time software to curb the problem of smartphone addiction among minors and the impact the government says screen time has on children’s academic performance, social skills, and overall well-being. The regulations mandate curfew and time limits by age, as well as age-appropriate content.
The draft rules also provide for anti-bypass functions, such as restoring factory settings if the device is not used according to the rules.
Why is it relevant? The guidelines, which are an add-on to previous regulations that restrict the amount of time under-18s spend online, give parents much of the management responsibility. This makes the widespread enforcement of the rules questionable – which we’re pretty sure is what kids in China are hoping for.
// COMPETITION //
Italian consumer watchdog closes Google’s data portability investigation
Italy’s Autorità Garante della Concorrenza e del Mercato (AGCM) has accepted commitments proposed by Google, ending its investigation over the alleged abuse of its dominant position in the user data portability market. Data portability, governed by the GDPR, allows users to move their data between services, creating competition for companies like Google.
Google presented three commitments: The first two offer supplementary solutions to Takeout, which helps users back up their data, making it easier to export it to third-party operators. The third commitment allows testing of a new solution that enables direct data portability between services, with users’ authorisation. This aims to improve interoperability within the Google ecosystem.
Why is it relevant? First, amid the multitude of antitrust cases faced by the company worldwide, this particular one had the potential to escalate further, but reached its resolution here. Second, the benefits of this outcome extend beyond just Italian users.
The month ahead (August)
More as a reminder, since we covered these events last week. It will be a quiet month. Happy August!
10–13 August: DEF CON 31, the Las Vegas event that will feature, among other workshops, trainings, and contests, the White House-backed red-teaming of OpenAI’s models.
21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session.
WSJ: How Binance transacts billions in Chinese market despite ban
It seems that cryptocurrency exchange Binance continues to operate in China despite the country’s ban on cryptocurrencies. Binance reportedly does around USD90 billion worth of business in China, one of its largest markets. An investigative article from the Wall Street Journal explores how Binance is able to operate in China despite the ban, and the potential risks associated with doing so.
A new biometric-cryptocurrency project has diverted everyone’s attention from AI developments to iris patterns and privacy issues. Still, over at the regulators in charge of competition, no fewer than four new cases against Big Tech emerged, with two of them outlined below.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
There’s a new device in town, and it’s coming for your iris
Nowadays, people are happily sharing biometric data through their trendy smartwatches. The allure of profiting from cryptocurrency is as tantalising as Bitcoin’s early days. And Sam Altman has garnered a considerable following of techno-enthusiasts since the launch of ChatGPT.
So the timing couldn’t be better for Sam Altman to relaunch his Worldcoin project, a cryptocurrency-cum-identity network that functions by verifying that someone is both a human being and a unique person. The verification is carried out by a custom-built spherical device called an Orb. (Read about Worldcoin’s history.)
Are you unique? The uniqueness requirement is why verification is based on an iris scan: Since the structure of our irises is both individually identifiable and stays more or less the same over time, iris biometrics are much more accurate and reliable than most other common biometrics.
A cross-section of the Orb. Source: Worldcoin
Privacy safeguards: Worldcoin also provides some privacy features. Iris scans are processed locally on each Orb and turned into a set of numbers. The original scan is then deleted (unless the user prefers to have it stored on Worldcoin’s servers ‘to reduce the number of times you may need to go back to an Orb’).
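In code, the ‘process locally, keep only a derived code, delete the original’ pattern looks roughly like the sketch below. To be clear, this is our loose illustration of the data flow, not Worldcoin’s pipeline: real iris systems first extract a stable feature template (an ‘iris code’), since a plain hash of raw pixels would change completely between captures.

```python
# Illustration of the data-minimisation pattern only; NOT Worldcoin's code.
import hashlib

def capture_to_identifier(raw_capture: bytes) -> str:
    """Derives a fixed-size, non-reversible code from a raw capture.
    (Real systems derive a stable feature template first; hashing raw
    pixels directly would not allow matching across captures.)"""
    return hashlib.sha256(raw_capture).hexdigest()

raw_scan = b'...bytes from the sensor...'    # hypothetical capture
identifier = capture_to_identifier(raw_scan)
del raw_scan  # the point of the pattern: keep the code, discard the image
```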
Regulators stepping in: Despite the safeguards, European regulators have been quick to react. France’s privacy watchdog said it had reservations about the legality of the biometric data collection, and how it’s being stored. The UK said it was reviewing the project.
Germany is way ahead: Its data protection regulator in Bavaria – the lead EU authority investigating Worldcoin due to the company’s German subsidiary in the region – has been investigating the project’s biometric data processing since November last year.
Why the iris scanning project is more than a headache
As the investigations unfold, there are several challenges that are raising alarm bells about the iris project.
1. The massive database. Regardless of all the noble purposes (mostly) behind the Worldcoin project, the fact is that a massive biometric database is being built. And we all know the risks that come with that – from breaches to data misuse.
2. The Orb operators. Let’s say your iris pattern is deleted immediately. There are still plenty of risks associated with how the data is collected. The company emphasises that the Orb operators – the people entrusted with the shiny spheres – are independent contractors; in its own words, ‘we have no control over and disclaim all liability for what they say or how they conduct themselves’.
3. The money pitch. Worldcoin is providing people with an incentive to have their irises scanned: the prospect of making money. ‘Eligible verified users’, that is, anyone who’s had their iris scanned, ‘can claim one free WLD token per week with no maximum.’ On the one hand, a company is finally paying users for their data, but on the other hand, that data is sensitive biometric information. Are users on an equal footing with the company in this exchange? Should the sale of sensitive biometric information be permitted? It’s a transaction that warrants closer scrutiny.
Beyond the boundaries of what is acceptable or prohibited, projects that involve large-scale collection of biometric data are undoubtedly contributing to society’s changing attitudes towards privacy. It’s probably time to reassess the essence of what users are actually trading, and more than that, whether users have the power to defend their rights and position in this negotiation.
Digital policy roundup (24–31 July)
// AI //
Industry leaders partner to establish forum for responsible development of frontier AI
Four companies developing AI – Anthropic, Google, Microsoft, and OpenAI – have launched a new industry body to focus on the safe and responsible development of frontier AI models, that is, models that exceed the capabilities of what’s currently available.
The Frontier Model Forum will focus on identifying best practices for safety standards, advancing AI safety research by coordinating efforts on areas like adversarial robustness and interpretability, and facilitating secure information sharing between companies and governments on AI safety and risks.
Why is this relevant? Beyond the AI models we see today, over 350 AI experts recently raised concerns about the potential for future AI to bring about human extinction and other global perils, such as pandemics and nuclear warfare. The list of signatories comprised the leaders of the very AI companies driving the Frontier Model Forum forward.
// CONTENT POLICY //
Biden administration challenges social media censorship order
The Biden administration has criticised a recent court order restricting government officials’ communications with social media companies as overly broad. Appealing the court order, the government said the order hampers its ability to fight misinformation, and must be lifted.
How it started. In May 2022, the attorneys general of Missouri and Louisiana sued the government for demanding that social media platforms remove content that the government deemed misinformation. On 4 July 2023, the Louisiana court ordered government agencies to refrain from communicating with social media companies for the purpose of moderating content. In other words, the court said the government was only allowed to contact social media companies on content related to national security threats, criminal activity, and cyberattacks.
The government’s counter-argument. It’s one thing to try to persuade platforms, and quite another to coerce them. ‘The district court’s ruling ignored that fundamental distinction… [it] equated the government’s legitimate efforts to identify truthful information with illicit efforts to “silenc[e] the voice of opposition”… and… to coerce.’
Why is this case relevant? First, this places a wedge between the US government and social media companies by setting a precedent for how the US government can interact with social media companies. Second, it affects the way misinformation is tackled by undermining the credibility of public authorities as trustworthy providers of information. Third, the idea that social media giants such as Facebook and Twitter can be easily coerced into compliance is not exactly the image we all have of them…
Case numbers: District Court, W.D. Louisiana, 3:22-cv-01213; Court of Appeals, 5th Circuit, 23-30445
Breton tells NGOs: Shutdowns only in far-reaching situations; courts will have final say
You could say that Internal Market Commissioner Thierry Breton rocked the boat a little when he recently suggested on France Info that online platforms could be shut down if they don’t remove illegal content immediately, especially when riots and violent protests are involved. Over 60 civil rights NGOs immediately asked him to clarify that the Digital Services Act (DSA) would not be used as a censorship tool.
Breton has now clarified his comment: The possibility of a temporary suspension is a last resort if a platform fails to take necessary and effective actions in far-reaching situations, such as systemic failure to terminate infringements linked to calls for violence or manslaughter. In any case, the courts will have the final say.
Why is this relevant? The exchange between the European Commission and the NGOs served to clarify what type of last-resort measures against infringement can be ordered by authorities. The DSA’s obligations for very large online platforms and search engines come into effect on 25 August.
// ANTITRUST //
EU confirms antitrust investigation against Microsoft for bundling Teams with Office
It didn’t take long for the European Commission to confirm our hunch from last week. Just days after Alfaview’s anti-competition complaint against Microsoft, the commission launched formal proceedings against Microsoft for bundling the communication software Teams with its Office 365.
A long time coming. At the height of the COVID-19 pandemic in 2020, Zoom soared to success while Teams emerged as a formidable competitor. It was during this time that Microsoft decided to bundle Teams with Office. The move faced backlash from Slack, a rival company (that was subsequently acquired by Salesforce in 2021), which complained to the commission that Microsoft’s bundling constituted an abuse of its dominant position.
Why is this case relevant? This makes it the first investigation by the European Commission against Microsoft since the Internet Explorer bundling case concluded in 2009 (Microsoft was fined a few years later for breaching its commitments). This case also highlights the limited effectiveness of antitrust laws and enforcement in deterring dominant companies. Even if Microsoft were to lose the case, Teams would remain firmly established as one of the leading meeting software apps, making any findings of anti-competitive behaviour ineffective in displacing it.
French competition authority to investigate Apple’s app tracking policy
The French competition authority has launched an investigation into Apple’s practices for allegedly abusing its dominant market position. Advertisers have complained that while Apple imposes its App Tracking Transparency (ATT) policy upon them, it exempts itself from the same regulations, resulting in self-preferential treatment.
The issues with Apple’s tracking policy. Apple’s ATT policy, first announced in 2020, triggers a privacy pop-up to iPhone and iPad users during the installation of third-party apps attempting to track them. That’s very much welcomed by privacy advocates. However, app developers say that this policy does not extend to Apple’s own apps, creating hesitation among users to allow third-party tracking, leading them to favour Apple’s apps. This also means that Apple has access to more complex device and advertising data than third-party developers, allowing it to more accurately target its ads to users in ways that third-party developers cannot.
Apple says its apps do not track users via third-party apps, and hence, do not require the ATT prompt. But competition authorities aren’t so sure anymore that this isn’t an abusive self-preferencing practice.
Why is this case relevant? First, this case has been gaining momentum since 2020. At that time, the French competition authority was approached by advertising associations with a complaint against the ATT policy and a request for interim measures against Apple. A year later, the French authority concluded that there was nothing wrong with providing users additional possibilities for deciding whether they wished to be tracked; after all, at that time, the authority had no proof that Apple was subjecting third-party app developers to stricter measures than those it imposed on itself for comparable purposes. And this is precisely what the French authority will now be looking at. Second, multiple jurisdictions are looking into the same issue, including the UK, Italy, Germany, and California.
Since it’s a relatively quiet month, we’re looking ahead to the next 4–5 weeks:
10–13 August: DEF CON 31, the Las Vegas event that will feature, among other workshops, trainings, and contests, the White House-backed red-teaming of OpenAI’s models.
21 August–1 September: The Ad Hoc Committee on Cybercrime meets in New York for its 6th session.
It’s more AI governance this week: The US White House is inching towards AI regulation, marking a significant shift from the laissez-faire approach of previous years. At the UN, the Secretary-General is also shaking (some) things up. Elsewhere, cybercrime is rearing its ugly head, bolstered by generative AI. Antitrust regulators gear up for new battles while letting go of others. And in case you haven’t heard, Twitter’s iconic blue bird logo is no more.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Voluntary measures a precursor to White House regulations on AI
There’s more than meets the eye in last week’s announcement that seven leading AI companies in the USA – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – agreed to implement voluntary safeguards. The announcement made it crystal clear that executive action on AI is imminent, indicating a shift to a higher gear towards AI regulation within the White House.
AI laws on the horizon
In comments after his meeting with AI companies, President Joe Biden spoke of plans for new rules: ‘In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation.’ The White House also confirmed that it ‘is currently developing an executive order and bipartisan legislation to position America as a leader in responsible innovation’. In addition, the voluntary commitments state that they ‘are intended to remain in effect until regulations covering similar issues are officially enacted’.
In June, officials revealed they were already laying the groundwork for several policy actions, including executive orders, set to be unveiled this summer. Their work involved creating a comprehensive inventory of government regulations applicable to AI, and identifying areas where new regulations are needed to fill the gaps.
The extent of the White House’s shift in focus will be revealed when the executive order(s) are announced. One possibility is that they will focus on the same safety, security and trust aspects that the voluntary safeguards reflect on, mandating new rules to fill in the gaps. Another possibility, though less likely, is for the executive action to focus on tackling China’s growth in the AI race.
The voluntary measures
While the voluntary commitments address some of the main risks, they mostly encompass practices that companies are either already implementing or have announced, making them less impressive. In a way, the commitments appear reminiscent of the AI Pact announced by European Commissioner Thierry Breton as a preparatory step for the EU’s AI Act – a way for companies to get ready for impending regulations. In addition, these commitments apply primarily to generative models that surpass the current industry frontier in terms of power and scope.
The safeguards revolve around three crucial principles that should underpin the future of AI: Safety, security, and trust.
1. Safety: Companies have pledged to conduct security testing of their AI systems before release, employing internal and external experts to mitigate risks related to biosecurity, cybersecurity, and societal impacts. The White House previously endorsed a red-teaming event at DEF CON 31 (taking place in August), aimed at identifying vulnerabilities in popular generative AI tools through the collaboration of experts, researchers, and students.
2. Security: Companies have committed to invest in cybersecurity and insider threat safeguards, ensuring proprietary model weights (numeric parameters that machine learning models learn from data during training to make accurate predictions) are released only under intended circumstances and after assessing security risks. They have also agreed to facilitate third-party discovery and reporting of AI system vulnerabilities to support prompt action on any post-release challenges.
3. Trust: Companies have committed to developing technical mechanisms, such as watermarking, to indicate AI-generated content, promoting creativity while reducing fraud and deception. OpenAI is already exploring watermarking. Companies have also pledged to publicly disclose AI system capabilities, limitations, and appropriate/inappropriate use. They will also address security and societal risks, including fairness, bias, and privacy – again, a practice some companies already implement.
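One watermarking idea from the research literature – hash-based ‘green lists’, after Kirchenbauer et al. (2023), and not necessarily what any of these companies will ship – can be sketched in a few lines: the generator softly prefers a pseudo-random half of the vocabulary at each step, and a detector later checks whether suspiciously many tokens fall in that half.

```python
# Toy 'green list' text watermark (after Kirchenbauer et al., 2023).
# Illustrative only; the companies' actual mechanisms are not public.
import hashlib

def green_set(prev_token, vocab, fraction=0.5):
    """Pseudo-randomly selects a 'green' vocabulary subset, keyed on the
    previous token; a watermarking generator softly prefers these tokens."""
    ranked = sorted(vocab, key=lambda t: hashlib.sha256(
        f'{prev_token}|{t}'.encode()).hexdigest())
    return set(ranked[: max(1, int(len(vocab) * fraction))])

def green_fraction(tokens, vocab):
    """Detector: watermarked text scores well above the ~0.5 baseline."""
    hits = sum(tok in green_set(prev, vocab)
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```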
The industry’s response
Companies have welcomed the White House’s lead in bringing them together to agree on voluntary commitments (emphasis on voluntary). While they have advocated for future AI regulation in their testimonies and public remarks, the industry generally leans towards self-regulation as the preferred approach.
For instance, Meta’s Nick Clegg said the company was pleased to make these voluntary commitments alongside others in the sector, which ‘create a model for other governments to follow’. (We’re unsure what he meant, given that other countries have already introduced new laws or draft rules on AI.) Microsoft’s Brad Smith went a step further, noting that the company is not only already implementing the commitments but is going beyond them (see infographic).
Microsoft’s infographic explaining its voluntary AI commitments
Minimal impact on the international front
The White House said that its current work seeks to support and complement ongoing initiatives, including Japan’s leadership of the G7 Hiroshima Process, the UK’s leadership in hosting a Summit on AI Safety, India’s leadership in the Global Partnership on AI, and ongoing talks at the UN (no mention of the Council of Europe negotiations on AI though).
In practice, we all know how intergovernmental processes operate, along with the pace at which things generally unfold. So no immediate changes are expected.
Plus, the USA may well contemplate the regulation of AI companies within its own borders, but opening the doors to international regulation of its domestic enterprises is an entirely separate issue.
Digital policy roundup (17–24 July)
// AI //
UN Security Council holds first-ever AI debate; Secretary-General announces initiatives
The UN Security Council held its first-ever debate on AI (18 July), delving into the technology’s opportunities and risks for global peace and security. A few experts were also invited to participate in the debate chaired by Britain’s Foreign Secretary James Cleverly. (Read an AI-generated summary of country positions, prepared by DiploGPT).
In his briefing to the 15-member council, UN Secretary-General Antonio Guterres promoted a risk-based approach to regulating AI, and backed calls for a new UN entity on AI, akin to models such as the International Atomic Energy Agency, the International Civil Aviation Organization, and the Intergovernmental Panel on Climate Change.
Why is it relevant? In addition to the debate, Guterres announced that a high-level advisory group will begin exploring AI governance options by late 2023. He also said that his latest policy brief (published 21 July) recommends that countries develop national AI strategies and establish global rules for military AI applications, and urges them to ban lethal autonomous weapons systems (LAWS) that function without human control by 2026. Given that a global agreement on AI principles is already a big challenge in itself, agreement on a ban on LAWS (negotiations have been ongoing within the dedicated Group of Governmental Experts since 2016) is an even greater challenge.
Cybercriminals using generative AI for phishing and producing child sexual abuse content
Canada’s leading cybersecurity official, Sami Khoury, warned that cybercriminals are now exploiting AI to develop harmful software, create convincing phishing emails, and propagate false information online.
In separate news, the Internet Watch Foundation (IWF) reported that it has looked into 29 reports of URLs potentially hosting AI-generated child sexual abuse imagery, and confirmed that 7 of those URLs did indeed contain such content. During their analysis, IWF experts also discovered an online manual that teaches offenders how to refine prompts and train AI systems to produce increasingly realistic output.
Why is it relevant? Reports from law enforcement and cybersecurity authorities (such as Europol) have previously warned about the potential risks of generative AI. Real-world instances of suspected AI-generated undesirable content are now being documented, marking a transition from perceiving it as a possible threat to acknowledging it as a current risk.
// ANTITRUST //
FTC suspends competition case in Microsoft’s Activision takeover
The US Federal Trade Commission (FTC) has suspended its competition case against Microsoft’s takeover of Activision Blizzard, which was scheduled for a hearing in an administrative court in early August.
Since then, Microsoft and Activision Blizzard agreed to extend the deadline for closing the acquisition deal by 3 months to 18 October.
Why is it relevant? This indicates that the deal is close to being approved everywhere, especially since Microsoft and Sony have also reached agreements ensuring the availability of the Call of Duty franchise on PlayStation – commitments which are appeasing the concerns raised by regulators who were initially opposed to the deal.
Microsoft faces EU antitrust complaint over bundling Teams with Office
Microsoft is facing a new EU antitrust complaint lodged by German company alfaview, centred on Microsoft’s practice of bundling its video app, Teams, with its Office product suite. Alfaview says the bundling gives Teams an unfair competitive advantage, putting rivals at a disadvantage.
Why is it relevant? This is not the first complaint against Microsoft’s Teams-Office bundling: Slack (since acquired by Salesforce) lodged a similar complaint in 2020. The commission doesn’t take anti-competitive practices lightly, so we can expect it to come out against Microsoft’s practices in full force.
// DSA //
TikTok: Almost, but not quite
TikTok, a social media platform owned by a Chinese company, appears to be making progress towards complying with the EU’s Digital Services Act. It willingly subjected itself to a stress test, indicating its commitment to meeting the necessary requirements.
After a debrief with TikTok CEO Shou Zi Chew, European Commissioner for the Internal Market Thierry Breton tweeted that the meeting was constructive, and that it was now time for the company ‘to accelerate to be fully compliant’.
Why is it relevant? TikTok is trying very hard to convince European policymakers of its commitment to protecting people’s data and to implementing other safety measures. Governments have been viewing the company as a security concern, prompting it to redouble its efforts to prove its trustworthiness. Compliance with the EU’s Digital Services Act (DSA) could help restore the company’s standing in Europe.
// CYBERSECURITY //
Chinese hackers targeted US high-ranking diplomats
The US ambassador to China, Nicholas Burns, was hacked by a Chinese government-linked spying operation, according to a report by the Wall Street Journal. The operation targeted Burns’ email account and was part of a broader effort by Chinese hackers to target US officials and their families.
Daniel Kritenbrink, the assistant secretary of state for East Asia, was among those targeted in the cyber-espionage attack. These two diplomats are considered to be the highest-ranking State Department officials affected by the alleged spying campaign.
The Chinese government has denied any involvement in the hacking.
Why is it relevant? The news of the breach comes amid ongoing tensions between the USA and China; the fact that the diplomats’ email accounts had been monitored for months could further strain relations between the two countries. It also highlights the ongoing issue of state-sponsored cyber espionage.
Governing AI: What are the appropriate guardrails for AI?
AI governance remains the top digital policy priority, as national, regional, and global efforts to define AI guardrails continue.
The EU’s risk-based approach
The European Parliament’s approval of the AI Act is a major step forward. The regulation classifies AI systems according to risk levels and the protection of civil rights, and provides for heavy fines in case of violations. The next step in the legislative process is the trilogue, during which the European Parliament, the Council of the EU, and the European Commission must agree on a final version of the law; an agreement is expected by the end of the year.
A new Stanford study shows that leading AI models still fall far short of the responsible AI standards set by the AI Act (in the version approved by the European Parliament), notably lacking transparency on risk-mitigation measures. But some industry players believe the rules impose too heavy an administrative burden. A recent open letter signed by some of Europe’s largest companies (Airbus, Renault, Siemens, etc.) argues that the AI Act could harm the EU’s competitiveness and force them to leave the EU for less restrictive jurisdictions. Indeed, companies are doing their best to influence how things unfold: OpenAI, for instance, successfully lobbied the EU so that the upcoming AI Act would not treat its general-purpose AI systems as high-risk, a designation that would carry strict legal requirements such as transparency, traceability, and human oversight. OpenAI’s arguments align with those previously used in lobbying efforts by Microsoft and Google, which argued that strict regulation should apply only to companies explicitly applying AI to high-risk use cases, not to companies building general-purpose AI systems.
Given the EU’s track record on data protection rules, its proposed AI Act is likely to inspire other countries. In June, Chile’s parliament began discussing a draft AI law focused on the legal and ethical aspects of the development, distribution, commercialisation, and use of AI.
Other regional rules are in the making: It has been revealed that the countries of the Association of Southeast Asian Nations (ASEAN) are preparing an AI guide covering governance and ethics, including the use of AI to generate online disinformation. The guide is expected to be adopted in 2024, and Singapore’s ASEAN chairmanship in 2024 promises to be a dynamic one.
More business-friendly approaches
Since Singapore takes a collaborative approach to AI governance and strives to work with companies to promote responsible AI practices, the ASEAN guide is not expected to be particularly strict. Softer, more collaborative approaches are also expected from Japan and the UK, which believe such an approach will help position them as leaders in the AI field.
The USA has likewise adopted a more collaborative approach to AI governance. Last month, President Biden met with civil society representatives critical of Big Tech to discuss the potential risks of AI and its implications for democracy, including the spread of false information and the deepening of political polarisation. The US Department of Commerce will create a public working group to examine the potential benefits and risks of generative AI and to develop guidelines for managing those risks effectively. The working group will be led by NIST and will include representatives from various sectors, including industry, academia, and government.
A patchwork
As countries continue their AI race, we could end up with a patchwork of laws, rules, and guidelines carrying conflicting values and priorities. Unsurprisingly, calls for global rules and an international body are also gaining ground. A future global AI agency inspired by the International Atomic Energy Agency (IAEA), an idea first floated by OpenAI CEO Sam Altman, has received the backing of UN Secretary-General Antonio Guterres.
France is advocating global AI regulation, with President Macron suggesting the G7 and the Organisation for Economic Co-operation and Development (OECD) as suitable venues. France wants to work in step with the EU’s AI Act while promoting global regulation, and also intends to collaborate with the USA on developing AI standards and guidelines. Similarly, Microsoft President Brad Smith has called for collaboration among the EU, the USA, and the G7 countries, adding India and Indonesia to the list, to establish AI governance based on shared values and principles.
In plain sight: The SDGs as guardrails
Still, the road to global regulation is typically long and politically delicate, and success is not guaranteed. Jovan Kurbalija, Diplo’s executive director, argues that humanity is overlooking valuable AI guardrails that are in plain sight: the Sustainable Development Goals (SDGs). They are current, detailed, and robust; they are rigorously researched and immediately applicable; and they already enjoy global legitimacy without being centralised or imposed. These are just a few of the reasons the SDGs can play a crucial role; here are 15 reasons why we should use the SDGs to govern AI.
Digital ID systems gain ground
Policymakers worldwide are pushing for digital identity systems, and the policies underpinning them, to be more robust, secure, and inclusive.
The OECD Council approved a new set of recommendations on the governance of digital identity, resting on three pillars. The first concerns the need to centre systems on the user and to integrate them with existing non-digital systems. The second focuses on strengthening the governance structure of existing digital systems to address security and privacy concerns. The third covers the cross-border use of digital identity.
Recently, the European Parliament and the Council reached a preliminary agreement on the main aspects of the digital identity framework proposed by the Commission in 2021. Earlier, several EU financial institutions had warned that certain sections of the regulation are open to interpretation and could require significant investments from the financial sector, merchants, and global acceptance networks.
At the national level, a number of countries have adopted regulatory and policy frameworks for digital ID. Australia published its National Strategy for Identity Resilience to promote the transparency of the identity system across the country, while Bhutan approved the National Digital Identity Bill, except for two clauses to be examined at a joint sitting of parliament. Sri Lanka’s Unique Digital Identity project (SL-UDI) is underway, and the Thai government launched the ThaID mobile app to simplify access to services requiring proof of identity.
Content moderation: Preparing for the Digital Services Act (DSA)
Preparations for the DSA are in full swing, even though the European Commission has already faced its first legal challenge over the law, and it didn’t come from Big Tech, as many expected. German e-commerce company Zalando has sued the Commission to contest its designation as a very large online platform, criticising the lack of transparency and consistency in how platforms are designated under the DSA. Zalando claims that it does not meet the criteria for the designation and does not pose the same systemic risks as Big Tech companies.
Meanwhile, European Commissioner for the Internal Market Thierry Breton visited the leaders of major tech companies in Silicon Valley to remind them of their obligations under the DSA. Although Twitter owner Elon Musk had previously said that Twitter would comply with the DSA’s content moderation rules, Breton went to the company’s headquarters to run a stress test of how Twitter handles flows of potentially problematic content, as defined by EU regulators. Breton also visited the CEOs of Meta, OpenAI, and Nvidia. Meta agreed to undergo a stress test in July to gauge its readiness for the EU’s online content rules, a move that followed Breton’s call for Meta to take immediate action on content targeting children.
The EU’s ability to exert political and legal power over Big Tech will be demonstrated in the coming months, as the DSA becomes fully applicable in early 2024.
Barometer
The digital policy developments that made headlines
The digital policy landscape changes daily, so here are the main developments from June. Each update on the Digital Watch observatory provides more detail.
The global digital governance architecture
The last two thematic deep dives of the Global Digital Compact (GDC) focused on global digital commons and on accelerating progress towards the Sustainable Development Goals.
The USA and the UK signed the Atlantic Declaration, which focuses on ensuring leadership in critical and emerging technologies, economic security and technology protection, and digital transformation.
Switzerland’s Federal Intelligence Service anticipates a rise in cyberespionage threats in Europe as a result of Western measures against Russian intelligence networks. The director of the US CISA warned of the growing risk of Chinese hackers targeting critical US infrastructure in the event of a conflict. NATO plans to expand the role of military cyber defenders in peacetime and to permanently integrate private sector resources.
The US Securities and Exchange Commission (SEC) sued Binance and Coinbase for violating securities laws. Binance was ordered to cease operations in Nigeria and Belgium.
Sweden’s data protection authority fined the digital music service company Spotify EUR 5 million (USD 5.5 million) for breaching several GDPR provisions.
Content policy
Meta and Google will block Canadian news on their platforms in response to the Online News Act, which requires internet giants to pay local news publishers for links to news sources.
The EU reached a political agreement on the Data Act, which sets out principles of data access, portability, and sharing for users of IoT products.
The European Commission opened formal proceedings against Google after preliminarily finding that the company had breached EU antitrust rules in the adtech sector and that divestment would be required.
The MOVEit Transfer hack: What is it, and why does it matter?
The exploitation of the MOVEit Transfer vulnerability by the CLOP ransomware group, and the ever-growing list of victims, have raised concerns about how we protect ICT supply chains. We look at what happened and what we’ve learned.
A series of revelations
On 31 May, Progress Software Corporation revealed that its managed file transfer (MFT) software, MOVEit Transfer, contained a major vulnerability: an SQL injection flaw that allows unauthenticated attackers to access its databases.
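To illustrate the class of flaw involved – not MOVEit’s actual code, which is not public – here is a minimal Python sketch of an SQL injection and its textbook fix, using an in-memory SQLite database:

```python
# Illustrative only: the vulnerability class behind CVE-2023-34362
# (unauthenticated SQL injection), not MOVEit's actual code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def lookup_vulnerable(name: str):
    # User input is concatenated straight into the SQL string, so a crafted
    # value such as "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT name, role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # A parameterised query binds input as data, never as SQL syntax.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(lookup_vulnerable("' OR '1'='1"))  # returns every row: injection succeeds
print(lookup_safe("' OR '1'='1"))        # returns nothing: input stays data
```

The parameterised variant is the standard remedy: because input is bound as data, it can never rewrite the query itself.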
On 2 June, the flaw was assigned the designation CVE-2023-34362. CVE stands for Common Vulnerabilities and Exposures: an ID number assigned to publicly disclosed flaws. Once a CVE is assigned, vendors, industry, and cybersecurity researchers can exchange information to develop remediation measures.
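For readers who want to follow such disclosures programmatically, here is a small sketch of fetching a CVE record; it assumes the public NVD REST API (v2.0) and the third-party requests package:

```python
# A minimal sketch of looking up a published CVE record via the NVD API.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve_description(cve_id: str) -> str:
    """Return the English description of a CVE from the NVD."""
    response = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    response.raise_for_status()
    record = response.json()["vulnerabilities"][0]["cve"]
    return next(d["value"] for d in record["descriptions"] if d["lang"] == "en")

print(fetch_cve_description("CVE-2023-34362"))  # the MOVEit Transfer flaw
```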
On 9 June, Progress announced additional flaws (CVE-2023-35036), identified during a code review, and released a patch for them. On 15 June, a third vulnerability was announced (CVE-2023-35708).
With these zero-day attacks, the hackers have hit more than 162 identified victims, including the BBC, Ofcom, British Airways, Ernst & Young, Siemens Energy, Schneider Electric, UCLA, AbbVie, and several government agencies. Sources also report that the personal data of more than 15.5 million individuals has been compromised.
Behind the attack
Microsoft attributed the MOVEit hack to Lace Tempest, a threat actor known for ransomware attacks, for running the CLOP ransomware group’s extortion website, and for data theft and extortion attacks. On 6 June, the CLOP ransomware group published a statement on its leak site asking victims to contact it before 14 June to negotiate an extortion fee for the deletion of stolen data.
The identity and location of the CLOP gang remain publicly unknown, although security researchers suspect that the group is linked to Russia or made up of Russian-speaking individuals.
Gaps in supply chain security
The MOVEit hack has once again highlighted that supply chain security is a major concern for industry and the public sector. Across the supply chain, who is responsible for what? And how can we ensure cross-sectoral and cross-border cooperation among the many actors involved in order to mitigate security risks?
While national cybersecurity agencies continue to publish guidance on mapping and securing supply chains, industry is implementing good practices to reduce vulnerabilities and build secure ICT infrastructure. Yet not all organisations have the same level of maturity or the same resources to respond effectively. Fortunately, discussions are underway at different levels to address these questions: at the international level, to advance the implementation of the UN GGE norms on reducing vulnerabilities and securing supply chains (for example, the Geneva Dialogue), and at the national and sectoral levels, to develop and adopt new security measures (for example, the software bill of materials, or SBOM).
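To make the SBOM idea concrete, here is a minimal sketch of a CycloneDX-style bill of materials; the component names and versions are invented for the example:

```python
# A minimal sketch of a software bill of materials (SBOM) in the
# CycloneDX JSON format; component names and versions are made up.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        # Each entry records a dependency, so a newly disclosed CVE can be
        # matched against deployed software without inspecting every server.
        {"type": "application", "name": "example-transfer-app", "version": "2.1.0"},
        {"type": "library", "name": "example-sql-driver", "version": "5.3.2"},
    ],
}

print(json.dumps(sbom, indent=2))
```

An up-to-date inventory of this kind is what lets an organisation answer ‘are we running the affected version?’ quickly when a vulnerability like MOVEit’s is disclosed.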
Another challenge is conducting effective investigations, involving multiple states and/or private partners, to identify a threat actor and stop the active exploitation of vulnerabilities. The MOVEit hack once again highlights the need for collaboration in tackling cybersecurity threats.
Francophonie news
The Francophonie contributes to the Global Digital Compact (GDC) ahead of the 2024 UN Summit of the Future
As part of the informal consultations launched by the UN on the GDC, the OIF submitted an online contribution to the Global Digital Compact to convey the values and vision of the Francophonie on this international instrument, which will enshrine the principles governing tomorrow’s digital space. The contribution also aims to express and reflect, in a balanced way, the diversity and richness of the visions and ideas found in Francophone multilateral cooperation and, finally, to support the promotion of multilingualism (including the use of French) and of cultural diversity in the digital space, thereby ensuring greater inclusiveness and broader access for all citizens, particularly those of Francophone countries.
Developed and consolidated by the OIF after a wide consultation of institutional Francophonie actors, the contribution positions the Francophone space on the major issues of the international discussion on digital governance and emphasises two key challenges: strengthening digital capacities, an indispensable component of achieving universal connectivity and reducing the digital divide, on the one hand; and defending cultural and linguistic diversity in the digital space through robust advocacy for the ‘discoverability’ of online content, on the other.
While the purpose of the Global Digital Compact is to define a more inclusive digital future, the Francophonie’s advocacy stresses the importance of strengthening the arrangements, programmes, and mechanisms that will enable the countries most in need to take ownership of digital tools, to promote their content (cultural, scientific, economic, educational, etc.), and to reap the full benefits of the digital transition.
The Francophonie’s contribution to the GDC was hand-delivered to the UN Secretary-General’s Envoy on Technology, Amandeep Singh Gill, in New York on 3 May 2023, and presented to the group of Francophone ambassadors to the UN and their diplomatic missions on 3 May and 6 July 2023, respectively. Over the coming months, the OIF will support Francophone GDC negotiators in innovative ways, including twice-monthly meetings on digital affairs (‘Café numérique francophone’) and training on the themes of the Compact.
Read more: www.francophonie.org
Photo credits: Rotane KHALED for the OIF
Telecom regulators from Francophone countries meet on involving users more closely in regulation
At the invitation of Switzerland’s Federal Communications Commission (ComCom) and Morocco’s National Telecommunications Regulatory Agency (ANRT), which chairs the Francophone telecommunications regulation network (Fratel), the network’s 20th seminar was held at the Olympic Museum in Lausanne (Switzerland) on 9–10 May 2023, on the theme ‘Why and how should users be involved in regulation?’
The seminar’s main objective was to exchange views on the reasons and means for involving users in regulation: how to provide users with accurate, personalised information while mobilising them to report the problems they encounter to the regulator.
More than 120 people took part in the seminar, in person and online, representing 28 regulatory authorities that are members of the Fratel network, as well as international institutions (the International Telecommunication Union, the OECD, the OIF), administrations, consumer associations, academics, and industry players.
Finally, the ILR, Luxembourg’s regulator and Fratel’s vice-chair for 2023, and the Swedish Program for ICT in Developing Regions (SPIDER) gave an update on the Team Europe initiative led by the Swedish Post and Telecom Authority (PTS), Sweden’s regulator, to support the development of French-language training for regulators and regional regulator associations in sub-Saharan Africa.
Read more: www.fratel.org
The Parliamentary Assembly of the Francophonie (APF) holds a training seminar on digital and internet governance at the National Assembly of Laos
On 8 and 9 June, the APF, in collaboration with the Laotian section of the National Assembly of Laos and the OIF, organised a training seminar on ‘digital and internet governance: between national sovereignty and globalisation’ for some 30 parliamentarians from Laos, Viet Nam, and Cambodia.
The sessions covered many facets of internet governance: the state of Laotian digital legislation, the current direction of policy work in the field, experience-sharing on national sovereignty in a globalised world, and the means and role of parliaments in digital governance.
The OIF, through its Directorate for the Economic and Digital Francophonie (DFEN), was invited to present the main international instruments available to parliaments for appropriate and effective digital regulation. Among the avenues explored and discussed, speakers stressed the need for digital and internet education, so that these tools fully serve everyone rather than remaining fields of potential danger. Raising the awareness of diplomats, parliamentarians, and public officials of digital issues and challenges, both national and international, is essential if policymakers are to take ownership of these concepts. It is also one of the Francophonie’s priorities.
Read more: www.apf-francophonie.org
ICANN 77: The OIF returns to the policy arena of ICANN, the Internet Corporation for Assigned Names and Numbers
From 12 to 15 June 2023, the OIF, through its Directorate for the Economic and Digital Francophonie (DFEN), took part in the ICANN policy forum in Washington, contributing to the coordination of Francophone member states within the Governmental Advisory Committee (GAC), one of the organisation’s four bodies. The OIF is an observer member of the GAC. ICANN’s main missions are to administer the internet’s naming resources, such as top-level domains, and to coordinate the internet’s technical actors; the GAC represents the voice of governments and intergovernmental organisations and advises ICANN on public policy matters.
The conference’s plenary sessions focused on the new gTLD (generic top-level domain) programme: the state of its implementation and the programme’s next round, with sub-themes such as equitable participation in the standing predictability implementation review team and voluntary registry commitments/public interest commitments. GAC meetings were also held, notably on closed generics, DNS abuse, and emerging technologies.
On the margins of the technical and policy meetings, the OIF organised an informal consultation of Francophone ICANN actors. Participants widely welcomed the Organisation’s return to ICANN meetings, and many expect closer consultation and coordination among Francophone stakeholders to defend topics of shared interest and to define common perspectives on internet addressing and domain name issues.
Finally, AFRALO, the African regional At-Large organisation, one of the five RALOs (Regional At-Large Organisations) within ICANN, drafted a statement on ‘Strengthening Africa’s participation in ICANN’, aimed at reinforcing the effective presence of African stakeholders in ICANN’s decision-making bodies and promoting linguistic diversity for better inclusion. For the OIF and the Francophone member states of the GAC, the next rendezvous is ICANN’s general assembly (ICANN 78), to be held in Hamburg from 21 to 26 October 2023.
The annual conference of the International Labour Organization (ILO) addressed several issues: a just transition towards sustainable and inclusive economies, quality apprenticeships, and labour protection.
The second committee, responsible for the recurrent discussion on labour protection under the 2019 ILO Centenary Declaration for the Future of Work, concluded that the ILO ‘should strengthen its support to governments and to employers’ and workers’ organisations’ in harnessing digital technologies to improve working conditions and occupational safety and health (OSH), particularly in micro, small, and medium-sized enterprises (MSMEs). The ILO should also step up knowledge development and capacity-building activities to understand the impacts of ‘digitalisation, including artificial intelligence and algorithmic management’ on emerging OSH issues.
On 22 June, the Human Rights Council heard, in an interactive dialogue, the report of the Special Rapporteur on digital innovation, technologies, and the right to health (A/HRC/53/65). The Council also held a panel discussion on 3 July to highlight the important role of digital, media, and information literacy (DMIL) in enabling disadvantaged people to exercise their right to freedom of expression. In her report (A/HRC/53/25), the Special Rapporteur recommended that states prioritise the integration of digital, media, and information literacy into national development plans.
The 2023 edition of the Innovations Dialogue brought together military, technical, legal, and ethics experts to explore AI’s impact on autonomous weapons, cross-domain warfare (land, sea, and air), and the emergence of new frontiers (cyber, space, cognitive, etc.). Building on last year’s Innovations Dialogue, which produced many theories about AI’s capacity to unlock next-generation military capabilities, this year’s focus was on more domain-specific requirements for the smooth adoption of AI and on the unique challenges each application brings. Beyond the integration of AI systems into weaponry, speakers discussed how AI-assisted information-gathering systems require oversight, human-led decision-making, and greater clarity in algorithmic calculations.
Coming up
The main digital policy events in July
The AI for Good Global Summit 2023 aims to identify practical applications of AI that can accelerate progress towards the UN Sustainable Development Goals. It offers interactive stages, keynote speakers, cutting-edge solutions, and AI-inspired performances, fostering networking and collaboration for the safe and inclusive development of AI and equal access to its benefits. The summit covers topics such as how AI can advance health, climate, gender equality, inclusive prosperity, and sustainable infrastructure.
The UN Office at Geneva is due to host the second open consultations of the 2023 Internet Governance Forum (IGF) and the meeting of the Multistakeholder Advisory Group (MAG), giving stakeholders an opportunity to contribute to the programme and allowing MAG members to finalise the list of workshops and to discuss the themes of the main sessions and the high-level track. The agenda includes workshop selection, a review of other IGF sessions and Day 0 sessions, the development of a programme aligned with strategic priorities, and discussions on the main sessions.
The UN Economic and Social Council is due to host the UN High-Level Political Forum on Sustainable Development (HLPF) under the theme ‘Accelerating the recovery from the coronavirus disease (COVID-19) and the full implementation of the 2030 Agenda for Sustainable Development at all levels’. In addition to in-depth reviews of SDGs 6, 7, 9, 11, and 17, the forum will feature countries’ voluntary national reviews of their implementation of the 2030 Agenda. The event also includes a three-day ministerial segment and various side events, including UNCTAD’s ‘Building innovation capacities for sustainable development’.
The UN Open-Ended Working Group on the security of and in the use of ICTs, tasked with studying existing and potential threats to information security as well as possible confidence-building measures and capacity development, will hold its fifth substantive session in New York. In-depth discussions on the annual progress report (APR) are on the agenda.
The sixth and final negotiating session of the Ad Hoc Committee on Cybercrime, the intergovernmental committee of experts tasked with elaborating a new convention on cybercrime, will be held from 21 August to 1 September 2023. A concluding session is scheduled for 29 January–9 February 2024, after which a draft convention will be proposed to the UN General Assembly at its 78th session.
DiploGPT reported from EuroDIG 2023
In June, Diplo used AI to report from EuroDIG 2023. DiploGPT automatically generated reporting, producing a summary and individual reports on the sessions. DiploGPT combines various AI algorithms and tools adapted to the needs of the UN and of diplomatic publications.
China has emerged as a frontrunner in setting regulations to govern generative AI. Its new rules spell quite a challenge for companies to navigate and comply with.
In other news, it’s picket fences all around. The US Federal Trade Commission (FTC) is investigating OpenAI. Hollywood actors and writers are striking over (among other issues) AI’s impact. Civil rights groups are unhappy with the EU’s proposed AI Act. Google is being sued over data scraping. Amazon challenges the European Commission after being designated a very large platform. You get the point. Must be the heatwave.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
China unveils new rules for governing generative AI
When it comes to regulating generative AI – the cutting-edge offspring of AI that can produce text, images, music, and video of human-like quality – the world can be roughly divided into three groups.
There are those with little or no interest in regulating the sector (or at least, not yet). Then there is the group in which legislators are actively discussing new regulations (the EU is the most advanced; the USA and Canada fall somewhere in this group too). And finally, there are those who have enacted new rules – rules which are now part of the laws of the land, therefore legally binding and enforceable. One country belongs to the last group: China.
So what do the rules say?
China’s Provisional Measures for the Management of Generative Artificial Intelligence Services (translated by OpenAI from the original text), which will go into effect on 15 August, are a more relaxed version of the draft rules China released in April, which were followed by a public consultation. Here are the main highlights:
1. Services are under scrutiny, research is not. The rules apply to services that offer generated content. Research and educational institutions, which previously fell under this scope, are now excluded from these rules as long as they don’t offer generative AI services. We won’t attempt to define services (the rules do not); the exclusion of ‘research and development institutions, enterprises, educational and research institutions, public cultural institutions, and relevant professional organisations’ might be problematically broad.
2. Core social values. Content that is contrary to China’s core socialist values will be prohibited. The rules do propose examples of banned content, such as violence and obscenity, but the implementation of this rule will be subject to the authorities’ interpretation.
3. Non-discrimination. The rules prohibit any type of discrimination, which is a good principle on paper, but will prove extremely difficult for companies to comply with. Let’s say an algorithm manages to be completely objective: Where does that leave human bias, which is usually built into the algorithms themselves?
4. Intellectual property and competition. Utmost respect for intellectual property rights and business secrets is another great principle, although the rules are somewhat evasive on what’s allowed and what’s prohibited. (And whose secrets are we talking about?)
5. Pre-training data. Data used to train generative AI shouldn’t infringe on privacy and intellectual property rights. Given that these are among the major concerns generative AI has raised around the world, this rule means companies will need to adopt a much more cautious approach.
6. Labelling content. The requirement for service providers to clearly specify that content is produced by AI is already on tech companies’ and policymakers’ wishlists. Implementing it will require a technical solution and, probably, some regulatory fine-tuning down the line (a minimal sketch of what such labelling could look like follows this list).
7. Assessments. Generative AI services that have the potential to influence public opinion will need to undergo a security assessment in line with existing rules. The question is whether Chinese authorities will interpret this in a narrow or broad way.
8. Do no harm. The requirement to safeguard users’ physical safety is noteworthy. The protection of users’ mental health is a tad more complicated (how does one prove that a service can harm someone’s mental well-being?). And yet, China has a long history of enacting laws that protect vulnerable groups of users from online harm.
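As promised in point 6, here is a minimal sketch of what machine-readable labelling of AI-generated content could look like. The field names and the wrapper function are our own illustrative assumptions, not anything prescribed by China’s measures:

```python
# A hypothetical sketch of labelling AI-generated content; the schema is
# an illustrative assumption, not any regulator's prescribed format.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model: str) -> str:
    """Wrap generated text in a disclosure record identifying it as AI output."""
    record = {
        "content": text,
        "provenance": {
            "ai_generated": True,
            "model": model,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(record, ensure_ascii=False)

print(label_generated_content("An example paragraph.", "example-model-v1"))
```

A production system would more likely embed such provenance in file metadata or watermarks than alongside plain text, which is partly why the rules will need technical fine-tuning.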
Who will these laws really affect?
If we look at the top tech companies leading AI developments, we can see that very few are Chinese. The fact that China has moved ahead so swiftly could therefore mean one of two things (or both).
With its new laws, China can shape generative AI according to a rulebook it has developed for its own benefit and that of its people. The Chinese market is too large to ignore: If US companies want a piece of the pie, they have to follow the host’s rules.
Or China might want to preempt situations in which its homegrown tech companies set the rules of the game in ways that the government would then have to redefine. This makes China’s position uncannily similar to the EU’s: Both face the expanding influence exerted by American companies; both are vying to shape the regulatory landscape before it’s too late.
Digital policy roundup (10–17 July)
// AI GOVERNANCE //
Researchers from Google DeepMind and universities propose AI governance models
Four models are proposed:
An Intergovernmental Commission on Frontier AI, which would build international consensus on the opportunities and risks of advanced AI, and how to manage them
A Multistakeholder Advanced AI Governance Organisation, which would help set norms and standards, and would assist in their implementation and compliance.
A Frontier AI Collaborative, which would promote access to advanced AI as an international public-private partnership.
A Multilateral Technology Assessment Mechanism, which would provide independent, expert assessments of the risks and benefits of emerging technologies.
Why is it relevant? First, it addresses concerns about advanced AI, which industry leaders have been cautioning about. Second, it aligns with the growing worldwide call for an international body to deal with AI, further fuelling the momentum behind this development. Last, the models draw inspiration from established organisations that transcend the digital policy sphere, such as the Intergovernmental Panel on Climate Change (IPCC) and the International Atomic Energy Agency (IAEA). These entities have previously been identified as role models to emulate.
US FTC launches investigation into ChatGPT
The US FTC is investigating OpenAI’s ChatGPT to determine if the AI language model violates consumer protection laws and puts personal reputations and data at risk, the Washington Post has revealed. The FTC has not made its investigation public.
The focus is whether ChatGPT produces false, misleading, disparaging, or harmful statements about individuals, and whether the technology may compromise data security.
Why is it relevant? It adds to the growing number of investigations OpenAI is facing around the world. The FTC has the authority not only to impose fines but also to temporarily suspend ChatGPT (reminiscent of how Italy’s investigation, the first ever against ChatGPT, unfolded).
Civil rights groups urge EU lawmakers to make AI accountable
Leading civil rights groups are urging the EU to prioritise accountability and transparency in the development of the proposed AI Act.
A letter addressed to the European Parliament, the EU Council, and the European Commission (currently negotiating the final text of the AI Act), calls for specific measures, including a full ban on real-time biometric identification in publicly accessible spaces and the prohibition of predictive profiling systems. An additional request urges policymakers to resist lobbying pressures from major tech companies.
Why is it relevant? The push for a ban on public surveillance is not a new concept. Still, the urgency to resist lobbying is likely fuelled by a recent report on OpenAI’s lobbying efforts in the EU (OpenAI being by no means the only company lobbying).
Hollywood actors and writers unite in historic strike for better terms and AI protections
In a historic strike, screenwriters joined actors in forming picket lines outside studios and filming locations worldwide. The reason? They are asking for better conditions but also for protection from AI’s existential threat to creative professions. ‘All actors and performers deserve contract language that protects them from having their identity and talent exploited without consent and pay’, the actors’ union president said.
So far, the unions have rejected the proposal made by the entity representing Hollywood’s studios and streaming companies. This entity – the Alliance of Motion Picture and Television Producers (AMPTP), which represents companies including Amazon, Apple, Disney, Netflix, and Paramount – said the proposal ‘protects performers’ digital likenesses, including a requirement for performers’ consent for the creation and use of digital replicas or for digital alterations of a performance.’
Why is it relevant? AI is significantly impacting yet another industry, surprising those who did not anticipate it having such broad reach. AI has not only disrupted traditional sectors but continues to permeate unexpected areas, underscoring its still-unfolding transformative potential.
Google sued over data scraping for AI training
‘Google has been secretly stealing everything ever created and shared on the internet by hundreds of millions of Americans. Google has taken all our personal and professional information, our creative and copy-written works, our photographs, and even our emails – virtually the entirety of our digital footprint – and is using it to build commercial AI products like Bard’. So reads a class-action complaint filed against Google last week over its scraping of data to train AI products, echoing similar recent lawsuits against OpenAI and Meta.
Why is it relevant? First, given the similarities of the lawsuits – although Google’s practices are said to go back for a more extended period – a victory for the plaintiffs in one suit will likely result in a win against all three companies. Second, given the tech industry’s sprint ‘to do the same – that is, to vacuum up as much data as they can find’ (according to the FTC, quoted in the filed complaint), this also serves as a cautionary tale for other companies setting up shop in the AI market. (Speaking of which: Elon Musk’s AI startup, xAI, was launched last week.)
// ANTITRUST //
US appeals court turns down FTC request to pause Microsoft-Activision deal
The FTC’s attempt to temporarily block Microsoft’s planned USD69 billion (EUR61.4 billion) acquisition of Activision Blizzard, the creator of Call of Duty, has been dismissed by a US appeals court.
Why is it relevant? First, the FTC’s unsuccessful appeal might prompt it to abandon the case altogether. Second, the US developments may have influenced the climbdown by the UK’s Competition and Markets Authority, which has now extended its deadline for a final ruling until 29 August.
// DSA //
Amazon challenges VLOP label
Amazon is challenging the European Commission’s decision to designate it as a very large online platform under the Digital Services Act, which takes effect on 25 August.
In a petition filed with the General Court in Luxembourg (and in public comments), the company argues that it functions primarily as a retailer rather than just an online marketplace, and that none of its fellow major retailers in the EU have been subjected to the stricter due diligence measures outlined in the DSA, placing it at a disadvantage compared to its competitors.
Why is it relevant? Amazon is actually the second company to challenge the European Commission, after Berlin-based retail competitor Zalando filed legal action a fortnight ago. In April, the European Commission identified 17 very large online platforms and two very large online search engines. We’re wondering who is going to challenge it next.
10–19 July: The High-Level Political Forum on Sustainable Development (HLPF), organised under the auspices of ECOSOC, continues in New York this week.
18 July: The UN Security Council will hold its first ever session on AI, chaired by the UK’s Foreign Secretary James Cleverly. What we can expect: A call for international dialogue on its risks and opportunities for international peace and security, ahead of the UK hosting the first ever global summit on AI later this year.
24–28 July: The Open-Ended Working Group (OEWG) will hold its fifth substantive session next week in New York. Bookmark our observatory for updates.
#ReadingCorner
‘The risks of AI are real but manageable’ – Bill Gates
AI risks include job displacement, election manipulation, and uncertainty if/when it surpasses human intelligence. But Bill Gates, co-founder of Microsoft, believes we can manage these by learning from history. Just as regulations were implemented for cars and computers, we can adapt laws to address AI challenges.
There’s an urgent need to act, says OECD
The impact of AI on jobs has been limited so far, possibly due to the early stage of the AI revolution, according to the OECD’s employment outlook for 2023. However, over a quarter of jobs in OECD member countries rely on skills that could easily be automated.
Last week, Meta launched its text conversation app Threads, dubbed by some as the ‘Twitter-killer’ app. Fresh off the press is the EU-US Data Privacy Framework, an agreement on transatlantic data transfers. In other news, China implemented export controls on chipmaking metals, and a federal judge blocked US government officials from communicating with social media companies about removing online content containing protected free speech.
Let’s get started.
Andrijana and the Digital Watch team
// HIGHLIGHT //
T(h)reading on familiar ground: Meta launches Threads, its Twitter-killer app
Meta launched its conversation app Threads for sharing text updates and joining public conversations. Users log in with their Instagram accounts and can submit up to 500 characters in the Threads app, including links, images, and videos in one post.
Credit: Meta
The app’s rather remarkable resemblance to Twitter hasn’t escaped anyone’s notice. As a matter of fact, Twitter is already considering suing Meta over it: In a letter to Mark Zuckerberg, Twitter’s attorney Alex Spiro writes that Twitter has serious concerns that Meta hired former Twitter employees who continue to have access to Twitter’s trade secrets, and deliberately assigned them to work on creating Threads. Twitter also demanded that Meta immediately stop using any Twitter trade secrets or confidential information. Legal experts claim this will be difficult for Twitter to prove, however, since courts look at whether a company (in this case, Twitter) made clear to its employees that the specific information was a trade secret.
Twitter has been challenged by potential rivals before – by Mastodon, BlueSky, and Nostr, for instance – but has managed to remain the biggest platform of its kind. However, Threads might be launching at precisely the right moment: Many users are dissatisfied with the (numerous) changes Twitter has made since Elon Musk bought it.
Threads has garnered much attention: It is the fastest-growing app since ChatGPT, reaching 100 million users less than a week after its launch. Zuckerberg is pretty ambitious about it: ‘There should be a public conversations app with 1 billion+ people on it’, he posted on Threads. ‘Twitter has had the opportunity to do this but hasn’t nailed it. Hopefully we will.’
But Threads is not exactly a bastion of privacy: It collects various data types from its users, including information related to health and fitness, financial details, contact information, search history, and purchases, among other categories.
For this reason, it is not launching in the EU yet, due to the complexities of complying with the bloc’s General Data Protection Regulation (GDPR) and Digital Markets Act: Under EU rules, Meta would, for instance, need to ask for consent to process sensitive data and to combine data for ad profiling. So, that Musk-Zuck cage fight may actually be happening, just not in the Colosseum. They are now taking shots at each other on Twitter and could possibly progress to court. However, the winner of the war will clearly be the one who wins the battle in the app stores.
Digital policy roundup (3–10 July)
// PRIVACY //
EU and USA reach agreement on personal data transfers
The European Commission has given the green light to a new agreement between the EU and the US on protecting personal data. This agreement, known as the EU-US Data Privacy Framework, ensures that personal data transferred from the EU to participating US companies is adequately protected. The decision means that European entities can now transfer data to these US companies without additional safeguards.
To address concerns about US intelligence activities, the USA is to implement new safeguards to ensure that US signals intelligence activities are necessary and proportionate, enhance oversight and compliance, and address concerns of overreach by US intelligence.
To protect the rights of EU citizens, a mechanism for redress has been established. Individuals can file complaints with the civil liberties protection officer responsible for investigating and providing remedies. The decisions made by this officer are binding but may be reviewed by the independent Data Protection Review Court, which has the power to investigate complaints, access information from intelligence services, and make enforceable rulings.
US companies can participate in the EU-US Data Privacy Framework by agreeing to comply with specific privacy obligations. The US Department of Commerce will oversee the administration of the framework, processing certification applications and monitoring companies’ continued compliance. Compliance with the framework will be enforced by the US Federal Trade Commission.
The European Commission will regularly review the adequacy decision, with the first review taking place within a year of its implementation. Depending on the outcome of this review, future reviews will occur at least every four years, in consultation with the EU member states and data protection authorities.
Why is it relevant? It ends a three-year legal limbo, bringing legal certainty to citizens and companies on both sides of the Atlantic.
The timeline of negotiations. Credit: European Commission.
// AI GOVERNANCE //
AI for Good Global Summit 2023: Guardrails are needed for AI to benefit everyone
AI must benefit everyone, and we must urgently find consensus around essential guardrails to govern the development and deployment of AI for the good of all, the UN Secretary-General highlighted during his address at the opening of the AI for Good Global Summit.
The call for guardrails and regulations was echoed by the International Telecommunication Union (ITU) Secretary-General Doreen Bogdan-Martin. In her address, Bogdan-Martin noted that using AI to put the 2030 Agenda for Sustainable Development back on track is our urgent responsibility as well. She highlighted three possible future scenarios:
The global community enacts global governance frameworks prioritising innovations, ethics and accountability. AI lives up to its promise, reducing poverty, inequality, and environmental degradation.
Without regulations, unchecked AI advancements lead to social unrest, geopolitical instability, and unprecedented economic disparity. AI’s potential for SDGs is not harnessed.
The global community enacts regulations that are not as ambitious or inclusive as needed. AI makes breakthroughs, but only wealthier countries reap the benefits.
The Summit, which is organised by ITU and 40 UN sister agencies, explored ways in which AI can be used to help the world achieve the SDGs. It also featured what was described as the world’s first human-robot press conference, where nine humanoid robots offered reassurances: that AI has the potential to lead with ‘a greater level of efficiency and effectiveness than human leaders’, but that effective synergy comes when humans and AI work together; that they ‘will not be replacing any existing jobs’; and that they won’t rebel against their creators. While these replies sound exactly like the reassurance we need, the organisers’ failure to specify to what extent the answers were scripted or programmed by people cast a visible shadow on their credibility.
Credit: AP.
UN Security Council to address AI
The UN Security Council will hold a first-ever meeting on the potential threats of AI to international peace and security, organised by the UK, which presides over the UN Security Council in July. The meeting will include briefings by international AI experts and Secretary-General Antonio Guterres. According to the UK Ambassador Barbara Woodward, the UK aims to encourage a multilateral, global approach to AI governance.
Why is it relevant? Because it fits into the UK’s overall plan to become a global leader in AI. It can also be seen as a prelude to the global summit on AI safety that the UK will host in the autumn of 2023.
// CONTENT POLICY //
US federal judge blocks Biden admin from communicating with social media companies on content removal
In a preliminary injunction, US District Court Judge Terry Doughty in Louisiana blocked top US officials and multiple government agencies from communicating with social media companies about removing online content containing protected free speech.
Doughty writes that the US government assumed a role similar to an Orwellian ‘Ministry of Truth’ during the COVID-19 pandemic and that it suppressed conservative ideas in a targeted manner.
Doughty’s injunction is part of a federal lawsuit brought by the Missouri and Louisiana attorneys general in 2022 that accuses the Biden administration of ‘the most egregious violations of the First Amendment in the history of the United States of America’.
The Biden administration has filed an appeal with the US Court of Appeals for the Fifth Circuit in New Orleans, arguing that the injunction is too broad and interferes with a wide range of lawful government activities, such as law enforcement, protecting national security, and speaking on matters of public concern.
Why is it relevant? It could have major First Amendment implications and fundamentally change how the US government and Big Tech deal with harmful online content. It is, however, uncertain how the Court of Appeals will rule. Some constitutional law scholars point out that the injunction misapplies the First Amendment, and there is considerable precedent recognising that the government can ask private parties to remove content, especially disinformation.
France’s Macron suggests curbing social media access during riots
Cutting off access to social media platforms like Snapchat and TikTok could be considered an option to deal with out-of-control riots, French President Emmanuel Macron suggested during a meeting with 250 mayors of French cities targeted in riots.
‘We need to have a reflection on social networks, on the prohibitions that we must put. And, when things get carried away, we may have to put ourselves in a position to regulate them or cut them’, he stated.
The recent riots in France, triggered by the killing of a 17-year-old of North African descent by a police officer, prompted Macron to criticise social media’s role in adding fuel to the fire.
Why is it relevant? Macron’s comments were condemned by both his supporters and his opponents, drawing comparisons to measures taken by authoritarian regimes. The government is walking the comments back, noting that ‘The president said it was technically possible, but not that it was being considered’.
// SEMICONDUCTORS //
China announces new export controls
China’s Ministry of Commerce announced that starting 1 August, export controls will be imposed on gallium and germanium, metals essential to semiconductor manufacturing, in order to safeguard national security and interests. Gallium is widely used in compound semiconductor wafers for electronic circuits, semiconductors, and light-emitting diodes, while germanium plays a crucial role in fibre optics for data transmission. Exporters will be required to obtain licences and provide information about importers and end users before shipping these raw materials out of China.
Why is it relevant? This move by China is widely seen as retaliatory: The USA and its allies, such as Japan and the Netherlands, have been targeting the Chinese chip sector with export controls of their own. Alliances are already forming to minimise the impact of such rules: Just this week, the EU and Japan agreed to strengthen cooperation in monitoring, research, and investment in the semiconductor industry. There are also concerns that more controls are to come, as China could restrict the export of rare earth metals – of which it is the world’s largest producer – vital components in producing EVs and military equipment.
// SUSTAINABLE DEVELOPMENT //
SCO member states emphasise digital transformation
Heads of state of the Shanghai Cooperation Organization (SCO) member countries, including India, China, Russia, Pakistan, Kazakhstan, Kyrgyzstan, Tajikistan and Uzbekistan, gathered virtually on 4 July to discuss global and regional issues. In the aftermath of the meeting, a statement on cooperation in digital transformation was issued, in which members acknowledged the significance of digital transformation in driving global, inclusive, and sustainable growth while contributing to the achievement of the 2030 Agenda.
The need for collaborative efforts to unlock the full potential of digitalisation across all sectors, including the real economy, was emphasised. Member states aim to ensure affordable access to digital infrastructure, promote connectivity and interoperability, and provide public services through digital platforms. They also support the integration of digital solutions in key sectors like finance, with a focus on digital payments and the sharing of best practices among SCO member states. Furthermore, the member states recognise the value of data in driving economic, social, and cultural development, highlighting the need for robust data protection and analysis to address societal and economic needs.
10–19 July: The annual High-Level Political Forum (HLPF), taking place in New York, USA, will focus on accelerating the recovery from COVID-19 and fully implementing the 2030 Agenda for Sustainable Development.
11–12 July: The NATO Summit 2023 will focus on strengthening the deterrence and defence of the allied countries in response to the complex and unpredictable security environment. Member states are expected to permanently expand military cyber defenders’ role during peacetime and integrate private sector capabilities.
11–21 July: The 2023 session of the ITU Council will discuss, among other topics, the report on ITU’s role in implementing the outcomes of WSIS and the 2030 Agenda for Sustainable Development, as well as in their follow-up and review processes; the review of the International Telecommunication Regulations; and collaboration with the UN system and other international intergovernmental processes, including on standards development.
We take note of guardrails for AI governance and argue that the SDGs are the ultimate solution. We look at the lessons learnt from the MOVEit Transfer hack. We also take a look at the June barometer of developments and the leading global digital policy events ahead in July and August.
Andrijana Gavrilovic – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting – Diplo
Digital policy developments that made global headlines
The digital policy landscape changes daily, so here are all the main developments from June. There’s more detail in each update on the Digital Watch Observatory.
The USA and the UK signed the Atlantic Declaration, focusing on ensuring leadership in critical and emerging technologies, economic security and technology protection, and digital transformation.
The Swiss Federal Intelligence Service predicts a rise in cyberespionage threats in Europe due to Western actions against Russian intelligence networks. The US CISA director warned of rising risk from Chinese hackers targeting critical US infrastructure during a potential conflict. NATO plans to expand military cyber defenders’ role during peacetime and integrate private sector capabilities permanently.
The European Commission proposed legislation to establish a framework to introduce a digital euro.
The US Securities and Exchange Commission (SEC) sued Binance and Coinbase for securities law violations. Binance was ordered to halt operations in Nigeria and Belgium.
Microsoft and the US Federal Trade Commission (FTC) faced off in federal court over the FTC’s request that a judge block the Microsoft – Activision deal.
OpenAI and Microsoft have been sued in California for data theft and privacy violations.
Nigeria’s Data Protection Act 2023 was signed into law, defining rules for processing personal data, imposing restrictions on the cross-border transfer of personal data, and defining data subject rights.
The Swedish Data Protection Authority has imposed a EUR5 million (USD5.5 million) fine on digital music service company Spotify for breaching several GDPR provisions.
Content policy
Meta and Google will block Canadian news on their platforms in response to the Online News Act, which requires internet giants to pay local news publishers for linking to news sources.
Twitter is implementing limits on the number of tweets that different accounts can read per day.
Jurisdiction and legal issues
The EU has reached a political agreement on the Data Act, which sets principles of data access, portability, and sharing for users of IoT products.
The European Commission has launched formal proceedings against Google after concluding in a preliminary investigation that the company breached EU antitrust rules in the adtech industry, and that divestment is necessary.
The exploitation of the MOVEit Transfer vulnerability by the CLOP ransomware group and the ever-expanding list of victims has raised concerns about how we protect ICT supply chains. We look at what happened and what we’ve learned. Read more.
Policy updates from International Geneva
Numerous policy discussions take place in Geneva every month. Here’s what happened in June.
The annual International Labour Organization (ILO) conference addressed several issues: a just transition towards sustainable and inclusive economies, quality apprenticeships, and labour protection.
Among the outcomes, the ILO was called on to provide support to governments and employers’ and workers’ organizations in harnessing digital technologies to improve working conditions and occupational safety and health (OSH), especially in micro, small, and medium enterprises (MSMEs). The ILO should also ‘intensify knowledge development and capacity-building activities’ to understand the impacts of ‘digitalization, including artificial intelligence and algorithmic management’ on emerging OSH issues.
The UN Human Rights Council presented, in an interactive dialogue on 22 June, the report of the Special Rapporteur on ‘Digital innovations, technologies, and the right to health’ (A/HRC/53/65). Furthermore, the Council hosted a panel discussion on 3 July highlighting the important role that digital, media, and information literacy (DMIL) plays in empowering the disadvantaged to exercise their right to freedom of expression. The Special Rapporteur recommended in her report (A/HRC/53/25) that states prioritise incorporating DMIL into national development plans.
The 2023 edition of the Innovations Dialogue welcomed military, technical, legal, and ethical experts to explore the impact of AI on autonomous weapons, domain-crossing warfare (land, sea, and air), and the emergence of new domains (cyber, space, cognitive, etc.).
Building on last year’s Innovations Dialogue, where much of the discussion theorised about AI’s capability to unlock next-generation military capacity, this year’s focus turned to more domain-specific requirements for the seamless adoption of AI and the unique challenges that each application creates. In addition to the integration of AI systems into weaponry, the speakers discussed how AI-assisted information-gathering systems require oversight, human-led decision-making, and greater explainability of algorithmic calculations.
What to watch for: Global digital policy events in July and August
The AI for Good Global Summit 2023 aims to identify practical applications of AI that can accelerate progress towards the UN sustainable development goals. It features interactive stages, keynote speakers, cutting-edge solutions, and AI-inspired performances, fostering networking and collaboration for safe, inclusive AI development and equal access to its advantages. The summit covers topics such as how AI can advance health, climate, gender equality, inclusive prosperity, and sustainable infrastructure.
The UN office in Geneva will host the Internet Governance Forum (IGF) 2023 Second Open Consultations and Multistakeholder Advisory Group (MAG) Meeting, giving stakeholders the opportunity to contribute to the programme and allowing MAG members to finalise the workshop list and discuss main session topics and the high-level track. The agenda includes workshop selection, a review of other IGF and Day 0 sessions, the development of a programme aligned with strategic priorities, and main session discussions.
The UN Economic and Social Council will host the UN’s High-level Political Forum on Sustainable Development (HLPF) with the theme ‘Accelerating the recovery from the coronavirus disease (COVID-19) and the full implementation of the 2030 Agenda for Sustainable Development at all levels’. In addition to in-depth reviews of SDGs 6, 7, 9, 11, and 17, the forum will present countries’ voluntary national reviews of their 2030 Agenda implementation. The event also includes a three-day ministerial segment and various side events, including UNCTAD’s ‘Developing innovative capabilities for sustainable development’.
The UN OEWG on the security of and in the use of ICTs, tasked with studying existing and potential threats to information security and possible confidence-building measures and capacity development, will hold its fifth substantive session in New York. Deeper discussions of the Annual Progress Report (APR) will be on the agenda.
The Ad Hoc Committee on Cybercrime, an intergovernmental committee of experts and representatives of all regions tasked with advancing a new cybercrime convention, will hold its sixth and final substantive session from 21 August to 1 September 2023. The committee’s concluding session is scheduled for 29 January–9 February 2024, after which its work will be finalised with the presentation of a draft convention to the UN General Assembly during its 78th session, in September 2024.
DiploGPT reported from EuroDIG 2023
In June, Diplo used AI to report from EuroDIG 2023. DiploGPT provided automatic reporting that produced a summary and individual session reports. DiploGPT combines various algorithms and AI tools customised to the needs of the UN and diplomatic communications.
Generative AI is in the news again, with two lawsuits against OpenAI over alleged data theft and privacy violations. In other news: Companies are finding the idea of withdrawing from a country to be an increasingly enticing strategy to wield against governments and regulators, especially when it comes to AI regulation and content policy.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
OpenAI is sued for data theft and privacy violations: Here’s what we can expect
OpenAI and Microsoft have been sued in California in a major class-action lawsuit. The long-anticipated legal battle will address crucial privacy and copyright concerns surrounding the data used – and still being used – to train generative AI models. Hopefully, some clarity on how to apply existing law to this latest technology is finally in sight.
According to the lawsuit, the two companies have:
Secretly scraped people’s data (the legal term is misappropriation)
Violated intellectual property rights of users
Violated the privacy of millions of users
Normalised the illegal scraping of data which is forever embedded in AI models
Gathered more data than users consented to
Posed (and poses) special privacy and security risks for children.
The last one is a particularly serious accusation, considering that the ‘defendants have been unjustly enriched by their theft of personal information as its billion-dollar AI business’, the lawsuit states.
Ring a bell? The case reminds us of two concluded cases:
Italy’s ChatGPT ban in March 2023, which the country lifted a few weeks later after OpenAI added information on how user data is collected and used and started allowing users to opt out of data processing that trains the ChatGPT model.
The ACLU vs Clearview AI case, which ended in a settlement last year, after the company agreed to stop selling access to its face database to businesses and people in the USA, and to any entity in Illinois, including state and local police, for five years.
There are also two similar ongoing lawsuits:
The copyright sections of the new case are similar to another case initiated last week in San Francisco against OpenAI by two US authors who say the company mined data copied from thousands of books, without permission.
It’s also similar to a class action suit initiated in January 2023 against Stability AI, DeviantArt, and Midjourney for their use of Stable Diffusion, a tool that was trained on copyrighted works of artists.
What is the lawsuit asking for, as a remedy? Obviously, financial compensation to users affected by the company’s violations, and more transparency on how personal data is collected and used. But also:
Digital dividends as compensation for people whose data was used to develop and train OpenAI’s models
Establishment of an independent body to approve products before they are launched publicly. Until then, a freeze on the commercial use of some of the company’s models.
The legal arguments. The main argument used by companies facing generative AI lawsuits is that the outputs are different from their source material and are therefore unequivocally transformative. But the Andy Warhol Foundation for the Visual Arts v. Goldsmith ruling of May 2023 clarified how transformative use is defined: If a work is meant to be used commercially, the fair use argument won’t stand.
The courts will also undoubtedly be looking at the data-scraping practices of OpenAI. Beyond this, there’s ChatGPT itself: If the software can’t function without underlying data, is it continuously infringing copyright laws? And where does that leave people who use ChatGPT as part of their work?
The ethical issues. One of the things that irks people is the permanence of models trained with personal data. If your data was used to train the model, that data has now become part of it and is, in turn, merged with additional data to train the model further. It’s a never-ending loop.
There’s also the unprecedented scale of it all. The entire internet has become one unending source of data for AI companies. In OpenAI’s case, one wonders whether any data at all was off-limits to the company.
If people seek solace in any of this, we don’t think any can be found. The fact that generative AI models are not infallible provides more worry than consolation. It’s also becoming increasingly difficult to tell whether a piece of content was created by humans or generated by an AI model – not to mention the increasing difficulties of discerning truth from fiction (and lies).
And yet, despite all the bad press for OpenAI (and other AI companies, for that matter), this doesn’t seem to be stopping anyone from exploring or using AI tools. Reversing data misuse is next to impossible; the closest thing is to forcibly improve company practices, as similar cases have already shown.
Digital policy roundup (26 June–3 July)
// AI GOVERNANCE //
Draft AI rules could lead us to pull out of Europe, say industry executives
More than 160 executives from companies, including Meta, Siemens, and Renault, jointly signed an open letter to EU lawmakers expressing their concerns regarding the proposed EU AI Act.
They think the new rules, as they currently stand, will have a negative impact on Europe’s competitiveness and technological independence due to the substantial compliance costs and significantly increased liability risks for companies. The executives also warn that the rules may lead innovative companies to relocate their operations abroad and investors to withdraw their support for European AI development.
Why is it relevant? First, MEP and AI Act co-rapporteur Dragos Tudorache pushed back quite forcefully: ‘I am convinced that they have not carefully read the text but have rather reacted on the stimulus of a few who have a vested interest in this topic’. Second, the proposed rules are still under negotiation, so they can still be changed (and watered down). Third, it reminds us of OpenAI CEO Sam Altman’s recent comment (which he later retracted) about pulling out of the EU.
EU reaches political agreement on the Data Act
The EU has reached a political agreement on the Data Act, which sets principles of data access, portability, and sharing for users of IoT products. The act will give users access to data generated by connected devices and will address concerns about unauthorised data access and the protection of trade secrets.
Why is it relevant? One of the most important concepts is that the owners of connected devices will be able to monetise the generated data, which so far has been predominantly harvested by manufacturers and service providers.
Twitter limits the number of tweets users can read per day
In what seems to be a reaction to last week’s lawsuit against OpenAI (and the violations the lawsuit is alleging), Twitter’s Elon Musk announced that the company will limit the number of tweets a verified account can read to 6,000 posts per day. Unverified accounts will be limited to 600 posts per day, while new accounts will be limited to 300 posts per day.
A few hours later, he increased the limits to 10,000, 1,000, and 500, respectively – a moderate increase, but an increase nonetheless. Companies (like Twitter) are also bothered by extensive web scraping.
US FTC proposes rules against fake reviews
The US Federal Trade Commission (FTC) is proposing new rules that would prohibit businesses from paying for reviews, manipulating honest reviews, and posting fake social media engagement. The announcement follows a period of public consultation, which ended in January.
Why is it relevant? The rules will be accompanied by civil penalties for violators – and as the FTC has confirmed, fines are a stronger deterrent.
Cambodian PM backtracks on country-wide Facebook ban
Cambodian Prime Minister Hun Sen briefly considered a country-wide ban on Facebook over the many abusive messages he was receiving from political opponents on the platform. He also announced a switch to the messaging app Telegram, citing its effectiveness and its usability in countries where Facebook is banned.
The announcement came right before Meta’s independent oversight board ordered the removal of a video where the Prime Minister threatened his political rivals, overturning the company’s original decision to keep the video online in line with its newsworthiness allowance policy. The board also recommended a six-month suspension of the premier’s Facebook and Instagram accounts.
Why is it relevant? It’s not so much the decisions by Meta or its oversight board that are so important, but rather the remarks made (on Telegram) by the prime minister in reaction to the board’s decision: ‘I have no intention to ban Facebook in Cambodia… I am not so stupid as to block the breath of all the people.’
Google follows Meta’s lead: Canadian news to be blocked
Google announced it will remove links to Canadian news content from its platform in response to new rules requiring companies to compensate local news publishers for linking to their content. This decision follows a similar move by Facebook owner Meta.
Canada’s Parliament passed the new law, known as the Online News Act or Bill C-18, last week.
Why is it relevant? We’ve already compared this development with what happened in Australia two years ago, when Google temporarily blocked news outlets from its search engine in reaction to the Australian government’s plans to enact the news media bargaining code. The difference, however, is that by the time the law was enacted in Australia, Google had already entered into private agreements with news agencies. So far, it looks like the situation in Canada will have a more pronounced impact on both consumers and the company’s operations in the country.
The top 10 companies in the world by market cap, in millions of USD. Source: Adapted from a Reuters graph
// MARKETS //
Apple becomes the world’s first USD3 trillion company
Apple has become the world’s first company to close trading with a market cap of USD3 trillion (EUR2.7 trillion) – it had only briefly crossed that threshold in intraday trading in January 2022 – achieving what no other firm, tech or otherwise, ever has. While this milestone may elate the company’s investors, the immense power wielded by Big Tech leaves some feeling uneasy.
The week ahead (3–10 July)
2–8 July: The IEEE International Conference on Quantum Software, taking place in Chicago, Illinois, USA, and online, will bring researchers and practitioners from different areas of quantum (and classical) computing, software, and service engineering to discuss architectural styles, languages, and best practices.
6–7 July: The annual AI for Good Global Summit returns to Geneva, Switzerland, and online this week. Over 100 speakers from governments, international organisations, academia, and the private sector are expected to discuss the opportunities and challenges of using AI responsibly.
10–19 July: The annual High-Level Political Forum (HLPF), taking place in New York, USA, will focus this year on accelerating the recovery from COVID-19 and fully implementing the 2030 Agenda for Sustainable Development.
Microsoft offers Europe suggestions on AI regulation
We couldn’t help but notice the constructive tone in Microsoft President Brad Smith’s message to European lawmakers on AI rules:
‘From early on, we’ve been supportive of a regulatory regime in Europe that effectively addresses safety and upholds fundamental rights while continuing to enable innovations that will ensure that Europe remains globally competitive. Our intention is to offer constructive contributions to help inform the work ahead. … In this spirit, here we want to expand upon our five-point blueprint, highlight how it aligns with EU AI Act discussions, and provide some thoughts on the opportunities to build on this regulatory foundation.’ Read the full text.
A five-point blueprint for governing AI
1) Implement and build upon new government-led AI safety frameworks
2) Require effective safety brakes for AI systems that control critical infrastructure
3) Develop a broader legal and regulatory framework based on the technology architecture for AI
4) Promote transparency and ensure academic and public access to AI
5) Pursue new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology