DW Weekly #110 – 8 May 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 9

Dear all,

Policymakers were quite busy last week, proposing new laws, new strategies, and holding new consultations on laws and strategies. Actors reacting to some of these developments did not mince their words. But first, updates from the world of generative AI, as has become customary these days.

Let’s get started.
Stephanie and the Digital Watch team


// HIGHLIGHT //

The US White House’s approach to regulating AI:
Innovate, tackle risks, repeat

Generative AI tools, like ChatGPT, have reignited one of the most prominent dilemmas policymakers face: How to regulate this emerging technology without hindering innovation. The world’s three AI hotspots – the USA, the EU, and China – are each known for their distinct approaches to regulation: ranging from a hands-off attitude to strict regulation and enforcement frameworks.

Of the three, the USA has always favoured an innovation-first approach. It was this approach that helped a handful of American companies become the massive tech behemoths they are. But fast forward to 2023, a year that generative AI has hugely impacted: The ambience of today differs significantly from the atmosphere that enveloped Big Tech when they were just starting up. 

US policymakers and top government officials have been sounding alarm bells over these risks in recent weeks. AI experts called for a moratorium on further development of generative AI tools: ‘We do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place,’ the chair of research institute Centre for AI and Digital Policy told the US Congress recently.

Despite all these warning bells, two developments last week signalled that the White House would continue favouring a guarded innovation-first approach as long as the risks are tackled.

The first was a high-level meeting between US Vice President Kamala Harris and the CEOs of Alphabet/Google, Anthropic, Microsoft, and OpenAI. According to the invitation the CEOs received, the aim was to have ‘a frank discussion of the risks we each see in current and near-term AI development, actions to mitigate those risks, and other ways we can work together’.

And the meeting, at which President Joe Biden also made a brief appearance, was exactly that. Harris told the CEOs that they needed to make sure their products were safe and secure before deploying them to the public, that they needed to mitigate the risks (to privacy, democratic values, and jobs), and that they needed to set an example for others, consistent with the US’ voluntary frameworks on AI. 

In essence, the White House signalled that, for now, it has decided to trust that the companies will act responsibly, leaving it to Congress to figure out how tech companies can be held responsible. During a press call the day before, in reply to whether the administration ‘trust(ed) these companies to do that proactively given the history that we’ve seen in Silicon Valley with other technologies like social media’, the reply by senior administration officials was: 

‘Clearly there will be things that we are doing and will continue to do in government, but we do think that these companies have an important responsibility. And many of them have spoken to their responsibilities. And, you know, part of what we want to do is make sure we have a conversation about how they’re going to fulfil those pledges.’

Tweeted photo from US President Biden shows him visiting an AI meeting table led by  Vice-President Harris. The photo carries the quote, 'What you're doing has enormous potential --' .

The second was the announcement of new measures ‘to promote responsible AI innovation’: funding to launch new research institutes; upcoming guidance on the use of AI by the federal government; and, interestingly, an endorsement of a red-teaming event at DEFCON 31 that will bring together AI experts, researchers, and students to dissect popular generative AI tools for vulnerabilities.

Why would the White House support a red-teaming event? First, because it’s a practical way of reducing the number of vulnerabilities and, therefore, limiting risks. Hackers will be able to experiment on jailbroken versions of the software, confidentially report vulnerabilities, and the companies will be given time to fix their software.

Second, it opens up the software to the scrutiny of the (albeit limited) public. Unless people know what’s really under the bonnet, they can’t report issues or help fix it.

Third, it’s low-hanging fruit for any approach that favours giving companies a free hand to innovate for now and taking other steps that do not involve heavy-handed regulation.

The question is not whether these steps will be enough. They’re not, as new AI tools will continue to be developed. Rather, it’s whether this guarded trust is misplaced and whether policymakers have learned from the past. As Federal Trade Commission chair Lina Khan wrote, ‘The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice.’


Digital policy roundup (1–8 May)
// AI //

UK’s competition authority launches review of AI models

The UK’s Competition and Markets Authority (CMA) has kickstarted an initial review of how AI models impact competition and consumers. The focus is strictly on ensuring that AI models do not harm consumer welfare nor restrict competition in the market.

The CMA has called for public input (till 3 June). Depending on the findings, the CMA may consider regulatory interventions to address any anti-competitive issues arising from the use of AI models.

Why is this relevant? Because it contrasts with the steps the CMA’s counterpart across the pond – the Federal Trade Commission – pledged to take.


// CABLES //

NATO warns of potential Russian threat to undersea pipelines and cables

NATO’s intelligence chief David Cattler has warned that there is a ‘significant risk’ that Russia could attack critical infrastructure in Europe or North America, such as internet cables and gas pipelines, as part of its conflict with the West over Ukraine.

The ability to undermine the security of Western banking, energy, and internet networks is becoming a tremendous strategic advantage for NATO’s opponents, Cattler said.

Why is this relevant? Apart from the warning itself, the comments came a day before NATO’s Secretary General Jens Stoltenberg met with industry leaders to discuss the security of critical undersea infrastructure. Undoubtedly, the security of the Nord Stream pipeline’s surrounding region was on the agenda.


// SENDER-PAYS //

Stakeholders oppose fair share fee

A coalition of stakeholders has come together to publicly caution against the potential implementation of a fair share fee imposed on content providers who would be obliged to pass it on to telecom companies. 

The group, which includes NGOs, cloud associations, and broadband service providers, was reacting to a consultation launched by the European Commission in February. It’s not the consultation itself they’re worried about, but any misleading conclusions the consultations might lead to. 

Why is this relevant? First, the signatories to the statement think there’s ‘no evidence that a real problem’ exists. Second, they say the fee would be a potential violation of the net neutrality principle – a principle that the EU has staunchly protected.


// MEDIA //

Google and Meta voice opposition to Canada’s online news bill

The battle over Canada’s proposed online news bill continues. In last week’s hearing by the Senate’s Standing Committee on Transport and Communications, both Google and Meta said that they would have to withdraw from Canada should the proposed bill pass as it stands now.

One of the main issues is that the bill obliges companies to pay news publishers for linking to their sites, ‘making us lose money with every click’, according to Google’s vice-president for news, Richard Gingras.

Why is this relevant? Because Google and Meta have repeated their threat that they’re ready to leave if the bill isn’t revised.


// PRIVACY //

EU’s top court rules on two GDPR cases 

In the first – Case C-487/21 – the Court of Justice of the EU clarified that the right of access under the GDPR entitles individuals to obtain a faithful reproduction of their personal data. That can mean entire documents, if there’s personal data on each page. 

In the second – Case C-300/21 –  the court confirmed that the right to compensation under the GDPR is subject to three conditions: infringement of the GDPR, material or non-material damages resulting from the infringement, and a causal link between the damage and the infringement. But violation of the GDPR alone does not automatically entitle the claimant to compensation. That’s up to national laws to determine.


Was this newsletter forwarded to you, and you’d like to see more?


// IPR //

EU Commission releases non-binding recommendation to combat online piracy of live events

The European Commission has opted for a non-binding strategy to combat the piracy of live events, generating dissatisfaction among both lawmakers and rightsholders. 

The measure outlines several recommendations for national authorities, rightsholders, and intermediary service providers to tackle the issue of live event piracy more effectively, but its non-binding nature falls short of what is required to address the issue.

Why is this relevant? Because the European Commission went ahead with its plans despite not one but two complaints from a group of parliamentarians calling for a legislative instrument to counter online piracy.


// AUTONOMOUS CARS //

China publishes draft standards for smart vehicles 

China’s Ministry of Industry has published a series of draft technical standards for autonomous vehicles (Chinese) – developed within a National Automobile Standardisation Technical Committee – that will address cybersecurity and data protection issues. The public can comment till 5 July.

One of the standards requires that data generated by autonomous vehicles be stored locally. The government wants to ensure that any sensitive data stays within China’s borders.

Another standard will require autonomous vehicles to be equipped with data storage equipment to allow data to be retrieved and analysed in the case of an accident. Reminds us of flight data recorder black boxes.


The week ahead (8–14 May)

9–12 May: The Women in Tech Global Conference, in hybrid format, will bring women active in the technology sector together to discuss their perspectives on tech leadership, gender parity, digital economy, and more.

10 May: Last day for feedback on two open consultations: The European Commission’s single charger draft rules and China’s proposed regulation for generative AI tools (Chinese).

10–12 May: UNCTAD’s Intergovernmental Group of Experts on E-commerce and the Digital Economy meets in Geneva for its sixth session.

11 May: In the European Parliament, the joint committee (IMCO/LIBE) vote on the report on the Artificial Intelligence Act takes place today.
For more events, bookmark the observatory’s calendar of global policy events.


#ReadingCorner
AI pioneer Geoffrey Hinton.
Campaigns 10

AI an urgent threat, says AI pioneer

AI pioneer Geoffrey Hinton, who turned 75 in December and who recently resigned from Google, tells news portal Reuters that AI could pose a ‘more urgent’ threat to humanity than climate change. In another interview with the Guardian, he says there’s no simple solution.


starlink
Campaigns 11

Starlink arrives in Africa, but South Africa left behind

Starlink, the satellite internet constellation developed by Elon Musk’s SpaceX, has started operating in Nigeria, Mozambique, Rwanda, and Mauritius over the past few months, with 19 more African countries scheduled for launch this year and the next. But South Africa is notably missing from this list. Could this be due to South Africa’s foreign ownership rule, which grants licences only to companies with at least 30% South African ownership of the company seeking to operate there? A Ventures Africa contributor investigates.


steph
Stephanie Borg Psaila – Author
Director of Digital Policy, DiploFoundation
ginger
Virginia Paque – Editor
Senior editor – Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?

DW Weekly #109 – 1 May 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 18

Dear readers,

It’s more AI this week, after the G7 wrapped up a weekend-long ministerial meeting, with data flows joining the ranks on the ministers’ agenda. Microsoft is far from impressed at the UK’s decision to block its acquisition of Activision Blizzard, while the European Commission announces the names of very large platforms and search engines that will face tougher content regulation and consumer protection rules. The European Parliament reached a political deal on the AI Act, but we’ll cover that after the vote in plenary.

Stephanie and the Digital Watch team


// HIGHLIGHT //

G7 countries take on AI, data flows

As we anticipated last week, the G7 digital ministers wrapped up their weekend meeting in Japan with a focus on AI and data flows, setting the stage for new developments in some areas and a bit of a  letdown in others. The good parts first.

If AI is queen, data is still king 

The digital ministers of the world’s seven largest economies – Canada, France, Germany, Italy, Japan, the UK, and the USA – will start implementing Japan’s plan for a Data Free Flow with Trust (DFFT).

In essence, this approach will try to reconcile the need for data to flow freely, with the need to keep personal data safe and uphold people’s right to privacy. Although the group of seven have strong economies in common, the way the USA approaches data protection is at odds with stronger European safeguards. Max Schrems can tell us a thing or two about this.

The new plans outlined in the G7 digital ministers’ declaration involve setting up a new entity in the coming months, called an Institutional Arrangement for Partnership (IAP), and choosing the OECD to lead the IAP’s work. Although the OECD has only 37 members, its recent success in getting over 140 countries to agree on a new global digital tax deal shows that it’s capable of navigating complex terrains. Data flows are clearly a politically sensitive, highly charged issue.

The DFFT was first proposed by Japan’s former prime minister Shinzo Abe, debuted at the World Economic Forum annual meeting in Davos, and was later endorsed by the G20. Since Japan is chairing the G7 this year, it will want to see the IAP up and running by the end of 2023. Good news for DFFT supporters: The pressure’s on for the IAP stream. 

Generative AI: G7 digital ministers hedge their bets 

While things will move quickly on the data flows front, the ministerial declaration is somewhat of a letdown when it comes to regulating generative AI tools such as ChatGPT.

The group of seven did acknowledge the popularity that generative AI tools have gained quickly and the need to take stock of benefits and challenges. But the best the ministers could offer was a vague, non-committal plan:   

‘We plan to convene future G7 discussions on generative AI which could include topics such as… (transparency, disinformation)… These discussions should harness expertise and leverage international organisations such as the OECD to consider analysis on the impact of policy developments and GPAI to conduct relevant practical projects.’

What they did commit to is ‘to support interoperable tools for trustworthy AI’, meaning: to develop tools that allow AI tools to work together seamlessly. This includes developing standards and promoting dialogue on interoperability between governance frameworks.

Meanwhile, there’s still a possibility that the G7 heads of state, meeting later this month in Hiroshima, will take more concrete steps to tackle the privacy and security concerns of generative AI. 

 Groupshot, Person, People, Clothing, Coat, Formal Wear, Suit, Adult, Male, Man, Blazer, Jacket, Face, Head, Nathaniel Fick, Margrethe Vestager, Sogyal Rinpoche, Tarō Konō, Yasutoshi Nishimura
G7 digital ministers and other delegates during the first day of a two-day meeting in Japan, on 29 April 2023. Credit: Kyodo

Digital policy roundup (24 April–1 May)
// AI //

Italy lifts ban on ChatGPT after OpenAI introduces privacy improvements

The Italian data protection regulator has confirmed it has allowed OpenAI’s ChatGPT to resume operations in Italy after the company implemented several privacy-enhancing changes. 

The Italian Garante Per La Protezione Dei Dati Personali temporarily blocked the AI software in response to four concerns: a data breach (which the company said was a bug), unlawful data collection, inaccurate results, and the lack of any age verification checks. OpenAI has now fulfilled most of the regulator’s requests. It added information on how users’ data is collected and used and allows users to opt out of data processing that trains the algorithmic model. 

What’s next? There are still two requests from the regulator that OpenAI must implement in Italy: It needs to implement an age-based gated system to keep children safe from accessing inappropriate content (this will serve as a testbed for age-verification systems), and it needs to launch a publicity campaign to inform users about their right to opt out of data processing for training the model. 

Why the emphasis on a publicity campaign for users? Because there’s no opt-in for users to consent to data processing for training algorithms (OpenAI will rely on legitimate interest). So should users object, their recourse is to submit an opt-out form to OpenAI. 

Meanwhile, scrutiny by the EU’s ad hoc task force and other data protection watchdogs continues.

USA: A new bill to create a task force to review AI policies

Driven by the need to review AI policy, Democratic Senator Michael Bennet introduced a bill that would create a task force to review AI policy and make recommendations. It would then terminate its operations after 18 months. 

Why is this relevant? Inasmuch as the idea behind it is good, it could take longer for the task force to materialise than for it to complete its job.


// ANTITRUST //

UK competition watchdog blocks Microsoft’s purchase of Activision Blizzard

The UK’s Competition and Markets Authority (CMA) has blocked Microsoft’s acquisition of Activision Blizzard, valued at USD68.7 billion (EUR62.5 billion), over concerns that it would negatively affect the cloud gaming industry.

We might have known this would happen: In February, the watchdog said the merger would harm competition and proposed several remedies. Even though Microsoft’s reassurances seemed promising, it did not dissuade the watchdog strongly enough to overturn its initial thoughts, and a war of words ensued.

Why is this relevant? It’s relevant because of what happens next. An unsuccessful appeal by Microsoft could influence the decisions of the US Federal Trade Commission and the European Commission. If past experience is anything to go by, a second rejection by the European Commission will convince the FTC to block the merger as well. 

Different Activision Blizzard game characters pose for a group photo.
Some of the characters in Activision Blizzard’s games. (Credit: Activision Blizzard)

// DSA //

Digital Services Act: European Commission identifies 19 very large tech companies

The European Commission has designated 19 tech companies under two categories – very large online platforms (VLOPs) and very large online search engines (VLOSEs) – which will need to comply with stricter rules under the Digital Services Act. 
These companies have more than 45 million monthly active users, according to the data the companies themselves had to disclose last February.

What happens next? The companies must comply with the new rules within four months. The rules include no ad targeting based on a user’s sensitive information (such as political opinion), tougher measures to curb the spread of illegal content, and a requirement to carry out their first risk assessment.

You’ve earned a badge!

The 17 very large online platforms are: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Snapchat, TikTok, Twitter, Wikipedia, YouTube, and Zalando.

The 2 very large online search engines are: Bing and Google Search.


// CONTENT POLICY //

China to root out false news about Chinese businesses

The Central Cyberspace Administration of China will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The aim is to allow ‘enterprises and entrepreneurs’ to work in ‘a good atmosphere of online public opinion’.

Why is it relevant? There’s nothing new about China’s ‘clean-up cyberspace’ campaigns (known as Qinglang) – these campaigns actually started in 2016. But the fact that the government wants to improve local businesses’ reputation shows its intent to promote its domestic market.

Brazil blocks (and reinstates) Telegram over non-disclosure of personal data 

Brazil’s Supreme Court temporarily suspended access to messaging app Telegram for users in the country after the company failed to comply with an order to provide data linked to a group of neo-Nazi organisations using the platform. 

Telegram CEO Pavel Durov said that the data requested by the court ‘is technologically impossible for us to obtain’, as the users had already left the platform. But the court disagreed. 

The court has lifted its suspension, but retained the non-compliance fine of one million reais (USD198,000 or EUR182,000) per day until the company provides the requested data.

Why is it relevant? Because the same thing happened to Telegram last year and to WhatsApp in previous years. 

Telegram CEOs message April 2023
Campaigns 19

(Click here for the original Du Rove Channel machine-readable message)


// CHILDREN //

Out of control? Severe child sexual abuse imagery on the rise

The news that no one wants to hear: The number of images depicting child sexual abuse classified as severe has more than doubled since 2020. 

The annual report of the Internet Watch Foundation (IWF), a non-profit that works to eliminate abusive content from the internet, reveals more harrowing trends. For instance, content involving children aged 7-10 increased by 60%, with most victims being girls. Some of the most extreme abuse is committed against very young children, including babies.

Why is it relevant? 

First, it comes out at the same time as the results of a two-year investigation by The Guardian, which found tech company Meta is struggling to prevent criminals from using its platforms, Facebook and Instagram, in an effort to counter child sexual abuse. 

Second, because it strengthens the call, reiterated by law enforcement agencies a fortnight ago (and by the IWF in its report), for tech companies to prioritise child safety over end-to-end encryption. The agencies say that encryption shouldn’t come at the expense of diminishing companies’ abilities to identify abusive content.


The week ahead (1–7 May)

1–4 May: This year’s Web Summit, which gathers leaders and start-ups from the tech and software industries, is taking place in Brazil this week. 

3 May: It’s World Press Freedom Day! To celebrate the 30th anniversary of this international day, UNESCO is holding a special event in New York on 2 May, which will also be livestreamed.

3 May: Last day to provide feedback on the EU’s initiative on virtual worlds: A head start towards the next technological transition

3–4 May: The 6G Global Summit is happening in Bahrain (and online).

3–5 May: This year’s forum on Science, Technology and Innovation for the Sustainable Development Goals (STI Forum), taking place in New York, is about accelerating the post-Covid-19 recovery.

5 May: A stakeholder workshop organised by the EU will discuss how to ensure effective compliance with the data-related rules in the Digital Markets Act. It’s being held in Brussels and online.


steph
Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?

Digital Watch newsletter – Issue 79 – May 2023

Pentagon: The leak on Discord is more significant than we think

From time to time, the intelligence of US agencies and its allies are exposed in major leaks. April’s leak of 50-or-so top secret documents on the gaming chat service Discord was one of the most egregious.

The release of diplomatic cables by WikiLeaks in 2010, the 2013 disclosures by Edward Snowden, and the disclosure of the National Security Agency and CIA’s hacking tools in 2016 and 2017 rank among the world’s biggest modern-time leaks.

Outrage or shrug? Diminishing response

Every new leak seems to generate less and less outrage on a global level. So when another US intelligence leak surfaced in April on Discord (a relatively unknown social platform), it hardly caused a blip on the radar. While sensationalism can hinder law enforcement’s efforts, disinterest isn’t exactly helpful either.

The Discord leak was revealed on 6 April by the New York Times. Behind the leak was  21-year-old Jack Teixeira, an airman first class in the Massachusetts Air National Guard. 

It wasn’t difficult for the FBI to identify him. He uploaded the documents to an online community on Discord (a server) that he was unofficially administrating, and tracked the FBI investigation into his own leak. He was charged a few days later.

Mistaken for fake news

In that short time, the leaked documents were spread to other social media platforms by users who thought the documents were fake. The possibility of the documents being top secret didn’t seem to register.

As CNN reported: ‘Most [Discord] users distributed the files because they thought they were fake at first,’ one Discord said. ‘By the time they were confirmed as legitimate, they were already all over Twitter and other platforms.’

A Google Trends graph shows how people’s interest in searching for information related to leaks has dwindled over time
A Google Trends graph shows how people’s interest in searching for information related to leaks has dwindled over time

Very bad timing

Not that there is ever a good time, but this leak arrived at a particularly sensitive moment in Russia’s ongoing conflict against Ukraine. 

Although the data was not as comprehensive as in previous leaks, this latest breach provided intimate details about the current situation in Ukraine, as well as intelligence on two of the US’s closest allies: South Korea and Israel. 

While Europe was mostly spared, the leaked information uncovered that Ukraine has European special forces on the ground and that almost half of the tanks en route to Kyiv are from Poland and Slovenia. The collateral consequences of the leak extend to many countries.

Still out there

Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, promoting a debate about the responsibility of social media companies in cases involving national security. There’s no one solution that can solve the content moderation issues on social media, complicating the follow-up. 

Unfortunately, but unsurprisingly, leaks are bound to happen, especially when classified information is accessible to so many people. In 2019, there were 1.25 million US citizens with clearance to access the USA’s top-secret information.

One solution, therefore, is for social media platforms to strengthen their content policies when it concerns leaks of intelligence information. If the former Twitter employee interviewed by CNN is correct, ‘the posting of classified US military documents would likely not be a violation of Twitter’s hacked materials policy’. Another possibility is for companies to strengthen their content moderation capabilities. To avoid imposing impossible burdens on start-up or small platforms, capabilities should be matched to the size of a platform’s user base (the framework used by the EU’s Digital Services Act is a good example).

The issue becomes more complex when illegal material is shared on platforms that use end-to-end encryption. As law enforcement agencies have emphasised time and time again, while there’s no doubt that encryption plays an important role in safeguarding privacy, it also hampers their ability to identify, pursue, and prosecute violations.  

For now, we should focus on the fact that the latest leak was uploaded by a user to a public forum on social media, despite the potential damage to the national security of their own country (the USA) and the risk to citizens of a war-torn country (Ukraine). That is undoubtedly the biggest concern

criminal complaint
Campaigns 33

Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from April. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture

same relevance

G7 digital ministers will start implementing Japan’s plan for a Data Free Flow with Trust (DFFT) through a new body, the Institutional Arrangement for Partnership (IAP), led by the Organisation for Economic Co-operation and Development (OECD). They also discussed AI, digital infrastructure, and competition.

Sustainable development

same relevance

The UN World Data Forum, held in Hangzhou, China, called for better data governance and increased collaboration between governments to achieve a sustainable future. UN Secretary-General António Guterres said that data remains a critical component of development and progress in the 21st century.

Security

increasing relevance

The Pentagon started investigating the leak of over 50 classified documents that turned up on the social media platform Discord. See our story on pages 2–3. A joint international law enforcement operation seized the Genesis Market, a dark web market.

The European Commission announced a EUR1.1 billion (USD1.2 billion) plan to strengthen the EU’s capabilities to fend off attacks and support more coordination among member states. 

TikTok was banned on government devices in Australia; the Irish National Cyber Security Centre also recommended that government officials refrain from using TikTok on devices.

The annual report of the Internet Watch Foundation (IWF) revealed that severe child sexual abuse imagery is on the rise.

E-commerce and the internet economy

same relevance

The UK’s Competition and Markets Authority (CMA) blocked Microsoft’s acquisition of Activision Blizzard over concerns that it would negatively affect the cloud gaming industry. Microsoft will appeal.

The European Commission designated 19 tech companies as very large online platforms (VLOPs) (17) and very large online search engines (VLOSEs) (2), which will need to comply with stricter rules under the new Digital Services Act. 

South Korea’s Fair Trade Commission (FTC) fined Google for unfair business practices. A group of Indian start-ups asked a local court to suspend Google’s new in-app billing fee system. In the UK, Google will let Android developers use alternate payment options.

Infrastructure

same relevance

The EU’s Council and Parliament reached a political agreement over the new Chips Act, which aims to double the EU’s global output of chips by 20% by 2030.

Digital rights

increasing relevance

Governments around the world launched investigations into OpenAI’s ChatGPT, principally over concerns that the company’s practices violated people’s privacy and data protection rights. See our main story.

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities to authenticate users’ identities. 

European MEPs voted against a proposal to allow personal data transfers of EU citizens to the USA under the new EU-US Data Privacy Framework.

Content policy

same relevance

The Central Cyberspace Administration of China will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The aim is to allow enterprises and entrepreneurs to work in a good atmosphere of online public opinion.

Jurisdiction and legal issues

same relevance

Brazil’s Supreme Court blocked – and then reinstated – messaging app Telegram for users in the country after the company failed to provide data linked to a group of neo-Nazi organisations using the platform. 

A Los Angeles court dismissed a claim for damages by a Tesla driver, after the company successfully argued that the partially automated driving software was not a self-piloted system.

New technologies

increasing relevance

In the USA, the Biden administration is studying potential accountability measures for AI systems. The National Telecommunications and Information Administration’s  (NTIA) call for feedback runs till 10 June. A US Democratic Senator has introduced a bill that would create a task force to review AI policy. The US Department of Homeland Security also announced a new task force to ‘lead in the responsible use of AI to secure the homeland’ while defending against malicious use of AI.

A group of 11 members of the European Parliament are urging the US President and European Commission chief to co-organise a high-level global summit on AI governance. 

The Cyberspace Administration of China (CAC) proposed new measures for regulating generative AI services. The draft is open for public comments until 10 May.

Dozens of advocacy organisations and children’s safety experts called on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds, due to potential risks of harassment and privacy violations for young users.


Why authorities are investigating ChatGPT: The top 3 reasons

With its ability to replicate human-like responses in text-based interactions, OpenAI’s ChatGPT has been hailed as a breakthrough in AI technology. But governments aren’t entirely sold on it. So what’s worrying them?

Privacy and data protection

Firstly, there’s the central issue of allegedly unlawful data collection, the all-too-common practice of collecting personal data without the user’s consent or knowledge. 

This is one of the reasons why the Italian privacy watchdog, the Garante per la Protezione dei Dati Personali, imposed a temporary ban on ChatGPT. The company addressed most of the authority’s concerns, and the software is now available in Italy again, but that doesn’t solve all the problems.

The same concern is being tackled by other data protection authorities, including France’s Commission nationale de l’informatique et des libertés (CNIL), which received at least two complaints, and Spain’s Agencia Española de Protección de Datos (AEPD). Then there’s the European Data Protection Board (EDPD)’s newly-launched task force, whose ChatGPT-related work will involve coordinating the positions of the other European authorities.

Concerns around data protection have not been limited to Europe, however. The complaint by the Center for Artificial Intelligence and Digital Policy (CAIDP) to the US Federal Trade Commission (FTC) argued that OpenAI’s practices contain numerous privacy risks. Canada’s Office of the Privacy Commissioner is also investigating.

Unreliable 

Secondly, there’s the issue of inaccurate results. OpenAI’s ChatGPT model has been used by several companies, including Microsoft Bing, to generate text. However, as OpenAI itself confirms, the tool is not always accurate. Reliability was one of the issues behind Italy’s decision to ban ChatGPT, and in one of the complaints received by the French CNIL. The CAIDP’s complaint to the FTC also argued that OpenAI’s practices were deceptive since the tool is ‘highly persuasive’, even if the content is unreliable. In Italy’s case, OpenAI told the authority it was ‘technically impossible, as of now, to rectify inaccuracies’. That’s of little reassurance, considering how these AI tools can be used in sensitive contexts such as healthcare and education. The only recourse, for now, is to provide users with better ways to report inaccurate information.

OpenAi
Campaigns 34

Children’s safety

Thirdly, there’s the issue of children’s safety and the absence of an age verification system. Both Italy and the CAIPD argued that, as things stand, children can be exposed to content that is inappropriate for their age or level of maturity. 

Even though OpenAI has returned to Italy after introducing an age question on ChatGPT’s sign-up form, the authority’s request for an age-based gated system still stands. OpenAI must submit its plans by May and implement them by September. This request coincides with efforts by the EU to improve how platforms confirm their users’ age. 

As long as new AI tools keep emerging, we expect to see continued scrutiny of AI technologies, particularly around their potential privacy and data protection risks. OpenAI’s response to the various demands and investigations may set a precedent for how AI companies are held accountable for their practices in the future. At the same time, there is a growing need for greater regulation and oversight of AI technologies, particularly around machine learning algorithms.

Policy updates from International Geneva

WSIS Action Line C4: Understanding AI-powered learning: Implications for developing countries | 17 April

An ITU and the ILO event examined the impact of AI technologies on the global education ecosystem. 

Focusing mostly on the issues experienced by the Global South, experts discussed how these technologies were being used in areas such as exam monitoring, faculty lecture transcriptions, student success analyses, teachers’ administrative tasks, and real-time feedback to student questions. 

They also talked about the added workload for teachers to ensure that they and their learners are proficient with the necessary tools, as well as the use and storage of personal data by the providers of AI technologies and others within the educational system. 

Solutions to these challenges must also address the existing digital skills gap and connectivity issues.


UNECE Commission’s 70th Session: Digital and Green Transformations for Sustainable Development in the Region | 18–19 April

The 70th session of the UN Economic Commission for Europe (UNECE) Commission hosted ministerial-level representatives from UNECE member states for a two-day event that tackled digital and green transformation for sustainable development in Europe, the circular economy, transport, energy, financing for climate change, and critical raw materials. 

The event allowed participants to exchange experiences and success stories, review progress on the Commission’s activities, and consider issues related to economic integration and cooperation among countries in the region. The session emphasised the need for a green transformation to address pressing challenges related to climate change, biodiversity loss, and environmental pressures, and highlighted the potential of digital technologies for economic development, policy implementation, and natural resource management.


Girls in ICT Day 2023 | 27 April

The International Girls in ICT Day, an annual event that promotes gender equality and diversity in the tech industry, was themed Digital Skills for Life. 

The global celebration was held in Zimbabwe as part of the Transform Africa Summit 2023, while other regions conducted their own events and celebrations

The event was instituted by ITU in 2011, and it is now celebrated worldwide. Governments, businesses, academic institutions, UN agencies, and NGOs support the event, providing girls with opportunities to learn about ICT, meet role models and mentors, and explore different career paths in the industry. 

To date, the event has hosted over 11,400 activities held in 171 countries, with more than 377,000 girls and young women participating.

What to watch for: Global digital policy events in May

10–12 May 2023 | Intergovernmental Group of Experts on E-commerce and the Digital Economy (Geneva and online) 

UNCTAD’s group of experts on e-commerce and the digital economy meets annually to discuss ways of supporting developing countries to engage in and benefit from the evolving digital economy and narrowing the digital divide. The meeting has two substantive agenda items: How to make data work for the 2030 Agenda for Sustainable Development and the Working Group on Measuring E-commerce and the Digital Economy.


19–21 May 2023 | G7 Hiroshima Summit 2023 (Hiroshima, Japan)

The leaders of the Group of Seven advanced economies, along with the presidents of the European Council and the European Commission, convene annually to discuss crucial global policy issues. During Japan’s presidency in 2023, Japanese Prime Minister Fumio Kishida identified several priorities for the summit, including the global economy, energy and food security, nuclear disarmament, economic security, climate change, global health, and development. AI tools will also be on the agenda.


24–26 May 2023 | 16th International CPDP conference (Brussels and online) 

The upcoming Computers, Privacy, and Data Protection (CPDP) conference, themed ‘Ideas That Drive Our Digital World’, will focus on emerging issues such as AI governance and ethics, safeguarding children’s rights in the algorithmic age, and developing a sustainable EU-US data transfer framework. Every year, the conference brings together experts from diverse fields, including academia, law, industry, and civil society, to foster discussion on privacy and data protection.


29–31 May 2023 | GLOBSEC 2023 Bratislava Forum (Bratislava, Slovakia)

The 18th edition of the Bratislava Forum will bring together high-level representatives from various sectors to tackle the challenges shaping the changing global landscape across four main areas: defence and security, geopolitics, democracy and resilience, and economy and business. The three-day forum will feature more than 100 speakers and over 40 sessions.


30 May–2 Jun 2023  | CyCon 2023 (Tallinn, Estonia) 

The NATO Cooperative Cyber Defence Centre of Excellence will host CyCon 2023, an annual conference that tackles pressing cybersecurity issues from legal, technological, strategic, and military perspectives. Themed ‘Meeting Reality’, this year’s event will bring together experts from government, military, and industry to address policy and legal frameworks, game-changing technologies, cyber conflict assumptions, the Russo-Ukrainian conflict, and AI use cases in cybersecurity.

The Digital Watch observatory maintains a live calendar of upcoming and past events.


DW Weekly #108 – 24 April 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 40

Dear all,

Policymakers have been particularly busy this week. We cover most of the newly proposed regulations below, together with a trend that’s picking up: calls for multilateral cooperation on AI regulation. Plus: Cybersecurity (and, to a certain extent, AI) is dominating this week’s discussions.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

ChatGPT to debut on multilateral agenda at G7 Summit

Governments’ concerns on how to regulate AI tools the likes of ChatGPT are taking on a multilateral dimension. 

Japan, this year’s chair of the G7 (which gathers seven of the world’s largest economies), will include generative AI on the G7 Summit in Hiroshima agenda, scheduled to take place 19–21 May. Although AI is not (yet?) on the official list of topics that will be discussed at the summit (Russia and China top the list of topics on the agenda), Japan’s Prime Minister Fumio Kishida confirmed the plan last week.

Japan has taken a keen interest in ChatGPT. Earlier in April, the CEO of OpenAI (the company behind ChatGPT), Sam Altman, made Japan the destination for his maiden overseas visit, where he met Prime Minister Kishida to discuss opening an office in the country. While authorities around the world have been reluctant to use ChatGPT for official business over privacy and security concerns, the Japanese city of Yokosuka became the first city to use ChatGPT in its municipal offices. Media reports announced that financial groups in Japan would also employ ChatGPT for internal use.

#Factbox: In March 2023, the Japanese ranked as one of the top three populations in the world to visit the ChatGPT website, according to data by SimilarWeb. The data indicates that Japan has the highest proportion of users after the USA and India.

 Chart, Plot, Map

That’s not to say Japan is not concerned with growing privacy and security concerns. On the home front, a government taskforce led by the country’s cabinet will tackle the pros and cons of generative AI. At the G7, Prime Minister Kishida said that AI discussions during the Hiroshima Summit would tackle the creation of international rules around the use of AI. 

In the lead-up to the summit, the trio of Japanese digital ministers hosting the G7 Digital and Tech Ministers’ Meeting this week will call for accelerated research into generative AI due to concerns over their impact on society. An action plan on AI governance is also in the pipeline. Digital Minister Taro Kono said he wants ‘the G7 to send out a unified message’ on the issue; Communications Minister Takeaki Matsumoto said Japan would like to lead multilateral efforts in advancing and regulating AI.

As a country with growing ambitions in generative AI tools, Japan is planning to show G7 countries and the rest of the world that it can lead the way for multilateral action that tackles ongoing concerns while leveraging AI’s potential. Fellow G7 countries Canada, France, Germany, Italy, the UK, and to a certain extent, the USA – all of whom are investigating OpenAI’s ChatGPT – will probably welcome any initiative that tackles privacy and security concerns with open arms.


Digital policy roundup (17–24 April)
// AI //

European lawmakers urge US President to hold global summit on AI

A group of EU parliamentarians are urging US President Joe Biden and European Commission chief Ursula von der Leyen to come together for a high-level global summit on AI, to set preliminary governing principles for developing, controlling, and deploying powerful AI. 

The statement, which was signed by 11 members of the EU Parliament, also calls on the Trade and Technology Council to facilitate an agenda, and on other countries to get involved in setting rules of the road for ‘very powerful AI’.

Why is this relevant? The lawmakers’ call adds to the growing momentum for international cooperation on AI. The statement calls on Biden and von der Leyen to lead a global effort, which is, practically speaking, in line with the work of the G7 group of countries (See also: Last week’s coverage on the regulatory efforts of China, the EU, and the USA over general purpose AI).

US Homeland Security creating AI task force

Meanwhile, the chief of the US Department of Homeland Security, Alejandro Mayorkas, announced a new task force to ‘lead in the responsible use of AI to secure the homeland’, while also defending ‘against the malicious use of this transformational technology’.


// CYBERSECURITY //

EU announces mega plan to strengthen cybersecurity capabilities 

The European Commission has announced a EUR1.1 billion (USD1.2 billion) plan to strengthen the EU’s capabilities to fend off attacks and support more coordination among member states. The proposed regulation, called the Cyber Solidarity Act, will introduce three things.

The first is a European Cyber Shield, comprised of Security Operations Centres (SOCs) across the EU, whose main task will be detecting cyber threats. The second is a Cyber Emergency Mechanism, to help ensure that EU member states are prepared for and ready to respond to major cyber attacks. The third is a Cybersecurity Incident Review Mechanism, to assess large-scale incidents after they occur. The commission also launched a Cybersecurity Skills Academy to address the ongoing skills shortage in the sector.

What’s next? It will be the usual legislative process: The European Parliament and the EU Council will each start debating the proposed text.


Was this newsletter forwarded to you, and you’d like to see more?


// CHIPS //

EU one step closer to passing Chips Act

The EU’s Council and Parliament have reached a political agreement over the new Chips Act, which aims to double the EU’s global output of chips by 20% by 2030.

The new framework includes the Chips for Europe Initiative, which is expected to mobilise €43 billion in public and private investments to entice chip makers to build factories in the EU. 

Why is it relevant? The EU is trying to compete with the USA in terms of subsidies. But the EU’s plan falls a little short compared to the US Chips for America Act’s $52 billion.

What’s next? The agreement needs to be endorsed and formally adopted by both institutions.

A circuit board with a prominently displayed chip.

// COMPETITION //

Google to allow app developers in UK to use alternative payment systems

The UK competition watchdog has announced that Google will allow app developers in the UK to offer different payment systems of their choice. The UK’s Competition and Markets Authority (CMA), which has provisionally agreed with Google’s proposed commitments, is also seeking the industry’s feedback (consultation open till 19 May) to make sure that the company’s commitments ‘are appropriate’ – in other words, whether these promises are enough to appease app developers.

What’s in it for app developers? More options, and therefore, more competition, Google explains. App developers will be able to break away from Google Play’s billing system, which currently accounts for a whopping 90% of the native app downloads, taking a cut of up to 30% from every in-app purchase.

What’s in it for Google? At face value, these commitments look like they’re all in favour of app developers. But in practice, Google’s cut will only be reduced to 26-27% – which is still a hefty service fee for app developers who will also be required to pay an additional fee for an alternative service. More than that, if the CMA agrees to these commitments, it will have to drop its investigation into Google’s alleged anti-competitive practices as part of the deal. 

What’s next? For now, the CMA has said that Google’s proposed commitments are sufficient to address the concerns it had at the start of the antitrust investigation. If nothing changes, the CMA will confirm the deal.


// TIKTOK //

Ireland adds itself to list of countries banning TikTok on official devices

One US Congress hearing and several country bans later, TikTok, owned by Chinese company ByteDance, is still regarded as risky for official devices.

This time, the advice is coming from the Irish National Cyber Security Centre. which has recommended that staff in government departments and state agencies refrain from using TikTok on official devices.

Why is this relevant? First, TikTok is under global scrutiny over its data practices. Second, security trumps any other economic interest for Ireland and possibly many other countries. In Irish head of government Leo Varadkar’s words, the company was ‘a big investor in Ireland and employs a lot of people’, but the government has to ‘take the advice of cybersecurity experts’.


// DIGITAL IDs //

India to allow private entities to use Aadhaar

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities for authenticating the identity of users. 

The Ministry of Electronics and IT is proposing a short amendment to the Aadhaar rules, through which any non-government entity can request clearance to link to it. Entities will need to explain why they want to use Aadhaar; the ministry will then assess whether the proposal succeeds in ‘promoting ease of living of residents and enabling better access to services for them’. A public consultation runs till 5 May.


// AUTONOMOUS VEHICLES //

Tesla emerges unscathed in Autopilot car crash trial

The Los Angeles Superior Court has dismissed a USD3 million (EUR2.7 million) claim for damages by a Tesla driver, Justine Hsu, over alleged defects in the car’s Autopilot system. Tesla successfully argued that the partially automated driving software was not a self-piloted system.


The week ahead (24–30 April)

24 April: Data protection is the theme of the next thematic deep dive in preparation for the  intergovernmental negotiations on the Global Digital Compact (GDC). These in-depth discussions are organised by Rwanda and Sweden as co-facilitators. Refer to the guiding questions before registering. (Learn more about the GDC process on the Digital Watch Observatory’s dedicated space).

24 April: The UN marks the International Day of Multilateralism and Diplomacy for Peace.

24–27 April: The UN World Data Forum 2023 will look at data and statistics, focusing on how to strengthen the use of data for sustainable development. Expect UN Secretary-General Antonio Guterres’ to address the forum and an outcome document charting the progress of discussions. It’s in hybrid format: in situ in Hangzhou, Zhejiang Province in China and online.

24–27 April: The annual RSA conference, hosted annually in San Francisco, USA, discusses issues ‘covering the entire spectrum of cybersecurity’. Expect keynotes from some of the world’s industry leaders. Livestreams will be available.

25–26 April: The European Cyber Agora has cybersecurity and cyber diplomacy at its core. Now that the EU has launched a plan to strengthen its cyber capabilities, there will be lots to talk about. The event is facilitated by Microsoft, the German Marshall Fund of the United States, and EU Cyber Direct, and it’s in hybrid format (Brussels and online).

26 April: The Global Forum On Cyber Expertise (GFCE) is holding its European meeting back to back with the Cyber Agora. Also in hybrid format (Brussels and online), the focus is cyber capacity building in Europe and Africa. 

26 April: WIPO celebrates World Intellectual Property Day. This year’s theme is Women and IP: Accelerating innovation and creativity.

26–27 April: POLITICO Live’s 6th Europe Tech Summit will talk policy and regulation. Focusing on Europe, this two-day hybrid event will bring top EU executives to discuss cyber threats, emerging tech, standards, and everything in between.

29–30 April: The G7 Digital and Tech Ministers’ Meeting in Takasaki, Gunma, will tackle AI tools such as ChatGPT. Also on the agenda: a framework for the so-called Data Free Flow with Trust (DFFT: a concept aimed at fostering cross-border data flows through harmonised approaches to promote openness and trust in data flows) championed by Japan in 2019. The G7 meeting will be hosted by Japan’s economy minister, internal affairs and communications minister, and minister for digital transformation.


steph
Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?

Numéro 78 de la lettre d’information Digital Watch – avril 2023

Baromètre

Les développements de la politique numérique qui ont fait la une de la presse mondiale

Le paysage de la politique numérique évolue quotidiennement. Voici donc les principaux développements du mois de mars. Vous trouverez plus de détails dans chaque mise à jour du Digital Watch Observatory.

Architecture mondiale de la gouvernance numérique

neutre

Les cofacilitateurs du Pacte mondial pour le numérique (PMN) ont organisé un examen thématique approfondi de l’inclusion numérique et de la connectivité afin de préparer les négociations intergouvernementales sur le PMN.


Développement durable

en progression

Le rapport 2023 de la CNUCED sur la technologie et l’innovation explore les avantages potentiels de l’innovation verte pour les pays en développement, notamment la stimulation de la croissance économique et le renforcement des capacités technologiques.

La Commission européenne a dévoilé le règlement pour une industrie « zéro net » visant à stimuler les technologies énergétiques propres dans l’UE, et à soutenir la transition vers un système énergétique plus durable et plus sûr. Elle a également adopté une nouvelle proposition visant à rendre la réparation des biens plus facile et moins coûteuse pour les consommateurs. Enfin, elle a présenté un nouvel acte visant à renforcer la résilience et la sécurité des chaînes d’approvisionnement en matières premières essentielles dans l’UE, en réduisant la dépendance à l’égard des importations en provenance de pays tiers.

L’alliance numérique entre l’Union européenne, l’Amérique latine et les Caraïbes a été créée. Elle se concentre sur la construction d’infrastructures numériques, ainsi que sur la promotion de la connectivité et de l’innovation.


Sécurité

en progression

Une série de documents ayant fait l’objet d’une fuite, les « Vulkan files », révèle les tactiques de cyberguerre de la Russie contre des adversaires tels que l’Ukraine, les États-Unis, le Royaume-Uni et la Nouvelle-Zélande. L’équipe ukrainienne d’intervention en cas d’urgence informatique (CERT-UA) a enregistré une recrudescence des cyberattaques contre l’Ukraine depuis le début de l’année.

Un nouveau rapport d’Europol tire la sonnette d’alarme quant à l’utilisation abusive potentielle de grands modèles linguistiques (tels que ChatGPT, Bard, etc.). Les forces de l’ordre internationales ont saisi le Genesis Market du dark web, populaire pour la vente de produits numériques aux cybercriminels.
La National Cyber Force (NCF) du Royaume-Uni a dévoilé les détails de son approche en matière d’opérations cybernétiques responsables.


Infrastructure

neutre

Des sociétés de télécommunications chinoises publiques investissent 500 millions de dollars dans la construction de leur propre réseau de câbles Internet sous-marins à fibre optique, afin de concurrencer un projet similaire soutenu par les États-Unis, dans le cadre de la guerre technologique à laquelle se livrent les deux pays.

L’ICANN, l’organisation responsable de la gestion du registre d’adresses de l’Internet, se prépare à lancer une nouvelle série de gTLD.

Commerce électronique et économie de l’Internet

neutre

Un groupe de haut niveau a été créé pour fournir à la Commission européenne des conseils et une expertise concernant la mise en œuvre et l’application de la loi sur les marchés numériques (DMA).

Le Brésil va imposer de nouvelles mesures fiscales pour lutter contre la concurrence déloyale des géants asiatiques du commerce électronique et limiter les avantages fiscaux accordés aux entreprises.


Les droits numériques

en progression

L’Organisation des États ibéro-américains (OEI) a adopté la Charte ibéro-américaine des principes et des droits dans l’environnement numérique afin de garantir l’inclusion dans les sociétés de l’information par l’exercice des droits de l’Homme fondamentaux.

Un organisme de surveillance britannique a infligé une amende de 16 millions de dollars à TikTok pour avoir collecté des données sur des enfants sans le consentement de leurs parents. Une ONG portugaise a poursuivi TikTok pour avoir permis à des enfants de moins de 13 ans de s’inscrire sans autorisation parentale et sans protection adéquate.


La politique de contenu

en progression

Google ne bloquera plus les contenus d’information au Canada, ce qu’il faisait temporairement en réponse à un projet de réglementation qui obligerait les plateformes Internet à rémunérer les entreprises de médias canadiennes pour la mise à disposition de contenus d’information. Dans le même temps, Meta a annoncé qu’il mettrait fin à l’accès aux contenus d’information pour les utilisateurs canadiens si les règles étaient introduites sous leur forme actuelle.

Les premiers ministres de la Moldavie, de la République tchèque, de la Slovaquie, de l’Estonie, de la Lettonie, de la Lituanie, de la Pologne et de l’Ukraine ont signé une lettre ouverte appelant les entreprises technologiques à contribuer à la lutte contre la diffusion de fausses informations.


Juridiction et les questions juridiques

neutre

Un juge américain a décidé que le programme de prêt de livres numériques de l’Internet Archive violait les droits d’auteur, ce qui pourrait constituer un précédent juridique pour les futures bibliothèques en ligne.

Le Bureau de l’information du Conseil d’État chinois (SCIO) a publié un livre blanc récapitulant les lois et réglementations du pays en matière d’Internet.
Les autorités de régulation britanniques ont revu leur position sur l’acquisition d’Activision Blizzard par Microsoft, alors qu’elles craignaient auparavant que cette opération ne nuise à la concurrence dans le secteur des jeux pour consoles.


Les nouvelles technologies

en progression

L’Italie a imposé une limitation d’utilisation (temporaire) à ChatGPT, le chatbot basé sur l’IA.

L’UNESCO a appelé les gouvernements à mettre en œuvre immédiatement sa recommandation sur l’éthique de l’intelligence artificielle.

Les législateurs français ont adopté un projet de loi visant à utiliser une technologie de surveillance basée sur l’IA pour assurer la sécurité des Jeux olympiques de Paris en 2024.

Le Japon a annoncé de nouvelles restrictions sur les exportations d’équipements de fabrication de puces vers les pays qui présentent des risques pour la sécurité.

En bref

Mettre en place des barrières de sécurité pour les données

TikTok est devenu la cible de critiques de la part de plusieurs pays en raison de problèmes liés à la confidentialité des données et à la sécurité nationale. Le cœur du problème semble résider dans la propriété de TikTok par la société chinoise ByteDance, car la loi chinoise de 2017 sur le renseignement national exige des entreprises qu’elles contribuent au travail des services de renseignement de l’État, ce qui suscite des craintes quant au transfert des données des utilisateurs vers la Chine. En outre, certains craignent que le gouvernement chinois n’utilise la plateforme à des fins d’espionnage ou à d’autres intentions malveillantes. Plusieurs pays ont poursuivi TikTok en justice pour avoir exposé des enfants à des contenus préjudiciables et à d’autres pratiques susceptibles de porter atteinte à leur vie privée.

TikTok a tenté d’apaiser les craintes de deux leaders mondiaux en matière de réglementation technologique : les États-Unis et l’Union européenne. L’entreprise s’est engagée à transférer les données américaines aux États-Unis dans le cadre du projet Texas. La sécurité des données européennes serait assurée par le projet Clover, qui comprend des passerelles de sécurité qui détermineront l’accès aux données et les transferts de données en dehors de l’Europe, un audit externe des processus de données par une société de sécurité européenne tierce et de nouvelles technologies de renforcement de la protection de la vie privée.

Le mois dernier, la Belgique, la Norvège, les Pays-Bas, le Royaume-Uni, la France, la Nouvelle-Zélande et l’Australie ont publié des lignes directrices interdisant l’installation et l’utilisation de TikTok sur les appareils gouvernementaux. L’interdiction envisagée par le Japon est plus générale : les législateurs proposeront d’interdire les plateformes de médias sociaux si elles sont utilisées pour des campagnes de désinformation.

Le témoignage très médiatisé du PDG de TikTok, Shou Chew, devant le Congrès américain n’a pas permis à l’entreprise d’obtenir une grande faveur juridique aux États-Unis : les législateurs ne sont toujours pas convaincus que TikTok n’est pas dépendante de la Chine. Il semble que les États-Unis vont adopter une loi (très probablement la loi RESTRICT) visant à interdire l’application. La bataille risque d’être rude : les critiques soutiennent que l’interdiction de TikTok pourrait violer les droits du premier amendement et créerait un dangereux précédent en limitant le droit à la liberté d’expression en ligne. Une autre option est la cession, par laquelle ByteDance vendrait les activités américaines de TikTok à une entité détenue par les États-Unis.

Qu’en pense la Chine ?

Début mars, la Chine a violemment critiqué les États-Unis : le porte-parole du ministère chinois des Affaires étrangères, Mao Ning, a déclaré : « Nous demandons aux institutions et aux personnes américaines concernées de se débarrasser de leur parti pris idéologique et de leur mentalité de guerre froide à somme nulle, de considérer la Chine et les relations sino-américaines sous un angle objectif et rationnel, de cesser de présenter la Chine comme une menace en citant des informations erronées, de cesser de dénigrer le parti communiste chinois et de cesser d’essayer de marquer des points politiques aux dépens des relations sino-américaines. » M. Ning a ajouté : « Comment les États-Unis, première superpuissance mondiale, peuvent-ils être aussi peu sûrs d’eux-mêmes pour craindre à ce point l’application préférée d’un jeune ? »

M. Ning a également critiqué l’UE au sujet de la restriction imposée à TikTok, notant que l’Union devrait « respecter l’économie de marché et la concurrence loyale, cesser d’exagérer et d’abuser du concept de sécurité nationale, et fournir un environnement commercial ouvert, équitable, transparent et non discriminatoire à toutes les entreprises ». Des remarques similaires ont été répétées à la mi-mars par le porte-parole du ministère des Affaires étrangères, Wang Wenbin.

Alors que les informations selon lesquelles les États-Unis exigeraient une cession ont été confirmées par un représentant de TikTok, M. Wenbin a également fait remarquer que « les États-Unis n’ont pas encore démontré, preuves à l’appui, que TikTok menace leur sécurité nationale » et qu’« ils devraient cesser de répandre des informations erronées sur la sécurité des données ».

Le ministère chinois du Commerce a tracé une ligne dans le sable : le Gouvernement chinois s’opposerait à la vente ou à la cession de TikTok conformément aux règles d’exportation de la Chine pour 2020. Ces remarques ont été faites le jour même où M. Chew a témoigné devant le Congrès, ce qui jette un doute supplémentaire sur l’indépendance de TikTok par rapport au gouvernement chinois.

La Chine a également fait des « démarches solennelles » auprès de l’Australie au sujet de l’interdiction australienne de TikTok sur les appareils gouvernementaux.

Quelle perspective d’avenir pour TikTok ?

Etre  optimiste et espérer que l’application ne soit pas interdite ne pourrait pas suffire. Aux États-Unis, le sort de TikTok sera probablement décidé par les tribunaux. Il y a de fortes chances que les autres pays mentionnés dans cet article fassent de même.

Le modèle GPT-4 : repousser les limites, soulever des inquiétudes

Le monde de l’IA a connu une multitude de développements passionnants en mars. Si l’arrivée de GPT-4 promet de porter le traitement du langage naturel et la reconnaissance d’images à de nouveaux sommets, les préoccupations soulevées par l’initiative « Pause Giant AI Experiments », une lettre ouverte sur les implications éthiques des expériences d’IA à grande échelle, ne peuvent être ignorées.

OpenAI a annoncé le développement de GPT-4, un grand modèle multimodal qui peut traiter à la fois du texte et des images en tant qu’entrées. Cette annonce marque une étape importante dans l’évolution des modèles GPT, les modèles GPT-3 et GPT-3.5 étant limités au traitement du texte. La capacité de GPT-4 à traiter des modalités multiples élargira les possibilités de traitement du langage naturel et de reconnaissance d’images, ce qui ouvrira de nouvelles perspectives pour les applications d’intelligence artificielle. Cette évolution ne manquera pas de susciter beaucoup d’intérêt et d’impatience, car la communauté de l’IA attend des précisions sur les capacités de GPT-4 et son impact potentiel dans ce domaine.

Avec la capacité de traiter 32 000 tokens de texte, contrairement à GPT-3, qui était limité à 4 000 tokens, GPT-4 offre des possibilités accrues pour la création de contenus longs, l’analyse de documents et les conversations approfondies (la tokenisation est un moyen de séparer un morceau de texte en unités plus petites appelées « tokens » ; ici, les tokens peuvent être des mots, des caractères ou des sous-mots). Le dernier modèle, GPT-4, est capable de traiter et de générer de longs passages de texte. Il a obtenu des résultats impressionnants lors d’une série de tests de certification académique et professionnelle, tels que le LSAT, le GRE, le SAT, les examens d’AP et une simulation d’examen du barreau.

Ce qui a suscité une grande controverse parmi le public, c’est le fait que le nombre de paramètres du modèle et les informations sur les données d’entraînement n’ont pas été rendus publics, que le document de recherche publié par les développeurs n’offre pas beaucoup d’informations, et que même les fonctionnalités annoncées ne sont pas encore disponibles. En outre, l’accès à GPT-4 est limité à ceux qui s’inscrivent sur la liste d’attente ou s’abonnent au service premium ChatGPT Plus.

Ce buzz a apparemment été la goutte d’eau qui a fait déborder le vase pour beaucoup. Peu de temps après, un groupe de chercheurs en IA, dont Elon Musk et Steve Wozniak, a signé l’initiative « Pause Giant AI Experiments », une lettre ouverte exhortant les laboratoires d’IA à freiner. La lettre appelle à une interdiction mondiale de la formation de systèmes d’IA plus puissants que GPT-4. Elle s’inquiète du risque que l’IA devienne une « menace pour l’existence de la civilisation humaine ». Elle souligne que l’IA pourrait être utilisée pour créer des armes autonomes et « surpasser le contrôle humain ». La lettre poursuit en suggérant que l’IA pourrait éventuellement devenir si puissante qu’elle pourrait créer une superintelligence qui surpasserait les êtres humains.

Les signataires ne sont pas les seuls à avoir des craintes. Stephen Hawking a prévenu que l’IA pourrait éventuellement « sonner le glas de la race humaine ». Même Bill Gates a déclaré que certains risques existaient. Toutefois, M. Gates a également affirmé (ce qui n’est pas surprenant, puisque OpenAI est soutenue par Microsoft) qu’une pause dans le développement de l’IA ne résoudrait pas les problèmes et qu’une telle pause serait difficile à mettre en œuvre.

La lettre ouverte a relancé le débat au sein de la communauté scientifique et technologique sur l’importance d’un développement responsable de l’IA, notamment en répondant aux préoccupations concernant les préjugés, la transparence, les suppressions d’emplois, la protection de la vie privée et le risque de militarisation de l’IA. Les pouvoirs publics et les entreprises technologiques ont un rôle important à jouer dans la réglementation de l’IA, notamment en définissant des lignes directrices éthiques, en investissant dans la recherche sur la sécurité et en assurant la formation des personnes travaillant dans ce domaine.

Cet article vous est présenté par le AI and Data Lab de Diplo. Ce laboratoire suit de près l’évolution du journal de l’IA, mène des expériences telles que « l’IA peut-elle battre l’intuition humaine ? », et crée des applications telles que ce rapport.

À Diplo, nous discutons également de l’impact de l’IA sur notre avenir dans le cadre d’une série de webinaires. Rejoignez-nous le 2 mai pour discuter de l’éthique et de la gouvernance de l’IA d’un point de vue non occidental.

Quoi de neuf concernant les négociations sur la cybersécurité ?

Le groupe de travail à composition non limitée (GTCNL) des Nations unies sur la cybersécurité a tenu sa quatrième session de fond. Nous vous en présentons les grandes lignes ci-dessous.

Menaces existantes et potentielles. Les risques liés à la chaîne d’approvisionnement, l’utilisation d’instruments alimentés par l’IA, les rançongiciels et les retombées des cyberattaques russes sur l’Ukraine, qui ont affecté les infrastructures en Europe, ont été mentionnés, parmi d’autres menaces, au cours de la session. Le Kenya a proposé de créer un répertoire des Nations unies sur les menaces communes. L’UE a proposé de formuler une position commune sur les rançongiciels, et la République tchèque a proposé une discussion plus détaillée sur le comportement responsable des États dans le développement de nouvelles technologies.

Règles, normes et principes. La Russie et la Syrie ont fait valoir que les règles non contraignantes existantes ne réglementent pas efficacement l’utilisation des TIC pour prévenir les conflits interétatiques et ont proposé de rédiger un traité juridiquement contraignant. D’autres pays (comme le Sri Lanka et le Canada) ont critiqué cette proposition. L’Égypte a fait valoir que l’élaboration de nouvelles normes n’entrait pas en conflit avec le cadre normatif existant.

Droit international (DI). La plupart des États ont réaffirmé l’applicabilité du droit international à l’espace cybernétique, mais certains (Cuba, Inde, Jordanie, Nicaragua, Pakistan, Russie, Syrie…) ont fait valoir que l’applicabilité automatique était prématurée et ont soutenu une proposition de traité juridiquement contraignant. La Russie a présenté un concept actualisé de la « Convention des Nations unies sur la garantie de la sécurité internationale de l’information », avec la Biélorussie et le Nicaragua comme co-parrains. La plupart des États ne sont pas favorables à l’élaboration d’un nouvel instrument juridiquement contraignant.

En ce qui concerne le droit international humanitaire (DIH), l’UE et la Suisse ont affirmé son applicabilité ; toutefois, la Russie et le Belarus ont refusé l’application automatique du DIH dans le cyberespace, invoquant l’absence de consensus sur ce qui constitue une attaque armée.

Les principes de la Charte des Nations unies et le respect des obligations des États ont également été discutés pour la première fois, nous semble-t-il. La plupart des États ont également soutenu la proposition canado-suisse d’inclure ces sujets, le règlement pacifique des différends, le DIH et la responsabilité de l’État dans le programme de travail du GTCNL en 2023. 

Mesures de confiance (CBM). Certaines délégations ont appelé à une participation plus active des organisations régionales afin qu’elles partagent leurs expériences au sein du GTCNL. Il y a également eu un large accord pour établir un répertoire des points de contact (POC), bien que les États aient continué à discuter de qui devrait être nommé comme POC (agences ou personnes particulières), quelles fonctions ils devraient avoir, etc.

Renforcement des capacités. Certains pays ont souligné que le programme d’action visant à promouvoir un comportement responsable de l’État sera le principal instrument pour structurer les initiatives de renforcement des capacités. L’Iran a souligné que l’UIT pourrait être un forum permanent de coordination à cet égard. Cuba a soutenu cette idée.

Les États ont également discuté du contenu de la proposition indienne sur le portail mondial de coopération en matière de cybersécurité. Singapour et les Pays-Bas ont toutefois rappelé les portails de coopération existants, tels que les cyberportails de l’UNIDIR et du GFCE.

Un dialogue institutionnel régulier. Les partisans du Programme d’action ont souligné la complémentarité du GTCNL et du Programme d’action. Certains États ont évoqué la possibilité de discuter de normes cybernétiques supplémentaires dans le cadre du Programme d’action, si nécessaire, et ont demandé que le GTCNL consacre une session au Programme d’action. La Chine a fait remarquer que les États qui ont soutenu la résolution sur le programme d’action sapent le statut du GTCNL. La Russie, la Biélorussie et le Nicaragua ont proposé un organe permanent doté de mécanismes d’examen comme alternative au Programme d’action. Certains États ont toutefois prévenu que des pistes de discussion parallèles nécessiteraient davantage de ressources.

Prochaines étapes. Le président prévoit d’organiser une réunion virtuelle informelle à la fin du mois d’avril pour que les répertoires régionaux de POC puissent partager leurs expériences. Le deuxième document officieux révisé sur le répertoire de POC est attendu après. Une réunion entre les sessions sur l’IL et le dialogue institutionnel régulier se tiendra vers la fin du mois de mai. Le projet de rapport annuel de situation zéro est également attendu pour le début du mois de juin. Les États examineront le rapport annuel de situation lors de la 5e session de fond, qui se tiendra du 24 au 28 juillet 2023.Lisez notre rapport détaillé de la session.

Genève

Mise à jour des politiques de la Genève internationale

Forum du SMSI 2023 | 13–17 mars

L’édition 2023 du Forum du Sommet mondial sur la société de l’information (SMSI) a comporté plus de 250 sessions explorant un large éventail de questions liées aux TIC pour le développement et à la mise en œuvre des lignes d’action du SMSI convenues en 2003. Le forum comprenait également un volet de haut niveau qui soulignait, entre autres, l’urgence de faire progresser l’accès à l’Internet, sa disponibilité et son caractère abordable en tant que moteurs de la numérisation, ainsi que l’importance d’encourager la confiance dans les technologies numériques. L’événement a été accueilli par l’UIT, et organisé conjointement avec l’UNESCO, la CNUCED et le Programme des Nations unies pour le développement (PNUD). D’autres résultats du forum seront publiés par l’UIT sur la page dédiée. 

Le dernier jour du forum, le Diplo et la Geneva Internet Platform (GIP), ainsi que les missions permanentes de Djibouti, du Kenya et de la Namibie, ont organisé une session sur le renforcement des voix de l’Afrique dans les processus numériques mondiaux. Cette session a souligné la nécessité d’une coopération renforcée – à l’intérieur et à l’extérieur de l’Afrique – pour mettre en œuvre les stratégies de transformation numérique du continent et veiller à ce que les intérêts africains soient correctement représentés et pris en compte dans les processus internationaux de gouvernance numérique. Renforcer et développer les capacités individuelles et institutionnelles, coordonner les positions communes sur les questions d’intérêt mutuel, tirer parti de l’expertise des acteurs de divers groupes de parties prenantes, et assurer une communication efficace et efficiente entre les missions et les capitales sont quelques-unes des mesures suggérées pour garantir que les voix africaines sont pleinement et significativement représentées sur la scène internationale. Lisez les conclusions de la session.

La 1re session du GEG sur les LAWS (Lethal Autonomous Weapons Systems) | 6–10 mars

Le Groupe d’experts gouvernementaux (GEG) sur les technologies émergentes dans le domaine des systèmes d’armes autonomes létaux ( LAWS – Lethal Autonomous Weapons Systems) a tenu sa première session en mars. Au cours de cette réunion de cinq jours, le groupe s’est concentré sur les aspects suivants des technologies émergentes dans le domaine des LAWS : la caractérisation des LAWS (définitions et portée) ; l’application du DIH (interdictions et réglementations éventuelles) ; l’interaction homme-machine, le contrôle humain significatif, le jugement humain et les considérations éthiques ; la responsabilité et l’obligation de rendre des comptes ; les examens juridiques ; l’atténuation des risques et les mesures de confiance.

La 26e session de la Commission de la science et de la technologie au service du développement (CSTD) s’est tenue à Bruxelles | 27–31 mars

La 26e session de la CSTD a abordé (a) la technologie et l’innovation pour une production plus propre, plus productive et plus compétitive, et (b) la garantie de l’eau potable et de l’assainissement pour tous : une solution par la science, la technologie et l’innovation. 

Lors de la cérémonie d’ouverture, Rebeca Grynspan, secrétaire générale de la CNUCED, a fait une déclaration dans laquelle elle a insisté sur le fait que l’humanité se trouvait à un tournant décisif, entre défis mondiaux et possibilités technologiques. La secrétaire générale a souligné le déclin inquiétant du progrès humain global au cours des deux dernières années, qui met en péril nos objectifs d’avenir durable. La résolution de ces problèmes économiques, sociaux et environnementaux importants nécessite une action mondiale coordonnée.

La session a également été l’occasion de présenter le rapport 2023 sur la technologie et l’innovation, qui identifie les opportunités cruciales et les solutions à base de particules permettant aux pays en développement d’utiliser l’innovation pour une croissance durable.

À venir

À surveiller :
événements mondiaux en matière de politique numérique en avril

11–21 avril, Comité ad hoc sur la cybercriminalité (Vienne, Autriche)

La coalition numérique Partner2Connect (P2C) est une alliance multipartite visant à mobiliser des ressources, des partenariats et des engagements pour parvenir à une connectivité universelle et significative. Créée en 2021 par l’UIT, le projet de feuille de route numérique du Secrétaire général des Nations Unies et de l’Envoyé pour la technologie, la coalition a franchi des étapes importantes en 2022. La réunion annuelle, qui se tiendra au siège de l’UIT à Genève, en examinera les succès et les défis à ce jour, ainsi que les projets visant à connecter les personnes non desservies dans le monde entier.

13 avril, La GDC en profondeur : gouvernance de l’Internet (en ligne)

Les cofacilitateurs du Pacte mondial pour le numérique (PMN) organisent une série d’approfondissements thématiques afin de préparer les négociations intergouvernementales sur le PMN. La discussion du 13 avril portera sur la gouvernance de l’Internet. Au fur et à mesure que ces discussions détaillées se dérouleront, la GIP examinera la manière dont les thèmes principaux ont été abordés dans différents documents politiques clés. Visitez notre page dédiée dans le Digital Watch Observatory pour en savoir plus sur la façon dont les questions liées à la gouvernance de l’Internet ont été abordées dans ces documents.

24–27 avril, Forum mondial de l’ONU sur les données (Hangzhou, Chine)

Le Forum mondial des données des Nations unies fait progresser l’innovation en matière de données, encourage la coopération, génère un soutien politique et financier pour les initiatives en matière de données, et facilite les progrès vers l’amélioration des données pour le développement durable. Le forum se concentre sur les domaines thématiques suivants : innovation et partenariats pour des données de meilleure qualité et plus inclusives ; maximisation de l’utilisation et de la valeur des données pour une meilleure prise de décision ; construction de la confiance et de l’éthique dans les données ; tendances émergentes et partenariats pour développer l’écosystème des données.

24–27 avril, RSA (San Francisco, USA)

La conférence RSA 2023 se tiendra sur le thème « Plus forts ensemble », et proposera des séminaires, des ateliers, des formations, une exposition, des discours d’ouverture et des activités interactives.

29–30 avril, réunion des ministres du Numérique et de la Technologie du G7 2023 (Hangzhou, Chine)

La réunion des ministres du Numérique et de la Technologie du G7 abordera diverses questions liées à la numérisation, y compris les préoccupations émergentes et les changements dans l’environnement mondial des affaires numériques. Les ministres discuteront d’un cadre pour rendre opérationnelle la libre circulation des données en toute confiance (DFFT), en coopération avec le G7 et d’autres pays, tout en respectant les réglementations nationales, en améliorant la transparence, en garantissant l’interopérabilité et en promouvant les partenariats public-privé. L’opérationnalisation du DFFT devrait aider les PME et d’autres acteurs à utiliser en toute sécurité des données provenant du monde entier, ce qui leur permettra de développer des activités transfrontalières.

DW Weekly #107 – 17 April 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 58

Dear readers,

As authorities grapple with ChatGTP and similar AI tools, the first regulatory initiatives are now in sight. OpenAI, the company behind ChatGPT, is in for a troubled period with new investigations.

Meanwhile, tech companies are under pressure over many other issues – from hosting content, which is deemed a national security concern, to new fines and probes for (alleged) anticompetitive practices. 

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Governments vs ChatGPT: A whirlwind of regulations in sight

ChatGPT, the AI-powered tool that allows you to chat and get answers to almost any question (we’re not guaranteeing it’s the right answer), has taken the world by storm. 

Things are progressing fast. In the space of just two days, we learned that Google is creating a new AI-based search engine (unrelated to chatbot Bard which it launched last month), Elon Musk has created a new company, X.AI, which is probably linked to his effort to build an everything app called X, and China’s e-commerce giant Alibaba launched its own ChatGPT-style AI model

Now, governments around the world are starting to take notice of the potential of these tools. They are launching investigations into ChatGPT (in April, we covered Italy’s temporary ban, and the investigations that privacy regulators in France, Ireland, Switzerland, Germany, and Canada are considering) and are now ramping up efforts to introduce new rules. 

The new developments from last week are coming from the three main AI hotspots: the EU, China, and the USA.

1. The latest from Europe

Known for its tough rules on data protection and digital services/markets, the EU is inching close to seeing its AI Act – proposed by the European Commission two years ago – materialise. While the European Council has adopted its stance, the draft is currently being debated by the European Parliament (it will then need to be negotiated between the three EU institutions in the so-called trilogues). Progress is slow, but sure.

As policymakers debate the text, a group of experts argue that general-purpose AI systems carry serious risks and must not be exempt under the new EU legislation. Under the proposed rules, certain accountability requirements apply only to high-risk systems. The experts argue that software such as ChatGPT needs to be assessed for its potential to cause harm and must also have commensurate safety measures in place. The rules must also look at the entire life cycle of a product.

What does this mean? If the rules are updated to consider, for instance, the development phase of a product, this means that we won’t just wait to look at whether an AI model was trained on copyrighted material, or on private data, after the fact. Rather, a product is audited before its launch. This is quite similar to what China is proposing (see below) and what the USA will be looking into soon (details further down).

The draft rules on general-purpose AI are still up for debate at the European Parliament, so things might still change. 

Meanwhile, prompted by Italy’s ban and Spain’s request to look into privacy concerns surrounding ChatGPT, the EU’s data protection watchdog has launched a task force to coordinate the work of European data protection authorities.

There’s little information about the European Data Protection Board’s (EDPB) new task force other than a decision to tackle ChatGPT-related action during the EDPB’s next plenary (scheduled for 26 April).  

2. The latest from China

China has also taken a no-nonsense approach to regulating tech companies in recent years. The Cyberspace Administration of China (CAC) has wasted no time in proposing new measures for regulating generative AI services, which are open for public comments until 10 May. 

The rules. Providers need to ensure that content reflects the country’s core values, and shouldn’t include anything that might disrupt the economic and social order. No discrimination, false information, or intellectual property infringements are allowed. Tools must undergo a security assessment before being launched.

Who they apply to. The onus of responsibility falls on organisations and individuals that use these tools to generate text, images, and sounds for public consumption. They are also responsible for making sure that pre-trained data is lawfully sourced.

The industry is also calling for prudence. The Payment & Clearing Association of China has advised its industry members to avoid uploading confidential information to ChatGPT and similar AI tools, over risks of cross-border data leaks.

3. The latest from the USA

Well-known for its laissez-faire approach to regulating technological innovation, the USA is taking (baby) steps towards new AI rules.

The Biden administration is studying potential accountability measures for AI systems, such as ChatGPT. In its request for public feedback (which runs until 10 June), the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is looking into new policies for AI audits and assessments that tackle bias, discrimination, data protection, privacy, and transparency. 

What this exercise covers. Everything and anything that falls under the definition of ‘AI system’ and ‘automated systems’, including technology that can ‘generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments’. 

What’s next? There’s already a growing interest in regulating AI governance tools, the NTIA writes, so this exercise will help it advise the White House on how to develop an ecosystem of accountability rules. 

Separately, there are also indications that Senate Democrats are working on new legislation spearheaded by Majority Leader Chuck Schumer. A draft is in circulation, but don’t expect anything tangible soon unless the initiative secures bipartisan support.

And in a bid to avoid facing intellectual property infringements, music company Universal Music Group has ordered streaming platforms, including Spotify and Apple, to block AI services from scraping melodies and lyrics from copyrighted songs, according to the Financial Times. The company fears that AI systems are being trained on the artists’ intellectual property. IPR lawsuits are looming.


Digital policy roundup (10–17 April)
// OPENAI //

Italy tells OpenAI: Comply or face ban

The Italian Data Protection Authority (GDPD), which was the first to open an investigation into OpenAI’s ChatGPT, has provided the company with a list of demands it must comply with by 30 April, before the authority may lift its temporary ban

The Italian government wants OpenAI to let people know how personal data will be used to train the tool, and to request consent from users (or based on legitimate interest) before processing their personal data. 

But a more challenging request is for the company to implement an age-gating system for underaged users and to introduce measures for identifying accounts used by children (the latter must to be in place by 30 September). 

Why is this relevant? The age-verification request coincides with efforts by the EU to improve how platforms verify their users’ age. The new eID proposal, for instance, will introduce a much-needed framework of certification and interoperability for age-verification measures. The way OpenAI tackles this issue will be a testbed for new measures. 

More European countries launch probes into ChatGPT

France’s data protection regulator (CNIL) has opened a formal investigation on ChatGPT after receiving five complaints. These include complaints from member of parliament Eric Bothorel, lawyer Zoé Vilain, and developer David Libeau

Germany’s data protection conference (DSK), the body of independent German data protection supervisory authorities of the federal and state governments, has opened an investigation into ChatGPT. The announcement was made by the North Rhine-Westphalia watchdog (the DSK itself has been mum about it).

Spain’s data protection agency (AEPD) announced an independent investigation in parallel to the work being carried out by the EDPD.


// CYBERSECURITY //

Classified Pentagon documents leaked on social media

The Pentagon is investigating the leak of over 50 classified documents that turned up on the social media platform Discord. Jack Teixeira, a 21-year-old US air national guardsman, suspected of leaking the documents, was charged in Boston, USA, on Friday under the Espionage Act.

Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, prompting a debate on the responsibility of social media companies in cases involving national security.

Jack Teixeira
Campaigns 59

Russia accuses Pentagon, NATO of masterminding Ukraine attacks against Russia

The press office of Russia’s Federal Security Service (FSB) has accused the Pentagon and NATO countries of being behind massive cyberattacks from Ukraine against Russia’s critical infrastructure.

The FSB claims that over 5,000 hacker attacks on Russian critical infrastructure have been recorded since the beginning of 2022 and that cyberattack units of Western countries are using Ukrainian territory to carry out these attacks.

USA-Russia cyber impasse on hold?

Meanwhile, Russia’s official news agency TASS reported that the USA has maintained contact with Russia on cybersecurity issues. 

US Department of State’s Ambassador-at-Large for Cyberspace and Digital Policy Nathaniel Fick told TASS that channels of communication remain open. ‘Yes, I’m across the table from Russian counterparts with some frequency, and with Chinese as well,’ he said.


// ANTITRUST //

South Korea fines Google for abusing global market dominance

Google is in trouble in South Korea after the country’s Fair Trade Commission (FTC) fined the company USD 31.9 million (EUR 29.2 million) for unfair business practices.

The FTC found that Meta entered into agreements with Korean mobile game companies between June 2016 and April 2018, which banned them from releasing their content on One Store, a local marketplace which rivals Meta’s marketplace.

Indian start-ups seek court order to block Google’s in-app billing system

Google could also be in trouble in India after a group of start-ups, led by the Alliance of Digital India Foundation (ADIF), asked an Indian court to suspend the company’s new in-app billing fee system, until the antitrust authority probes Google’s failure to comply with an October 2022 order. A new antitrust directive, issued in October, allowed the use of third-party billing services for in-app payments.

Tweet from the ADIF.
The Alliance of Digital India Foundation (ADIF) comments on the South Korean decision to fine Google. Source: @adif_India

// DATA FLOWS //

European MEPs vote against proposal to greenlight westward data transfers

The proposal to allow personal data transfers of EU citizens with the USA under the new EU-US Data Privacy Framework has been rejected by the European Parliament

Parliamentarians expressed concerns about the adequacy of US data protection laws and called for enhanced safeguards to protect the personal data of European citizens. The proposed framework does not provide sufficient safeguards, according to the members of parliament.

While the European Parliament’s position is not legally binding, it adds pressure on the European Commission to reconsider its approach to data transfers with the US and prioritise more robust data protection measures.


// METAVERSE //

Meta urged to keep kids off of the metaverse 

Dozens of advocacy organisations and children’s safety experts are calling on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds. In a letter addressed to Meta CEO Mark Zuckerberg, the groups and experts expressed concerns about the potential risks of harassment and privacy violations for young users in the metaverse app. 

The experts also said that given Meta’s track record of addressing damaging design after harm has occurred, they are requesting that Meta not allow kids into the metaverse until it can ensure their safety and privacy with robust measures in place.

A child wearing a virtual reality headset.
Campaigns 60

Meta says metaverse can transform education

Was it a coincidence that two days before, Meta’s Global Affairs Chief Nick Clegg penned an article lauding metaverse’s potential for education

In any case, Clegg explains how the metaverse can enable access to educational resources and opportunities for learners across geographical and economic barriers and how virtual learning classrooms, simulations, and collaborative environments can enhance learning outcomes.

Clegg also acknowledges a need for responsible and inclusive design of metaverse educational experiences, with a focus on privacy, safety, and accessibility. 


The week ahead (17–23 April)

16–19 April: The American Registry for Internet Numbers (ARIN) 51st Public Policy and Members Meeting in Florida is discussing internet number resources, regional policy development, and the overall advancement of the internet.

21–23 April: The closing session of the European Commission citizens’ panel on the metaverse and other virtual worlds will ask participants to turn their ideas into concrete recommendations. They’ll be asked to suggest policy measures to help shape the evolution of virtual worlds.


#ReadingCorner
Tech Diplomacy front cover

Tech diplomacy in the Bay Area

In 2018, Diplo’s techplomacy mapping exercise explored how different diplomatic representations interact with the San Francisco Bay Area ecosystem. Since then, a lot has changed, which prompted Diplo to update its research. The 2023 report, Tech Diplomacy Practice in the San Francisco Bay Area’, launched last week, makes some important observations.

Tech diplomacy has matured, moving from informal engagements to more structured, formal engagements. Government representations in the San Francisco Bay Area and the structures within tech companies that act as partners to the conversation have become both more diverse and complex, adding challenges to reaching one another. San Francisco is seeing more and more collaborations between international diplomatic representations and tech companies to achieve common goals. Read the full text.


steph
Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?


DW Weekly #106 – 10 April 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 69

Dear readers,

Making generative AI safe is still the talk of the tech world, while TikTok continues to run into hurdles, and the US-China chips war keeps getting more heated. The UK has revealed details of its cyber operations, and law enforcement won a significant battle in cybercrime when it took down the dark web marketplace Genesis Market. We round off the digital policy updates of this issue with the EU initiative to shape its vision of virtual worlds.

Andrijana and the Digital Watch team


// HIGHLIGHT //

Geneva SDOs chime in: Standards are the answer to safe AI development

The safe development of AI seems to be on everyone’s minds these days. If you’ve been reading any tech-related news, you’re probably aware of Pause giant AI experiments: An open letter. (Sidebar: if you haven’t read it, tech experts, including Elon Musk, Steve Wozniak, and Yuval Harari are asking tech giants for a six-month pause in the training of AI systems more powerful than GPT-4, until we can ensure that their effects will be positive and their risks manageable.). The letter has encountered criticism; for example, Bill Gates believes that pausing AI development won’t solve the challenges and would be difficult to enforce. Ex-Google CEO Eric Schmidt commented that such a pause ‘will simply benefit China’.

So far, the letter hasn’t achieved much in practice as companies clearly continue competing in AI – just in the past week, Meta released an AI model that can identify items within images, Microsoft rolled out an AI image generator in Edge, Alibaba invited businesses to test its chatbot Squirrel AI, and Qualcomm and Nvidia sparred for the top spot in AI chip efficiency tests.

Three key Geneva-based international standards-developing organisations (SDOs) chimed in. In their reply to the open letter, the International Electrotechnical Commission (IEC), the International Organization for Standardization (ISO), and the International Telecommunication Union (ITU) highlight the role that standards can have in safe AI development. Standards underpin regulatory frameworks, provide ‘appropriate guardrails for responsible, safe, and trustworthy AI development’, and ‘can help mitigate the risks associated with AI systems and ensure that they are aligned with societal values and expectations’. The three SDOs invited interested stakeholders to join the work of developing consensus-based international standards and encourage their adoption.

digital standards 0 1

But international standards take time to develop. and their uptake by the industry is very much a voluntary matter (there are some exceptions when regulations may require compliance with specific standards). So some countries have started taking matters into their own hands to at least alleviate data privacy concerns: After Italy’s much-talked-about (temporary) ban of ChatGPT, privacy regulators in France, Ireland, and Switzerland reached out to Italian ones to find out more about the basis of the ban, and Germany is considering a ban as well. 

The company behind ChatGPT, Open AI, has since offered remedies in Italy, having committed to enhancing transparency in using personal data and existing mechanisms to exercise data subject rights and safeguards for children. 

But the company may also have to offer remedial measures in Canada, where the Office of the Privacy Commissioner of Canada will investigate a complaint alleging that OpenAI collected, used and disclosed personal information without consent. 

More countries are keeping a close eye on generative AI (rhyme not intended). The UK’s Information Commissioner’s Office (ICO) has stressed that organisations developing or using generative AI must approach data protection by design and default and outlined eight important questions that developers and users should consider. Switzerland’s Federal Data Protection and Information Commissioner had a similar message, advising users to examine the purposes for which text or images they upload are used  and reminding companies that employ AI to observe data protection legislation.

Will more countries follow with their own warnings? Almost certainly. 


Digital policy roundup (3–10 April)
// TIKTOK //

Bans, lawsuits, fines and investigations

The latest in the slew of bans on TikTok comes from Australia, which opted to ban the app on government devices, similar to its Five Eyes intelligence allies and multiple European countries. Albania is also contemplating such a ban.

TikTok has been blasted for allowing children under 13 to use its app, contradicting its terms of service. In the last week, it has been fined for this reason in the UK and sued in Portugal. The UK fine includes collecting children’s data without parental consent. 

Another lawsuit has been filed in Portugal against TikTok for ‘misleading commercial practices’ and ‘opaque privacy policies’.

Vietnam, TikTok’s sixth biggest market, is also set to open an investigation into the platform because of the harmful content and false information that its algorithm can suggest. If it’s found guilty, strict fines will be imposed.

TikTok seems to be in the direst straits in the USA, where a general ban on the app is being contemplated. A top TikTok lawyer reportedly laughed when an employee asked about the ban. Will he have the last laugh?

Caricature shows workers building a fence around a large monitor with a TikTok logo on the screen.

// CHIPS //

China reacts to Japan’s chip export controls, urges WTO to monitor chip export restrictions

Japan’s recently announced restrictions on exports of 23 types of semiconductor manufacturing equipment caused backlash from China, which described them as ‘essentially harmful acts against China under the coercion of a certain country’.

Japan’s move is far from surprising; rumours that it will join the USA (the ‘certain country’ China seems to be referencing) and the Netherlands to institute chip export rules started swirling in January. The USA was the first country in this triad to implement the rules, in October

China took this fight to the WTO in December when it initiated a trade dispute procedure against US chip export control measures, arguing that these measures ‘threatened the stability of the global industry supply chains’. After the Netherlands and Japan seemingly followed the US move in March, China has now urged the WTO to monitor export chip restrictions, and the trio’s deal, arguing that it violates the fairness and transparency principles of the WTO. China also asked the trio to acknowledge if they have reached a private deal on chip exports.

China has made ‘serious démarches to the Japanese side at various levels to express our strong discontent and grave concerns’’ according to foreign ministry spokeswoman Mao Ning, and has called on Japan to modify the restrictions.


// CYBERSECURITY //

UK reveals details of its cyber operations

The UK National Cyber Force (NCF) disclosed how it conducts ‘responsible cyber operations to counter state threats, support military operations, and disrupt terrorists and serious criminals’. 

The document Responsible cyber power in practice highlights that the NCF’s cyber operations are accountable, precise, and calibrated, i.e. conducted in a legal and ethical manner, timed and targeted with precision, with their intended impact carefully assessed.

The NCF’s approach to adversarial cyber operations is based on the ‘doctrine of cognitive effect’ – using techniques that have the potential to sow distrust, decrease morale, and weaken the targets’ ability to plan and conduct their activities effectively, with the goal of changing their behaviour. 

The NCF highlighted that its operations are covert, and the intent is that adversaries do not realise that the effects they are experiencing are the result of a cyber operation, which is why it was not forthcoming with details. The NCF did state that it has protected military deployments overseas; disrupted terrorist groups; countered sophisticated, stealthy and continuous cyber threats; countered state disinformation campaigns; reduced the threat of external interference in democratic elections; and removed child sexual abuse material from public spaces online.


Operation Cookie Monster seizes criminal marketplace Genesis Market

A joint international law enforcement operation has seized the Genesis Market, a dark web market which offered access to over 80 million account access credentials such as usernames and passwords for email, bank accounts, and social media. The operation, led by the US FBI and the Dutch National Police, involved 17 countries and was codenamed Operation Cookie Monster.

Screenshot of a website with the heading 'This website has been seized' and 'Operation Cookie Monster', noting 'Genesis Market's domains have been seized by the FBI pursuant to a seizure warrant issued by the United States District Court for the Eastern District of Wisconsin. These seizures were possible because of international law enforcement and private sector coordination involving the partners listed below.' Contact details and partners are listed.
Image source: Dutch National Police

// METAVERSE //

The EU seeks feedback on its vision for virtual worlds 

The European Commission is presenting an initiative on virtual worlds – metaverses — entitled ‘An EU initiative on virtual worlds: A head start towards the next technological transition’. Its goal is to develop a vision for virtual worlds based on respect for digital rights and EU laws and values. The European Commission will seek feedback from stakeholders and the public through citizen panels and targeted workshops.

The week ahead (10–16 April)

11–13 April: The Digital Rights and Inclusion Forum will be held in Nairobi, Kenya, under the theme ‘Building the sustainable internet for all’. The forum will highlight Africa’s challenges and provide solutions for a sustainable online future for everyone.

11–21 April: The fifth session of the Ad Hoc Committee on Cybercrime will consider the preamble, provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions of the future convention on cybercrime.

12–13 April: The ECOM21 23 will discuss business operations, technology, and regulatory frameworks in Riga, Latvia.

13 April: The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 13 April discussion will cover internet governance. As these in-depth discussions unfold, the GIP Digital Watch will examine how the GDC’s focus topics have been tackled in different key policy documents. Visit our dedicated GDC page on the Digital Watch observatory to read more about how issues related to internet governance have been covered in such documents.

16–19 April: The American Registry for Internet Numbers (ARIN) 51 Public Policy and Members Meeting will discuss internet number resources, regional policy development, and the overall advancement of the internet in Tampa, Florida, USA.

Diplo and the Geneva Internet Platform (GIP) are organising an event on 13 April entitled Technology and Diplomacy: The Rise of Multilateralism in the Bay Area in San Francisco, California, where we will officially launch our ‘Tech Diplomacy Practice in the San Francisco Bay Area’ report. If you’re based in San Francisco, register and join us!


#ReadingCorner
Can sharks eat the internet

Can sharks eat the internet?

Well, no, not really. But the headline gets us all thinking about the extreme vulnerability of the undersea infrastructure on which the digital world relies, Diplo’s director Dr Jovan Kurbalija writes.


Anja blog 2560x400px

Can AI beat human intuition?

Check for yourself! What does your intuition tell you: did AI write text A or text B in this blog post


1200 280 max

Latest edition of Digital Watch newsletter

The freshly published April issue of our monthly newsletter on digital policy includes: a look at TikTok coming under fire from several countries due to data privacy and national security concerns, a look at how ChatGPT-4 model is pushing the boundaries of AI development, and a summary of OEWG 2021-2025 continued to discuss cybersecurity at its fourth substantive session.


Andrijana20picture
Andrijana Gavrilovic
Editor, Digital Watch, and Head of Diplomatic and Policy Reporting, DiploFoundation

Was this newsletter forwarded to you, and you’d like to see more?

Digital Watch newsletter – Issue 78 – April 2023

Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from March. There’s more detail in each update on the Digital Watch Observatory.        

Global digital architecture

same relevance

The Global Digital Compact (GDC) co-facilitators organised a thematic deep dive on digital inclusion and connectivity to prepare for intergovernmental negotiations on the GDC.

Sustainable development

increasing relevance

UNCTAD’s Technology and Innovation Report 2023 explores the potential benefits of green innovation for developing nations, including driving economic growth and enhancing technological capabilities.

The European Commission unveiled the Net-Zero Industry Act to boost clean energy technologies in the EU and support a transition to a more sustainable and secure energy system. It also adopted a new proposal aiming to make repair of goods easier and cheaper for consumers. It also introduced a new act to enhance the resilience and security of critical raw materials supply chains in the EU, reducing reliance on imports from third countries.

The European Union–Latin America and Caribbean Digital Alliance was established, focusing on building digital infrastructures and promoting connectivity and innovation.

Security

increasing relevance

A trove of leaked documents dubbed the Vulkan files, have revealed Russia’s cyberwarfare tactics against adversaries such as Ukraine, the USA, the UK, and New Zealand. Ukraine’s computer emergency response team (CERT-UA) has recorded a spike in cyberattacks on Ukraine since the start of the year. 

A new report from Europol sounds an alarm about the potential misuse of large language models (the likes of ChatGPT, Bard, etc.). International law enforcement agencies seized the dark web’s Genesis Market, popular for selling digital products to cybercriminals.

The UK National Cyber Force (NCF) disclosed details about its approach to responsible cyber operations.

E-commerce and internet economy

increasing relevance

A high-level group has been established to provide the European Commission with advice and expertise related to the implementation and enforcement of the Digital Markets Act (DMA).

Brazil will impose new tax measures to tackle unfair competition from Asian e-commerce giants and limit tax benefits for companies.

Infrastructure

same relevance

State-owned Chinese telecom companies are investing $500 million to build their own undersea fibre-optic internet cable network to compete with a similar US-backed project amid the ongoing tech war between the two countries.

ICANN, the organisation responsible for managing the internet’s address book, is preparing to launch a new gTLD round.

Digital rights

increasing relevance

The Organisation of Ibero-American States (OEI) adopted the Ibero-American Charter of Principles and Rights in Digital Environments to guarantee inclusion in information societies via the exercise of fundamental human rights.

A UK watchdog has fined TikTok $16 million for collecting children’s data without parental consent. A Portuguese NGO sued TikTok for allowing children under 13 to join without parental permission and adequate protection.

Content policy

increasing relevance

Google will no longer block news content in Canada, which it did temporarily in response to draft rules that would require internet platforms to compensate Canadian media companies for making news content available. At the same time, Meta has announced that it will end access to news content for Canadian users if the rules are introduced in their current form.

The prime ministers of Moldova, the Czech Republic, Slovakia, Estonia, Latvia, Lithuania, Poland, and Ukraine have signed an open letter which calls on tech firms to help stop the spread of false information.

Jurisdiction and legal issues

increasing relevance

A US judge has ruled that the Internet Archive’s digital book-lending programme violates copyrights, potentially setting a legal precedent for future online libraries.

China’s State Council Information Office (SCIO) has released a white paper recapping the country’s laws and regulations on the internet.

UK regulators have revised their stance on Microsoft’s acquisition of Activision Blizzard, having previously raised concerns that it would harm competition in console gaming.

New technologies

increasing relevance

Italy imposed a (temporary) limitation on ChatGPT, the AI-based chatbot.

UNESCO has called upon governments to immediately implement its Recommendation on the Ethics of Artificial Intelligence

French lawmakers have passed a bill to use AI-powered surveillance technology to secure the 2024 Paris Olympics.

Japan has announced new restrictions on exports of chipmaking equipment to countries that pose security risks.


Putting up data security fences

TikTok has come under fire from several countries due to data privacy and national security concerns. The core of the issue seems to lie in TikTok’s ownership by the Chinese company ByteDance, as China’s 2017 National Intelligence law requires companies to assist with state intelligence work, raising fears about the transfer of user data to China. Additionally, there are concerns that the Chinese government could use the platform for espionage or other malicious purposes. Several countries have sued TikTok for exposing children to harmful content and other practices that put their privacy at risk, as well.

TikTok has tried to alleviate fears of two global leaders on tech regulations – the USA and the EU. The company has committed to moving US data to the USA under Project Texas. European data security would be achieved by Project Clover, which includes security gateways that will determine data access and data transfers outside of Europe, external auditing of data processes by a third-party European security company, and new privacy-enhancing technologies.

During the past month, Belgium, Norway, the Netherlands, the UK, France, New Zealand, and Australia issued guidelines against installing and using TikTok on government devices. A ban contemplated by Japan is more general: Lawmakers will propose banning social media platforms if used for disinformation campaigns. 

The much-publicised testimony of TikTok CEO Shou Chew before the US Congress didn’t garner the company much legal favour in the USA: The lawmakers are still not convinced that TikTok is not beholden to China. It seems the USA will be proceeding with legislation (most likely the RESTRICT Act) to ban the app. That might be an uphill battle: Critics argue that banning TikTok may violate First Amendment rights and would set a dangerous precedent of curtailing the right to free expression online. Another option is divestiture, whereby ByteDance would sell the US operations of TikTok to a US-owned entity. 

TikTok 1
Campaigns 84

Chew testifies before the US Congress. Source: CNN

What does China have to say?

At the beginning of March, China fiercely criticised the USA: Chinese Foreign Ministry spokesperson Mao Ning stated ‘We demand the relevant US institutions and individuals discard their ideological bias and zero-sum Cold War mentality, view China and China-U.S. relations in an objective and rational light, stop framing China as a threat by quoting disinformation, stop denigrating the Communist Party of China and stop trying to score political points at the expense of China-USA relations.’ Ning added, ‘How unsure of itself can the US, the world’s top superpower, be to fear a young person’s favourite app to such a degree?’

Ning also criticised the EU over its TikTok restriction, noting that the bloc should ‘Respect the market economy and fair competition, stop overstretching and abusing the concept of national security and provide an open, fair, transparent and non-discriminatory business environment for all companies.’ Similar remarks were repeated mid-March by Foreign Ministry spokesperson Wang Wenbin.

As reports of the USA demanding divestiture were confirmed by a TikTok representative, Wenbin also noted that ‘The USA has yet to prove with evidence that TikTok threatens its national security’ and that ‘it should stop spreading disinformation about data security.’

China’s Ministry of Commerce drew a line in the sand: the Chinese government would oppose the sale or divestiture of TikTok per China’s 2020 export rules. These remarks were made the same day Chew testified before Congress, casting further doubt on TikTok’s independence from the Chinese government.

China has also ‘made solemn démarches’ to Australia over the Australian ban on TikTok on government devices. 

What’s next for TikTok? 

More reassurances in the hope that the app is not banned from general use. The reality is that this might not be enough. In the USA, TikTok’s fate will likely ultimately be decided by the courts. There’s a very good chance that other countries mentioned in this article would follow suit.


The GPT-4 model: Pushing boundaries, raising concerns

The world of AI witnessed a flurry of exciting developments in March. While the arrival of GPT-4 promises to take natural language processing and image recognition to new heights, the concerns raised by the ‘Pause Giant AI Experiments: An Open Letter initiative’ about the ethical implications of large-scale AI experiments cannot be ignored. 

OpenAI has announced the development of GPT-4, a large multimodal model that can process both text and images as inputs. This announcement marks a significant milestone in the evolution of GPT models, as GPT-3 and GPT-3.5 were limited to processing text only. The ability of GPT-4 to process multiple modalities will expand the capabilities of natural language processing and image recognition, opening up new possibilities for AI applications. This development is sure to generate a lot of interest and anticipation as the AI community awaits further details about GPT-4’s capabilities and its potential impact on the field.

With the ability to process 32,000 tokens of text, unlike GPT-3, which was limited to 4,000 tokens, GPT-4 offers expanded possibilities for long-form content creation, document analysis, and extended conversations (Tokenisation is a way of separating a piece of text into smaller units called tokens; here, tokens can be either words, characters, or subwords). The latest GPT-4 model has the capacity to process and generate extended passages of text. It has achieved impressive results on a range of academic and professional certification tests, such as the LSAT, GRE, SATs, AP exams, and a simulated law school bar exam.

What caused great controversy among the public is that the number of the model’s parameters and the training data information have not been made public, the research paper that the developers published does not offer much information, and even the features that were announced are not yet available. Additionally, access to GPT-4 is restricted to those who sign up for the waitlist or subscribe to the premium ChatGPT Plus service.

This buzz was apparently the last straw for many. Not long after, a group of AI researchers, including Elon Musk and Steve Wozniak, signed  the ‘Pause Giant AI Experiments: An Open Letter initiative’ urging AI labs to pump the brakes. The letter calls for a global ban on the training of AI systems more powerful than GPT-4. It expresses concern about the potential for AI to become a ‘threat to the existence of human civilisation’. It points out that AI could be used to create autonomous weapons and ‘out-think and out-manoeuvre human control.’ The letter goes on to suggest that AI could eventually become so powerful that it could create a superintelligence that would outsmart human beings.  

The signatories are not alone in their fears. For example, Stephen Hawking warned that AI could eventually ‘spell the end of the human race’. Even Bill Gates said that certain risks exist. However, Gates also argued (not surprisingly, since OpenAI is Microsoft-backed), that pausing AI development would not solve challenges and that such a pause would be difficult to enforce.

The open letter has reignited debate among the scientific and tech community about the importance of responsible development of AI, including addressing concerns about bias, transparency, job displacement, privacy, and the potential for AI to be weaponised. Government officials and tech companies have a significant role to play in regulating AI, such as setting ethical guidelines, investing in safety research, and providing education for those working in the field.

This article has been brought to you by Diplo’s AI and Data Lab. The lab keeps an eye on developments in the AI diary, runs experiments like Can AI beat human intuition, and creates applications such as this reporting one

At Diplo, we’re also discussing AI’s impact on our future through a series of webinars. Join us on 2 May for the latest one in the series as we discuss AI ethics and governance from a non-Western perspective.

1920 1080 max
Campaigns 85

What’s new with cybersecurity negotiations?

The UN Open-ended Working Group (OEWG) on cybersecurity held its fourth substantive session. We share the highlights below.

Existing and potential threats. Supply chain risks, the use of AI-powered instruments, ransomware, and the spill-over effects of Russian cyberattacks on Ukraine, which have affected the infrastructure in Europe, have been mentioned, among other threats, during the session. Kenya proposed establishing a UN repository of common threats. The EU proposed formulating a common position on ransomware, and the Czech Republic proposed a more detailed discussion on responsible state behaviour in developing new technologies.

Rules, norms, and principles. Russia and Syria argued that existing non-binding rules don’t effectively regulate the use of ICTs to prevent inter-state conflicts and proposed drafting a legally binding treaty. Other countries (e.g. Sri Lanka and Canada) criticised this proposal. Egypt argued that the development of new norms doesn’t conflict with the existing normative framework.

International law (IL). Most states reaffirmed IL’s applicability to cyberspace, but some (Cuba, India, Jordan, Nicaragua, Pakistan, Russia, Syria) argued that automatic applicability is premature and supported a proposal for a legally binding treaty. Russia submitted an updated concept of the ‘Convention of the UN on Ensuring International Information Security’ with Belarus and Nicaragua as co-sponsors. Most states don’t support drafting a new legally binding instrument.

Speaking of international humanitarian law (IHL), the EU and Switzerland affirmed its applicability; however Russia and Belarus refused the automatic application of IHL in cyberspace, citing a lack of consensus on what constitutes an armed attack. 

The UN Charter principles and enforcement of state obligations have been also discussed for the first time, we believe. Most states also supported the Canadian-Swiss proposal to include these topics, peaceful settlement of disputes, IHL, and state responsibility in the OEWG’s programme of work in 2023. 

Confidence building measures (CBMs). Some delegations have called for more active participation of regional organisations to share their experiences in the OEWG. There was also a broad agreement to establish a Points of contact (POC) directory, though states continued discussing who should be nominated as a PoC (agencies or particular persons), what functions they should have, etc.

Capacity building. Some countries highlighted that the Programme of Action (PoA) to advance responsible state behaviour will be the primary instrument to structure capacity-building initiatives. Iran stressed that ITU could be a permanent forum for coordination in this regard. Cuba supported this idea.

States also discussed the content of the Indian proposal on the Global Cyber Security Cooperation Portal. However, Singapore and the Netherlands recalled the existing cooperation portals, such as the UNIDIR and GFCE cyber portals.

Regular institutional dialogue. Supporters of the PoA emphasised the complementarity of the OEWG and the PoA. Some states mentioned the possibility of discussing additional cyber norms under the PoA, if needed, and called for a dedicated OEWG session on the PoA. China noted that states who supported the PoA resolution are undermining the status of the OEWG. Russia, Belarus, and Nicaragua proposed a permanent body with review mechanisms as an alternative to the PoA. Some states, though, warned that parallel tracks of discussions would require more resources.

Next steps. The chair plans to host an informal virtual meeting in late April for regional PoC directories to share their experiences. The second revised non-paper on the PoC directory is expected after. An inter-sessional meeting on IL and regular institutional dialogue will be held around the end of May. The Annual Progress Report zero draft is also expected in early June. States will discuss the APR at the 5th substantive session on 24–28 July 2023.

Read our detailed report from the session.

OEWG 4th session blog banner 1200x675px
Campaigns 86

Policy updates from International Geneva

WSIS Forum 2023 | 13–17 March 

The 2023 edition of the World Summit on the Information Society (WSIS) Forum featured over 250 sessions exploring a wide range of issues related to ICT for development and the implementation of the WSIS Action Lines agreed upon back in 2003. The forum also included a high-level track that highlighted, among other issues, the urgency of advancing internet access, availability and affordability as driving forces of digitalisation, and the importance of fostering trust in digital technologies. The event was hosted by ITU and co-organised together with the UNESCO, UNCTAD, and the UN Development Programme (UNDP). More forum outcomes will be published by ITU on the dedicated page. 

Diplo and the Geneva Internet Platform (GIP), together with the Permanent Missions of Djibouti, Kenya, and Namibia, hosted a session on Strengthening Africa’s voices in global digital processes on the last day of the forum. This session stressed the need for strengthened cooperation – within and beyond Africa – to implement the continent’s digital transformation strategies and ensure that African interests are adequately represented and reflected in international digital governance processes. Building and developing individual and institutional capacities, coordinating common positions on issues of mutual interest, leveraging the expertise of actors from various stakeholder groups, and ensuring effective and efficient communication between missions and capitals were some of the suggested steps towards ensuring that African voices are fully and meaningfully represented on the international stage. Read the session takeaways. 

Diplo WSIS 2023 session 16.08.18
Campaigns 87

Director Dr Jovan Kurbalija moderates the Diplo WSIS session in Geneva. Source: Diplo

The 1st session of the 2023 GGE on LAWS | 6–10 March

The 2023 CCW Group of Governmental Experts on emerging technologies in the area of Lethal Autonomous Weapons Systems (GGE on LAWS) held its first session in March. During the five-day meeting, the group focused on the following dimensions of emerging technologies in the area of LAWS: the characterisation of LAWS – definitions and scope; the application of IHL: possible prohibitions and regulations; human-machine interaction, meaningful human control, human judgement, and ethical considerations; responsibility and accountability; legal reviews; risk mitigation, and confidence-building measures. 

The 26th session of the Commission on Science and Technology for Development (CSTD) | 27–31 March

The 26th session of the CSTD tackled (a) technology and innovation for cleaner and more productive and competitive production and (b) ensuring safe water and sanitation for all: a solution by science, technology and innovation. 

At the opening ceremony, Rebeca Grynspan, Secretary-General of UNCTAD, delivered a statement emphasising the critical juncture humanity finds itself in as a moment of global challenges and technological possibilities. The Secretary-General highlighted the worrisome decline in overall human progress over the past two years, jeopardising our sustainable future goals. Addressing these significant economic, social, and environmental issues requires coordinated global action.

The session also featured the presentation of the 2023 Technology and Innovation Report, which identifies crucial opportunities and particle solutions for developing countries to utilise innovation for sustainable growth. 


What to watch for: Global digital policy events in April

Fifth and Sixth Sessions of the Ad Hoc Committee on Cybercrime
The fifth session of the Ad Hoc Committee on Cybercrime will touch upon the new negotiating consolidated document on the preamble, provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions of the convention. The secretariat has also prepared a separate document on the mechanisms of implementation to facilitate the deliberations of member states on the implementation of mechanisms for the convention. Lastly, it is expected that states will further negotiate on the first negotiating consolidated document from the fourth session. Read more.
Fifth and Sixth Sessions of the Ad Hoc Committee on Cybercrime
The fifth session of the Ad Hoc Committee on Cybercrime will touch upon the new negotiating consolidated document on the preamble, provisions on international cooperation, preventive measures, technical assistance, the mechanism of implementation, and the final provisions of the convention. The secretariat has also prepared a separate document on the mechanisms of implementation to facilitate the deliberations of member states on the implementation of mechanisms for the convention. Lastly, it is expected that states will further negotiate on the first negotiating consolidated document from the fourth session. Read more.
GDC deep-dive: Internet governance
The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 13 April discussion will cover internet governance. As these in-depth discussions unfold, the GIP will examine how their focus topics have been tackled in different key policy documents. Visit our dedicated page on the Digital Watch observatory to read more about how issues related to internet governance have been covered in such documents. Read more.
GDC deep-dive: Internet governance
The Global Digital Compact (GDC) co-facilitators are organising a series of thematic deep dives to prepare for intergovernmental negotiations on the GDC. The 13 April discussion will cover internet governance. As these in-depth discussions unfold, the GIP will examine how their focus topics have been tackled in different key policy documents. Visit our dedicated page on the Digital Watch observatory to read more about how issues related to internet governance have been covered in such documents. Read more.
United Nations World Data Forum 2023
The annual UN World Data Forum advances data innovation, encourages cooperation, generates political and financial backing for data initiatives, and facilitates progress towards enhanced data for sustainable development. The forum focus on the following thematic areas: Innovation and partnerships for better and more inclusive data; Maximising the use and value of data for better decision-making; Building trust and ethics in data; Emerging trends and partnerships to develop the data ecosystem. Read more.
United Nations World Data Forum 2023
The annual UN World Data Forum advances data innovation, encourages cooperation, generates political and financial backing for data initiatives, and facilitates progress towards enhanced data for sustainable development. The forum focus on the following thematic areas: Innovation and partnerships for better and more inclusive data; Maximising the use and value of data for better decision-making; Building trust and ethics in data; Emerging trends and partnerships to develop the data ecosystem. Read more.
RSA Conference 2023
The RSA Conference 2023 will take place on 24 – 27 April in San Francisco, the USA. The conference will be held under the theme ‘Stronger Together’, and will feature seminars, workshops, training, an exhibition, keynote addresses, and interactive activities. Read more.
RSA Conference 2023
The RSA Conference 2023 will take place on 24 – 27 April in San Francisco, the USA. The conference will be held under the theme ‘Stronger Together’, and will feature seminars, workshops, training, an exhibition, keynote addresses, and interactive activities. Read more.
G7 Digital and Tech Ministers’ Meeting 2023
The G7 Digital and Tech Ministers’ Meeting will address various digitalisation issues, including emerging concerns and changes in the global environment around digital affairs. The ministers will discuss a framework for operationalising the Data Free Flow with Trust (DFFT) in cooperation with the G7 and other countries while respecting national regulations, enhancing transparency, ensuring interoperability, and promoting public-private partnerships. The operationalisation of DFFT is expected to help SMEs and others to safely and securely use data from around the world, enabling them to develop cross-border businesses. Read more.
G7 Digital and Tech Ministers’ Meeting 2023
The G7 Digital and Tech Ministers’ Meeting will address various digitalisation issues, including emerging concerns and changes in the global environment around digital affairs. The ministers will discuss a framework for operationalising the Data Free Flow with Trust (DFFT) in cooperation with the G7 and other countries while respecting national regulations, enhancing transparency, ensuring interoperability, and promoting public-private partnerships. The operationalisation of DFFT is expected to help SMEs and others to safely and securely use data from around the world, enabling them to develop cross-border businesses. Read more.

The Digital Watch observatory maintains a live calendar of upcoming and past events.


DW Weekly #105 – 3 April 2023

DigWatch Weekly 100th issue 1920x1080px generic
Campaigns 93

Dear all,

All eyes will be on China, as it prepares to receive France’s Emmanuel Macron, Spain’s Pedro Sánchez, and the EU’s Ursula von der Leyen. We’ll keep an eye out for anything that could impact the digital policy landscape.

Meanwhile, Italy has imposed a temporary limit on access to ChatGPT (our analysis for this week), as content policy shares the spotlight with cybersecurity updates – notably, the revelations from the leaked Vulcan Files.

Let’s get started.

Stephanie and the Digital Watch team


// HIGHLIGHT //

Italy’s rage against the machine

Italy has become the first western country to impose a (temporary) limitation on ChatGPT, the AI-based chatbot platform developed by OpenAI, which has caused a sensation around the world.

The Italian Data Protection Authority (GDPD) listed four reasons:

  1. Users’ personal data breached: A data breach affecting ChatGPT users’ conversations and information on payments by subscribers to the service was reported on 20 March. OpenAI attributed this to a bug.
  2. Unlawful data collection: ChatGPT uses massive amounts of personal data to train its algorithms without having a legal basis to collect and process it.
  3. Inaccurate results: ChatGPT spews out inaccuracies and cannot be relied upon as a source of truth.
  4. Inappropriate for children: ChatGPT lacks an age verification mechanism, which exposes children to receiving responses that are ‘absolutely inappropriate to their age and awareness’.

How is access being blocked? In compliance with the Italian data protection authority’s order, OpenAI geoblocked access to ChatGPT to anyone residing in Italy. It also issued refunds to Italian residents who have upgraded to the Plus software upgrade. 

However, OpenAI’s API – the interface that allows other applications to interact with it – and Microsoft’s Bing – which also uses ChatGPT – are still accessible in Italy.

ChatGPT disabled for users in Italy

Dear ChatGPT customer,

We regret to inform you that we have disabled ChatGPT for users in Italy at the request of the Italian Garante. 

We are issuing refunds to all users in Italy who purchased a ChatGPT Plus subscription in March. We are also temporarily pausing subscription renewals in Italy so that users won’t be charged while ChatGPT is suspended.

We are committed to protecting people’s privacy and we believe we offer ChatGPT in compliance with GDPR and other privacy laws. We will engage with the Garante with the goal of restoring your access as soon as possible.

Many of you have told us that you find ChatGPT helpful for everyday tasks, and we look forward to making it available again soon.

If you have any questions or concerns regarding ChatGPT or the refund process, we have prepared a list of Frequently Asked Questions to address them.

– The OpenAl Support Team

What’s the response from users? The reactions have been mixed. Some users think this is shortsighted since there are other ways in which ChatGPT can still be accessed. (One of them is using a VPN, a secure connection that allows users to connect to the internet by masking their actual location. So if an Italian user chooses a different location through its VPN, OpenAI won’t realise that the user is indeed connecting from Italy. This won’t work for users wanting to upgrade: OpenAI has blocked upgrades involving credit cards issued to Italian users or accounts linked to an Italian phone number.)

Others think this is a precursor to what other countries will do. They think that if a company is processing data in breach of the rules (in Europe, that’s the GDPR), then it might be required to revise its practices before it continues offering its services. 

How temporary is ‘temporary’? What happens next depends on two things: the outcomes of the investigation into the recent breach, and whether (and how) OpenAI will reform any of its practices. Let’s revisit the list of grievances:

Personal data breach: Nothing can reverse what happened, but OpenAI can install tighter security controls to prevent other incidents. Once authorities are convinced that stricter precautions have been taken, there’s no reason not to lift its ban on this issue alone.

Unlawful data collection: This is primarily a legal issue. So let’s say an Italian court confirms that the way the data was collected was illegal (it would take a great deal of effort to establish this, as OpenAI’s machine is proprietary, i.e. not open to the public to inspect). OpenAI is not an Italian company, so the court will have limited jurisdiction over it. The most it can do is impose a hefty fine and turn the ban into a semi-permanent one. Will it have achieved its aim? No, as Italian users will still be able to interact with the application. Will it create momentum for other governments to consider guardrails or other forms of regulation? Definitely. 

Inaccurate data: This issue is the most complex. If by inaccurate we mean incorrect information, the software is improving significantly with every new iteration. Compare GPT-4 with its predecessor, v.3.5 (or even the early GPT-4 model with the same version at launch date). But if we mean biased or partial data, the evolution of AI-based software shows us how inherent this issue is to its foundations.    

Inappropriate for children: New standards in age verification are a work in progress, especially in the EU. These won’t come any time soon, but when they do, it will be an important step in really limiting what underaged users have access to. It will make it much harder for kids to access platforms which aren’t meant for them. As for the appropriateness of content,  authorities are working on strategies to reel in bigger fish (TikTok, Facebook, Instagram) in the bigger internet pond.


Digital policy roundup (27 March–3 April)
// AI //

UNESCO urges governments to implement ethical AI framework

UNESCO has called upon governments to implement its Recommendation on the Ethics of Artificial Intelligence immediately. 

Director-General Audrey Azoulay said the ethical issues raised by AI technology, especially discrimination, gender inequality, fake news, and human rights breaches – are concerning. 

‘Industry self-regulation is clearly not sufficient to avoid these ethical harms, which is why the recommendation provides the tools to ensure that AI developments abide by the rule of law, avoiding harm, and ensuring that when harm is done, accountability and redressal mechanisms are at hand for those affected.’


Stop right there! Three blows for ChatGPT

The first is that Elon Musk and a group of AI experts and industry leaders are calling for a six-month moratorium on the development of systems more powerful than OpenAI’s newly released GPT-4 due to potential risks to society. Over 50,000 people have signed the open letter.

The second is that the Center for Artificial Intelligence and Digital Policy has filed a complaint with the US Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, due to concerns about the software’s ‘biased, [and] deceptive’ nature, which is ‘a risk to privacy and public safety’.

The third is a new report from Europol, which is sounding an alarm about the potential misuse of large language models (the likes of ChatGPT, Bard, etc.). For instance, the software can be misused by criminals who can generate convincingly authentic content at a higher level for their phishing attempts, which gives criminals an edge. The agency is recommending that law enforcement agencies get ready.

(It’s actually four blows if we count Italy’s temporary ban).


Caricature of an AI judge holding a legal scroll.

Indian judge uses ChatGPT to decide bail in murder case

A murder case in India made headlines last week when the Punjab and Haryana High Court used ChatGPT to respond to an application for bail in an ongoing case of attempted murder. Justice Anoop Chitkara asked the AI tool: ‘What is the jurisprudence on bail when the assailants assaulted with cruelty?’ The chatbot considered the presumption of innocence and stated that if the accused has been charged with a violent crime that involves cruelty, they may be considered a risk to the community. The judge clarified that the chatbot was not used to determine the outcome but only ‘to present a broader picture on bail jurisprudence, where cruelty is a factor.’


Was this newsletter forwarded to you, and you’d like to see more?


// CYBERSECURITY //

Vulkan files: Leaked documents reveal Russia’s cyberwarfare plans

A trove of leaked documents, dubbed the Vulkan files, have revealed Russia’s cyberwarfare tactics, according to journalists from 11 media outlets, who say the authenticity of the files has been confirmed by 5 intelligence agencies. 

The documents show that consultancy firm Vulkan worked for Russian military and intelligence agencies to support Russia’s hacking operations and the spread of disinformation. The documents also link a cyberattack tool developed by Vultan with the hacking group Sandworm, to whom the USA attributed various attacks such as NotPetya.

The documents include project plans, contracts, emails, and other internal documents dated between 2016 and 2021.


Pro-Russian hacktivists launch DDoS attacks on Australian organisations

Australian universities have been targeted by Distributed Denial of Service (DDoS) attacks in recent months. Web infrastructure security company Cloudflare reported that the attacks were carried out by Killnet and AnonymousSudan – hacktivist groups with pro-Russia sympathies – on several organisations in Australia. 

Killnet has a record of targeting governments and organisations that openly support the Ukrainian government. Since the start of the Ukraine war, the group has been associated with attacks on the websites of the European Parliament, airports in the USA, and the healthcare sectors in Europe and the USA, among others.


Cyberattacks on Ukraine on the rise, CERT-UA says

Ukraine’s computer emergency response team (CERT-UA) has recorded a spike in cyberattacks on Ukraine since the start of the year. The 300+ cyber incidents processed by CERT-UA are almost twice as many as during the corresponding period last year when Russia was preparing for a full-scale invasion.

In a Telegram message, Russia’s State Special Communications Service said that Russia’s aim is to obtain as much information as possible that can give it an advantage in a conventional war against Ukraine.


// ANTITRUST //

Google to Microsoft: Your cloud practices are anti-competitive

It’s been a while since Big Tech engaged in a public squabble, so when Google accused Microsoft’s cloud practices of being anti-competitive last week, we thought the growing rivalry spurred by ChatGPT had reached new levels.

In comments to Reuters, Google Cloud’s vice president Amit Zavery said Google Cloud had filed a complaint to regulatory bodies and has asked the EU’s antitrust watchdog ‘to take a closer look’ at Microsoft. In response, Microsoft reminded Google that the latter was in the lead in the cloud services sector. We’re wondering: Could this be a hint that it’s actually Google that merits greater scrutiny?


// CONTENT POLICY //

New US bill aims to strengthen news media negotiations with Big Tech

US lawmakers have reintroduced a bill to help news media in their negotiations with Big Tech, after a failed attempt during the last congressional session. The bipartisan bill – the Journalism Competition and Preservation Act – will create a four-year safe harbour period for news organisations to negotiate terms such as revenue sharing with tech companies like Facebook and Google.

Lawmakers are taking advantage of momentum gathered from a similar development in Canada, where the Online News Act, or Bill C-18, is currently being debated in Parliament. Reacting to the Canadian draft rules, Google and Meta threatened to do the same (leaving Reporters Without Border reeling; Google went through with its threat). We’re wondering whether Google – or any other Big Tech entity – will do the same in the USA. 


East Europe governments call on tech companies to fight disinformation

The prime ministers of Moldova, the Czech Republic, Slovakia, Estonia, Latvia, Lithuania, Poland and Ukraine have signed an open letter which calls on tech firms to help stop the spread of false information.

Some of the proposed actions include: refraining from accepting payments from those previously sanctioned, improving the accuracy and transparency of algorithms (rather than focusing on promoting content), and providing researchers free or affordable access to platforms’ data to understand the tactics of manipulative campaigns.


Internet Archive’s digital book lending violates copyright laws, US judge rules

A US judge has ruled that the Internet Archive’s digital book lending program violates copyrights, potentially setting a legal precedent for future online libraries. The initiators of the case, the Association of American Publishers, argued that the program infringed on their authors’ exclusive rights to reproduce and distribute their works.

Although the Internet Archive based its argument on the principle of fair use, Judge Denny Chin disagreed, as the platform’s practice impacts publishers’ income from licensing fees for paper and e-book versions of the same texts. The judge said that Open Library’s practice of providing full access to those books without obtaining permission from the copyright holders violated copyright rules. Internet Archive is preparing an appeal, but until then, it can’t provide new scanned library material.


The week ahead (3–9 April)

4 April: The European Broadcasting Union’s Sustainability Summit 2023 will focus on green streaming and other environment-friendly practices in digital broadcasting.

4–5 April: The International Association of Privacy Professionals (IAPP) Global Privacy Summit will gather privacy practitioners to discuss current regulatory challenges. (Sadly, ticket prices are prohibitively high.)

Diplo and the Geneva Internet Platform are organising two events this week:

Join us online!


#ReadingCorner
Photo of James Cameron

AI governance: Terminator movie director says we might already be too late

AI has become an integral part of modern life, but with its increasing prevalence, James Cameron, the director of the iconic Terminator movies warns that humans are facing a titanic battle (pun intended) for control over technology. Cameron urges governments to create ethical standards for AI before it’s too late. Read more here and here. (Note: Articles on this podcast were making the rounds last week, but the podcast itself is from December).

steph
Stephanie Borg Psaila
Director of Digital Policy, DiploFoundation

Numéro 77 de la lettre d’information Digital Watch – mars 2023

Tendances

Moteurs de recherche alimentés par l’IA :
la course est lancée

À vos marques, prêts, partez ! Mais tout le monde n’était pas sur la ligne de départ. 

Microsoft a décollé comme une fusée lors de sa conférence de presse inopinée, annonçant l’intégration du modèle de langage de grande taille  (LLM) ChatGPT-4, créé par OpenAI, dans son moteur de recherche Bing et dans son navigateur Edge. Google, qui avait précédemment qualifié la menace que représente ChatGPT de « code rouge », a dû réagir rapidement. Le lendemain, il annonçait son IA conversationnelle baptisée « Bard ». Et ce n’est que le début de la course.

ai kuca i stampa
Campaigns 107

Qu’est-ce qui différencie ChatGPT des autres modèles linguistiques précédents ?

La décision d’OpenAI de mettre ChatGPT gratuitement à la disposition du public était audacieuse, compte tenu des coûts importants associés à la maintenance du système tout en traitant des millions de questions d’utilisateurs curieux – des coûts que le P.-D. G. d’OpenAI, Sam Altman, a lui-même qualifiés d’« exorbitants ». Cependant, cette décision s’est avérée non seulement hardie, mais aussi ingénieuse. En rendant ChatGPT accessible à tous, OpenAI a déclenché une étincelle de curiosité et d’intérêt qui a attiré l’attention du monde entier.

En réalité, ChatGPT n’a pas été le premier modèle à être mis à la disposition du public gratuitement. Galactica, de Meta, a tenté le même exploit, a subi un échec cuisant et a été abandonné au bout de quelques jours seulement. Alors, comment ChatGPT a-t-il réussi, alors que Galactica n’a pas abouti ? La réponse est principalement liée à la manière dont il a été annoncé et à qui. Galactica a été présenté à la communauté universitaire comme une IA capable de rédiger sans effort des articles scientifiques – un domaine où le public est exigeant et critique, et qu’il est aisé de décevoir. D’un autre côté, ChatGPT – et ses limites – a été présenté publiquement comme un outil ouvert et accessible à tous, que chacun peut expérimenter et apprécier. Cette approche, combinée aux performances de ChatGPT, l’a distingué des modèles précédents et a changé la donne dans le monde de l’IA.

La décision de mettre ChatGPT à la disposition du grand public n’a pas été sans contraintes. Contrairement aux modèles à source ouverte, ChatGPT n’est disponible que de manière limitée, les chercheurs intéressés ne pouvant pas regarder « sous le capot » du modèle ou l’adapter à leurs besoins spécifiques. Il s’agit d’une approche différente de celle adoptée par OpenAI avec son exceptionnel modèle Whisper, dont la transparence était remarquable. D’un point de vue commercial, la décision d’OpenAI d’offrir ChatGPT au public, malgré les coûts élevés qu’elle implique, s’est avérée être la meilleure, car elle a attiré l’attention et des fonds.

Comment les autres entreprises ont-elles réagi ?

ChatGPT a suscité une attention et une concurrence sans précédent dans le monde de l’IA. L’intérêt considérable du public à son égard a incité les principaux acteurs du secteur à réagir rapidement. Ce type de compétition est sans aucun doute passionnant, mais il peut aussi entraîner des pertes. Néanmoins, les stratégies employées sont assez curieuses.

Microsoft n’a pas hésité à consacrer 10 milliards de dollars à l’intégration de ChatGPT dans tous ses produits phares, y compris Skype, Teams et Word. Elle l’a fait rapidement et ouvertement en réponse à l’intérêt et à la popularité considérables de ChatGPT auprès du public, créant ainsi un exemple à suivre.

Google a annoncé à la hâte qu’il intégrerait le modèle de Bard dans Google Search, mais a 

échoué dans sa première étape lorsque Bard a commis une erreur de fait dans sa démo de lancement. La stratégie de Google s’est traduite par d’importantes pertes initiales pour l’entreprise, une chute de 10 % du cours de l’action ayant effacé 170 milliards de dollars de la valeur marchande de Google.

Malgré le revers causé par la publication et la mise en libre accès du modèle Galactica, Meta persiste dans son intention de mettre en libre accès ses modèles d’IA de pointe. Avec l’ouverture récente de son plus grand modèle de langage, appelé « modèle LLaMa », Meta semble tenter de réduire la dépendance des utilisateurs à l’égard de l’API OpenAI GPT en donnant accès à de nouveaux modèles. LLaMa n’est pas le premier modèle à grande échelle à être ouvert – BLOOM et OPT sont disponibles depuis un certain temps –, mais leur application a été limitée en raison de leurs exigences matérielles élevées. LLaMa est environ dix fois plus petit que ces modèles et peut fonctionner sur un seul processeur graphique, ce qui pourrait permettre à un plus grand nombre de chercheurs d’accéder à des modèles linguistiques de grande taille et de les étudier. Dans le même temps, LLaMa obtient des résultats similaires à ceux de GPT-3.

Les géants chinois de la technologie n’ont pas perdu de temps : Baidu prévoit d’intégrer son chatbot Ernie (abréviation de Enhanced Representation through kNowledge IntEgration) dans son moteur de recherche en mars et, à terme, dans toutes ses activités.

Réponse des autorités de régulation

Les autorités de régulation des États-Unis et de la Chine s’en rendent compte : le P.-D. G. d’OpenAI, Sam Altman, a rencontré des législateurs américains, qui l’auraient pressé de s’exprimer sur les préjugés, la rapidité des changements dans l’IA et les utilisations potentielles de l’IA.

En Chine, ChatGPT n’est pas officiellement disponible, mais les utilisateurs ont pu y accéder grâce à des solutions de contournement. Toutefois, les autorités de régulation ont demandé aux principales entreprises technologiques chinoises de ne pas intégrer ChatGPT dans leurs services, car le logiciel « pourrait aider le gouvernement américain à diffuser de la désinformation et à manipuler les informations mondiales pour servir ses propres intérêts géopolitiques ». Les entreprises technologiques chinoises devront également rendre compte aux régulateurs avant de déployer leurs propres services de type ChatGPT.

L’impact de l’IA générative sur notre avenir

Les développements de février dans le domaine de l’IA générative ont une fois de plus soulevé la question suivante : l’IA va-t-elle prendre le contrôle de nos emplois ? C’est pourquoi les craintes qu’une nouvelle technologie rende le travail humain superflu sont omniprésentes à chaque fois que l’on parle d’une nouvelle technologie. 

Nos collègues du Diplo’s AI Lab, qui ont également utilisé des modèles de langage pour développer des outils d’IA très performants (nous sommes partiaux ici), pensent que l’IA ne rendra pas la plupart des emplois superflus. Certains emplois ont une nature intrinsèquement interhumaine, et l’IA aura du mal à les remplacer. 

Cependant, l’IA rendra certains emplois superflus, comme cela a été le cas pour toutes les nouvelles technologies.
La bonne nouvelle est que les outils d’IA permettront aux travailleurs de gagner du temps en supprimant les tâches courantes de leur liste. Et, si certains emplois disparaîtront, de nouveaux apparaîtront, comme l’ont déjà dit le P.-D. G. de Microsoft, Satya Nadella, et le fondateur de Microsoft, Bill Gates. La question est la suivante : comment faire en sorte que les générations actuelles et futures soient préparées à faire face aux changements actuels et à venir sur le marché du travail ?

002 X2
Campaigns 108

À Diplo, notre laboratoire d’IA et de données est à la pointe du développement de la technologie de l’IA, et a la capacité de transformer la façon dont la diplomatie est menée. Nous examinons également de manière approfondie l’impact de l’IA sur notre avenir dans le cadre d’une série de webinaires. Nous nous sommes demandé si l’IA allait prendre en charge les rapports diplomatiques et quel était le rôle (éventuel) de l’IA dans les négociations diplomatiques. Nous étudions comment ChatGPT peut nous aider à repenser l’éducation et, en tant qu’institution de formation, nous repensons également notre politique concernant l’utilisation des outils d’IA dans le cadre de nos cours et de nos programmes de formation

Vous souhaitez nous faire part de vos réflexions sur l’IA générative ? Écrivez-nous à digitalwatch@diplomacy.edu !


Baromètre

Les développements de la politique numérique qui ont fait la une

Le paysage de la politique numérique évolue quotidiennement. Voici donc les principaux éléments du mois de février. Nous les avons décodés en petites mises à jour qui font autorité. Vous trouverez plus de détails dans chaque mise à jour sur le Digital Watch observatory.

Architecture mondiale de la gouvernance numérique

neutre

Dans le cadre du processus d’élaboration du Pacte mondial pour le numérique, des consultations informelles ont été organisées avec les parties prenantes et les États membres des Nations unies.


Développement durable

neutre

Le rapport de l’UIT intitulé « Faits et chiffres : L’accent mis sur les pays les moins avancés » montre que la fracture numérique entre les pays les moins avancés (PMA) et le reste du monde est passée de 27 % en 2011 à 30 % en 2022.
La Chine a dévoilé un nouveau plan pour la construction d’une Chine numérique d’ici 2035, qui vise à placer le pays à l’avant-garde mondiale du développement numérique.


Sécurité

en progression

Les ministres de l’UE examinent une version modifiée du projet de loi sur la cyberrésilience (CRA), un règlement sur les exigences en matière de cybersécurité pour les produits numériques.

La Maison-Blanche a publié une stratégie nationale de cybersécurité, soulignant le fait que les grandes entreprises devraient assumer une plus grande responsabilité en ce qui concerne les produits et services logiciels non sécurisés.

Aux États-Unis, la Maison-Blanche a demandé aux agences fédérales de supprimer TikTok de tous les appareils fournis par le gouvernement dans un délai de 30 jours, pour des raisons de sécurité. De même, la Commission, le Parlement et le Conseil de l’Union européenne ont interdit l’utilisation de TikTok sur les appareils du personnel. TikTok a depuis annoncé le « Projet Trèfle », une nouvelle stratégie de sécurité des données, dans le cadre de laquelle les données des utilisateurs européens seront transférées en Irlande et en Norvège.
L’application de messagerie Signal a annoncé qu’elle cesserait de fournir des services au Royaume-Uni si on lui demandait de compromettre le cryptage dans le cadre du projet de loi sur la sécurité en ligne.


Infrastructure

en progression

La Commission européenne a lancé une consultation publique sur l’avenir de la connectivité, considérée comme un prélude à des projets qui pourraient obliger les grandes entreprises technologiques à payer leur part des coûts liés à l’infrastructure numérique. Elle a également publié une proposition de loi sur l’infrastructure Gigabit.

SpaceX prévoit d’empêcher l’armée ukrainienne d’utiliser son service Internet par satellite pour contrôler des drones.
L’OTAN a mis en place une cellule de protection des infrastructures sous-marines critiques afin de coordonner l’engagement des acteurs militaires et industriels.

Commerce électronique et économie de l’Internet

neutre

PayPal a suspendu le lancement de son système de monnaie stable en raison d’une surveillance réglementaire accrue.
La Commission australienne de la concurrence et de la consommation (ACCC) va examiner les produits et services interconnectés offerts par les plateformes numériques pour savoir s’ils nuisent à la concurrence et aux consommateurs.


Les droits numériques

en progression

Le Conseil européen de la protection des données (CEPD) a rendu son avis sur le projet de décision d’adéquation du cadre de protection des données de l’UE et des États-Unis, exprimant ses préoccupations quant à l’application des principes de nécessité et de proportionnalité nouvellement introduits.

Les autorités canadiennes de régulation de la vie privée ont ouvert une enquête sur la collecte, l’utilisation et la divulgation d’informations personnelles par TikTok.
Selon un rapport d’Access Now, c’est en Inde, en Ukraine et en Iran que le nombre de coupures d’Internet a été le plus élevé en 2022.


La politique de contenu

neutre

Les signataires du code de pratique 2022 sur la désinformation, qui comprend toutes les grandes plateformes en ligne, ont mis en place un centre de transparence qui garantit la transparence de leurs efforts de lutte contre la désinformation, et ont publié des rapports sur la mise en œuvre des engagements pris dans le cadre du code.

La conférence de l’UNESCO sur l’Internet pour la confiance a débattu de la réglementation des plateformes numériques afin de préserver la liberté d’expression et l’accès à l’information.


Juridiction et les questions juridiques

en progression

La Chine a annoncé la création d’un bureau national des données, qui établira un système de données pour le pays et coordonnera l’utilisation des ressources en données.

La Cour constitutionnelle fédérale allemande a jugé contraire à la Constitution l’utilisation par la police de l’analyse automatisée des données pour prévenir la criminalité.

Le ministère américain de la Justice a demandé à un tribunal de sanctionner Google pour la destruction présumée de preuves dans le cadre d’une affaire antitrust. 
La Commission européenne a retiré sa plainte contre le mécanisme d’achat in-app d’Apple, qui oblige les développeurs d’applications de diffusion de musique en continu à utiliser le système propriétaire s’ils veulent distribuer du contenu payant sur les appareils iOS, mais elle continuera d’enquêter sur les pratiques d’Apple en matière de lutte contre les dérapages.


Technologies

en progression

Les représentants de 59 pays ont lancé un appel commun à l’action sur le développement, le déploiement et l’utilisation responsables de l’IA dans le domaine militaire.

Le Comité sur l’IA du Conseil de l’Europe a poursuivi les discussions sur une convention sur l’IA et les droits de l’homme, et a publié un projet de texte.
Les Pays-Bas restreindront l’exportation des technologies les plus avancées en matière de semi-conducteurs, y compris les systèmes de lithographie dans l’ultraviolet profond (DUV).

Comment les algorithmes mettent-ils l’article 230 à l’épreuve ?

Il a longtemps été avancé que la loi américaine qui protège les plateformes de médias sociaux de toute responsabilité concernant le contenu publié par les utilisateurs sur ces plateformes – l’article 230 de la loi sur la décence des communications (Communications Decency Act) – devrait être restreinte ou carrément supprimée.

L’article 230, qui a fait l’objet de tant de débats, est une règle astucieuse de deux phrases qui stipule que : (a) les plateformes ne sont pas des éditeurs (elles ne sont donc pas responsables du contenu publié par les utilisateurs, contrairement aux éditeurs) ; (b) dans les cas où les plateformes assurent elles-mêmes le contrôle du contenu de tiers, elles ne peuvent pas être sanctionnées pour d’autres contenus préjudiciables qu’elles ne retirent pas.    

Il faut reconnaître que cette règle a permis à Internet de prospérer. Les plateformes ont pu héberger une grande quantité de contenus d’utilisateurs, sans crainte de responsabilité. Elles ont également permis la publication instantanée de contenus, sans que les plateformes soient tenues de les examiner avant de les rendre publics. La liberté d’expression a prospéré.

Mais aujourd’hui, l’ère des algorithmes met l’article 230 à l’épreuve. Il y a quelques semaines, la Cour suprême des États-Unis a commencé à examiner des arguments dans deux affaires, toutes deux jugées initialement par le neuvième district, qui pourraient avoir des répercussions sur l’article 230.

Gonzales vs Google

Dans un procès contre Google, la famille d’une Américaine tuée lors d’une attaque à Paris par des militants islamistes a fait valoir que l’algorithme de Google recommandait aux utilisateurs de YouTube des contenus provenant du groupe militant. La famille fait appel du premier jugement en faisant valoir que l’article 230 ne confère pas d’immunité aux plateformes en ce qui concerne les contenus recommandés par les algorithmes, étant donné que les suggestions faites par les algorithmes ne sont pas des contenus de tiers, mais les propres contenus de l’entreprise. L’argument de Google, en revanche, est que l’article 230 ne vise pas seulement à protéger les entreprises contre les contenus de tiers, mais va jusqu’à stipuler que les plateformes ne devraient pas être considérées comme des éditeurs.

De nombreuses entreprises et organisations ont déposé des recours en justice pour soutenir Google. Twitter, par exemple, fait valoir que les algorithmes permettent de donner la priorité à certains contenus par rapport à d’autres (« les contenus les plus récents par rapport aux plus anciens »), mais qu’ils ne véhiculent aucun contenu propre. Microsoft soutient que les algorithmes sont tellement essentiels à la vie quotidienne que l’article 230 ne devrait pas être restreint ou réinterprété, car cela « causerait des ravages sur l’Internet tel que nous le connaissons ».

Twitter vs Taamneh

La seconde affaire est un appel intenté par Twitter après que le neuvième circuit a écarté l’article 230 et autorisé la poursuite de l’affaire. Il a ensuite été statué que Twitter et d’autres plateformes n’avaient pas pris les mesures adéquates pour empêcher l’apparition de contenus terroristes sur la plateforme. La famille d’un Jordanien a accusé Twitter de ne pas avoir surveillé la plateforme après un attentat en 2017 qui a entraîné la mort de l’homme et de 38 autres victimes.

Cette affaire est axée sur la loi antiterroriste, mais, comme l’appel pourrait infirmer le jugement de la juridiction inférieure (qui comprenait une décision sur l’article 230), l’appel pourrait également avoir des répercussions sur l’article 230.

Quels sont les enjeux ?

Bien que les deux affaires soient liées, c’est l’affaire Gonzales qui devrait aborder la question des algorithmes : les plateformes peuvent-elles être tenues responsables du contenu promu par leurs algorithmes ?

Les deux actions en justice, qui seront tranchées d’ici la fin du mois de juin, peuvent avoir plusieurs issues. La plus radicale serait que le tribunal supprime les protections accordées aux plateformes en vertu de l’article 230. Mais, compte tenu de l’enjeu, cette solution est peu probable.

Il serait plus réaliste que la Cour suprême maintienne l’interprétation actuelle de l’article 230 ou, tout au plus, qu’elle introduise une restriction subtile. Il appartiendra alors au pouvoir législatif de répondre au mécontentement exprimé par les décideurs politiques ces dernières années.

Mise à jour des politiques de la Genève internationale

De nombreuses discussions politiques ont lieu chaque mois à Genève. Voici ce qui s’est passé en février.

GTCNL sur la réduction des menaces spatiales | 7 – 9 février

La troisième session du groupe de travail à composition non limitée (GTCNL) sur la réduction des menaces spatiales s’est tenue à l’Office des Nations unies à Genève. Le GTCNL est chargé, entre autres, de formuler des recommandations sur les normes, les règles et les principes de comportement responsable possibles en ce qui concerne les menaces que les États font peser sur les systèmes spatiaux. Il a été créé par la résolution 76/23 de l’Assemblée générale des Nations unies et s’est déjà réuni deux fois : (a) du 9 au 13 mars 2022, (b) puis les 12 et 13 septembre 2023. Le GTCNL devrait se réunir à nouveau du 7 au 11 août 2023 et soumettre un rapport final à la 78e Assemblée générale des Nations unies en septembre 2023.

Défis et solutions existants en matière de lutte contre la contrefaçon de dispositifs TIC | 15 février

L’Union internationale des télécommunications (UIT) a organisé une série de webinaires sur la lutte contre la contrefaçon et les appareils TIC volés. Dans le premier épisode, des intervenants de différents groupes de parties prenantes ont présenté les problèmes et les défis liés à la circulation des dispositifs TIC contrefaits. Une attention particulière a été accordée aux solutions possibles par le biais de la normalisation.

Explorez la Genève numérique !

Besoin d’un guide sur la gouvernance de l’Internet à Genève ? Notre Geneva Digital Atlas, dans lequel vous trouverez les coordonnées des 46 acteurs les plus importants de la politique numérique, vous accompagnera tout au long de votre voyage. Gardez un œil sur nos Instagram, Twitter, YouTube, Facebook et LinkedIn pour les vidéos hebdomadaires Geneva Digital Tours, dans lesquelles des personnalités de haut niveau vous guident à travers leurs institutions. Au mois de mars, les organisations impliquées dans la normalisation et l’infrastructure seront présentées. Notre première invitée est Doreen Bogdan-Martin, secrétaire générale de l’UIT !

Wjveh3rXYYxnxRNs0 qCnzhNjogLJ03wl1jSA8sZoP8G6g6RAji sYkhhsCcM6bRjJHh9NdfOl0012e90XiVWcCK9ZWzs0408uk7A0TeyL5pY

Les principaux événements du mois de mars en matière de politique numérique

11–16 Mars 2023, ICANN76 (Paris, France)
L’ICANN76 offrira à sa communauté la possibilité d’aborder diverses questions relatives à son activité et à la gestion du système de noms de domaine (DNS). Le programme de ce forum communautaire de l’ICANN76 comprend le développement des capacités / la formation, l’interaction entre les communautés, l’élaboration de politiques, la sensibilisation / l’engagement, la sécurité / les questions techniques et les rapports / les mises à jour.

13–17 mars, Sommet mondial sur la société de l’information – SMSI (Genève, Suisse, et en ligne)

Le thème du Forum 2023 du Sommet mondial sur la société de l’information (SMSI) est « Lignes d’action du SMSI pour reconstruire en mieux et accélérer la réalisation des ODD ». Le Forum du SMSI est une plateforme mondiale multipartite destinée à faire progresser le développement durable par la mise en œuvre des grandes orientations du SMSI. Il facilite le partage d’informations et de connaissances, la création de connaissances, l’identification des tendances émergentes et la promotion de partenariats avec les organisations des Nations unies et les cofacilitateurs des lignes d’action du SMSI. 

Diplo et la Geneva Internet Platform (GIP), avec le soutien des missions permanentes de Djibouti, du Kenya et de la Namibie, co-organisent une session pendant le Forum du SMSI pour débattre de la diplomatie numérique de l’Afrique. La session explore la manière dont l’Afrique peut renforcer sa participation à la gouvernance numérique mondiale, compte tenu de la croissance de ses économies numériques, de ses écosystèmes de start-up et de sa transformation numérique dynamique. Elle vise à identifier les priorités en matière de politique numérique, à améliorer la participation de l’Afrique aux processus de gouvernance numérique mondiale, et à offrir des idées pratiques pour renforcer la diplomatie numérique de l’Afrique dans les processus internationaux liés à la cybersécurité, à l’IA, à la gouvernance des données, à l’accès et à l’infrastructure. Enfin, la session proposera des mesures pratiques pour développer la diplomatie numérique africaine.

27–31 mars, 26e séance de la CSTD (Genève, Suisse, et en ligne)

La Commission de la science et de la technologie au service du développement (CSTD) tiendra sa 26e séance sous les thèmes principaux suivants : Technologie et innovation pour une production plus propre, plus performante et plus compétitive ; Assurer l’eau potable et l’assainissement pour tous : la solution par la science et l’innovation ; Technologie et innovation pour une production plus propre, plus efficace et plus compétitive.La commission se concentrera sur la manière dont la science, la technologie et l’innovation peuvent servir de catalyseurs à l’Agenda 2030, en particulier dans des domaines cruciaux tels que le développement économique, environnemental et social. La CSTD examinera également les progrès réalisés dans la mise en œuvre et le suivi des résultats du Forum du SMSI aux niveaux régional et international, et entendra des présentations sur les examens en cours des politiques de la science, de la technologie et de l’innovation.

29–30 mars, Entretien de l’OMPI sur la propriété intellectuelle et les technologies d’avant-garde (Genève, Suisse, et en ligne)

La septième séance de la conférence de l’Organisation mondiale de la propriété intellectuelle (OMPI) portera sur la propriété intellectuelle et les technologies d’avant-garde, et se concentrera sur l’intersection de la propriété intellectuelle et du métavers, en explorant les technologies d’avant-garde qui le rendent possible et en examinant les défis qu’elles posent au système de propriété intellectuelle existant. L’objectif principal de la séance est de fournir une stratégie pour relever ces défis, et de faire en sorte que l’innovation et le développement continuent de profiter à tout le monde.