
Digital Watch newsletter – Issue 79 – May 2023


Pentagon: The leak on Discord is more significant than we think

From time to time, intelligence belonging to US agencies and their allies is exposed in major leaks. April’s leak of some 50 top secret documents on the gaming chat service Discord was one of the most egregious.

The release of diplomatic cables by WikiLeaks in 2010, the 2013 disclosures by Edward Snowden, and the 2016 and 2017 disclosures of the National Security Agency’s and the CIA’s hacking tools rank among the biggest leaks of modern times.

Outrage or shrug? Diminishing response

Every new leak seems to generate less and less outrage on a global level. So when another US intelligence leak surfaced in April on Discord (a relatively unknown social platform), it hardly caused a blip on the radar. While sensationalism can hinder law enforcement’s efforts, disinterest isn’t exactly helpful either.

The Discord leak was revealed on 6 April by the New York Times. Behind the leak was 21-year-old Jack Teixeira, an airman first class in the Massachusetts Air National Guard.

It wasn’t difficult for the FBI to identify him. He had uploaded the documents to an online community on Discord (a server) that he unofficially administered, and had even tracked the FBI’s investigation into his own leak. He was charged a few days later.

Mistaken for fake news

In that short time, the leaked documents were spread to other social media platforms by users who thought the documents were fake. The possibility of the documents being top secret didn’t seem to register.

As CNN reported: ‘Most [Discord] users distributed the files because they thought they were fake at first,’ one Discord user said. ‘By the time they were confirmed as legitimate, they were already all over Twitter and other platforms.’

A Google Trends graph shows how people’s interest in searching for information related to leaks has dwindled over time

Very bad timing

Not that there is ever a good time, but this leak arrived at a particularly sensitive moment in Russia’s ongoing conflict against Ukraine. 

Although the data was not as comprehensive as in previous leaks, this latest breach provided intimate details about the current situation in Ukraine, as well as intelligence on two of the US’s closest allies: South Korea and Israel. 

While Europe was mostly spared, the leaked information revealed that European special forces are on the ground in Ukraine and that almost half of the tanks en route to Kyiv are from Poland and Slovenia. The collateral consequences of the leak extend to many countries.

Still out there

Days after the Pentagon announced its investigation, the leaked documents could still be accessed on Twitter and other platforms, prompting a debate about the responsibility of social media companies in cases involving national security. There is no single solution to social media’s content moderation problems, which complicates the follow-up.

Unfortunately, but unsurprisingly, leaks are bound to happen, especially when classified information is accessible to so many people. In 2019, some 1.25 million US citizens held clearances granting access to top-secret information.

One solution, therefore, is for social media platforms to strengthen their content policies when it concerns leaks of intelligence information. If the former Twitter employee interviewed by CNN is correct, ‘the posting of classified US military documents would likely not be a violation of Twitter’s hacked materials policy’. Another possibility is for companies to strengthen their content moderation capabilities. To avoid imposing impossible burdens on start-up or small platforms, capabilities should be matched to the size of a platform’s user base (the framework used by the EU’s Digital Services Act is a good example).

The issue becomes more complex when illegal material is shared on platforms that use end-to-end encryption. As law enforcement agencies have emphasised time and time again, while there’s no doubt that encryption plays an important role in safeguarding privacy, it also hampers their ability to identify, pursue, and prosecute violations.  

For now, we should focus on the fact that the latest leak was uploaded by a user to a public forum on social media, despite the potential damage to the national security of their own country (the USA) and the risk to citizens of a war-torn country (Ukraine). That is undoubtedly the biggest concern.


Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from April. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture

same relevance

G7 digital ministers will start implementing Japan’s plan for a Data Free Flow with Trust (DFFT) through a new body, the Institutional Arrangement for Partnership (IAP), led by the Organisation for Economic Co-operation and Development (OECD). They also discussed AI, digital infrastructure, and competition.

Sustainable development

same relevance

The UN World Data Forum, held in Hangzhou, China, called for better data governance and increased collaboration between governments to achieve a sustainable future. UN Secretary-General António Guterres said that data remains a critical component of development and progress in the 21st century.

Security

increasing relevance

The Pentagon started investigating the leak of over 50 classified documents that turned up on the social media platform Discord. See our story on pages 2–3. A joint international law enforcement operation seized the Genesis Market, a dark web market.

The European Commission announced a EUR 1.1 billion (USD 1.2 billion) plan to strengthen the EU’s capabilities to fend off cyberattacks and to support more coordination among member states.

TikTok was banned on government devices in Australia; the Irish National Cyber Security Centre also recommended that government officials refrain from using TikTok on devices.

The annual report of the Internet Watch Foundation (IWF) revealed that severe child sexual abuse imagery is on the rise.

E-commerce and the internet economy

same relevance

The UK’s Competition and Markets Authority (CMA) blocked Microsoft’s acquisition of Activision Blizzard over concerns that it would negatively affect the cloud gaming industry. Microsoft will appeal.

The European Commission designated 17 tech companies as very large online platforms (VLOPs) and 2 as very large online search engines (VLOSEs); all 19 will need to comply with stricter rules under the new Digital Services Act.

South Korea’s Fair Trade Commission (FTC) fined Google for unfair business practices. A group of Indian start-ups asked a local court to suspend Google’s new in-app billing fee system. In the UK, Google will let Android developers use alternate payment options.

Infrastructure

same relevance

The EU’s Council and Parliament reached a political agreement on the new Chips Act, which aims to double the EU’s share of global chip production to 20% by 2030.

Digital rights

increasing relevance

Governments around the world launched investigations into OpenAI’s ChatGPT, principally over concerns that the company’s practices violated people’s privacy and data protection rights. See our main story.

The Indian government is considering opening up Aadhaar, the country’s digital identity system, to private entities to authenticate users’ identities. 

Members of the European Parliament (MEPs) voted against a proposal to allow transfers of EU citizens’ personal data to the USA under the new EU-US Data Privacy Framework.

Content policy

same relevance

The Central Cyberspace Administration of China will carry out a three-month nationwide campaign to remove fake news about Chinese businesses from online circulation. The stated aim is to give enterprises and entrepreneurs a favourable environment of online public opinion in which to work.

Jurisdiction and legal issues

same relevance

Brazil’s Supreme Court blocked – and then reinstated – messaging app Telegram for users in the country after the company failed to provide data linked to a group of neo-Nazi organisations using the platform. 

A Los Angeles court dismissed a claim for damages by a Tesla driver, after the company successfully argued that the partially automated driving software was not a self-piloted system.

New technologies

increasing relevance

In the USA, the Biden administration is studying potential accountability measures for AI systems. The National Telecommunications and Information Administration’s (NTIA) call for feedback runs until 10 June. A US Democratic senator has introduced a bill that would create a task force to review AI policy. The US Department of Homeland Security also announced a new task force to ‘lead in the responsible use of AI to secure the homeland’ while defending against malicious use of AI.

A group of 11 members of the European Parliament are urging the US President and European Commission chief to co-organise a high-level global summit on AI governance. 

The Cyberspace Administration of China (CAC) proposed new measures for regulating generative AI services. The draft is open for public comments until 10 May.

Dozens of advocacy organisations and children’s safety experts called on Meta to halt its plans to allow kids into its virtual reality world, Horizon Worlds, due to potential risks of harassment and privacy violations for young users.


Why authorities are investigating ChatGPT: The top 3 reasons

With its ability to replicate human-like responses in text-based interactions, OpenAI’s ChatGPT has been hailed as a breakthrough in AI technology. But governments aren’t entirely sold on it. So what’s worrying them?

Privacy and data protection

Firstly, there’s the central issue of allegedly unlawful data collection, the all-too-common practice of collecting personal data without the user’s consent or knowledge. 

This is one of the reasons why the Italian privacy watchdog, the Garante per la Protezione dei Dati Personali, imposed a temporary ban on ChatGPT. The company addressed most of the authority’s concerns, and the software is now available in Italy again, but that doesn’t solve all the problems.

The same concern is being tackled by other data protection authorities, including France’s Commission nationale de l’informatique et des libertés (CNIL), which received at least two complaints, and Spain’s Agencia Española de Protección de Datos (AEPD). Then there’s the European Data Protection Board (EDPB)’s newly launched task force, whose ChatGPT-related work will involve coordinating the positions of European data protection authorities.

Concerns around data protection have not been limited to Europe, however. The complaint by the Center for Artificial Intelligence and Digital Policy (CAIDP) to the US Federal Trade Commission (FTC) argued that OpenAI’s practices contain numerous privacy risks. Canada’s Office of the Privacy Commissioner is also investigating.

Unreliable results

Secondly, there’s the issue of inaccurate results. OpenAI’s ChatGPT model has been used by several companies, including Microsoft for its Bing search engine, to generate text. However, as OpenAI itself confirms, the tool is not always accurate.

Reliability was one of the issues behind Italy’s decision to ban ChatGPT and was raised in one of the complaints received by France’s CNIL. The CAIDP’s complaint to the FTC also argued that OpenAI’s practices are deceptive because the tool is ‘highly persuasive’ even when its content is unreliable.

In Italy’s case, OpenAI told the authority it was ‘technically impossible, as of now, to rectify inaccuracies’. That’s of little reassurance, considering how these AI tools can be used in sensitive contexts such as healthcare and education. The only recourse, for now, is to provide users with better ways to report inaccurate information.


Children’s safety

Thirdly, there’s the issue of children’s safety and the absence of an age verification system. Both the Italian authority and the CAIDP argued that, as things stand, children can be exposed to content that is inappropriate for their age or level of maturity.

Even though ChatGPT is available in Italy again after OpenAI added an age question to the sign-up form, the authority’s request for an age-based gating system still stands. OpenAI must submit its plans by May and implement them by September. This request coincides with efforts by the EU to improve how platforms confirm their users’ age.

As long as new AI tools keep emerging, we expect continued scrutiny of AI technologies, particularly around their potential privacy and data protection risks. OpenAI’s response to the various demands and investigations may set a precedent for how AI companies are held accountable for their practices in the future. At the same time, calls for greater regulation and oversight of AI technologies, and of machine learning systems in particular, continue to grow.

Policy updates from International Geneva

WSIS Action Line C4: Understanding AI-powered learning: Implications for developing countries | 17 April

An event organised by the ITU and the ILO examined the impact of AI technologies on the global education ecosystem.

Focusing mostly on the issues experienced by the Global South, experts discussed how these technologies were being used in areas such as exam monitoring, faculty lecture transcriptions, student success analyses, teachers’ administrative tasks, and real-time feedback to student questions. 

They also talked about the added workload for teachers to ensure that they and their learners are proficient with the necessary tools, as well as the use and storage of personal data by the providers of AI technologies and others within the educational system. 

Solutions to these challenges must also address the existing digital skills gap and connectivity issues.


UNECE Commission’s 70th Session: Digital and Green Transformations for Sustainable Development in the Region | 18–19 April

The 70th session of the UN Economic Commission for Europe (UNECE) hosted ministerial-level representatives from UNECE member states for a two-day event that tackled digital and green transformation for sustainable development in Europe, the circular economy, transport, energy, financing for climate change, and critical raw materials.

The event allowed participants to exchange experiences and success stories, review progress on the Commission’s activities, and consider issues related to economic integration and cooperation among countries in the region. The session emphasised the need for a green transformation to address pressing challenges related to climate change, biodiversity loss, and environmental pressures, and highlighted the potential of digital technologies for economic development, policy implementation, and natural resource management.


Girls in ICT Day 2023 | 27 April

The International Girls in ICT Day, an annual event that promotes gender equality and diversity in the tech industry, was themed ‘Digital Skills for Life’.

The global celebration was held in Zimbabwe as part of the Transform Africa Summit 2023, while other regions conducted their own events and celebrations.

The event was instituted by ITU in 2011, and it is now celebrated worldwide. Governments, businesses, academic institutions, UN agencies, and NGOs support the event, providing girls with opportunities to learn about ICT, meet role models and mentors, and explore different career paths in the industry. 

To date, the event has hosted over 11,400 activities held in 171 countries, with more than 377,000 girls and young women participating.

What to watch for: Global digital policy events in May

10–12 May 2023 | Intergovernmental Group of Experts on E-commerce and the Digital Economy (Geneva and online) 

UNCTAD’s group of experts on e-commerce and the digital economy meets annually to discuss ways of supporting developing countries in engaging with and benefiting from the evolving digital economy and narrowing the digital divide. The meeting has two substantive agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the Working Group on Measuring E-commerce and the Digital Economy.


19–21 May 2023 | G7 Hiroshima Summit 2023 (Hiroshima, Japan)

The leaders of the Group of Seven advanced economies, along with the presidents of the European Council and the European Commission, convene annually to discuss crucial global policy issues. During Japan’s presidency in 2023, Japanese Prime Minister Fumio Kishida identified several priorities for the summit, including the global economy, energy and food security, nuclear disarmament, economic security, climate change, global health, and development. AI tools will also be on the agenda.


24–26 May 2023 | 16th International CPDP conference (Brussels and online) 

The upcoming Computers, Privacy, and Data Protection (CPDP) conference, themed ‘Ideas That Drive Our Digital World’, will focus on emerging issues such as AI governance and ethics, safeguarding children’s rights in the algorithmic age, and developing a sustainable EU-US data transfer framework. Every year, the conference brings together experts from diverse fields, including academia, law, industry, and civil society, to foster discussion on privacy and data protection.


29–31 May 2023 | GLOBSEC 2023 Bratislava Forum (Bratislava, Slovakia)

The 18th edition of the Bratislava Forum will bring together high-level representatives from various sectors to tackle the challenges shaping the changing global landscape across four main areas: defence and security, geopolitics, democracy and resilience, and economy and business. The three-day forum will feature more than 100 speakers and over 40 sessions.


30 May–2 June 2023 | CyCon 2023 (Tallinn, Estonia)

The NATO Cooperative Cyber Defence Centre of Excellence will host CyCon 2023, an annual conference that tackles pressing cybersecurity issues from legal, technological, strategic, and military perspectives. Themed ‘Meeting Reality’, this year’s event will bring together experts from government, military, and industry to address policy and legal frameworks, game-changing technologies, cyber conflict assumptions, the Russo-Ukrainian conflict, and AI use cases in cybersecurity.

The Digital Watch Observatory maintains a live calendar of upcoming and past events.