The USA and the UK signed the Atlantic Declaration to strengthen their economic, technological, commercial, and trade relations. The EU’s AI Act might be in jeopardy. Meta is in trouble with the EU over content moderation, namely its failure to remove child sexual abuse material from Instagram, and Google has published its Secure AI Framework.
Let’s get started. Andrijana and the Digital Watch team
The first pillar focuses on ensuring US-UK leadership in critical and emerging technologies. Under this pillar, the two nations have established a range of collaborative activities:
They will prioritise research and development efforts, particularly in quantum technologies, by facilitating increased mobility of researchers and students and fostering workforce development to promote knowledge exchange.
They will work together to strengthen their positions in cutting-edge telecommunications by collaborating on 5G and 6G solutions, accelerating the adoption of Open RAN, and enhancing supply chain diversity and resilience.
Deepening cooperation in synthetic biology is also a priority, aiming to drive joint research, develop novel applications, and enhance economic security through improved biomanufacturing pathways.
They will conduct collaborative research in advanced semiconductor technologies, such as advanced materials and compound semiconductors.
Additionally, the countries will accelerate cooperation on AI, with a specific emphasis on safety and responsibility.
This will involve deepening public-private dialogue, mobilising private capital towards strategic technologies, and establishing a US-UK Strategic Technologies Investor Council within the next twelve months. The council will include investors and national security experts who will identify funding gaps and facilitate private investment in critical and emerging technologies. Lastly, efforts will be made to improve talent flows between the USA and the UK, ensuring a robust exchange of skilled professionals.
The second pillar of the partnership centres on advancing cooperation on economic security and technology protection toolkits and supply chains. This involves addressing national security risks associated with some types of outbound investment and preventing their companies’ capital and expertise from fueling technological advances that could enhance the military and intelligence capabilities of countries of concern. Additionally, the countries will work towards flexible and coordinated export controls related to sensitive technologies, enabling the complementarity of their respective toolkits. Strengthening their partnership across sanctions strategy, design, targeting, implementation, and enforcement is another objective. Lastly, the countries aim to reduce vulnerabilities across critical technology supply chains by sharing analysis, developing channels for coordination and consultation during disruptions and crises, and ensuring resilience.
What this means for digital policy: Judging by previous comments from Paul Rosen, the US Treasury’s investment security chief, this is about preventing know-how and investments in advanced semiconductors, AI, and quantum computing from reaching China, which would allegedly use them to bolster its military and intelligence capabilities. The UK, which already shares a special relationship with the USA in intelligence, just might be joining the US-led export controls on semiconductors. Reminder: Chip giant Arm is headquartered in the UK.
Pillar 3 of the partnership focuses on an inclusive and responsible digital transformation. The countries aim to enhance cooperation on data by establishing a US-UK Data Bridge, ensuring data privacy protections, and supporting the Global Cross-Border Privacy Rules (CBPR) Forum and the OECD’s Declaration on Government Access to Personal Data Held by Private Sector Entities.
The countries will accelerate cooperation on AI, and the USA welcomed the planned launch of a Global Summit on AI Safety by the UK Prime Minister in the autumn of 2023. Collaboration on Privacy Enhancing Technologies (PETs) is also planned to enable responsible AI models and protect privacy while leveraging data for economic and societal benefits.
Why is it relevant? First, interestingly, there was no mention of the EU in this equation, although the declaration affects the EU and US-UK-EU relations. Second, the USA and the UK are stressing collaboration on AI: it is clearly a priority for both. Third, the USA is looking to Britain to help lead efforts on AI safety and regulation, hoping that AI companies find more fertile ground in the UK than in the EU’s stricter environment. Fourth, the UK wants to put its EU membership firmly behind it. Sunak stated: ‘I know some people have wondered what kind of partner Britain would be after we left the EU. […] And we now have the freedom to regulate the new technologies that will shape our economic future, like AI, more quickly and flexibly.’
Beyond what they said, we’ll see how this impacts what they did not mention: Microsoft’s Activision Blizzard takeover.
Digital policy roundup (6–12 June)
// AI GOVERNANCE //
Is the EU’s AI Act in jeopardy?
The political deal behind the AI Act may be crumbling, and this might affect the Parliament’s endorsement of the text.
In April, a deal struck between the four main political groups at the European Parliament stipulated they would not table alternative amendments to the AI Act. However, the European People’s Party (EPP) was given flexibility on the issue of remote biometric identification (RBI). On 7 June, the final deadline for amendments, the EPP tabled a separate amendment on RBI. There are two problems with that.
Other groups claim that the EPP broke the deal, and they may feel entitled to vote for amendments tabled outside of it. If they do, there’s no telling how the Parliament’s plenary vote on 14 June will go.
Not everyone likes what’s in the actual amendment. The EPP’s proposed text stipulates that member states may authorise the use of real-time RBI systems in public spaces, subject to prior judicial authorisation, for ‘(1) the targeted search of missing persons, including children; (2) the prevention of a terrorist attack; (3) the identification of perpetrators of criminal offences punishable in the Member State concerned for a maximum period of at least three years.’ MEPs from four political groups (liberals, socialists, greens, and the left) firmly oppose the EPP’s amendment on biometric identification. They are asking for a ban on such systems, arguing that AI systems that perform behavioural analysis are prone to error, falsely flag law-abiding citizens, and are discriminatory and ineffective for law enforcement.
Why is it relevant? If the Parliament doesn’t endorse the text, it will slow down the world’s first playbook on AI: it might take longer than the projected end of 2023 to reach a political deal among the EU institutions. This threatens the EU’s plans to be a leader in AI rule-making, and plenty of others are willing to step up.
// CHILD SAFETY ONLINE //
Instagram’s algorithms recommend child-sex content to paedophiles, research finds
An investigation by the Wall Street Journal, Stanford University, and the University of Massachusetts at Amherst uncovered that Instagram has been hosting large networks of accounts posting child sexual abuse material (CSAM). The platform’s recommendation algorithms play a key role in making Instagram the most valuable platform for sellers of self-generated CSAM (SG-CSAM): ‘Instagram connects paedophiles and guides them to content sellers via recommendation systems that excel at linking those who share niche interests, the Journal and the academic researchers found.’ Viewing even one such account led to new CSAM-selling accounts being recommended to the user, thus helping to build the network.
This is where it gets worse. At the time of the research, Instagram enabled searching for explicit hashtags such as #pedowhore and #preteensex. When researchers searched for a paedophilia-related hashtag, a pop-up informed them: ‘These results may contain images of child sexual abuse’. Below the text, two options were given: ‘Get resources’ and ‘See results anyway’. Instagram has since removed the option to see the results, but that doesn’t stop us from wondering why it was available in the first place.
Why is it relevant? First, it tells us something about the speed at which the platform reacted to reports of CSAM. Perhaps that will now change, since the platform said it will form a task force to investigate the problem.
Second, it attracted the ire of European Commissioner Thierry Breton.
Third, Meta will have to demonstrate the measures it plans to take to comply with the EU’s Digital Services Act (DSA) after 25 August or face heavy sanctions, Breton said. Meta, designated a Very Large Online Platform (VLOP), has stringent obligations under the DSA, and fines for breaches can reach as high as 6% of a company’s global turnover. While Breton didn’t put Twitter on blast, the company has also been designated a VLOP, meaning it also runs the risk of being fined.
(And finally, it seems the media did not get the memo: ‘child sexual abuse material’ is the preferred terminology.)
// SURVEILLANCE //

French Senate approves surveillance of suspects using cameras and microphones
The French Senate approved a contentious provision in a justice bill that allows the remote activation of computers and connected devices without the owner’s knowledge. The provision serves two purposes: (1) real-time geolocation for certain offences and (2) the activation of microphones and cameras to capture audio and images, which would be limited to cases of terrorism, delinquency, and organised crime. The Senate also adopted an amendment limiting the use of geolocation to investigations of offences punishable by at least ten years’ imprisonment. In any case, implementing the provision will require judicial approval.
Why is it relevant? Surveillance tactics are never a favourite with privacy advocates, who typically argue that such privacy breaches cannot be justified by national security concerns. In this instance, the safeguards are unclear, as are mechanisms for redress.
// CYBERSECURITY //
Google introduces Secure AI Framework
Google has introduced its Secure AI Framework (SAIF), which aims to reduce overall risk when developing and deploying AI systems. It is based on six elements organisations should be mindful of:
Expand strong security foundations to the AI ecosystem by leveraging secure-by-default infrastructure protections and scaling and adapting infrastructure protections as AI threats advance
Bring AI into an organisation’s threat universe by extending detection and response to AI-related cyber incidents
Automate defences to keep pace with existing and new threats, including harnessing the latest AI innovations to improve response efforts
Harmonise platform-level controls to ensure consistent security of AI applications across the organisation
Adapt controls to adjust mitigations and create faster feedback loops for AI deployment via reinforcement learning based on incidents and user feedback
Contextualise AI-system risks in surrounding business processes by conducting end-to-end risk assessments on AI deployment
Google has committed to fostering industry support for SAIF, working directly with organisations to help them understand how to assess and mitigate AI security risks, sharing threat intelligence, expanding its bug hunter programs to incentivise research around AI safety and security, and delivering secure AI offerings.
Why is it relevant? As more AI components are integrated into digital products, the security of the supply chain will benefit from secure-by-default AI products.
NATO to enhance military cyber defences in peacetime, integrate private sector capabilities
NATO member states are preparing to approve an expanded role for military cyber defenders during peacetime, as well as the permanent integration of private sector capabilities, revealed NATO’s assistant secretary general for emerging security challenges David van Weel. Furthermore, NATO plans to establish a mechanism to facilitate assistance among allies during crises when national response capabilities become overwhelmed.
The endorsement is expected at the upcoming Vilnius summit in Lithuania, scheduled for July.
Why is it relevant? Van Weel stated: ‘We need to move beyond naming and shaming bad actors in response to isolated cyber incidents, and be clear what norms are being broken.’ The norms he referred to are agreed-upon norms of responsible state behaviour in cyberspace, confirmed in the reports of the GGEs and the first OEWG on ICTs. His remarks come just two weeks after UN member states met under the auspices of the OEWG in New York to discuss responsible state behaviour in cyberspace. We’ll have more on that towards the end of this week on the Digital Watch Observatory–keep an eye out.
China issues draft guidelines to tackle cyber violence
The guidelines propose the punishment of online defamation, insults, privacy violations, and offline nuisance behaviour, such as intercepting and insulting victims of cyber violence and their relatives and friends, causing disturbances, intimidating others, and destroying property. They also address using violent online methods for malicious marketing and hype, as well as protecting civil rights and identifying illegal acts.
The guidelines also note that network service providers can be convicted and punished if they neglect their legal obligations to manage information network security in identified instances of cyber violence and fail to rectify the situation after being instructed by regulatory authorities to take corrective measures, where such neglect results in the widespread dissemination of illegal information or other serious consequences.
The draft is open for public comments until 25 June.
// CRYPTOCURRENCIES //
US SEC launches lawsuits against Binance and Coinbase
The world’s biggest cryptocurrency exchanges–Binance and Coinbase–were hit by a wave of lawsuits from the US Securities and Exchange Commission (SEC).
Why is it relevant? Because down the line, these actors might leave the USA for greener pastures. Diplo’s Arvin Kamberi has more on that in the video below.
The week ahead (13–19 June)
13 June: The Swiss Internet Governance Forum 2023 will discuss the use and regulation of AI, especially in the context of education; protecting fundamental rights in the digital age; responsible data management; platform influence; democratic practices; responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics. Our Director of Digital Policy, Stephanie Borg Psaila, will speak at the session Digital governance and the multistakeholder approach in 2023.
14 June: The last two GDC thematic deep dives will focus on global digital commons and accelerating progress on the SDGs. The discussion on the global digital commons will explore the principles, values, and ideas associated with this approach, while considering how digital commons can enhance the safety and inclusivity of the global ecosystem of digital public infrastructure and goods. The discussion on accelerating progress on the SDGs will examine the role of digital technology in achieving the SDGs and addressing future challenges, as well as the potential for generalising principles and approaches based on shared experiences. For more information on the Global Digital Compact (GDC), visit our dedicated web page on the Digital Watch Observatory.
15–16 June: This year’s Digital Assembly will be held under the theme: A digital, open and secure Europe, and has openness, competition, digitalisation and cybersecurity in its focus. The assembly is organised by the European Commission and the Swedish Presidency of the Council of the EU.
19–21 June: The 2023 edition of Europe’s regional internet governance gathering–EuroDIG–will be themed Internet in troubled times: Risks, resilience, hope. The GIP will once again partner with EuroDIG to deliver messages and reports from the conference using DiploGPT. The reports and messages will be available on our dedicated Digital Watch page.
19 June–14 July: The 53rd session of the Human Rights Council (HRC) will feature a panel discussion on the role of digital, media, and information literacy in the promotion and enjoyment of the right to freedom of opinion and expression. The council will also consider the report on the relationship between human rights and technical standard-setting processes for new and emerging digital technologies and the practical application of the Guiding Principles on Business and Human Rights, as well as the report on Digital innovation, technologies, and the right to health.
We asked: Who holds the dice in the grand game of addressing AI for the future of humanity? Also featured: a brief summary of the UN Secretary-General’s policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation, May’s barometer of updates, and the leading global digital policy events ahead in June.
Andrijana Gavrilović – Author
Editor – Digital Watch; Head of Diplomatic & Policy Reporting, Diplo