
DW Weekly #127 – 11 September 2023


Dear all,

Last week focused on the G20 Summit and the success of Indian diplomacy in fostering consensus on the New Delhi Leaders’ Declaration. AI remained in focus for the G7 and on both sides of the Atlantic, and Google is facing a monopoly trial in the USA, the first of its kind in the modern internet era.

Let’s have a closer look.

Pavlina and the Digital Watch team


G20 Summit and the New Delhi Leaders’ Declaration

The G20 summit over the weekend reached an unanticipated consensus. The summit statement on the Russia-Ukraine conflict, together with the inclusion of the African Union as a new member, is seen as a significant success of Indian diplomacy. 

The group adopted the New Delhi Leaders’ Declaration by consensus, in which digital issues featured more prominently than other diplomatic issues. The declaration deals with technological transformation and digital public infrastructure, giving a boost to India’s push for the global adoption of digital public infrastructure. The G20 Framework for Systems of Digital Public Infrastructure, the global Digital Public Infrastructure Repository, and the One Future Alliance (OFA) proposal are voluntary measures aimed at supporting the Global South in building inclusive digital public infrastructure.

The declaration also endorses the joint paper by the International Monetary Fund (IMF) and the G20’s Financial Stability Board (FSB), which outlines policy and regulatory recommendations to address the risks of crypto assets.

On the topic of AI, the declaration reaffirmed the existing G20 AI principles from 2019 and called for global discussions on AI governance. The declaration also places a strong emphasis on the gender digital divide.

Why is it relevant?

The fact that the G20 adopted a consensus document, unlike previous G20 meetings that resulted only in chair’s summaries, is seen as a win, staving off division within the G20. The outcomes, however, are being criticised for lacking concrete actions, implementation steps, and timelines.

Digital policy roundup (5–11 September)

India, the Middle East and Europe’s new economic corridor

The USA, India, Saudi Arabia, the UAE, France, Germany, Italy, Japan, and the EU have announced a major international infrastructure project – the India-Middle East-Europe Economic Corridor (IMEC) – to connect India, the Middle East, and Europe with railways, shipping lines, high-speed data cables, and energy pipelines. The project aims to counter China’s Belt and Road vision, where the Middle East is also a key player. 

Why is this relevant?

The Chinese Belt and Road Initiative (BRI), launched in 2013 and also referred to as the New Silk Road, is an ambitious infrastructure project devised to link East Asia and Europe. Over the years, it has expanded to Africa, Oceania, and Latin America, broadening Chinese influence. The new IMEC project would create an economic corridor between India, the Middle East, and the EU, fostering trade and export, as well as the influence of the partner countries in this region. The project also includes laying high-speed data cables from India to Europe and providing internet access throughout the region.


G7 to develop an international code of conduct for AI

In seeking a unified approach to AI, the G7 countries have agreed to create an international code of conduct for AI. According to the G7 statement, the process is expected to result in a nonbinding international rulebook setting principles for the oversight of advanced AI and guidelines for controlling the use of AI technology. The code of conduct is to be presented to G7 leaders at the beginning of November.

Why is this relevant?

The G7 code of conduct for AI would require companies to take responsibility for the AI systems they create and for their potential societal harms, and to put cybersecurity and risk management systems in place to mitigate AI-related risks from development through implementation. The code aims to guide the development of regulatory and governance regimes, coinciding with the ongoing adoption process of the EU AI Act and the voluntary commitments secured in the USA in July.

Civil society issues a statement on EU’s AI Act loophole

More than 115 civil society organisations are calling on EU legislators to close a loophole in the draft AI Act, which is set to be adopted by the end of the year. In a joint statement, civil society calls for changes to the high-risk classification process in Article 6, asking the legislators to revert to the original wording and ensure that the rights of people affected by AI systems are prioritised.

As per the current wording of Article 6, the regulation would allow ‘the developers of high-risk systems to decide themselves if they believe the system is “high-risk”’. As a result, the very company subject to the law is given the power to decide whether the law applies to it. The changes that created this loophole were introduced as a result of lobbying efforts by tech companies.

Why is this relevant?

In its original form, the draft AI Act outlined a list of ‘high-risk uses’ of AI, including AI systems used to monitor students, assess consumers’ creditworthiness, evaluate job-seekers, and determine who gets access to welfare benefits. The legislation would require developers and deployers of such high-risk AI to ensure that their systems are safe and free from discriminatory bias and to provide publicly accessible information about how their systems work.

Pressure builds to legislate on AI in the US

On the other side of the Atlantic, the Biden administration is under increased pressure to require government agencies to comply with the AI Bill of Rights. More than 60 organisations have called for the AI Bill of Rights to be made binding policy for US federal government agencies, contractors, and grantees to ensure guardrails and protections against algorithmic abuse.

The USA has taken a comparatively hands-off approach to AI regulation so far. However, the calls to legislate have now materialised in a bipartisan AI legislative effort. The heads of the Senate Judiciary Subcommittee on Privacy, Technology, and Law, Sen. Richard Blumenthal (D-CT) and Sen. Josh Hawley (R-MO), announced a framework to regulate AI. According to The Hill, ‘The framework calls for establishing a licensing regime administered by an independent oversight body. It would require companies that develop AI models to register with the oversight authority, which would have the power to audit the companies seeking licenses.’ It also calls for Congress to clarify that Section 230 of the Communications Decency Act, which shields tech companies from legal consequences of content posted by third parties, does not apply to AI.

Why is this relevant?

The latest push to legislate AI would put in place a binding framework for companies (through the AI framework) and for the US federal government (through a binding AI Bill of Rights), providing transparency, protecting consumers and children, and defending national security. The framework’s announcement comes days before the AI Senate Forum scheduled for 13 September. The initiative of Senate Majority Leader Chuck Schumer (D-NY) will bring top executives of the biggest tech companies together in an ‘Insight Forum’ and aims to supplement the work already underway on AI regulation.

UNESCO releases guidance on AI in education

Another call to regulate generative AI, this time in schools, comes from UNESCO with its Guidance for Generative AI in Education and Research. According to the guidance, governments should take steps without delay to safeguard data privacy and implement age restrictions (a minimum age of 13 years) for users.

Why is this relevant?

Most educational institutions worldwide currently face the dilemma of implementing and overseeing AI in educational processes. Beyond the question of whether AI should be prohibited and how it should be regulated, current generative AI models, such as ChatGPT, are trained on data from online users that mostly reflects the values and dominant social norms of the Global North and may therefore widen the digital divide.



European Commission designates six companies as gatekeepers under the DMA

The European Commission has designated six major tech companies (Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft) as gatekeepers under the Digital Markets Act (DMA), concluding a 45-day review process. The designation covers a total of 22 core platform services provided by these companies.

These companies must ensure full compliance with the relevant obligations under the DMA by 6 March 2024, related, for example, to data use, favouring of their own products or services, pre-installation of applications and default settings, and interoperability.

In the event a gatekeeper breaches the rules under the DMA, it risks being fined up to 10% of its total global annual turnover. This can be increased to up to 20% of the total annual turnover in case of repeated offences.

Why is this relevant?

The DMA is directly applicable in EU member states. Third parties may invoke the rights and obligations stemming from the DMA directly in their national courts.

Google faces trial on market dominance

In the first trial on a Big Tech monopoly since 1998, the US Department of Justice (DOJ) and a bipartisan group of attorneys general from 38 states and territories have commenced a trial against Google on the question of whether it abused its dominant position in online search. Filed three years ago, the case, U.S. et al. v. Google, alleges that Google used its 90% market share to illegally throttle competition in both search and search advertising. The trial is seen as pivotal for two reasons: it moves beyond challenging Big Tech’s mergers and acquisitions to examine their business models, and it is the first such case brought by the DOJ since 1998, when it successfully argued that Microsoft had monopolised the personal computer market.

In a related development, Google has agreed to pay USD 2 billion in a tentative settlement with 50 US states over an alleged app store monopoly.

Why is this relevant?

The outcome of this case will set a precedent for Big Tech on their business practices and their contribution to market dominance. The trial is set to last ten weeks.

The week ahead (11–18 September)

12–15 September: WTO Public Forum 2023 (Geneva), with a launch of the Digital and Sustainable Trade Facilitation Global Report 2023: State of Play and Way Forward on 15 September 

11 September–13 October: The 54th session of the Human Rights Council

14–15 September: Global Cyber Conference 2023

18–19 September: SDG Summit 2023


The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0

The RAND Corporation published a report on the impacts of generative AI on social media manipulation and national security risks. While the authors focus on China as an example of this potential threat, many actors could use generative AI for social media manipulation, including technically sophisticated non-state actors. Read the full report.


Digital Government Review of Latin America and the Caribbean

The OECD report analyses how governments in Latin America and the Caribbean could use digital technology and data to foster responsiveness, resilience, and proactivity in the public sector. Looking at governance frameworks, digital government capabilities, data-driven public sector, public service design and delivery, and digital innovation in the public sector, it provides policy recommendations. Read the full report.

Pavlina Ittelson – Author
Executive Director, Diplo US
Virginia Paque – Editor
Senior Editor Digital Policy, DiploFoundation