
Digital Watch newsletter – Issue 80 – June 2023


AI is the name of the game

In the spirit of the front-page illustration depicting the grand game of addressing AI for the future of humanity, an essential question arises: Who holds the dice? Is it mere coincidence, the divine, or vested interests?


In May, AI dominated global discussions and media coverage, featuring on the agendas of meetings and parliamentary debates. What’s behind the hype?

First, there are loud warnings that AI threatens the very survival of humanity.

Second, warnings of existential risks are typically paired with calls to regulate future AI development. In a new dynamic, businesses are asking to be regulated. OpenAI CEO Sam Altman emphasised the crucial role of government in regulating AI and advocated for establishing a governmental or global AI agency to oversee the technology. This regulatory body would require companies to obtain licences before training powerful AI models or operating data centres that facilitate AI development. Doing so would hold developers to safety standards and help establish consensus on the standards and risks that require mitigation. In parallel, Microsoft has published a comprehensive blueprint for governing AI, with Microsoft President Brad Smith also advocating, in his foreword, for creating a new government agency to enforce new AI rules.

Third, governments from developed countries are responding positively to the idea of regulating the future development of AI.

Fourth, there are growing voices saying that regulation supported by an existential threat narrative aims to block open-source AI developments and concentrate AI power in the hands of just a few leaders, mainly OpenAI/Microsoft and Google.

Regardless of the motivation behind AI regulation, we can identify a few echoing topics for regulation: privacy violations, bias, the proliferation of scams, misinformation, and the protection of intellectual property, among others. However, not all regulators share the same focal points. Here’s a snapshot of what jurisdictions worldwide expressed in May 2023 regarding their desire to regulate AI and their proposed approaches.

The EU. The world’s first rulebook for AI is, unsurprisingly, being shaped by the regulatory behemoth that is the EU. The bloc is taking a risk-based approach to AI, establishing obligations for AI providers and users based on the level of risk posed by the AI systems. It also introduces a tiered approach for regulating general-purpose AI, and foundation and generative AI models. The draft rules need to be endorsed in the Parliament’s plenary, which is expected to happen during the 12–15 June session. Then, negotiations with the Council on the law’s final form can begin. 

The USA. US government officials met with Alphabet, Anthropic, Microsoft, and OpenAI CEOs and discussed three key areas: the transparency, evaluation, and security of AI systems. The White House and top AI developers will collaborate to evaluate generative AI systems for potential flaws and vulnerabilities, such as confabulations, jailbreaks, and biases. The USA is also evaluating AI’s impact on the workforce, education, consumers, and the risks of biometric data misuse.

The UK. Another government that will collaborate with the industry is the UK: Prime Minister Rishi Sunak has met with the CEOs of OpenAI, Google DeepMind, and Anthropic to discuss the risks AI can pose, from disinformation and threats to national security to existential risks. The CEOs agreed to work closely with the UK’s Foundation Model Taskforce to advance AI safety. The UK also focuses on AI-related election risks and the impact of AI foundation models on competition and consumer protection. The UK will seemingly keep its sectoral approach to AI, with no general AI regulation planned.

China. The Cyberspace Administration of China (CAC) raised concerns over advanced technologies such as generative AI, noting that they could seriously challenge governance, regulation and the labour market. The country has also called for improving the security governance of AI. In April, the CAC proposed measures for regulating generative AI services, which specify that providers of such services must ensure that their content aligns with China’s core values. Prohibited content includes discrimination, false information, and infringement of intellectual property rights (IPR). Tools utilised in generative AI services must undergo a security assessment before launch. The measures were open for comments until 2 June, meaning we will see the outcome soon.

Australia. Australia is concerned with AI risks such as deepfakes, misinformation, disinformation, encouragement of self-harm, and algorithmic bias. The country is currently seeking opinions on whether it should support the development of responsible AI through voluntary approaches (such as tools, frameworks, and principles) or enforceable regulatory approaches (such as laws and mandatory standards).

South Korea. The country’s AI Act is only a few steps away from the National Assembly’s final vote. It would allow AI development without government pre-approval, categorise high-risk AI and set trustworthiness standards, support innovation in the AI industry, establish ethical guidelines, and create a Basic Plan for AI and an AI Committee overseen by the prime minister. The government also announced it would create new guidelines and standards for copyrights of AI-generated content by September 2023.

Japan. The Japanese government aims to promote and strengthen domestic capabilities to develop generative AI while addressing AI risks such as copyright infringement, exposure of confidential information, false information, and cyberattacks, among other concerns.

Italy. Italy temporarily banned ChatGPT over GDPR violations in March. ChatGPT has returned to Italy after OpenAI revised its privacy disclosures and controls, but Garante, the data protection authority of Italy, is intensifying its scrutiny of AI systems for adherence to privacy laws.

France. French privacy watchdog CNIL launched an AI Action Plan to promote a framework for developing generative AI, which upholds personal data protection and human rights. The framework is based on four pillars: (a) understanding AI’s impact on fairness, transparency, data protection, bias, and security; (b) developing privacy-friendly AI through education and guidelines; (c) collaborating with AI innovators for data protection compliance; and (d) auditing and controlling AI systems to safeguard individuals’ rights, including addressing surveillance, fraud, and complaints.

India. The government is considering a regulatory framework for AI-enabled platforms due to concerns such as IPR, copyright, and algorithm bias, but is looking to do so in conjunction with other countries.

International efforts. Ahead of the EU’s planned AI Act, the European Commission and Google plan to join forces ‘with all AI developers’ to develop a voluntary AI pact. OpenAI’s Altman is also set to meet EU officials about the pact.

The EU and the USA will jointly prepare an AI code of conduct to foster public trust in the technology. The voluntary code ‘would be open to all like-minded countries,’ US Secretary of State Antony Blinken stated.

Additionally, the G7 has agreed to launch a dialogue on generative AI – including issues such as governance, disinformation, and copyright – in cooperation with the Organisation for Economic Co-operation and Development (OECD) and the Global Partnership on AI (GPAI). Ministers will hold these discussions under the banner of the ‘Hiroshima AI process’ and report results by the end of 2023. The G7 leaders have also called for developing and adopting technical standards to ensure the trustworthiness of AI.

So, who holds the dice?
It’s not clear yet. We hope it will be citizens who hold the dice. Regulatory efforts aside, one way to ensure that individuals remain in charge of their knowledge, even when it is codified by AI, is through bottom-up AI. This would mitigate the risk of centralisation of power inherent in large generative AI platforms. In addition, bottom-up AI is typically based on an open-source and transparent approach that can mitigate most safety and security risks related to centralised AI platforms. Many initiatives, including the development of Diplo’s own AI tools, have proven that bottom-up AI is technically feasible and economically viable. There are many reasons to adopt bottom-up AI as a practical way to foster a new societal operating system built around the centrality, dignity, free will, and creative potential of human beings.

Dr Jovan Kurbalija, Director of DiploFoundation, explains why bottom-up AI is critical for our future.

Digital policy developments that made global headlines

The digital policy landscape changes daily, so here are all the main developments from May. There’s more detail in each update on the Digital Watch Observatory.        

Global digital governance architecture


World Telecommunication and Information Society Day was observed on 17 May with calls by UN officials to bridge the digital divide, support digital public goods, and establish a Global Digital Compact (GDC).
The Fourth EU-US ministerial meeting of the Trade and Technology Council (TTC) covered AI risks, content regulation, digital identities, semiconductors, quantum technologies, and connectivity projects.

Sustainable development


To bridge the digital gender gap by 2030, 100 million more women must embrace mobile internet annually, a GSMA report found.

The EU Commission and WHO launched a landmark digital health initiative to establish a comprehensive global network for digital health certification.
Papua New Guinea rolled out a platform for managing digital IDs.
The Maldives introduced a digital ID mobile app for streamlined access to government services.
Evrotrust’s eID program became Bulgaria’s official digital ID system.

 

Security


A Chinese report claims to have identified five methods the CIA uses to launch colour revolutions abroad and nine methods it uses as weapons for cyberattacks.

The Five Eyes cyber agencies attributed cyberattacks on US critical infrastructure to the Chinese state-sponsored hacking group Volt Typhoon, which China has denied. The FBI disrupted a Russian cyberespionage operation dubbed Snake. The governments of Colombia, Senegal, Italy, and Martinique suffered cyberattacks.

The USA and South Korea issued a joint advisory warning that North Korea is using social engineering tactics in cyberattacks.
NATO has warned of a potential Russian threat to internet cables and gas pipelines in Europe or North America.

Infrastructure


The Body of European Regulators for Electronic Communications (BEREC) and the majority of EU countries are against a push by telecom providers to get Big Tech to contribute to the cost of the rollout of 5G and broadband in Europe.
Tanzania has signed agreements to extend telecommunications services to 8.5 million individuals in rural areas.

 


Digital rights


South Korea proposed changes to its Personal Information Protection Act to strengthen consent requirements, unify online/offline data processing standards, and establish criteria for assessing violations.

The 2023 World Press Freedom Index reveals that journalism is threatened by the fake content industry and rapid AI development.
Internet shutdowns were reported in Pakistan in the wake of the arrest of the former prime minister, and in Sudan amid protests over the sentencing of an opposition leader, while social media was restricted in Guinea over protests.

 

Content policy


US Supreme Court rulings in Gonzalez v. Google, LLC and Twitter, Inc. v. Taamneh maintained Section 230 protections for online platforms.

Google and Meta threatened to block links to Canadian news sites if a bill requiring internet platforms to pay publishers for their news is passed. 

Austria banned the use of TikTok on federal government officials’ work phones.
The Digital Public Goods Alliance (DPGA) and UNDP announced nine innovative open-source solutions to address the global information crisis.
The EU called for clear labelling of AI-generated content to combat disinformation.
While Twitter pulled out of the EU’s voluntary code of practice on disinformation, it must still comply with the Digital Services Act when operating in the EU.

Jurisdiction and legal issues


Apple faces investigation in France over complaints that it intentionally causes its devices to become obsolete to compel users to purchase new ones. 
Meta was fined €1.2bn in Ireland for mishandling user data and its continued transfer of data to the USA in violation of an EU court ruling.

 

Technologies


A Chinese WTO representative has criticised the USA’s semiconductor industry subsidies, calling them an attempt to stymie China’s technological progress. South Korea asked the USA to review its rule barring China and Russia from using US funds for chip manufacturing and research.

The USA is considering investment restrictions on Chinese chips, AI, and quantum computing to curb the flow of capital and expertise. 
Australia has released a new National Quantum Strategy. China has launched a quantum computing cloud platform for researchers and the public.


UN Secretary-General’s policy brief for GDC

The UN Secretary-General has issued a policy brief with suggestions on how a Global Digital Compact (GDC) could help advance digital cooperation. The GDC is to be agreed upon in the context of the Summit of the Future in 2024 and is expected to ‘outline shared principles for an open, free and secure digital future for all’. Here is a summary of the brief’s main points.

The brief outlines areas where ‘the need for multistakeholder digital cooperation is urgent’: closing the digital divide and advancing SDGs, making the online space open and safe for everyone, and governing AI for humanity. It also suggests objectives and actions for advancing digital cooperation, structured around eight topics proposed to be covered by the GDC.

Digital connectivity and capacity building. The aim is to bridge the digital divide and empower individuals to participate fully in the digital economy. Proposed actions include setting universal connectivity targets and enhancing public education for digital literacy.

Digital cooperation for SDG progress. Objectives involve targeted investments in digital infrastructure and services, ensuring representative and interoperable data, and establishing globally harmonised digital sustainability standards. Proposed actions include defining safe and inclusive digital infrastructures, fostering open and accessible data ecosystems, and developing a common blueprint for digital transformation.

Upholding human rights. The focus is on placing human rights at the core of the digital future, addressing the gender digital divide, and protecting workers’ rights. A key proposed action is establishing a digital human rights advisory mechanism facilitated by the Office of the UN High Commissioner for Human Rights.

Inclusive, open, secure, and shared internet. Objectives include preserving the free and shared nature of the internet and reinforcing accountable multistakeholder governance. Proposed actions involve commitments from governments to avoid blanket internet shutdowns and disruptions to critical infrastructures.

Digital trust and security. Objectives range from strengthening multistakeholder cooperation to developing norms, guidelines, and principles for responsible digital technology use. Proposed actions include creating common standards and industry codes of conduct to address harmful content on digital platforms.

Data protection and empowerment. Objectives include governing data for the benefit of all, empowering individuals to control their personal data, and establishing interoperable standards for data quality. Proposed actions include encouraging countries to adopt a declaration on data rights and seeking convergence on principles for data governance through a Global Data Compact.

Agile governance of AI and emerging technologies. Objectives involve ensuring transparency, reliability, safety, and human control in AI design and use, and prioritising transparency, fairness, and accountability in AI governance. Proposed actions range from establishing a high-level advisory body for AI to building regulatory capacity in the public sector.

Global digital commons. Objectives include inclusive digital cooperation, sustained exchanges across states and sectors, and responsible development of technologies for sustainable development and empowerment.

Implementation mechanisms

The policy brief proposes numerous implementation mechanisms. The most notable is an annual Digital Cooperation Forum (DCF) to be convened by the Secretary-General to facilitate collaboration across digital multistakeholder frameworks, reduce duplication, promote cross-border learning in digital governance, and identify policy solutions for emerging digital challenges and governance gaps. The document further notes that ‘the success of a GDC will rest on its implementation’ at national, regional, and sectoral levels, supported by platforms like the Internet Governance Forum (IGF) and the World Summit on the Information Society (WSIS) Forum. The brief suggests establishing a trust fund to sponsor a Digital Cooperation Fellowship Programme to enhance multistakeholder participation.

Read more about the Global Digital Compact.

Policy updates from International Geneva

Intergovernmental Group of Experts on E-commerce and the Digital Economy, sixth session | 10–12 May

The main objective of this intergovernmental group of experts is to enhance UNCTAD’s work on information and communications technologies, e-commerce, and the digital economy, so that developing nations can participate in and benefit from the ever-changing digital economy. The group also works to bridge the digital divide and promote the development of inclusive knowledge societies. The sixth session focused on two main agenda items: how to make data work for the 2030 Agenda for Sustainable Development, and the Working Group on Measuring E-commerce and the Digital Economy.


2023 Group of Governmental Experts (GGE) on emerging technologies in the area of lethal autonomous weapons systems (LAWS), second session | 15–19 May

The second session of the GGE on LAWS convened in Geneva to ‘intensify the consideration of proposals and elaborate, by consensus, possible measures’ in the context of the Convention on Certain Conventional Weapons (CCW) while bringing in legal, military, and technological expertise.

In the advance version of the final report (CCW/GGE.1/2023/2), the GGE concluded that when characterising weapon systems built on emerging technologies in the area of LAWS, it is crucial to consider the potential future developments of these technologies. The group also affirmed that states must ensure compliance with international humanitarian law throughout the life cycle of such weapon systems. States should limit the types of targets the weapon systems can engage, as well as the duration and scope of their operations, and must give adequate training to human operators. Where a weapon system based on technologies in the area of LAWS cannot comply with international law, it must not be deployed.


The 76th World Health Assembly | 21–30 May

The 76th World Health Assembly (WHA) invited delegates of its 194 member states to Geneva to confer on the organisation’s priorities and policies under the theme ‘WHO at 75: Saving lives, driving health for all’. A series of roundtables took place where delegates, partner agencies, representatives of civil society, and WHO experts deliberated on current and future public health issues of global importance. On 23 May, Committee B elaborated on the progress reports (A76/37) highlighting the implementation of the ‘global strategies on digital health’ agreed at the 73rd WHA. Since the endorsement of the strategies in 2020, the WHA Secretariat, together with development partners and other UN agencies, has trained over 1,600 government officials in more than 100 member states in digital health and AI. The secretariat has also launched numerous initiatives for knowledge dissemination and national developments related to digital health strategies. From 2023 to 2025, the secretariat will continue facilitating the coordinated actions set out in the global strategies while prioritising member states’ needs.

What to watch for: Global digital policy events in June

5–8 June | RightsCon (San José, Costa Rica and online)

The 12th annual RightsCon will discuss global developments related to digital rights across 19 tracks: access and inclusion; AI; business, labour, and trade; conflict and humanitarian action; content governance and disinformation; cyber norms and encryption; data protection; digital security for communities; emerging tech; freedom of the media; futures, fictions, and creativity; governance, politics, and elections; human rights-centred design; justice, litigation, and documentation; online hate and violence; philanthropy and organisational development; privacy and surveillance; shutdowns and censorship; and tactics for activists.


12–15 June 2023 | ICANN 77 Policy Forum (Washington, DC, USA)

The Policy Forum is the second meeting in the three-meeting annual cycle. The focus of this meeting is the policy development work of the Supporting Organizations and Advisory Committees and regional outreach activities. ICANN aims to ensure an inclusive dialogue that provides equal opportunities for all to engage on important policy matters.


13 June 2023 | The Swiss Internet Governance Forum 2023 (Bern, Switzerland and online)

This one-day event will focus on topics such as the use and regulation of AI, especially in the context of education; protecting fundamental rights in the digital age; responsible data management; platform influence; democratic practices; responsible use of new technologies; internet governance; and the impact of digitalisation on geopolitics.


15–16 June 2023 | Digital Assembly 2023 (Arlanda, Sweden and online)

Organised by the European Commission and the Swedish Presidency of the Council of the European Union, this assembly will be themed: A Digital, Open and Secure Europe. The conference programme includes five plenary sessions, six breakout sessions, and three side events. The main discussion topics will be digital innovation, cybersecurity, digital infrastructure, digital transformation, AI, and quantum computing.


19–21 June 2023 | EuroDIG 2023 (Tampere, Finland and online)

EuroDIG 2023 will be held under the overarching theme of Internet in troubled times: Risks, resilience, hope. In addition to the conference, EuroDIG hosts YOUthDIG, a yearly pre-event that fosters the active participation of young people (ages 18–30) in internet governance. The GIP will once again partner with EuroDIG to deliver updates and reports from the conference using DiploGPT.


DiploGPT reported from the UN Security Council meeting

In May, Diplo used AI to report from the UN Security Council session ‘Futureproofing trust for sustaining peace’. DiploGPT provided automatic reporting that produced a summary report, an analysis of individual submissions, and answers to the questions posed by the chair of the meeting. DiploGPT combines various algorithms and AI tools customised to the needs of the UN and diplomatic communications.
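For readers curious about how such automated reporting can be structured, here is a minimal sketch of the general pattern – per-statement analysis, an overall summary, and answers to the chair’s questions. It is not DiploGPT’s actual implementation; the model choice, prompts, helper function ask(), and the sample transcript are illustrative assumptions only, using the OpenAI 0.x Python SDK.

```python
# Minimal, hypothetical sketch of an automated meeting-reporting pipeline.
# NOT DiploGPT's actual implementation; model name, prompts, and sample data
# are assumptions made purely for illustration.
import openai  # pip install "openai<1.0" for the 0.x interface used here

openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the user


def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return the text of the reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep the reporting factual rather than creative
    )
    return response["choices"][0]["message"]["content"]


# Hypothetical inputs: speaker -> full statement text, plus the chair's questions.
statements = {
    "Delegation A": "...text of statement A...",
    "Delegation B": "...text of statement B...",
}
chair_questions = ["...question posed by the chair..."]

# 1. Analyse each submission individually.
analyses = {
    speaker: ask(f"Summarise the key points and positions in this statement:\n{text}")
    for speaker, text in statements.items()
}

# 2. Produce an overall summary report from the per-speaker analyses.
summary = ask(
    "Write a concise summary report of the meeting based on these analyses:\n\n"
    + "\n\n".join(f"{speaker}: {analysis}" for speaker, analysis in analyses.items())
)

# 3. Answer the chair's questions using the summary as context.
answers = [
    ask(f"Using the meeting summary below, answer the chair's question.\n"
        f"Question: {question}\n\nSummary:\n{summary}")
    for question in chair_questions
]

print(summary)
print(answers)
```

The three steps mirror the outputs described above (individual analyses, a summary report, and answers to the chair’s questions); a production system would add transcript ingestion, quality checks, and human review.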


The Digital Watch Observatory maintains a live calendar of upcoming and past events.