Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping was momentous not so much for what was said (see outcomes further down), but for the fact that it happened at all.
Over the weekend, the news of Sam Altman’s ousting from OpenAI caused quite a stir. He didn’t need to wait long to find a new home: Microsoft.
Lots more happened, so let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Biden-Xi Summit cools tensions after long tech standoff
Last week’s meeting between US President Joe Biden and Chinese President Xi Jinping, in San Francisco on the sidelines of the Asia-Pacific Economic Cooperation’s (APEC) Leaders’ Meeting, marked a significant step towards reducing tensions between the two countries.
Implications for tech policy. Tensions, especially over technology, have been escalating for months and years. For instance, in August, the US government issued a new executive order banning several Chinese-owned apps and software products from its market. The order was met with some trepidation by tech companies operating in both countries, as it was unclear how it would affect their businesses. But now, after Biden and Xi’s meeting, there is hope that tensions between the two countries will ease and that this softening will cover many aspects, including tech cooperation and policy. At least, so we hope.
Responsible competition. Prior to their closed-door meeting, the two leaders pragmatically acknowledged that the USA and China have contrasting histories, cultures, and social systems. Yet, President Xi said, ‘As long as they respect each other, coexist in peace, and pursue win-win cooperation, they will be fully capable of rising above differences and find the right way for the two major countries to get along with each other’. Biden earlier had said, ‘We have to ensure that competition does not veer into conflict. And we also have to manage it responsibly.’
State meeting with Xi, Biden, and staff. Credit: @POTUS on X.
Cooperation on AI. Among other topics, the two presidents agreed on the need ‘to address the risks of advanced AI systems and improve AI safety through US-China government talks,’ the post-summit White House readout said. It’s unclear what this means exactly, given that both China and the USA have already introduced the first elements of an AI framework. The fact that they brought this up, however, means that the USA certainly wants to stop any trace of AI technology theft in its tracks. But what’s in it for China?
US investment. A high-level diplomat suggested to Bloomberg that Xi’s address asking US executives to invest more in China was a signal that China needs US capital because of mistakes at home that have hurt China’s economic growth. If US Ambassador to Japan Rahm Emanuel is right, that explains why cooperation is a win-win outcome.
Tech exports. There’s a significant ‘but’ to the appearance of a thaw. Cooperation will continue as long as advanced US technologies are not used by China to undermine US national security. The readout continued: ‘The President emphasised that the United States will continue to take necessary actions’ to prevent this from happening, at the same time ‘without unduly limiting trade and investment’.
Unreported? Undoubtedly, there were other undisclosed topics discussed by the two leaders during their private meeting. For instance, what happened to the ‘likely’ deal on banning AI from autonomous weapon systems, including drones, which a Chinese embassy official hinted at before the meeting and on which the USA took a new political stand just two days prior?
Although it is too early to see any significant positive ripple effects from the meeting, we’ll let the fact that Biden and Xi met face to face sink in a little bit. After all, as International Monetary Fund managing director Kristalina Georgieva told Reuters, the meeting was a badly needed signal that the world can cooperate more.
Digital policy roundup (13–20 November)
// AI //
Sam Altman ousted from OpenAI, joins Microsoft
Sam Altman, the CEO of OpenAI, who was fired on Friday in a surprise move by the company’s board, will now be joining Microsoft. Altman will lead a new AI innovation team at Microsoft, CEO Satya Nadella announced today (Monday). Fellow OpenAI co-founder Greg Brockman, who was removed from the board, will also join Microsoft.
Although Twitch co-founder Emmett Shear has been appointed as interim CEO, OpenAI’s future is far from stable: A letter signed by over 700 OpenAI employees has demanded the resignation of the board and the reinstatement of Altman (which might not even be possible at this stage).
Why is it relevant? First, Altman was the driving force behind the company – and its technology – which pushed the boundaries in AI and machine learning in such a short and impactful time. More than that, Altman was OpenAI’s main fundraiser; the new CEO will have big shoes to fill. Second, Microsoft has been a major player in the world of AI for many years; Altman’s move will further increase Microsoft’s already significant influence in this field. Third, tech companies can be as volatile as stock markets.
Sam Altman shows off an OpenAI badge, which he said was the last time he would ever wear it.
US Senate’s new AI bill to make risk assessments and AI labels compulsory
A group of US senators have introduced a bill to establish an AI framework for accountability and certification based on two categories of AI systems – high-impact and critical-impact ones. The AI Research, Innovation, and Accountability Act of 2023 – or AIRIA – will also require internet platforms to implement a notification mechanism to inform the users when the platform is using generative AI.
Joint effort. Under the bill, introduced by members of the Senate Commerce Committee, the National Institute of Standards and Technology (NIST) will be tasked with developing risk-based guidelines for high-impact AI systems. Companies using critical-impact AI will be required to conduct detailed risk assessments and comply with a certification framework established by independent organisations and the Commerce Department.
Why is it relevant? The bipartisan AIRIA is the latest US effort to establish AI rules, closely following President Biden’s Executive Order on Safe, Secure, and Trustworthy AI. It’s also the most comprehensive AI legislation introduced in the US Congress to date.
// IPR //
Music publishers seek court order to stop Anthropic’s AI models from training on copyrighted lyrics
A group of music publishers have asked a US federal judge to block AI company Anthropic from reproducing or distributing their copyrighted song lyrics. The publishers also want the AI company to implement effective measures that would prevent the copyrighted lyrics from being used to train future AI models.
The publishers’ request is part of a lawsuit they filed on 18 October. The case continues on 29 November.
Why is it relevant? First, although the lawsuit is not new, the music publishers’ request for a preliminary injunction shows how impatient copyright holders are with AI companies allegedly using copyrighted materials. Second, the case raises once more the issue of fair use: In a letter to the US Copyright Office last month, Anthropic argued that its models use copyrighted data only for statistical purposes and not for copying creativity.
Case details: Concord Music Group, Inc. v Anthropic PBC, District Court, M.D. Tennessee, 3:23-cv-01092.
The team behind Amazon’s Project Kuiper satellite network has successfully tested the prototype satellites launched on 6 October. Watch this video to see the Project Kuiper team testing a two-way video call from an Amazon site in Texas. The next step is to start mass-producing the satellites for deployment in 2024.
A number of tech companies are challenging the European Commission’s designation of them as digital gatekeepers, which brings them within the scope of the new Digital Markets Act. Among the companies:
Meta (Case T-1078/23): The company disagrees with the Commission’s decision to designate its Messenger and Marketplace services under the new law, but does not challenge the inclusion of Facebook, WhatsApp, or Instagram.
Apple (Cases T-1079/23 & T-1080/23): Details aren’t public but media reports said the company was challenging the inclusion of its App Store on the list of gatekeepers.
TikTok (Case T-1077/23): The company said its designation risked entrenching the power of dominant tech companies.
Microsoft and Google decided not to challenge their gatekeeper status.
Why is it relevant? The introduction of the Digital Markets Act has far-reaching implications for the operations of tech giants. These legal challenges are the first attempts to block its effective implementation. The outcomes of the cases could establish a precedent for the future regulation of digital markets in the EU.
The week ahead (20–27 November)
20 November–15 December: The ITU’s World Radiocommunication Conference, which starts today (Monday) in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.
21–23 November: The 8th European Cyber Week (ECW) will be held in Rennes, France, and will bring together cybersecurity and cyber defence experts from the public and private sectors.
27–29 November: The 12th UN Forum on Business and Human Rights will be held in a hybrid format next week to discuss effective change in implementing obligations, responsibilities, and remedies.
#ReadingCorner
Copyright lawsuits: Who’s really protected?
Microsoft, OpenAI, and Adobe are all promising to defend their customers against intellectual property lawsuits, but that guarantee doesn’t apply to everyone. Plus, those indemnities are narrower than the announcements suggest. Read the article.
Guarding artistic creations by polluting data
Data poisoning is a technique used to protect copyrighted artwork from being used by generative AI models. It involves imperceptibly changing the pixels of digital artwork in a way that ‘poisons’ any AI model that ingests it for training, rendering the model functionally useless. While it has primarily been used by content creators against web scrapers, it has many other uses. However, data poisoning is not straightforward: polluting datasets effectively requires a targeted approach. Read the article.
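For the curious, the core idea can be sketched in a few lines of code. This is a deliberately simplified toy, not the algorithm used by any real tool: it nudges each pixel of an artwork a tiny, imperceptible step towards a hypothetical ‘decoy’ image, so that a model training on many such images learns misleading features. The `poison_pixels` function, the pixel values, and the decoy are all illustrative assumptions.

```python
# Toy sketch of the idea behind data poisoning: shift each greyscale pixel
# of an artwork by at most `epsilon` steps towards a decoy image. The change
# is invisible to a viewer but, applied consistently across many images,
# skews the statistics a model learns during training.

def poison_pixels(artwork, decoy, epsilon=2, max_value=255):
    """Return a copy of `artwork` with each pixel moved at most `epsilon`
    steps towards the corresponding pixel of `decoy` (values 0-255)."""
    poisoned = []
    for art_px, decoy_px in zip(artwork, decoy):
        step = max(-epsilon, min(epsilon, decoy_px - art_px))  # clamp the shift
        poisoned.append(max(0, min(max_value, art_px + step)))
    return poisoned

artwork = [120, 121, 119, 200, 35]  # original pixel values (hypothetical)
decoy = [10, 240, 119, 198, 90]     # decoy the perturbation steers towards

print(poison_pixels(artwork, decoy))  # → [118, 123, 119, 198, 37]
```

Real tools work in a model’s feature space rather than directly on raw pixels, which is what makes the targeted approach mentioned above necessary.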
The US Department of Commerce (DoC) Bureau of Industry and Security (BIS) announced a tightening of export restrictions on advanced semiconductors to China and other nations subject to arms embargoes. This decision has elicited a strong reaction from China, which labelled the measures ‘unilateral bullying’ and an abuse of export control mechanisms.
Further complicating the US-China tech landscape, there are discussions within the US government about imposing restrictions on Chinese companies’ access to cloud services. If implemented, this move could have significant consequences for both nations, particularly impacting major players like Amazon Web Services and Microsoft. Finally, Canada has banned Chinese and Russian software from government-issued devices, citing security concerns.
AI governance
In other developments, a leaked draft text suggests that Southeast Asian countries, under the umbrella of the Association of Southeast Asian Nations (ASEAN), are adopting a business-friendly approach to AI regulation. The draft guide to AI ethics and governance asks companies to consider cultural differences and doesn’t prescribe categories of unacceptable risk. Meanwhile, Germany has introduced an AI action plan intended to advance AI development at the national and European levels and to compete with the predominant AI powers, the USA and China.
Read more on AI governance below.
Security
The heads of security agencies from the USA, the UK, Australia, Canada, and New Zealand, collectively known as the Five Eyes, have publicly cautioned about China’s widespread espionage campaign to steal commercial secrets. The European Commission has announced a comprehensive review of security risks in vital technology domains, including semiconductors, AI, quantum technologies, and biotechnologies. ChatGPT faced outages on 8 November, believed to be a result of a distributed denial-of-service (DDoS) attack. Hacktivist group Anonymous Sudan claimed responsibility. Finally, Microsoft’s latest Digital Defense Report revealed a global increase in cyberattacks, with government-sponsored spying and influence operations on the rise.
Infrastructure
The US Federal Communications Commission (FCC) voted to initiate the process of restoring net neutrality rules. Initially adopted in 2015, these rules were repealed under the previous administration but are now poised for reinstatement.
Alphabet’s Google reportedly paid a substantial sum of USD 26.3 billion to other companies in 2021 to ensure its search engine remained the default on web browsers and mobile phones. This was revealed during the US Department of Justice’s (DoJ) antitrust trial. Citing similar anticompetitive concerns, the Japan Fair Trade Commission (JFTC) has opened an antimonopoly investigation into Google’s dominance in web search.
The European Central Bank (ECB) has decided to commence a two-year preparation phase starting 1 November 2023, to finalise regulations and select private-sector partners before the possible launch of a digital version of the euro. Implementation could follow, pending a green light from policymakers. In parallel, the European Data Protection Board (EDPB) has called for enhanced privacy safeguards in the European Commission’s proposed digital euro legislation.
Digital rights
The council presidency and the European Parliament have reached a provisional agreement on a new framework for a European digital identity (eID) to provide all Europeans with a trusted and secure digital identity. Under the new agreement, member states will provide citizens and businesses with digital wallets that link their national digital identities with other personal attributes, such as driver’s licences and diplomas.
The European Parliament’s Internal Market and Consumer Protection Committee has passed a report warning of the addictive nature of certain digital services, advocating tighter regulations to combat addictive design in digital platforms. On a similar note, the European Data Protection Board has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram.
Key political groups in the European Parliament have reached a consensus on draft legislation compelling internet platforms to detect and report child sexual abuse material (CSAM) to prevent its dissemination on the internet.
Content policy
Meta, the parent company of Facebook and Instagram, is confronting a legal battle initiated by over 30 US states. The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13.
The EU has formally requested details on anti-disinformation measures from Meta and TikTok. Against the backdrop of the Middle East conflict, the EU emphasises the risks associated with the widespread dissemination of illegal content and disinformation.
The UK’s Online Safety Act, imposing new responsibilities on social media companies, has come into effect. This law aims to enhance online safety and holds social media platforms accountable for their content moderation practices.
Development
The Gaza Strip has faced three internet blackouts since the start of the conflict, prompting Elon Musk’s SpaceX’s Starlink to offer internet access to internationally recognised aid organisations in Gaza. Meanwhile, environmental NGOs are urging the EU to take action on electronic waste, calling for a revision of the Waste Electrical and Electronic Equipment Directive (WEEE Directive), per the European Environmental Bureau’s communication.
THE TALK OF THE TOWN – GENEVA
As agreed during the regular session of the ITU Council in July 2023, an additional session dedicated to confirming logistical issues and organisational planning for 2024–2026 was held in October 2023. It was preceded by the cluster of Council Working Group (CWG) and Expert Group (EG) meetings, where chairs and vice-chairs were appointed to serve until the 2026 Plenipotentiary Conference. The next cluster of CWG and EG meetings will take place from 24 January to 2 February 2024.
The 3rd Geneva Science and Diplomacy Anticipator (GESDA) Summit saw the launch of the Open Quantum Institute (OQI), a partnership among the Swiss Federal Department of Foreign Affairs (FDFA), CERN, and UBS. The OQI aims to make high-performance quantum computers accessible to users working to accelerate progress towards the sustainable development goals (SDGs). The OQI will be hosted at CERN beginning in March 2024 and will facilitate the exploration of the technology’s use cases in health, energy, climate protection, and more.
Shaping the global AI landscape
Month in, month out, we spent most of 2023 reading and writing about AI governance. October was no exception. As the world grapples with the complexities of this technology, the following initiatives showcase efforts to navigate its ethical, safety, and regulatory challenges on both national and international fronts.
Biden’s executive order on AI. The order represents the most substantial effort by the US government to regulate AI to date. Long anticipated, the order provides actionable directives where possible and calls for bipartisan legislation where necessary, particularly on data privacy.
Image credit: CNBC
One standout feature is the emphasis on AI safety and security. Developers of the most potent AI systems are now mandated to share safety test results and critical information with the US government. Additionally, AI systems utilised in critical infrastructure sectors are subjected to rigorous safety standards, reflecting a proactive approach to mitigating potential risks associated with AI deployment.
Unlike some emerging AI laws, such as the EU’s AI Act, Biden’s order takes a sectoral approach. It directs specific federal agencies to focus on AI applications within their domains. For instance, the Department of Health and Human Services is tasked with advancing responsible AI use in healthcare, while the DoC is directed to develop guidelines for content authentication and watermarking to label AI-generated content clearly. The DoJ is instructed to address algorithmic discrimination, showcasing a nuanced and tailored approach to AI governance.
Beyond regulations, the executive order aims to bolster the US’s technological edge. It facilitates the entry of highly skilled workers into the country, recognising their pivotal role in advancing AI capabilities. The order also prioritises AI research through funding initiatives, increased access to AI resources and data, and the establishment of new research structures.
G7’s guiding principles. Simultaneously, the G7 nations released their guiding principles for advanced AI, accompanied by a detailed code of conduct for organisations developing AI.
Image credit: Politico
These principles, totalling 11, centre around risk-based responsibility. The G7 encourages developers to implement reliable content authentication mechanisms, signalling a commitment to ensuring transparency in AI-generated content.
A notable similarity with the EU’s AI Act is the risk-based approach, placing responsibility on AI developers to assess and manage the risks associated with their systems. The EU promptly welcomed these principles, citing their potential to complement the legally binding rules under the EU AI Act internationally.
While building on the existing Organisation for Economic Co-operation and Development (OECD) AI Principles, the G7 principles go a step further in certain aspects. They encourage developers to deploy reliable content authentication and provenance mechanisms, such as watermarking, to enable users to identify AI-generated content. However, the G7’s approach preserves a degree of flexibility, allowing jurisdictions to adopt the code in ways that align with their individual approaches.
Differing viewpoints on AI regulation among G7 countries are acknowledged, ranging from strict enforcement to more innovation-friendly guidelines. However, some provisions, such as those related to privacy and copyright, are criticised for their vagueness, raising questions about their potential to drive tangible change.
China’s Global AI Governance Initiative (GAIGI). China unveiled its GAIGI during the Third Belt and Road Forum, marking a significant stride in shaping the trajectory of AI on a global scale. China’s GAIGI is expected to bring together 155 countries participating in the Belt and Road Initiative, establishing one of the largest global AI governance forums.
This strategic initiative focuses on five aspects, including ensuring AI development aligns with human progress, promoting mutual benefit, and opposing ideological divisions. It also establishes a testing and assessment system to evaluate and mitigate AI-related risks, similar to the risk-based approach of the EU’s upcoming AI Act. Additionally, the GAIGI supports consensus-based frameworks and provides vital support to developing nations in building their AI capacities.
China’s proactive approach to regulating its homegrown AI industry has granted it a first-mover advantage. Despite its deeply ideological approach, China’s interim measures on generative AI, effective since August this year, were a world first. This advantage positions China as a significant influencer in shaping global standards for AI regulation.
AI Safety Summit at Bletchley Park. The UK’s much-anticipated summit resulted in a landmark commitment among leading AI countries and companies to test frontier AI models before public release.
The Bletchley Declaration identifies the dangers of current AI, including bias, threats to privacy, and deceptive content generation. While addressing these immediate concerns, the focus shifted to frontier AI – advanced models that exceed current capabilities – and their potential for serious harm. Signatories include Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA for a total of 28 countries plus the EU.
Governments will now play a more active role in testing AI models. The AI Safety Institute, a new global hub established in the UK, will collaborate with leading AI institutions to assess the safety of emerging AI technologies before and after their public release. This marks a significant departure from the traditional model, where AI companies were solely responsible for ensuring the safety of their models.
The summit resulted in an agreement to form an international advisory panel on AI risk, inspired by the Intergovernmental Panel on Climate Change (IPCC). Each signatory country will nominate a representative to support a larger group of leading AI academics, producing State of the Science reports. This collaborative approach aims to foster international consensus on AI risk.
UN’s High-Level Advisory Body on AI. The UN has taken a unique approach by launching a High-Level Advisory Body on AI, comprising 39 members. Led by UN Tech Envoy Amandeep Singh Gill, the body will publish its first recommendations by the end of this year, with final recommendations expected next year. These recommendations will be discussed during the UN’s Summit of the Future in September 2024.
Unlike previous initiatives that introduced new principles, the UN’s advisory body focuses on assessing existing governance initiatives worldwide, identifying gaps, and proposing solutions. The tech envoy envisions the UN as the platform for governments to discuss and refine AI governance frameworks.
OECD’s updated AI definition. The OECD has officially revised its definition of AI, which now reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’ It is anticipated that this definition will be incorporated into the EU’s upcoming AI regulation.
Misinformation crowding out the truth in the Middle East
It is said that a lie can travel halfway around the world while the truth is still putting on its shoes. The saying is often attributed to Mark Twain – which, ironically, is untrue.
Misinformation is as old as humanity, and decades old in its current recognisable form, but social media has amplified its speed and scale. A 2018 MIT study found that lies spread six times faster than the truth – on Twitter, that is. Platforms amplify misinformation to different degrees, depending on the mechanisms each has in place for making posts go viral.
Yet all social media platforms have struggled with misinformation in recent days. As people grapple with the violence unfolding in Israel and Gaza, platforms have become inundated with graphic images and videos of the conflict – and with images and videos that have nothing to do with it.
What’s happening? Miscaptioned imagery, altered documents, and old videos taken out of context are circulating online. This makes it hard for anyone looking for information about the conflict to parse falsehood from truth.
Shaping perceptions. Misleading claims are not confined to the conflict zone; they also impact global perceptions and contribute to the polarisation of opinions. Individuals, influenced by biases and emotions, take sides based on information that often lacks accuracy or context.
False narratives on platforms like X (formerly known as Twitter) can influence political agendas, with instances of fake memos circulating about military aid and allegations of fund transfers. Even supposedly reliable verified accounts contribute significantly to the dissemination of misinformation.
What tech companies are doing. Meta has established a special operations centre staffed with experts, including fluent Hebrew and Arabic speakers. It is working with fact-checkers, using their ratings to downrank false content in the feed to reduce its visibility. TikTok’s measures are somewhat similar. The company established a command centre for its safety team, added moderators proficient in Arabic and Hebrew, and enhanced automated detection systems. X removed hundreds of Hamas-linked accounts and removed or flagged thousands of pieces of content. Google and Apple reportedly disabled live traffic data for online maps for Israel and Gaza. Social messaging platform Telegram blocked Hamas channels on Android due to violations of Google’s app store guidelines.
The EU reacts. The EU ordered X, Alphabet, Meta, and TikTok to remove fake content. European Commissioner Thierry Breton reminded them of their obligations under the new Digital Services Act (DSA), giving X, Meta, and TikTok 24 hours to respond. X confirmed removing Hamas-linked accounts, but the EU sent a formal request for information, marking the beginning of an investigation into compliance with the DSA.
Complicating matters. However, earlier this year, Meta, Amazon, Alphabet, and Twitter laid off many team members focusing on misinformation. This was part of a post-COVID-19-induced restructuring aimed at improving financial efficiency.
The situation underscores the need for robust measures, including effective fact-checking, regulatory oversight, and platform accountability, to mitigate the impact of misinformation on public perception and global discourse.
IGF 2023
The Internet Governance Forum (IGF) 2023 addressed pressing issues amid global tensions, including the Middle East conflict. With a record-breaking 300 sessions, 15 days of video content, and 1,240 speakers, debates covered topics from the Global Digital Compact (GDC) and AI policy to data governance and narrowing the digital divide.
1. How can AI be governed? Sessions explored national and international AI governance options, emphasising transparency and questioning the regulation of AI applications or capabilities.
2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process? The future of the IGF is closely tied to the GDC and the WSIS+20 Review Process. The 2025 review may decide the IGF’s fate, and negotiations on the GDC, expected in 2024, will also impact the IGF’s trajectory.
3. How can we use the IGF’s wealth of data for an AI-supported, human-centred future?
The IGF’s 18 years of data is considered a public good. Discussions explored using AI to gain insights, enhance multistakeholder participation, and visually represent discussions through knowledge graphs.
4. How can risks of internet fragmentation be mitigated? Multidimensional approaches and inclusive dialogue were proposed to prevent unintended consequences.
5. What challenges arise from the negotiations on the UN treaty on cybercrime? Concerns were raised about the scope, human rights safeguards, undefined cybercrime definitions, and the role of the private sector in the UN treaty on cybercrime negotiations. Clarity, separation of cyber-dependent and cyber-enabled crimes, and international cooperation were emphasised.
6. Will the new global tax rules be as effective as everyone hopes for? The IGF discussed the potential effectiveness of the OECD/G20’s two-pillar solution for global tax rules. Concerns lingered about profit-shifting, tax havens, and power imbalances between Global North and South nations.
7. How can misinformation and protection of digital communication be addressed during times of war? Collaborative efforts between humanitarian organisations, tech companies, and international bodies were deemed essential.
8. How can data governance be strengthened? The discussion emphasised the importance of organised and transparent data governance, including clear standards, an enabling environment, and public-private partnerships. The Data Free Flow with Trust (DFFT) concept, introduced by Japan, was discussed as a framework to facilitate global data flows while ensuring security and privacy.
9. How can the digital divide be bridged? Bridging the digital divide requires comprehensive strategies that go beyond connectivity, involving regional initiatives, the deployment of LEO satellites, and digital literacy efforts. Public-private partnerships, especially with regional internet registries (RIRs), were highlighted as crucial for fostering trust and collaboration.
10. How do digital technologies impact the environment? The IGF explored the environmental impact of digital technologies, highlighting the potential to cut emissions by 20% by 2050. Immediate actions, collaborative efforts, awareness campaigns, and sustainable policies were advocated to minimise the environmental footprint of digitalisation. Read more in our IGF 2023 Final report.
Upcoming: UNCTAD eWeek 2023
Organised by the UN Conference on Trade and Development (UNCTAD) in collaboration with eTrade for all partners, the UNCTAD eWeek 2023 is scheduled from 4 to 8 December at the prestigious International Conference Center Geneva (CICG). The central theme of this transformative event is ‘Shaping the future of the digital economy’.
Ministers, senior government officials, CEOs, international organisations, academia, and civil society will convene to address pivotal questions about the future of the digital economy: What does the future we want for the digital economy look like? What is required to make that future come true? How can digital partnerships and enhanced cooperation contribute to more inclusive and sustainable outcomes?
Over the week, participants will join more than 150 sessions addressing themes including platform governance, the impact of AI on the digital economy, eco-friendly digital practices, the empowerment of women through digital entrepreneurship, and the acceleration of digital readiness in developing countries.
The event will explore key policy areas for building inclusive and sustainable digitalisation at various levels, focusing on innovation, scalable good practices, concrete actions and actionable steps.
For youth aged 15–24, there’s a dedicated online consultation to ensure their voices are heard in shaping the digital future for all.
Stay up-to-date with GIP reporting!
The GIP will be actively involved in eWeek 2023 by providing reports from the event. Our human experts will be joined by DiploAI, which will generate reports from all eWeek sessions. Bookmark our dedicated eWeek 2023 page on the Digital Watch Observatory or download the app to follow the reports.
Diplo, the organisation behind the GIP, will also co-organise a session entitled ‘Scenario of the Future with the Youth’ with UNCTAD and Friedrich-Ebert-Stiftung (FES), and a session entitled ‘Digital Economy Agreements and the Future of Digital Trade Rulemaking’ with CUTS International. Diplo will also hold its own session, entitled ‘Bottom-up AI and the Right to be Humanly Imperfect’. For more details, visit our Diplo @ UNCTAD eWeek page.
The ongoing Middle East conflict has made us realise how dangerous and divisive hate speech can be. With illegal content on the rise, governments are stepping up pressure and launching new initiatives to help curb the spread. But can these initiatives truly succeed, or are they just another drop in the ocean?
In other news, policymakers are working towards semantic alignment in AI rules, while tech companies are offering indemnity for legal expenses related to copyright infringement claims originating from AI technology.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Governments ramp up pressure on tech companies to tackle fake news and hate speech
Rarely have we witnessed a week quite like the last one, where so much scrutiny was levelled at social media platforms over the rampant spread of disinformation and hate speech. You can tell that leaders are worried about AI’s misuse by terrorists and violent extremists for propaganda, recruitment, and the orchestration of attacks. The fact that so many elections are around the corner raises the stakes even more.
Christchurch Call. In a week dominated by high-stakes discussions, global leaders, including French President Emmanuel Macron and former New Zealand leader Jacinda Ardern, gathered in Paris for the annual Christchurch Call meeting. The focal point was a more concerted effort to combat online extremism and hate speech, a battle that has gained momentum since the far-right shooting at a New Zealand mosque in 2019.
Moderation mismatch. In Paris, Macron seized the opportunity to criticise social media giants. In an interview with the BBC, he slammed Meta and Google for what he termed a failure to moderate terrorist content online. The revelation that Elon Musk’s X platform had only 2,294 content moderators, significantly fewer than its counterparts, fuelled concerns about the platforms’ efficacy.
UNESCO’s battle cry. Meanwhile, UNESCO’s Director-General, Audrey Azoulay, sounded an alarm about the surge in online disinformation and hate speech, labelling it a ‘major threat to stability and social cohesion’. UNESCO unveiled an action plan (in the form of guidelines), backed by global consultations and a public opinion survey, emphasising the urgent need for coordinated action against this digital scourge. But while the plan is ambitious, its success hinges on adherence to non-binding recommendations.
Political ads. On another front, EU co-legislators reached a deal on the transparency and targeting of political advertising. Stricter rules will now prohibit targeted ad-delivery techniques involving the processing of personal data in political communications. A public repository for all online political advertising in the EU is set to be managed by an EU Commission-established authority. ‘The new rules will make it harder for foreign actors to spread disinformation and interfere in our free and democratic processes. We also secured a favourable environment for transnational campaigning in time for the next European Parliament elections,’ lead MEP Sandro Gozi said. In the EU’s case, success hinges not on adherence, but on effective enforcement.
Use of AI. Simultaneously, Meta, the parent company of Facebook and Instagram, published a new policy in response to the growing impact of AI on political advertising (first brought to light by the press). Starting next year, Meta will require organisations placing political ads to disclose when they use AI software to generate part or all of those ads. Meta will also prohibit advertisers from using AI tools built into Meta’s ad platform to generate ads under a variety of categories, including housing, credit, financial services, and employment. Although we’ve come to look at self-regulation with mixed feelings, the new policy – which will apply globally – is ‘one of the industry’s most significant AI policy choices to come to light to date’, to quote Reuters.
Crackdown in India. Even India joined the fray, with its Ministry of Electronics and Information Technology issuing a stern statement on the handling of misinformation. Significant social media platforms with over 5 million users must comply with strict timeframes for identifying and deleting false content.
As policymakers and tech giants grapple with the surge of online extremism and disinformation, it’s clear that much more needs to happen. The scale of the problem demands a tectonic change, one that goes beyond incremental measures. The much-needed epiphany could lie in the shared understanding and acknowledgement of the severity of the problem. While it might not bring about an instant solution, collective recognition of the problem could serve as a catalyst for a significant breakthrough.
Digital policy roundup (6–13 November)
// AI //
OECD updates its definition of AI system
The OECD’s council has agreed to a new definition of AI system, which reads: ‘An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that [can] influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.’
Compared with the 2019 version, it has added content as one of the possible outputs, referring to generative AI systems.
Why is it relevant? First, the EU, which aligned its AI Act with the OECD’s 2019 definition, is expected to integrate the revised definition into its draft law, presently at trilogue stage. As yet, no documents reflecting the new definition have been published. Second, the EU’s push towards semantic alignment extends further. The EU and USA are currently working on a common taxonomy, or classification system, for key concepts, as part of the EU-US Trade and Technology Council’s work. The council is seeking public input on the draft taxonomy and other work areas until 24 November.
Hollywood actors and studios reach agreement over use of AI
Hollywood actors have finally reached a (tentative) deal with studios, bringing an end to a months-long strike. One of the disagreements was over the use of AI: Under the new deal, producers will be required to obtain consent from, and compensate, actors for the creation and use of their digital replicas, whether created on set or licensed for use.
The film and television industry faced significant disruptions due to a strike that began in May. The underlying rationale was this: While it’s impossible to halt the progress of AI, actors and writers could fight for more equitable compensation and fairer terms. Hollywood’s film and television writers reached an agreement in October, but negotiations between studios and actors were at an impasse until last week’s deal.
Why is it relevant? First, it’s a prime example of how AI has been disrupting creative industries and drawing concerns from actors and writers, despite earlier scepticism. Second, as The Economist thinks, AI could make a handful of actors omnipresent, and hence, eventually boring for audiences. But we think fans just want a good storyline, regardless of whether the well-loved artist is merely a product of AI.
OpenAI’s ChatGPT hit by DDoS attack
OpenAI was hit by a cyberattack last week, resulting in a major outage to its ChatGPT and API. The attack was suspected to be a distributed denial of service (DDoS) attack, which is meant to disrupt access to an online service by flooding it with too much traffic. When the outage first happened, OpenAI reported that the problem was identified, and a fix was deployed. But the outage continued the next day, with the company confirming that it was ‘dealing with periodic outages due to an abnormal traffic pattern reflective of a DDoS attack’.
Who’s behind it. Anonymous Sudan claimed responsibility for the attack, which the group said was in response to OpenAI’s collaboration with Israel and the OpenAI CEO’s willingness to invest more in the country.
Was this newsletter forwarded to you, and you’d like to see more?
G7 ready to tackle AI-driven competition risks; more discussion on genAI needed
Competition authorities from G7 countries believe they already have the legal authority to address AI-driven competitive harm, a power that could be further complemented by AI-specific policies, according to a communiqué published at the end of last week’s summit in Tokyo.
When it comes to emerging technologies such as generative AI, however, the G7 competition authorities say that ‘further discussions among us are needed on competition and contestability issues raised by those technologies and how current and new tools can address these adequately.’
Why is it relevant? Unlike other areas of AI governance, competition issues are not a matter of which new laws to enact, but rather how to interpret existing legal frameworks. How could this be done? Competition authorities have suggested that government departments, authorities, and regulators should (a) give proper consideration to the role of effective competition alongside other issues and (b) collaborate closely with each other to tackle systemic problems consistently.
// COPYRIGHT //
OpenAI launches Copyright Shield to cover customers’ legal fees for copyright infringement claims
Sam Altman, the CEO of OpenAI, has announced that the company will cover the legal expenses of business customers faced with copyright infringement claims stemming from their use of OpenAI’s AI technology. The decision responds to the escalating concern that AI technology across the industry is being trained on protected content without the authors’ consent.
This initiative, called Copyright Shield, was announced together with a host of other improvements to ChatGPT. Here’s the announcement: ‘OpenAI is committed to protecting our customers with built-in copyright safeguards in our systems. Today, we’re going one step further and introducing Copyright Shield – we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement. This applies to generally available features of ChatGPT Enterprise and our developer platform.’
Why is it relevant? Offering to cover legal costs has become a trend: in September, Microsoft announced legal protection for users of its Copilot AI services faced with copyright infringement lawsuits, and Google followed suit a month later, adding a second layer of indemnity to also cover AI-generated output. Details of how these services will be implemented are not yet entirely clear.
// PRIVACY //
Meta tells Europeans: Pay or Okay
Meta has rolled out a new policy for European users: Allow Facebook and Instagram to show personalised ads based on user data, or pay a subscription fee to remove ads. But there’s a catch – even if subscribers pay to remove ads, the company will still gather their data; it just won’t use that data to show them ads. Privacy experts saw this coming. A legal fight is definitely on the horizon.
// TAXATION //
Apple suffers setback over sweetheart tax case involving Ireland
The Apple-Ireland state aid case, which has been ongoing for almost a decade, is set to be decided by the EU’s Court of Justice, and things don’t look too good for Apple. The current chapter involves a European Commission decision finding that a tax arrangement Ireland granted to Apple amounted to illegal state aid, leaving Apple owing Ireland EUR 13 billion (USD 13.8 billion) in unpaid taxes. In 2020, the General Court annulled that decision, and the European Commission appealed.
Last week, the Court of Justice’s advocate general said the General Court made legal errors and that the annulment should be set aside. Advocate General Giovanni Pitruzzella advised the court to refer the case back to the lower court for a new decision.
Why is it relevant? First, the new opinion confirms the initial reaction of the European Commission, which at the time had said that the General Court made legal errors. Second, although the advocate general’s opinion is non-binding, it is usually given considerable weight by the court.
Case details: Commission v Ireland and Others, C-465/20 P
The week ahead (13–20 November)
13–16 November: Cape Town, South Africa, will host the Africa Tech Festival, a four-day event that is expected to bring together around 12,000 participants from the policy and technology sectors. There are three tracks: AfricaCom is dedicated to telecoms, connectivity, and digital infrastructure; AfricaTech explores innovative and disruptive technologies; AfricaIgnite is dedicated to entrepreneurs.
15 November: The much-anticipated meeting between US President Joe Biden and Chinese President Xi Jinping will take place on the sidelines of the Asia-Pacific Economic Cooperation (APEC) leaders’ meeting in San Francisco. Both sides will be looking for a way to smooth relations, not least on technology issues.
20 November–15 December: The ITU’s World Radiocommunication Conference, taking place in Dubai, UAE, will review the international treaty governing the use of the radio-frequency spectrum and the geostationary-satellite and non-geostationary-satellite orbits. Download the agenda and draft resolutions.
#ReadingCorner
The scourge of disinformation and hate speech during elections
There is no doubt that the use of social media as a daily source of information has grown substantially over the past 15 years. But did you know that it has now surpassed print media, radio, and TV? This leaves citizens particularly exposed to disinformation and hate speech, which are highly prevalent on social media. The Ipsos UNESCO survey on the impact of online disinformation and hate speech sheds light on the growing problem, especially during elections.
One world, two networks? Not yet…
One of the biggest fears among experts is that the tensions between the USA and China could fragment the internet. Telegeography research director Alan Mauldin assesses the impact on the submarine cable industry. If you’re into slide decks, download Mauldin’s presentation.
Last week’s AI Safety Summit, hosted by the UK government, was on everyone’s radar. Despite coming just days after the US President’s Executive Order on AI and the G7’s guiding principles on AI, the summit served to initiate a global process on establishing AI safety standards. The week saw a flurry of other AI policy developments, making it one of the busiest weeks of the year for AI.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Landmark agreement on AI safety-by-design reached by UK, USA, EU, and others
The UK has secured a landmark commitment with leading AI countries and companies to test frontier AI models before releasing them for public use. That’s just one of the initiatives agreed on during last week’s AI Safety Summit, hosted by the UK at Bletchley Park.
Delicate timing. The summit came just after US President Joe Biden announced his executive order on AI, the G7 released its guiding principles, and China’s President Xi Jinping announced China’s Global AI Governance Initiative. With such a diverse line-up of developments, there was a risk that the UK’s summit and its initiatives would be overshadowed. But judging by how the UK kept the summit from becoming a mere marketplace of announcements (at least publicly), it managed to launch not just a product but a process.
Signing the Bletchley Declaration. The group of countries signing the communiqué on Day 1 of the summit included Australia, Canada, China, France, Germany, India, Korea, Singapore, the UK, and the USA – 28 countries in total, plus the EU.
Yes, China too. We’ve got to hand it to Prime Minister Rishi Sunak for bringing everyone around the table, including China: ‘Some said, we shouldn’t even invite China… others that we could never get an agreement with them. Both were wrong. A serious strategy for AI safety has to begin with engaging all the world’s leading AI powers.’ And he’s right. On his part, Wu Zhaohui, China’s vice minister of science and technology, told the opening session that Beijing was ready to increase collaboration on AI safety. ‘Countries regardless of their size and scale have equal rights to develop and use AI’, he added, possibly referring to China’s latest efforts to help developing nations build their AI capacities.
Like-minded countries testing AI models. The countries agreeing on the plan to test frontier AI models were actually a smaller group of like-minded countries – Australia, Canada, the EU, France, Germany, Italy, Japan, Korea, Singapore, the USA, and the UK – and ten leading AI companies – Amazon Web Services, Anthropic, Google, Google DeepMind, Inflection AI, Meta, Microsoft, Mistral AI, OpenAI, and xAI.
UK Prime Minister Rishi Sunak addressing the AI Safety Summit (1–2 November 2023)
Outcome 1: Shared consensus on AI risks
Current risks. For starters, countries agreed on the dangers of current AI, as outlined in the Bletchley Declaration, which they signed on Day 1 of the summit. Those include bias, threats to privacy and data protection, and risks arising from the ability to generate deceptive content.
A more significant focus: Frontier AI. Though current risks need to be mitigated, the focus was predominantly on frontier AI – advanced models that exceed the capabilities of what we’re seeing today – and their ‘potential for serious, even catastrophic, harm’. It’s not difficult to see why governments have come to fear what’s around the corner: there have been plenty of stark warnings about future superintelligent systems and the risk of extinction. But as long as they don’t let the dangers of tomorrow divert them from addressing the immediate concerns, they’re on track.
Outcome 2: Governments to test AI models
Shared responsibility. Gone are the days when AI companies were solely responsible for ensuring the safety of their models. Or as Sunak said on Day 2, ‘we shouldn’t rely on them to mark their own homework’. Governments (the like-minded ones) will soon be able to see for themselves whether next-generation AI models are safe enough to be released to the public, or whether they pose threats to national security.
How it will work. A new global hub, called the AI Safety Institute (an evolution of the existing Frontier AI Taskforce), will be established in the UK, and will be tasked with testing the safety of emerging AI technologies before and after their public release. It will work closely with the UK’s Alan Turing Institute and the USA’s AI Safety Institute, among others.
Outcome 3: An IPCC for AI
Panel of experts. A third major highlight of the summit is that countries agreed to form an international advisory panel on AI risk. Prime Minister Sunak said the panel was ‘inspired by how the Intergovernmental Panel on Climate Change (IPCC) was set up to reach international science consensus.’
How it will work. Each country that signed the Bletchley Declaration will nominate a representative to support a larger group of leading AI academics, tasked with producing State of the Science reports. Turing Award winner Yoshua Bengio will lead the first report as chair of the drafting group. The chair’s secretariat will be housed within the AI Safety Institute.
So what’s next? As far as gatherings go, it looks like the UK’s AI Safety Summit is the first of many. The second summit will be held online, co-hosted by Korea, in six months. An in-person meeting in France will follow a year later. As for the first report, we can expect it to be published ahead of the Korea summit.
Digital policy roundup (30 October–6 November)
// AI //
Big Tech accused of exaggerating AI risks to eliminate competition
In today’s AI landscape, a few dominant Big Tech companies sit alongside a vibrant open-source community that is driving significant advancements in AI. The latter poses a serious competitive challenge to Big Tech, according to Google Brain founder Andrew Ng, leading the giants to exaggerate the risks of AI in the hope of triggering strict regulation that would stymie the open-source community.
‘It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,’ Ng said.
UN advisory body to tackle gaps in AI governance initiatives
The UN’s newly formed High-Level Advisory Body on AI, comprising 39 members, will assess governance initiatives worldwide, identify existing gaps, and find out how to bridge them, according to UN Tech Envoy Amandeep Singh Gill. He said the UN provides ‘the avenue’ for governments to discuss AI governance frameworks.
The advisory body will publish its first recommendations by the end of this year, and final recommendations next year. They will be discussed during the UN’s Summit of the Future, to be held in September next year.
Why is it relevant? It appears that the advisory body will not release yet another set of AI principles. Instead, it will focus on closing gaps rather than adding to the growing number of principles.
// MIDDLE EAST //
Third internet blackout in Gaza
The Gaza Strip was disconnected from internet, mobile, and telephone networks over the weekend – the third time since the start of the conflict. NetBlocks, a global internet monitoring service, said: ‘We’ve tracked the gradual decline of connectivity, which has corresponded to a few different factors: power cuts, airstrikes, as well as some amount of connectivity decline due to population movement.’
Facebook and Instagram banned from running behavioural advertising in EU
The European Data Protection Board has ordered the Irish data regulator to impose a permanent ban on Meta’s behavioural advertising across Facebook and Instagram. According to the EU’s GDPR, companies need a valid legal reason for collecting and using someone’s personal information; Meta had none.
Ireland is where Meta’s European headquarters are located, which is why the Irish regulator takes the lead. The ban covers all EU countries and those in the European Economic Area.
Why is it relevant? There are six different reasons, or legal bases, that a company can use to process data. One of them, based on consent (meaning that a person has given their clear and specific agreement for their information to be used), is Meta’s least favourite, as the chance of users refusing consent is high. Yet, it may soon be the only basis Meta can actually use – a development which will surely make Austria-based NGO noyb quite happy.
8 November: The International AI Summit, organised by ForumEurope and EuroNews in Brussels and online, will ask whether a global approach to AI regulation is possible.
10–11 November: The annual Paris Peace Forum will tackle trust and safety in the digital world, among other topics.
13–16 November: The Web Summit, dubbed Europe’s biggest tech conference, meets in Lisbon.
#ReadingCorner
A new chapter in IPR: The age of AI-generated content
Intellectual property authorities worldwide face a major challenge: How to approach inventions created not by human ingenuity, but by AI. This issue has sparked significant debate within the intellectual property community, and many lawsuits. Read part one of a three-part series that delves into the impact of AI on intellectual property rights.
The stage is set for some major AI-related developments this week. Biden’s executive order on AI, and the G7’s guiding principles and code of conduct, are out. On Wednesday and Thursday, the UK will host the much-anticipated AI Safety Summit, where political leaders and CEOs will focus squarely on AI risks. In other news, the landscape for children’s online safety is changing, while antitrust lawsuits and investigations show no signs of easing up.
Let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
Biden issues AI executive order; G7 adopts AI principles and code of conduct
You can tell how much AI is on governments’ minds by how many developments take place in a week – or in this case, one day.
Today’s double bill – Biden’s new executive order on AI, and the G7’s guiding principles on AI and code of conduct for developers – was highly anticipated. The White House first announced plans for the executive order in July; more recently, Biden mentioned it again during a tech advisors’ meeting. As for the G7, Japan’s Prime Minister Fumio Kishida has been providing regular updates on the Hiroshima AI Process for months.
Executive order targets federal agencies’ deployment of AI
Biden’s executive order represents the government’s most substantial effort thus far to regulate AI, providing actionable directives where it can, and calling for bipartisan legislation where needed (such as on data privacy). There are three things that stand out:
AI safety and security. The order places heavy emphasis on safety and security by requiring, for instance, that developers of the most powerful AI systems share their safety test results and other critical information with the US government. It also requires that AI systems used in critical infrastructure sectors be subjected to rigorous safety standards.
Sectoral approach. Apart from certain aspects that apply to all federal agencies, the order employs a somewhat sectoral approach to federal agencies’ use of AI (in contrast with other emerging laws such as the EU’s AI Act). For instance, the order directs the US Department of Health and Human Services to advance the responsible use of AI in healthcare, the Department of Commerce to develop guidelines for content authentication and watermarking to clearly label AI-generated content, and the Department of Justice to address algorithmic discrimination.
Skills and research. The order directs authorities to make it easier for highly skilled workers to study and work in the country, an attempt to boost the USA’s technological edge. It will also heavily promote AI research through funding, access to AI resources and data, and new research structures.
G7’s principles place risk-based responsibility on developers
The G7 has adopted two texts: The first is a list of 11 guiding principles for advanced AI. The second – a code of conduct for organisations developing advanced AI – repeats the principles but expands on some of them with details on how to implement them. Our three main highlights:
Risk-based. One notable similarity with the EU’s AI Act is the risk-based element, which places responsibility on developers of AI to adequately assess and manage the risks associated with their systems. The EU promptly welcomed the texts, saying they will ‘complement, at an international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act’.
A step further. The texts build on the existing OECD AI Principles, but in some instances they go a few steps further. For instance, they encourage developers to develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.
(Much) softer approach. Differing viewpoints on AI regulation exist among the G7 countries, ranging from strict enforcement to more innovation-friendly guidelines. The documents allow jurisdictions to adopt the code in ways that align with their individual approaches. But despite this flexibility, a few other provisions are overly vague. Take the provision on privacy and copyright, for instance: ‘Organisations are encouraged to implement appropriate safeguards, to respect rights related to privacy and intellectual property, including copyright-protected content.’ That’s probably not specific enough to provoke change.
Amid mounting concerns about the risks associated with AI, today’s double bill raises the question: Will these developments succeed in changing the security landscape for AI? Biden’s executive order is the stronger of the two: although it lacks enforcement teeth, it carries the constitutional weight to direct federal agencies. On a global scale, however, perspectives vary so greatly that the influence of both is limited. And yet, today’s developments are just the beginning of the week.
Digital policy roundup (23–30 October)
// MIDDLE EAST //
Musk’s Starlink to provide internet access to Gaza for humanitarian purposes
Elon Musk confirmed on Saturday that SpaceX’s Starlink will provide internet connectivity to ‘internationally recognised aid organisations’ in Gaza. This prompted Israel’s communication minister, Shlomo Karhi, to voice strong opposition over Starlink’s potential exploitation by Hamas.
Responding to Karhi’s tweet, Musk replied: ‘We are not so naive. Per my post, no Starlink terminal has attempted to connect from Gaza. If one does, we will take extraordinary measures to confirm that it is used *only* for purely humanitarian reasons. Moreover, we will do a security check with both the US and Israeli governments before turning on even a single terminal.’
Why is it relevant? First, it shows how internet connectivity is increasingly being weaponised during conflicts. Second, the world half-expected Starlink to intervene, given the role it played during the Ukraine conflict and in other countries affected by natural disasters. But its (public) promise to obtain go-aheads from both governments could expose the company to new dimensions of responsibility and risk, and could prove counterproductive for the aid organisations that so desperately need access to coordinate their relief efforts.
// KIDS ONLINE //
Meta sued by 33 US states over children’s mental health
Meta, Instagram and Facebook’s parent company, is facing a new legal battle from over 30 US states, which are alleging that the company engaged in deceptive practices and contributed to a mental health crisis among young users of its social media platforms.
The lawsuit claims that Meta intentionally and knowingly used addictive features while concealing the potential risks of social media use, violating consumer protection laws, and breaching privacy regulations concerning children under 13.
Why is it relevant? The concerns raised in this lawsuit have been simmering for quite some time. Two years ago, Meta’s former employee Frances Haugen catapulted them into the public consciousness after leaking thousands of internal documents to the press and testifying to the US Senate about the company’s practices. The issue even showed up on US President Joe Biden’s radar earlier this year, when he called for tighter regulation ‘to stop Big Tech from collecting personal data on kids and teenagers online’.
Case details: People of the State of California v. Meta Platforms, Inc. et al., District Court, Northern District of California, 4:23-cv-05448
UK implements Online Safety Act, imposing child safety obligations on companies
The UK’s Online Safety Act, which imposes new responsibilities on social media companies, came into effect last week after the law received royal assent.
Among other obligations, social media platforms will be required to swiftly remove illegal content, ensure that harmful content (such as adult pornography) is inaccessible to children, enforce age limits and verification measures, provide transparent information about risks to children, and offer easily accessible reporting options for users facing online difficulties. As is to be expected, there are harsh fines – up to GBP 18 million (USD 21.8 million) or 10% of global annual revenues – in store for non-compliance.
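The Act's penalty ceiling is a 'greater of' rule: the fixed GBP 18 million cap or 10% of global annual revenue, whichever is higher. A minimal sketch of the arithmetic (the figures come from the article; the function name is our own, for illustration only):

```python
# Illustrative only: the Online Safety Act's maximum penalty is the greater of
# a fixed sum (GBP 18 million) or 10% of global annual revenue.
FIXED_CAP_GBP = 18_000_000
REVENUE_SHARE = 0.10

def max_penalty_gbp(global_annual_revenue_gbp: float) -> float:
    """Return the statutory maximum fine for a given global annual revenue."""
    return max(FIXED_CAP_GBP, REVENUE_SHARE * global_annual_revenue_gbp)

# For a platform with GBP 500 million in global revenue, the 10% share
# (GBP 50 million) exceeds the fixed cap, so it sets the maximum.
print(max_penalty_gbp(500_000_000))
```

For smaller platforms, the fixed GBP 18 million cap is the binding figure instead.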
Why is it relevant? For many years, the UK relied on companies' self-regulatory efforts to keep children safe from harmful content. The industry's initially well-intentioned efforts gradually gave way to choices that prioritised financial interests – the self-regulation experiment is now over, as one child safety expert put it.
Was this newsletter forwarded to you, and you’d like to see more?
US official: North Korea and other states using AI in cyberwarfare
US Deputy National Security Advisor Anne Neuberger has confirmed that North Korea is using AI to escalate its cyber capabilities. In a recent press briefing (held on the sidelines of Singapore International Cyber Week), Neuberger explained: ‘We have observed some North Korean and other nation-state and criminal actors try to use AI models to help accelerate writing malicious software and finding systems to exploit.’ Although experts have often spoken about the risks of AI in cyberwarfare, it’s the first time there’s been an open acknowledgement of its use in offensive cyberattacks. There will be lots to talk about in London this week.
// ANTITRUST //
Google paid billions of dollars to be default search engine
Alphabet’s Google paid USD 26.3 billion (EUR 24.8 billion) to other companies in 2021 to ensure its search engine was the default on web browsers and mobile phones. This was revealed by a company executive testifying during the US Department of Justice’s (DOJ) antitrust trial and in a court record, which the presiding judge refused to redact.
The case, filed in 2020, concerns Google's search business, which the DOJ and state attorneys-general consider 'anticompetitive and exclusionary', sustaining Google's monopoly in search and search advertising.
Why is it relevant? First, the original complaint had already indicated that ‘Google pays billions of dollars each year to distributors… to secure default status for its general search engine’. The exact figures have now been made known. Second, this will make it even more difficult for Google to argue against the implications of its exclusionary agreements with other companies.
Case details: USA v. Google LLC, District Court, District of Columbia, 1:20-cv-03010
Japan probes Google over suspected anti-competitive conduct
The Japan Fair Trade Commission (JFTC) is seeking information on Google's suspected anti-competitive behaviour in the Japanese market, as part of an investigation still in its early stages.
The commission will determine whether Google excluded or restricted the activities of its competitors by entering into exclusionary agreements with other companies.
Why is it relevant? If it all sounds too familiar, that's because the Japan case is very similar to the US DOJ's ongoing case against Google.
1–2 November: The Global Cybersecurity Forum gathers in Riyadh, Saudi Arabia, for its annual event, which will this year be dedicated to ‘charting shared priorities in cyberspace’.
3–4 November: The 4th AI Policy Summit takes place in Zurich, Switzerland (at the ETH Zurich campus) and online. Diplo (publisher of this newsletter) is a strategic partner.
4–10 November: The Internet Engineering Task Force (IETF) is gathering in Prague, Czechia and online for its 118th meeting.
The topic of AI safety, which appears for the first time in the annual State of AI report, has gained widespread attention and spurred governments and regulators worldwide into action, the 2023 report explains. Yet, beneath this flurry of activity lie significant divisions within the AI community and a lack of substantial progress towards achieving global governance, with governments pursuing conflicting approaches. Read the report.
How to manage AI risks
A group of AI experts has summed up the risks of upcoming, advanced AI systems in a seven-page open letter that urges prompt action, including regulations and safety measures by AI companies. ‘Large-scale social harms and malicious uses, as well as an irreversible loss of human control over autonomous AI systems are looming’, they warn.
AI and social media: Driving us down the rabbit hole
Harvard professor Lawrence Lessig holds a critical stance on the impact of AI and social media, and an even more critical perspective on the human capacity for critical thinking. ‘People have a naïve view: They open up their X feed or their Facebook feed, and [they think] they’re just getting stuff that’s given to them in some kind of neutral way, not recognizing that behind what’s given to them is the most extraordinary intelligence that we have ever created in AI that is extremely good at figuring out how to tweak the attitudes or emotions of the people they’re engaging with to drive them down rabbit holes of engagement.’ Read the interview.
The spread of illegal content and fake news linked to the Middle East conflict has been worrying EU and US policymakers, who are putting more pressure on social media companies to step up their efforts. The USA-China trade war is escalating with tighter restrictions on US chip exports to China and retaliation by China. As other updates confirm, it’s been anything but blue skies as of late. But let’s get started.
Stephanie and the Digital Watch team
// HIGHLIGHT //
China unveils Global AI Governance Initiative as part of Belt and Road
In a significant stride towards shaping the trajectory of AI on a global scale, China’s President Xi Jinping announced the Global AI Governance Initiative (GAIGI) during the opening speech of last week’s Third Belt and Road Forum.
The initiative is expected to bring together all 155 countries that make up the Belt and Road Initiative. This will make it one of the largest global AI governance forums.
Key tenets. Releasing additional details, the Foreign Ministry’s spokesperson said the strategic initiative will focus on five aspects. It will ensure that AI development remains synonymous with human progress, which is quite a noble aim. It will promote mutual benefit, and ‘oppose drawing ideological lines or forming exclusive groups to obstruct other countries from developing AI’ – a clear dig at Western allies. It will establish a testing and assessment system to evaluate and mitigate AI-related risks, which reminds us of the risk-based approach the EU is taking in its upcoming AI Act. It will also support efforts to develop consensus-based frameworks, ‘with full respect for policies and practices among countries,’ and provide vital support to developing nations to build their AI capacities.
First-mover advantage. In recent months, China has been moving swiftly to regulate its homegrown AI industry. Its interim measures on generative AI, effective since August, were a world first, and it has introduced rules for the ethical application of science and tech (including AI). China is now looking at basic security requirements for generative AI. Few acknowledge that despite its deeply ideological approach, China was the first to regulate generative AI, giving itself significant mileage in the race to influence global standards. So much so that even US experts are now suggesting that the USA and its allies should engage with China ‘to learn from its experience and explore whether any kind of global consensus on AI regulation is possible’.
China’s approach. Interestingly, the interim measures are a watered-down version – or at least, a less robust one than the initial draft – a signal that China favoured a more industry-friendly approach. A few weeks after the measures came into effect, eight major Chinese tech companies obtained approval from the Cyberspace Administration of China (CAC) to deploy their conversational AI services. Between the USA’s underwhelming progress on AI regulation and the EU’s strict approach, China’s model could easily gain appeal on the international stage.
Quasi-global. The international audience watching that stage is very large. With over 150 countries forming part of the Belt and Road Initiative, China’s Global AI Governance Initiative will be one of the largest AI governance forums. But the coalition’s size is not the only reason why the initiative will be highly influential. As the Belt and Road Initiative celebrates its 10th anniversary, China is extolling its success in stimulating nearly USD 1 trillion in investment, forming more than 3,000 cooperative projects, creating 420,000 jobs, and lifting 40 million people out of poverty. All of this gives China geopolitical clout and leverage.
Showtime. China’s Global AI Governance Initiative will undoubtedly influence other processes. Of the coalitions that have launched their own vision or process for regulating AI, the most recent is the draft guide to AI ethics, which the Association of Southeast Asian Nations (ASEAN) is working on. The unveiling of China’s initiative comes a few weeks before the UK’s AI Safety Summit (see programme), which China is set to attend (even though it’s still unclear who will represent China – the decision will indicate the level of significance China gives to the UK process).
Xi’s speech conveys a willingness to engage: ‘We stand ready to increase exchanges and dialogue with other countries and jointly promote the sound, orderly and secure AI development in the world’. But as China’s Global Times writes, ‘China is already a very important force in global AI development… there is no way the USA and its Western allies can set up a system of AI management and regulation while squeezing China out.’
Digital policy roundup (16–23 October)
// DISINFORMATION //
EU formally asks Meta, TikTok for details on anti-disinformation measures
As the Middle East conflict unfolds, ‘the widespread dissemination of illegal content and disinformation linked to these events carries a clear risk of stigmatising certain communities and destabilising our democratic structures’, to quote European Commissioner Thierry Breton.
Last week, we wrote about how Breton personally reached out to X’s Elon Musk, TikTok’s Shou Zi Chew, Alphabet’s Sundar Pichai, and Meta’s Mark Zuckerberg, urging them to promptly remove illegal content from their platforms. Two days later, X received a formal request for information.
Deadlines. The companies must provide the commission with information on crisis response measures by 25 October and measures to protect the integrity of elections by 8 November (plus, in TikTok’s case, how it’s protecting kids online). As we mentioned previously, we don’t think this exchange will stop with just a few polite letters.
DSA not yet fully operational? Honour it just the same
The European Commission is applying pressure on EU member states to implement parts of the DSA months ahead of its full implementation on 17 February 2024. The ongoing wars and instabilities have led to an ‘unprecedented increase in illegal and harmful content being disseminated online’, it said.
The commission is appealing to the countries’ ‘spirit of sincere cooperation’, asking them to form – ahead of schedule – the informal network planned for when the DSA starts applying fully, to take coordinated action, and to assist it with enforcing the DSA.
Why is it relevant? It shows the commission’s (or rather, Breton’s) eagerness to see the DSA applied. It’s the kind of pressure that one can hardly choose to ignore.
US senator urges social media platforms to curb deceptive news
Disinformation is not just a concern for European policymakers. US Senator Michael Bennet has also written to the CEOs of Meta, Google, TikTok, and X to take prompt action against ‘deceptive and misleading content about the Israel-Hamas conflict’, which he says is ‘spreading like wildfire’.
Bennet’s letter was quite critical: ‘In many cases, your platforms’ algorithms have amplified this content, contributing to a dangerous cycle of outrage, engagement, and redistribution… Your platforms have made particular design decisions that hamper your ability to identify and remove illegal and dangerous content.’
Why is it relevant? First, it shows that concerns about the spread of disinformation and illegal content in the context of the Middle East conflict are not limited to European policymakers (although the approach taken by the two sides hasn’t been quite the same). Second, Bennet is drawing attention to the platforms’ algorithms (something that the EU did not mention), which have arguably played a significant role in inadvertently promoting misleading content and creating filter bubbles.
USA tightens restrictions on semiconductor exports to China
The US Department of Commerce’s (DOC) Bureau of Industry and Security (BIS) has tightened export restrictions on advanced semiconductors to China and other countries that are subject to an arms embargo. In practice, this means that China will be unable to obtain high-end chips that are used to train powerful AI models and equipment that can enable the production of tiny chips that are used for AI.
China reacted strongly to the BIS decision, calling these measures ‘unilateral bullying’, and an abuse of export control measures. The measures are an expansion of semiconductor export restrictions implemented last year.
Why is it relevant? This latest tit-for-tat is meant to close loopholes from the 2022 measures. US Secretary of Commerce Gina Raimondo says that the objective remains unchanged: to restrict China from advancements in AI that are vital for its military applications. But the Washington-based Semiconductor Industry Association cautions that export controls ‘could potentially harm the US semiconductor ecosystem instead of advancing national security’.
The heads of US, UK, Australian, Canadian and New Zealand security agencies meeting publicly for the first time, on a stage at Stanford University. Credit: FBI
// CYBERSECURITY //
Five Eyes warn of China’s ‘innovation theft’ campaign
The heads of the Five Eyes security agencies – composed of the USA, UK, Australia, Canada and New Zealand – have warned of a sizeable Chinese espionage campaign to steal commercial secrets. The agency heads met publicly for the first time during a security summit held in Silicon Valley. Over 20,000 people in the UK have been approached online by Chinese spies, the head of the UK’s MI5 told the BBC.
// NET NEUTRALITY //
US FCC vote kicks off process to restore net neutrality rules
The US Federal Communications Commission (FCC) has voted in favour of starting the process to restore net neutrality rules in the USA. The rules were originally adopted by the Obama administration in 2015, but repealed a few years later under the Trump administration.
The steps ahead. Although net neutrality proponents will have breathed a collective sigh of relief at this revival, the process involves multiple steps, including a period for public comments.
Why is it relevant? We won’t state the obvious about net neutrality, or how the FCC will broaden its reach. Rather, we’ll highlight what chairwoman Jessica Rosenworcel said last week: There are already several state-led open internet policies that providers are abiding by right now; it’s time for a national one.
// COMPETITION //
South Africa investigating competition in local news media and adtech market
South Africa’s Competition Commission has launched an investigation into the distribution of media content and the advertising technology (adtech) markets that link buyers and sellers of digital advertising.
The investigation will also determine whether digital platforms such as Meta and Google are engaging in unfair competition with local news publishers by using their content to generate advertising revenue.
Why is it relevant? First, it shows how global investigations – most notably in Australia and Canada – are drawing attention to Big Tech’s behaviour in other markets, and are influencing the measures taken by other regulators. Second, it reflects rising concerns about the shift from print advertising to digital content and advertising – a trend that is not sparing anyone.
// DIGITAL EURO //
ECB launches prep phase for digital euro
The European Central Bank (ECB) has announced a two-year preparation phase for the digital euro, during which it will work on the regulatory framework and the technical setup. The phase starts on 1 November, and comes after a two-year research phase.
The ECB made it clear that the launch doesn’t mean that the digital euro is a certainty. But if there’s eventually a green light, the digital euro will function similarly to online wallets or bank accounts, and will be guaranteed by the ECB. It will only be available to EU residents.
Why is it relevant? Digital currencies issued by central banks (known as central bank digital currencies, or CBDCs) are developing rapidly worldwide. Last year, a report by the Bank for International Settlements said that two-thirds of the world’s central banks are considering introducing a CBDC in the near future. Even though only a few countries – such as China, Sweden, and a handful of Caribbean countries – have launched digital currencies or pilot projects, the EU is treading slowly but surely, expecting the digital euro to coexist alongside physical cash and introducing measures that would safeguard its existing commercial banking sector.
The week ahead (23–30 October)
21–26 October: ICANN78, the organisation’s 25th annual general meeting, is ongoing in Hamburg, Germany and online.
24–26 October: The CEOs of some of the world’s leading telecoms operators are meeting in Paris for the 5G World Summit this week.
25–27 October: Nashville, Tennessee, will host the 13th (ISC)2 Security Congress, convening the cybersecurity community in person and online.
#ReadingCorner
Online abuse of kids ‘escalating’
Child sexual exploitation and abuse online is escalating worldwide, in both scale and methods, the latest WeProtect Global Alliance threat assessment warns. To put this into numerical perspective, the 32 million reports of abuse material made in the USA in 2022 dwarf the number made in 2019. It gets worse: ‘The true scale of child sexual exploitation and abuse online is likely greater than this as a lot of harm is not reported.’ Read the report, including its recommendations.
If abuse is on the rise, why isn’t the tech industry doing more?
As the eSafety Commissioner of Australia noted last week, some of the biggest tech companies just aren’t living up to their responsibilities to halt the spread of online child sexual abuse content and livestreaming.
‘Within online businesses much of the child safety and wider consumer agenda is marked as an overhead cost not a profit centre …’, writes John Carr, a UK leading expert in child internet safety. ‘Companies will obey clearly stated laws. But the unvarnished truth is many are also willing to exploit any and all available wiggle room or ambiguity to minimise or delay the extent of their engagement with anything which does not contribute directly to the bottom line. If it makes them money they need no further encouragement. If it doesn’t, they do.’ Read the blog post.
This year’s IGF came at a time of heightened global tension. As the Middle East conflict unfolded, aspects related to internet fragmentation, cybersecurity during times of war, and mis- and disinformation entered prominently into the IGF 2023 debates.
During the discussions at this year’s record-breaking IGF (with 300 sessions, 15 days of video content, and 1,240 speakers), participants also debated other topics at length – from the Global Digital Compact (GDC) and other processes to AI policy (such as the Hiroshima AI Process – more further down), data governance dilemmas, and narrowing the digital divide.
There seems to be some form of general consensus among stakeholders – both public and private – that we need to govern AI if we are to leverage it for the benefit of humanity. But what exactly to govern, and, even more importantly, how to do so, remains open for debate.
And so it is no surprise that the IGF featured quite a few such debates, as sessions explored national and international AI governance options, highlighted the need for transparency in both the technical development of AI systems and in the governance processes themselves, and questioned whether to regulate AI applications/uses or capabilities.
Highlights
Just as was the case with the internet, AI is set to impact the entire world, albeit in different ways and at different speeds. And so, setting up some form of international governance mechanism to guide the development and deployment of human-centric, safe, and trustworthy AI is essential. The jury is still out on whether this should take the form of international guiding principles, stronger regulations, new agencies, or something else.
But there is already a body of work to build upon, from the OECD’s AI principles and the UNESCO recommendation on AI ethics to the G7 Hiroshima AI Process and the EU’s approach to developing voluntary AI guardrails ahead of the AI Act coming into force. Japan’s Prime Minister announced at the start of the IGF that a draft set of guiding principles and a code of conduct for developers of advanced AI is to be put on the table for approval at the upcoming G7 Summit. The texts form part of the Hiroshima AI Process, kickstarted during last May’s G7 Summit.
If the world is to move ahead with some form of global AI governance approach, then this approach needs to be defined in an inclusive manner. There is a tendency for countries and regional blocs with more robust regulatory frameworks to shape governance practices globally, but the voices and interests of smaller and developing countries must be more meaningfully represented and considered.
Take Latin America and Africa, for example: They provide significant raw materials, resources, data, and labour for AI development, but their participation in global processes does not strongly reflect this. Moreover, the discussion on AI harms is still predominantly framed through the Global North lens. To ensure an inclusive and fair AI governance process, reducing regional disparities, strengthening democratic institutions, and promoting transparency and capacity development are essential.
The Brussels effect – where EU regulations made in Brussels become influential worldwide – featured in some discussions. The EU’s AI Act will likely influence regulatory approaches in other jurisdictions globally. However, countries must consider their unique local contexts when designing their regulations and policies to ensure they respond to and reflect local needs and realities. And, of course, this so-called AI localism should also apply when integrating local knowledge systems into AI models. By incorporating this local knowledge, AI models can better address distinct local and regional challenges.
Multistakeholder cooperation in shaping AI governance mechanisms was highlighted as essential. With the private sector driving AI innovation, its involvement in AI governance is inevitable and indispensable. Such involvement also needs to be transparent, open, and trustworthy.
But it is not all about laws and regulations. Technical standards also have a role to play in advancing trustworthy AI. Different technical standards are necessary within the AI ecosystem at different levels, encompassing certifications for evaluating quality management systems and ensuring product-level adherence to specific benchmarks and requirements. These standards aim to maintain efficient operations, promote reliability, and foster trust in AI products and services.
While we wait for new international regulations to be developed, a wide range of actors could adopt or adapt new or existing voluntary standards for AI. For instance, the Institute of Electrical and Electronics Engineers (IEEE) developed a value-based design approach that UNICEF uses. The implementation of AI also requires a deep understanding of established ethical guidelines. To this end, UNESCO has published the first-ever global Guidance on Generative AI in Education and Research, which aims to support countries in implementing immediate actions and planning long-term policies to properly use generative AI tools.
Aside from laws, regulations, and technical standards, what else could help achieve a human-centric and inclusive approach to AI? Forums and initiatives such as the Global Partnership on AI (GPAI), the Frontier Model Forum, the Partnership on AI, and the MLCommons have a role to play. They can promote the secure and ethical advancement of cutting-edge AI models – by establishing common definitions and understandings of AI system life cycles, creating best practices and standards, and fostering information sharing between policymakers and industry. And states should look into allocating resources to the development of publicly accessible AI technology as a way to ensure wider access to AI technology and its benefits.
2. What will be the future of the IGF in the context of the Global Digital Compact (GDC) and the WSIS+20 Review Process?
From 2003 to 2005, the World Summit on the Information Society (WSIS) came up with several outcome documents meant, among other goals, to advance a more inclusive information society and establish key principles for what was back then a fresh new term: internet governance. The IGF itself was an outcome of WSIS.
In 2025, a WSIS+20 review process will look at the progress made in implementing WSIS outcomes and will, most likely, decide on the future of the IGF (as its current mandate expires in 2025). In parallel with preparing for WSIS+20, UN member states will also have to negotiate on the Global Digital Compact, expected to be adopted in 2024 as a pact for an ‘open, free, and secure digital future for all’.
So, the next two years are set to be intensive. New forums are under consideration. Some existing structures may be strengthened. International organisations are gearing up for an ‘AI mandate race’ that will shape their future and, in some cases, question their very existence.
The IGF’s future will be significantly influenced by the rapidly changing policy environment, as discussed in Kyoto.
Highlights
The Global Digital Compact (GDC) sparked a lot of interest – in official sessions (including one main session), bilateral meetings, and corridor chats – with two underlying issues:
IGF input into GDC drafting: The IGF community would like to see more multistakeholder participation throughout the GDC drafting process. Mimicking the IGF mode of operation is unrealistic, as the GDC will be negotiated under UN General Assembly rules. However, while following the UNGA rules of procedure, the GDC should continue to make every effort to include all stakeholders’ perspectives, as it has in the past. Stakeholders were also encouraged to communicate with their national representatives in order to contribute more to the GDC process. (Bookmark our GDC tracker)
Inclusion in governance was in focus during the session on the participation of Small Island Developing States (SIDS) in digital governance processes. The debate brought up an interesting paradox. Although SIDS have the formal possibility of participating in the IGF, they often lack the resources to do so effectively. Other small groups from civil society, business, and academia encounter a similar participatory paradox.
Changes in the global architecture may have a two-fold impact on SIDS. Firstly, the proliferation of digital forums could further strain their already stretched participation capacity. Secondly, the GDC may propose new forms of participation reflecting the specificities of small actors with limited resources. For any future digital governance architecture to work, it will be important for SIDS and other small actors, from businesses to civil society, to be able to have stronger voices.
The IGF debates indicated the renewed relevance of the WSIS process ahead of its review in 2025. The G77 is particularly keen to base GDC negotiations on the WSIS Tunis Agenda and the Geneva Declaration of Principles, as stated in the recently adopted G77 Havana Declaration. The G77 argued for a triangulation of digital governance structures among Agenda 2030, WSIS, and the GDC.
Whatever policy outcomes will be reflected in the GDC and the WSIS+20 review, the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments. More attention should also be given to involving missing communities in IGF debates. The IGF Plus approach was mentioned in discussions in Kyoto.
In Kyoto, international organisations fuelled the race for AI mandates, each seeking to secure a place in the developing frameworks for handling AI. According to Diplo’s analysis of AI in IOs, almost every UN organisation has some AI initiative in place.
In the emerging AI era, many organisations are faced with existential questions about their future and how to manage new policy issues. The primary task facing the UN system and its member states in the upcoming years will be managing the race to put an AI mechanism in place. Duplication of effort, overlapping mandates, and the inevitable confusion when addressing the impact of AI could impede effective multilateralism.
3. How to use IGF’s wealth of data for an AI-supported, human-centred future?
The immense amount of data accumulated through the IGF over the past 18 years is a public good that belongs to all stakeholders. It presents an opportunity for valuable insights when mined and analysed effectively, with AI applications serving as useful tools in this process.
Highlights
The IGF has accumulated a vast repository of knowledge generated by the discussions at the annual forum and its communities over the years (e.g. session recordings and reports; documents submitted for public consultation; IGF messages and annual reports; outputs of youth and parliamentary tracks, best practice forums, policy networks, and dynamic coalitions; summaries of MAG meetings; reports from national, regional and youth IGF initiatives). But this is an underutilised resource that could be used to build a sustainable, inclusive, and human-centric digital future.
AI can increase the effectiveness of disseminating and utilising the knowledge generated by the IGF. It can also help identify underrepresented and marginalised groups and disciplines in the IGF processes, allowing the IGF to increase its focus on involving them.
Moreover, AI can assist in managing the busy schedule of IGF sessions by linking them to similar discussions from previous years, aiding in coordinating related themes over time. It can visually represent hours of discussions and extensive content as a knowledge graph, as demonstrated by Diplo’s experiment with AI-enhanced reporting at IGF2023.
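As a toy illustration of the knowledge-graph idea, one could link any two sessions that share a topic tag; all session titles and tags below are invented for the example, not drawn from the IGF programme:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical sample data: session titles mapped to the topic tags they cover.
sessions = {
    "AI governance options": {"ai", "governance"},
    "Data governance dilemmas": {"data", "governance"},
    "Narrowing the digital divide": {"inclusion", "data"},
}

def build_topic_graph(sessions):
    """Link any two sessions that share at least one topic tag."""
    edges = defaultdict(set)
    for (a, tags_a), (b, tags_b) in combinations(sessions.items(), 2):
        if tags_a & tags_b:  # shared topic -> connect the two sessions
            edges[a].add(b)
            edges[b].add(a)
    return dict(edges)

graph = build_topic_graph(sessions)
```

In practice, edges could be weighted by topic overlap or extended across years to trace recurring themes from one IGF to the next.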
Importantly, preserving the IGF’s knowledge and modus operandi can show the relevance and power of respectful engagement with different opinions and views. Since this approach is not automatic in our time, the IGF’s impact could extend beyond internet governance and have a more profound effect on the methodology of global meetings.
4. How can risks of internet fragmentation be mitigated?
The escalating threat of fragmentation challenges the internet’s global nature. Geopolitical tensions, misinformation, and digital protectionism reshape internet governance, potentially compromising its openness. A multidimensional approach is crucial to understanding and mitigating fragmentation. Inclusive dialogue and international norms play a vital role in reducing these risks.
Highlights
Internet fragmentation would pose significant challenges to the global and interconnected nature of the internet. It would hinder communication, stifle innovation, and undermine the intended functioning of the internet. Throughout the week, different sessions tackled these issues and how to reduce the risks of internet fragmentation.
The internet, as we know it, cannot be taken for granted any more. Geopolitical tensions, the weaponisation of the internet, dis- and misinformation, and the pursuit of digital sovereignty through protectionism could potentially fracture the open nature of the internet. The same can be said for restrictions on access to certain services, internet shutdowns, and censorship.
One way of examining the risks is to look at the different dimensions of fragmentation – fragmentation of the user experience, of the internet’s technical layer, and of internet governance and coordination (explained in detail in this background paper) – and the consequences each of them carries.
Policymakers can also use this approach to create a cohesive and comprehensive regulatory framework that does not lead to internet fragmentation (for instance, a layered approach to sanctions can help prevent unintended consequences like hampering internet access). In fact, state control over the public core of the internet and its application layer is a major concern. Different technologies operate at several layers of the internet, and different entities manage those distinct layers.
Disruptions in the application layer could lead to disruptions in the entire internet. Therefore, governance of the public core calls for careful consideration, a clear understanding of these distinctions, and deep technical knowledge.
International norms are critical to reducing the risk of fragmentation. International dialogue in forums like the IGF is invaluable for inclusive discussions and contributions from diverse stakeholders, including different perspectives about fragmentation between the Global North and Global South.
Countries pursue their policies at the national level, but they also need to be mindful of harmonising them with regulatory frameworks that have extraterritorial reach. In developing national and regional regulatory frameworks, it is indispensable to elicit multistakeholder input, particularly considering the perspectives of marginalised and vulnerable communities. Public policy functions cannot be entrusted entirely to private corporations (or even governments). The involvement of technical stakeholders in public policy processes is essential for sound, logical, informed decision-making and improved governance that protects the technical infrastructure.
5. What challenges arise from the negotiations on the UN treaty on cybercrime?
As negotiations on the new UN cybercrime treaty enter the last mile, they were among the most prominent topics at IGF2023. The broad scope of the current draft of the UN Cybercrime Treaty, the lack of adequate human rights safeguards, the absence of a commonly agreed-upon definition of cybercrime, and the uncertain role of the private sector in combating cybercrime are some of the crucial challenges addressed during the sessions.
Highlights
As the Main Session: Cybersecurity, Trust & Safety Online, and the session Risks and Opportunities of a new UN Cybercrime Treaty noted, provisions to ensure human rights protection seem blurred. The wide discretion left to states in adopting the provisions related to online content, among others, could leave plenty of wiggle room for authoritarian regimes to target and arbitrarily prosecute activists, journalists, and political opponents. Additionally, retaining personal data from individuals accused of an alleged cybercrime offence could open the door for the misuse and infringement of their right to privacy.
Provisions regarding cybercrime offences need to be clarified, too, as there is no commonly agreed-upon definition of cybercrime. For now, it is clear that we need to separate cyber-dependent serious crimes (like terrorist attacks using autonomous cyberweapons) from cyber-enabled actions (like online speech) that help commit crimes and violate human rights. Additionally, there is a need to overcome cybercrime impunity, especially in cases where states are unwilling or unable to combat it.
International cooperation between states and the private sector is yet another aspect that members have to agree on. Essentially, there is a need to ensure more robust and comprehensive provisions to address capacity development and technical assistance. It was noted that these provisions should facilitate cooperation across different legal jurisdictions and promote relationships with law enforcement agencies.
The role of the private sector is another stumbling block in the negotiations. The proposed provisions put the private sector in a rather challenging position, as companies would have to comply with the laws of different jurisdictions. This means that conflicts of laws, including with existing international instruments such as the Budapest Convention, would be inevitable and would somehow need to be reconciled.
What if states cannot agree on an international treaty? Well, there are still ways to strengthen the fight against cybercrime. Options include establishing a database of cybersecurity experts for knowledge sharing, pooling knowledge for capacity development, expanding the role of organisations like INTERPOL, and encouraging states and businesses to allocate more resources to strengthen their cybersecurity posture.
Has the UN Cybercrime Treaty draft opened Pandora’s box? It always depends on how someone perceives it. What is clear from the sessions discussed is that many challenges need to be addressed as the ‘deadline’ for the UN Cybercrime Treaty approaches.
6. Will the new global tax rules be as effective as everyone is hoping for?
Over the years, the growth of the digital economy – and how to tax it – has led to major concerns over the adequacy of tax rules. The IGF discussion focused on the necessity for clear and open dialogues on digital taxation, and for a just and equitable distribution of tax revenue. There are hurdles to implementing effective taxation measures. The involvement of a wider range of stakeholders could be pivotal in shaping workable solutions for taxing the businesses of tech titans.
Highlights
Global tax rules could ameliorate the unfair consequences of tax havens, provide consistent approaches to allocating profits, and reduce uncertainty for multinational companies. The OECD/G20 made significant steps in this direction: In 2021, over 130 countries came together to support a new two-pillar solution. This will introduce a 15% effective minimum tax rate in most jurisdictions and will oblige multinationals to pay tax in countries where their users are located (rather than where they have a physical presence). In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.
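The mechanics of the 15% minimum tax can be illustrated with a simplified top-up calculation. This is only a sketch: the actual OECD Pillar Two rules involve jurisdictional blending, substance-based carve-outs, and other adjustments omitted here.

```python
MIN_RATE = 0.15  # global minimum effective tax rate under the two-pillar solution

def top_up_tax(profits: float, taxes_paid: float) -> float:
    """Simplified top-up tax: if the effective tax rate (ETR) in a
    jurisdiction falls below 15%, the shortfall is taxed on the full
    profit base. Rounded to 2 decimals to sidestep float noise."""
    etr = taxes_paid / profits
    if etr >= MIN_RATE:
        return 0.0
    return round((MIN_RATE - etr) * profits, 2)

# A multinational books 1,000m of profit in a jurisdiction taxing it at 5%:
print(top_up_tax(1000.0, 50.0))  # -> 100.0
```

The intuition: shifting profits to a 5% jurisdiction no longer saves the full difference, because a 10-point top-up is collected elsewhere, blunting the appeal of tax havens.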
For these models to be effective, they need to fully counter the scenarios that have, in the past, allowed multinationals to reduce their tax bills. First, multinational corporations have traditionally shifted profits to low-tax jurisdictions, which has deprived countries in the Global South of their fair share of tax revenue. Second, neither of the two frameworks addresses the issue of tax havens directly (although the minimum tax will help mitigate this issue). Third, the OECD and UN models do not fully take into account the power dynamics between countries in the Global North (which has historically been in the lead in international tax policymaking) and the Global South.
Until recently, countries in the Global South felt these measures alone were insufficient to ensure tax justice. They, therefore, opted to adopt various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services.
Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious, opining that these countries should carefully consider all implications before signing international tax treaties and perhaps even sign these treaties only after they see their effects play out.
7. How can misinformation be addressed and digital communication protected during times of war?
In the midst of ongoing conflicts, new concerns about the impact of misinformation have arisen. The primary concern is how this impacts civilians residing in volatile regions. Misinformation adds to confusion, division, and physical and psychological distress, especially for civilians caught in the middle.
Digital communication also has a decidedly operational role in conflict situations, completely different from any military use. It should provide secure communication to reach and inform those in need. The security and robustness of digital networks therefore become critical in ensuring humanitarian assistance.
Highlights
The old wisdom that the truth is the first victim of war has been amplified by digital technology. The session Safeguarding the free flow of information amidst conflict explained how disseminating harmful information can exacerbate pre-existing social tensions and grievances, leading to increased violence and violations of humanitarian law.
The spread of misinformation can cause distress and psychological burdens among individuals living in conflict-affected areas. Misinformation hampers their ability to access potentially life-saving information during emergencies. The distortion of facts and the influence on beliefs and behaviours as a consequence of disseminating harmful information also raise tensions in conflict zones.
In times of peace, experts advocate for a multi-faceted approach to addressing misinformation in conflict zones. In times of war, the immediate concerns focus primarily on ensuring the safety and well-being of civilians. If communication channels are disrupted, the spread of misinformation can be even more dangerous.
In these situations, humanitarian organisations and tech companies must work together to establish secure channels and provide accurate information to those in need. Additionally, efforts should be made to counter cyber threats and protect critical infrastructure. In fact, with the growing reliance on a shared digital infrastructure, civilian entities are more likely to be inadvertently targeted through error or tech failure. The interconnectedness of digital systems means that an attack on one part of the infrastructure can have far-reaching consequences, potentially affecting civilians who are not directly involved in the conflict zone.
The involvement of international organisations and governments is essential in coordinating these efforts and ensuring that humanitarian principles are upheld. Special consideration should also be given to the safety and protection of those working in the digital infrastructure sector during times of conflict.
8. How can data governance be strengthened?
Organised, transparent data governance is crucial in today’s digital landscape and requires clear standards for coherence and consistency, an enabling environment requiring effort, trust, and adaptability from all sectors, and public-private partnerships for addressing critical issues. Intermediaries play a key role in bridging gaps. The Data Free Flow with Trust (DFFT) concept, introduced by Japan in 2019, also promises to strengthen data governance by enabling global data flows while ensuring security and privacy.
Highlights
Data governance plays a critical role in ensuring the effective and responsible use of data, especially in today’s digital age. Discussions during an open forum on public-private partnerships served to identify important measures that can help improve or expand upon existing data governance approaches.
First, clear standards and operating procedures can promote coherence and consistency in data governance. The lack of coherence is one of the main reasons for underwhelming private sector contributions. By defining and implementing robust standards, both the public and private sectors could have a common framework to work upon, facilitating collaboration and maximising the potential for data-driven initiatives.
Second, an enabling environment is essential for effective data governance. This environment requires time, effort, proof-of-concept, trust, and adaptability. Creating such an environment necessitates the involvement of all sectors – public, private, and civil society.
Third, public-private initiatives are crucial to helping bridge data gaps related to critical issues like climate change, poverty, and inequality. Collaboration between the public and private sectors allows for the pooling of resources, expertise, and knowledge, enabling a more holistic approach to addressing these challenges.
Successful public-private partnerships require investment, time, and trust-building efforts. Parties involved must dedicate time to cultivating relationships and fostering mutual understanding. This may include the participation of dedicated individuals from both the private sector and governmental organisations. Their active presence can facilitate effective communication, coordination, and alignment of goals, leading to fruitful collaborations.
Related to public-private initiatives is the role of intermediaries or brokers, who help bridge the skills and capacity gaps between sectors by combining expertise and resources to drive collaboration and support the achievement of the sustainable development goals.
The sustainability of public-private partnerships also depends on the size and global reach of the involved entities. For instance, large firms with global reach are well-positioned to enable such partnerships. They possess the necessary resources, capabilities, and networks to maintain and nourish relationships, ensuring long-term viability and impact in driving sustainable development.
Much was also said about Data Free Flow with Trust (DFFT) – a concept first championed by Japan during the G20 summit in 2019 – which aims to strengthen data governance by facilitating the smooth flow of data worldwide while ensuring data security and privacy for users. Speakers in High-Level Leaders Session I: Understanding Data Free Flow with Trust (DFFT) emphasised how the DFFT concept can help strengthen data governance in additional ways. It provides a framework for harmonising and aligning the different national or regional perspectives, encourages public-private data partnerships, and promotes using regulatory and operational sandboxes as practical solutions to foster good governance among stakeholders.
9. How can the digital divide be bridged?
Although discussions on bridging the digital divide might seem repetitive, the persistence of this topic is warranted by the stark reality revealed in the latest data from the International Telecommunication Union (ITU): approximately 5.4 billion people are using the internet. That leaves 2.6 billion people offline and still in need of access.
Highlights
In the pursuit of universal and meaningful connectivity, precise data tracking emerges as a cornerstone for informed decision-making. Data tracking equips stakeholders with the insights needed to identify areas requiring attention and improvement. Through a blend of quantitative indicators (numerical data and statistical analysis) and a qualitative approach (subjective assessments, such as in-depth case studies), a comprehensive connectivity assessment is achieved, facilitating effective individual country evaluations.
What needs to be improved? While the data collection efforts of international organisations, especially the ITU and UNESCO, are complementary, they are often not perfectly coordinated. Other areas for improvement include the lack of quality data on how communities use the internet, a lack of reliable indicators for safety, security, and speed, and the reality that rural regions may not be fully reflected in the data collected.
There are several solutions, from regional collaboration and initiatives to the utilisation of emerging technologies.
One proposed approach to expanding internet access involves utilising Low Earth Orbit (LEO) satellites. LEO satellites offer the potential to deliver real-time and reliable internet connectivity to remote or hard-to-reach regions worldwide. Nevertheless, several concerns have surfaced, primarily concerning the cost of accessing such services, their environmental impact, and the technical challenges associated with large-scale LEO satellite deployment.
For LEO satellites to be used and deployed effectively, countries need to review their laws, ensure they are in line with international space law, and get involved in international decision-making bodies like the ITU and COPUOS to help shape supportive policies and rules.
To bridge the digital divide, it is essential to address various factors and develop comprehensive strategies that go beyond connectivity. There is a need for digital solutions customised to fit specific local environments. These strategies must address issues regarding the affordability and availability of devices and technologies and the availability of content and digital skills, as these deficiencies still pose barriers to full internet access.
In the broader context of the digital divide, AI and large language models (LLMs) were highlighted as having the potential to redefine and expand digital skills and literacy. Moreover, including native languages in these models can enable digital interactions, particularly for individuals with lower literacy skills.
The goal of bridging the digital divide can only be achieved through partnerships and collaborations embodied in regional initiatives. Thus, Regional Internet Registries (RIRs) have an important role, particularly in regions that are underserved or have limited access to internet resources.
RIRs often go beyond their narrow mandates in the allocation and registration of internet number resources within a specific region of the world. RIRs have facilitated collaboration and knowledge sharing by adopting a multistakeholder and regional approach, leading to a more connected and equitable internet landscape.
One of the RIRs’ main strengths is building community trust. This trust has been established through their work on regional and local issues such as connectivity and support for community networks and Internet Exchange Points (IXPs).
The EU’s initiative, the Global Gateway, was identified as a good example of a collaborative effort to bridge the digital divide. Notable efforts under the project involve forging alliances with countries in Latin America and the Caribbean, implementing the Building the Europe Link to Latin America (BELLA) program for fibre optic cables, establishing regional cybersecurity hubs and strengthening the overall digital ecosystem.
10. How do digital technologies impact the environment?
We’ve broken too many environmental records this year. June, July, and August 2023 were the hottest three months ever documented; September 2023 was the hottest September ever recorded; and 2023 is firmly set to be the warmest year on record. Global temperatures will likely surge to record levels in the next five years. Therefore, the discussion of the overall impact of digital technologies on the environment at the IGF was particularly critical.
Internet use comes with a hefty energy bill, even for seemingly small things like sending texts – it gobbles up data and power. In fact, the internet’s carbon footprint amounts to 3.7% of global emissions.
Devices worldwide – over 6.2 billion of them – need frequent charging, contributing to significant energy consumption. Some of these devices also perform demanding computational tasks requiring substantial power, driving consumption up further. Moreover, the rapid pace of electronic device advancement and devices’ increasingly shorter lifespans have exacerbated the e-waste problem.
In contrast, digital technologies also have the potential to cut emissions by 20% by 2050 in the three highest-emitting sectors – energy, mobility, and materials. 2050 is a bit far away, though, and immediate actions are critically needed to hit the 2030 Agenda targets.
What can we do? To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about our available sustainable sources and establish standards for their use. If we craft and implement policies right from the inception of a new technological direction, we can create awareness among innovators and start-up stakeholders about its carbon footprint to ensure environmentally-conscious design.
Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) in setting technology standards and promoting ethical practices, particularly concerning AI and its environmental impact, as well as collaboration among organisations like GIZ, the World Bank, and ITU in developing standards for green data centres, highlight how working together globally is imperative for sustainable practices.
We can also harmonise measurement standards to track the environmental impacts of digital technologies. This will enable policymakers and stakeholders to develop more effective strategies for mitigating the negative impacts.
To analyse the discussions at IGF, we first recorded them. The total footage runs to almost 15 days: 14 days, 21 hours, 22 minutes, and 30 seconds, to be precise. Talk about a packed programme!
Then we used DiploAI to transcribe IGF2023 discussions verbatim and counted 3,242,715 words spoken. That is nearly three times the length of the longest book in the world – Marcel Proust’s À la recherche du temps perdu. If an IGF 2023 book of transcripts were published, an average reader, reading 248 words per minute, would need 217 hours – that’s nine days! – to read it cover to cover.
Using DiploAI, we analysed this text corpus and extracted key points totalling 288,364 words. DiploAI then distilled the essence of the discussions and the most frequently spoken words. The 10 most mentioned words were: AI, internet, data, support, government, importance, technology, issue, regulation, and global. It is interesting to note that the 11th most mentioned word was digital.
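The reading-time estimate is straightforward arithmetic; here is a minimal sketch, assuming a reading speed of 248 words per minute (consistent with the 217-hour total quoted above):

```python
WORDS_SPOKEN = 3_242_715  # words transcribed from IGF2023 sessions
READING_WPM = 248         # assumed average reading speed (words per minute)

minutes = WORDS_SPOKEN / READING_WPM
hours = minutes / 60
days = hours / 24

print(f"Reading time: {int(hours)} hours (~{days:.0f} days)")
# -> Reading time: 217 hours (~9 days)
```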
Prefix monitor
Other prefixes followed a pattern similar to that of the previous three years.
Digital was still the most used prefix, with a total of 8,661 references. This is a 62% increase in frequency compared to IGF 2022, when it was referenced 5,346 times.
Online and cyber took 2nd and 3rd places, respectively, with 3,682 and 3,532 mentions. While cyber remained in third place, there was a 98% increase since last year, when it was mentioned 1,789 times.
The word tech came in 4th place, as it did last year – a significant drop from 2021, when it held the 2nd spot.
Finally, virtual remained in 5th place, accounting for 2.5% of the analysed prefixes.
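A prefix count like the one above can be reproduced with a few lines of Python. This is a sketch with an illustrative matching rule (counting a prefix wherever a word begins with it, so cyber matches cybersecurity), not the exact methodology behind the monitor.

```python
import re
from collections import Counter

PREFIXES = ["digital", "online", "cyber", "tech", "virtual"]

def count_prefixes(text: str) -> Counter:
    """Count how often each monitored prefix appears, case-insensitively,
    matching the prefix at the start of a word."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for word in words:
        for prefix in PREFIXES:
            if word.startswith(prefix):
                counts[prefix] += 1
    return counts

def pct_change(current: int, previous: int) -> float:
    """Year-over-year change in prefix frequency, in percent."""
    return (current - previous) / previous * 100

# The 2022 -> 2023 jump for 'digital' (5,346 -> 8,661 references):
print(round(pct_change(8661, 5346)))  # -> 62
```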
Diplo and GIP at IGF 2023
Reporting from the IGF: AI and human expertise combined
With 300+ sessions and 15 days’ worth of video footage featuring 1,240 speakers and 16,000 key points, IGF2023 was the largest and most dynamic IGF gathering so far. For the 9th consecutive year, the GIP and Diplo provided just-in-time reports and analyses from the discussions. This year, we added our new AI reporting tool to the mix. Diplo’s human experts and AI tool work together in this hybrid system to deliver a more comprehensive reporting experience.
This hybrid approach consists of several stages:
Online real-time recording of IGF sessions. First, our recording team set up an online recording system that captured all sessions at the IGF.
Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker contributed is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session and individual speakers. These graphical representations mapped the intricate connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis.
Writing dailies. Our team of analysts used AI-generated reports to craft comprehensive daily analyses.
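The stages above can be sketched as a toy pipeline. All function names and data here are illustrative stand-ins, not Diplo’s actual implementation; the AI transcription and report-generation steps are reduced to simple text processing.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str
    text: str

def transcribe(recording: str) -> list:
    """Stand-in for the AI transcription step: here we just parse
    pre-labelled 'Speaker: text' lines from a session transcript."""
    segments = []
    for line in recording.strip().splitlines():
        speaker, _, text = line.partition(": ")
        segments.append(Segment(speaker, text))
    return segments

def draft_report(segments: list) -> dict:
    """Stand-in for AI report generation: group key points by speaker,
    ready for a human analyst to edit into the daily."""
    report = {}
    for seg in segments:
        report.setdefault(seg.speaker, []).append(seg.text)
    return report

raw = """Moderator: Welcome to the session.
Panellist: AI safety needs global coordination."""
report = draft_report(transcribe(raw))
print(report["Panellist"])  # -> ['AI safety needs global coordination.']
```

The design point is the hand-off: machines produce speaker-attributed raw material at scale, and humans do the final analytical writing.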
You can see the results of that approach – session reports and dailies – on our IGF2023 Report page.
You are presently reading the culmination of our efforts: the top highlights from the discussions at IGF2023. These debates are presented in a Q&A format, tackling the Global Digital Compact (GDC), AI, concerns about internet fragmentation, negotiations on cybercrime, digital taxation, misinformation, data governance, the digital divide, and climate change.
Diplo crew in Kyoto
Diplo and the GIP were actively engaged at IGF2023, organising and participating in various sessions.
8-12 October
Diplo and GIP booth at IGF 2023 village
Sunday, 8 October
Bottom-up AI and the right to be humanly imperfect (organised by Diplo) | Read more
Tuesday, 10 October
How to enhance participation and cooperation of CSOs in/with multistakeholder IG forums (co-organised by Diplo) | Read more
Wednesday, 11 October
Ethical principles for the use of AI in cybersecurity (participation by Anastasiya Kazakova) | Read more
Thursday, 12 October
IGF to GDC- An Equitable Framework for Developing Countries (participation by Sorina Teleanu) | Read more
Thursday, 12 October
ICT vulnerabilities: Who is responsible for minimising risks? (co-hosted by Diplo) | Read more
Next Steps?
Start preparing for IGF 2024 by following Digital Watch coverage of governance topics, actors, and processes.
As the conflict in the Middle East unfolds, and the world watches closely, those relying on social media for updates are left confused over what’s real and what’s not. This may be just the beginning of an age dominated by mis- and disinformation. In other news, there are new AI guidelines in the pipeline, while the EU has unveiled plans for a Digital Networks Act (which we’ll cover when things solidify a bit more).
Let’s get started.
Stephanie and the Digital Watch team
PS. Due to a technical glitch, this issue has been published a bit later than usual. Our apologies.
// HIGHLIGHT //
How the Middle East crisis is being (mis-)reported online
In recent days, as people have been grappling with the violence unfolding in Israel and Gaza, social media platforms have become inundated with graphic images and videos of the conflict. Without diminishing the gravity of what’s happening in the Middle East and the need to make it known, the problem with such social media content is that some of it is fake.
What’s fake, exactly? There’s a distinction between reporting something that didn’t happen and repurposing visuals from other conflicts for stronger impact. From a production point of view, there’s something sinister and malicious in fabricating a lie; during wartime, this is meant to raise alarm and stir up animosity. Reporting the truth but attaching a fake image is theoretically less sinister – although it is still a lie, and it can fuel confusion, hostility, and public safety risks, and harm civil discourse among those who consume it.
Additionally complex. In some cases, the issue is more complex than this. Perpetrators go to the trouble of creating fake accounts, and of circulating uncaptioned imagery, leaving it to readers to draw their own conclusions. In this way, they can tap into biases and powerful emotions, such as fear, without having to take responsibility for the level of truthfulness of the content.
The worst part. Most parts of the world have taken sides. Polarisation has reached unprecedented heights. When individuals decide to condone a violent action (or not) based on whether an image really originates from their adversaries rather than their favoured faction, that brings out the worst in people. We won’t go into the gory details: Killing innocent children is an atrocity, regardless of who’s behind it – or whether a report has attached the correct image to it.
Where it’s happening. Misinformation is as old as humanity and decades old in its current recognisable form, but social media has amplified its speed and scale. To say that online misinformation spreads like wildfire is an understatement. The challenge is compounded when shared by people with large followings. This could also happen if the press falls victim to the misinformation that’s flowing into newsrooms at a staggering scale.
Deprioritised. Earlier this year, Meta, Amazon, Alphabet, and Twitter laid off many of the team members focusing on misinformation and hate speech. This was part of a post-COVID-19-induced restructuring aimed at improving financial efficiency.
The EU takes action: X, Google, Meta, TikTok ordered to remove fake content
It didn’t take long for European Commissioner Thierry Breton to request that X, YouTube (Google), Facebook (Meta), and TikTok take down fake content.
Each letter reminded the platforms of their obligations under the new Digital Services Act (DSA), including prompt responses to take-down requests by law enforcement. In X, Facebook, and TikTok’s case, the Commissioner gave the platforms 24 hours to respond.
The case of X. In TikTok and Microsoft’s case, things went more or less quiet. In X’s case, CEO Linda Yaccarino responded to complaints, confirming the removal of hundreds of Hamas-linked accounts and the removal or flagging of thousands of pieces of content – but this was either an unsatisfactory response or a predictable course of events that prompted the European Commission, just a day later, to send X a formal request for information. Breton described the development as ‘a first step in our investigation to determine compliance with the DSA’, hinting that resolving things will require more than a handful of exchanged letters.
Elections in sight. The immediate worry may well be the Middle East conflict, but the longer-term worry is the numerous elections in 2024 – from the EU’s parliament and those in European countries, to the US presidential elections. It’s a concern that affects many countries.
The restructuring may prove costlier for those platforms laying off disinformation teams to save money.
Digital policy roundup (9–16 October)
// AI GOVERNANCE //
G7 to agree on AI guidelines by year’s end, Japan PM confirms
Japan has confirmed that G7 leaders will agree, by the end of the year, on international guidelines for users of AI systems, as well as non-binding rules and a code of conduct for their developers. This was announced by Prime Minister Fumio Kishida during last week’s Internet Governance Forum (IGF) in Kyoto.
The texts form part of the Hiroshima AI Process, which was kickstarted during May’s G7 summit, held in Hiroshima. The upcoming summit will take place online.
Why is it relevant? There has been a lot of anticipation for the G7 rules on AI, even though they are non-binding. Japan, the current G7 president, will want to see its plans through by the end of the year, before it passes the baton to Italy.
ASEAN eyeing business-friendly AI rules
Southeast Asian countries are taking a business-friendly approach to AI regulation, according to a leaked draft text. The Association of Southeast Asian Nations (ASEAN) draft guide to AI ethics and governance asks companies to consider cultural differences and doesn’t prescribe categories of unacceptable risk.
The guide is voluntary and meant to guide domestic regulations. ASEAN’s hands-off approach is seen as more business-friendly, as it limits the compliance burden and allows for more innovation.
Why is it relevant? The EU has been discussing AI rules with countries in the region in a bid to convince them to follow its approach. But ASEAN’s approach clearly goes against the EU’s push for globally harmonised binding rules and is more aligned with other business-friendly frameworks.
// ANTITRUST //
Done deal: Microsoft’s acquisition of Activision Blizzard is approved
Microsoft has completed its USD68.7 billion acquisition of video game producer Activision Blizzard after the UK’s Competition and Markets Authority (CMA) approved the deal. The approval was granted after Microsoft presented the CMA with a restructured agreement under which the company would transfer cloud streaming licensing rights to Ubisoft, an offer the CMA had already said addressed its previous concerns.
The EU had already given the green light to the merger in May, but media reports said the European Commission was deciding whether it would look further into the restructured deal. Now it seems the European Commission won’t pursue this after all. However, the US Federal Trade Commission (FTC) intends to look into the licensing agreement Microsoft signed with Ubisoft.
Why is this relevant? Microsoft’s acquisition of Activision has been controversial. It’s the most expensive acquisition yet by Big Tech, so due to its scale, regulators feared it could hurt competition and give Microsoft too much power in the gaming market. European regulators are satisfied, but will these approvals solve the bigger problem of Big Tech accumulating ever more power?
IRS audit: Microsoft faces potential USD28.9 billion tax bill
The US Internal Revenue Service (IRS) has notified Microsoft that it owes USD28.9 billion in back taxes, penalties, and interest, covering the period 2004–2013 (nowhere near the USD160 million (GBP136 million) the company just paid the UK’s tax authority). The audit, which has been ongoing for over a decade, focuses on a deal in which Microsoft transferred intellectual property to a factory in Puerto Rico for more favourable tax treatment.
Microsoft says the taxes it has already paid could decrease the final tax owed under the audit by up to USD10 billion. The company plans to appeal the IRS’ conclusions, and the case is expected to continue for several more years.
Why is it relevant?
First, it’s the largest audit in US history. The IRS may be looking at the Microsoft case as a chance to prove the agency’s effectiveness in being more aggressive against corporations with endless resources.
Second, it’s yet another example of Big Tech shifting income to low-tax jurisdictions specifically to lower their tax bill.
Third, it coincides with the OECD’s latest inroad into its overhaul of global tax rules: The OECD has just published the text of a multilateral convention to implement the so-called Amount A of Pillar One. In simpler terms, this part of the new global rules will oblige some of the largest tech companies in the world to pay tax where their users are located, rather than where their corporate offices are based.
The week ahead (16–23 October)
16–17 October: This year’s International Regulators Forum is being hosted in Cologne, Germany. The Small Nations Regulators Forum takes place tomorrow.
18 October: The US Federal Communications Commission meets on Wednesday to decide whether to kickstart the rulemaking process to restore the net neutrality rules it introduced in 2015 (repealed in 2017).
18–19 October: The Organization of American States (OAS) Cyber Symposium 2023 takes place in The Bahamas. It’s organised in partnership with the National CIRT of The Bahamas.
20 October: The 27th EU-US Summit, in Washington DC, will bring together US President Joe Biden, European Council President Charles Michel, and European Commission President Ursula von der Leyen to talk about cooperation in areas including AI and digital infrastructure.
21–26 October: ICANN78 takes place in Hamburg, Germany, starting Saturday. It will be the organisation’s 25th Annual General Meeting.
#ReadingCorner
Google proposes framework for protecting kids online ‘Appropriate safeguards can empower young people and help them learn, connect, grow, and prepare for the future.’ This is how Google introduces its new framework for child safety, which tells policymakers how the company views existing and proposed rules concerning, for instance, age verification, parental consent, and personalised content. Read the blog and framework, published earlier today.
G7 countries have agreed to create an international code of conduct for AI that would establish principles for overseeing and controlling advanced forms of AI. In the same vein, Japan (which currently chairs the G7) and Canada have published voluntary codes of conduct for companies developing AI.
This initiative fits a recent trend of relying on optional guidelines until regulations are adopted.
The International Committee of the Red Cross (ICRC) has issued eight rules of engagement for hackers taking part in conflicts, warning them that their actions can endanger lives. Among other things, the rules prohibit cyberattacks targeting civilians, hospitals, and humanitarian facilities, as well as the use of malware or similar tools that could harm both military and civilian targets.
Infrastructure
The US Federal Communications Commission (FCC) plans to restore the net neutrality rules repealed in 2017. FCC Chairwoman Jessica Rosenworcel announced that the FCC proposes to reclassify broadband under Title II of the US Communications Act. This would give the FCC greater authority to regulate internet service providers, including the power to stop carriers from slowing down or speeding up internet traffic to certain websites.
Huawei, the Chinese tech giant, has filed a lawsuit in a Lisbon court against a resolution of Portugal’s Cybersecurity Council (CSSC) that bars operators from using its equipment in high-speed 5G mobile networks.
Internet economy
The European Commission has designated six companies – Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft – as gatekeepers under the Digital Markets Act (DMA), following a 45-day review process. The designation covers a total of 22 core platform services provided by these companies.
Elsewhere, Amazon won a temporary victory in a case concerning its classification as a very large online platform (VLOP). The Court of Justice of the European Union (CJEU) in Luxembourg, responding to Amazon’s request, granted interim measures, deferring some of the company’s obligations under the Digital Services Act (DSA). The decision comes as strict measures under the EU’s DSA take effect, affecting 19 large online platforms and search engines.
The (alleged) anticompetitive practices of large companies were in the spotlight last month. The US Federal Trade Commission (FTC) and 17 state attorneys general sued Amazon for alleged anticompetitive conduct. The US Department of Justice’s lawsuit against Google, one of the biggest antitrust cases in decades, began on 12 September 2023. The trial focuses on Google’s search business, which is allegedly ‘anticompetitive and discriminatory’, enabling the company to maintain a monopoly in the digital advertising market. In another case also involving Google, the company announced a provisional settlement in the USA over monopoly allegations concerning the Play Store app platform.
The Irish Data Protection Commissioner confirmed a EUR345 million (USD370 million) fine against TikTok for breaching European privacy laws in its handling of children’s personal data. The US National Telecommunications and Information Administration (NTIA) is seeking public input on the internet’s risks to children and how to mitigate them.
The Data Governance Act, central to the European data strategy, entered into force on 24 September 2023. Its main goal is to facilitate the secure exchange of data across sectors and among EU member states, notably by improving the use of non-public-sector data.
Reporters Without Borders (RSF) has called on the public to take part in drafting its AI Charter, to clarify the journalistic community’s position on the extensive use of AI technologies in the field.
Norway’s data protection authority hopes to extend its daily fines of NOK1 million (USD93,000) against Meta for privacy violations to the entire EU and European Economic Area (EEA). It is now up to the European Data Protection Board (EDPB) to assess the situation.
Content policy
A US federal appeals court extended the limits on the Biden administration’s communications with social media platforms to include the US Cybersecurity and Infrastructure Security Agency (CISA). The ruling significantly curtails the ability of the White House and government agencies to engage with social media platforms on content moderation issues.
The EU published its Digital Decade report, which recommends measures for reaching the Digital Decade targets by 2030.
New ITU data shows that global internet access improved in 2023, with more than 100 million new users worldwide. The G77 summit adopted the Havana Declaration, which emphasises science, technology, and innovation, and outlines the G77’s future actions.
THE TALK OF THE TOWN – GENEVA
At the 54th session of the UN Human Rights Council (HRC), a panel discussed cyberbullying against children, examining the role of states, the private sector, and other stakeholders in combating cyberbullying and empowering children in the digital sphere. The Council also presented a summary report, from its 53rd session, on the role of digital, media, and information literacy in promoting and exercising the right to freedom of opinion and expression, and considered a report on the impact of new technologies intended for climate protection.
The WTO Public Forum 2023 focused on trade’s role in promoting an environmentally friendly future, notably under the theme ‘Digitalisation as a tool for greening supply chains’. More than 20 sessions were devoted to digital tools and their impacts.
The 8th session of the WIPO Conversation examined generative AI and intellectual property. Over two days, six panels addressed generative AI use cases, the regulatory landscape, ethical concerns about training data, authorship, ownership of creative work, and strategies for navigating intellectual property in generative AI.
In brief
Digital at UNGA 78
The general debate of the UN General Assembly (UNGA) is a global platform where world leaders gather to address some of the most pressing issues facing humanity. One such crucial topic is the impact of digital technologies.
During the general debate at UNGA 2023, 94 speakers, including the UN Secretary-General and representatives of the Holy See and the EU, addressed digital themes.
This figure (94) marks a significant increase from our first analysis in 2017, when 47 countries spoke on digital topics. Six years later, that number has doubled to 94. The sharp rise underlines the growing recognition, at the highest levels of diplomatic discourse, of the paramount importance of digital technologies.
In a broader context, discussions related to digital technology accounted for 2.51% of the entire text corpus produced during the UNGA 2023 speeches.
Overall number of speakers mentioning digital issues
The 2023 general debate saw a substantial increase in mentions of AI in national statements. Of the 467,130 words spoken during the debate, 6,279 concerned AI, confirming its position as the most frequently discussed digital topic. This surge of interest can be attributed, in part, to the widespread attention generated by the launch of ChatGPT.
AI featured in 39 speeches at UNGA 78, a testament to its growing importance. However, leaders also explored other digital topics, notably digital development (44), cybersecurity (23), content policy (7), economic considerations (4), and human rights (6).
AI. The rapid evolution of AI has raised concerns about its potential risks, from job displacement to cyberthreats. While some speakers highlighted AI’s transformative potential in healthcare and education, many stressed the need for ethical governance and international cooperation. A consensus emerged on the urgency of regulating AI, addressing its military applications, and establishing global standards. The UN’s role in facilitating these discussions and promoting responsible AI use was a recurring theme, with calls for a Global Digital Compact and the creation of an international AI agency.
Digital development. Leaders emphasised the need to bridge the digital divide, reduce inequalities, and ensure inclusive digital development. Many nations advocated international cooperation, through initiatives such as the Global Digital Compact, to tackle these challenges collectively. The importance of digital technologies in achieving the sustainable development goals and fostering global solidarity was a theme common to many leaders.
Cybersecurity. The evolving landscape of non-traditional security threats, particularly cybersecurity and cybercrime, was also debated. Leaders stressed the need for international cooperation and governance frameworks to counter cross-border cyberthreats, protect critical infrastructure, and combat cybercrime.
Content policy. Leaders addressed the worrying spread of disinformation and misinformation, amplified by AI and social media platforms. They highlighted the risks to democracy, as well as the rise in real-world violence and conflict fuelled by online hate speech and disinformation. Efforts to counter disinformation include proposals for a digital bill of rights and a code of conduct for information integrity on digital platforms.
Economy. The importance of embracing digital technology and fostering innovation to strengthen economies was underscored, as were efforts to reduce trade barriers, pursue free trade agreements, and transition to digital and green economies.
Human rights. Leaders voiced concerns about online surveillance, data collection, and human rights violations, and called for human-centred, rights-based approaches to developing and deploying technologies.
The world map highlights the countries that addressed digital issues at UNGA 78.
The EU’s digital and AI vision in 2023: von der Leyen’s address
In her 2023 State of the Union address, European Commission President Ursula von der Leyen set out her vision for Europe’s digital future, with particular emphasis on the role of AI. The speech highlighted Europe’s achievements in the digital arena, as well as the steps taken to meet the challenges and seize the opportunities presented by AI and digital technologies.
Von der Leyen delivering her address. Source: European Commission
Europe’s investment in digital transformation
President von der Leyen began by highlighting the importance of digital technology in simplifying business activities and everyday life. She noted that Europe had exceeded its target for investment in digital projects under NextGenerationEU, with member states using the funding to digitise key sectors such as healthcare, justice, and transport.
Managing digital risks and protecting fundamental rights
The president also acknowledged, however, the challenges posed by the digital world, including disinformation, harmful content, and risks to privacy. She noted that these problems erode trust and violate fundamental rights. To counter such threats, Europe has taken the lead in protecting citizens’ rights through legislative frameworks such as the DSA and the DMA, which aim to create a safer digital space and hold tech giants accountable.
The role of AI
President von der Leyen stressed AI’s potential to revolutionise healthcare, boost productivity, and combat climate change. But she also warned against underestimating the real threats AI poses. Citing the concerns of leading AI developers and experts, she underlined the importance of mitigating AI risks globally.
The three pillars of a responsible AI framework
The president outlined three key pillars for guiding Europe’s work towards a global AI framework: guardrails, governance, and steering innovation.
Guardrails: ensuring that AI development remains human-centred, transparent, and accountable. The AI Act, a comprehensive, innovation-friendly AI law, was presented as a blueprint for the whole world. The task now is to adopt the rules quickly and move on to implementation.
Governance: establishing a single governance system in Europe and working with international partners to create a global panel of experts for AI, similar to the Intergovernmental Panel on Climate Change (IPCC). This body would provide insights into AI’s impact on society and ensure globally coordinated responses.
Steering innovation: leveraging Europe’s leading role in supercomputing by giving AI start-ups access to high-performance computers to train their models. In addition, fostering an open dialogue with AI developers and companies is essential, along the lines of the voluntary safety, security, and trust rules adopted by major tech companies in the USA.
Ad Hoc Committee on Cybercrime: key takeaways from the 6th session
The 6th session of the Ad Hoc Committee on Cybercrime has concluded its work, but many questions remain open. With the final round scheduled for February 2024, states have still not agreed on whether the convention should use the term ‘cybercrime’ or ‘the malicious use of ICTs’.
The latest draft (updated on 1 September 2023) also sparked debate among states over the convention’s scope, with China and Russia concerned that the evolving landscape of information and communication technologies (ICTs) had not been adequately taken into account. On the criminalisation of offences, Russia stressed the need to penalise the use of ICTs for extremist and terrorist purposes and, together with Namibia and Malaysia, among other countries, supported including digital assets in the laundering of proceeds of crime. Some countries, including the UK and Australia, opposed their inclusion, arguing that digital assets fall outside the convention’s scope.
The human rights provisions raised concerns not only among states, but also among other stakeholders. Microsoft notably stated that the provisions in the latest draft could be ‘disastrous for human rights’. On data protection measures, South Africa, the USA, and Russia proposed allowing the collection of traffic data and the interception of content data. Singapore and Switzerland opposed the proposal, with the EU stressing that such measures threaten human rights and fundamental freedoms.
Negotiations on international cooperation also ran into difficulty, with Russia recalling the importance of distinguishing between where data is kept and where it is processed, stored, and transmitted, notably in cloud computing. To resolve the problem of the ‘loss of location’ of data, Russia proposed referring to the Second Protocol to the Budapest Convention. Countries such as Pakistan, Iran, China, and Mauritania, for their part, proposed including article 47 bis on cooperation between national authorities and service providers. In essence, this cooperation would cover the reporting of cybercrime offences as established by the convention, the sharing of expertise, training, the preservation of electronic evidence, and guarantees of confidentiality for requests received from law enforcement authorities.
An interesting proposal came from Costa Rica and Paraguay: to include the word ‘sustainability’ in articles 52 and 56, so as to provide effective assistance and address the societal impact of cybercrime. So the questions remain open. Is there a treaty yet? Have states agreed on the provisions? No. Will states hold their final round in February 2024? Yes. And if there is no consensus? The Bureau of the UN Office on Drugs and Crime (UNODC) will step in and confirm that decisions are taken by a two-thirds majority of representatives present and voting.
Coming up: IGF 2023
The 2023 edition of the Internet Governance Forum (IGF) will be held in Kyoto, Japan, on 8–12 October, under the theme ‘The Internet We Want – Empowering All People’.
The programme is built around eight sub-themes:
AI and emerging technologies;
avoiding internet fragmentation;
cybersecurity, cybercrime, and online safety;
data governance and trust;
digital divides and inclusion;
global digital governance and cooperation;
human rights and freedoms;
sustainability and the environment.
The Forum will feature some 300 sessions in a plethora of formats, including high-level sessions, main sessions, workshops, open forums, lightning talks, launches and awards, networking sessions, Day 0 events, dynamic coalition sessions, and national and regional initiative (NRI) sessions.
In addition, the IGF Village, where 76 exhibitors will present their work, will be open to visitors.
Stay up to date with GIP reporting!
The Geneva Internet Platform will be actively involved in IGF 2023, reporting from IGF sessions for the 9th consecutive year. This year, our human experts will be joined by DiploAI, which will generate reports on all IGF sessions.
We will also publish daily IGF reports throughout the week, and a final report will be released once the IGF concludes.
The OIF is organising a Francophone Digital Café on ‘Discoverability and cultural and linguistic diversity in the digital space’ for French-speaking delegations to the United Nations in New York
Following the publication of its Contribution to the Global Digital Compact (GDC) in April 2023, handed to the UN Tech Envoy on 3 May 2023, the Organisation internationale de la Francophonie (OIF) has set up the ‘Cafés numériques francophones’ (Francophone Digital Cafés). This twice-monthly gathering aims to raise awareness among French-speaking diplomats and digital experts at the UN Permanent Missions of the diplomatic implications of digital developments, to take regular stock of ongoing processes, to encourage Francophone consultation in New York and, ultimately, to foster better coordination of positions. This awareness-raising also forms part of the ‘D-CLIC, formez-vous au numérique’ programme and its third strand, on raising awareness of digital governance.
The second Francophone Digital Café thus covers the theme of ‘Discoverability and cultural and linguistic diversity in the digital space’ and will take place on 26 October 2023. The theme is no coincidence: it is one of the proposals the OIF made in its Contribution to the GDC, complementing the themes drawn up by the UN: ‘Promoting cultural and linguistic diversity in the digital sphere’. In it, the OIF champions the defence of cultural and linguistic diversity in the digital space through strong advocacy for the ‘discoverability’ of online content.
Indeed, the digital environment does not adequately address the challenges of multilingualism, and the risk of excluding a large share of cultural expressions, brought about by the ‘platformisation’ of how content is consumed and distributed, must be taken into account. This risk must be mitigated by measures to ensure the discoverability of all content on the web. The digital universe must therefore reflect this diversity by creating an ecosystem conducive to affirming and promoting cultural and linguistic pluralism, excluding any monopoly of thought or form of cultural hegemony. This is all the more timely with the rise of artificial intelligence (AI) and the way algorithms generate content in different languages, affecting the visibility and discoverability of Francophone content online. At stake is the importance of governing algorithms in the service of diversity and discoverability in cyberspace. Promoting tomorrow’s cultural richness and diversity will happen largely on the internet, and it is essential to build now the environment that will safeguard them. These are the issues to be explored in this session, moderated by Mr Destiny Tchehouali, Professor of International Communication in the Department of Social and Public Communication at the Université du Québec à Montréal (UQAM) and co-holder of the UNESCO Chair in Communication and Technologies for Development.
Beyond awareness-raising and capacity-building on digital development topics, the dialogue will help foster convergence and common positions among French-speaking diplomats and delegations in various forums in New York, notably during the intergovernmental negotiations on the GDC opening in December 2023.
The Executive Working Group on Digital Affairs (GTEN) delivers its report and recommendations for strengthening the Francophonie’s action in the digital field
The Secretary-General of the Francophonie, Louise Mushikiwabo, received the report of the executive working group on digital governance from the hands of Swiss Ambassador Martin Dahinden.
The report, mandated by the Djerba Summit of Heads of State and Government of countries sharing the French language, aims to clarify the added value of the Francophonie in general, and of the OIF in particular, in digital governance. It was drawn up by a working group, the Executive Working Group on Digital Affairs (GTEN), made up of a small number of high-level members from countries representative of the territories that make up the Francophone space. Chaired by Swiss Ambassador Martin Dahinden, the group comprises experts from Benin, Canada/Quebec, the Democratic Republic of the Congo, France, Morocco, Romania, Vietnam, and the Wallonia-Brussels Federation.
From June to September 2023, the group met seven times, drawing in particular on several Francophonie reference documents (the record of decisions of the 18th Conference of Heads of State and Government of OIF member countries, held on 19–20 November 2022; the Digital Francophonie Strategy 2022–2026; and the Francophonie’s Contribution to the Global Digital Compact) to reflect, collaboratively and iteratively, on digital challenges in the Francophone space.
The report thus sets out priority areas and recommendations for strengthening the action of the Francophonie and its members in the digital field. It puts forward operational proposals on each of the following themes: reducing the digital divide and ensuring digital access for the populations of the Francophone space; building the capacity of national and regional actors, with particular attention to women and young people; amplifying Francophone voices in digital governance, notably by consolidating initiatives among French-speaking countries on digital regulation; improving the discoverability of Francophone content by helping to increase its visibility online; and, finally, promoting digital innovation that is responsible, inclusive, and respectful of human rights.
The Secretary-General of the Francophonie, who has placed digital affairs at the heart of her mandate, has undertaken to bring these proposals before the foreign ministers of the Francophone space, who will meet at the Ministerial Conference of the Francophonie in Yaoundé on 4–5 November.
Source de la photographie : OIF
The OIF speaks at the ITU Regional Development Forum for Africa 2023 (Addis Ababa, 3–5 October 2023)
Through its Permanent Representative to the African Union, Ms Néfertiti Tshibanda, the Organisation internationale de la Francophonie took part in the ITU Regional Development Forum for Africa (RDF-AFR) in Addis Ababa, held on the theme 'Digital transformation for a sustainable and equitable digital future: Accelerating the implementation of the SDGs in Africa'. The high-level session in which the Representative took part focused on Africa's digital development and transformation, with populations at the heart of the process. In her remarks, the Representative recalled the history of the Francophonie's engagement and vision for digital development, the OIF's work on strengthening the digital skills of francophone populations, notably through the D-CLIC programme, and its engagement in digital governance. Cultural and linguistic diversity in the digital space, including the discoverability of online content, is one of the priority topics of the OIF's work in the field of digital governance. It is also one of the two themes (strengthening digital capacities and promoting cultural and linguistic diversity in the digital sphere) that the OIF added to the seven initial topics proposed by the United Nations for the Global Digital Compact. As a reminder, the OIF submitted its full contribution to the Global Digital Compact to the UN Secretary-General's Envoy on Technology and presented it to francophone delegations in New York on 3 May 2023.
The data protection authorities of the francophone space meet in Morocco for the 14th AFAPDP conference
Morocco's Commission Nationale de Contrôle de la Protection des Données à Caractère Personnel (CNDP) hosted the 14th conference of the Association Francophone des Autorités de Protection des Données Personnelles (AFAPDP) on 2 October 2023 in Tangier, Morocco. The main theme of the conference was 'The challenges of DPA–GAMMA relations: The example of web scraping'.
Web scraping can pose major challenges for privacy protection. The automated extraction of data from the web can involve the harvesting of personal data, notably from social networks, and can therefore raise problems under personal data principles and regulations.
It is worth recalling that, together with 11 other data protection authorities around the world (those of Australia, Canada, the UK, Hong Kong, Switzerland, Norway, New Zealand, Colombia, Jersey, Argentina, and Mexico), the CNDP had already signed, in August 2023, a letter addressed to the GAMMAs (Google, Apple, Meta, Microsoft, and Amazon) and other social media companies such as X Corp (formerly Twitter) and ByteDance Ltd (TikTok), inviting them to take measures to minimise privacy risks for users.
Find out more: https://www.afapdp.org
Upcoming events:
Conference of the Réseau francophone des régulateurs des médias – REFRAM (2023, Dakar), date to be confirmed (https://www.refram.org)
Conference of the Réseau francophone de la régulation des télécommunications – FRATEL (25–26 October 2023, Rabat, Morocco): How can the objective of user satisfaction be strengthened in regulation? (https://www.fratel.org/)
OIF participation in the annual general meeting of ICANN (ICANN 78), the Internet Corporation for Assigned Names and Numbers (21–26 October 2023, Hamburg)
The third day always brings a peak in the IGF dynamics, as happened yesterday in Kyoto. The buzz in the corridors, bilateral meetings, and dozens of workshops bring into focus the small and large 'elephants in the room'. One of these was the future of the IGF in the context of the fast-changing AI and digital governance landscape.
What will the future role of the IGF be? Can the IGF fulfil the demand for more robust AI governance? What will the position of the IGF be in the context of the architecture proposed by the Global Digital Compact, to be adopted in 2024?
These and other questions were addressed in two sessions yesterday. Formally speaking, decisions about the future of the IGF will most likely happen in 2025. The main policy dilemma will be about the role of the IGF in the context of the Global Digital Compact, which will be adopted in 2024.
While governance frameworks featured prominently in the debates, a few IGF discussions dived deeper into the specificities of AI governance.
Yesterday’s sessions provided intriguing reflections and insights on cybersecurity, digital and the environment, human rights online, disinformation, and much more, as you can read below.
You can also read how we did our reporting from IGF2023. Next week, Diplo's AI and team of experts will provide an overall report with the gist of the debates and many useful (and interesting) statistics.
Did you hear something new during the discussions that we've missed? Send us your suggestions at digitalwatch@diplomacy.edu
Highlights from yesterday’s sessions
Kinkaku-ji Temple in Kyoto. Credit: Sasa VK
The day’s top picks
The future of the IGF
Ethical principles for the use of AI in cybersecurity
Inclusion (every kind of inclusion)
Digital Governance Processes
What is the future of the IGF?
It may seem a counter-intuitive question, given the success of IGF2023 in Kyoto. But continuous revisiting of the IGF's purpose is built into its foundations. The next review of the future of the IGF will most likely happen in 2025, on the occasion of the 20th anniversary of the World Summit on the Information Society (WSIS), where the decision to establish the IGF was made.
In this context, over the last few days in Kyoto, the future of the IGF has featured prominently in corridors, bilateral meetings, and yesterday's sessions. One of the main questions has been what the IGF's future position will be in the context of the Global Digital Compact (GDC), to be adopted during the Summit of the Future in September 2024. For instance, what will the role of the IGF be if the GDC establishes a Digital Cooperation Forum, as suggested in the UN Secretary-General's policy brief?
Debates in Kyoto reflected the view that fast developments, especially in the realm of AI, require more robust AI and digital governance. Many in the IGF community argue for a prominent role for the IGF in the emerging governance architecture. For example, the IGF Leadership Panel believes that it is the IGF that should oversee the implementation of the GDC: creating a new forum would incur significant costs in finances, time, and effort. There is also a view that the IGF should be refined, improved, and adapted to the rapidly changing landscape of AI and broader digital developments in order to, among other things, involve communities missing from current IGF debates. This view is supported by the IGF's capacity to change and evolve, as it has done since its inception in 2006.
The Digital Watch and Diplo will follow the debate on the future of the IGF in the context of the GDC negotiations and the WSIS+20 Review Process.
AI
AI and governance
AI will be a critical segment of the emerging digital governance architecture. In the Evolving AI, evolving governance: From principles to action session, we learned that we could benefit from two things. First, we need a balanced mix of voluntary standards and legal frameworks for AI. It’s not about just treating AI as a tool, but regulating it based on its real-world use. Second, we need a bottom-up approach to global AI governance, integrating input from diverse stakeholders and factoring in geopolitical contexts. IEEE and its 400,000 members were applauded for their bottom-up engagement with regulatory bodies to develop socio-technical standards beyond technology specifications. The UK’s Online Safety Bill, complemented by an IEEE standard on age-appropriate design, is one encouraging example.
The open forum discussed one international initiative specifically – the Global Partnership on Artificial Intelligence (GPAI). The GPAI operates via a multi-tiered governance structure, ensuring decisions are made collectively, through a spectrum of perspectives. It currently boasts 29 member states, and others like Peru and Slovenia are looking to join. At the end of the year, India will be taking over the GPAI chair from Japan and plans to focus on bridging the gender gap in AI. It’s all about inclusion, from gender and linguistic diversity to educational programmes to teach AI-related skills.
AI and cybersecurity
AI could introduce more uncertainty into the security landscape. For instance, malicious actors might use AI to facilitate more convincing social engineering attacks, like spear-phishing, which can deceive even vigilant users. AI is also making it easier to develop bioweapons and to proliferate autonomous weapons, raising concerns about modern warfare. National security strategies might shift towards preemptive strikes, as commanders fear that failing to strike the right balance between ethical criteria and a swift military response could put them at a disadvantage in combat.
On the flip side, AI can play a role in safeguarding critical infrastructure and sensitive data. AI has proven to be a valuable tool in preventing, detecting, and responding to child safety issues, by assisting in age verification and disrupting suspicious behaviours and patterns that may indicate child exploitation. AI could be a game-changer in reducing harm to civilians during conflicts: It could reduce the likelihood of civilian hits by identifying and directing target strikes more accurately, thus enhancing precision and protecting humanitarian elements in military operations. One of yesterday’s sessions, Ethical principles for the use of AI in cybersecurity, highlighted the need for robust ethical and regulatory guidelines in the development and deployment of AI systems in the cybersecurity domain. Transparency, safety, human control, privacy, and defence against cyberattacks were identified as key ethical principles in AI cybersecurity. The session also argued that existing national cybercriminal legislation could cover attacks using AI without requiring AI-specific regulation.
Diplo’s Anastasiya Kazakova at the workshop: Ethical principles for the use of AI in cybersecurity.
The question going forward is: Do we need separate AI guidelines specifically designed for the military? The workshop on AI and Emerging and Disruptive Technologies in warfare called for the development of a comprehensive global ethical framework led by the UN. Currently, different nations have their own frameworks for the ethical use of AI in defence, but the need for a unified approach and compliance through intergovernmental processes persists.
Global negotiations for a UN cybercrime convention
Instruments and tools to combat cybercrime were high on the agenda of discussions. The negotiations about the possible UN cybercrime convention in the Ad Hoc Committee (AHC) are nearing the end, yet many open issues remain. While the mandate is clearly limited to cybercrime (broader mandate proposals, like the regulation of ISPs, were removed from the text), there is a need to precisely define the scope of the treaty. There is no commonly agreed-upon definition of cybercrime yet, and a focus on well-defined crimes that are universally understood across jurisdictions might be needed.
There are calls to distinguish between cyber-dependent serious crimes (those that depend on a cyber element for their execution), such as terrorist attacks using autonomous cyberweapons, and cyber-enabled actions (those traditionally carried out in other environments, but now also possible with the use of computers), such as online speech that may harm human rights. The treaty should also address safe havens for cybercriminals, since certain countries turn a blind eye to cybercrime within their borders, whether because of limited capacity to combat it or because of political or other incentives to ignore it.
Another major stumbling block in the negotiations is how to introduce clear safeguards for human rights and privacy. Concerns persist over the potential misuse of the provision related to online content by authoritarian countries to prosecute activists, journalists, and political opponents. Yet the very decision-making process for adopting the convention – which requires unanimous consensus or, alternatively, a two-thirds majority vote – makes it unlikely that any provision curtailing human rights will be included in the final text.
The current draft includes explicit references to human rights and thus goes far beyond existing crime treaties (e.g. UNTOC and UNCAC). A highly regarded example of an instrument that safeguards human rights is the Council of Europe's Cybercrime Convention (known as the Budapest Convention), which requires parties to uphold the principles of the rule of law and human rights; in practice, judicial authorities effectively oversee the work of law enforcement authorities (LEAs).
One possible safeguard to mitigate the risks of misuse of the convention is the principle of dual criminality, which is crucial for evidence sharing and cooperation in serious crimes. The requirement of dual criminality for electronic evidence sharing is still under discussion in the AHC.
Other concerns related to the negotiations on the new cybercrime convention include the information-sharing provisions (whether voluntary or compulsory), how chapters in the convention will interact with each other, and how the agreed text will manage to overcome jurisdictional challenges to avoid conflicting interpretations of the treaty. Discussions about the means and timing of information sharing about cybersecurity vulnerabilities, as well as reporting and disclosure, are ongoing.
A more robust capacity-building chapter and provisions for technical assistance also appear to be needed. Among other things, those provisions should enable collaborative capacities across jurisdictions and relationships with law enforcement agencies. The Council of Europe's capacity-building initiative under the Budapest Convention can serve as an example (e.g. training judges on cybercrime).
The process of drafting the convention benefited from the deep involvement of expert organisations like the United Nations Office on Drugs and Crime (UNODC), the private sector, and civil society. It is widely accepted that strong cooperation among stakeholders is needed to combat cybercrime.
The current draft introduces certain challenges for the private sector. Takedown demands as well as placing the responsibility for defining and enforcing rules on freedom of speech on companies, generate controversy and debate within the private sector: Putting companies in an undefined space confronts them with jurisdictional issues and conflicts of law. Inconsistencies in approaches across jurisdictions and broad expectations regarding data disclosure without clear safeguards pose particular challenges; clear limitations on data access obligations are also essential.
What comes next for the negotiations? The new draft of the convention is expected to be published in mid-November, and one final negotiation session is ahead in 2024. After deliberations and approval by the AHC (by consensus or two-thirds voting), the text of the convention will need to be adopted by the UN General Assembly and opened for ratification. For the treaty to be effective, accession by most, if not all, countries is necessary.
The success or failure of the convention depends on the usefulness of the procedural provisions in the convention text (particularly those relating to investigation, which are currently well developed) and the number of states that ratify the treaty. Importantly, successful implementation is also conditional on the treaty not impeding existing functioning systems, such as the Budapest Convention, which has been ratified by 68 countries worldwide. An extended effect of the treaty would be to support UN member states' efforts against cybercrime by encouraging the passage of related national bills.
Digital evidence for investigating war crimes
A related debate developed around cyber-enabled war crimes, due to the recent decision by the International Criminal Court (ICC) prosecutor to investigate such cases. The Budapest Convention applies to any crime involving digital evidence, including war crimes (in particular Article 14 on war crime investigations, Article 18 on the acquisition of evidence from any service provider, and Article 26 on the sharing of information among law enforcement authorities).
Of particular relevance is the development of tools and support to capture digital evidence, which could aid in the investigation and prosecution of war crimes. Some tech companies have partnered with the ICC to create a platform that serves as an objective system for creating a digital chain of custody and a tamper-proof record of evidence, which is critical for ensuring neutrality and preserving the integrity of digital evidence. The private sector also plays a role in collecting evidence: There are reports from multiple technology companies providing evidence of malicious cyber activities during conflicts. The Second Additional Protocol to the Budapest Convention offers a legal basis for disclosing domain name registration information and direct cooperation with service providers. At the same time, Article 32 of the Budapest Convention addresses the issue of cross-border access to data, but this access is only available to state parties.
Other significant sources of evidence are investigative journalism and open source intelligence (OSINT) – like the Bellingcat organisation – which uncover war crimes and gross human rights violations using new tools, such as the latest high-resolution satellite imagery. OSINT should be considered an integral part of the overall chain of evidence in criminal investigations, yet such sources should be integrated within a comprehensive legal framework. Article 32 of the Budapest Convention, for example, is already a powerful tool for member states to access OSINT from both public and private domains, with consent. Investigative journalism plays a role in combating disinformation and holding those responsible for war crimes accountable.
Yet, the credibility and authenticity of such sources’ evidence can be questioned. Technological advancements, such as AI, have enabled individuals, states, and regimes to easily manipulate electronic data and develop deepfakes and disinformation. When prosecuting cybercrime, it is imperative that evidence be reliable, authentic, complete, and believable. Related data must be preserved, securely guarded, protected, authenticated, verified, and available for review to ensure its admissibility in trials. The cooperation of state authorities could lead to the development of methodologies for verifying digital evidence (e.g. the work of the Global Legal Action Network).
Human rights
Uniting for human rights
‘As the kinetic physical world in which we exist recedes and the digital world in which we increasingly live and work takes up more space in our lives, we must begin thinking about how that digital existence should evolve.’ This quotation, published in a background paper to the session on Internet Human Rights: Mapping the UDHR to cyberspace, succinctly captures one of the central issues of our age.
The world today is witnessing a concerning trend of increasing division and isolationism among nations. Ironically, global cooperation and governance, the very reasons for IGF 2023, are precisely what we need to promote and safeguard human rights.
At the heart of yesterday’s main session on Upholding human rights in the digital age was the recognition that human rights should serve as an ethical compass in all aspects of internet governance and the design of digital technologies. But this won’t happen on its own: We need collective commitment to ensure that human rights are at the forefront of the evolving digital landscape, and we need to be deliberate and considerate in shaping the rules and norms that govern it.
The Global Digital Compact framework could promote human rights as an ethical compass by providing a structured and collaborative platform for stakeholders to align their efforts towards upholding human rights in the digital realm.
The IGF also plays a crucial role in prioritising human rights in the digital age by providing a platform for diverse perspectives, grounding AI governance in human rights, addressing issues of digital inclusion, and actively engaging with challenges like censorship and internet resilience.
Capitalist surveillance
In an era dominated by technological advancements, the presence of surveillance in our daily lives is pervasive, particularly in public spaces. Driven by a need for heightened security measures, governments have increasingly deployed sophisticated technologies, such as facial recognition systems.
As yesterday’s discussion on private surveillance showed, citizens also contribute to our intricate web of interconnected surveillance networks: Who can blame the neighbours if they want to monitor their street to keep it safe from criminal activity? After all, surveillance technologies are affordable and accessible. And that’s the thing: A parallel development that’s been quietly unfolding is the proliferation of private surveillance tools in public spaces.
These developments require a critical examination of their impact on privacy and civil liberties, and on issues related to consent, data security, and the potential for misuse. Most of us are aware of these issues, but the involvement of private companies in surveillance introduces a new layer of complexity.
Unlike government agencies, private companies are often not subject to the same regulations and transparency requirements. This can lead to a lack of oversight and transparency regarding how data is collected, stored, and used.
Additionally, the potential for profit-driven motives may incentivise companies to push the boundaries of surveillance practices, potentially infringing on individuals’ privacy rights. It’s not like we haven’t seen this before.
Ensuring ethical data practices
The exploitation of personal data without consent is ubiquitous. Experts in the session Decolonise digital rights: For a globally inclusive future drew parallels to colonial practices, highlighting how data is used to control and profit. This issue is not only a matter of privacy but also an issue of social justice and rights.
When it comes to children, privacy is not just about keeping data secure and confidential but also about questioning the need for collecting and storing their data in the first place. This means that the best way to check whether a user accessing an online service is underaged is to use pseudonymous credentials and pseudonymised data. Given the wave of new legislation requiring more stringent age verification measures, there’s no doubt that we will be discussing this issue much more in the coming weeks and months.
Civil society is perhaps best placed to hold companies accountable for their data protection measures and governments in check for their efforts in keeping children safe. Yet, we sometimes forget to involve the children themselves in shaping policies related to data governance and their digital lives.
Hence, the suggestion of involving children in activities such as data subject access requests. This can help them comprehend the implications of data processing. It can also empower them to participate in decision-making processes and contribute to ensuring ethical and responsible data practices. After all, the experts argue, many children’s level of awareness and concern about their privacy is comparable to that of adults.
Development
Digital technologies and the environment
The pandemic clearly showed the intricate connection between digital technologies and the environment. Although lower use of gasoline-powered vehicles led to a decrease in CO2 emissions during lockdowns, isolation also triggered a substantial increase in internet use due to remote work and online activities, raising concerns about heightened carbon emissions from increased online and digital activity.
To harness the potential benefits of digitalisation and minimise its environmental footprint, we need to raise awareness about what sustainable sources we have available and establish standards for their use.
While progress is being made, there's a pressing need for consistent international standards that consider environmental factors for digital resources. Initiatives from organisations such as the Institute of Electrical and Electronics Engineers (IEEE) to set technology standards and promote ethical practices, particularly in relation to AI and its environmental impact, as well as collaborations between organisations like GIZ, the World Bank, and the ITU on standards for green data centres, highlight how crucial global cooperation is for sustainable practices.
There's no one-size-fits-all solution when it comes to meeting the needs of people with disabilities (PWD) in the digital space. First of all, the enduring slogan 'nothing about us without us' must be respected. Accessibility-by-design standards like the Web Content Accessibility Guidelines (WCAG) 2 are readily available through the W3C Accessibility Standards Overview. Although accessibility accommodations require tailored approaches to address the specific needs of both groups and individuals, standards offer a solid foundation to start with.
The inclusion of people with disabilities should extend beyond technical accessibility to include the content, functionality, and social aspects of digital platforms. The stigma PWD face in online spaces needs to be addressed by implementing policies that create a safe and inclusive online environment.
Importantly, we must take advantage of the internet governance ecosystem to ensure that
We support substantial representation from the disability community in internet governance discussions, beyond discussions on disabilities.
We stress the importance of making digital platforms accessible to everyone, no matter their abilities or disabilities, using technology and human empowerment.
We provide awareness-raising workshops for those unaware of the physical, mental, and cognitive challenges others might be facing, including those of us who suffer from one disability without understanding what others are facing.
We provide skills and training to effectively use available accommodations to overcome our challenges and disabilities.
We make available training and educational opportunities for persons with disabilities to be involved in the policymaking processes that affect us, with the resulting improvements making the internet and digital world better for everyone.
We support research to continue the valuable scientific improvements made possible by emerging technologies and digital opportunities.
Sociocultural
The public interest and the internet
The internet is widely regarded as a public good with a multitude of benefits. Its potential to empower communities by enabling communication, information sharing, and access to valuable resources was appreciated. However, while community-driven innovation coexists with corporate platforms, the digital landscape is primarily dominated by private, for-profit giants like Meta and X.
This dominance is concerning, particularly because it risks exacerbating pre-existing wealth and knowledge disparities, compromises privacy, and fosters the proliferation of misinformation.
This duality in the internet’s role demonstrates its ability to both facilitate globalisation and centralise control, possibly undermining its democratic essence. The challenge is even greater when considering that efforts to create a public good internet often lack inclusivity, limiting the diversity of voices and perspectives in shaping the internet. Furthermore, digital regulations tend to focus on big tech companies, often overlooking the diverse landscape of internet services.
To foster a public good internet and democratise access, there is a need to prioritise sustainable models that serve the public interest. This requires a strong emphasis on co-creation and community engagement. This effort will necessitate not only tailoring rules for both big tech and small startup companies but also substantial investments in initiatives that address the digital divide and promote digital literacy, particularly among young men and women in grassroots communities, all while preserving cultural diversity. Additionally, communities should have agency in determining their level of interaction with the internet. This includes enabling certain communities to meaningfully use the internet according to their needs and preferences.
Disinformation and democratic processes
In the realm of disinformation, we are witnessing new dynamics, with an expanded cast of individuals and group actors responsible for misleading the public, with the increasing involvement of politics and politicians.
Addressing misinformation in this fast-paced digital era is surely challenging, but not impossible. For instance, Switzerland's resilient multi-party system was cited to illustrate how it can resist the sway of disinformation in elections. And while solutions can be found to limit the spread of mis- and disinformation online, they need to be put in place with due consideration for issues such as freedom of expression and proportionality. The Digital Services Act (DSA) – adopted in the EU – is taking this approach, although concerns were voiced about its complexity.
A UN Code of Conduct for information integrity on digital platforms could contribute to ensuring a more inclusive and safe digital space, contributing to the overall efforts against harmful online content. However, questions arose about its practical implementation and the potential impacts on freedom of expression and privacy due to the absence of shared definitions.
Recognising the complexity of entirely eradicating disinformation, some argued for a more pragmatic approach, focusing on curbing its dissemination and minimising the harm caused, rather than seeking complete elimination. A multifaceted approach that goes beyond digital platforms and involves fact-checking initiatives and nuanced regulations was recommended. Equally vital are efforts in education and media literacy, alongside the collection of empirical evidence on a global scale, to gain a deeper understanding of the issue.
Infrastructure
Fragmented consensus
Yesterday’s discussions on internet fragmentation built on those of the previous days. Delving into diverse perspectives on how to prevent the fragmentation of the internet is inherently valuable. But when there’s an obvious lack of consensus on even the most fundamental principles, it underlines just how critical the debate is.
For instance, should we focus mostly on the technical aspects, or should we also consider content-related fragmentation – and which of these are the most pressing to address? If misguided political decisions pose an immediate threat, should policymakers take a backseat on matters directly impacting the internet’s infrastructure?
Several insights emerged from the discussions. One emphasised the need to distinguish content limitations from internet fragmentation. Content restrictions, like parental controls or constraints on specific types of content, primarily pertain to the user experience rather than the actual fragmentation of the internet. Labelling content-level limitations as internet fragmentation could be misleading and potentially detrimental. Such a misinterpretation might catalyse a self-fulfilling prophecy of a genuinely fragmented internet.
Another revolved around the role of governments, in some ways overlapping with content concerns. There’s apprehension that politicians might opt to establish alternate namespaces or a second internet root, thereby eroding the internet’s singularity and coherence. If political interests start shaping the internet’s architecture, it could culminate in fragmentation and potentially impede global connectivity. And yet, governments have been (and still are) essential in establishing obligatory rules affecting online behaviour when other voluntary measures have proved insufficient.
A third referred to the elusive nature of the concept of sovereignty. Although a state holds the right to establish its own rules, should this extend to something inherently global like the internet? The question of sovereignty in the digital age, especially in the context of internet fragmentation, prompts us to reevaluate our traditional understanding of state authority in a world where boundaries are increasingly blurred.
Economic
Tax rules and economic challenges for the Global South
Over the years, the growth of the digital economy – and the question of how to tax it – has led to major concerns over the adequacy of tax rules. In 2021, over 130 countries came together to support the OECD’s new two-pillar solution. In parallel, the UN Tax Committee revised its UN Model Convention to include a new article on taxing income from digital services.
Despite significant improvements in tax rules, developing countries feel that these measures alone are insufficient to ensure tax justice for the Global South. First, these models are based on the principle that taxes are paid where profits are generated. This principle does not account for the fact that many multinational corporations shift profits to low-tax jurisdictions, depriving countries in the Global South of their fair share of tax revenue. Second, the two frameworks do not directly address the issue of tax havens, which are often located in the Global North. Third, the OECD and UN models do not take into account the power dynamics between countries in the Global North (which have historically led international tax policymaking) and the Global South.
Countries in the Global South have adopted various strategies to tax digital services, including the introduction of digital services taxes (DSTs) that target income from digital services. That’s not to say that they’ve all been effective: Uganda’s experience with taxing digital services, for instance, had unintended negative consequences. In addition, unilateral measures without a global consensus-based solution can lead to trade conflicts.
So what would the experts advise their countries to do? Despite the OECD’s recent efforts to accommodate the interests of developing nations, experts from the Global South remain cautious: ‘Wait and see, and sign up later’, a concluding remark suggested.
Reporting from the IGF: AI and human expertise combined
We’ve been hard at work following the IGF and providing just-in-time reports and analyses. This year, we leveraged both human expertise and DiploAI in a hybrid approach that consists of several stages:
Online real-time recording of IGF sessions. Initially, our recording team set up an online recording system that captured all sessions at the IGF.
Uploading recordings for transcription. Once these virtual sessions were recorded, they were uploaded to our transcribing application, serving as the raw material for our transcription team, which helped the AI application split transcripts by speaker. Identifying which speaker made which contribution is essential for analysing the multitude of perspectives presented at the forum – from government bodies to civil society organisations. This granularity enabled more nuanced interpretation during the analysis phase.
AI-generated IGF reports. With the speaker-specific transcripts in hand (or on-screen), we utilised advanced AI algorithms to generate preliminary reports. These AI-driven reports identified key arguments, topics, and emerging trends in discussions. To provide a multi-dimensional view, we created comprehensive knowledge graphs for each session as well as for individual speakers. These graphical representations mapped the connections between speakers’ arguments and the corresponding topics, serving as an invaluable tool for analysis (see the knowledge graph from Day 1 at IGF2023).
Writing dailies. To conclude the reporting process, our team of analysts used AI-generated reports to craft comprehensive daily analyses.
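To make the knowledge-graph step more concrete, here is a minimal Python sketch of how speaker-split transcript segments can be mapped to topics. The segment data, the keyword-to-topic table, and the function name are all illustrative assumptions, not DiploAI’s actual implementation (which uses AI models rather than keyword matching):

```python
from collections import defaultdict

# Hypothetical speaker-split transcript segments (illustrative data only)
segments = [
    {"speaker": "Gov delegate", "text": "We must address internet fragmentation and sovereignty."},
    {"speaker": "Civil society rep", "text": "Disinformation threatens freedom of expression."},
    {"speaker": "Gov delegate", "text": "Rules on disinformation need proportionality."},
]

# Toy keyword-to-topic table standing in for an AI topic tagger
TOPIC_KEYWORDS = {
    "fragmentation": "internet fragmentation",
    "sovereignty": "digital sovereignty",
    "disinformation": "disinformation",
    "expression": "freedom of expression",
}

def build_knowledge_graph(segments):
    """Map each speaker to the set of topics detected in their contributions."""
    graph = defaultdict(set)
    for seg in segments:
        text = seg["text"].lower()
        for keyword, topic in TOPIC_KEYWORDS.items():
            if keyword in text:
                graph[seg["speaker"]].add(topic)
    # Sort topics for stable, readable output
    return {speaker: sorted(topics) for speaker, topics in graph.items()}

print(build_knowledge_graph(segments))
```

The resulting speaker-to-topic mapping is the backbone a graph visualisation can be drawn from: each speaker and topic becomes a node, and each detected mention becomes an edge.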
You can see the results of that approach on our dedicated page.
One part of Diplo’s Belgrade team at work. Does that clock say 2:30 a.m.? Yes, it does.
A part of our team attended the IGF in situ and participated in sessions as organisers, moderators and speakers. Here they are, on their last evening in Kyoto (above).