IGF Daily Summary
Tuesday, 29 November 2022
Welcome to the IGF Daily #1
As in the previous 8 years, we have a team of rapporteurs following most of the sessions and providing just-in-time reports from the Internet Governance Forum (IGF). Starting today, we are also publishing IGF Dailies, looking back at what happened the day before, but with a twist: Instead of summarising all the sessions, we are highlighting only what is new.
Think about it this way: Instead of saying that the discussions underscored the importance of bridging the digital divide once and for all, we will tell you whether any new solutions for getting there were proposed. Our IGF Dailies will also be enriched by data analyses and illustrations.
Do you like what you’re reading? Bookmark us at https://dig.watch/event/igf2022 and tweet us @genevagip.
Have you heard something new during the discussions, but we’ve missed it? Send us your suggestions at email@example.com.
Digital Watch team
Setting the tone
The 17th Internet Governance Forum (IGF) was officially opened on 29 November 2022 under the theme ‘Resilient internet for a shared sustainable and common future’. This overarching topic and its five themes, aligned with the UN Secretary-General’s envisioned Global Digital Compact, echoed throughout the opening speeches and the high-level panel that followed.
While the speakers saw the internet and digital technologies as accelerators for digital transformation and a springboard for the SDGs, they underlined the need for a human-centred and human rights approach to the digital future. Resilient digital infrastructure, interoperability, harmonisation of regulations, connectivity, affordability, and relevant content were mentioned as priorities for building an inclusive digital future. Challenges remain in identifying the boundaries of digital transformation and in implementing and deploying it.
The dark side of the internet – including deadly disinformation, online bullying, and challenges to freedom of expression, among others – needs to be tackled if we are to forge a digital future where access to fast, safe, inclusive, and affordable internet is a given, not a privilege.
Speakers also reflected on the role of the IGF and its contribution to the Global Digital Compact. The IGF is seen as a convener and a connector, creating a level playing field for sharing policy solutions, best practices, and experiences, and for identifying emerging issues. Discussants underscored that the value of the IGF is in its multistakeholder model. The role of the IGF Leadership Panel was highlighted as well.
Some speakers, however, called upon the IGF to up its game and produce more than just reflections. The IGF, they say, should put forward tangible proposals, whether for shaping global norms and standards, informing national-level regulations, connecting citizens with their governments, or contributing to the Global Digital Compact and the UN Summit of the Future in September 2024.
A High-Level Leaders Track discussion on digital trust and security drew parallels between digital transformation, security, and climate change.
The main challenge was finding a baseline for trust among countries while respecting their sovereignty and fostering cooperation. A tough topic on its own, it comes at a time when digital security is under threat: Malicious actors are targeting the critical infrastructures of hospitals, airports, and power grids, with devastating human consequences. The panellists explored this issue from two perspectives: What are the existing barriers to digital trust and security, and what practices are in place that can foster a common understanding on underlying principles of trust and security?
The speakers pointed out the need for trust between different stakeholders – governments, law enforcement, civil society, service providers, and users – to support collaboration. A greater challenge lies in fostering trust between states, where the discussion turned to current avenues for exchanging opinions on security at the international level.
While there are not many, existing processes such as the Ad Hoc Committee on Cybercrime, UNESCO’s Recommendation on the Ethics of Artificial Intelligence, the Paris Call for Trust and Security in Cyberspace, the Oxford Process, and the Geneva Dialogue have shown significant progress. According to panellists, this challenge predates the discussion on digital security – the quandary stems from trying to find common fundamental values between states, upon which trust and norms can be built.
Discussing how to foster common understanding and meaningful, sustainable cooperation, the speakers agreed that dialogue and stakeholder engagement are the base, but that new models of policy design must evolve. Using the examples of the Global Forum on Cyber Expertise (GFCE) and the NATO Cooperative Cyber Defence Centre of Excellence, the speakers reflected on inclusive participation in digital security discussions.
Turning to climate change and the role of digital technologies in supporting developing countries, the speakers highlighted the need to improve access to technologies, technology exchanges, and global value chains. The speakers underlined the importance of open data and open science, including UNESCO’s Recommendation on Open Science.
AI in the spotlight
Two sessions on the first day of the IGF focused on AI.
The session Affective computing: The governance challenges introduced affective computing as a new term on the IGF agenda. The term usually refers to the use of AI to recognise, interpret, and simulate human emotions, such as anger, happiness, and excitement. Affective computing is about ‘machines knowing us better than we know ourselves’, as the power of tech platforms is often described.
Affective computing can be used in education, transportation, hiring, entertainment, and even digital love lives. AI can track students’ moods and attention in education, discern deception in policing, and gauge applicants’ feelings about a company in job interviews.
Discussion at the IGF demonstrated that affective computing still lacks the capability to reliably identify human emotions. Current research shows that emotions are not universal: they are relational, depending on local, cultural, and personal contexts. Emotions are also highly complex, combining hundreds of signals, from facial expressions and movements to body postures, choice of words, different abilities, and particular needs. Technology cannot yet capture this intrinsic complexity.
Affective computing also carries bias, as it is trained mainly on images and expressions of people from developed countries. These technologies thus mainly encode the emotions of the Global North, and using them in other regions can cause significant harm to local populations.
As a way to deal with the problems and risks of AI and affective technologies, new approaches are being developed. Microsoft came up with its 4Cs Ethical Guidelines: communication, consent, calibration, and contingency for affective computing systems. Other soft laws and non-binding guidelines exist, but over the last year, it has become clear that strong regulation is also needed.
AI governance and regulation were the focus of the session Realizing trustworthy AI through stakeholder collaboration, which examined how to apply the OECD’s AI Principles (2019) to the development of AI platforms. Many discussants argued for AI governance via experimentation and policy sandboxes, an approach that can increase transparency, trust, and public support for AI platforms.
Standardisation is another indispensable approach to transform principles such as fairness and transparency into reality.
Professional and policy silos are becoming more problematic in AI than in other digital realms. Even within tech companies, developers, engineers, product managers, and data scientists each address AI from their own angle, and they have even fewer bridges to the policy community and the general public than they do to each other. The OECD principles try to unite the unique knowledge of the tech and policy sectors, while coding competitions and hackathons bring specialists from different coding and policy communities together around common challenges.
AI and affective computing, in particular, will require a lot of governance innovation to reduce risks and increase trust, transparency, and overall inclusion in AI governance.
A fragmented internet?
The internet is global in its technical infrastructure but local in its consequences for economies, cultures, and societies, which should be reflected in its governance.
If we deal properly with this global/local interplay, internet fragmentation can be avoided, or at least slowed down.
This session outlined a catalogue of policies and approaches of governments and tech platforms that could lead toward internet fragmentation, including the vulnerability of submarine cables, tech platform policies, and government filtering.
A stronger push towards digital sovereignty as a part of national sovereignty is seen as an accelerator of fragmentation. After realising the importance of digital networks for national stability, especially during a pandemic, more and more countries are extending national sovereignty over digital networks and data.
Increasing fragmentation could lead toward the end of the unified and interoperable internet. The internet core infrastructure is very robust, surviving all challenges, including recent conflicts crossing national borders. Many discussants called for the development of standards to define hate speech, disinformation, objectionable online content, and other issues that could fragment content and data sharing on the internet. More focus on the bottom billion than the next billion could reduce the risks of social fragmentation and new divides triggered by internet policy dilemmas.
Respecting the principles outlined in the UN Charter will also help prevent internet fragmentation. The UN Global Digital Compact could help establish a new consensus on digital governance that would preserve the core technical infrastructure of the internet while providing space for other policies adjusted to regional, national, and cultural specificities.
Preventing fragmentation was one of the aspects covered by the session on the role of the community in achieving universal acceptance.
Universal acceptance fosters the use of web and email addresses in many languages and scripts. If the internet infrastructure can be used in different languages, it will reduce the risk of internet fragmentation.
Universal acceptance is primarily a societal value that should facilitate the inclusion of all internet users.
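For readers curious about what universal acceptance looks like at the technical level, the sketch below is a minimal illustration (not from the sessions themselves) of how an internationalised domain name in a non-ASCII script is mapped to the ASCII-compatible form (Punycode) that the DNS infrastructure resolves. The domain ‘bücher.example’ is a hypothetical example.

```python
# Minimal sketch: an internationalised domain name (IDN) written in a
# non-ASCII script is converted to its ASCII-compatible encoding, which
# is what the underlying DNS infrastructure actually resolves.
# 'bücher.example' is a hypothetical domain used purely for illustration.

unicode_domain = "bücher.example"

# Python's built-in 'idna' codec converts between the Unicode form that
# users see and the 'xn--' ASCII-compatible form used on the wire.
ascii_domain = unicode_domain.encode("idna").decode("ascii")
print(ascii_domain)   # xn--bcher-kva.example

# The mapping is reversible, so software can always display the
# original script back to the user.
round_trip = ascii_domain.encode("ascii").decode("idna")
print(round_trip)     # bücher.example
```

Universal acceptance asks that every application along the way – browsers, email servers, web forms – handle both forms correctly, so that users of any script are not locked out.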
The growing erosion of digital rights
We start our coverage of human rights-related sessions with a stark reminder that the notion of privacy is eroding among the younger generation. Younger people, who represent up to one-third of the internet population, are growing up with a diluted understanding of what the right to privacy means, and what safeguards they are entitled to.
In the Global South, privacy and data protection rules have been enacted only in recent years, signalling an even stronger need for youth to be educated about human rights from an early age, using language they can understand. Behavioural advertising and profiling for targeted advertising shouldn’t treat young users in the same way as adults.
Contributing to this problem is that the development of products and services does not always follow the privacy by design approach. Users shouldn’t have to monitor their privacy settings every time they install a new app. Many legal remedies exist for users who have been victims of data breaches, but their effectiveness largely relies on enforcement and regulatory oversight, which in some countries needs significant improvement.
When it comes to apps, the take-it-or-leave-it approach to signing up for an app or a service in exchange for relinquishing rights to user data should be replaced by a fairer system that gives users the option to limit the type or amount of data the app gathers. Better still, regulators should prohibit companies from gathering more data than they need, even if users might agree to sharing it. Young users, in particular, seldom understand the implications of such a choice.
We’re also reminded of another stark fact: Almost 20,000 webpages containing coerced self-generated child sexual abuse imagery of children aged 7–10 were discovered in the first six months of 2022, according to data by the Internet Watch Foundation (IWF) released in August 2022. That’s an increase of over 360% compared to the previous year. While coercion is clearly abusive and illegal, there is other content voluntarily generated by children that may be unwise to share and can be misused. While there is much educational material available, governments and service providers need to design and create new, more user-friendly material that has children and adolescents as clear target audiences.
Concerning gender rights online, digital technology has amplified abusive behaviour against women and girls, leading to a spiralling problem of online violence. NGOs, the private sector, and governments are taking on the fight against online abuse as well as their resources permit. Stronger enforcement, local solutions addressing local contexts, and more funding for civil society would make a greater difference.
Data flows: Fragmentation vs harmonisation?
The discussion on economic and legal issues started on the first day, full steam ahead, with cross-border data flows. We were reminded that there are different approaches to data flows around the world: in China, India, the USA, the EU, and elsewhere, each jurisdiction has its own priorities and interests it wants to protect (safeguarding privacy, advancing the local economy, protecting national security, etc.).
While this regulatory fragmentation comes with challenges to trade and the global digital economy, harmonisation attempts – leaving aside their likelihood of success – run the risk of eliminating national characteristics.
And yet, there seem to be a few areas of agreement among speakers. One IGF session suggested the cross-border flow of non-personal data must be facilitated; that there is a need for minimal global rules for data transfers; and that African countries need to come together to strengthen their position on cross-border data flows.
Fighting untruths, such as online misinformation and disinformation, was the main sociocultural concern across workshops on Day 1.
A pre-bunking approach to fighting misinformation was put forward: people can be exposed to weakened doses of misinformation or disinformation techniques to develop ‘cognitive antibodies’ over time, through a process known as psychological inoculation. An underlying challenge, however, is adapting those interventions to different cultural contexts.
Other suggested approaches included promoting quality information that complies with journalistic good practices and the design and implementation of digital literacy programmes to fight disinformation. It was, however, noted that if the recipients of such programmes cannot read or write, digital media training seems like an unrealistic approach to tackle this issue.
Participants also assessed governments’ role in internet governance, and noted that more policy innovations are needed. This highlights that existing systems designed to foster the participation of governments in internet governance, such as WSIS, remain insufficient. The UN Global Digital Compact could be a valuable avenue for addressing what the role of governments should be.