Highlights from Day 3
Internet infrastructure: Community networks, IPv6 and IoT
Bringing Internet connectivity to remote areas remains a challenge in many parts of the world. Community networks can help address this challenge. Regulators need to understand the value of community networks and support their deployment, through measures such as financing, facilitating access to spectrum, and simplifying licensing procedures. But policies in this field should be developed and implemented based on input from the unconnected communities, as they know best what their needs are.
The depletion of the IPv4 address pool (e.g. the RIPE Network Coordination Centre allocated its last block of IPv4 addresses this week) and the need to speed up the transition to IPv6 have been discussed at IGF meetings over the past several years. This year was no exception. The message was consistent with that from previous IGFs: if we want one single worldwide network, it cannot rely only on IPv4, because its addresses have been largely depleted and its pool is insufficient to fully connect people, organisations, communities, and devices around the world.
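The scale of the shortfall can be sketched with back-of-the-envelope arithmetic (the world population figure below is an approximate 2019 value used for illustration):

```python
# Rough comparison of the IPv4 and IPv6 address spaces (illustrative only).
ipv4_total = 2 ** 32    # 32-bit addresses: ~4.3 billion
ipv6_total = 2 ** 128   # 128-bit addresses: ~3.4 x 10^38
world_population = 7_700_000_000  # approximate, 2019

print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.2e}")  # 3.40e+38
# IPv4 alone cannot give every person a single address, let alone
# every organisation, community, and device.
print(ipv4_total < world_population)        # True
```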
The deployment of IPv6 makes it possible for more devices to connect to the Internet. Consumer-based Internet of things (IoT) devices are part of our lives, but they were not designed with security in mind. Now, security threats are real and can lead to serious infringements on privacy and other human rights. Security measures and mechanisms for safeguarding human rights should be incorporated into these products. Local and global regulatory efforts can ensure that such measures are implemented, and could also tackle the issue of liability of device developers. The IGF ecosystem could help inform such policymaking processes and empower policymakers to approach the issues more effectively. Beyond rules and regulations, users also need to be better informed about how to protect themselves and their IoT devices.
In addition to consumer devices, smart cities promise to make urban living more safe, inclusive, and sustainable. While smart cities can improve diverse aspects of urban living, the question of data security and privacy inevitably arises. Without an appropriate regulatory mechanism, the technologies embedded in smart cities can limit the privacy of individuals.
What measures need to be put in place to protect and promote the human rights of every individual throughout the process of smart city development? States need to press the private sector to uphold safety regulations, and address loopholes in the existing legislation. Corporate actors need to adhere to self-designed guiding principles as well as conduct thorough risk assessments. The role of civil society is to advocate for the betterment of urban lives, including increased data security, privacy, and inclusivity, as well as monitoring and reporting behaviours of other actors that infringe on human rights.
Creating more trustworthy and transparent AI systems
A recurring point in IGF discussions on AI is the need for an equitable distribution of AI benefits at the global level. More international efforts are needed to empower developing countries to take advantage of the opportunities offered by AI. This includes, for example, assisting countries in developing national AI strategies, and providing capacity development opportunities for individuals to develop the digital skills needed to enable them to use AI for good. The FAIR Forward – Artificial Intelligence for All project, launched by the German government, is one such example.
AI systems need to embody characteristics and perspectives from developing countries. Today, the majority of datasets that are used by AI represent only a fraction of the global population. And this explains why, for example, voice assistants struggle with understanding African languages.
Further efforts are needed to create more accessible, inclusive, and trustworthy AI systems. One way to achieve this is by involving more actors in the design of AI systems, including end-users and professionals in design and the humanities. Aspects related to protecting children’s rights and child safety in the context of AI also deserve more attention. Children are the most vulnerable group, but they also have the most potential, so including them in the debates on the governance and future of AI is equally important.
Currently, more than 100 ethical AI frameworks exist, but they remain voluntary and carry no sanctions for non-compliance. So what concrete measures can be taken to bring more ethics into AI systems? Tools such as technical audits, impact assessments, and regulation-based approaches are among the proposed solutions. Self-regulation might not be enough, for either companies or governments. Companies can explain to governments how they built their AI systems, but that doesn’t guarantee responsible implementation and use in public services. It is important that both sectors have fluid and evolving guidelines, as technologies develop fast and laws become outdated just as fast.
Data governance for developing countries
As the digital economy continues to grow, reconciling the rights of end-users and the interests of companies is becoming the core digital governance issue. Developed countries and regions are constantly updating their data governance frameworks, which sometimes tend to become international standards, such as the EU General Data Protection Regulation (GDPR). Likewise, as many Internet companies are headquartered in the USA, the extraterritorial effect of several US laws turns them into de facto standards for many digital fields.
Developed countries also tend to lead the international discussions on data governance standards and rules. Developing countries often find themselves subject to rules set up elsewhere, and which might not reflect their cultural and policy context or not be in their direct interest. How to deal with this challenge? International co-operation is one option, making sure that developing countries are at the table in debates and processes around data governance. Dedicated capacity development programmes could help countries better prepare to participate in such processes.
Challenges in developing and implementing cyber norms
Geopolitical tensions hamper progress in negotiations on the application of international law to cyberspace. In the UN Group of Governmental Experts (GGE), different positions on a number of issues remain. While the applicability of international law is accepted in principle, some countries warn that the definition of armed attack and the ways of conducting attribution need to be clarified. For some countries, like Cuba, the applicability of international humanitarian law means legitimising cyber-warfare; for others, like the Netherlands, it ensures the protection of the civilian population when cyber means are used in conflict.
In negotiating norms and confidence building measures (CBMs), which is a multi-year process, governments usually focus on principles rather than on implementation. In the case of cyberspace, however, implementation is already coming to the agenda, due to the increasing effects of cyber-attacks. The technical community, with its experience and expertise from day-to-day cyber operations, can help accelerate the implementation of norms and CBMs in cyberspace.
In order not to reduce the efficiency of the response to incidents, norms and CBMs need to be aligned with operational practice. They should take into account experiences from major incidents in the past, as well as potential future challenges like political tensions and conflicts on regional and sub-regional levels. It is important to develop better mutual understanding between the two communities and maintain the dialogue.
Civil society organisations also play an important role in ensuring compliance with norms and in addressing threats. Still, many NGOs lacking the ECOSOC accreditation that enables participation in UN processes were excluded from the multistakeholder discussions of the GGE and the Open-Ended Working Group (OEWG).
Human rights are an important component in the deliberations on cyber norms. Open consultations within the OEWG will provide an opportunity for vocally requesting the protection of human rights. The GGE mandate requires member states to elaborate on how the entire body of international law applies to cyberspace, including human rights. Other venues, such as the Freedom Online Coalition, can discuss the relations of human rights and cybersecurity in a multistakeholder manner and feed into the UN processes. The Global Forum on Cyber Expertise, at the same time, can help extend the outreach of the processes to developing countries that are less involved and involve their civil society and the private sector.
New controversies have emerged with the adoption of the resolution on cybercrime by the UN General Assembly’s Third Committee. As expressed during the IGF session on norms in cyberspace, some countries are concerned that, if adopted by the UN General Assembly, the resolution may have a detrimental effect on human rights, as it could enable governments to justify Internet shutdowns, censorship, and surveillance practices as measures to combat cybercrime.
Encryption and trust in the Internet
The global Internet infrastructure should be viewed as a ‘zero trust’ system: users should not assume the confidentiality of their data as it travels across the Internet. Encryption is therefore not only necessary to protect personal data, but also essential for the security of industry data. Governments – in particular security services and law enforcement authorities – increasingly request ‘exceptional access’ to encrypted data.
However, even if some such requests may be legitimate, there is no technical means of implementing this access without weakening encryption – which would in turn leave an opening for malicious actors. Alternative options under discussion include accessing data before it is encrypted, or interfering with the management of encryption keys when they are held by online service providers.
Child safety and digital literacy
Thirty years after the UN Convention on the Rights of the Child was adopted, the landscape of opportunities as well as threats for kids is significantly expanded by cyberspace. ‘Growing up in a Connected World’, a report produced by the Global Kids Online research network (run by UNICEF and the London School of Economics), reveals trends among kids from 18 surveyed countries: the majority of kids have high privacy skills, but are unable to verify the truthfulness of online information.
Children are increasingly exposed to online content related to abuse or violence; the number of reported cases of cyberbullying is not negligible; and many minors – and particularly their parents – do not understand the risks of cyberbullying. Digital literacy should be the response to these challenges. Frameworks for digital literacy, defined by organisations such as UNICEF and the Council of Europe, cover issues such as digital citizenship and qualities such as tolerance and empathy. Yet progress in digital literacy is rarely evaluated, and reliable data is lacking. Internet intermediaries can also help curb cyberbullying, in particular by developing apps that report violations of content policies, supporting research and education about cybersecurity, and providing guidelines for digital health.
Protecting human rights online starts with our physical data
Human rights discussions have covered many issues, ranging from the use of our physical characteristics to identify us, to freedom of expression, privacy online, and the protection of personal data in the development of AI systems.
Human identity and integrity, pillars of human rights, could be endangered by digital surveillance. Our faces and hands (fingerprints) are most often used for digital identification and surveillance purposes. There is an increase in legislation and policies to regulate the use of facial recognition technology in advanced AI systems. Future regulatory, standardisation, and technical actions should focus more on how our physical data shapes our identity and human rights. Data and the human body featured prominently in discussions on gender issues online. For example, menstruation apps raise the risk of surveillance of biological functions. As one speaker argued, data analysis could lead to a fundamental reconceptualisation of what human bodies are.
The question of human identity was raised in the context of AI-driven categorisation and labeling of people from different social, age, and racial groups. This possibility raises the risk of misuse of AI by law enforcement agencies and other entities involved in screening activities.
Curbing the spread of unwanted content and dealing with collateral risks
Fighting ‘fake news’, hate speech, and disinformation could create collateral risks for online freedoms. This risk becomes real through concrete decisions by tech companies to, for example, remove content that could expose them to policy risks. The task is not simple. As a Google specialist indicated: ‘we have 500 hours of new video content every minute and we know that less than 1% of this content is either illegal or violating our community guidelines. That doesn't say that there is not an issue and we don't need to take responsibility and scale up our operations. ... I'm just saying that we have to put this into perspective’. Other platforms face similar challenges in dealing with the huge volume of content online.
The risk of endangering freedom online through actions against illegal and dangerous content could also be addressed by developing carefully balanced policy frameworks, benchmarking, and due processes for dealing with content online that could endanger the rights of others or public safety and interests. Media literacy features prominently in UK and Finnish approaches to strengthening the resistance of Internet users against misinformation.
A concrete policy incorporating educational approaches for preserving freedom online while fighting ‘fake news’ and disinformation on the Internet will be high on the agenda at the next Freedom Online Coalition conference in Ghana (Accra, 6-7 February 2020).
Addressing the impact of cybersecurity measures on human rights
The discussion on privacy is linked to its relevance for digital business models. For tech companies, big data is a double-edged sword: it generates revenue, but it is also an important target for regulatory compliance. Since the drive to use more data, especially for new generations of AI applications, creates new policy and business vulnerabilities, companies are starting to look for new approaches to this ‘Catch-22’ situation. In China, companies are experimenting with a ‘federated learning’ approach to AI development, which keeps personal data on devices and servers under the control of users. Companies use and develop algorithms for AI while ensuring that data is stored locally.
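The core idea of federated learning can be sketched in a few lines: each client trains on its own data and only model parameters, never raw data, are sent to a server for averaging. This is a minimal illustrative sketch (a toy one-parameter model; real systems such as FedAvg also weight clients by dataset size and add secure aggregation):

```python
# Minimal federated-averaging sketch: clients fit y = w * x locally;
# the server only ever sees model weights, never the underlying data.

def local_update(w, local_data, lr=0.1):
    """One gradient-descent step on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: a plain average of client models."""
    return sum(client_weights) / len(client_weights)

# Hypothetical per-device datasets (all drawn from y = 2x; never shared).
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (4.0, 8.0)],
]

w_global = 0.0
for _ in range(50):                        # communication rounds
    updates = [local_update(w_global, data) for data in clients]
    w_global = federated_average(updates)  # only weights cross the network

print(round(w_global, 3))  # → 2.0, the true slope
```

The privacy-relevant property is visible in the loop: `federated_average` receives only floats, so the server learns the model but not any individual data point.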
New risks to privacy are created by the IoT, which is entering the homes and lives of citizens worldwide. In Canada, for example, there are more than 114 million IoT devices – roughly three for each Canadian citizen. These devices collect a lot of data and can potentially endanger the privacy of citizens. Users' privacy can be protected through technical standards for IoT manufacturers.
Another important policy interplay exists between human rights and cybersecurity. The OEWG provides a space for discussion on ensuring that cybersecurity measures agreed by governments do not negatively affect human rights online. Human rights and cybersecurity will feature prominently in next week’s consultation meeting of the OEWG (2-4 December) which will bring together member states and roughly 200 NGOs.
‘Technology makes things possible’
‘If for most people technology makes things easier, for people with disabilities, technology makes things possible.’ This statement summarises various discussions on disability rights, including making websites accessible to people with visual impairments and allowing students with visual impairments to take their university exams using audio technology or Braille.
The discussion on children’s rights, one of the ongoing topics in the IGF discussion, was addressed from a novel perspective, by incorporating how AI impacts children’s rights from educational, privacy, and health perspectives. In this regard, UNICEF’s work on a set of draft principles on AI and child rights is notable.
Upholding the rule of law in cyberspace
The rule of law is enshrined in the UN Charter, human rights conventions, and national constitutions worldwide. The right to access justice is how the rule of law affects citizens worldwide. How can the rule of law and access to justice be upheld in cyberspace? How can citizens protect their legal and human rights in dealing with, for example, technical platforms far from their jurisdiction?
While it is widely accepted that law offline applies online, the main question is how law applies online. It is far from straightforward: it often involves a degree of arbitrariness, which, if unjustified, can lead to human rights breaches, lenient implementation and enforcement, and legal uncertainty in general.
Upholding the rule of law also requires the judiciary to be conversant with legal systems of other jurisdictions, due to the cross-border nature of cases and to the extraterritorial effect of laws (for instance, the EU’s GDPR).
In turn, legislators could benefit greatly from the insights of the judiciary, which is uniquely positioned to observe the effects of technology on society and the consequences of legislation. This year’s parliamentary track at the IGF is an important step towards closing the gap between legislative and judicial actors.
Promoting Internet access and boosting digital skills
When it comes to providing free access to the Internet, public libraries are undoubtedly important. They contribute to overcoming the digital divide; facilitate access to e-government services; foster digital literacy and ICT skills; and promote the creation of local content and use of local languages. Confirming the need for public libraries, the Dynamic Coalition on Public Access in Libraries explained that 400 000 public libraries worldwide meant 400 000 existing or potential public Internet access points.
Access should also be promoted through mentorship and digital literacy training. Business initiatives focused on promoting Internet access and digital knowledge will shift the balance toward users with better skills.
The development of ICT skills through formal and informal education helps both children (almost one in two users are kids) and adults. Teaching digital skills to young people requires having safe spaces where youth can learn, share, and exchange knowledge. There needs to be a more focussed approach and a systematic digital education plan which teachers can deliver.
To promote quality education (in line with SDG 4) and improve education services, data and big data allow educators to offer more tailored support to students, increase the cost-effectiveness of teaching, and conduct more effective testing.
Digital economy and cross-border data flows
Digital economy discussions during Day 3 focussed on cross-border data flows and on trade negotiations taking place at the World Trade Organisation (WTO) and in regional trade agreements (RTAs).
It is challenging to understand the contours of trade in the digital era. Different definitions persist at the international level, and what has been traditionally known as e-commerce is increasingly called digital trade. This change reflects the prominent role of data in trade relations. The governance of data will remain an important aspect of trade discussions in the years to come.
A growing consensus holds that data flows should be an instrument to foster not only economic growth, but also social development. This is reflected in the concept of ‘Society 5.0’, currently being discussed at the G20, which represents the aspiration of a human-centred digital society. Policies that promote free flows of data boost economic development, but should also uphold human rights, as well as each country’s policy objectives. Data protection regulations, such as the GDPR, should not be considered a trade barrier. At the same time, unjustified barriers to the free flow of data and measures of data localisation are undesirable.
An increasing number of digital policy issues are being included in RTAs. These agreements create obligations in very important areas, including privacy and data protection. The Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) and the Agreement between the United States of America, the United Mexican States, and Canada (USMCA), for example, create an obligation for countries to allow the cross-border transfer of information, including personal information. These agreements limit the scope of policy options that countries have, since national laws will need to comply with international agreements. In spite of the importance of RTAs, the process of developing them is not transparent and does not allow for multistakeholder involvement.
There were diverging views on whether issues such as data flows should be discussed in trade fora. On the one hand, the free flow of data is seen as an enabler of e-commerce; it therefore seems inevitable that trade discussions will also touch upon issues related to data governance. On the other hand, decisions made in this area will have important consequences for the exercise of human rights and, therefore, they should be scrutinised by a larger group of stakeholders. No matter how global and diverse the discussion, it may be hard to achieve a universal agreement on the free flow of data given the significant differences of opinions and goals between countries.
Back to basics: Digital identity programmes should be more human-centric
The initial enthusiasm that a digital identity is per se good and useful has been replaced by a more balanced approach of analysing the risks that digital identity creates for citizens and societies. If not properly planned and implemented, digital identity projects can endanger citizen privacy, safety, and other fundamental rights. Adopting a more human-centric approach, that is, focusing more on the needs of the citizens for whom such programmes have been created, can shine a brighter light on current challenges and good practices related to digital IDs.
For instance, several service providers involved in digital ID programmes are based in Western countries. Yet, their products are used in other countries as well, with different cultural dynamics, policy context and legal mechanisms for the protection of human rights and privacy. Understanding local context is essential for developing digital ID approaches and applications that will facilitate digital inclusion while preserving human rights.
Given the increasing roll-outs of digital ID programmes, we can expect more human-centric issues to be raised, including the use of digital IDs in electoral processes, the contributions of digital IDs to attaining the SDGs, and the monopoly of certain tech companies involved in programmes (and the consequences for users).
Data Analysis: Day 3's Most Prominent Issues
Thrice as nice! For the third consecutive day, the issue of interdisciplinary approaches was the most dominant in discussions. Data governance and sustainable development also retained their second and third places in the top 10 most-discussed issues during Day 3.
Emerging technologies, capacity development, network security, and AI featured again among the most prominent issues. Privacy and data protection were less discussed than in Day 2, but infrastructure-related issues made it to the top 10, as several sessions discussed issues such as IPv6, Internationalised Domain Names, community networks and the IoT. Cybercrime, and cyberconflict and cyber warfare swapped places, coming in ninth and tenth places, respectively.
Looking at the distribution of topics by basket, cybersecurity took first place, with almost a fifth of all sessions belonging to this basket. Although the technology and infrastructure basket maintained the same percentage share of sessions as in Day 2, it dropped to second place in terms of total numbers. The order of the remaining baskets is the same as in Day 2, with slight increases or decreases in percentages.
The word cloud based on a dataset of 335 keywords shows AI, data, and SDGs among the most prominent terms used in discussions.
PREFIX MONITOR: 'DIGITAL' IN THE LEAD
The use of prefixes in digital policy debates goes well beyond a mere reflection of the development of language. It can illustrate the evolution of the debate, and how certain issues are brought into the discussion, as others become less relevant.
Based on the analysis of 53 raw transcripts from Day 3, ‘digital’ remained in the lead as the most used prefix. ‘Online’, with a frequency rate of 24%, returned to second place, which could be attributed to a number of sessions dedicated to online health, online identity, and online inclusion. The prefix ‘cyber’ went back to third position, while ‘tech’ remained in fourth place. Mirroring previous days, the prefixes ‘net’, ‘e’, and ‘virtual’ remained in the bottom three spots, although a slight increase was observed in the use of the prefix ‘e’, which ended up in fifth place.
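A frequency count of this kind could be computed along these lines. This is a hypothetical sketch only; the actual methodology behind the Prefix Monitor is not described in this brief, and the prefix list and matching rules below are illustrative assumptions:

```python
# Illustrative prefix-frequency count over transcript text.
import re
from collections import Counter

# Assumed prefix list, mirroring the ones discussed in this brief.
PREFIXES = ("digital", "online", "cyber", "tech", "net", "e", "virtual")

def prefix_rates(text: str) -> dict:
    """Return each prefix's share (%) of all prefix occurrences found."""
    counts = Counter()
    for prefix in PREFIXES:
        # Match the prefix as a standalone word or a hyphenated form,
        # e.g. 'digital economy', 'cyber-attack', 'e-commerce'.
        pattern = rf"\b{prefix}(?:-\w+)?\b"
        counts[prefix] = len(re.findall(pattern, text, flags=re.IGNORECASE))
    total = sum(counts.values()) or 1
    return {p: round(100 * n / total, 1) for p, n in counts.items()}

sample = ("The digital economy and online identity raise "
          "cyber-security issues; e-commerce grows.")
print(prefix_rates(sample))
```

The word-boundary anchors matter: without `\b`, the count for ‘net’ would wrongly include every occurrence of ‘Internet’.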
SOCIAL MEDIA ANALYSIS: WHO’S BUZZIN’ THE MOST?
A look at the most influential and active IGF-related Twitter accounts during the last seven days shows that their reach extends to thousands of users. With just a day to go, #IGF2019 will continue to benefit from this social media visibility.
AI Quote of the Day
IQ’whalo shares with us new reflections on the current digital policy landscape, generated from an AI-powered analysis of yesterday’s transcripts.
‘And I think that maybe this is part of the paradox of the Internet, for instance, where we are exposed to more and more of the features of the West, culturally embedded, even to the point where some of the people who are accessing the Internet are being exploited by the West. We are also not as open-minded in our culture about questions of privacy and the otherness but also the question of where do we go when we aren't as technologically embedded.’
Don’t know yet who IQ’whalo is? We introduced him earlier this week.
Coming Up: IGF 2019 Final Report
While this is the last Daily Brief from IGF 2019, we have more in store for you.
Our Final Report, to be published on Monday, 2 December, will provide a thematic summary of the discussions held throughout the week. We will reflect on the topics that were debated the most by IGF participants: from digital inclusion to cybersecurity, and from the digital economy to human-centred AI.
Our Final Report will be supported by data analyses. Plus, our AI reporting system will generate a summary report based on processed transcripts from the IGF in Berlin.
Don't Miss Today
Legislative Main Session
10:00 – 13:00 | Convention Main Hall II & online
Recognising the important role of parliaments in digital policy debates, and taking advantage of the fact that many parliamentarians are attending this year’s IGF, a main session will be dedicated to legislative processes. It may be a starting point for a more consistent and active engagement of members of parliaments in IGF activities – something that has been missing over the past several years.
Concluding sessions for the IGF 2019’s main themes
11:10 – 13:00 | Raum I, Raum III, Raum V & online
This year’s IGF was structured around three main themes: digital inclusion; data governance; and safety, security, stability and resilience. As the forum wraps up today, three concluding sessions will be held on these themes, to provide insights from the discussions held throughout the week. For data governance, the concluding session promises a menu or roadmap of suggestions made by the community with regard to addressing data governance challenges. The digital inclusion concluding session will offer an overview of the main points raised during workshops and other sessions focused on how to make digital technologies more inclusive. The concluding session of the safety, security, stability and resilience theme will review material to be included in the IGF 2019 messages on this theme, while also providing a roadmap of recommendations made during the week on its four key subjects.