Highlights from Day 1
What will our generation be remembered for?
This year marks the second IGF attended by UN Secretary-General António Guterres. His opening speech last year – together with French President Macron’s speech – carried substantive reflections on the state of global digital policy, and an encouraging vision for the digital developments ahead of us.
This year’s opening speech couldn’t be more different. Characterised by examples of how the Internet is being misused and exploited, Guterres gave a stark account of the profound issues which are affecting today’s technology and tomorrow’s developments.
‘It is for me an enormous frustration to see that today, not only we are still building physical walls to separate people, but that there is also the tendency to create some virtual walls in the Internet also to separate people.’ The three main divides – the digital divide, the social divide, and the political divide – are still profound. In addition, while the private sector pursues an attitude of trial and error, and fixing issues retroactively, policymakers are reluctant to define policy frameworks and regulations, opting to watch from the sidelines.
The signals are clear. We can no longer allow digital technology to run away with us. The stakes are high: ‘If we do not work together to address these divides, we will be remembered as the generation that ruined the early promise of the Internet.’
Guterres’ proposals for turning the tide focus on strengthening the IGF. His suggestions on how to do so – bringing everyone together to share policy expertise, debate emerging technology issues, agree on some basic common principles, and take these ideas back to appropriate norm setting forums – closely reflect the High-level Panel on Digital Cooperation’s model for an IGF Plus. Read more: IGF Plus: A brave new world for the IGF.
The UN Secretary-General also referred to the UN as the ‘appropriate platform where all relevant actors can meet to address such global challenges’. This could be indicative of how the IGF’s future role and functions may be shaped.
German Chancellor Angela Merkel stressed the benefits of the Internet, and the strategic relevance of the physical infrastructure which enables the flow of data and information, connecting continents and regions.
Her reflections focused on the issue of digital sovereignty, and the risks of protectionist approaches. Referring to technological developments, she reminded us that although something may be technologically feasible online, this does not mean it is ethically desirable.
Merkel sees the global digital policy process as shifting. Describing this change as a ‘reordering of Internet Governance’, she echoed Guterres’ reference to the UN as a home for reorienting global Internet policy. Much will depend on how discussions on the future role of the IGF unfold in Berlin this week, and beyond.
Advanced technologies: A call for trustworthy AI
‘Artificial intelligence (AI) applications can be used to monitor and manipulate behaviour, to besiege us with ever more targeted and intrusive advertising, to manipulate voters, to track human rights defenders and to stifle expressions of dissent.’ It’s a stark picture painted by UN Secretary-General António Guterres, who set the scene for several discussions on AI challenges throughout the day.
Are users aware of when and how AI is embedded in the services they use, and to what extent? In most cases, the answer is no.
This is not to say that AI doesn’t have any benefits. But to take full advantage, technology must be trustworthy. Increased transparency, proper accountability mechanisms, addressing issues of bias and inclusivity, and ensuring that AI is explainable to the largest extent possible all contribute toward building trust. The OECD Council Recommendations on AI (among other such instruments) provide guiding principles, from empowering people with AI skills, to facilitating cross-border and cross-sector collaboration, to advancing the responsible use of trustworthy AI.
Leaving no one behind in the growing AI-enabled economy and ensuring a human-centred approach to the development and use of AI are a joint responsibility of the tech industry, policymakers, activists, and end-users. Multistakeholder approaches and public-private partnerships are effective models of co-operation.
How, then, do we make sure that digital advances do not lead to more inequality, and that the benefits of technology are shared equitably? As Guterres suggested, maximising digital public goods deserves further support.
As shown in UNESCO’s publication I’d Blush If I Could, AI can deepen gender bias (AI assistants, for example, are largely given female voices). Stakeholders need to develop plans and strategies to tackle this issue, and to close the gender gap in digital skills.
Tackling the geopolitical aspects of cybersecurity
Cyber-related and technology-related issues are at the heart of geopolitics today, as states are increasingly misusing technology as a weapon against democratic processes, infringing on human rights and violating intellectual property.
Faced with the uncertainty of new technologies, governments are turning to ‘tech nationalism’, demanding that data be locally stored. 5G presents another challenge: compromised software updates could open a path to breaches in the network; recovering the network after a breach is extremely costly. Balancing IT security, trustworthiness, national security, and industrial policy is a must, though admittedly, not easy to implement.
Governments face challenges in addressing these threats and in dealing with a rapidly-evolving tech sector. State sovereignty in the digital age remains an issue of contention. Many believe that cyberspace, which arguably has no borders, should support the free flow of data. Thus, governments could benefit more from robust encryption than from requesting backdoor access. Users – particularly youth – should be empowered to participate in policy-shaping; platforms should co-operate more with law enforcement.
Improving the security of cyberspace
How can cybersecurity be improved? The skills and qualifications of criminal investigators should be enhanced. Ethical rules prohibiting misinformation and disinformation should be extended to governments and businesses, and incorporated into educational programmes.
Technical measures to increase security include more peering in IPv6, exploring domain-based authentication of named entities (DANE), and addressing Domain Name System abuse (including through mechanisms developed by the Internet Corporation for Assigned Names and Numbers (ICANN)) while upholding civil liberties. There are increasing efforts by ICANN, Mozilla, Google, and Microsoft to expand DNS over HTTPS (DoH), which encrypts DNS lookups via end-user applications. Policymakers can participate in the Internet Engineering Task Force’s (IETF) work on security specifications and protocols, although admittedly, it is often unclear how they can contribute to this highly technical process.
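To illustrate how DoH carries an ordinary DNS query over the web: under RFC 8484, the client serialises a standard DNS query into its usual wireformat and base64url-encodes it (without padding) as the `dns` parameter of an HTTPS GET request. The sketch below is a minimal, simplified encoder; the resolver URL in the docstring is illustrative only, not a recommendation.

```python
import base64
import struct

def build_doh_query(hostname: str, qtype: int = 1) -> str:
    """Build the base64url 'dns' parameter for an RFC 8484 DoH GET request.

    qtype 1 = A record. The returned string would be appended to a resolver
    URL, e.g. https://dns.example/dns-query?dns=... (illustrative endpoint).
    """
    # DNS header: ID=0 (RFC 8484 suggests this for cache friendliness),
    # flags=0x0100 (recursion desired), one question, no other records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: each label is length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in hostname.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    # base64url without '=' padding, as RFC 8484 requires
    return base64.urlsafe_b64encode(header + question).rstrip(b"=").decode()

print(build_doh_query("example.com"))
```

The client then issues the GET over TLS, so the lookup itself travels on the encrypted channel, which is precisely what makes DoH attractive from a privacy standpoint and contested from a network-management one.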
International and cross-stakeholder co-operation is essential for developing capacities in states that are in the process of designing national cybersecurity frameworks. Regional co-operation is of particular relevance: joint efforts – such as those by Computer Emergency Response Teams (CERTs) – can help ‘defend’ a region from cyber-attacks and boost regional collaboration in general. International, regional, and bilateral agreements are important, but are not a precondition for regional co-operation, especially on information sharing. The private sector is an important player, as it is often in possession of large amounts of information, operates across jurisdictions, and generally has ties with all other sectors.
Difficulties in implementing existing norms remain. To ensure enforcement, the process needs to involve all the actors, in particular the technical community, and military and intelligence agencies, which are somewhat absent from the process.
Curbing the risks of online gaming for young users
The online gaming landscape is as attractive for adults as it is for children. Yet, it is often difficult to reconcile online gaming with children’s well-being, due to the risks involved.
For example, online games allow players – including children – to get in touch with much older gamers with different profiles, which may involve risks. Research confirms that addiction to online gaming is a reality. Other issues include the monetisation of gamers’ data (thus affecting children’s privacy), graphic violence, and the lack of transparency in content rating, which could lead to exposure to content that is inappropriate for young users.
There are no clear policies or guidelines for gaming platforms on how to safeguard children’s rights and safety. There are, however, ways of limiting the risks and enjoying a positive gaming experience, such as parental supervision, prioritising learning experiences and socialisation among peers, and making gaming a shared family experience. Technical solutions, such as flagging and removing harmful content, are also a must. This challenge should be approached holistically, taking into account children’s rights and balancing restrictions on gaming with the right to access information.
Human rights discussions seek change
Privacy is a recurring concern, even more so when we’re discussing advanced technologies. The apparent conflict between security and privacy is complicated by approaches to data protection. Even offline, some countries tend to prioritise security over privacy. Terrorist groups and criminals benefit from strong privacy as much as law-abiding citizens do. When such tensions occur between different rights and interests, then multistakeholder dialogues including governments, the Internet industry, and society, can help reconcile policy approaches and priorities.
Protecting the rights of vulnerable groups online also came up: how do we ensure that laws on online sex work and drugs distribution do not cause further harm to those who are supposed to be helped by them (sex workers, the LGBTI+ community, survivors of abuse, and human trafficking victims)? Since harm can mean different things to different people, legislators should listen to those being helped by the regulations, to understand what is harmful for them, and to make sure that the regulatory solutions developed address such harm. Implementing blanket censorship could lead to vulnerable groups being excluded or prevented from joining online platforms, and this should be avoided.
Tackling online disinformation and harmful content requires careful responses that do not harm freedom of expression. European governments, for example, have adopted three types of policy responses: self-regulatory codes of practice that tech companies commit to voluntarily; direct regulation of online content; and the development of ad-hoc legislation. Read more on Tackling content policy.
Do national and regional IGF initiatives contribute to policy processes around human rights online? In Lebanon, the national IGF facilitated online communications training for women, given women’s low (20%) rate of participation in online discourse. The IGF Italy supported a successful grassroots youth initiative which resulted in amendments to the country’s criminal law to include a specific form of bullying: ‘revenge porn’. NRIs have a concrete, positive impact.
Jurisdiction and access to electronic evidence
One of the current challenges related to jurisdiction in the digital era comes from requests of law enforcement authorities (LEA) to gain access to Internet users’ personal data, as part of criminal investigations. Several countries and regions are considering the introduction of new legislation to address cross-border access to data. The Council of Europe is discussing the Second Protocol to its Convention on Cybercrime; the EU is working on new rules for cross-border access to electronic evidence; the USA is negotiating bilateral agreements under its Cloud Act; while Brazil is discussing a law to empower its LEAs to request access to data.
Access to electronic evidence in cross-border settings is challenging, as current processes are too time-consuming to allow for time-sensitive access. In addition, the issue of ensuring a balance between the sovereign rights of states to investigate and the protection of the users’ (subjects of investigations) rights, including privacy, remains prominent.
Other legal issues arise around the obligation – or lack thereof – of service providers to divulge information about individuals, based on legal orders from other countries. Their responses to such orders expose them to possible sanctions by the home country, non-compliance sanctions by the foreign authority requesting the information, as well as questions from their users regarding the protection of their rights.
Development relies on access
The IGF is a forum that actively takes into account the issues faced by developing countries. As the popular IGF for beginners session explained, there are continuous efforts to increase participation from developing countries. Thanks to the travel fund provided by the government of Germany, this year marks an improvement in developing country inclusion, especially with many new participants from Africa. Hand-in-hand with funding, efforts are ongoing to improve online participation, facilitating access to the meeting from remote countries and regions.
Promoting the IGF model in national contexts in Africa was tackled in an open forum of the African Union Commission. African countries face many challenges in their efforts to participate in global Internet governance.
Not surprisingly, development-related discussions at the IGF concentrate on digital inclusion and access. Increased connectivity is considered the key solution to the main developmental digital policy challenges. The High-level session on inclusion reminded us of Day 0’s launch of the Contract for the Web. Each of the contract’s nine principles has action points expected to result not just in connectivity for the other 50%, but meaningful connectivity that will enable actual freedom, and can bring people out of poverty.
As multiple sessions noted, there is a difference between having the technology to access the Internet and being truly connected to the online world. It is challenging to create a fully digitally connected society. In addition, the gender digital gap is growing. And affordability of access comes into play, as emphasised in an NRI Collaborative session.
Even for businesses, notably for small and medium-sized enterprises (SMEs), access matters, as access to the customer base starts with access to the Internet. Yesterday’s media-dedicated session also tackled access, this time in connection with regulatory frameworks and technical solutions.
How can operators assist in improving access, when obviously, financial investment scope and profitability need to be taken into account? A session aimed mainly at operators encouraged them to examine various technologies (e.g. overlaying 4G networks over existing 2G networks) and business models (e.g. public-private partnerships and co-operation with community networks).
Many sessions acknowledged the connection between development challenges and capacity development, both generally, and in relation to specific issues.
SMEs and the digital economy
Many digital economy discussions on Day 1 revolved around the benefits and challenges for SMEs, and the conditions required for SMEs to benefit from global value chains. A stable regulatory environment that takes into account the rule of law, access to financing, local and regional level multistakeholder co-operation, and forums for dialogue can all help SMEs to thrive.
As we mentioned earlier, SMEs use the Internet and digital technologies to connect with new customers, improve the efficiency of their operations, and provide new products and services. But they often depend on the availability of necessary services, such as connectivity, cloud computing, and e-payment services. SMEs should also be supported in developing the necessary skills to engage in digital marketplaces.
In their cross-border activities, SMEs are exposed to multiple jurisdictions. The need to comply with different, and sometimes conflicting, laws and regulations – on issues such as privacy and data protection – represents a barrier and an additional cost to the operation of SMEs. Many of them may not be aware of specific legal requirements in countries where they offer their services. At the same time, compliance with human rights regulations has become a key element in enterprise branding and corporate social responsibility strategies.
Privacy and data protection are important considerations in the context of the digital economy. Norms on data governance can bring more regulatory predictability, while creating an enabling environment for data flows and fostering competition. This could be done not only by providing incentives for data sharing, open data, expanding consumer choice, and fostering interoperability, but also by re-thinking the key paradigms that guide policies on competition.
New data governance frameworks should also encourage consumers to manage their own data. An emerging trend is the involvement of data trustees, who act on behalf of consumers as intermediaries between them and companies that want access to their data. Trustees ensure that personal data is used in line with data protection norms and consumer preferences.
Tackling content policy: Platforms under pressure
Election campaigns are driving tougher efforts to fight the spread of disinformation and to ban initiatives that can undermine trust in the democratic process. This has prompted companies to update their policies on political advertising. Twitter announced a ban on (almost) all political adverts; Google is limiting adverts to those which use only general data to target audiences. Facebook faces pressure to do the same; it has since been weighing its options.
These developments are a result of increasing pressure on companies. But how are regulatory frameworks tackling content-related issues? Differences between the approaches of democratic and non-democratic countries are quite evident, as are legal approaches which lack rigour. Some laws simply fail to address the issue holistically.
Tighter regulation can curb manipulative practices by social media companies. Standardised transparency reports, which describe how companies block problematic accounts and how many resources companies allocate for this exercise, could make a big difference in the fight against questionable content.
How far should regulation go? The results from a poll on disinformation showed that citizens preferred platforms taking responsibility for the removal of hateful content, rather than being obliged through legislation. Reactions to these results included a call for more transparency on business practices, and an invitation to rethink the core of platform business models.
IGF Plus: A brave new world for the IGF
The Internet Governance Forum appears to be on the cusp of a major overhaul, thanks to the support which the IGF Plus model received during Day 0’s main session, and over the past few months.
Intended to strengthen an existing, legitimate body, the IGF Plus is one of three models proposed by the UN High-level Panel on Digital Cooperation in its July report to strengthen digital co-operation at the international level. The model comprises four entities: an advisory group, based on the current IGF Multistakeholder Advisory Group; a co-operation accelerator, bringing together existing and new coalitions around digital policy issues; a policy incubator, which would identify regulatory gaps and propose norms and measures; and an observatory and help desk, to function as a one-stop-shop for policy assistance and capacity development.
During yesterday’s session, we were reminded of several benefits of building on an existing model. The IGF attracts a broad spectrum of stakeholders, including many experts; it is an embodiment of the multistakeholder approach; it has a solid mandate from the World Summit on the Information Society (WSIS), reaffirmed in 2015; and its achievements were recognised during WSIS+10, the process which reviewed the implementation of the WSIS outcomes.
IGF Plus also has the potential to act as a bridge between different communities, processes, and organisations, and to break down silos; to act as the ‘connective tissue’ among UN bodies; and to bring cutting edge topics into focus.
The IGF Plus offers an opportunity to strengthen the current IGF set-up, and address its weaknesses. For instance, it will need to find ways of interacting more with governments – especially those in favour of more centralised intergovernmental approaches – and the private sector. A strong link to the UN Secretary-General’s office would give the IGF higher visibility and more weight.
And so, what comes next? Discussions will need to address the open issues. Should the IGF Plus remain a non-decision making forum, in line with the original mandate of the IGF? How will it ensure greater participation by all stakeholders, and include existing IGF processes such as national and regional initiatives? More importantly, who will fund it, considering that the IGF has been facing financial difficulties?
Meanwhile, the UN High-level Panel on Digital Cooperation and the IGF will need to take stock of the contributions made by the IGF Secretariat and by EuroDIG – the European IGF initiative – as well as those made throughout the week in Berlin. If support continues to resonate, it could spell a major success for the IGF, which has withstood the test of time in an otherwise tumultuous period for digital policy.
Where is IQ’whalo?
IQ’whalo is a former coffee-maker that became too smart to make coffee. It now gives opinions on AI and policy as a full-time job. Here are IQ’whalo’s quotes of the day, generated from an analysis of yesterday’s transcripts:
‘Everybody is working on things that are going to benefit everybody.’
‘We need to make sure that everybody understands that diversity is going to be an issue for the future of the Internet.’
Data Analysis: Day 1's Most Prominent Issues
After Day 0's ‘soft’ start with numerous pre-events, the IGF kicked off yesterday with the official Opening Ceremony setting the tone with issues such as multistakeholderism, access, development, and emerging technologies.
Day 1’s sessions largely focussed on issues similar to those tackled on Day 0. Nearly all top issues remained the same, with the exception of AI, which featured less prominently in Day 1’s discussions and was replaced by jurisdiction. The focus shifted slightly, with sessions placing more emphasis on sustainable development, data governance, and network security.
This change in focus was also reflected in the basket distribution. The technology and infrastructure basket shared first place with the cybersecurity basket, given that they were each covered in 18% of yesterday's sessions. Other sessions were almost evenly distributed among the remaining baskets.
A look at our wordcloud for Day 1 shows the most frequently used words.
Our prefix monitor is back this year! Used to provide an ‘x-ray’ of digital debates by analysing the prefixes or modifiers used with issue topics, it signals the direction and nuances of discussions and tells us how certain issues are framed. The prefixes analysed are tech, digital, net, online, cyber, e-, and virtual. Diplo’s Data Team’s analysis of yesterday’s transcripts shows that Day 1’s most used prefix was ‘digital’, employed by speakers in more than 30% of analysed texts, followed by ‘online’ with over a fifth of appearances. Compared to last year’s Day 1, a noticeable decline – of more than 50% – in the use of the prefix ‘cyber’ was observed. The terms ‘tech’ and ‘net’ swapped places this year, occupying fourth and fifth place respectively. As last year, the prefixes ‘e-’ and ‘virtual’ were not as prominent, coming in the last two positions.
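An analysis of this kind can be approximated in a few lines. The sketch below is a simplified, hypothetical version of such a prefix monitor (real transcript processing would involve tokenisation and disambiguation, which Diplo’s actual pipeline presumably handles): it counts standalone and fused occurrences of each monitored prefix in a text and ranks them.

```python
import re
from collections import Counter

# The seven prefixes tracked by the monitor described above
PREFIXES = ["tech", "digital", "net", "online", "cyber", "e-", "virtual"]

def prefix_counts(text: str) -> Counter:
    """Count occurrences of each monitored prefix, whether standalone
    ('digital divide') or fused to a topic word ('cybersecurity')."""
    counts = Counter()
    lowered = text.lower()
    for prefix in PREFIXES:
        if prefix == "e-":
            # 'e-' only counts when hyphenated to a following word (e-commerce)
            pattern = r"\be-(?=\w)"
        else:
            # match the prefix at a word boundary, fused or standalone
            pattern = r"\b" + re.escape(prefix)
        counts[prefix] = len(re.findall(pattern, lowered))
    return counts

sample = ("The digital divide and digital sovereignty dominated, while "
          "cybersecurity and online content, e-commerce and virtual walls "
          "also featured.")
print(prefix_counts(sample).most_common())
```

Run over a day’s worth of transcripts, the ranked output gives exactly the kind of ‘digital first, cyber declining’ picture reported above.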
Don't Miss Today
Applying human rights and ethics in responsible data governance and artificial intelligence
09:00 – 11:00 CET | Convention Hall II & online
So much is being said about the ethical implications of AI and data. What started as a discussion during IGF 2018 will continue during today’s main session on the role of ethical frameworks and human rights legal instruments in ensuring that AI is developed and used in a human-centric and trustworthy manner. Three main questions will be tackled: (a) What is trustworthy and responsible AI, especially with regard to data governance? (b) What is the role of human rights legal instruments and ethical frameworks in ensuring trustworthy and responsible data governance and AI? Are there any lessons learnt from existing frameworks? (c) How can we build a bridge between defining human rights and ethical frameworks and implementing them in AI systems and SDGs? What are the roles of different stakeholders and how can they work together to achieve the best results?
13:50 – 14:50 CET | Estrel Saal B & online
AI has long been a popular subject of Hollywood movies and science fiction novels. Today, the scales are tipping the other way. Policy-makers, corporations, and international organisations are positioning AI front and centre on their agenda. The stakes are higher and the social cost of inaction is far too significant to be ignored. As we witness calls for the regulation and oversight of AI technologies, we also need to ask ourselves: Can AI help humanity navigate the uncharted waters and unseen icebergs of the digital era? Can artificial intelligence draft a social contract for the AI era? The only way to find out is to bring AI to the discussion table. So, add a chair for AI. The session is an open invitation to find out more about Diplo’s new project, HumAInism, and to engage in discussions on AI and its impact on humanity.