Issue no. 22 of the Geneva Digital Watch newsletter, published on 30 June 2017, by the Geneva Internet Platform (GIP) and DiploFoundation. Contributors: Stephanie Borg Psaila, Jovan Kurbalija, Virginia Paque, Marilia Maciel, Roxana Radu, Vladimir Radunović, Barbara Rosen Jacobson, Sorina Teleanu. Design by Viktor Mijatović, layout by Aleksandar Nedeljkov, Diplo’s CreativeLab
The UN Group of Governmental Experts (GGE), tasked with examining cyber-threats and making recommendations, was unable to reach consensus on its final report during its last meeting on 19–23 June.
Previous reports introduced the principle that existing international law applies to the digital space, and developed norms and principles of responsible behaviour of states in cyberspace. Although the reports are not legally binding, they carry significant influence in the field of global cybersecurity.
While previous UN GGE reports will remain valid and applicable, the group’s future is uncertain. In its absence, states may move more towards bilateral agreements, a trend which was particularly prevalent in 2015 and 2016. Scroll down for further analysis of the future of the UN GGE.
The Internet industry is under increasing pressure from governments to provide digital information for use in criminal investigations and anti-terrorist activities. Traditional channels for international cooperation are slow and cumbersome. A regular legal process for obtaining digital evidence via Mutual Legal Assistance Treaties (MLATs) may take at least ten months. To bring the legal system up to speed for the digital era, Google has proposed new norms for providing digital evidence to foreign governments.
Google’s proposal would allow law enforcement to request digital evidence directly from Internet companies, bypassing the need to go through MLAT channels. According to the proposal, this would work only between countries that adhere to privacy, human rights, and due process standards.
Google’s proposal comes only a few months after Microsoft’s proposal for a Digital Geneva Convention, which outlines new cybersecurity norms for governments and the Internet industry. The private sector is increasingly stepping into a norm-developing role, which was previously mainly the ambit of governments. Companies have a stake in initiating discussions and proposing solutions on issues, such as cybersecurity, that affect their business directly.
Cyber-attacks and terrorist attacks often spur calls for action. The London Bridge attack in the UK, on 3 June, triggered a call for more regulation of the Internet. British Prime Minister Theresa May said that new international agreements should be introduced to regulate the Internet and to deprive extremists of their safe spaces online.
Echoing this call, Australian Prime Minister Malcolm Turnbull referred to extremist content and terrorists’ use of the Internet and the risks of ‘ungoverned spaces’. His call to strengthen security agencies’ ability to legally compel a company to assist with decryption came ahead of the meeting of the Five Eyes (FVEY), a guarded alliance of five countries (USA, Canada, UK, Australia, New Zealand) formed to tackle security issues.
Meanwhile, the UK and France launched a joint campaign to combat terrorist content. Although the UK is already working with Internet companies to stop the spread of extremist material, May and French President Emmanuel Macron agreed that those firms must do more ‘and abide by their social responsibility to step up their efforts to remove harmful content’.
Internet companies have come under attack for failing to adequately address the surge and spread of extremist content on their platforms. In response, several initiatives have been launched, such as Facebook’s Online Civil Courage Initiative, and the Global Internet Forum to Counter Terrorism (initiated by Facebook, YouTube, Twitter, and Microsoft).
Blocked access to the Internet was reported in several countries throughout June.
Ethiopia blocked access to the Internet across the country to counter the risk of national exam papers leaking online. Egypt blocked access to more than 50 news websites and companies offering virtual private network (VPN) services that could help Egyptians circumvent the block.
The Cyberspace Administration of China (CAC) ordered Internet companies to shut down close to 60 entertainment news accounts. This is in line with the country’s efforts to counter ‘excessive reporting on the private lives of, and gossip about, celebrities’.
Among the concerns of the Special Rapporteur on the protection and promotion of the right to freedom of opinion and expression, outlined in his annual report, is the fact that states are increasingly demanding that providers of telecommunications and Internet services comply with censorship requests.
Two years after social media companies started to introduce specific policies on digital legacy, a controversial court judgement placed this issue in focus.
The case concerns the Facebook account of a 15-year-old girl who died in 2012 following a train accident. Her parents tried to find out whether the girl was cyberbullied in the lead-up to the incident. In 2015, a Berlin regional court ruled in favour of the parents, arguing that the contents of the girl’s account are analogous to letters and diaries and ‘can be inherited regardless of their content’. The Appeals Court overturned the decision, stating that ‘a contract existed between the girl and the social media company and that it ended with her death’.
From a legal perspective, three main questions arise:
The case, which is expected to continue, brought digital legacies into focus, including the fact that many jurisdictions do not yet regulate the transfer of digital content when a user passes away. In addition, Internet companies have their own ways of dealing with content and user accounts following the death of a user. The lack of adequate legal provisions may soon need to be addressed.
The GIP Digital Watch observatory is currently mapping the legal status of digital content across jurisdictions, and the treatment of digital legacies by social media networks.
The monthly Internet Governance Barometer of Trends tracks specific Internet governance issues in the public policy debate, and reveals focal trends by comparing the issues every month. The barometer determines the presence of specific IG issues in comparison to the previous month. Learn more about each update.
The British Prime Minister called for new rules to deprive extremists of their safe spaces online; the Australian Prime Minister called for weakening strong encryption; the UK and France launched a joint campaign to combat terrorist content.
Facebook, Microsoft, Twitter, and YouTube formed a Global Internet Forum to Counter Terrorism which will develop technological solutions, such as a hash database for extremist content, and undertake research to guide policymakers. The companies will also collaborate with the UN Security Council Counter-Terrorism Executive Directorate and the ICT4Peace Foundation to establish a knowledge-sharing network, techagainstterrorism.org
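A shared hash database works roughly as follows: when one platform removes a piece of extremist content, it shares a digital fingerprint (hash) of the file rather than the file itself, and other platforms check new uploads against the shared fingerprints. The sketch below, in Python, illustrates the workflow only; the function names are ours, and real systems typically use perceptual hashes that survive re-encoding, rather than the cryptographic hash used here.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint of a file's contents.

    Illustrative only: production systems use perceptual hashes that are
    robust to re-encoding; SHA-256 is used here to keep the sketch simple.
    """
    return hashlib.sha256(data).hexdigest()

# Fingerprints of previously removed files, shared between platforms.
shared_hash_db: set[str] = set()

def report_removed(data: bytes) -> None:
    """A platform removes a file and shares its hash, not the file itself."""
    shared_hash_db.add(fingerprint(data))

def is_known_extremist_content(data: bytes) -> bool:
    """Other platforms check new uploads against the shared hashes."""
    return fingerprint(data) in shared_hash_db

report_removed(b"example removed video bytes")
print(is_known_extremist_content(b"example removed video bytes"))  # True
print(is_known_extremist_content(b"some other upload"))            # False
```

The design lets companies cooperate without exchanging the content itself, which matters both for speed and because redistributing the material would itself be unlawful in many jurisdictions.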
ICTs and the Internet hold the key to sustainable development and contribute to every area of development. The annual WSIS Forum tackled many aspects related to the SDGs, highlighting projects and initiatives that are working to attain the goals.
In its latest meeting, the UN GGE did not reach consensus on a final report (read our analysis) while Google proposed a new framework that would allow governments to request digital evidence for law enforcement investigations directly from Internet companies.
A new strain of ransomware, known as Petya, paralysed institutions worldwide after infecting Windows-based systems in more than 65 countries.
The Council of the European Union launched an initiative to develop a Cyber Diplomatic Toolbox – a framework for joint diplomatic response by the EU to deter cyber-attacks and respond to cyber-threats.
The personal data of almost 200 million US citizens was leaked and uploaded to an Amazon cloud server, available to anyone with a direct link.
The European Commission has fined Google €2.42 billion for non-compliance with EU antitrust rules. The Commission said that Google abused its dominant market position as a search engine by giving an illegal advantage to its comparison shopping service to the detriment of other similar services.
A New York judge ruled that former Uber drivers are eligible for unemployment benefits. China adopted guidelines for the sharing economy – a booming sector which is expected to account for around one-tenth of the country’s GDP by 2020. The guidelines aim to further boost mass innovation and entrepreneurship. The European Parliament issued clearer guidelines on the collaborative economy, with the intention of clarifying open issues.
After months of negotiations, Indonesia and Google reached a tax settlement for 2016, although the figures were not disclosed. As of 1 July, Australia will apply a 10% goods and services tax on digital products and services from overseas that are bought in Australia. Meanwhile, the Canadian government rejected a proposal to impose a 5% tax on broadband Internet streaming services. In Russia, Google was blocked for several hours on 22 June in a bid to enforce a tax ruling made in 2016.
The Special Rapporteur on the protection and promotion of the right to freedom of opinion and expression published his annual report, examining the role of states in undermining freedom of expression online and other implications for online rights. The report also includes recommendations for states and private actors.
Ethiopia blocked access to the Internet to counter the risk of national exam papers leaking online. The shutdown was meant to prevent a repeat of the 2016 leak of exam questions.
The Court of Appeals in Berlin, Germany, ruled that parents of a 15-year-old girl who was killed by a train in 2012, have no right to access her Facebook account. In overturning a 2015 regional court ruling in favour of the parents, the court ruled in favour of Facebook, stating that ‘a contract existed between the girl and the social media company and that it ended with her death’.
The Supreme Court of Canada ordered Google to globally de-index websites belonging to a firm which was unlawfully selling the intellectual property of another company.
Chinese scientists reported successful quantum satellite communication tests between two points 1200 km apart. In the coming years, the highly secure nature of quantum communication could become an alternative to current communication channels.
Airlines are expected to have better Internet connections on board their flights, after Viasat launched its new satellite. Viasat-2 will operate above the Americas and Atlantic Ocean.
Six of the top ten countries leading in IPv6 adoption are European countries, Akamai’s First Quarter 2017 State of the Internet report shows. Deployment is also increasing globally, the Internet Society reports.
In the USA, net neutrality supporters are planning an online protest on 12 July to advocate against the plans of the Federal Communications Commission (FCC) to roll back net neutrality rules. The Internet-wide day of action to save net neutrality will involve major Internet companies such as Amazon, Mozilla, and Reddit, as well as organisations such as the Electronic Frontier Foundation, the World Wide Web Foundation, and Public Knowledge.
Researchers from OpenAI and DeepMind have been working on an AI algorithm that learns from human feedback, as a way to make AI safer. Since misspecified reward functions in reinforcement learning can lead to unintended and potentially unsafe behaviour, the proposed method relies considerably on human feedback. Meanwhile, a group of UK-based researchers from the Alan Turing Institute have argued that current regulations are not sufficient to address issues such as transparency and accountability; new rules and guidelines are needed.
Scientists at CERN are deploying AI to protect the CERN grid from cyber-threats. They are working with an AI system which is being taught to distinguish between safe and threatening behaviour on the CERN network and take action when it detects a problem.
The drone market is growing steadily. In the EU, public-private partnership SESAR JU, which coordinates research on air traffic management, has published the U-space blueprint to make the use of drones in low-level airspace safe, secure, and environmentally friendly.
The summit, held 6–9 June, brought together experts from the Internet of Things (IoT) industry and research community, for discussions on current and emerging IoT technologies, the use of IoT in areas such as smart homes and public buildings, energy efficiency, and connected vehicle services. Other sessions discussed the new challenges for network infrastructure that are emerging with the increase in IoT devices. One key solution for addressing such challenges lies in speeding up the deployment of Internet protocol version 6 (IPv6).
Artificial intelligence (AI) may have a larger impact than the Industrial Revolution. This was the echoing opening message of the summit, held 7–9 June, which brought together AI experts, international organisations, and academics to discuss the possibilities of AI. While the possibilities seem endless, some experts also offered cautionary perspectives on the limits and challenges, such as the fact that there is more to ‘intelligence’ than machines can replicate, and that AI can widen the digital divide.
The Forum, held 12–16 June, brought together the ICT for Development community, to look at how the world is progressing on its way to sustainable development and what still needs to be done. There was broad agreement on the fact that information and communications technologies (ICTs) and digital solutions can drive progress towards the sustainable development goals (SDGs). For this potential to be fully exploited, there is a need for enhanced efforts in areas such as deploying infrastructures, building confidence and trust in the use of ICTs, promoting digital literacy and bridging other digital divides. The Geneva Internet Platform provided just-in-time reports from the forum. Read the session reports, and download the Summary Report.
Organised on 14 June by the Graduate Institute of International and Development Studies, the event focused on the issue of fake news and the role confirmation bias plays in how we interpret news and information. Fake news comes in many facets, and is fast becoming a ‘democratic problem’. At the individual level, critical thinking is important, as users should be able to rebut false information. At the level of society, a combination of factors can help address the negative implications of fake news: transparency by Internet companies, education and awareness raising, and adding warnings to sponsored content.
The event, organised on 14 June, was dedicated to exploring possible solutions for ending Internet shutdowns and surveillance around the world. The rise in state-sanctioned denial of access to the Internet and state access to personal data has significant implications for human rights. In this context, a call was made for the Human Rights Council to provide guidance to member states on the basic minimum of human rights online, while private companies were urged to implement the UN Guiding Principles on Business and Human Rights.
At its 35th session, held 6–23 June, the Council discussed, among others, two reports related to human rights in the digital environment. The report on the Promotion, protection and enjoyment of human rights on the Internet: ways to bridge the gender digital divide from a human rights perspective, prepared by the UN High Commissioner for Human Rights, contains recommendations to ensure that ICTs are accessible to women on an equal basis. The Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression examines states’ obligation to protect and promote freedom of expression online, while focusing on the issues of Internet and telecommunications shutdowns, government access to user data, and net neutrality.
The Research Colloquium, held on 23 June as part of the Geneva Internet L@w Summer School, gathered young researchers, senior academics, and students attending the summer school. Young researchers presented their research projects on issues such as AI, autonomous vehicles, trust on the Internet, and consumer protection. A very vibrant discussion highlighted the impact of these new technological developments on legal and policy systems. In particular, participants focused on applying existing rules to new developments and identifying areas where new legal rules and policy approaches are needed.
The UN GGE was unable to reach consensus on its final report during its last meeting. What did the members agree on then, and what does this mean for the future of the UN-mandated group?
Consensus may not have been reached over the final report, yet there was broad agreement among experts on a number of points. This is how UN GGE Chair Karsten Geier, Head of the Cyber Policy Coordination Staff at the Federal Foreign Office of Germany, summed up the outcome of the latest UN GGE meeting.
Speaking at the Cyber Week conference in Tel Aviv (25–29 June), Geier explained that the agreement related to emerging risks (including the use of ICT by terrorists), capacity-building measures to be undertaken, and confidence-building measures and norms (including raising awareness among senior decision-makers, conducting exercises, defining protocols for notifications about incidents, warnings when critical infrastructure is attacked, and preventing non-state actors from conducting cyber-attacks).
There was also a general understanding that there is space for further work on a final document. There was no consensus in the GGE at this point on what options states might have to respond to cyber-attacks, and if and how to take the process further under the UN.
Some believe that the right of states to respond to cyber-attacks using non-cyber means can act as a deterrent against carrying out attacks. According to Christopher Painter, the US Coordinator for Cyber Issues at the United States Department of State, different options should be discussed, including diplomatic notes, economic and even ‘Internet connectivity’ sanctions. But a Cuban delegate publicly expressed concerns that this could increase the militarisation of cyberspace, and equate cyber-attacks with armed attacks as defined by the UN Charter.
As for the future of the process, most experts recognise the value of the GGE’s work, yet the Cuban delegate called for the creation of an Open-ended Working Group of the First Committee of the UN General Assembly. Shanghai Cooperation Organisation members, however, prefer to remain open to negotiations on an international Code of Conduct under the auspices of the UN.
Such a proposal is openly rejected by the US government. The USA believes that the focus should move away from developing new norms to ensuring that states adhere to the agreed voluntary norms, by discussing ways to act against misbehaving states. Thomas Bossert, the US Assistant to the President for Homeland Security and Counterterrorism in the White House, called for considering options other than just the UN for imposing consequences on cyber-attackers – particularly bilateral agreements and establishing a coalition of like-minded countries.
The UN GGE Chair emphasised that most experts agreed they could work further on final changes to the text, and underlined that the deliberations are not concluded; there are still some options to be explored towards a possible compromise.
While there is no doubt that the existing work of the UN GGE remains relevant, failure to deliver a consensus report may leave the future of the GGE open, and the dialogue on the conduct of states in cyberspace unresolved.
Artificial intelligence has been around for many years. Launched as a field of research more than 60 years ago, AI is now shaping the so-called fourth industrial revolution. In a two-page special, we look at its implications for and applications in our daily lives.
Many consider that the official birth of AI as an academic discipline and field of research was in 1956, when participants at the Dartmouth Conference coined the term ‘AI’ and talked about the fact that ‘every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.’ From that moment on, AI has been continuously evolving and has found its use in many areas, from manufacturing and agriculture, to online services and cybersecurity solutions.
Technology has an enormous potential to bring positive change in society and help address some of the challenges we face today. Participants at the AI for Good Summit and the WSIS Forum, both held this month in Geneva, talked about how AI can help advance the sustainable development agenda and identify solutions for problems such as crime, poverty, climate change, and hunger. But they also raised concerns over the unintended consequences of AI and its impact on the economic, social, and cultural aspects of society.
Many debates revolve around the disruptions that AI could bring to the labour market. A recent survey conducted among machine learning researchers found that, in their view, AI will outperform humans in many domains in the next 40 years. This means that tasks currently performed by humans will be automated, and some jobs will become obsolete. How can these concerns be addressed? Stopping technological progress is not an option, and many agree that efforts should be oriented towards better preparing the labour force for the new requirements of the world of work.
Other concerns are related to safety and security. Using AI in real life applications, such as driverless cars, brings into focus the question of human safety. Training algorithms to take into account multiple factors when making a decision (much like humans do) remains an area of intensive research. In one example, researchers from OpenAI and DeepMind have been working on an AI algorithm that learns from human feedback, as a way to make AI safer.
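At its core, learning from human feedback means fitting a reward model to human judgements, for example pairwise comparisons of two system behaviours, instead of hand-coding a reward function. The following toy sketch in Python is our illustration of that general idea, not the researchers’ actual algorithm: each ‘trajectory’ is reduced to a single number, a simulated human judge compares pairs, and a one-parameter reward model is fitted to those preferences.

```python
import math
import random

# Toy setup: each "trajectory" is summarised by one feature, and the
# (hidden) human preference favours trajectories with a higher value.
def true_preference(a: float, b: float) -> int:
    """Simulated human judge: 1 if trajectory a is preferred over b, else 0."""
    return 1 if a > b else 0

def predicted_prob(w: float, a: float, b: float) -> float:
    """Bradley-Terry model: P(a preferred over b) under reward r(x) = w * x."""
    return 1.0 / (1.0 + math.exp(-(w * a - w * b)))

def fit_reward(pairs, labels, steps=200, lr=0.1) -> float:
    """Fit the reward weight w by gradient ascent on the preference likelihood."""
    w = 0.0
    for _ in range(steps):
        for (a, b), label in zip(pairs, labels):
            p = predicted_prob(w, a, b)
            w += lr * (label - p) * (a - b)  # gradient of the log-likelihood
    return w

random.seed(0)
pairs = [(random.random(), random.random()) for _ in range(50)]
labels = [true_preference(a, b) for a, b in pairs]
w = fit_reward(pairs, labels)
print(w > 0)  # the learned reward agrees with the judge: higher is better
```

Once such a reward model matches the human judgements, a reinforcement learning agent can be trained against it, so that relatively few human comparisons steer the system’s behaviour.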
As AI systems involve judgments and decision-making, questions have also been raised regarding ethics, accountability, and transparency. But how can discrimination and bias in decisions taken by algorithms be avoided? If, for example, AI algorithms are used to track down extremist or hateful content online, how do we make sure that the algorithms are unbiased when determining what is and what is not inappropriate content? And who should be held accountable if an AI system does not act as expected?
These are some of the questions that researchers are looking into. The Institute of Electrical and Electronics Engineers (IEEE) has launched a Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, aimed at contributing to ensuring that technologists are educated, trained, and empowered to prioritise ethical considerations in the design of intelligent systems. Researchers at the University of California, Berkeley, and the Max Planck Institute for Informatics have been working on developing AI algorithms that can ‘explain themselves’, by designing a ‘pointing and justification’ system enabling algorithms to point to the data used to make a decision and justify why it was used that way.
These are only some of the concerns surrounding AI. The good news is that not only researchers, but also governments, intergovernmental organisations, the private sector, and civil society are increasingly considering these concerns. The Partnership on AI is one example in this regard: the initiative, launched in September 2016 by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft, to develop best practices on AI-related challenges and opportunities, has recently expanded to include partners such as the United Nations Children’s Fund (UNICEF), the Electronic Frontier Foundation, and Human Rights Watch.
Keep track of the latest policy discussions on artificial intelligence: /ai
Artificial intelligence is implemented in many areas. Let’s take a look...