Florida Department of Economic Opportunity finds that Uber drivers are not employees. Decision comes after Uber appealed original decision where the agency ruled that drivers were employees.
The International Telecommunication Union (ITU)’s comprehensive report on global ICT regulatory developments confirms that future network traffic will increasingly be driven by machine-to-machine (M2M) traffic generated by billions of connected devices. One billion IoT devices are expected to be shipped during 2015, reaching a predicted installed base of 2.8 billion connected devices by the end of the year.
Historically, telecommunications, broadcasting, and other related areas were separate industry segments; they used different technologies and were governed by different regulations. The broad and prevailing use of the Internet has aided in the convergence of technological platforms for telecommunications, broadcasting, and information delivery. Today, we can make telephone calls, watch TV, and share music on or via our Internet-connected devices. Only a few years ago, such services were handled by different technological systems.
In the field of traditional telecommunications, the main point of convergence is represented by Voice over Internet Protocol (VoIP) services. The growing popularity of VoIP systems such as Skype and Viber is based on lower price, the possibility of integrating data and voice communication lines, and the use of advanced PC- and mobile-devices-based tools. With YouTube and similar services, the Internet is also converging with traditional multimedia and entertainment services. Such services – which use the Internet as the delivery platform – are known as over-the-top (OTT) services.
Convergence is also discussed in relation to new business models enabled by the Internet, such as the sharing economy, which, in general terms, involves the use of digital platforms for the provision of ‘offline’ services (e.g. Uber for transportation, and AirBnB for accommodation). With advancements in the areas of the Internet of Things and artificial intelligence, the integration of these new technologies into existing products, services, and business processes is also increasingly seen as a matter of convergence.
While this digital convergence is going ahead at a rapid pace, more and more attention is paid to the related economic and legal implications.
At the economic level, convergence has started to reshape traditional markets by putting companies that previously operated in separate domains into direct competition. While new business models are emerging, existing ones see themselves threatened. For example, traditional telecom operators have been complaining about the fact that OTT services threaten their businesses; mobile telephony service providers, in particular, have seen drops in the usage of classical voice services, as customers are now more inclined to use VoIP services.
Faced with such challenges, companies take different approaches. Some insist that the competition brought by OTT services is unfair, as OTT providers are in most cases not subject to the same complex regulatory provisions. Others have taken proactive measures, by, for example, changing their business models to introduce new services to compensate for those less used. Another frequent strategy consists of mergers and acquisitions, in which smaller, new-on-the-market OTT providers merge with or are acquired by larger companies. In a more recent approach, OTT and telecom providers have started to conclude partnerships aimed at bringing advantages to both sides: for telecom providers, partnerships with OTT providers bring a competitive advantage, as well as added value for end-users; OTT providers, on the other hand, have their services easier to find and access, thanks to partnerships with carriers. These models, however, raise concerns related to their compliance with network neutrality principles, in cases when, for example, carriers choose to offer their clients unlimited and/or free access only to some OTT services.
Convergence has in many cases led to fears of the ‘Uber syndrome’ among business leaders: the scenario in which a competitor with a completely different business model enters the industry and flattens the competition. Such was the case when Uber entered the taxi market by innovating on the technological side; as a consequence, traditional taxi companies and drivers, whose businesses were threatened, filed lawsuits in courts across the world in protest against the new unregulated market entrant. At the EU level, for example, the Court of Justice of the European Union was asked to determine whether Uber could be considered a transport service provider or a digital platform facilitating the provision of information society services. Courts were also asked whether Uber drivers are independent contractors or Uber employees.
The legal system was the slowest to adjust to the changes caused by technological and economic convergence. Each segment – telecommunications, broadcasting, and information delivery – has its own special regulatory framework. This convergence opens up several governance and regulatory questions:
At international level, governance mechanisms are mainly used for the exchange of best practices and experiences. The International Telecommunication Union's Development Sector (ITU-D) has a study group on the converging environment. The Council of Europe has a Steering Committee on Media and Information Society, covering one aspect of convergence: the interplay between traditional and new digital media.
At national level, countries are addressing convergence in various ways. Some countries, such as several EU member states, India, and Kenya, have chosen flexible approaches towards regulating convergence, by simply addressing the issue from the perspective of net neutrality: users should be allowed to choose any type of applications or services provided over IP networks. Other countries have created new legal and regulatory frameworks for converged services. In Korea, for example, IPTV services are subject to legal provisions in terms of licensing requirements and service obligations. The EU is also exploring the introduction of legal obligations for providers of OTT services and the conditions under which such providers should be subject to the same regulatory requirements as traditional telecom operators. In some countries, convergence is addressed through self-regulation. And there are yet other countries – such as Belize, the United Arab Emirates, and Morocco – which have chosen, at one point or another, to explicitly ban OTT services through regulation, or to ask that access to such services be blocked by telecom providers.
This article argues that sharing economy business models must demonstrate a greater willingness to collaborate with governments, to help shape emerging regulatory frameworks, and to take an active part in countering the recent volleys of negative publicity that could undermine their innovative potential. It also outlines some ideas on how to underpin such a strategy.
In this article, the author calls for universities to pay more attention to the continuously changing IT industry landscape characterised by technology convergence, and to partner with the industry in order to 'jointly develop technology'.
The latest edition of the glossary, compiled by DiploFoundation, contains explanations of over 130 acronyms, initialisms, and abbreviations used in IG parlance. In addition to the complete term, most entries include a concise explanation and a link for further information.
The book, now in its sixth edition, provides a comprehensive overview of the main issues and actors in the field of Internet governance and digital policy through a practical framework for analysis, discussion, and resolution of significant issues. It has been translated into many languages.
The report outlines predictions of the development of the technology, media, and telecommunications sectors in 2017. It covers issues such as: biometric security, distributed denial of service attacks, self-driving vehicles, 5G networks, machine learning, and Internet of Things as a service.
Report on the implications of technological and economic convergence for the regulation of the digital ecosystem. The report focuses on six areas of regulatory policy: access regulation, barriers to entry and exit, privacy and data protection, merger review, spectrum management, and universal availability and access.
The report, based on a survey conducted among industry leaders around the world, looks into how these leaders view the competition challenges brought by the 'digital invaders' (providers of over the top services and new digital startups) and the growing industry convergence trends.
The report provides an overview of over-the-top (OTT) services and analyses the impact such services have on the traditional electronic communications sector, in terms of competition and consumer protection (within the framework of the European regulations on electronic communications).
This report maps the broadband market developments in the EU in 2015.
The study explores current and emerging business models for over-the-top (OTT) services (such as Voice over IP and video streaming services), looks into the costs and barriers to European online service development (including OTT), and offers an overview of the regulatory framework for online services in Europe and in some other countries.
The study looks at how top executives from IT companies perceive the so-called 'disruptive innovation' and the competition challenges brought by new market entrants with different business models. Industry convergence is tackled in the study, as the main trend anticipated for the coming three to five years.
The report analyses the changing business and consumers behaviours led by technological innovation, and looks at the impact of these changes on national ICT policies and regulations.
This report examines and documents evolutions and emerging opportunities and challenges in the digital economy. It provides a comprehensive overview of the digital economy, including matters of infrastructure, policy, net neutrality, development, privacy and security.
The report analyses the convergence between technology, media, and entertainment, explores the drivers of this convergence, and identifies potential obstacles.
The report looks into how regulators could improve laws, regulations, and policies in order to ensure an adequate consumer protection in an environment characterised by technology convergence.
The session discussed the role of courts, public and private, in Internet governance (IG). The moderator, Dr Jovan Kurbalija, Director, DiploFoundation, introduced key trends which framed the debate: the increasing participation of courts in IG, growing challenges to the protection of citizens’ online rights, and the fact that we are at the brink of another information revolution, with artificial intelligence (AI) at the forefront. Welcoming remarks were made by Mr Michael Kleiner, Economic development officer, State of Geneva Directorate General for Economic Development, Research and Innovation (DG DERI), who provided a framework for the discussion as part of the Geneva Digital Talks, a process drawing on practical solutions and cyber expertise in Geneva. The Geneva Digital Talks will be concluded at the Internet Governance Forum in December and will hopefully continue in 2018.
The speakers set out the scene for the discussion: Dr Roxana Radu, Manager, Geneva Internet Platform (GIP), analysed the past regulation of traditional IG matters, contrasting them with present technological issues and emerging dilemmas. Mr Vincent Subilia, President, Swiss Chambers' Arbitration Institution (SCAI), provided a link between digital policy and the practice of arbitration in Geneva. Prof Dr Jacques de Werra, Vice-rector, University of Geneva, discussed how a system of micro-justice, based on values and standardisation, could assist in settling private Internet-related disputes that may arise between Internet platforms and their users.
Radu began by offering an overview of legal issues in Internet governance debates. Dividing her presentation into two parts, she first analysed the evolution of regulation in traditional IG matters, as well as the preferred legal instruments on a hard-soft law continuum. While in the late 1980s ‘hard’ instruments were the default option for regulating the Internet, post-1995 it was primarily via ‘soft’ mechanisms, such as guiding principles, model laws and global strategies that the Internet was regulated. The latter were also used to address the two greatest issues from 2005 onward, cybersecurity and civil liberties. In the second part, Radu discussed the legal implications, present and future, of three digital trends: sharing economy, digital rights, and AI. Using the examples of Uber (taken from a DiploFoundation original study) and Google to illustrate the first two tendencies, she approached AI differently, through questions concerning accountability, ownership, citizenship, and the social and political rights of robots (including replication, voting and taxation). To conclude, Radu posited that the hybrid nature of new business models produces legal uncertainty, whereas AI and emerging technologies require ethical clarifications to begin with.
Subilia established a link between the broad topics in digital policy and the practice of arbitration. SCAI has issued more than 1000 awards since its establishment. While it does not yet provide online services, this may fit within the organisation’s plans. Subilia stated that SCAI is contemplating adding innovative tools to its process in order to continually improve the quality, cost-effectiveness, and speed of awards (recognised in the 149 countries that have ratified the New York Convention). On the matter of speed, Subilia noted that SCAI introduced an expedited process as early as 2004, since ‘time is indeed of the essence’, for example when it comes to domain name dispute resolution. Lastly, Subilia mentioned the recently launched ejust service as an example of online arbitration.
According to de Werra, it is not uncommon for courts to step in when regulation is not clear enough. There is therefore no reason to worry if courts sometimes engage in digital policy making by rendering decisions in Internet-related cases. However, what poses a problem is the fact that private actors (specifically Internet platforms) can (and may even have to) engage in quasi-judicial activities by rendering decisions which could have a major impact on millions of Internet users around the globe. This is precisely what has been taking place since the well-known 2014 decision by the Court of Justice of the European Union on the so-called ‘right to be forgotten’ (more precisely the right to be de-indexed). Google has since had to decide on hundreds of thousands of requests for removing content, and the persons who were not satisfied with the decisions made by Google generally did not challenge the decisions before the relevant bodies because of the costs and other burdens associated with such proceedings. In the Internet age, traditional court proceedings before national courts do not appear as the most appropriate way to decide Internet-related disputes which can arise between Internet platforms and their users. Cases like the ones pertaining to the ‘right to be forgotten’ consequently confirm the need for a system of online micro-justice, whereby a neutral, trusted private party would judge each individual case in order to address the challenges of what de Werra has called ‘Massive Online Micro Justice’ (MOMJ). Geneva and Switzerland can bring their tradition and expertise in international dispute resolution in order to formulate digital dispute resolution policy proposals that would respond to the challenges of MOMJ. This is what the University of Geneva’s digital policy project, the ‘Geneva Internet Disputes Resolution Policies 1.0’ was intended for. 
Such a system should reflect key values, such as transparency, expertise and efficiency, and human-based justice (and not AI-driven justice).
Kurbalija encouraged the ensuing discussion by asking the panellists and the audience whether they ever needed access to justice in online matters, personally or institutionally. The ‘right to be forgotten’ procedures, data breaches imperatives, the lengthy time of court proceedings, and the disconnect between global technologies and local jurisdiction were among the topics addressed.
Mr Dunstan Allison-Hope, Managing Director at Business for Social Responsibility (BSR), gave background on destructive machines as technologies that have been programmed to make decisions, and discussed the need to find out how to provide remedy when decisions are made by machines.
Mr Amol Mehra, Executive Director at the International Corporate Accountability Roundtable (ICAR), said that the discussion should focus on the impact machines have on humans, and the impact of the mechanisation of less skilled labour.
Mr Steve Crown, Vice President and Deputy General Counsel at Microsoft, commented that it is the responsibility of businesses to respect human rights, and that there are potential risks in the evolution of artificial intelligence (AI). Large amounts of data are fed into a machine, instructing it what to do by identifying patterns and correlations. But machines have no empathy or emotions, and the quality of data input has an impact on the effectiveness of the machine. Human errors and prejudices can be fed into machines, resulting in disastrous consequences.
Crown proposed that as a remedy to such challenges, scientists must strive to programme machines to help humans and ensure the transparency of data input to uphold peoples’ integrity.
Dr Sandra Wachter, Researcher in Data Ethics at the University of Oxford and Turing research fellow at the Alan Turing Institute, commented on the need for accountability for decisions made by machines. Individuals have a right to know about their data held by machines. To achieve this, companies must update privacy policies to inform individuals about the data the companies collect and how that data may be used. According to Wachter, this would need to be guided by domestic legislation with regulation mechanisms.
Ms Alex Walden, Counsel for Free Expression and Human Rights at Google, stated that a billion people use Google services every day, and a billion people’s new data is added every day. Walden said that Google is able to redress data protection violations through the applicable jurisdiction, and that its technology is continually being improved to recognise democratic principles. Walden pointed out that Google has policies that prohibit violence, extremism, and terrorism, and that it has teams reviewing materials in different languages. Exceptions are applicable to educational and artistic materials. In collaboration with civil society organisations, Google is helping inform companies on how they can respond to human rights violations through technology.
Ms Cindy Woods, Legal Policy Associate at the International Corporate Accountability Roundtable (ICAR), highlighted that the increased displacement of humans by machines is a human rights concern. There are alarming figures of workers replaced by machines. Woods pointed out that robots are another example of destructive technology, and that it is projected that by 2020, using a robot will be four times cheaper than human labour. The International Labour Organization (ILO) projects that two-thirds of humans working in the garment industry can be replaced by machines, and yet in countries such as Cambodia, the garment industry constitutes 80% of the total labour force.
Mr Theodore Roos, Project Collaborator for Future of Work at the World Economic Forum (WEF), stated that the WEF has a project on preparing for future work. Roos stated that different solutions are required in developed and developing countries, but also within the same group. One solution is education, not just in schools, but lifelong education for people to get new skills and adapt to new work.
Roos also proposed social services, for instance, compensating people not working, allowing people to move to countries where work is available, and encouraging and rewarding people working in human capital sectors, for instance, education and health.
The moderator, Ms Leslie Johnston, Executive Director at the C&A Foundation, introduced the panellists and mentioned that they are technologists who can explain how technology can promote human rights.
Ms Jessi Baker, Founder of Provenance, commented that Provenance’s vision is to use blockchain technology to improve access to information about businesses and human rights. Baker explained that blockchain is a new type of database that facilitates the exchange of information at a global level. Baker gave the example of cryptocurrencies, and explained that blockchain is a decentralised network with pieces of data coming from different sources, and that it is beyond government control. She argued that this allows data to flow down the market through the supply chain. According to her, there is a need to digitalise data to empower individuals along the supply chain, rather than have top-down solutions. Baker gave the example of a project in the fishing industry in South East Asia, in which blockchain connected fishermen and end users, and is helping to reduce price abuse along the supply chain.
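The tamper-evident, append-only property that makes blockchains attractive for supply-chain records can be illustrated with a minimal hash-chain sketch. This is an assumption-laden simplification (the actor names and record fields are hypothetical, and real blockchain systems add replication and consensus on top), intended only to show why a record altered early in the chain is detectable later:

```python
import hashlib
import json

def hash_block(block: dict) -> str:
    """Deterministically hash a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_record(chain: list, data: dict) -> None:
    """Append a record, linking it to the hash of the previous block."""
    prev_hash = hash_block(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev_hash})

def verify(chain: list) -> bool:
    """Tampering with any earlier record breaks the link to the next block."""
    return all(
        chain[i]["prev_hash"] == hash_block(chain[i - 1])
        for i in range(1, len(chain))
    )

# Each supply-chain actor appends a record (hypothetical actors/events).
chain = []
add_record(chain, {"actor": "fisherman", "event": "catch logged"})
add_record(chain, {"actor": "processor", "event": "batch received"})
add_record(chain, {"actor": "retailer", "event": "product sold"})

assert verify(chain)
chain[0]["data"]["event"] = "catch altered"  # tampering is now detectable
assert not verify(chain)
```

In a decentralised deployment, each participant holds a copy of the chain, so no single party can rewrite history unnoticed.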
Ms Beth Holzman, Director for Worker Engagement at Laborlink, observed that technology can unlock the voice of workers. Laborlink technology enables worker engagement and the attainment of unfiltered feedback which provides a better understanding of labour issues. Moreover, quality data improves working conditions. Holzman said that Laborlink China collected 32 000 survey responses from across 20 Chinese factories, and in Bangladesh it collected over 32 000 survey responses from workers in 40 factories. This has given a platform to 47% of workers to report grievances and 38% of them have been effectively assisted.
Holzman said that to achieve meaningful results, there is a need to take the necessary action.
Furthermore, Holzman pointed out that their goal was to put workers at the centre, that worker engagement must be at the core of all factory engagements, and that companies must enhance their responsible sourcing programmes.
In connection to remedy, she claimed that worker perspectives are key to risk assessment, mitigation, and prevention, and that companies need to focus on remedy criteria. Holzman concluded by saying that in future, there must be consent on access to individual information and defined pathways for remedy.
Dr Venkat Maroju, Chief Executive Officer at Source Trace, commented that smallholder farmers are the backbone of agriculture productivity and that farmers should be engaged to improve productivity. Using the digital power of mobile technology brings change through digital transactions. However, most of the farmers are in rural areas with poor connectivity and mostly offline.
Maroju stressed that quality infrastructure, digital payments, training in certification, and training in modern and digital financing through co-operatives are helping to improve farmers’ activity. Companies need to work on social accountability, and Source Trace is helping companies and cooperatives to translate their policies and make them work in areas of social enterprise and social audit through the use of technology.
The limited amount of money at the grassroots level of the value chain makes it difficult to attract information and communication technology (ICT) investment. The illiteracy of farmers and the lack of support in rural areas are key challenges in digitalising rural farming.
Mr Kenton Harmer, Certification Director at Equitable Food Initiative (EFI), explained that the EFI is a skills-building and standards-setting organisation. EFI innovation provides relationship building, leadership, and team training, and enhances companies’ compliance with standards, audit, and certification. The initiative uses digital technologies in these activities, as they help to identify and come up with remedies to work-related problems.
Throughout the 2017 edition of the Geneva Peace Week, it became clear that digital technology has important implications for conflict prevention, albeit in two distinct and contradictory ways. Some sessions identified the ways in which digital technology can assist in the prevention of conflict. They highlighted the potential of e-commerce, big data, artificial intelligence (AI), and geographic information systems. Yet, at the other end of the spectrum, there was a focus on the ways in which digital technologies have given rise to increased threats. How to respond to the risk of cyberconflict? What will happen if new technologies, such as big data and AI, are used for the wrong purposes?
Opportunities for conflict prevention
One of the opportunities posed by digital technology is in the realm of e-commerce. With the launch of the e-caravan for peace, the International Trade Centre and the Permanent Mission of Japan showed that e-commerce can advance economic empowerment, including that of women and migrants in conflict situations. Trade in war zones can be a force for good, and e-commerce can allow for the integration of disempowered communities in the economy.
Gaming is another emerging avenue of contribution to conflict prevention. UNITAR presented its recently developed peacekeeping game Mission Zhobia. Throughout the game, skills and knowledge can be developed and tested in the safe environment of a simulated game. By training on issues such as conflict analysis, engaging stakeholders, building trust and adapting to new challenges, the game teaches key competencies for peacebuilding.
Emerging technologies may have extensive potential in untangling the complexity in which conflicts are embedded. Big data could provide real-time, objective information to conflict analyses and early warning systems, and the visualisation of big data could provide clarity on conflict patterns. Geographic information systems and satellite data – which could be considered one of the earliest forms of big data – can provide important insights in early warning systems and the utility of open source-based information was also discussed. Yet big data can be complex, biased and multi-interpretable, and their collection can give rise to data protection concerns that need to be taken into account. AI systems have turned out to be effective in tackling well-defined problems; nonetheless, their utility in complex settings and social contexts has so far remained limited.
Threats to conflict prevention
One of the recurring themes during the Geneva Peace Week was the search for an appropriate response to the risk of cyberconflict. One initiative was brought forward earlier this year by Microsoft’s President Brad Smith, who proposed a Digital Geneva Convention. The utility of such a convention was discussed during one of the roundtables at the opening of the Geneva Peace Week. Discussants agreed that challenges brought by digitalisation require new norms and regulations. However, due to the important role of non-state actors in cyber warfare and the key concerns regarding the responsibility of the private sector, a Digital Geneva Convention might not be able to solve the key issues.
Further building on this topic, the session ‘Preventing cyber conflicts: Do we need a cyber treaty?’ discussed, among other things, whether the existing legal framework is sufficiently equipped to deal with cyber threats. The panellists agreed that any new convention needs to be drafted with the participation of all the stakeholders and that governments need to take action to address vulnerabilities and externalities. Another session tackled a particular cyber challenge – the creation of a safer Internet for children – dealing with the development of a strategy to combat sexual violence against children.
The topic was concluded with a keynote lecture by Smith, who explained the rationale behind the proposed Digital Geneva Convention, relating it to the history of the establishment of the ICRC and the Geneva Conventions. His keynote was followed by a panel discussion with humanitarian and human rights perspectives and comments from the participants and online audience.
Besides the Internet as we know it today, emerging technologies are giving rise to new threats as well. Big data risks leading to mass surveillance and AI could empower lethal autonomous weapons systems. The face of war and conflict prevention will continue to be affected by technology, highlighting the need to continue the discussion on how to mitigate technology threats while promoting technology as a conflict prevention tool.
The ‘Future of Work’ discussion took place on 2 November 2017 at the Auditorium of The Graduate Institute of International and Development Studies (IHEID). Mr Ryan Avent, Senior Editor and Economics Columnist, The Economist, moderated the panel discussion and welcomed the audience by reminding them of the crucial impact that automation will have on the job market.
The discussion was launched by Dr Richard Baldwin, Professor of International Economics, IHEID, who considered what kind of impact technology will have on economic growth in the upcoming years. He affirmed that figures do not point towards an optimistic future: in the past, information and communications technologies (ICTs) (i.e. automation) have mainly affected the manufacturing sector; recently, however, technology has come to affect the service sector, where an estimated two-thirds of people are employed. Moreover, job replacement in the service sector will be faster in pace than in the manufacturing sector. He then argued that this phenomenon is already occurring in specific sectors, such as web development, where remote working (i.e. ‘telecommuting’) is already possible. In these cases, most of the work carried out is domestic.
Dr Baldwin stated that such an impact is certainly posing challenges on two grounds. On the one hand, we must not forget the consequences at the societal level: there is the danger of a popular backlash blaming job losses on technologies (rather than on countries’ policies). On the other hand, not the whole service sector will be automated. Artificial intelligence (AI) is actually ‘almost intelligence’, as computers can only recognise common patterns; hence, some functions of the service sector (e.g. parking a car) will be difficult to automate. Moreover, computers can ‘learn’ only when clear, sufficient sets of data are available. An example of this is the Swedish-speaking robot ‘Emilia’, which is not able to speak Swedish dialects because of the lack of sufficient data. The limit of machine learning lies in uncertainty and unpredictability, which, in job market terms, translates into demand for soft skills.
Finally, Dr Baldwin asked the audience whether job loss caused by automation, and job creation due to the reconfiguration of the market, will result in a zero-sum effect. The answer to this question varies depending on the economy. Regarding developing countries, the main driver is currently economic growth characterised by industrialisation following the value chain and the agglomeration economy (e.g. ports, roads), and emerging markets will be spreading more with a micro perspective. Concerning developed countries, the perspective is not optimistic, as automation is causing big disruption with numerous backlashes (e.g. regulations regarding Uber and Airbnb). In the longer run, such economies will see a constant readjustment of job skills (e.g. shorter training spans), with more focus placed on non-automatable skills (e.g. soft skills).
Avent opened the panel discussion by asking the panellists three provocative questions.
1. What would happen if 50% of the workers went to the gig economy?
Ms Linda Kromjong, Secretary General, International Organization of Employers (IOE), asserted that that scenario would not be ‘as big of a change as we think’, considering that we are already working in a gig economy. The core element will be the pace at which the job market adjusts to the economy’s new configuration, which will depend on individual countries. She stressed that the key words are agility and flexibility. For example, considering workers’ high mobility, she suggested that pensions and security systems should be linked to individuals rather than to the country of work.
Mr Lawrence Jeff Johnson, Deputy Director, Research Department, International Labour Organization (ILO), placed more stress on the workers’ perspective during the job market’s reconfiguration. He noted that five billion people are currently economically active, but that about 1.5 billion of them are considered ‘vulnerable workers’ and will eventually be hit hard by automation processes. Moreover, he remarked that there is always uncertainty about exactly when such a backlash and market reconfiguration will happen.
2. What would happen if robots took 60% of the jobs?
Ms Shea Gopaul, Executive Director, Global Apprenticeships Network, considered that there will be ‘new colour’ jobs: as part of the job market’s adjustment, new skills will emerge and be required, and consequently new positions will be created or readjusted.
Kromjong maintained that job markets have always been in constant readjustment vis-à-vis technological change, but that in the case of automation, management positions will also need to be monitored closely.
Johnson focused on the governance aspect, arguing that automation forces us to think about how to ensure that such rapid change does not completely disrupt the job market. As in the past, some professions now face decline (e.g. attorneys), while others face a change in nature (e.g. from secretary to assistant).
3. What would happen if all courses moved online?
Dr Baldwin drew attention to the process of active learning: education is not merely a matter of assimilating concepts; the social component is key. He recognised that online learning would negatively affect middle-level universities; however, high-level academic institutions would still gain from it, as their focus is on networking and the formation of intellectual groups.
Kromjong agreed with Dr Baldwin’s argument: e-learning is an important driver in education, but it will never replace human interaction. Moreover, if all courses moved online, the digital divide would seriously hinder the goal of universally accessible education.
The session ended with consensus among the speakers that, in spite of e-learning’s significant advantages, the importance of human interaction and team skills cannot be replaced or taught online.
This session addressed the role of artificial intelligence (AI) in conflict resolution, and considered its positive and negative effects on society.
Providing a technical perspective, Mr Marc-Oliver Gewaltig, co-director of Neurorobotics at the Human Brain Project, emphasised: ‘Don’t believe everything you hear about artificial intelligence’. According to Gewaltig, it is important to think about what intelligence means, and to consider that not everything that appears intelligent actually is intelligent. The capabilities of today’s AI systems are still very narrow, even though people mistake them for being intelligent. Although AI systems are able to tackle well-defined problems for which information is generally available, difficulty arises when they have to deal with ‘noisy, vaguely defined problems’, such as social contexts. Therefore, AI systems are far from able to deal with the human, emotional, social, and economic aspects of decision-making.
Relating AI to global security, Mr Jean Marc Rickli, global risk and resilience cluster leader at the Leadership, Crisis and Conflict Management Programme of the Geneva Centre for Security Policy (GCSP), pointed to today’s exponential technological growth. Although he agreed that there is a lot of hype around AI, there are important consequences related to the deployment of AI technologies, such as the robotisation of the workforce and the risks related to autonomous weapons systems. He concluded that although success in the creation of AI ‘could be the biggest event in the history of civilisation’, there is an urgent need for education on the impact of AI, and for the improved governance of dual-use technologies.
The remainder of the session took the form of a debate between participants on whether society will gain from AI or lose from it. Those who were convinced of AI’s positive aspects argued that it will help us better understand the complexity of today’s world, drive sustainable development, and remove language barriers. Those who were less optimistic about AI’s positive impact pointed to surveillance, the dehumanisation of war, and the lack of regulations and checks on these systems.
Mr Andy Bates, Executive Director, United Kingdom, Europe, Middle East & Africa, Global Cyber Alliance, introduced the Global Cyber Alliance and noted that cybercrime has overtaken traditional crime in terms of economic value. Despite this growing economic risk, he argued that ‘cybercrime is just crime’, pointing out that it is crime adapting to modern tools. In his opinion, the responses should not differ fundamentally from the measures taken to address other forms of crime. He highlighted that cybercrime is usually serial in nature, with many criminals potentially exploiting the same vulnerability and being repeat offenders. He also discussed the human psychological aspect in the context of phishing and spoofing emails, as well as structural issues with the Internet.
He presented a tool called DMARC, which enables individuals and companies to publish a policy for their registered domains that receiving mail servers use to verify whether messages claiming to come from those domains are trustworthy. In addition, he presented the Internet Immune System, a blacklist given to top-level Internet service providers (ISPs) to track pages which contain malware. He argued that ISPs should work towards cleaning up the Internet for individuals.
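As a rough illustration of what such a policy looks like, the following minimal sketch (not from the talk; the example record and addresses are invented) parses the tag=value pairs of a DMARC TXT record, which a domain owner publishes in DNS and which receiving mail servers fetch to decide how to treat unauthenticated mail:

```python
# Minimal sketch of parsing a DMARC policy record. A domain owner
# publishes the record as a DNS TXT entry at _dmarc.<domain>; this
# example record and its addresses are invented, and a real receiver
# would implement the full DMARC specification, not just this parser.

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags

policy = parse_dmarc(
    "v=DMARC1; p=reject; rua=mailto:reports@example.com; pct=100"
)
print(policy["p"])  # prints 'reject': the requested handling of failing mail
```

The `p` tag asks receivers what to do with mail that fails authentication checks, and `rua` tells them where to send aggregate reports, which is how domain owners monitor abuse of their name.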
Lastly, Bates outlined future scenarios, focussing mostly on the importance of sharing information across the private and public sectors, together with measures to prevent duplication. He also mentioned how reporting on cybercrime could be centralised. As a concluding remark, he pointed out that individuals need to use common sense and intelligence when addressing cybercrime.
Dr Gustav Lindstrom, Head of the Emerging Security Challenges Programme, Geneva Centre for Security Policy (GCSP), gave a presentation focussed on issues and trends for future consideration in the field of cybersecurity. First, he stressed that raising awareness needs to be a constant process: due to its constantly changing nature, cybercrime should be seen as an emerging threat.
Lindstrom’s second point focussed on key aspects of evolving technologies and services which remain beneficial to us but also pose security challenges. He discussed developments such as cloud computing, as the cloud is an attractive target for attacks and can be used to hide malware. He also mentioned how big data poses security threats, through the injection of false data, in addition to privacy issues. 3D printing can be used to circumvent existing control measures while providing potentially dangerous tools; circumventing existing measures is also a risk posed by distributed ledger technologies. Finally, artificial intelligence and machine learning, despite their ground-breaking advantages, run the risk of being misused and compromised.
The Internet of Things (IoT) can provide benefits, but it also opens the door to many new potential threats. Lindstrom pointed out how the shift in states’ cyber defence and offence poses a challenge: an increasing number of countries have developed the capabilities to move from defence to offence, with roughly 30 countries having dual capabilities, although this number is hazy, as is the boundary between defence and offence. As such, Lindstrom suggested, offensive cyber operations will likely increase, and cyber weapons might be updated at a fast pace, especially in terms of delivery mechanisms. As a final point, while there are differences in state capabilities, all countries will seek to exploit zero-day vulnerabilities to their advantage. He concluded his presentation by pointing out the increasing role of the private sector in the field, due not only to financial aspects but also to the proliferation of public-private partnerships.
In this lecture, which ended the 2017 Latsis Prize ceremony, Mr Jacques Attali, president, Positive Planet Foundation, discussed whether and how humanity can put artificial intelligence (AI) at its service. Mr Denis Duboule, president, Latsis Foundation, welcomed Attali onto the stage, noting that the speaker is renowned worldwide. Attali’s presentation consisted of two parts. First, he took a look at some of the sensitive themes related to AI. Then, he explained why he believes that we have the means to master this problem.
Attali believes that because AI is ‘a machine capable of learning’, it will one day be able to gain consciousness. Although this may raise great concerns, humanity has already experienced the beginning of two other AI issues. The job market will see great change, and machines will bring about the termination of certain posts; although innovation will create a small number of new tasks and jobs, they will be too few by comparison. Concerning the military applications of AI, replacing men with robots seems beneficial to humanity, yet this does not come without its own risks. Robots could decide to take wartime decision-making into their own hands. Moreover, because they are set to become self-aware, they might decide to turn against humans, killing to avoid being ‘killed’.
In light of these dangers, should AI be banned before it is too late? Despite the need for careful monitoring, AI could fulfil one of humanity’s oldest dreams: immortality. Attali’s foundation focuses on the protection of common goods and the well-being of future generations, which can only be achieved if our species survives. By transferring our consciousness to machines, we could maximise our chances of achieving immortality. The answer to ‘can AI become a problem?’ is the same as the answer to ‘can AI be useful to us?’: yes.
Attali believes that AI can be applied to many areas. Be it in medicine, security, or policymaking, the foremost condition is that it is done in the service of humanity. To achieve this, we must observe some axioms. He suggested that we build upon the three rules postulated by Isaac Asimov, as they are a good basis, but not precise enough. To Attali, it is imperative that we retain our ability to shut down AI, which represents our control over it. Furthermore, we must ensure it does not acquire a survival instinct, or our lives will be at risk. Lastly, he proposed that the international community formulate a charter to specify the rights and responsibilities pertaining to AI.
The issue of rights dovetails with Attali’s next point: perhaps we should not ask whether we can put AI at our service, but whether we deserve it. A look at our history should give us serious reservations. Nevertheless, not only can we teach AI morals, it can also serve to foster our own altruistic behaviour. What is more, AI could be used to deter our worst impulses, the only exception being euthanasia, a call that should always be made by a human being. This offers a segue into the question of acceptance: should we accept that machines will be present in every aspect of our lives? Although we still have some time to decide, we do not have long.
Lastly, he highlighted how underdeveloped humanity’s natural intelligence is. Collectively, our computing power should be greater than that of any machine. Thus, we should not forego the task of developing our own intelligence, ensuring equal access to knowledge and to the activities that can expand it in its many kinds. After all, intelligence comes in various forms: creative, adventurous, transgressive, altruistic, and, maybe the best among them, that of love; this is the one we must put at the service of humanity.
Attali’s answers to the ensuing questions were as follows. On the impact of AI on governance structures, he remarked that, unlike the market, political structures are already quite artificial, thus AI can improve them. On whether the knowledge gap in technology could increase overall inequality, he replied that the real risk is the suppression of the means and will to learn, emphasising that ‘I [Attali] do not believe that mass unemployment is inevitable; what is important is to offer mass continued education’. Finally, asked about his opinion on taxing robots, he affirmed that he prefers to tax those who profit from them.
This side event of the Human Rights Council’s 36th session, organised by the Permanent Observer of the Holy See Mission to the UN and other international organisations in Geneva and the Permanent Mission of the Principality of Liechtenstein in Geneva, discussed the potential impact of artificial intelligence (AI) on justice systems and human rights.
The panel was opened by Mr Eric Salobir, President of OPTIC, who emphasised that the link between justice and AI is not just found in science fiction, but has already been tested and employed in judicial systems.
In his opening remarks, H.E. Archbishop Ivan Jurkovic, Permanent Observer of the Holy See Mission, spoke about the importance of considering human dignity in discussions on AI, as well as the risk of machines substituting humans in certain key areas, such as education. H.E. Ambassador Peter Matt, Permanent Representative of Liechtenstein, explained that AI encompasses both opportunities and threats, especially related to the human rights to privacy and non-discrimination. He added that addressing these challenges effectively requires multistakeholder engagement.
Next, Prof. Pierre Vandergheynst, Professor at the École Polytechnique Fédérale de Lausanne, provided an introduction to AI and the way it could be applied to the judicial system. Although it is not a new concept, AI is mostly understood today as machine learning, powered by algorithms, which are based on data. Ultimately, ‘whoever controls data, controls AI’. AI’s predictive power comes from its ability to model the reasoning from the raw data to the final outcome.
There are several examples of AI being reasonably accurate in predicting verdicts and risk assessments. Yet decisions based on AI cannot easily be disputed, as the patterns discovered by AI cannot easily be interpreted or explained. If AI decisions are based on biased data rooted in human judgement (such as previous verdicts), they risk disproportionately and negatively affecting certain population groups.
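The mechanism behind this concern can be sketched in a few lines. The following toy example (the groups, counts, and labels are invented, and no real system is this simple) shows how a frequency-based predictor trained on historically biased verdicts simply reproduces the bias as a ‘risk score’:

```python
# Toy sketch: a predictor 'trained' on historical verdicts. If past
# human decisions convicted group B far more often than group A, the
# model's risk scores inherit that disparity. All data here is invented.

from collections import defaultdict

def train(verdicts):
    """verdicts: list of (group, guilty) pairs from past cases."""
    counts = defaultdict(lambda: [0, 0])  # group -> [guilty, total]
    for group, guilty in verdicts:
        counts[group][0] += int(guilty)
        counts[group][1] += 1
    # Risk score = historical conviction frequency for the group.
    return {g: guilty / total for g, (guilty, total) in counts.items()}

# Hypothetical biased history: same behaviour, different verdicts.
history = [("A", True)] * 30 + [("A", False)] * 70 \
        + [("B", True)] * 70 + [("B", False)] * 30

risk = train(history)
print(risk["A"], risk["B"])  # prints 0.3 0.7: the bias is baked in
```

Real risk-assessment models are far more complex, but the principle is the same: when the training data encodes past human bias, the model’s outputs do too, while the statistical machinery makes the result look objective.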
Prof. Louis Assier Andrieu, Professor at the Law School of Sciences Po in Paris, and Research Professor at the National Center of Scientific Research, provided a more in-depth analysis of the interplay between AI and legal traditions. According to him, both common and civil law are based on fictions that would be internalised by AI. Common law’s fiction is its assumption that legal decisions can be based on previous cases; yet, ‘one never enters the same river twice’. Civil law assumes that laws and codes encompass every imaginable case, and that abstract rules can be applied to a variety of cases. To address these fictions, it could be useful to look at more communal, non-Western forms of justice.
Assier Andrieu highlighted the fact that France is already experimenting with predictive justice using big data, to make institutions more rational and less dependent on human bias. However, judgement ultimately needs trust. With 93% of private practitioners in the USA fearing replacement by robots, ‘where is the trust in the making of algorithms and the predefinitions used?’ Can we trust AI to decide something as important as legal judgement? Salobir added that we need to consider whether AI makes judgements based on consequence or correlation, and whether it judges the individual or the group to which that individual belongs.
Prof. Lorna McGregor, Professor and Director of the Human Rights Centre, University of Essex, concluded the panel discussion by relating AI to human rights. She explained that it is ‘crucial’ to understand our current and future environments in order to make sense of their human rights implications. AI could provide opportunities for progress towards the sustainable development goals by creating efficiency, cost-effectiveness, and improvements through disaggregated data. It can help allocate resources and predict crime.
AI can also generate risks for human rights, not only by creating privacy threats and facilitating surveillance, but also by creating inequalities and discrimination. While the big data on which AI is based is extensive, it is neither complete nor perfect. This imperfect data feeds algorithms and AI, and can ‘bake discrimination into algorithms’. As a result, human bias is ‘accentuated, rather than resolved’. Echoing Vandergheynst, she repeated that AI decisions cannot easily be challenged, and that judges and lawyers might not be sufficiently equipped to understand the accuracy of these decisions.
McGregor concluded that international human rights law could provide a framework to address the risks posed by AI. We also need to consider the responsibility of states and business actors, and identify red lines where the risks look too great to proceed.
The European Broadcasting Union (EBU) Big Data Conference brought together heads of digital, marketing, communications and legal departments alongside technologists and academics to share best practices for big data media strategies, with a focus on public service media (PSM).
The conference was opened by Ms Ingrid Deltenre, Director General of the EBU, who stressed the importance of big data as a ‘game changer’, as it allows for an improved understanding of the audience. As such, big data is not an aim in itself, but a tool to better engage with audiences. Mr Guillaume Klossa, Director of Public Affairs & Communications at the EBU, further elaborated on the importance of collaboration to break the silos between media, communications, legal, public affairs, technologists, and other experts and to ‘create conditions for EBU’s members to embrace big data as rapidly as possible’.
The first panel discussed the creation of trust and the importance of ethics, laws, and human rights. Ms Lokke Moerel, Senior of Counsel at Morrison & Foerster LLP, explained that ethics rules need to be respected in order to achieve social acceptance of the use of big data. This includes adherence not only to laws, but also to underlying ethics and social conventions, including the principles of non-instrumentalisation, non-discrimination, equity, and consent. Ideally, these conventions need to be analysed prior to implementation, as privacy- or ethics-by-design, since they will not be automatically rectified by an ‘invisible hand’: everybody is violating the rules in order to keep up with the competition. Prof. Alessandro Mantelero, Politecnico di Torino, echoed Moerel’s comprehensive approach and pleaded for preventive policies to address and mitigate potential privacy, ethical, and social impacts. These impact assessments should be based on human rights charters and community values, but tailored to the specific context in which the data is used.
Mr Joseph Cannataci, UN Special Rapporteur on the Right to Privacy, looked deeper into the concept of privacy and argued for a ‘very very very’ critical approach to big data. Privacy needs to be understood both as the protection of personal data and as an enabler of the free development of personality and of other human rights, including freedom of expression. After elucidating several international legal challenges, he claimed that ‘In an Internet without borders, we need safeguards without borders and remedies across borders’. According to Mr Joe McNamee, Executive Director of European Digital Rights, privacy challenges are primarily related to a ‘broken market’ in which ‘consequences don’t matter’ and incentives for transparency and accountability are lacking. The launch of the EU’s General Data Protection Regulation (GDPR) is an opportunity to move away from a ‘race to the bottom’ and ‘move the balance back towards the trust and reliability we expect’. Finally, Mr Pierre-Nicolas Schwab, Big Data/CRM Manager at the RTBF, shared RTBF’s experience of privacy protection, addressing three problems: over-personalisation, the absence of alternative viewpoints, and threats to privacy. To mitigate these challenges, education is key, and should focus on empowering users to control their data, building knowledge on personalisation, and opening the black boxes to ‘show’ algorithms.
During the Q&A that followed, the speakers argued for a more inclusive approach to the challenge of trust-building. Cannataci stressed the importance of moving beyond transparency on algorithms (which will only be understood by less than 1% of the population), towards a total societal impact assessment. Mantelero underlined the need to be open to discussion and ask advice from experts from different fields, including anthropology, to establish trust. Schwab agreed that data scientists should not be left alone, and that there is a continued need for sociological models.
Personalisation and recommendation systems: Delivering quality
This panel, moderated by Mr Alberto Messina, R&D coordinator RAI Centre for Research and Technological Innovation, discussed the opportunities of recommendation and personalisation systems, as well as the risks of filter bubbles that could arise from them. First, Mr Andrew Scott, Launch Director myBBC, shared experiences on the balance between automation and smart curation. While the former is entirely algorithmic, the latter requires human capabilities and allows for the provision of ‘breadth’, linking users to content they do not necessarily expect. Mr Mika Rahkonen, Head of Development, News and Current Affairs at Yle, debunked five misconceptions about news personalisation and filter bubbles:
While these misconceptions might be giving rise to a false understanding of challenges, Mr Michael de Lucia, Head of Digital Innovation at RTS, provided three examples of key challenges related to the data-driven era: infobesity, competition with Internet giants, and algorithms and artificial intelligence, which might result in the PSM's inability to stand out. These challenges can be mitigated by adopting a more global and co-operative approach, and by continuously learning and sharing experiences.
The discussion that followed reiterated that personalisation and smart curation ultimately aim at understanding the user and providing content in a better way. There is enough quality content, but the question relates to ‘getting the right content to the right people at the right time through the right devices’. Although it is difficult to compete with the ‘infobesity’ on the web, by adopting trust, transparency, and the right tone of voice, PSM can make a difference and avoid a situation in which ‘a lot of good content goes to waste’.
Algorithms and online platforms: Limitations and opportunities
Mr Robert Amlung, Head of Digital Strategy at ZDF, moderated the discussion, which aimed at sharing experiences to find out how the combination of platforms and algorithms can make sure that great content is easily found by users. According to Mr Michael Hlobil, Data Insights COE Solution Architect at Microsoft, the most important thing in this effort is to ‘document what you do’ and to have a diverse team of experts involved. Furthermore, data quality is crucial: ‘if you put shitty input, you get shitty output’. Mr Michael Paustian, Creative Director of Axel Springer, elaborated on the importance of human involvement: ‘an algorithm itself is just a set of instructions’, and the process of getting there is scientific and can be understood as a dialogue with the problem. Yet Mr Rigo Wenning, Legal Counsel at W3C, wondered whether we have ‘the right instructions’. There are many ‘dumb algorithms’ that ‘make people naked’. Furthermore, challenges remain regarding the re-centralisation of the web and the power of its gatekeepers, which continuously change the rules. This problem was also emphasised by Mr Sylvain Lapoix, freelance data journalist, who explained that Internet platforms have become Internet service providers, leaving the PSM vulnerable and dependent.
Data Journalism: New possibilities for investigation, collaboration and ubiquity
This discussion focused on the utility of big data for journalism and reporting, and was moderated by Mr Laurens Cerulus, Reporter at Politico Europe. Mr Mirko Lorenz, Information Architect at Deutsche Welle, stated that although individual media outlets might be small, ‘collectively we’re really big’ and can compete. There is a need to push back against fake news, to invest in stories with data narratives, and to think about the future of content for future generations. Zooming in on the value of data journalism, Mr Neal Rothleder, CTO of ORBmedia, explained how data brings new perspectives by seeing large pieces of the world all together, by grasping how things are changing over time, and by providing new views on complex situations.
Although data journalism has many promises, it might not be easy to get it right at once. Mr Jan Lukas Strozyk, Journalist at NDR, added five lessons from his experience in data journalism:
Once data-driven stories are created, they need to be visible, and Mr Roland Schatz, CEO of Media Tenor International, spoke about the importance of knowing the audience that is to be attracted. Furthermore, such stories do not necessarily have to be time- and resource-consuming if they are produced through partnerships with organisations that already collect extensive amounts of data, and with researchers and academics who can assist in the data analysis.
The following Q&A addressed the importance of building partnerships and collaborating with other sectors, the need for the organisation as a whole to be more data-aware (train people in Excel!), as well as bridging the gap between data scientists and editorial teams: the former might lack the knowledge of how to build the narrative, while the latter might not know what is possible with the data. This requires a cultural shift in the organisation.
The path towards a data company
This round-table focused on the new mindsets, technologies, tools, and strategies needed for the creation of a data company, and was moderated by Mr Aleksi Rossi, Head of Interfaces at Yle. Mr Ignacio Gomez, Director of Analytics and Future Media at RTVE, started by explaining the need to connect ‘tv people’ with data teams, as they are two halves of a brain that does not work in sync yet. Mr Sanjeevan Bala, Head of Data Planning and Analytics at Channel 4, added that there is a need for senior buy-in, as data should be in every part of the business. Within the business, there is a need for co-operation across different teams. Furthermore, recruiting from outside the broadcasting sector might help, as it allows for learning from other practices. These insights were echoed by Mr Dieter Boen, Manager of Research and Innovation at VRT, as the panel concluded that collaboration is key: identify shared interests with others and seize opportunities to collaborate: ‘better together!’.
The second day of the Big Data Conference: Serving Citizens, held on 22 March 2017 at the EBU headquarters in Geneva, started with welcome remarks by Mr Guillaume Klossa, Director of Public Affairs and Communications at EBU, who reaffirmed the conference’s purpose: developing strategies and implementing recommendation systems aimed at fostering citizens’ trust in data.
The first panel, ‘How Can Big Data Help Public Service Media Better Serve Citizens?’, explored the possibility for Public Service Media Companies (PSMC) to better accommodate citizens’ demand and use of digital content. Mr Gilles Marchand, General Director of Télévision Suisse Romande (TSR) and Radio Télévision Suisse (RTS), first considered that competition in this sector is increasing considerably.
He stressed the importance of optimising the current co-operative processes among different PSMCs. In particular, he suggested a threefold approach, based on intelligence (optimisation of the distribution of all content), community (involvement of the public through the use of smart data), and journalism (smart data can optimise the user-on-user content and consequently increase public trust).
The second speaker, Dr Mirko Schäfer, Leader of the Utrecht Data School, discussed the positive use of datafication as a potential means of fostering a European public sphere. He considered that active online participation is closely linked to civic action. Moreover, recalling the need for strategy expressed during the previous day, he reaffirmed that big data should be approached from a top-down perspective (that is, at the top decision-making levels) rather than with a bottom-up approach.
The second panel, Audience Measurement: Evolution or Revolution?, included three main speakers moderated by Mr Kristian Tolonen, Head of the Audience Research Department at Norwegian National Broadcasting (NRK), who opened the discussion by illustrating the importance of big data for audience measurement. In particular, he considered that the use of big data is beneficial on four dimensions: the target (the profile of the audience), the source (shifting from a one-source measurement system to hybrid solutions), the time (optimising the time required through measurement), and the level of the discussion (the depth of the information collected).
Mr Emil Pawłowski, Chief Science Officer at Gemius, further considered whether the big changes that have modified consumption patterns in the past decades should push for a re-evaluation of existing measurement techniques. Currently, accurate measurement is impaired for economic reasons (conducting research on a small panel is expensive) and by a fragmentation of data caused by the existence of multiple browsers. He affirmed that the ultimate goal of audience measurement will be multimedia research, that is, a measurement system that encompasses all the media used by the consumer (Internet, television, radio, press) simultaneously, rather than analysing each medium separately.
Mr Nick Johnson, Solutions Architect at Ireland’s National Television and Radio Broadcaster (RTE), centred his speech on the challenges faced when measuring the performance of RTE programmes across all its platforms. Consumption patterns have changed over the past years, driven by the Internet and smart devices; he therefore explained the difficulty for RTE in assessing the total value of its assets and measuring it efficiently.
Lastly, Dr Uffe Høy Svenningsen, Audience Researcher at Danmarks Radio (DR), illustrated the new TV-audience measurement system launched in Denmark on 1 January 2017. This innovative approach is based on four main sources: a basic panel, a digital panel, a web profile panel, and census data. The information coming from all these sources is combined and calibrated to produce a more accurate measurement. Although such a system allows for a better measurement of overall consumption, there are still challenges regarding the calibration of the information obtained (e.g. for certain on-demand programmes) and the actual mapping of all the content consumed by the viewer.
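The calibration idea can be illustrated with a minimal post-stratification sketch (the shares, panel, and viewing figures below are invented, and DR’s actual method is considerably more elaborate): each panel member receives a weight so that the panel’s age structure matches the census, and viewing figures are then computed with those weights.

```python
# Illustrative sketch of calibrating a viewing panel against census
# data by post-stratification. All numbers below are invented.

census_share = {"15-34": 0.30, "35-54": 0.35, "55+": 0.35}  # invented
panel = [  # (age group, minutes viewed) -- invented panel data
    ("15-34", 40), ("15-34", 20),
    ("35-54", 60), ("35-54", 80), ("35-54", 70),
    ("55+", 90),
]

n = len(panel)
panel_count = {}
for group, _ in panel:
    panel_count[group] = panel_count.get(group, 0) + 1

# Weight = population share / panel share for the member's group, so
# under-represented groups (here "55+") count for more.
weights = [census_share[g] / (panel_count[g] / n) for g, _ in panel]

# Weighted mean viewing time, calibrated to the census structure.
calibrated = sum(w * m for w, (_, m) in zip(weights, panel)) / sum(weights)
print(round(calibrated, 1))  # prints 65.0 (unweighted mean would be 60.0)
```

The sketch shows why combining sources matters: the raw panel under-represents older viewers, and reweighting against the census shifts the estimate accordingly.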
The event continued with two Toolbox sessions, which offered speakers a space to share hands-on experience through specific case studies.
The first, on Privacy Policies, was moderated by Mr Pierre-Nicolas Schwab, Big Data/CRM Manager at Radio Télévision Belge de la Communauté Française (RTBF), who recalled the crucial role of education and consumers’ trust. He illustrated the four-step approach taken at RTBF, based on a confidentiality charter, a single-sign-on platform, a recommendation system (sensitive to ethical concerns, marginalised groups, and gender equality), and an educational programme on artificial intelligence (AI).
Ms Lucy Campbell, Marketing Director TV & Digital at RTE, presented RTE’s single-sign-on platform: myRTE. Digital services are raising consumers’ expectations, and the proliferation of actors providing media content is making the sector very competitive. For these reasons, RTE has inaugurated a consumer experience strategy aimed at achieving a better understanding of the audience in order to provide personalised services and thus better experiences.
Mr Peter Farrell, Head of Legal BBC Workplace and Information Rights, complemented Ms Campbell’s speech on the necessity of rendering digital content more personal and relevant. He presented the myBBC single-sign-on platform and stressed the importance of building consumers’ trust towards the platform through clear and transparent privacy policies.
The second toolbox, on The Innersource Approach to Personalisation, focused on the Personalisation for EACH (PEACH) technology system. Mr Michael de Lucia, Head of Digital Innovation at RTS, reminded the audience that such a system aims to deliver personalised media recommendations to audiences that increasingly access content on demand, through a variety of devices and platforms. As Mr Anselm Eickhoff, Software Architect at Bavarian Broadcasting (BR), further explained, the PEACH system aspires to deliver ‘the right content, at the right time, to the right person, on the right device’.
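The ‘right content, right person, right device’ ambition can be illustrated with a toy scoring function; the field names, weights, and items below are assumptions made for illustration, not PEACH’s actual model:

```python
# Purely illustrative sketch of personalised recommendation scoring:
# rank candidate items by how well their topics match the user's interests
# and how well their format suits the current device context.

def score(item, user, context):
    """Score an item for a user in a given context (device capabilities)."""
    topic_match = len(set(item["topics"]) & set(user["interests"]))
    device_fit = 1.0 if item["format"] in context["device_formats"] else 0.3
    return topic_match * device_fit

user = {"interests": {"news", "science"}}
context = {"device_formats": {"short_video", "audio"}}  # e.g. a phone on the go
items = [
    {"id": "a", "topics": ["news"], "format": "short_video"},
    {"id": "b", "topics": ["drama"], "format": "long_video"},
]

best = max(items, key=lambda i: score(i, user, context))
print(best["id"])  # -> a
```

A production recommender would learn such weights from behavioural data rather than hard-code them, but the sketch shows the matching of content, person, and device that the speakers described.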
Furthermore, Mr Michael Barroco, Head of Software Engineering at EBU, illustrated the organisational structure of the project. PEACH is a cross-organisational system developed by the EBU, featuring two main stakeholders: BR and RTS. More specifically, the team is composed of developers as well as data scientists working collaboratively as a single Scrum team across organisations and borders.
The event concluded with a presentation of Project Kelvin by Mr Bram Tullemans, Project Manager at EBU. This project aims to use real-time data collected from video players to produce information that can optimise the distribution flow of content. The ultimate goal is to identify the content-delivery method that performs best.
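The idea of turning real-time player telemetry into a delivery decision could be sketched as follows; the metric names, weights, and CDN labels are hypothetical assumptions, not details of Project Kelvin itself:

```python
# Hypothetical sketch of the Project Kelvin idea: aggregate real-time player
# metrics per delivery method (here, per CDN) and pick the one that
# currently performs best.

from statistics import mean

# Simulated player reports: (cdn, startup_time_seconds, rebuffer_ratio)
reports = [
    ("cdn_a", 1.2, 0.01), ("cdn_a", 1.5, 0.02),
    ("cdn_b", 0.9, 0.00), ("cdn_b", 1.1, 0.01),
]

def best_cdn(reports):
    """Rank CDNs by a simple cost: mean startup time plus weighted rebuffering."""
    by_cdn = {}
    for cdn, startup, rebuffer in reports:
        # Rebuffering is weighted heavily, since stalls hurt viewers most.
        by_cdn.setdefault(cdn, []).append(startup + 10 * rebuffer)
    return min(by_cdn, key=lambda c: mean(by_cdn[c]))

print(best_cdn(reports))  # -> cdn_b
```

A real deployment would stream these reports continuously and re-evaluate the choice over time, but the core loop, collect player metrics, score delivery paths, route to the best one, is what the project description points at.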
This luncheon discussion, organised by the Think Tank Hub, addressed the current changes in the labour market driven by the fourth industrial revolution. The topic was presented by guest speaker Jan Smit, a Partner of the Centre for Strategy & Evaluation Services, which recently published the report ‘Industry 4.0’ for the European Parliament.
Smit first addressed the phenomenon of ‘uberisation’, both as a narrow phenomenon affecting the transportation sector and as a broader trend visible in other sectors - such as journalism, tourism, finance, and delivery services - with important consequences for society at large. This trend is set in motion by developments in technology, which can be grouped under the label ‘Industry 4.0’. While the fourth industrial revolution is often presented either as an opportunity for increased productivity or in relation to IT security, Smit focused on its consequences for work and labour. He addressed the following issues:
These issues generate policy challenges for governments, as they can ultimately affect the ‘tenets of the world order’.
The Q&A session addressed a wide range of related topics, including the possibility of increased polarisation and inequality between developed and developing countries, and the question of whether these challenges are inherently new, or whether they are old challenges in a new context. Wider topics were also addressed, such as Internet governance, data protection, and the potential effects of artificial intelligence.
The paper provides an overview of the implications of convergence in the telecommunications, information technology, media, and entertainment sectors, and looks into possible regulatory responses.
Several sessions at IGF 2016 tackled issues related to convergence and over-the-top (OTT) services. It was stressed that any regulations in these areas should consider the need to foster innovation and future market development (Are We All OTTs? Dangers of Regulating an Undefined Concept - WS191). Human rights aspects also need to be taken into account, especially when it comes to blocking access to services such as Voice over Internet Protocol - VoIP (VoIP Crackdown: Implications for Gov, Telecom & Civil Society - WS262).