In view of the pervasiveness of algorithmic techniques and automated data processing in all aspects of contemporary life, the Committee of Ministers of the Council of Europe (CoE) has drafted a recommendation to member States on evaluating the impact of the application of algorithmic systems, in both the public and private spheres, on the exercise of human rights and fundamental freedoms. The document outlines that the misuse of algorithmic systems can jeopardise the rights to privacy, freedom of expression, and the prohibition of discrimination provided by the European Convention for the Protection of Human Rights and Fundamental Freedoms. Although public and private sector initiatives to develop ethical guidelines for the design, development, and deployment of algorithmic systems are welcome, they do not substitute for the duty of member States to guarantee that human rights obligations are embedded in all steps of their algorithmic operations. In addition, member States should ensure appropriate regulatory frameworks to promote human rights-respecting technological innovation by all actors. The guidelines for States on actions to address the use of algorithmic systems include data quality and modelling standards; principles of transparency and contestability; the provision of effective judicial and non-judicial remedies to review algorithmic decisions; the implementation of precautionary measures to maintain control over the use of algorithmic systems; and empowerment through research and public awareness. The document also outlines responsibilities for private actors that member States should ensure are upheld, including guidelines on data quality and modelling.

The last ITU Plenipotentiary Conference, held in Dubai, adopted Resolution WGPL/3. The Resolution recognises that emerging over-the-top (OTT) telecommunications technologies pose both opportunities and regulatory challenges for national telecommunication regulations. Specifically, it resolves (1) 'to raise awareness and promote a common understanding and dialogue among stakeholders for enabling OTT environment and ecosystem within the remit of ITU'; (2) 'to continue fostering studies on OTT aspects'; and (3) 'to foster capacity-building programmes among ITU members in order to share information related to best practices and technical guidance on OTTs, especially for developing countries'. It also instructed the Secretary-General to continue collaboration with other relevant organisations to further the objectives of the Resolution, and to submit an annual report to the Council on the activities undertaken under the Resolution.

The Committee of Ministers drafted a Declaration to draw the attention of member States to the right of all human beings to take decisions and form opinions independently of automated systems. The document underlines the risks of using massive amounts of personal and non-personal data to sort and micro-target people, to identify vulnerabilities, and to reshape social environments in pursuit of specific goals and vested interests. The draft encourages member States (1) to consider additional protective frameworks to address the impact of the targeted use of data on the exercise of human rights; (2) to initiate inclusive public debates on permissible forms of persuasion and unacceptable forms of manipulation; and (3) to empower users by promoting digital literacy regarding how much data is generated and used for commercial purposes.

A court in London, the UK, has granted Uber a 15-month 'probationary' licence to operate in the British capital. The licence, which is shorter than the five-year licence that the company applied for last year and was denied by the Transport for London (TfL) authority, is subject to several strict conditions. The company is required to implement a new governance structure, notify TfL of operations in areas that may be of concern, report safety-related complaints, and undergo an independent assurance audit every six months. Uber must also demonstrate that it has changed its policies and ways of operating in order to maintain its right to operate in London.

At an event in San Francisco, USA, IBM presented an artificial intelligence (AI) system that can engage in reasoned arguments with humans on complex topics. Called Project Debater, the system was designed to debate around 100 topics in a pre-determined debate format: a four-minute introductory speech, a four-minute rebuttal to the opponent's arguments, and a two-minute closing statement. Project Debater, trained in advance on debating methods but not on the details of the debates, relied on a collection of 300 million previously indexed news articles and academic papers to construct its case. While several observers noted that some of the points made by the system were either quoting sources or merely reusing parts of articles, the system also tried to 'directly argue with points that its human opponents made, in nearly real time'. IBM explains on its website that Project Debater 'digests massive texts, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent'. The company estimates that, at the moment, the system can conduct a meaningful debate on the 100 topics it was designed for about 40% of the time. The overall objective of the project is to 'help people reason by providing compelling, evidence-based arguments and limiting the influence of emotion, bias, or ambiguity'.

On 18 June 2018, the European Commission hosted a high-level meeting with representatives of European philosophical and non-confessional organisations, on the topic 'Artificial intelligence (AI): addressing ethical and social challenges'. Chaired by the Commission's Vice-President Andrus Ansip, the meeting addressed the potential impact of AI on fundamental rights such as privacy, dignity, consumer protection, and non-discrimination. It also explored the impact of AI on social inclusion and the future of work. Ansip reminded participants that the European Commission is working on 'ethical guidelines for the development of AI for good and for all' and that the elaboration of these guidelines 'requires an open discussion on key issues such as the importance of diversity and gender balance in AI to avoid biased decisions'. The guidelines are expected to be finalised by the end of 2018.


