AI ethics: Privacy, transparency and knowledge construction

13 Nov 2018 16:45h - 18:15h

Event report

[Read more session reports and live updates from the 13th Internet Governance Forum]

Issues around Artificial Intelligence (AI) are highly complex. Ethical matters raised by algorithms and AI are global, yet take specific shapes in different cultural contexts. While governments are increasingly trying to regulate the AI field, their limited understanding of the technological features of autonomous systems calls for more support from the multistakeholder community. We should aim to use AI for socially beneficial projects, and standards setting can further this goal.

The session moderator, Mr Kuo-Wei Wu (Asia Pacific Network Information Centre (APNIC)), invited the speakers to address the issues around the ethics of AI systems.

Ms Yik Chan Chin (Xi’an Jiaotong-Liverpool University) and Ms Chen Chengfeng (Tsinghua University) presented the general issues of AI ethics. Chin pointed out that AI research and development in China focuses on manufacturing and intellectual property rights, whereas in Europe the focus is more on ethics. Since 2017, China has had a new-generation AI policy in which ethical principles were proposed by industry and civil society rather than by the government. Moreover, the strong cultural context of ‘ethics’ in China differs from that in the West. AI ethics research there started late and is not systematic, and people lack a deep understanding of privacy issues. As a result, AI in China is strong in application but weak in ethical codes and regulation. Chengfeng elaborated on two models of ethics behind fake news detection: one based on a content-based algorithm and the other on a social-context algorithm.
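The session did not go into implementation details, but a minimal, hypothetical sketch can illustrate the distinction Chengfeng drew: a content-based detector scores only the text of a post, while a social-context detector scores propagation signals such as spread speed and the credibility of the accounts sharing it. All names, features, and weights below are illustrative assumptions, not the models presented.

```python
# Hypothetical sketch contrasting content-based and social-context
# fake-news detection; every feature and weight here is illustrative.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares_per_hour: float      # how fast the post propagates
    sharer_credibility: float   # mean credibility of sharing accounts, 0..1

SENSATIONAL_WORDS = {"shocking", "miracle", "secret", "exposed"}

def content_score(post: Post) -> float:
    """Content-based detector: judges only the text of the post itself."""
    words = [w.strip(".,!?") for w in post.text.lower().split()]
    hits = sum(1 for w in words if w in SENSATIONAL_WORDS)
    return min(1.0, 10 * hits / max(len(words), 1))  # crude lexical signal

def social_context_score(post: Post) -> float:
    """Social-context detector: judges how the post spreads, not what it says."""
    speed = min(1.0, post.shares_per_hour / 1000)    # fast spread is suspicious
    return 0.6 * speed + 0.4 * (1 - post.sharer_credibility)

if __name__ == "__main__":
    p = Post("Shocking secret cure exposed!",
             shares_per_hour=800, sharer_credibility=0.2)
    print(f"content-based score:  {content_score(p):.2f}")
    print(f"social-context score: {social_context_score(p):.2f}")
```

The ethical trade-off the two models embody differs as well: a content-based system makes judgements about speech itself, while a social-context system makes judgements about users and their networks.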

Increasingly, states are attempting to address these issues and regulate the AI space, but they are often unsure of exactly what to do. Mr Ansgar Koene (University of Nottingham) reported on the work of the Institute of Electrical and Electronics Engineers (IEEE) as a standards-setting organisation that advises governments. He noted activities at the European Union level, including the already established High-Level Expert Group on Artificial Intelligence, which aims to develop clear ethical guidelines in the field. The IEEE can give clear advice on the actual features of a technology and how to assess them. Koene also pointed to the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which names 13 standards covering a variety of issues, including the transparency of autonomous systems, privacy, and algorithmic bias.

From the perspective of a social network, Mr Wang Shu (Weibo, China) spoke about tackling what he termed ‘Internet rumours’. China currently has more than 1 billion active social media users, and the proliferation of fake news is one of the biggest issues. To curb fake news on Weibo, the company launched a special webpage where users can report rumours, as the company labels such content. ‘Rumours are not allowed’, Shu said. He added that Weibo has deployed a user credit system that restricts the speech of users whose rumour index is high.
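Weibo’s actual mechanism was not described in technical terms; the sketch below is a hypothetical illustration of how a rumour index might gate posting. The fields, the index definition, and the threshold are all assumptions, not Weibo’s real rules.

```python
# Hypothetical illustration of a user credit system with a rumour index;
# Weibo's real fields, thresholds, and penalties are not public.
from dataclasses import dataclass

@dataclass
class UserCredit:
    posts: int = 0
    confirmed_rumours: int = 0  # posts reported and confirmed as rumours

    @property
    def rumour_index(self) -> float:
        """Share of a user's posts confirmed as rumours (assumed definition)."""
        return self.confirmed_rumours / self.posts if self.posts else 0.0

RUMOUR_INDEX_LIMIT = 0.1  # assumed threshold above which posting is restricted

def may_post(user: UserCredit) -> bool:
    """Restrict posting once the rumour index crosses the (assumed) limit."""
    return user.rumour_index < RUMOUR_INDEX_LIMIT

if __name__ == "__main__":
    u = UserCredit(posts=50, confirmed_rumours=8)
    print(f"rumour index: {u.rumour_index:.2f}, may post: {may_post(u)}")
```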

Mr Félicien Vallet (privacy technologist at the Commission Nationale de l’Informatique et des Libertés (CNIL)) explained that since 2016, CNIL has had a mission to reflect on the ethical issues of new technologies. After a nationwide research project, CNIL published a report in 2017 on the ethical matters raised by algorithms and AI. The report identified four main concerns: first, the loss of responsibility and control that comes from relying on AI; second, biases, discrimination, and exclusion; third, profiling and personalised services and their consequences for community and connectivity; and fourth, privacy in the face of AI and vast amounts of data. Ethical considerations, the report advocates, should be based on a humanist view, promoting the principles of fairness and of continued attention and vigilance.

Mr Jake Lucchi (Head of AI Policy, Google Asia-Pacific) stressed that Google’s AI applications have improved since the company started using neural machine learning. According to Lucchi, AI can be used for socially beneficial purposes. Google has also tackled problems such as machine learning fairness, and is focused on improving its models and data. Tackling AI issues requires a diverse team with a common ethical code and normative framework. In 2018, the company launched its ‘AI at Google’ guiding principles, which outline its aims.

Mr Yuqiang Chen (4th Paradigm) talked about how AI could serve humans better. In his view, we should first understand how AI works: how datasets are used, how objects are translated, and how features are added. Algorithms, he argued, should not be the focus when explaining AI to the general public. Chen said that AI can serve humans better when privacy is protected in the use of historical data, when individuals are not overlooked in favour of the majority, and when we work to prevent AI from doing evil.


By Jana Mišić