Meaningful human control of AI decisions

Resource type: Reports
Session ID: 3

[Read more session reports and live updates from EuroDIG 2019]

The session focused on the economic and social potential of artificial intelligence (AI) applications. It analysed questions of ethics, trust, and meaningful human control, and proposed alternative approaches to addressing these challenges.

Ms Mieke van Heesewijk (Programme Manager, SIDN Fund) moderated the session and introduced AI and its economic and social potential. As the Internet has a massive impact on people's lives, AI, and the way it is developed and used, also have an important influence on people's lives. As more and more decisions will be made by algorithms in the future, many questions arise about possible ways of developing more responsible AI for society. The panel focused on the following: How do we, as humans, stay in control? How can we ensure that AI is open to human intervention? Which decisions should be made by AI, and which should not?

Addressing the topic from a social perspective, Mr Tin Geber (Social Innovation Specialist, Hivos) talked about AI through the lens of rights, responsibilities, and justice. He structured his speech around three leading questions: What does AI do? What does AI need? Is it enough?

Three concepts, each with positive and negative aspects, can be used to briefly define what AI does: prediction, pattern recognition, and real-time monitoring. AI enables prediction through the automated analysis of data, which helps understand and forecast future scenarios; this can, for instance, have a positive impact on the prevention of emergencies. Nevertheless, AI can also exacerbate inequalities, as it can predict only what it knows from the data provided, which might be affected by existing biases. Through pattern recognition, AI makes it possible to find and recognise meaning and patterns that would otherwise be humanly impossible to identify; it facilitates the understanding of complexity. However, potential harm can be caused by the loss of safety in obscurity. Finally, real-time monitoring allows targeted support as well as supercharged surveillance. While real-time monitoring has been an important resource in refugee crisis management, there are many databases of biometric information of refugees who did not give consent, or had no choice but to consent, to sharing their data.

With regard to what AI needs, three aspects can be underlined. First, AI needs good models to learn from and to make predictions, in order to tackle model mismatch; this, however, requires more investment. Second, high data quality is required; this need could be addressed through data literacy, leading to better data gathering by default. Third, AI requires granularity: the more detailed the data, the better the prediction will be. Nevertheless, this raises important privacy issues.
A mitigation approach is responsible data, which prioritises people's rights to consent, privacy, security, and ownership when data is used. Finally, 'good' AI is not enough: it needs to be complemented with elements such as understandable AI and meaningful oversight.

Ms Valerie Frissen (Professor of Digital Technologies and Social Change at Leiden University, CEO at SIDN Fund) introduced a different approach to the topic, which can be conceptualised as 'living with AI'. While the current public debate is dominated by a dystopian perspective on AI, there are many positive examples of its use; one of them is a Dutch app that can detect symptoms of skin cancer. A question therefore emerges: Is the current framing effective and appropriate, or should a different approach be put in place?

The development of AI raises fundamental questions about the nature of this technology's intelligence, about whether the technology can be trusted, and about the quality and value of data in terms of sovereignty and ownership. Questions of ethics also emerge. Nonetheless, we are asking the wrong kind of questions when thinking about ethics in AI. Ethics should not be about defining boundaries, but about shaping our co-existence with technology. Machines and humans share a responsibility to live in this ecosystem of machines and technology. Therefore, 'doing ethics' could be contextualised as a three-way approach: (a) tech for good, which maximises societal benefits; (b) ethics in and for design, achieved through codes of conduct as well as value-based design principles and methods; and (c) ethics by design, characterised by explainability, interoperability, privacy, and transparency, to name a few.

Ms Linnet Taylor (Associate Professor at the Tilburg Institute for Law, Technology and Society [TILT]) suggested addressing the issue as a political and democratic problem rather than a merely technical one. She gave the philosophical example of the trolley problem, in which a runaway trolley is moving towards incapacitated people lying on the tracks, and a person standing next to a lever that controls a switch has to make an ethical decision about directing or redirecting the trolley. While the current debate on AI and discussions about ethics tend to accept the status quo, other options could be contemplated: What if we uninvent the trolley, or the technology, and invent something else instead? This would mean rethinking the status quo. The question could indeed be not how to insert AI into a system, but whether to insert AI in the first place.

People are thinking about ethics as the basis for law, but the danger is thinking about ethics as an escape from law; the people who control people and the people who control technology need to be in the picture. At the moment, the debate is deeply mechanical and fails to reach the political level; indeed, talking about ethics is not necessarily linked to politics. Not enough research is currently available on democratic decision-making around AI to inform a debate on red lines for AI. Solutions could take the form of further engagement with academics and investment in research on the topic; addressing the issue through the lens of social justice; keeping in mind that the status quo does not have to be driven by commercial interests; and a regulatory approach that does not prioritise privacy over human experimentation rules.
 

By Stefania Pia Grottola
