How do you embed trust and confidence in AI?

12 Nov 2020 13:20h - 14:20h

Event report

Trust in the Internet is increasingly defined by trust in artificial intelligence (AI) systems. Ms Catherine Kummer (Global Deputy Vice Chair, Public Policy, EY) presented a global study by Ernst & Young and The Future Society on bridging the AI trust gap. The biggest ethical priority gap lies between the concerns of the private and public sectors in regard to fairness and avoiding bias. The second gap concerns innovation, as companies may not be fully assessing ethical risks during research and development, or at least not in a way that meets the expectations of policymakers and the public. Poor alignment diminishes public trust, while strong governance and appropriate, consistent regulation can strengthen it.

Good governance also has to be ensured internally within organisations, added Mr Ansgar Koene (Global AI Ethics and Regulatory Leader, EY). Best practices have to be shared to prevent harm, stay in line with regulations, and help further develop trusted AI frameworks. Another component is understanding how people experience their engagement with AI technologies, what the issues are, and what they see as problematic. Ms Clara Neppel (Senior Director, European Business Operations, IEEE) spoke about the global initiative on autonomous systems, launched in 2015, which produced the report Ethically Aligned Design and invites engineers, together with different stakeholders, to identify issues and come up with solutions. The outcomes range from recommendations for the technical community, policymakers, and international organisations, to standards.

Ensuring trust is also a matter of justice. Mr Parminder Jeet Singh (Executive Director, IT for Change) said that regulation is important, but it has to address questions of power, economics, social culture, and equality, and adapt them to the digital autonomous age. According to Singh, if rules are embedded into the technical architecture, actors will comply, and through that compliance, societies can make social governance democratic.

Ms Yohko Hatada (Founder and Director, EMLS RI (Evolution of Mind Life Society Research Institute)) stated that trust and justice in autonomous systems depend on confidence and self-understanding. Hatada noted that for us to understand the technologies, we have to understand ourselves and what kind of future we want. Democracies are not stable everywhere, and AI can help or hinder them. We should aim to build a long-lasting global civilisation and support it through an international governing system.

Global differences in context were also discussed. Mr Abdul-Hakeem Ajijola (Chair, African Union Cybersecurity Expert Group (AUCSEG)) stated that AI developed in one part of the world might have different repercussions in another. AI, especially in the developed world, is being used to develop killing machines and new machines of war. Therefore, ‘we must ask ourselves who bears responsibility for the mistakes of AI developed in one part of the world, but applied in another place and context,’ Ajijola stressed.

Bearing context in mind, participants noted several key attributes of trusted AI systems. Singh highlighted two principles: first, political regulation, and the question of who actually has control across all levels; second, architectural regulation at the system-design level. Ajijola added that we need a high level of transparency, since the value chain of AI is too opaque, as well as legislation that is more flexible and responsive to the development of technologies. Koene stressed that accountability is frequently mentioned. ‘In order to be able to establish trust in the system, it is important that we have clear lines of accountability, that may include the requirements around the use of standards, contractual requirements around third-party systems,’ he said. Neppel linked liability to responsibility. Traditionally, accountability lies with the entity that puts a product or service on the market, but this is more difficult with self-learning systems because they might change their output over time or in specific contexts. She added that we need an ecosystem of trust. Engineers have an individual responsibility to be ethical, but they also work within a hierarchy of power in an organisation. ‘It’s important to have the structures and hierarchy so that they know with whom to discuss if they encounter moral dilemmas,’ said Neppel.

The ecosystem of trust has to include the wider population, and be built with conscious engagement and regard for ethical decisions. According to Koene and Ajijola, biases and cultural differences need to be better discussed in regard to development and deployment. Intelligent systems should be designed in a way that gives individuals a say over the things that affect them, whereas AI has a tendency to hyper-concentrate control. Perhaps we need a new social contract, and institutions that would give us predictable, rule-based behaviours with a focus on trust.