What would it take to trust AI?

10 Apr 2019 15:00h - 16:30h

Event report


The panel addressed the critical question of what it would take to trust artificial intelligence (AI) systems. Ms Kerstin Vignard (Deputy Director, UN Institute for Disarmament Research [UNIDIR]) remarked that the arms control mantra ‘trust and verify’ applies to AI as well: How do we trust AI, and how do we verify this trust? Vignard noted that ‘it is not just about trusting the technology, but also about trusting the humans who have a role in shaping and deploying that technology’.

In his opening remarks, Mr Houlin Zhao (Secretary-General, International Telecommunication Union [ITU]) invited everyone to join the AI for Good Global Summit and to promote global multistakeholder dialogue towards safe, trusted, and inclusive AI. As a positive trend, he noted that this year’s summit will host 1,000 participants, many of them from developing countries.

Mr Malcolm Johnson (Deputy Secretary-General, ITU) explained that the ITU’s core functions are developing technical standards, ensuring interoperability, and providing harmonised spectrum for AI. The benefits of AI are revealed through data analysis. The ITU aims to ensure that all countries benefit equally from AI, so providing connectivity to gather data from the people who stand to benefit the most from AI is a critical issue.

From the government perspective, Ms Minette Libom Li Likeng Mendomo (Minister of Posts and Telecommunications, Cameroon) spoke about responsibility and consent. Regulation and legislation are essential tools for attributing responsibility when an AI system fails due to hacking or malfunction. ‘The government must ensure that data is collected using well-defined ethical guidelines’, she said. Empowering the scientific community is important to ensuring robustness, fairness, explainability, and compensation.

Speaking from the regulator’s perspective, Ms Nora Mulira (Commissioner, Uganda Communications Commission) remarked that developing countries, as consumers, should leverage their potential as markets to have their voices heard in discussions on AI. Collaborative legal frameworks, innovative regulation and evaluation, and youth innovation in AI development are all key aspects to be discussed with companies and countries that are early movers in AI. Active, credible dialogue builds trust and transforms consumers from mere spectators into participants. Challenges include raising awareness, creating buy-in, integrating the needs of local communities, and bridging the global digital divide.

Technology has always shaped human life, from the plough and the printing press to the Internet and today’s AI. But we are still unsure of the true potential and the wide-reaching dangers of AI. Because of this, ‘We have to be careful where we deploy it,’ emphasised Mr Bruce McConnell (Executive Vice President, EastWest Institute). McConnell described AI as an unpredictable technology: as it develops, humans are less in the loop and less in control, and the dynamics change. Where human lives or privacy are at stake, we should be careful about deploying AI until we understand it better.

Trust through transparency in AI and machine learning (ML) systems was the main point made by Mr Jeff Greene (Vice President, Global Government Affairs & Policy, Symantec). Transparency about the rules a system uses is important, but since most of the general public cannot comprehend them, it will not necessarily lead to greater trust in ML systems. Symantec runs a centre for advanced ML in order to understand what researchers mean when they use ML and to help find ways for people to understand these processes better. To build trust, transparency has to be meaningful and understandable.

Mr Ibrahim Alfuraih (Deputy Governor for Strategy and Planning, National Cybersecurity Authority [NCA], Saudi Arabia) emphasised that as cyber-attacks grow bigger and more complex, AI can play a major role in countering them. Entities worldwide need to act quickly to embed AI, train cybersecurity experts, create appropriate regulation, and address legal, policy, and ethical issues. In Saudi Arabia, the NCA is in charge of building trust in cyberspace and is developing a national cybersecurity strategy, creating laws, and rolling out a massive public outreach programme.

From a civil society perspective, Ms Marie-Laure Lemineur (Deputy Executive Director, ECPAT International) noted that it is hard for humans, as moral beings, to trust amoral systems such as AI. To build trust, we need to develop autonomous systems that are hybrid (not solely automated), user-centric, and not driven only by profit, and to measure the success of AI in terms of human well-being. ‘We have to accept that AI is not a techno-scientific problem’, Lemineur stated.


by Jana Mišić