Transformations on the Horizon
AI for Good Global Summit 2018
15 May 2018 10:00h - 17 May 2018 18:30h
15 May 2018 02:00h
The session was opened by Ms Anja Kaspersen (Director, United Nations Office for Disarmament Affairs), who introduced the speakers.
Mr Wolfram Burgard (Professor of Computer Science, Albert-Ludwigs-Universität Freiburg) started his intervention by noting that there is a need to transform the way we think about artificial intelligence (AI) and take a more positive attitude. AI is already a part of our lives and we see it in multiple applications, from web services and games, to manufacturing and agriculture. As the technology continues to progress, it is expected to play an increasingly important role in several areas. For example, highly accurate navigation systems empower industrial robots to move with more agility from one place to another, and, thus, enhance productivity. The same systems are crucial for companies working in the field of self-driving cars. In healthcare, big data, algorithms, and neural networks are used in multiple applications, from diagnoses for certain diseases to neuro-robots which help people with disabilities perform daily tasks. In agriculture, AI brings precision farming, supporting a more efficient and sustainable use of resources. These are only a few examples which show that AI is an important tool for the well-being of society.
Responding to a question from the audience about the risks associated with AI, Burgard acknowledged that one of the main challenges with AI agents is that they need to operate in a world that they do not fully know. Taking the example of self-driving cars, the technology needs to be able to take into consideration the environment in which it operates, and there is still much work to be done by researchers to empower algorithms in this regard.
Ms Terah Lyons (Executive Director, Partnership on AI) spoke about the work that the Partnership on AI plans to do to support the development of AI technology that would benefit everyone. The partnership, which has over 50 members from both private companies and non-profit entities, is intended to serve as an open multistakeholder platform dedicated to fostering discussions and a public understanding of the implications AI has for people and society, and to facilitating the development of best practices on AI technologies. Its members share the belief that AI holds the promise to raise the quality of people's lives, and to help humanity address some of its most pressing problems, such as poverty and climate change. The partnership will focus on six major areas of work: safety-critical AI; fair, transparent, and accountable AI; collaborations between people and AI systems; AI, labour, and the economy; social and societal influences of AI; and AI and social good.
Lyons underlined the need for an active understanding of the challenges associated with the development and use of AI. These challenges can only be addressed in a multistakeholder and multidisciplinary manner, and this is also the case when it comes to developing policies and regulations in the field of AI. Moreover, it is important to start addressing these concerns now, if we are to be able to develop AI for the benefit of social good.
Ms Celine Herweijer (Partner, Innovation and Sustainability, PricewaterhouseCoopers UK) started by stating that the Earth has never been under so much strain, with many species at risk of extinction, the chemistry of oceans changing at a rapid pace, air and water quality dropping, and climate change worsening. This is the backdrop against which the fourth industrial revolution is happening, and technologies such as AI can be put to use to address some of the Earth's major challenges. For example, smart transportation systems are crucial for managing climate change, while precision agriculture allows for a more efficient use of natural resources.
It is in this context that the Fourth Industrial Revolution for the Earth initiative was started. It functions as a multistakeholder platform dedicated to developing a research base for applications for the Earth, supporting breakthroughs in this area, and building an accelerator platform to support projects and ventures to address the use of technology for the benefit of the Earth.
Herweijer noted that sustainability and responsibility principles need to be embedded into AI systems. It is also important to consider the risks of AI leading to bias and deepened inequalities in the early stages of developing AI applications. In addition, once developed and put to use, these applications should be monitored constantly so as to identify possible negative implications that may not have been considered during the development stage.
Mr Wendell Wallach (Consultant, Ethicist, and Scholar at Yale University's Interdisciplinary Center for Bioethics) spoke about the importance of looking not only at the benefits of AI, but also at the potential risks and undesirable consequences. He called for a distinction to be made between outwardly-turning and inwardly-turning AI for good. Outwardly-turning AI for good is about the potential of AI to help achieve the sustainable development goals (SDGs). But we should also consider the impact of AI on areas such as decent work and global inequality, which are covered by the SDGs as well. While AI can help achieve the SDGs, it can also undermine our ability to achieve some of them. Inwardly-turning AI for good is about mitigating the harms that come with the progress of AI, and making sure that we do not go down a path we actually do not want. It is therefore important to look at both sides of AI for good, and devise technological and governance solutions to provide appropriate oversight of the technologies we develop.
In response to a question from the audience about whether we should focus more on issues such as rights and responsibilities for AI systems, Wallach pointed out that while such issues could be considered by researchers, we should focus more on the real challenges we have today. We should put more emphasis on the AI implications that are truly feasible and require immediate attention, and maybe less on those related to technologies we do not yet have.
During the discussions, a point was made that there is a mismatch between the adoption rate of AI technology and the ability to understand it. To address this, emphasis should be placed on issues such as audits for AI systems, the ethics of AI, and AI explainability. At the moment, many of the processes behind AI applications function as 'black boxes', and it is not clear how they make certain decisions or reach certain conclusions. While work is being done to make algorithms more explainable, we may need to accept that humans might not be able to understand some systems. In such cases, it is important to carefully assess the risks of such systems during the development phase, test them in simulation environments, and continue to monitor them while in use, so as to be able to correct possible negative implications.
The session ended with a discussion on education systems and the need to adapt them to an increasingly AI-driven society. Investments are needed to enhance education systems and make sure that they prepare the needed number of AI engineers and data scientists. At the same time, the nature of education needs to change, so that AI is taught from a multidisciplinary perspective, combining, for example, technology with ethics. Re-training the current workforce is also an important element to be considered, especially given the fact that AI progress makes some jobs obsolete.
Kaspersen concluded the session by stating that the biggest transformation brought about by AI is about us, humans, and about how we adapt, evolve, govern, and educate ourselves and the world we live in.