How to keep AI from slipping beyond our control

15 May 2018

Event report

The conference was hosted at the Graduate Institute and moderated by Prof. Richard E. Baldwin (International Economics, the Graduate Institute). In his opening remarks, Baldwin spoke about how technology is perceived by some as slipping off its ‘ethical leash’.

Mr Wendell Wallach (Senior Advisor, the Hastings Center, Yale University) acknowledged that most people, despite the ongoing debate between technology optimists and pessimists, are not so categorical in their assessment of artificial intelligence (AI) and new technologies. Unfortunately, these debates are not carried out publicly. So far, some policies – such as the worldwide prohibition on human cloning and regulations on bioethics – have been put in place, but without broader public discussion or a platform for gathering people’s opinions on the matter. According to Wallach, self-driving cars, given recent trends and their visibility, could spearhead the societal discussion about technological evolution.

When it comes to AI, many questions remain unanswered. Do we know enough about human intelligence, genomes, and climate influences to try to replicate them through AI technology? In that regard, technological evolution is not moving as fast as one might think: the further our knowledge expands, the more we encounter new limits. For this reason, the full integration of AI into our everyday lives might not come as quickly as technology optimists foresee. After briefly retracing AI’s evolution since its inception in 1956, Wallach explained that deep learning algorithms were a great stepping stone for AI, although machine-learning systems still lack the human capacity to understand what they have learned.

Furthermore, ethics will have to evolve as AI gradually makes its way into our societies. Wallach expanded on the concept of ‘technological unemployment’, a term coined by John Maynard Keynes to describe the disappearance and disruption of jobs due to technological developments. The speaker agreed that some tasks may well be automated in the future and that certain jobs will become obsolete, but noted that most jobs will still require human oversight. The challenge posed by AI-driven automation therefore lies not in the complete loss of employment sectors, but in the rapid automation of some work processes, which will require less human involvement in the future.

Wallach identified the use of lethal autonomous weapons systems (LAWS) as another challenge. The use of these weapons raises the larger question of whether we are weakening the principle that human lives stand above machines; allowing LAWS to target humans without human oversight puts this premise in doubt. LAWS also raise the question of what can be considered truly autonomous: do we have the capacity to understand how machines reach a certain conclusion, and are we willing to trust machines if we cannot fully grasp the extent of machine-made decisions?

According to Wallach, these issues are compounded by a tendency to disregard low-probability events entirely. Such events may not seem important, yet they can produce ‘black swan events’: occurrences of low probability but very high impact. Accidents we judged unlikely may thus still happen. The question, then, is whether or not to proceed with certain types of development while being well aware of the risks involved.

Given that AI technology can be used for good or ill, Wallach believes that scientists and developers need to be held accountable for their research. They need to take responsibility for the tools they develop and, perhaps, raise awareness about the risks of using them.

Wallach concluded his presentation by discussing potential mechanisms for preventing AI from ‘slipping out of control’. Governance coordinating committees should be established as good-faith brokers that keep track of technological developments. Further layers of protection could be added by finding feasible and cost-effective solutions to limit AI’s influence. Researchers should look for ways to build ethical considerations into their inventions from the outset, and corporate oversight boards should monitor these advancements and identify the critical issues they raise. According to Wallach, soft governance is essential in this field because it can adapt to the rapid evolution of technology. Hard governance through laws and treaties should therefore mostly provide a broader framework that sets boundaries for the use of AI and other technologies.