Can we put artificial intelligence at the service of mankind?
In this lecture, which closed the 2017 Latsis Prize ceremony, Mr Jacques Attali, president of the Positive Planet Foundation, discussed whether and how humanity can put artificial intelligence (AI) at its service. Mr Denis Duboule, president of the Latsis Foundation, welcomed Attali onto the stage, noting that the speaker is renowned worldwide. Attali’s presentation consisted of two parts. First, he examined some of the sensitive themes related to AI. Then, he explained why he believes that we have the means to master it.
Attali believes that because AI is ‘a machine capable of learning’, it will one day be able to gain consciousness. Although this prospect may raise great concerns, humanity has already begun to face two other AI issues. The job market will see great change, as machines will eliminate certain jobs; and although innovation will create new tasks and jobs, these will be too few to compensate. As for the military applications of AI, replacing men with robots seems beneficial to humanity, yet this does not come without its own risks. Robots could take decision making in times of warfare into their own hands. Furthermore, if they become self-aware, they may decide to turn against humans, killing to avoid being ‘killed’.
In light of these dangers, should AI be banned before it is too late? Although it needs careful monitoring, AI could fulfil one of humanity’s oldest dreams: immortality. Attali’s foundation focuses on the protection of common goods and the well-being of future generations, which can only be achieved if our species survives. By transferring our consciousness to machines, we could maximise our chances of achieving immortality. The answer to ‘can AI become a problem?’ is therefore the same as the answer to ‘can AI be useful to us?’: yes.
Attali believes that AI can be applied to many areas. Be it in medicine, security, or policymaking, the foremost condition is that it is done in the service of humanity. To achieve this, we must observe some axioms. He suggested that we build upon the three rules postulated by Isaac Asimov, as they are a good basis but not precise enough. To Attali, it is imperative that we retain our ability to shut down AI, which represents our control over it. Furthermore, we must ensure it does not acquire a survival instinct, or our lives will be at risk. Lastly, he proposed that the international community formulate a charter specifying the rights and responsibilities pertaining to AI.
The issue of rights dovetails with Attali’s next point: perhaps we should not ask whether we can put AI at our service, but whether we deserve it. A look at our history gives serious grounds for reservation. Nevertheless, not only can we teach AI morals; it can also serve to foster our own altruistic behaviour. What is more, AI could be used to deter our worst impulses, the only exception being euthanasia, a call that should always be made by a human being. This offers a segue into the question of acceptance: should we accept that machines will be present in every aspect of our lives? Although we still have some time to decide, we do not have long.
Lastly, he highlighted how underdeveloped humanity’s natural intelligence is. Collectively, our computing power should be greater than that of any machine. Thus, we should not forgo the task of developing our own intelligence, ensuring equal access to knowledge and to the activities that can expand it in all its forms. After all, intelligence comes in many forms: creative, adventurous, transgressive, altruistic, and perhaps the best among them, that of love; this is the one we must put at the service of humanity.
Attali’s answers to the ensuing questions were as follows. On the impact of AI on governance structures, he remarked that, unlike the market, political structures are already quite artificial, so AI can improve them. On whether the knowledge gap in technology could increase overall inequality, he replied that the real risk is the suppression of the means and will to learn, emphasising that ‘I [Attali] do not believe that mass unemployment is inevitable; what is important is to offer mass continued education’. Finally, asked for his opinion on taxing robots, he affirmed that he would prefer to tax those who profit from them.