Artificial Intelligence, Ethics, and the Future of Work

Author
Su Sonia Herring

Moderator Ms Rinalia Abdul Rahim (Managing Director, Compass Rose Sdn Bhd) introduced the flow of the workshop, which would be based on interaction with participants following context-setting by key speakers.

Mr Vint Cerf (Internet Pioneer) spoke of the ethics of artificial intelligence (AI), which seems to rest mainly with developers: their moral responsibility, attention to bias, and constant questioning of all possible outcomes. Mr Olivier Bringer (Deputy and Acting Head of Unit, Next Generation Internet Unit, Directorate-General for Communications Networks, Content and Technology, European Commission) followed by setting the European context on AI and related technologies, noting heavy investment in AI and its effects on the job market. He mentioned existing frameworks on privacy and cybersecurity, adding that more needs to be done policy-wise. Bringer stressed the importance of transparency, and of having at least a basic understanding of incoming technologies, for healthy development. The AI Initiative’s report was considered a crucial step towards further strategising legal frameworks and deciding whether an ethical component should be introduced.

Mr Claudio Lucena (Professor and former Dean, Law Faculty at Paraiba State University, and Research Fellow, Research Center for Future of Law) emphasised that AI is not a monolithic, blanket technology, and that specific approaches must be adopted. Lucena pointed to a possible re-coding of the rule of law across varying sectors and jurisdictions, citing national strategies being implemented around the globe, including robotics commissions.

From the audience, Mr Patrick Penninckx (Head of Department – Information Society, Council of Europe (CoE)) took the floor, underlining that AI is already having direct effects on daily lives in various countries, so policies need to catch up as soon as possible. He further elaborated that if AI is going to be used in law, a common framework is a must. Efforts by governments and intergovernmental organisations are not enough without sufficient co-operation with the technical community and the private sector.

Views were voiced that it is not feasible to expect adherence to ethical codes from developers, a position Lucena opposed. The burden of development needs to be shifted; the excuse ‘I’m just an engineer’ does not work when it comes to AI-related technologies.

Mr Christian Djeffal (Project Lead for IoT and eGovernment, Alexander von Humboldt Institute for Internet and Society) compared the race of AI development to an arms race. This lends greater weight to international agreements, such as the Declaration to Cooperate on Artificial Intelligence signed by 25 EU countries and Norway. Djeffal stressed how vital interdisciplinary teams are in relieving the burden of development that falls on engineers, and mentioned the need to formally educate engineers on ethics.

One participant raised the social contact aspect of a future where AI is heavily used, pointing to research showing human contact’s profound impact on people’s psychological wellbeing. Another mentioned the safety aspect of autonomous machines and vehicles, using the example of the Uber self-driving car which killed a person. We need to consider whether all automation is actually safer for humans, while reconsidering our assumptions.

Co-moderator Ms Maarit Palovirta (Senior Manager, Regional Affairs Europe, Internet Society) introduced the second part of the panel, which focused on stakeholder perspectives. Ms Mariam Sharangia (Specialist of Strategic Development, Georgia’s Innovation and Technology Agency) spoke of the government’s perspective, which focuses on privacy; the goal is to create very specific frameworks in that area. When it comes to AI, the Georgian government is implementing policies that encourage start-ups and other businesses focused on AI. However, these efforts need to be supported by modernising education to prepare the workforce.

Ms Clara Sommier (Public Policy & Government Relations Analyst, Google) spoke of the business perspective, noting that Google is training its developers on fairness. She mentioned that millions of people hate their jobs, and AI may help by doing those jobs for them. Sommier pointed to digital skills training aimed at including everyone and leaving no one behind, and to a project that helps a million Europeans find a job. She also underlined the big role to be played by humans, as jobs we cannot currently anticipate are being created in place of those certain to disappear.

Ms Leena Romppainen (Chair, Electronic Frontier Finland) took the floor to present the civil society perspective, using positive and negative examples from science fiction as reflections on possible future scenarios. She also said that work in the future may become more difficult, with more complex jobs left for humans as simpler tasks are taken over by AI. Unintended and unexpected consequences of the development of AI and robotics are unavoidable. Djeffal took the floor once again, mentioning the already visible effects of algorithms on the workforce, such as downscaling, which may mean that even if people do not lose their jobs, they will be paid less as some of their work becomes automated. He also commended the work of the European Commission as a great example of detailed, concrete explanation related to AI.

Ms Annette Muehlberg (Head, ver.di Digitalisation Project Group) remarked that how we live and work should be our decision, not a matter of scoring by machine learning algorithms. She made clear points on the need for AI to be accountable, transparent, and modifiable; in short, not a black box. Muehlberg also recognised the good that can come through AI to improve the lives of workers and citizens. She pointed out that we are already facing issues with the use of algorithms where human input is not considered: there is no chance to integrate human knowledge while working with algorithms, as the algorithm is always assumed to be right.

Final remarks included a participant posing the question: ‘If a machine learns something about a person that they themselves do not know, who does that data belong to?’ Bringer stressed that the budgetary allocation to digital skills in the next EU budget is very large. One participant raised the obligation to share trade secrets that will arise in the future of AI, while another noted that as many algorithms are treated as trade secrets, transparent practices would prove very difficult. Muehlberg tackled this topic by distinguishing between algorithms that govern public goods and services and those that govern private ones.
