Global AI governance for sustainable development

1 Dec 2022 12:05h - 13:35h


Event report

Some consider artificial intelligence (AI) powerful and thus necessary for solving global problems that humans have so far been unable to tackle, as well as for driving development for social good. Others see AI as dangerous: black-box systems, replication of social bias, and the like. While much hope is invested in AI with regard to achieving the sustainable development goals, little progress has been made in formulating a global governance framework for the technology and its applications. The session weighed AI's promises and harms, and discussed the way forward for its development and governance and the roles of different stakeholders.

What are the impacts of AI? In a survey conducted during the session, most of the audience perceived AI as promising global productivity and economic growth, while some also associated it with the risk of unemployment. The (potential) impact of AI on our quality of life was seen as a matter of both hope and concern. Much like the industrial revolutions of the past, the emergence of AI brings about inevitable shifts that cannot simply be labelled as good or bad. Take the job market as an example: AI will take over many repetitive tasks but will also generate new jobs.

However, as one speaker pointed out, it is implausible to claim that enhanced global productivity and economic growth are by default equivalent to sustainable development. The first problem is that individuals and businesses in developed countries tend to gain the most from AI applications, even when these are built with cheap labour and data from people in developing countries. For instance, while self-driving cars are used in Germany, the AI behind them is trained in Kenya. The second problem is that data from developing countries tend to be missing from the datasets used to train AI applications portrayed as providing solutions to global problems.

The speakers stressed the importance of inclusive, participatory, and multistakeholder approaches to defining AI governance frameworks, which are essential for nurturing trust in AI applications. One challenge is that there is currently no global consensus on a governance framework for AI; more effort should be directed towards building such consensus through multistakeholder initiatives.

As one speaker noted, much attention has been paid to AI as a product benefiting individual sectors, such as water systems or agriculture, but not enough to the harms that may occur throughout the whole life cycle of AI as an invention: where in the world it is produced, the labour conditions under which it is produced, its environmental impacts, and so on. Without duly acknowledging and tackling these problems, we risk moving further away from achieving sustainable development.

The roles and responsibilities of different stakeholders in enabling human-centric AI were also highlighted. Developers have a duty to uphold ethical codes in their work. Civil society should take up the mission of bringing people's voices and real-life experiences into the discussion on the use and development of AI. State actors should ensure that a wide range of stakeholders is involved in developing policies and regulations for AI, beyond the private sector that develops and applies the technology.

By Sherry Shek
