[Read more session reports and updates from the 14th Internet Governance Forum]
The Organisation for Economic Co-operation and Development (OECD) Principles on AI set guidelines for the responsible stewardship of artificial intelligence (AI). Converting these principles into practice is essential, and this can be done by focusing on two aspects: promoting human-centric and trustworthy AI, and strengthening the multistakeholder approach.
The OECD established its expert group on AI (AIGO) in May 2018. The expert group drafted the OECD Council Recommendation on AI, which was adopted as the OECD Principles on AI in May 2019. The principles aim at the responsible stewardship of AI and provide five recommendations for national and international policymaking on AI:
- Facilitate public and private investment in research and development to encourage innovation in trustworthy AI.
- Foster accessible AI ecosystems with digital infrastructure, technologies, and mechanisms for data and knowledge sharing.
- Ensure a policy environment that will support the deployment of trustworthy AI systems.
- Empower people with the skills for AI, and support workers to adjust to an AI-enabled labour market.
- Co-operate across borders and sectors to advance the responsible stewardship of trustworthy AI.
The panellists agreed that increasing the trustworthiness of AI should be a priority in implementing the OECD Recommendation. To fully embrace the potential of AI, it is essential to have a proper governance system that puts the necessary regulations in place to safeguard fundamental human rights. Ms Carolyn N’Guyen (Director of Technology Policy, Microsoft) emphasised that promoting the openness of datasets and of risk assessments can make AI more trustworthy. Speaking as a standard-setter, Mr Mina Hanna (Co-Chair, Policy Committee of the IEEE Standards Association's Global Initiative on Ethics of Autonomous and Intelligent Systems) commented that the development of ethical and normative regulation on AI can increase transparency and accountability in AI and address the issues of algorithmic bias and inclusivity. Several panellists also pointed to the explainability of AI as a key catalyst for improving its trustworthiness.
The importance of a multistakeholder approach to AI was also highlighted throughout the session. As AI has a multitude of impacts, AI developers and implementers alone cannot ensure its non-harmful use. To harness the AI-enabled economy, public-private partnerships (PPPs) are an effective model of co-operation, as they can facilitate the sharing of knowledge and statistics. Ms Valeria Milanes (Executive Director, Association for Civil Rights (ADC) and CSISAC Steering Committee member) highlighted that initiatives on AI led by civil society actors, such as the Universal Guidelines for Artificial Intelligence, have contributed to AI-related discussions at the international level. Thanks to such efforts, the importance of developing and deploying human-centric AI is internationally recognised, and key principles are reflected in the OECD Recommendation.
The session underlined the importance of leaving no one behind in an AI-enabled economy. Significant challenges remain, as no definitive solution to algorithmic bias has yet emerged. However, efforts have been made at the international level to empower women and other marginalised populations through AI. Ms Sasha Rubel (Programme Specialist, Knowledge Societies Division, Communication and Information Sector, United Nations Educational, Scientific and Cultural Organization (UNESCO)) introduced the UNESCO publication ‘I’d Blush If I Could’, which explains how AI can potentially deepen gender bias and outlines strategies to close the gender gap in digital skills. Ensuring that the benefits of AI are shared equitably by all is a challenge not only for the tech industry, but also for policymakers, activists, and consumers.
By Nagisa Miyachi