Main session: Addressing advanced technologies, including AI

2 Dec 2022 08:15h - 09:45h

Event report

AI increasingly shapes our economy and society. The deployment of AI algorithms makes our lives easier in many ways, but these technologies come with pitfalls. In particular, algorithmic decision-making can result in bias, discrimination, harmful stereotypes, and wider social inequality, while AI-based systems may pose risks to privacy, consumer protection, or even human safety.

The panellists raised the issue that there is still no consensus on the definition of AI, at least in legal frameworks. While many countries consider that the term ‘AI’ should be reserved for fully automated decision-making systems, that is not the case, for instance, in Latin American regulations. Consequently, regulations built on a strict definition of AI might leave citizens in many countries uncovered and unprotected. This reflects a deeper issue: the limited participation of Global South countries in conversations about AI ethics.

This problem is notable not only from the legal perspective but also on the technological development side. The deployment of hybrid systems focused on the convergence between humans and machines calls for a broader understanding of what constitutes sources of human intelligence. There is a pressing demand for inclusion from countries of the Global South where AI systems are being implemented, but whose populations take no part in building or auditing those systems. AI systems developed in and for Global North countries do not necessarily address the specific needs of other contexts, which deepens dependency and inequality between regions. Panellists also acknowledged the urgent need to educate and train citizens in tech literacy so that they can have a say in the development and auditing of AI systems that directly affect their lives.

Given the need for broader inclusion, efforts have been made over the past few years towards a globally binding agreement to regulate AI. However, the panellists expressed scepticism about the possibility of reaching a single globally binding agreement that would satisfy every stakeholder, at least not in one step. Multilateral negotiations, even when done well, take so long that any resulting agreements risk being outdated before they reach the implementation stage. They also tend to produce a watered-down version of what nobody opposes rather than strengthening what everybody wants to achieve together. As an alternative, a semi-bottom-up approach was proposed that does not require a single binding agreement. It would proceed in stages: agreements would first be built at the regional level, and different interfaces for cross-border cooperation, product, or knowledge transfer would then be defined.

The Council of Europe (CoE), an intergovernmental organisation with 46 member states, is currently developing legal frameworks for the ethical development and use of AI. Many laws on the matter are already in place, but the CoE believes there are gaps in how these regulations are interpreted and that an additional set of instruments is needed to guarantee the fair and ethical use of AI systems. There should be a transversal legal instrument establishing fundamental guiding principles for designing, developing, and deploying AI-based systems. Companies in the private sector are also increasingly adopting tools that audit for bias and help their audiences better understand how their algorithms work.

By Paula Szewach 

 

The session in keywords

[Word cloud: Main session on addressing advanced technologies, IGF 2022]