Beyond Ethics Councils: How to Really do AI Governance

Session ID
Workshop 175

[Read more session reports and updates from the 14th Internet Governance Forum]

Ethics frameworks and regulation are two sides of the same coin. Although voluntary, ethics frameworks can help us re-evaluate existing regulation and ensure that artificial intelligence (AI) governance is not detached from its societal context. Tools such as technical algorithmic audits and impact assessments are valuable, and self-regulation alone is not enough. However, we should question AI itself, looking back on what we already have rather than treating AI systems as the ultimate solution to societal problems.

Currently, more than 100 ethical frameworks exist, and while companies and governments are aware of them, they remain voluntary and carry no sanctions. The round-table moderator, Ms Corinne Cath-Speth (Oxford Internet Institute), asked whether these frameworks actually guide the industry’s AI systems or remain open-ended notions, distant from legal frameworks.

What does ethical AI or machine learning (ML) governance look like? It was agreed that ethical frameworks are necessary as a complement to legal ones: not everything that is legal is ethical, and vice versa. Mr Bernard Shen (Microsoft) said that companies understand that they need consumers’ trust to survive, so Microsoft welcomes ethical guidelines together with appropriate legislation. But what is appropriate, in both ethics and legislation? Ms Vidushi Marda (ARTICLE 19) stressed that we are not critical enough of ML. We should understand that using ML is neither inevitable nor applicable in every instance. Marda added that the idea of ethics being exceptional to ML is a myth. ‘We need to go back to the drawing board, engage with existing regulation and open up the opaque way that ethics frameworks are built today’, she said. Ms Fieke Jansen (Data Justice Lab, Cardiff University) agreed with Marda that ML is not inevitable and that existing regulation has to address these questions first. Do we want big tech companies to have access to our private information? DeepMind had access to millions of users’ health data in the United Kingdom because existing regulation allowed it. ‘We have to unpack what are the drivers behind the ML application on social problems of technology’, Jansen emphasised.

Ethical AI governance has to be contextual, because every AI system behaves differently in a different setting. Marda noted that she does not aim to understand all 100 ethical frameworks, as they ‘don't actually meaningfully change how the systems are designed or developed or even deployed’. Jansen said that the police in the UK use AI systems, but regular police officers do not necessarily understand what that means for the community and their future work. The issues of explainability and the lack of skills for proper impact assessment, combined with the fact that ethical frameworks carry no sanctions, are dangerous because no one takes responsibility for problems. Shen reiterated that companies do care about responsible behaviour and consumer trust, but Jansen replied that trust is also contextual: it is not only about systems working properly, but also about individuals knowing how to deal with them when they do not. At the same time, context includes the historical awareness that certain datasets give skewed results and applications.

What, then, can be done to emphasise ethics in AI systems? Tools such as technical audits, impact assessments, and regulation-based approaches were mentioned. Shen said that self-regulation is not enough, for either companies or governments. Companies can explain to governments how they built their AI systems, but that does not guarantee responsible implementation and use in public services. It is important that both sectors have fluid and evolving guidelines, as technologies develop fast and laws become outdated just as quickly. Proposals from the audience included opening up the algorithms, checking them against an accepted error rate, and then making a judgement on whether to use them or not. Another proposal was to focus on intersectionality and to be aware of the potential ‘neo-colonial’ impact of AI-based decisions. Marda and Jansen warned that the historical datasets used to train AI tend to be discriminatory and biased, and called for greater oversight, especially in the public sector. We need to ask why certain technologies become part of public services, and to pay attention to the Californian ban on facial recognition in police work.

By Jana Mišić
