Why digital transformation and AI matter for justice

30 Nov 2022 14:30h - 15:30h


Event report

Artificial intelligence (AI) tools have great potential to improve the efficiency of judicial processes, but they also give rise to risks to the trustworthiness and legitimacy of judicial systems overall. To harness the benefits and mitigate the harms of AI tools, judicial operators around the world must identify the contextual issues that arise with different AI applications and pave the way to strengthen the rule of law accordingly. The workshop turned the spotlight on African regions and on various innovative instruments that judicial operators and technical experts have devised.

Speakers from UNESCO drew two broad lessons from their experience in teaching a Massive Open Online Course (MOOC) on AI and the Rule of Law, conducting the Artificial Intelligence Needs Assessment Survey in Africa in 2021, and hosting the Southern Africa Sub-Regional Forum on Artificial Intelligence in 2022. First, there is an urgent need for capacity development within the executive, legislative, and judicial branches of the public sector, especially for a better understanding of AI applications in each respective field. Second, there is a need to decolonise the datasets used for AI training. Many AI tools, built mostly in Global North countries, are trained on low-quality and non-representative data from African countries. These datasets often do not include entries in local languages, disregarding more than a thousand different languages used on the continent. With these lessons in mind, UNESCO advocated that the development of technologies be anchored in the ROAM approach: R for human rights, O for open to all, A for accessible by all, and M for multistakeholderism.

Speakers from civil society and the private sector presented their solutions and approaches to the adoption of AI in the judicial system. A common observation was the rapid digitalisation of many parts of judicial processes. During the COVID-19 pandemic, many African countries turned to virtual courts and, from there, to a gradual uptake of digital platforms for court-related functions. The Nigerian example illustrated that a case going through the traditional judicial process could take years to resolve. The private sector has therefore provided multiple digital tools powered by AI, including digitised legal reports, annotated laws of the Federation, AI-assisted document review, an appellate feedback system for judges, and an e-registry. These tools enable judicial operators such as judges and prosecutors to access cases and extract pertinent legal provisions or jurisprudence far more efficiently. The efficiency gain would be a valued public good for citizens as well, increasing their motivation to participate in judicial processes.

This transformation, however, brings several conundrums. At the most fundamental level, many people in developing countries still face accessibility barriers. Without an internet connection, the most marginalised and vulnerable would only be further excluded from the broader digital transformation. This also aggravates the problem of representation in AI tool design, as the needs of the unconnected are often not considered by AI developers. AI tools developed in the Global North are often not applicable to the Global South because many local languages and vernaculars are not included in the training data. This exacerbated inequality in access to justice must be addressed.

Furthermore, the role of judicial operators has evolved accordingly. Lawyers and judges now need to become digitally literate in AI technologies in order to understand how a particular application is built from its dataset and whether unjust biases might arise from it. For instance, the widely discussed COMPAS recidivism risk assessment software used by the state of Wisconsin in the United States has been criticised for biases and other negative externalities. Judicial experts will have to consider how such software would fare in the African context and address unforeseen challenges. In the same vein, judges have to grapple with the prospect of AI systems rendering a first judgment before human scrutiny; whether judicial rigour and justness can be maintained remains to be seen.

Last, the principles of transparency and explainability will determine the trustworthiness of AI tools and, thus, the legitimacy of a digitalised judicial system. The questions range from who designed the algorithms and what data was collected, to how a specific algorithm reaches explainable decisions. Speakers concluded that only when we can understand decision-making across all of these processes will we be able to instil trust in the judicial system.

By Yung-Hsuan Wu

 

The session in keywords

[Word cloud: WS350, Why Digital Transformation and AI Matter for Justice, IGF 2022]