Judiciary engagement

25 Jun 2025 13:15h - 14:30h

Session at a glance

Summary

This discussion focused on the integration of artificial intelligence in judicial systems worldwide and whether AI can replace the human element in courts. The panel brought together judges, legal experts, and international organizations to examine both the opportunities and challenges of AI deployment in justice systems. Judge Eliamani Laltaika opened by highlighting current AI applications in courts, including administrative automation, scheduling, legal research, and predictive analytics, noting that Tanzania has already implemented AI for filing and scheduling systems.


Justice Adel Maged from Egypt raised critical concerns about AI replacing human judgment, questioning whether judges might become an “endangered species” as AI-generated judgments emerge in some jurisdictions. He emphasized the fundamental challenge of maintaining judicial conviction and the presumption of innocence in criminal cases, arguing that AI contradicts established principles in civil law systems, where conviction must be based on the judge’s personal belief. UNESCO representative Tatevik Grigoryan presented survey findings from 96 countries showing that only 4% of judicial practitioners are trained in AI, highlighting a significant capacity gap and the need for comprehensive guidelines.


Technical expert Gabriella Marcelja discussed current risks including historical biases, AI hallucinations, lack of transparency, and automation bias, referencing cases like the UK’s Post Office scandal, in which flawed computer evidence led to wrongful convictions. Professor Milos Jovanovic addressed sovereignty concerns, emphasizing that different technological zones and geopolitical contexts require nation-specific AI models aligned with local legal frameworks and values. The discussion revealed a consensus that while AI offers significant benefits for administrative tasks and efficiency, human oversight remains essential, particularly in criminal justice, and comprehensive legislation is urgently needed to govern AI use in judicial systems.


Keypoints

## Major Discussion Points:


– **AI Implementation in Judicial Systems Worldwide**: Discussion of how AI is currently being used in courts globally, including administrative automation, scheduling, e-filing, legal research, predictive analytics, and in some cases, assisting with sentencing and judgment drafting, with examples from Tanzania, Egypt, and other jurisdictions.


– **Risks and Challenges of AI in Criminal vs. Civil Justice**: Examination of the distinction between using AI in civil versus criminal proceedings, with particular emphasis on how AI use in criminal cases may conflict with fundamental legal principles like presumption of innocence and the requirement for judges’ personal conviction in decision-making.


– **Need for Legislative Framework and Regulation**: Strong consensus among speakers that comprehensive legislation and regulatory frameworks are essential before widespread AI adoption in judicial systems, as current laws were developed before the AI era and don’t adequately address AI-related challenges.


– **Global Digital Divide and Sovereignty Concerns**: Discussion of inequalities between Global North and South in AI implementation, including infrastructure limitations, data quality issues, and concerns about who trains AI models and potential biases, as well as the need to maintain judicial sovereignty while using globally-developed AI tools.


– **Stakeholder Impact and Human Element Preservation**: Consideration of how AI implementation affects various stakeholders (lawyers, civil society, creators, court users) and the unanimous agreement that AI should remain a tool to assist rather than replace human judgment in judicial decision-making.


## Overall Purpose:


The discussion aimed to explore the current state and future implications of AI integration in judicial systems worldwide, examining both opportunities and challenges while seeking to establish best practices for ethical and effective implementation that preserves the integrity of justice systems.


## Overall Tone:


The discussion maintained a cautiously optimistic but predominantly conservative tone throughout. Speakers acknowledged AI’s potential benefits while expressing serious concerns about risks and implementation challenges. The tone was professional and scholarly, with judicial practitioners and experts sharing practical experiences and theoretical frameworks. There was a consistent emphasis on the need for careful, regulated approaches rather than rapid adoption, reflecting the legal profession’s inherent conservatism when dealing with fundamental changes to justice systems.


Speakers

**Speakers from the provided list:**


– **Eliamani Laltaika** – Judge, Moderator of the session


– **Adel Maged** – Justice, Egyptian criminal justice system expert


– **Tatevik Grigoryan** – UNESCO representative working on AI and rule of law programs


– **Gabriella Marcelja** – Expert in technical and legal aspects of AI, focusing on security risks and LLMs


– **Milos Jovanovic** – Professor of cyber security and cyber diplomacy


– **Mohamed Farahat** – Lawyer and legal consultant from Egypt


– **Maureen Fondo** – Doctor of Philosophy candidate in intellectual property, Head of copyright and related rights at Africa Regional Intellectual Property Organisation (ARIPO)


– **Sylvia Chirawu** – Justice of the High Court of Zimbabwe (appointed 2017), legal practitioner since 1984, former National Director of Women and Law in Southern Africa (Zimbabwe chapter), former Deputy Dean of the Faculty of Law, University of Zimbabwe


– **Audience** – Various audience members, including Dick Keplung (Senator, Nigeria) and Christian Fazili (public prosecutor, DR Congo)


**Additional speakers:**


None identified beyond those in the provided speakers names list.


Full session report

# Summary Report: Artificial Intelligence in Judicial Systems Panel Discussion


## Executive Summary


This panel discussion, moderated by Judge Eliamani Laltaika, brought together judges, legal experts, and international organizations to examine the integration of artificial intelligence in judicial systems worldwide. The discussion focused on current AI applications, implementation challenges, and whether AI can or should replace human judgment in courts. Participants shared experiences from Tanzania, Zimbabwe, Egypt, and international perspectives from UNESCO, while highlighting the need for comprehensive legislation and careful implementation approaches.


## Current AI Applications in Judicial Systems


### Successful Implementations


Judge Eliamani Laltaika outlined various ways AI is currently being utilized in judicial systems globally, including administrative automation, scheduling, docket management, e-filing systems, legal research assistance, predictive analytics, online dispute resolution, evidence e-discovery, and judgment drafting assistance. Tanzania’s experience with AI implementation for filing and scheduling has successfully eliminated manual processes, demonstrating tangible efficiency gains.


Justice Sylvia Chirawu from Zimbabwe reported that her country adopted an integrated electronic case management system, with implementation cascading from the High Court to Magistrate Courts. Zimbabwe is also developing a national AI policy framework to guide future implementation.


Technical expert Gabriella Marcelja emphasized AI’s potential to prevent errors caused by human oversight or fatigue while boosting efficiency and reducing operational costs.


### Emerging Challenges


However, significant concerns emerged about AI implementation risks. Marcelja identified security vulnerabilities, historical biases embedded in training data, AI hallucinations producing false information, and lack of transparency in decision-making processes. She referenced the UK’s Post Office scandal as an example of how flawed computer evidence led to wrongful convictions, demonstrating the consequences of excessive trust in automated systems.


Justice Adel Maged from Egypt raised concerns about deep fake evidence and AI-generated submissions by litigants, including cases where litigants use AI to analyze judges’ past decisions to predict outcomes. These developments create unprecedented authentication challenges for courts.


## The Question of AI Replacing Human Judgment


Justice Maged posed a provocative question: “Are judges endangered species?” He noted that some jurisdictions are already seeing AI-generated judgments with appeals being filed against such decisions, indicating this has moved beyond theoretical discussion.


Justice Maged explained that according to Egyptian legal system principles, criminal conviction must be based on the judge’s personal belief and conviction, creating fundamental incompatibility with AI-assisted decision-making in criminal proceedings. He distinguished between Anglo-Saxon and civil law systems, suggesting that AI implementation cannot follow a uniform global approach due to different legal traditions and philosophies.


While AI might be acceptable in administrative procedures, Justice Maged argued that criminal justice requires extreme caution, as AI systems designed to predict re-offending behavior may contradict the fundamental principle of presumption of innocence.


## International Perspectives and Capacity Gaps


### UNESCO’s Global Assessment


Tatevik Grigoryan from UNESCO presented survey findings from over 560 respondents across 96 countries, revealing that only 4% of surveyed judicial practitioners have received AI training. This highlights the significant preparation needed before widespread AI deployment.


UNESCO has developed global toolkits, online courses, and draft guidelines to support judicial practitioners. Grigoryan announced plans for an updated Massive Open Online Course (MOOC) launching in early 2026, covering data protection, cybersecurity, and other emerging AI topics. She noted that Colombia has adopted AI guidelines for its judicial system and that the EU AI Act classifies judicial AI systems as “high risk.”


### Sovereignty and Technological Considerations


Professor Milos Jovanovic introduced the concept of technological sovereignty, arguing that different geopolitical zones maintain varying perspectives on human rights, justice, and governance that influence AI training and outcomes. He emphasized that national sovereignty requires maintaining control over AI systems aligned with national legislative frameworks rather than relying entirely on foreign-trained models.


## Global Digital Divide and Infrastructure Challenges


Mohamed Farahat, speaking from an Egyptian legal perspective, addressed significant inequalities between Global North and South that affect AI implementation. Infrastructure gaps, including unreliable electricity, limited internet connectivity, and insufficient computing resources, create fundamental barriers to AI deployment in many developing countries.


Farahat also highlighted data quality issues arising from Western-trained AI models that may not suit African or other regional contexts, as AI systems trained primarily on Western legal precedents may not adequately represent diverse legal traditions and cultural contexts.


## Legislative and Regulatory Needs


Multiple speakers emphasized the critical need for updated legislation and regulatory frameworks. Justice Maged argued that comprehensive legislation is essential before allowing AI evidence in courts, as current laws cannot adequately handle AI deployment without proper rules and guidelines.


Justice Chirawu noted that Zimbabwe’s evidence laws, like those in many countries, were developed before the AI era and require amendment to address AI-specific challenges of authenticity, reliability, and transparency.


Dr. Maureen Fondo emphasized that AI policies affect all court users and stakeholders, requiring balanced approaches and suggesting that countries should conduct situational analysis and benchmark with other nations’ AI approaches.


Senator Dick Keplung from Nigeria emphasized the need for legislators to work closely with the judiciary in developing AI legislation, highlighting the importance of cross-branch collaboration.


## Risk Assessment and Security Concerns


The discussion revealed multiple categories of risk associated with AI implementation, including historical biases in training data, AI hallucinations, manipulation vulnerabilities, and lack of transparency in decision-making processes. The automation bias phenomenon, where users over-rely on AI outputs without proper critical scrutiny, emerged as a particular concern.


The emergence of sophisticated AI-generated content, including deep fake videos and synthetic documents, creates unprecedented challenges for evidence authentication. Courts must develop new protocols and expertise to distinguish between genuine and AI-generated evidence.


## Key Areas of Agreement and Divergence


Participants demonstrated consensus on several principles: AI should assist rather than replace human judgment, particularly in criminal cases; comprehensive legislation and regulatory frameworks are urgently needed; and significant risks exist including bias, authenticity issues, and over-reliance on automated systems.


However, disagreements emerged regarding implementation timelines and approaches. Some speakers from countries with successful implementations expressed more optimism about current capabilities, while others emphasized the need for comprehensive legal frameworks and extensive caution before broader implementation.


## Conclusion


The discussion revealed that while AI offers potential benefits for judicial systems, implementation requires careful consideration of technical, legal, ethical, and cultural factors. Participants unanimously rejected the idea that AI should replace human judgment in courts, emphasizing the irreplaceable nature of human empathy, wisdom, and moral reasoning in judicial decision-making.


The path forward requires balancing innovation with caution, developing comprehensive regulatory frameworks, extensive capacity building, and graduated implementation approaches that preserve human oversight while harnessing AI’s potential benefits. The significant challenges identified, from global digital divides to sovereignty concerns, indicate that realizing AI’s benefits while avoiding risks will require sustained effort and unwavering commitment to fundamental justice principles.


*Note: This discussion was subject to time constraints with speakers having limited presentation time, which affected the depth of coverage on some topics.*


Session transcript

Eliamani Laltaika: … forefront of research on the impact of AI in the justice system, and Professor Sourdin tells different stories on how AI is used by courts around the world. For example, he indicates that AI is used for administrative automation, scheduling, docket management, e-filing, and I can attest to this because this is what we do in Tanzania. We no longer have manual filing, we no longer have manual scheduling. This is all done by AI. It’s also used for legal research and predictive analytics. Tools like LexisNexis, ROSS Intelligence, and Westlaw Edge in the U.S. use predictive analytics and are making their way into many other parts of the world. It’s used cautiously, sparingly, in some eastern countries to assist judges in sentencing. It’s also used in online dispute resolution for small civil claims and traffic matters, accessibility enhancement, evidence e-discovery, and lastly, some judges in some countries use it for judgment drafting, to kind of tell them how to go about it after summarizing all the texts. They say no one size fits all. If this book was written with examples mainly from the West, probably that is not what is happening in some other parts of the world. At this juncture, I welcome Justice Adel Maged to tell us his experiences and how to avoid algorithmic injustice in a criminal court. The floor is yours, Honorable Justice Adel Maged.


Adel Maged: Thank you. Good morning, everybody. Allow me, Your Honor, to thank the administration, the Secretariat, for allowing this justice track to start efficiently. I hope it will expand, and I thank you personally, our moderator, for your efforts to make this track happen. Thank you. I would like to start by rephrasing this question: can AI replace the human element in courts? Because we are seeing now that AI is replacing many jobs. It’s a reality. So my question to all of you: are judges an endangered species? I am seriously speaking now, because we have started to hear about judgments in some jurisdictions deployed and issued by artificial intelligence, and some of these judgments have already been appealed before higher courts. I’m not going to mention a specific jurisdiction, but it is now a case that is being studied. In Egypt, for example, in our criminal justice system, it is possible that some litigants would submit evidence that is created by AI. In this case, what are you going to do? And you know, with deep fakes, for some videos you cannot tell whether they are real or authentic or not. What can we do? Another question: the litigants themselves, the parties, can use AI to determine their strategy in the submission of cases and in litigation as well, because they can feed AI all the cases issued by a certain judge in a certain jurisdiction to predict his judgment. This could happen. We have some risks from AI, and I have no doubt that AI has great benefits. We can use AI in legal research, but there has been a case in the US where lawyers used AI for legal research and the AI, with some hallucination, brought up cases which do not exist. What is the case here? Who is responsible when it comes to AI deployment in the justice system? Who is responsible for the acts of the AI? It is not the AI.
It is not the company. It is the person who is using the AI. So if a judge or a lawyer is using AI to collect precedents, to extract provisions, then it is his own responsibility to verify these precedents or this jurisprudence as well. If I am going to talk about the criminal justice system, I will distinguish criminal justice in general; I will distinguish between civil justice and criminal justice. Because in civil justice, it is possible to accept AI in several forms. For example, AI can be used in the administration of justice to allocate cases to judges. But in criminal justice we have to accept AI with caution, because, for example, when we are using AI in the criminal justice system, we also have to distinguish between the Anglo-Saxon criminal justice system, the common law system, and the civil law system, the Latin law system, like Egypt. In the US, in the UK, they use a jury in their system. It is different in our system; we don’t use a jury. It is the conviction of the judge himself. So according to our established principles and rules at the Court of Cassation, the Supreme Court of Egypt, for example, in criminal proceedings the conviction should be based on the judge’s own belief, and he shouldn’t base his conviction on other things or on another person’s opinion. So what about evidence submitted to him in a form created by AI? What about judges who search for precedents or conclude their judgments with AI? This will contradict, of course, an established principle in the Egyptian legal system, in the Latin legal system, and in other legal systems, in which conviction or innocence is based on the judge’s conviction. Another fundamental legal principle in criminal justice is the presumption of innocence. We have been hearing about AI systems and applications that are used to predict, for example, re-offending, and they have been used in some jurisdictions for judges to decide on the acquittal of a person or sending him to jail, for example, with AI. Is this permissible?
According to our justice system, it is not permissible, because it contradicts the principle of the presumption of innocence. Thank you. So I would like to conclude with the following: AI is acceptable in administrative procedures before the courts, because it does not touch upon a person’s culpability or innocence. But when it comes to criminal procedures, we have to be cautious while using AI. So, what I’m saying is that for any justice system to use AI, my belief, and this is what I keep demanding, is that we need legislation. We can’t leave it to judges or lawyers to present AI evidence without rules. This is what I believe, and this is what I have repeated in many sessions in the past. So, I believe that legislation is important, and I would like to conclude with a general comment, if you don’t mind, because I have attended this forum, and I was hearing about this contradiction of opinion about the governance of the Internet, but I don’t hear about regulation. If we would like to govern the Internet, AI, or technologies, we need legislation. Thank you.


Eliamani Laltaika: Thank you very much. Thank you, judge. Thank you very much. You can tell from the speaker that this is a somewhat conservative approach, and we need all this. There are certainly other judges who are doing that, and I think that is a good thing, because we can get to other tools in a much, much speedier way than our more experienced colleagues. How do we strike the right balance? We strike the right balance by getting tools developed. My second speaker is from UNESCO, and UNESCO has been working on this for a long time, and they have come as an umpire to try to put some guideposts on how to approach this very, very thick forest called AI and judging. Ms Grigoryan will enlighten us on what exactly UNESCO has been doing in the past few years, and, at the end, if time allows, we will have a chance to hear from her.


Tatevik Grigoryan: Thank you very much. Thanks for the introduction, and thank you so much, judge, for your insightful input. So, just to give you an overview of how we started, and why we started working with the judiciary: you mentioned a few of the things that we have been doing in the past, but I will now present the survey that we conducted among judicial practitioners. We had respondents from about 96 countries. As to how we started working on AI, some of you, or hopefully many of you, know that UNESCO, in 2021, adopted the Recommendation on the Ethics of Artificial Intelligence, and over 70 countries are using it to assess the situation on the ground. Following this, in 2022, UNESCO launched an AI and the rule of law programme, with the aim of supporting judicial practitioners to better integrate AI into their work, and the goal is, as I mentioned, to engage the stakeholders within the justice system to support AI and the rule of law in the judicial context. So, as I said, the survey, which we conducted in 2023, had over 560 judicial actors from 96 countries, and the topics we covered were mainly the use of AI in the judicial context, the perception of arising risks, and the need for training. So, I think that’s it. Thank you. Thank you very much.
The survey covered how practitioners were using AI day to day, for computing, drafting documents, brainstorming, and the other aspects you can see as well, and it also covered the perceptions associated with the risks, and obviously capacity. We don’t know the source of the data these systems are trained on, so this poses serious risks. And I mentioned that only 4 percent of the respondents were trained in AI, so there is a clear gap. One more thing: you can see here the countries that actually have guidelines for the use of generative AI in courts and tribunals. To respond to this need, UNESCO reached out to the community and the decision makers of the countries to train judges and judicial actors. UNESCO has the Judges’ Initiative, a ten-year programme, where we discuss issues related to freedom of expression, safety of journalists, and so on and so forth. We have a global toolkit on AI and the rule of law, which aims to support judicial practitioners in these efforts and has already reached 8,000 judicial practitioners. I’m aware of time, so I’ll go through quickly. We also have a massive open online course which is available on our website, so if you want to learn more, you can go there and use the tool, which is completely free of charge. And at the beginning of 2026, we are planning to launch an updated MOOC which will cover various aspects of data protection, cyber security, and other emerging and persisting topics. In the meantime, lots of resources are available, and trainings are delivered at international and regional levels, both in person and online. I will go through quickly.
So, in May 2025, since, as you saw, there aren’t many countries that have guidelines for the use of AI in the judicial system, we launched draft guidelines. Their purpose is to support the judicial system in using AI in a manner that aligns with the principles of human rights and the rule of law, and there we have a set of principles that can guide these efforts. Cautious of time, I will not go through all of them. There are a couple of countries that have guidelines for the use of AI in the judicial system: after the decision of its Constitutional Court, Colombia was the first country to adopt such guidelines, and a few other countries are already considering this. To sum up, we also have a repository of verified AI tools, which can be used to further support work in the judicial system. I’ll stop here. If there are any questions, I will be happy to take them. Thank you. Another clap for UNESCO, please.


Eliamani Laltaika: Thank you. Thank you very much. We are moving on. Gabriella is next to me here, and she covers both the technical and legal aspects in her presentation. And you will agree with me that everything that has been said by UNESCO is changing very quickly. The survey that was taken in 2023 may not necessarily reflect what is happening now. I welcome Gabriella to tell us what the current risks are and how we can mitigate them as we go along launching different tools. You’re welcome. Thank you.


Gabriella Marcelja: Thank you very much for the invitation. I will perhaps skip some parts that were already discussed, because we all know the motivations: preventing errors due to human oversight or fatigue, or wanting to be quicker and cheaper, to just boost efficiency, of course. We have AI being used in different ways, from reviewing contracts to administrative tasks and scheduling courts, et cetera. So here, what we want to focus on is the concept of AI as a non-futuristic idea. It’s not something new; it’s already in the courtrooms, as we’ve seen. And the question is really not if we use it, but how we use it, right? And the question is really how to make it just. In this sense, when we speak about the judiciary and AI, we do have to focus on security risks posed by LLMs, especially in the context of insider threats. This means the system appears aligned and cooperative, but may basically act against institutional interests. And so here we are talking also about trust, about control, about oversight in the use of AI in the judiciary. So, very briefly, on the flip side of the coin we do have risks like historical biases: you can think about the Deliveroo case in Italy, or the COMPAS system case in the US. We have hallucinations and manipulations; there was even a report of a US lawyer quoting non-existent case law, and of course he was disbarred from the Bar Association. Then we have the lack of transparency: the black box problem is still there. We have the German Federal Constitutional Court, which ruled against the use of the Palantir Gotham-based surveillance system, a software used by the police; they stopped it, citing its lack of transparency and violation of privacy.
So the real danger here is really that judges and legal professionals might over-rely on AI outputs today, without perhaps this legal or, let’s put it, critical scrutiny. This is a phenomenon that today we call automation bias, and we saw it as a real-world problem in the UK’s Post Office scandal, a situation where flawed computer evidence basically led to wrongful convictions. So there is, of course, as the judges also said before, a huge regulatory gap. Technology, of course, moves much faster than the legal sector does. And so what we are also seeing is that we lack an understanding even of what’s ethical and what’s not. So the guidelines are, of course, being drafted, but it needs to be done in line also with the technical expertise available at the moment. What we also have is the so-called removal-of-human-judgment problem, which is basically the idea of having online courts or online sentencing, like the first metaverse court hearing that happened in Colombia in 2023, which actually raises a serious concern about removing human judgment and empathy from the system, especially in criminal cases. Perhaps for the administrative part that wouldn’t be a huge issue, but for the criminal one it definitely is. So when it comes to what the future looks like and what we can do, we definitely need to always keep in mind that AI is just a tool to assist us, and not something that can be a replacement for human judgment. We do have the EU AI Act, which is a step in the right direction, I would say, because in any case it already classifies AI systems used in the judiciary as high risk and basically imposes strict regulations on that part. There is also the initiative of the European Ethical Charter on AI in judicial systems, which basically provides crucial principles on that.
And since we are in a judging panel, I just want to conclude with some projects that are actually interesting to explore further: CrossJustice and FACILEX, which are basically working on AI tools to assist judges with legal research and judicial cooperation. There are a few other cases that I’m not going to get into, but just to conclude, I would say that the real challenge right now is really how we integrate AI in a wise and ethical way, so that it is not AI that defines justice, but AI actually just supports us as a tool. Thank you.


Eliamani Laltaika: Thank you very much, Gabriella. To wind up this part on on-site speakers: we all know that judiciaries are still very conservative. A precedent from one country is not binding in another; you follow only what your own Supreme Court has said, not what the Supreme Court of some other country said. To explore this further, we will invite a professor of cybersecurity and cyber diplomacy to tell us how countries maintain their sovereignty and judicial independence even though the tools we are developing, and AI and science in general, are universal. Welcome.


Milos Jovanovic: Thank you, Mr. Moderator. It’s my pleasure to be here in Norway, in Lillestrøm. This is a very interesting topic, and I think we should go deep to understand how it should be managed at the national level. The main question when speaking about judiciary engagement is sovereignty, because there are, of course, different legal models in different countries, and we should ensure that every AI system conforms to the national legislative framework. So when we speak about judiciary engagement, we should count on legal, geopolitical, and technological sovereignty. If we use generative AI systems, I think it’s a good idea to start with research and so on, but we cannot rely on decisions produced by these artificial intelligence models, especially when there are different technological zones. I always like to mention that the world is not unipolar anymore; we have fragmentation processes across the globe. Something may be favorable in some parts of the world, for example in the United States, in Europe, in Asia, in Latin America. When we speak about human rights, what is favorable in China, in Russia, in the United Kingdom, or in the United States differs; we have different views. So speaking about AI tools, I think we need a human-centric approach to be in a position to control the outcomes of artificial intelligence systems. There is, of course, the question of who trained these systems. When we speak about LLM models, about the sources fed into AI systems, it is a big question. When you have different technological zones, for example the Western European technological zone, the Russian technological zone, the Chinese technological zone, it is about sovereignty and who trained these models. For example, if I ask a question about Taiwan in a Western AI model and in a Chinese one, I may get different answers.
For example, if I ask a Chinese chatbot about Taiwan, I will get a strict answer that Taiwan is a part of China. We are speaking about modalities, about how different countries take different approaches and carry different narratives, and that is very important. So maintaining sovereignty in its different aspects, legal, geopolitical, and technical, is key. When we speak about global cooperation, we definitely need it, this is key, but sovereignty starts with informed national decision-making. So the resolution of this issue is to strengthen national capacity in artificial intelligence. Many countries, especially here in northern Europe, have really good and strong institutes, and many countries work on this issue, because I think the path runs from the national to the global. We need to ensure that national capacities reach a comparable level, so that they are able to integrate with global interfaces, I would say. And when we speak about sovereignty, about judiciary engagement, and generally about justice, there is the question: what is justice now, and do we have a common approach to resolving it? To conclude, I think that we cannot fully rely on artificial intelligence systems, especially general-purpose systems like ChatGPT, and we should invest a lot of time, energy, resources, and money to train models and to build specific AI tools aligned with national legislation; that is the key. Human responsibility should stay at the centre, at least until we achieve further progress, and we should be aware of the limits of these tools and technologies. All AI systems should be closely monitored and their outcomes analyzed.
So, my conclusion in one sentence is that we can’t rely 100% on AI tools until further development. Thank you very much.


Eliamani Laltaika: Thank you. Let me give you a brief introduction to part two. It is led by three speakers; I’m sorry, last time I said two, but there are actually three. Mr Mohamed Farahat is a lawyer and legal consultant from Egypt who attended last year and made several presentations. We have Maureen Fondo, head of copyright and related rights at the African Regional Intellectual Property Organisation, and we will conclude with a presentation by an equally senior judge from Zimbabwe. Please listen carefully to what the speakers have to say about the challenges for judges and how to navigate through them. A warm welcome to Mr Mohamed Farahat, all the way from Egypt. You have five minutes.


Mohamed Farahat: Thank you for the invitation. One of the main questions when we speak about deploying AI in the justice system is the digital divide. There are many points here; I think we would need days and days to speak about deploying AI systems in the justice system, and I would like to thank our colleague from UNESCO, as there are a lot of tools you can use to learn more. But let me speak about the digital divide, or AI divide, between the Global North and the Global South, because I think it is very important to our talk. Let me go back to what the honorable justice mentioned at the beginning about the benefits of using AI in courts and in the legal system in general: AI has the potential to revolutionise the justice system, from predictive policing to legal research, case management, and contract review. I will speak about judges and lawyers, because I am a lawyer myself, and lawyers have started to rely on AI to do their work, even for sentencing recommendations; there are cases in which judges are already asking AI for a recommendation on the sentence based on the facts presented in the case. However, we must consider existing global inequalities and their impact. Does the inequality between the Global South and the Global North affect the use of AI? I think yes. Why? Because the AI divide, or digital divide, between the Global North and South is not merely an internet access issue anymore; it now covers the whole web of technologies. Infrastructure, for example, is one clear example that illustrates the disparities or inequalities between the Global North and the Global South.
In many countries in the Global South, if we speak of Africa, some countries in Asia, some countries in the MENA region, there is a lack of robust and reliable digital infrastructure: electricity and power, high-speed internet, and the advanced computing resources that are necessary to develop, deploy, or even just use AI technology in a reliable way. This infrastructure is very much needed when we decide to use AI in the judiciary, because this sector is a priority in most countries now; for example, Egypt has adopted the second version of its AI strategy and added the justice sector as one of the sectors in which AI should be deployed. I think we have some programs that could cover this, if there is some time for that. Another point that illustrates the inequalities between Global North and South is data availability and quality. Our colleague speaking about cybersecurity and cyber diplomacy asked clearly who trained these LLMs. I think this is also part of the problem, because of the availability and quality of the data. The quality of the data is one of the main points when we come to using AI in the courts. Who trained this data, these LLMs, these applications? Western countries. We cannot simply take this technology and apply it in Southern countries or in African countries. Why? Because there are many cases showing that this technology carries bias. I will not go into details, but the data, and how these models are trained, can create differences or inequality.


Eliamani Laltaika: Thank you. We appreciate that intervention, and your five minutes are over; we’ll probably get some time later during Q&A. Thank you so much. Next on our list is a Doctor of Philosophy candidate in intellectual property and head of copyright at ARIPO. We know that economists have a concept called externalities: you may plan something, but it affects others. We invite Ms. Fondo, who is a lawyer and also does work in copyright, which touches on economics and the rights of different groups, to tell us what the use of AI in the judiciary can mean for other groups such as lawyers, civil society organizations, and creators. Will the use of ChatGPT, for example, deprive well-known authors of law books of their rights? We do not want to confine our discussion to only the courtroom and the judges. Karibu, Ms. Fondo.


Maureen Fondo: Thank you so much, Dr. Judge Eliamani Laltaika, your excellencies, lord and lady justices, learned colleagues, distinguished ladies and gentlemen, all protocol observed. I thank the organizers for this opportunity to speak on this panel and share knowledge, and I appreciate the speakers in part one for sharing their insights. As legal professionals, we have different roles that can really help in shaping the IP ecosystem and also the judiciary. When we look at the AI policies that are available, and those that should be in place, we know that the international conventions and treaties currently carry a human-centric aspect, and discussions are ongoing. This really affects whatever happens for the contracting parties and for their national systems. So as legal professionals, what we need to do is to use those platforms to see how best we can put our voice out, to influence the policies that will shape AI and the legal frameworks within our countries. We have our individual law societies and the regional law societies, which provide platforms for discussion of this important discourse, as well as forums like this one, where we can get insights and information on what we have to do. There is a need to undertake situational analysis to see what other countries are doing and to benchmark against what we want to have. Looking at instruments such as the EU AI Act, the UK guidelines, and those of New Zealand, Singapore, and the US, we try to contextualize according to the region we are in, whether Africa, Asia, or any other region, and the country we are in, to tailor-make it: what are the gaps within the judiciary system? What are the gaps within the IP ecosystem?
Because there are several stakeholders involved in the justice system. What are their pros and cons? What should they be able to benefit from, and how is the judiciary supposed to act? In terms of consultation, there is a need to engage each and every one of them so that a policy framework can be formulated. The policy framework will really help us know whether we need a legal framework, and of what kind, within that country; the legal framework will then determine, in terms of creating and co-creating value, whether law reforms should be undertaken in that area, so as to establish how AI can be used, within what limitations judicial officers may use it, how they can relate to it, and how they can exercise due process of law without overstepping or allowing undue influence on their decisions and reasoning, so that the principles of justice are maintained. As lawyers and scholars we undertake research, fill gaps that exist, and make recommendations; we need to apply these recommendations and have platforms where we can speak and publish so that the public can benefit. There should be a balanced approach, looking not only at one side but at the bigger picture of what AI policy should entail: who does it affect? All court users are affected by AI, whether through guidelines or regulations and the like, and how best can it be made to benefit all stakeholders? So thank you very much for the opportunity. Thank you.


Eliamani Laltaika: Thank you very much, Ms. Fondo, for that intervention. It is definitely upon us to consider all our different stakeholders and how they are affected by the decisions we take to use, or not to use, AI in the judicial process. To conclude this part, I am really honoured to welcome a very experienced judge, Madam Justice Sylvia Chirawu. She has been a legal practitioner since 1984, when she was registered as an advocate; she went into private practice for ten years and later became the National Director of Women and Law in Southern Africa, Zimbabwe chapter. She then became an academic, rising through the ranks to Deputy Dean of the Faculty of Law at the University of Zimbabwe, and was appointed Judge of the High Court on 17 December 2017. She now presides over the Civil Division and has also spent time at the Family Law Court in Harare. A warm welcome, Madam Justice Sylvia. Five minutes.


Sylvia Chirawu: Thank you very much, Honorable Judge. I want to tell all participants that I only joined this group yesterday evening, but I will do my best to speak on the issues being raised. First and foremost, Zimbabwe adopted what we call an integrated electronic case management system in 2022, initially in the Commercial Division of the High Court, then also the Social Court and the General Division of the High Court, and it has now cascaded to the Magistrates Court. So Zimbabwe really is on course in terms of electronic systems. Coming specifically to the issue of evidence, I am sure Zimbabwe is not alone in that most of the laws that deal with the admissibility of evidence were developed well before the AI era. In Zimbabwe we have the Criminal Procedure and Evidence Act, the Civil Evidence Act, and more recently the Cyber and Data Protection Act, as well as the Criminal Law Code. What is apparent is that when these laws were developed, AI was not an issue. Of course, over the years there has been some movement to make certain evidence admissible, for instance electronic evidence and documents generated from computers, so that evidence is now admissible under the laws. But generally, the laws have not been amended to keep pace with AI; the Cyber and Data Protection Act really deals with the collection and use of data. So when we look at AI, the challenges have been mentioned: those that have to do with authenticity, reliability, and lack of transparency. So now, when it comes to access to justice…


Eliamani Laltaika: Thank you very much Madam Justice, would you conclude please?


Sylvia Chirawu: Yes. In terms of the way forward, perhaps we should have a specific policy framework. Zimbabwe as a country is actually developing a policy framework on AI, so the judiciary could come in and develop its own policies specific to the judiciary, and also re-examine the laws that deal with evidence with a view to amending them with respect to AI. In terms of the human element, there must also be an acceptance that the human element remains essential, because the court will always be needed. Thank you, thank you very much.


Eliamani Laltaika: Okay, thank you. We appreciate that. Thanks a lot. As you have heard, Madam Justice is emphasizing the importance of legislation for any decision we take, and because parliamentarians are also here, we need you to legislate. Yes, I’m coming to that. A big clap to our online speakers. I am told we have five minutes left, three minutes or so, so I’ll bend the rules by allowing a few questions. I’ll start with the Honorable Members of Parliament, one of you, you can share the question between you, and then I’ll go to the gentleman there. We start here. Yes, yes.


Audience: To the leadership of this session, highly esteemed members of the judiciary, my dear colleagues: I am Senator Dick Keplung, a senator representing the good people of Nigeria in the 10th Senate. I came along with my colleagues; my chairman, another senator, and the chairman of the House of Representatives are here. I feel like making a small contribution. The topic is: can AI replace the human element in courts? The answer is no, it cannot. AI, for me, improves the human lawyer’s efficiency. AI is scientific; it can be controlled, and therefore, if anybody controls the AI aspect in court, it will interpret accordingly. As was said regarding the courts in Egypt, there must be legislation, because there is no way a Supreme Court can give a judgment where there is no legislation. And when topics like this are discussed by the judges, I would have loved for legislators to be part of the discussion and share the experience with the judges. Thank you very much for the opportunity. I would wish that such legislation be put in place. Thank you very much.


Eliamani Laltaika: Thank you very much. The Honorable MP is echoing what the judge from Zimbabwe was saying. We generally need guidance in terms of legislation. Gentleman, please.


Audience: Thank you for the insightful presentations. I’m Christian Fazili from the DR Congo; I work as a public prosecutor. As a public prosecutor in the DRC, I confront two AI-driven threats to our justice system amidst ongoing violence, as you know: AI fuels disinformation, exacerbating ethnic conflicts, and armed groups use AI to generate fake evidence. So I would like to know, how can the international community…


Eliamani Laltaika: I’m sorry, there is another session specifically on how AI can be used for good rather than for the wars and harms you have mentioned; we will invite you to that one. Thank you very much.



Eliamani Laltaika

Speech speed

93 words per minute

Speech length

1288 words

Speech time

830 seconds

AI is used for administrative automation, scheduling, dockets management, e-filing, legal research, predictive analytics, online dispute resolution, evidence e-discovery, and judgment drafting assistance

Explanation

AI has multiple applications in justice systems ranging from basic administrative tasks to more complex analytical functions. These applications span the entire judicial process from case filing to judgment writing assistance.


Evidence

Tools like LexisNexis, ROSS Intelligence, and Westlaw Edge use predictive analytics; some judges use AI for judgment drafting after summarizing texts


Major discussion point

Current Applications and Benefits of AI in Justice Systems


Topics

Legal and regulatory | Development


Tanzania has successfully implemented AI for filing and scheduling, eliminating manual processes

Explanation

Tanzania serves as a practical example of successful AI implementation in judicial administration. The country has moved away from manual systems to AI-powered automation for basic court operations.


Evidence

Tanzania no longer has manual filing or manual scheduling – all done by AI


Major discussion point

Current Applications and Benefits of AI in Justice Systems


Topics

Development | Legal and regulatory


Disagreed with

– Adel Maged
– Milos Jovanovic

Disagreed on

Readiness for AI implementation in judicial systems



Adel Maged

Speech speed

122 words per minute

Speech length

884 words

Speech time

433 seconds

Deep fake evidence and AI-generated submissions by litigants create authentication challenges

Explanation

AI technology enables the creation of sophisticated fake evidence that can be difficult to distinguish from authentic materials. This poses significant challenges for judges in determining the authenticity and reliability of evidence presented in court.


Evidence

Deep fake videos where you cannot distinguish if they are real or authentic; litigants can submit evidence created by AI


Major discussion point

Risks and Challenges of AI in Judicial Systems


Topics

Cybersecurity | Legal and regulatory


Agreed with

– Gabriella Marcelja
– Milos Jovanovic

Agreed on

Significant risks exist with AI implementation including bias, authenticity issues, and over-reliance


Comprehensive legislation is essential before allowing AI evidence in courts, as current laws cannot handle AI deployment without proper rules

Explanation

The legal framework needs to be updated to address AI-specific challenges before courts can safely integrate AI technologies. Without proper legislation, there are no clear guidelines for judges and lawyers on how to handle AI-related evidence and procedures.


Evidence

Cases of AI hallucination in US legal research where lawyers submitted non-existent cases; need for rules on who is responsible for AI acts


Major discussion point

Legal and Regulatory Framework Needs


Topics

Legal and regulatory | Human rights principles


Agreed with

– Gabriella Marcelja
– Maureen Fondo
– Sylvia Chirawu
– Audience

Agreed on

Need for comprehensive legislation and regulatory frameworks for AI in judicial systems


Disagreed with

– Eliamani Laltaika
– Milos Jovanovic

Disagreed on

Readiness for AI implementation in judicial systems


AI is acceptable in administrative procedures that don’t affect culpability, but criminal justice requires extreme caution

Explanation

There should be a clear distinction between using AI for administrative tasks versus substantive legal decisions. Administrative functions like case allocation are less problematic than using AI for decisions that affect a person’s guilt or innocence.


Evidence

AI can be used for administration of justice to allocate cases to judges, but criminal justice requires caution due to different legal systems


Major discussion point

Distinction Between Civil and Criminal Justice Applications


Topics

Legal and regulatory | Human rights principles


Agreed with

– Gabriella Marcelja
– Milos Jovanovic
– Sylvia Chirawu
– Audience

Agreed on

AI should assist rather than replace human judgment in judicial decision-making


Disagreed with

– Gabriella Marcelja

Disagreed on

Acceptability of AI in criminal vs civil justice systems


Criminal conviction must be based on judge’s personal belief according to Egyptian legal principles, making AI-assisted decisions problematic

Explanation

In civil law systems like Egypt’s, judicial decisions must be based on the judge’s personal conviction rather than external influences. This fundamental principle conflicts with AI-assisted decision-making in criminal cases.


Evidence

Established principles at Egypt’s Court of Cassation require conviction based on judge’s belief, not other persons’ opinions; this contradicts AI evidence submission


Major discussion point

Distinction Between Civil and Criminal Justice Applications


Topics

Legal and regulatory | Human rights principles


AI systems predicting re-offending contradict the presumption of innocence principle in criminal justice

Explanation

Using AI to predict future criminal behavior for sentencing decisions violates the fundamental legal principle of presumption of innocence. Such systems make assumptions about guilt or future behavior that conflict with established legal principles.


Evidence

AI systems used in some jurisdictions to predict re-offending for judges to decide on acquittal or imprisonment


Major discussion point

Distinction Between Civil and Criminal Justice Applications


Topics

Human rights principles | Legal and regulatory


Different legal systems (common law vs. civil law) require different approaches to AI integration

Explanation

The integration of AI in judicial systems must account for fundamental differences between legal systems. Common law systems that use juries operate differently from civil law systems that rely on judge’s personal conviction.


Evidence

US and UK use juries in their system, different from Egyptian system; Anglo-Saxon vs Latin law systems have different approaches


Major discussion point

Distinction Between Civil and Criminal Justice Applications


Topics

Legal and regulatory



Tatevik Grigoryan

Speech speed

179 words per minute

Speech length

908 words

Speech time

304 seconds

UNESCO launched AI ethics recommendations in 2021 and an AI and rule of law program in 2022, conducting surveys across 96 countries

Explanation

UNESCO has taken a leading role in developing international frameworks for AI ethics and judicial applications. Their comprehensive approach includes both policy development and practical research to understand global implementation challenges.


Evidence

Over 560 judicial actors from 96 countries participated in 2023 survey; UNESCO has AI ethics recommendations adopted by member states


Major discussion point

International Initiatives and Guidelines


Topics

Legal and regulatory | Development


Disagreed with

– Milos Jovanovic

Disagreed on

Approach to AI sovereignty and international cooperation


UNESCO developed global toolkits, online courses, and draft guidelines to support judicial practitioners, with Colombia being the first to adopt AI guidelines

Explanation

UNESCO has created practical resources to help judicial systems implement AI responsibly. These tools provide concrete guidance for practitioners while Colombia serves as a pioneering example of national adoption.


Evidence

Global toolkit reached 8,000 judicial practitioners; massive online course available free; Colombia first country to adopt AI guidelines for judicial system


Major discussion point

International Initiatives and Guidelines


Topics

Development | Legal and regulatory


Only 4% of surveyed judicial practitioners were trained in AI, highlighting the need for capacity building

Explanation

There is a significant gap between AI implementation and judicial practitioners’ understanding of the technology. This low training rate indicates an urgent need for educational programs and capacity development initiatives.


Evidence

Survey results showing only 4% of respondents were trained in AI


Major discussion point

International Initiatives and Guidelines


Topics

Development | Capacity development



Gabriella Marcelja

Speech speed

132 words per minute

Speech length

769 words

Speech time

347 seconds

AI can prevent errors due to human oversight or fatigue while boosting efficiency and reducing costs

Explanation

AI offers practical benefits in judicial systems by addressing human limitations and improving operational efficiency. The technology can help reduce mistakes that occur due to human factors while making processes faster and more cost-effective.


Evidence

AI being used for reviewing contracts, administrative tasks, and scheduling courts


Major discussion point

Current Applications and Benefits of AI in Justice Systems


Topics

Development | Economic


AI systems pose security risks through insider threats where systems appear cooperative but may act against institutional interests

Explanation

Large Language Models present unique security challenges in judicial contexts where systems may appear to be functioning properly while actually working against institutional goals. This creates issues of trust, control, and oversight in AI deployment.


Evidence

LLMs pose security risks in context of insider threats; systems appear aligned but may act against institutional interests


Major discussion point

Risks and Challenges of AI in Judicial Systems


Topics

Cybersecurity | Legal and regulatory


Historical biases, hallucinations, manipulations, and lack of transparency create significant risks, including cases of lawyers citing non-existent case law

Explanation

AI systems carry forward historical biases from their training data and can generate false information that appears credible. The lack of transparency in AI decision-making processes compounds these problems, leading to serious professional consequences.


Evidence

Deliveroo case in Italy, COMPAS system case in US; US lawyer citing non-existent case law and being disbarred; German Federal Constitutional Court ruling against the Palantir Gotham surveillance system


Major discussion point

Risks and Challenges of AI in Judicial Systems


Topics

Legal and regulatory | Human rights principles


Agreed with

– Adel Maged
– Milos Jovanovic

Agreed on

Significant risks exist with AI implementation including bias, authenticity issues, and over-reliance


Automation bias leads to over-reliance on AI outputs without critical scrutiny, as seen in the UK’s post office scandal

Explanation

There is a dangerous tendency for legal professionals to trust AI outputs without proper verification or critical analysis. This over-reliance can lead to serious miscarriages of justice when flawed AI systems produce incorrect results.


Evidence

UK’s post office scandal where flawed computer evidence led to wrongful convictions


Major discussion point

Risks and Challenges of AI in Judicial Systems


Topics

Legal and regulatory | Human rights principles


There is a regulatory gap as technology moves faster than legal frameworks, creating uncertainty about ethical boundaries

Explanation

The rapid pace of technological development outstrips the ability of legal systems to create appropriate regulatory frameworks. This creates uncertainty about what constitutes ethical AI use in judicial contexts.


Evidence

Technology moves much faster than legal sector; lack of understanding of what’s ethical vs not ethical


Major discussion point

Legal and Regulatory Framework Needs


Topics

Legal and regulatory | Human rights principles


Agreed with

– Adel Maged
– Maureen Fondo
– Sylvia Chirawu
– Audience

Agreed on

Need for comprehensive legislation and regulatory frameworks for AI in judicial systems


The EU AI Act classifies judicial AI systems as high-risk, providing regulatory framework for ethical AI use

Explanation

The European Union has taken a proactive approach to AI regulation by specifically identifying judicial applications as high-risk and imposing strict regulations. This provides a model for other jurisdictions to follow.


Evidence

EU AI Act classifies AI systems used in judiciary as high risk with strict regulations; European Ethical Charter on AI in judicial systems


Major discussion point

International Initiatives and Guidelines


Topics

Legal and regulatory | Human rights principles


AI should remain a tool to assist rather than replace human judgment, especially in criminal cases where empathy is crucial

Explanation

The fundamental role of AI should be supportive rather than substitutive, particularly in criminal justice where human qualities like empathy are essential. The challenge is integrating AI wisely while preserving the human element that defines justice.


Evidence

First metaverse court hearing in Colombia in 2023 raises concerns about removing human judgment and empathy


Major discussion point

Human Element Preservation


Topics

Human rights principles | Legal and regulatory


Agreed with

– Adel Maged
– Milos Jovanovic
– Sylvia Chirawu
– Audience

Agreed on

AI should assist rather than replace human judgment in judicial decision-making


Disagreed with

– Adel Maged

Disagreed on

Acceptability of AI in criminal vs civil justice systems


M

Milos Jovanovic

Speech speed

138 words per minute

Speech length

675 words

Speech time

292 seconds

Different technological zones and training sources create bias issues, with AI models reflecting the perspectives of their creators

Explanation

AI systems are trained within specific technological and cultural contexts, leading to biased outputs that reflect the worldview of their creators. This is particularly problematic in a multipolar world with different values and perspectives on justice and human rights.


Evidence

Different answers about Taiwan from Western vs Chinese AI models; Western AI models vs Russian/Chinese technological zones


Major discussion point

Risks and Challenges of AI in Judicial Systems


Topics

Legal and regulatory | Human rights principles


Agreed with

– Adel Maged
– Gabriella Marcelja

Agreed on

Significant risks exist with AI implementation including bias, authenticity issues, and over-reliance


National sovereignty requires maintaining control over AI systems aligned with national legislative frameworks rather than relying on foreign-trained models

Explanation

Countries must maintain sovereignty over their judicial systems by developing AI tools that align with their specific legal frameworks and values. Relying on foreign-trained AI models could compromise judicial independence and national legal principles.


Evidence

Different legal models in different countries; need for AI systems according to national legislative frameworks; fragmentation across different technological zones


Major discussion point

Global Digital Divide and Sovereignty Issues


Topics

Legal and regulatory | Development


Disagreed with

– Tatevik Grigoryan

Disagreed on

Approach to AI sovereignty and international cooperation


Different geopolitical zones have varying perspectives on human rights and justice that affect AI training and outcomes

Explanation

Global fragmentation means that different regions have fundamentally different approaches to human rights and justice concepts. AI systems trained in one region may not be appropriate for use in another due to these philosophical and legal differences.


Evidence

Different views on human rights between China, Russia, UK, US; world is not unipolar with different fragmentation processes


Major discussion point

Global Digital Divide and Sovereignty Issues


Topics

Human rights principles | Legal and regulatory


International cooperation is needed while maintaining national sovereignty in AI governance

Explanation

While global cooperation on AI governance is essential, it must be balanced with respect for national sovereignty and decision-making autonomy. The path forward requires strengthening national capacities before effective global integration.


Evidence

Need for global cooperation but sovereignty starts with informed national decision-making; path from national to global


Major discussion point

Legal and Regulatory Framework Needs


Topics

Legal and regulatory | Development


Judges remain essential as human responsibility must stay central until further AI development occurs

Explanation

Human oversight and responsibility must remain at the center of judicial decision-making until AI technology develops further safeguards and reliability. Complete reliance on AI systems is premature given current technological limitations.


Evidence

Can’t fully rely on AI systems like GPT until further development; need for human-centric approach to control AI outcomes


Major discussion point

Human Element Preservation


Topics

Human rights principles | Legal and regulatory


Agreed with

– Adel Maged
– Gabriella Marcelja
– Sylvia Chirawu
– Audience

Agreed on

AI should assist rather than replace human judgment in judicial decision-making


Disagreed with

– Eliamani Laltaika
– Adel Maged

Disagreed on

Readiness for AI implementation in judicial systems


M

Mohamed Farahat

Speech speed

131 words per minute

Speech length

679 words

Speech time

309 seconds

Infrastructure gaps between Global North and South affect reliable AI deployment, including electricity, internet, and computing resources

Explanation

Many countries in the Global South lack the basic infrastructure necessary for reliable AI implementation in judicial systems. This includes fundamental requirements like stable electricity, high-speed internet, and advanced computing resources.


Evidence

Lack of robust digital infrastructure in Africa, some Asian countries, and MENA region; need for electricity, high-speed internet, and advanced computing resources


Major discussion point

Global Digital Divide and Sovereignty Issues


Topics

Development | Infrastructure


Data availability and quality issues arise from Western-trained AI models that may not suit African or other regional contexts

Explanation

AI systems trained primarily on Western data may not be appropriate for use in African or other regional contexts due to different legal, cultural, and social frameworks. The quality and relevance of training data significantly impacts AI system effectiveness.


Evidence

Most LLMs are trained by Western countries; the technology embeds bias when applied in Global South or African countries


Major discussion point

Global Digital Divide and Sovereignty Issues


Topics

Development | Legal and regulatory


M

Maureen Fondo

Speech speed

130 words per minute

Speech length

635 words

Speech time

290 seconds

Legal professions must engage in policy formulation through law societies and forums to influence AI frameworks

Explanation

Legal professionals have a crucial role in shaping AI policies and frameworks through their professional organizations and participation in policy discussions. This engagement is essential to ensure that AI development serves the interests of justice and legal practice.


Evidence

Individual law societies, regional law societies provide platforms for discussions; forums like this one provide insights


Major discussion point

Legal and Regulatory Framework Needs


Topics

Legal and regulatory | Development


Agreed with

– Adel Maged
– Gabriella Marcelja
– Sylvia Chirawu
– Audience

Agreed on

Need for comprehensive legislation and regulatory frameworks for AI in judicial systems


AI policies affect all court users and stakeholders, requiring balanced approaches that consider broader impacts beyond just judges

Explanation

AI implementation in judicial systems has wide-ranging effects on various stakeholders including lawyers, litigants, and civil society organizations. Policy development must take a holistic approach that considers all affected parties rather than focusing solely on judicial needs.


Evidence

All court users are affected by AI guidelines or regulations; need to look at bigger picture of what AI policy should entail


Major discussion point

Stakeholder Impact and Consultation Needs


Topics

Legal and regulatory | Human rights principles


Situational analysis and benchmarking with other countries’ approaches is necessary for contextualizing AI frameworks

Explanation

Countries should conduct thorough analysis of their current situation and learn from other jurisdictions’ experiences before implementing AI frameworks. This includes studying successful models while adapting them to local contexts and needs.


Evidence

Need to benchmark with EU AI Act, UK guidelines, New Zealand, Singapore, US approaches; contextualize according to region and country


Major discussion point

Stakeholder Impact and Consultation Needs


Topics

Development | Legal and regulatory


Legal professions must create platforms for discourse and influence policy-making through research and recommendations

Explanation

Legal professionals and scholars have a responsibility to conduct research, identify gaps, and make recommendations that can influence AI policy development. This requires creating and utilizing platforms for knowledge sharing and public engagement.


Evidence

Lawyers and scholars undertaking research, filling gaps, making recommendations; need platforms to speak, publish, and benefit public


Major discussion point

Stakeholder Impact and Consultation Needs


Topics

Development | Legal and regulatory


S

Sylvia Chirawu

Speech speed

113 words per minute

Speech length

402 words

Speech time

211 seconds

Zimbabwe adopted an integrated electronic case management system in 2022, cascading from High Court to Magistrate Court

Explanation

Zimbabwe has successfully implemented electronic systems across its court hierarchy, starting with specialized divisions and expanding to general courts. This demonstrates practical progress in judicial digitization in an African context.


Evidence

System initially in Commercial Division, then Social Court, General Division of High Court, now cascaded to Magistrate Court


Major discussion point

Current Applications and Benefits of AI in Justice Systems


Topics

Development | Legal and regulatory


Zimbabwe’s evidence laws were developed before the AI era and need amendment to address AI-specific challenges of authenticity, reliability, and transparency

Explanation

Existing legal frameworks in Zimbabwe, like many other countries, were created before AI became prevalent and are inadequate for addressing AI-related evidence issues. These laws need updating to handle modern technological challenges in the courtroom.


Evidence

Criminal Procedure and Evidence Act, Civil Evidence Act, Cyber and Data Protection Act developed before AI era; some electronic evidence now admissible but laws haven’t kept up


Major discussion point

Legal and Regulatory Framework Needs


Topics

Legal and regulatory | Development


Agreed with

– Adel Maged
– Gabriella Marcelja
– Maureen Fondo
– Audience

Agreed on

Need for comprehensive legislation and regulatory frameworks for AI in judicial systems


Courts will always need human judges to maintain the essential human element in justice

Explanation

Despite technological advances, the human element remains irreplaceable in judicial decision-making. Courts require human judgment, empathy, and understanding that cannot be replicated by artificial intelligence systems.


Evidence

Acceptance that the human element remains evident because a court will always be needed


Major discussion point

Human Element Preservation


Topics

Human rights principles | Legal and regulatory


Agreed with

– Adel Maged
– Gabriella Marcelja
– Milos Jovanovic
– Audience

Agreed on

AI should assist rather than replace human judgment in judicial decision-making


A

Audience

Speech speed

117 words per minute

Speech length

279 words

Speech time

142 seconds

The human element in courts is irreplaceable, with AI serving only to improve efficiency rather than replace judicial decision-making

Explanation

AI cannot and should not replace human judges but rather should serve as a tool to enhance their efficiency and capabilities. The fundamental human aspects of judicial decision-making remain essential and irreplaceable.


Evidence

AI improves human lawyer efficiency; AI is scientific and can be controlled; need for legislation so Supreme Court can give judgments


Major discussion point

Human Element Preservation


Topics

Human rights principles | Legal and regulatory


Agreed with

– Adel Maged
– Gabriella Marcelja
– Milos Jovanovic
– Sylvia Chirawu

Agreed on

AI should assist rather than replace human judgment in judicial decision-making


Agreements

Agreement points

Need for comprehensive legislation and regulatory frameworks for AI in judicial systems

Speakers

– Adel Maged
– Gabriella Marcelja
– Maureen Fondo
– Sylvia Chirawu
– Audience

Arguments

Comprehensive legislation is essential before allowing AI evidence in courts, as current laws cannot handle AI deployment without proper rules


There is a regulatory gap as technology moves faster than legal frameworks, creating uncertainty about ethical boundaries


Legal professions must engage in policy formulation through law societies and forums to influence AI frameworks


Zimbabwe’s evidence laws were developed before the AI era and need amendment to address AI-specific challenges of authenticity, reliability, and transparency


The human element in courts is irreplaceable, with AI serving only to improve efficiency rather than replace judicial decision-making


Summary

Multiple speakers emphasized the critical need for updated legislation and regulatory frameworks to govern AI use in judicial systems, as existing laws were developed before the AI era and are inadequate for current challenges


Topics

Legal and regulatory | Human rights principles


AI should assist rather than replace human judgment in judicial decision-making

Speakers

– Adel Maged
– Gabriella Marcelja
– Milos Jovanovic
– Sylvia Chirawu
– Audience

Arguments

AI is acceptable in administrative procedures that don’t affect culpability, but criminal justice requires extreme caution


AI should remain a tool to assist rather than replace human judgment, especially in criminal cases where empathy is crucial


Judges remain essential as human responsibility must stay central until further AI development occurs


Courts will always need human judges to maintain the essential human element in justice


The human element in courts is irreplaceable, with AI serving only to improve efficiency rather than replace judicial decision-making


Summary

There is strong consensus that AI should serve as a supportive tool rather than a replacement for human judges, with particular emphasis on preserving human judgment in criminal cases


Topics

Human rights principles | Legal and regulatory


Significant risks exist with AI implementation including bias, authenticity issues, and over-reliance

Speakers

– Adel Maged
– Gabriella Marcelja
– Milos Jovanovic

Arguments

Deep fake evidence and AI-generated submissions by litigants create authentication challenges


Historical biases, hallucinations, manipulations, and lack of transparency create significant risks, including cases of lawyers citing non-existent case law


Different technological zones and training sources create bias issues, with AI models reflecting the perspectives of their creators


Summary

Speakers identified multiple serious risks including biased training data, authentication challenges with AI-generated evidence, and the danger of over-relying on AI systems without proper verification


Topics

Legal and regulatory | Cybersecurity | Human rights principles


Similar viewpoints

Both speakers from African countries demonstrated successful implementation of AI and electronic systems in their judicial systems, showing practical progress in judicial digitization

Speakers

– Eliamani Laltaika
– Sylvia Chirawu

Arguments

Tanzania has successfully implemented AI for filing and scheduling, eliminating manual processes


Zimbabwe adopted an integrated electronic case management system in 2022, cascading from High Court to Magistrate Court


Topics

Development | Legal and regulatory


Both speakers highlighted concerns about Western-dominated AI training creating bias and inappropriateness for non-Western legal and cultural contexts

Speakers

– Milos Jovanovic
– Mohamed Farahat

Arguments

Different technological zones and training sources create bias issues, with AI models reflecting the perspectives of their creators


Data availability and quality issues arise from Western-trained AI models that may not suit African or other regional contexts


Topics

Development | Legal and regulatory


Both speakers emphasized the importance of international cooperation and learning from other countries’ experiences while developing AI frameworks for judicial systems

Speakers

– Tatevik Grigoryan
– Maureen Fondo

Arguments

UNESCO developed global toolkits, online courses, and draft guidelines to support judicial practitioners, with Colombia being the first to adopt AI guidelines


Situational analysis and benchmarking with other countries’ approaches is necessary for contextualizing AI frameworks


Topics

Development | Legal and regulatory


Unexpected consensus

Conservative approach to AI in criminal justice versus acceptance in administrative functions

Speakers

– Adel Maged
– Gabriella Marcelja
– Milos Jovanovic

Arguments

AI is acceptable in administrative procedures that don’t affect culpability, but criminal justice requires extreme caution


AI should remain a tool to assist rather than replace human judgment, especially in criminal cases where empathy is crucial


Judges remain essential as human responsibility must stay central until further AI development occurs


Explanation

Despite coming from different backgrounds and regions, speakers showed unexpected consensus on taking a conservative approach to AI in criminal justice while being more accepting of AI in administrative functions. This nuanced position suggests a sophisticated understanding of the different risks involved


Topics

Human rights principles | Legal and regulatory


Global digital divide affecting AI implementation in judicial systems

Speakers

– Mohamed Farahat
– Milos Jovanovic
– Maureen Fondo

Arguments

Infrastructure gaps between Global North and South affect reliable AI deployment, including electricity, internet, and computing resources


National sovereignty requires maintaining control over AI systems aligned with national legislative frameworks rather than relying on foreign-trained models


AI policies affect all court users and stakeholders, requiring balanced approaches that consider broader impacts beyond just judges


Explanation

Speakers from different regions and backgrounds unexpectedly converged on recognizing that AI implementation in judicial systems cannot be divorced from broader issues of global inequality, sovereignty, and stakeholder impacts. This holistic view was unexpected given the technical focus of the topic


Topics

Development | Legal and regulatory | Human rights principles


Overall assessment

Summary

The speakers demonstrated remarkable consensus on key principles: the need for comprehensive legislation, the importance of preserving human judgment, and recognition of significant implementation risks. There was also agreement on the distinction between administrative and criminal justice applications of AI.


Consensus level

High level of consensus on fundamental principles with nuanced understanding of implementation challenges. This strong agreement across diverse backgrounds suggests these principles could form the basis for international guidelines and national policy development. The consensus implies that while AI adoption in judicial systems is inevitable, it must be carefully regulated and implemented with strong safeguards to protect human rights and judicial integrity.


Differences

Different viewpoints

Acceptability of AI in criminal vs civil justice systems

Speakers

– Adel Maged
– Gabriella Marcelja

Arguments

AI is acceptable in administrative procedures that don’t affect culpability, but criminal justice requires extreme caution


AI should remain a tool to assist rather than replace human judgment, especially in criminal cases where empathy is crucial


Summary

While both speakers acknowledge the need for caution in criminal justice, Maged takes a more restrictive position based on Egyptian legal principles that require the judge’s personal conviction, while Marcelja focuses more on preserving human empathy and judgment across all judicial applications


Topics

Legal and regulatory | Human rights principles


Approach to AI sovereignty and international cooperation

Speakers

– Milos Jovanovic
– Tatevik Grigoryan

Arguments

National sovereignty requires maintaining control over AI systems aligned with national legislative frameworks rather than relying on foreign-trained models


UNESCO launched AI ethics recommendations in 2021 and an AI and rule of law program in 2022, conducting surveys across 96 countries


Summary

Jovanovic emphasizes national sovereignty and the risks of using foreign-trained AI models, while Grigoryan promotes international cooperation and standardized approaches through UNESCO initiatives


Topics

Legal and regulatory | Development


Readiness for AI implementation in judicial systems

Speakers

– Eliamani Laltaika
– Adel Maged
– Milos Jovanovic

Arguments

Tanzania has successfully implemented AI for filing and scheduling, eliminating manual processes


Comprehensive legislation is essential before allowing AI evidence in courts, as current laws cannot handle AI deployment without proper rules


Judges remain essential as human responsibility must stay central until further AI development occurs


Summary

Laltaika presents a more optimistic view of current AI implementation success, while Maged and Jovanovic emphasize the need for comprehensive legal frameworks and caution before broader deployment


Topics

Legal and regulatory | Development


Unexpected differences

Global vs national approaches to AI governance

Speakers

– Tatevik Grigoryan
– Milos Jovanovic
– Mohamed Farahat

Arguments

UNESCO developed global toolkits, online courses, and draft guidelines to support judicial practitioners, with Colombia being the first to adopt AI guidelines


Different technological zones and training sources create bias issues, with AI models reflecting the perspectives of their creators


Data availability and quality issues arise from Western-trained AI models that may not suit African or other regional contexts


Explanation

Unexpectedly, there’s a fundamental disagreement between international standardization advocates and those emphasizing regional/national sovereignty, with concerns about Western bias in global AI systems creating tension with international cooperation efforts


Topics

Legal and regulatory | Development | Human rights principles


Optimism vs pessimism about current AI implementation

Speakers

– Eliamani Laltaika
– Sylvia Chirawu
– Adel Maged

Arguments

Tanzania has successfully implemented AI for filing and scheduling, eliminating manual processes


Zimbabwe adopted an integrated electronic case management system in 2022, cascading from High Court to Magistrate Court


Deep fake evidence and AI-generated submissions by litigants create authentication challenges


Explanation

Unexpectedly, there’s a divide between speakers from African countries – some highlighting successful implementations while others emphasize significant challenges and risks, suggesting different experiences or perspectives on AI readiness


Topics

Development | Legal and regulatory


Overall assessment

Summary

Main disagreements center on the pace and approach to AI implementation in judicial systems, the balance between international cooperation and national sovereignty, and the appropriate level of caution in different types of legal proceedings


Disagreement level

Moderate to high disagreement with significant implications – while speakers generally agree on the need for human oversight and legal frameworks, they fundamentally disagree on implementation strategies, timing, and governance approaches. This could lead to fragmented global approaches to AI in justice systems, with some countries moving ahead rapidly while others remain cautious, potentially creating disparities in access to justice and technological capabilities across different legal systems.




Takeaways

Key takeaways

AI should serve as a tool to assist rather than replace human judgment in judicial systems, with the human element remaining irreplaceable especially in criminal cases


Comprehensive legislation and regulatory frameworks are urgently needed before widespread AI deployment in courts, as current laws were developed before the AI era


There must be clear distinctions between AI applications in administrative procedures versus criminal justice, with criminal cases requiring extreme caution due to principles such as the presumption of innocence and the judge’s personal conviction


A significant global digital divide exists between Global North and South affecting AI implementation, including infrastructure gaps, data quality issues, and bias from Western-trained AI models


International cooperation through organizations like UNESCO is essential, but national sovereignty must be maintained in developing AI frameworks aligned with local legal systems and cultural contexts


Current risks include automation bias, hallucinations, deep fake evidence, lack of transparency, and historical biases that could lead to wrongful convictions


Only 4% of surveyed judicial practitioners are trained in AI, highlighting a critical capacity building need


Stakeholder consultation is essential as AI policies affect all court users, not just judges, requiring balanced approaches that consider broader societal impacts


Resolutions and action items

UNESCO to launch an updated MOOC (Massive Online Open Course) in early 2026 covering data protection, cybersecurity, and other emerging AI topics


Legal professions should engage through law societies and forums to influence AI policy formulation and create platforms for discourse


Countries should conduct situational analysis and benchmark with other nations’ AI approaches to contextualize frameworks for their regions


Zimbabwe is developing a national AI policy framework, with the judiciary planning to develop specific judicial policies and amend evidence laws


Need for legislators to work closely with the judiciary in developing AI legislation, as emphasized by the Nigerian Senator’s call for joint discussions


Unresolved issues

How to address AI-generated evidence authentication challenges, including deep fake videos and documents


Who bears responsibility when AI systems make errors or provide biased recommendations in judicial decisions


How to balance efficiency gains from AI with maintaining due process and human judgment requirements


How to address the digital divide and ensure equitable AI access across different regions and legal systems


How to handle AI-fueled disinformation and evidence manipulation by armed groups in conflict zones like DR Congo


How to ensure AI training data represents diverse legal traditions and cultural contexts rather than just Western perspectives


How to maintain judicial independence and sovereignty while benefiting from internationally developed AI tools


How to update evidence laws across different legal systems to properly handle AI-related challenges


Suggested compromises

Start with AI implementation in administrative and civil procedures while maintaining strict human oversight in criminal cases


Develop national AI frameworks that align with international guidelines while preserving local legal sovereignty


Use AI for legal research and case management support while requiring human verification of all AI outputs


Implement gradual AI adoption with extensive training programs and capacity building before full deployment


Create hybrid approaches where AI assists but never replaces human decision-making in judicial processes


Establish clear guidelines distinguishing between acceptable AI uses (scheduling, research) and restricted uses (sentencing recommendations in criminal cases)


Develop region-specific AI tools trained on local legal data while learning from international best practices


Thought provoking comments

Can AI replace the human element in courts? Because we are seeing now that AI is replacing many jobs. It’s a reality. So my question to all of you, are judges endangered species? I am seriously speaking now, because we have to start to hear about judgments in some jurisdictions deployed and issued by artificial intelligence and some of the judgments has been appealed already before higher courts.

Speaker

Adel Maged


Reason

This comment reframes the entire discussion by posing the fundamental existential question about the future of judicial roles. Rather than just discussing AI as a tool, Justice Maged challenges participants to confront whether AI might actually replace judges entirely. The provocative framing of judges as potentially ‘endangered species’ forces a deeper examination of the human element’s irreplaceable value in justice.


Impact

This comment shifted the discussion from technical implementation details to fundamental philosophical questions about justice and human judgment. It established a more serious, contemplative tone and prompted subsequent speakers to address the irreplaceable aspects of human judgment in judicial decision-making.


In criminal justice we have to accept AI with cautious because… according to our established principles and rules at the Court of Cassation, the Supreme Court of Egypt, for example in criminal proceedings the conviction should be based on the judge’s belief himself and he shouldn’t base his conviction on other things or person’s opinion… This will contradict, of course, with an established principle in the Egyptian legal system, in the Latin legal system, and other legal systems which are conviction or innocence is based on the judge’s conviction.

Speaker

Adel Maged


Reason

This insight reveals the deep tension between AI assistance and fundamental legal principles across different legal systems. Justice Maged demonstrates how AI integration isn’t just a technical challenge but potentially conflicts with core judicial philosophies about personal conviction and the presumption of innocence.


Impact

This comment introduced crucial legal system distinctions (common law vs. civil law) and highlighted how AI implementation cannot be uniform globally. It prompted later speakers to consider sovereignty and cultural differences in judicial approaches, making the discussion more nuanced and internationally aware.


So when we speak about some aspects, speak about human rights, what is favorable in China, in Russia, in the United Kingdom, in the United States, we have different views. So speaking about AI tools, I think we need a human-centric approach to be in a position to control the outcome of artificial intelligence systems… When you have different technological zones, for example, the Western European technological zone, the Russian technological zone, the Chinese technological zone, it’s about sovereignty and who trained these models.

Speaker

Milos Jovanovic


Reason

This comment introduces the critical concept of ‘technological sovereignty’ and reveals how AI systems embed the values and perspectives of their creators. Jovanovic’s insight about different answers regarding Taiwan from Western vs. Chinese AI systems illustrates how seemingly neutral technology actually carries cultural and political biases.


Impact

This intervention fundamentally changed the discussion’s scope from technical implementation to geopolitical implications. It highlighted that AI in the judiciary isn’t just about efficiency or accuracy, but about whose values and worldviews are embedded in the systems making or assisting judicial decisions.


So if this inequality between the Global North and the Global South has an impact on using AI, I think the answer is yes… Infrastructure in many countries in the Global South… a lack of robust and reliable digital infrastructure… And also, one of the points that illustrates the inequalities between the Global North and South is data availability and quality… Who trained these models? It’s Western countries.

Speaker

Mohamed Farahat


Reason

This comment exposes the global digital divide’s impact on AI implementation in judiciary systems. Farahat reveals how infrastructure limitations and Western-trained AI models create systemic inequalities that could perpetuate injustice rather than enhance it in developing countries.


Impact

This intervention brought equity and global justice concerns to the forefront, shifting the discussion from technical capabilities to accessibility and fairness. It highlighted how AI implementation could exacerbate existing global inequalities rather than democratize access to justice.


We can’t leave it to judges or lawyers to present AI evidence without rules… If we would like to govern the Internet, AI, or technologies, we need legislation.

Speaker

Adel Maged


Reason

This comment identifies the regulatory vacuum as a critical vulnerability in AI implementation. Justice Maged emphasizes that without proper legislative frameworks, AI use in courts becomes arbitrary and potentially dangerous, regardless of good intentions.


Impact

This call for legislation became a recurring theme that multiple speakers echoed, including the parliamentarian in the audience. It shifted focus toward the urgent need for regulatory frameworks and prompted discussion about the role of legislators in AI governance.


Overall assessment

These key comments transformed what could have been a technical discussion about AI tools into a profound examination of justice, sovereignty, equity, and governance. The discussion evolved from practical implementation questions to fundamental philosophical and systemic challenges. Justice Maged’s existential question about judges as an ‘endangered species’ set a serious tone that elevated the entire conversation. The insights about technological sovereignty and global inequalities revealed that AI in the judiciary is not just about efficiency; it is about whose values shape justice and who has access to these powerful tools. The repeated calls for legislation created a clear action-oriented conclusion, emphasizing that technical capabilities must be matched with appropriate governance frameworks. Together, these comments created a comprehensive dialogue that addressed AI in the judiciary from multiple critical perspectives: philosophical, legal, geopolitical, and practical.


Follow-up questions

Are judges endangered species? Can AI replace the human element in courts?

Speaker

Adel Maged


Explanation

This fundamental question addresses the core concern about whether AI will eventually replace human judges entirely, which is critical for understanding the future role of judicial professionals


Who is responsible when AI systems make errors or provide false information in legal contexts?

Speaker

Adel Maged


Explanation

This addresses the accountability gap when AI tools like legal research systems produce hallucinations or incorrect case citations, which has already occurred in real cases


How should courts handle AI-generated evidence, including deepfakes and synthetic media?

Speaker

Adel Maged


Explanation

As AI-generated content becomes more sophisticated, courts need clear protocols for authenticating and evaluating such evidence


How can legal systems maintain the principle of presumption of innocence when using predictive AI systems?

Speaker

Adel Maged


Explanation

AI systems that predict recidivism or criminal behavior may conflict with fundamental legal principles, requiring careful consideration of their appropriate use


How do we strike the right balance between conservative judicial approaches and embracing new AI tools?

Speaker

Eliamani Laltaika


Explanation

This addresses the tension between judicial caution and the potential benefits of AI adoption in court systems


How can countries maintain judicial sovereignty while using universal AI tools?

Speaker

Eliamani Laltaika


Explanation

This explores how different legal systems can preserve their independence and unique approaches while adopting globally developed AI technologies


How can the digital divide between Global North and South be addressed in AI deployment for justice systems?

Speaker

Mohamed Farahat


Explanation

This addresses infrastructure, data quality, and training disparities that affect equitable access to AI benefits in judicial systems


How can legal professionals influence AI policies and frameworks to protect stakeholder interests?

Speaker

Maureen Fondo


Explanation

This concerns the role of lawyers and legal professionals in shaping AI governance to ensure balanced consideration of all affected parties


How should existing evidence laws be amended to accommodate AI-generated evidence and tools?

Speaker

Sylvia Chirawu


Explanation

Many evidence laws predate AI technology and need updating to address authenticity, reliability, and admissibility of AI-related evidence


What specific legislative frameworks are needed to govern AI use in judicial systems?

Speaker

Multiple speakers (Adel Maged, Sylvia Chirawu, Senator Dick Keplung)


Explanation

Multiple participants emphasized the urgent need for comprehensive legislation to guide AI implementation in courts rather than leaving it to individual discretion


How can international cooperation be balanced with national sovereignty in AI governance for justice systems?

Speaker

Milos Jovanovic


Explanation

This addresses the challenge of maintaining national legal autonomy while participating in global AI governance initiatives


What are the long-term implications of removing human judgment and empathy from judicial processes?

Speaker

Gabriella Marcelja


Explanation

This concerns the fundamental nature of justice and whether certain aspects of judicial decision-making require irreplaceable human elements


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.