Is the world ready for AI to rule justice?

AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As technology reshapes the way we work, communicate, and make decisions, its potential to transform legal processes is becoming increasingly difficult to ignore. The justice system, however, is one of the most ethically sensitive and morally demanding fields in existence. 

For AI to play a meaningful role there, it must go beyond algorithms and data. It needs to understand the principles of fairness, context, and morality that guide every legal judgement. And perhaps more challengingly, it must do so within a system that has long been deeply traditional and conservative, one that values precedent and human reasoning above all else. Yet, from courts to prosecutors to lawyers, AI promises speed, efficiency, and smarter decision-making, but can it ever truly replace the human touch? 

AI is reshaping the justice system with unprecedented efficiency, but true progress depends on whether humanity is ready to balance innovation with responsibility and ethical judgement.

AI in courts: Smarter administration, not robot judges… yet

Courts across the world are drowning in paperwork, delays, and endless procedural tasks, challenges that are well within AI’s capacity to solve efficiently. From classifying cases and managing documentation to identifying urgent filings and analysing precedents, AI systems are beginning to serve as silent assistants within courtrooms. 
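The classification and triage tasks described above can be illustrated with a deliberately simple sketch. The categories, keywords, and filing text below are invented for illustration; real court systems such as those mentioned here rely on trained models and far richer features.

```python
# Minimal sketch: rule-based triage of incoming court filings.
# Categories and keywords are illustrative assumptions, not any
# court's real taxonomy.
URGENT_KEYWORDS = {"injunction", "detention", "deadline", "emergency"}
CATEGORY_KEYWORDS = {
    "air passenger rights": {"flight", "delay", "compensation", "airline"},
    "tenancy": {"rent", "lease", "eviction"},
}

def triage(filing_text: str) -> tuple[str, bool]:
    """Return a (category, is_urgent) pair for a filing."""
    words = set(filing_text.lower().split())
    urgent = bool(words & URGENT_KEYWORDS)
    best, overlap = "general", 0
    for category, keys in CATEGORY_KEYWORDS.items():
        hits = len(words & keys)
        if hits > overlap:
            best, overlap = category, hits
    return best, urgent

print(triage("Claim for compensation after a cancelled airline flight"))
```

Even this toy version shows the shape of the task: routing a filing to the right queue and flagging urgency are pattern-matching problems, which is why they are the first to be automated.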

The German judiciary, for example, has already shown what this looks like in practice. AI tools such as OLGA and Frauke have helped categorise thousands of cases, extract key facts, and even draft standardised judgments in air passenger rights claims, cutting processing times by more than half. For a system long burdened by backlogs, such efficiency is revolutionary.

Still, the conversation goes far beyond convenience. Justice is not a production line; it is built on fairness, empathy, and the capacity to interpret human intent. Even the most advanced algorithm cannot grasp the nuance of remorse, the context of equality, or the moral complexity behind each ruling. The question is whether societies are ready to trust machine intelligence to participate in moral reasoning.

The final, almost utopian scenario would be a world where AI itself serves as a judge who is unbiased, tireless, and immune to human error or emotion. Yet even as this vision fascinates technologists, legal experts across Europe, including the EU Commission and the OECD, stress that such a future must remain purely theoretical. Human judges, they argue, must always stay at the heart of justice: AI may assist in the process, but it must never be the one to decide it. The idea is not to replace judges but to help them navigate the overwhelming sea of information that modern justice generates.

Courts may soon become smarter, but true justice still depends on something no algorithm can replicate: the human conscience. 


AI for prosecutors: Investigating with superhuman efficiency

Prosecutors today are also sifting through thousands of documents, recordings, and messages for every major case. AI can act as a powerful investigative partner, highlighting connections, spotting anomalies, and bringing clarity to complex cases that would take humans weeks to unravel. 

Criminal cases in particular can involve terabytes of documents and evidence that must be reviewed thoroughly, yet can hardly be processed by humans within tight legal deadlines or between hearings. AI tools can sift through this mass of data, flag inconsistencies, detect hidden links between suspects, and reveal patterns that might otherwise remain buried. AI can also pick up subtle details that escape the human eye, making it an invaluable ally in uncovering the full picture of a case. By handling these tasks at superhuman speed, AI could help accelerate the notoriously slow pace of legal proceedings, giving prosecutors more time to focus on strategy and courtroom preparation. 
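The idea of surfacing hidden links can be sketched very simply: count how often pairs of names co-occur in the same document. The names and documents below are invented; real investigative tools use entity recognition and graph analytics rather than plain substring matching.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch: surface links between suspects by counting how often
# pairs of names appear in the same document. All data is invented.
def co_occurrences(documents: list[str], names: list[str]) -> Counter:
    pairs = Counter()
    for doc in documents:
        present = [n for n in names if n.lower() in doc.lower()]
        for a, b in combinations(sorted(present), 2):
            pairs[(a, b)] += 1
    return pairs

docs = [
    "Alice wired funds to an account controlled by Carol.",
    "Carol met Alice and Bob at the warehouse.",
    "Bob denies knowing Carol.",
]
links = co_occurrences(docs, ["Alice", "Bob", "Carol"])
print(links.most_common())
```

Scaled up to millions of documents, the same co-occurrence counts become edges in a link-analysis graph, which is where patterns invisible to a human reviewer start to emerge.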

More advanced systems are already being tested in Europe and the US, capable of generating detailed case summaries and predicting which evidence is most likely to hold up in court. Some experimental tools can even evaluate witness credibility based on linguistic cues and inconsistencies in testimony. In this sense, AI becomes a strategic partner, guiding prosecutors toward stronger, more coherent arguments. 


AI for lawyers: Turning routine into opportunity

AI adoption may deliver its greatest returns in the work of lawyers, where transforming information into insight and strategy is the core of the profession. AI can take over repetitive tasks such as reviewing contracts, drafting documents, and scanning case files, freeing lawyers to focus on the work that AI cannot replace: strategic thinking, creative problem-solving, and personalised client support. 

AI can be incredibly useful for analysing publicly available cases, helping lawyers see how similar situations have been handled, identify potential legal opportunities, and craft stronger, more informed arguments. By recognising patterns across multiple cases, it can suggest creative questions for witnesses and suspects, highlight gaps in the evidence, and even propose potential defence strategies. 
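Finding how similar situations were handled is, at its core, a retrieval problem. A minimal sketch, using bag-of-words cosine similarity over invented case summaries (real legal research tools use semantic embeddings and citation graphs):

```python
import math
from collections import Counter

# Minimal sketch: rank past cases by textual similarity to a new matter.
# Case summaries are invented for illustration.
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_cases(query: str, cases: dict[str, str]) -> list[tuple[str, float]]:
    q = Counter(query.lower().split())
    scored = [(name, cosine(q, Counter(text.lower().split())))
              for name, text in cases.items()]
    return sorted(scored, key=lambda t: -t[1])

past = {
    "Case A": "flight delay compensation claim against airline",
    "Case B": "eviction dispute over unpaid rent",
}
print(rank_cases("airline compensation for a delayed flight", past))
```

The ranking itself is mechanical; deciding what to argue from the retrieved precedents remains the lawyer's job.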

AI also transforms client communication. Chatbots and virtual assistants can manage routine queries, schedule meetings, and provide concise updates, giving lawyers more time to understand clients’ needs and build stronger relationships. By handling the mundane, AI allows lawyers to spend their energy on reasoning, negotiation, and advocacy.


Balancing promise with responsibility

AI is transforming the way courts, prosecutors, and lawyers operate, but its adoption is far from straightforward. While it can make work significantly easier, the technology also carries risks that legal professionals cannot ignore. Historical bias in data can shape AI outputs, potentially reinforcing unfair patterns if humans fail to oversee its use. Similarly, sensitive client information must be protected at all costs, making data privacy a non-negotiable responsibility. 

Training and education are therefore crucial. It is essential to understand not only what AI can do but also its limits: how to interpret suggestions, check for hidden biases, and decide when human judgement must prevail. Without this understanding, AI risks being a tool that misleads rather than empowers. 

The promise of AI lies in its ability to free humans from repetitive work, allowing professionals to focus on higher-value tasks. But its power is conditional: efficiency and insight mean little without the ethical compass of the human professionals guiding it.

Ultimately, the justice system is more than a process. It is about fairness, empathy, and moral reasoning. AI can assist, streamline, and illuminate, but the responsibility for decisions, for justice itself, remains squarely with humans. In the end, the true measure of AI’s success in law will be how it enhances human judgement, not how it replaces it.

So, is the world ready for AI to rule justice? The answer remains clear: not yet. While AI can transform how justice is delivered, the human mind, heart, and ethical responsibility must remain at the centre. AI may guide the way, but it cannot and should not hold the gavel.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Adobe unveils AI Foundry for enterprise model building

Adobe has launched a new enterprise service allowing firms to build custom AI models. The platform, called Adobe AI Foundry, lets companies train generative AI on their branding and intellectual property.

Based on Adobe’s Firefly models, the service can produce text, images, video, and 3D content. Pricing depends on usage, offering greater flexibility than Adobe’s traditional subscription model.

Adobe’s Firefly technology, first introduced in 2023, has already helped clients create over 25 billion assets. Foundry’s tailored models are expected to speed up campaign production while maintaining consistent brand identity across markets.

Hannah Elsakr, Adobe’s vice president for generative AI ventures, said the tools aim to enhance, not replace, human creativity. She emphasised that Adobe’s mission remains centred on supporting artists and marketers in telling powerful stories through technology.

The company believes its ethical approach to AI training and licensing could set a standard for enterprise-grade creative tools. Analysts say it also positions Adobe strongly against rivals offering generic AI solutions.


OpenAI strengthens controls after Bryan Cranston deepfake incident

Bryan Cranston is grateful that OpenAI tightened safeguards on its video platform Sora 2. The Breaking Bad actor raised concerns after users generated videos using his voice and image without permission.

Reports surfaced earlier this month showing Sora 2 users creating deepfakes of Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.

Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.

The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.


ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026 after Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.


Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.


AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and the curiosity it sparks; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.


AI still struggles to mimic natural human conversation

A recent study reveals that large language models such as ChatGPT-4, Claude, Vicuna, and Wayfarer still struggle to replicate natural human conversation. Researchers found AI over-imitates, misuses filler words, and struggles with natural openings and closings, revealing its artificial nature.

The research, led by Eric Mayor with contributions from Lucas Bietti and Adrian Bangerter, compared transcripts of human phone conversations with AI-generated ones. AI can speak correctly, but subtle social cues like timing, phrasing, and discourse markers remain hard to mimic.

Misplaced words such as ‘so’ or ‘well’ and awkward conversation transitions make AI dialogue recognisably non-human. Openings and endings also pose a challenge. Humans naturally engage in small talk or closing phrases such as ‘see you soon’ or ‘alright, then,’ which AI systems often fail to reproduce convincingly.

These gaps in social nuance, researchers argue, prevent large language models from consistently fooling people in conversation tests.
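Differences in discourse-marker use of the kind the researchers describe can be measured with a simple count. The transcripts below are invented examples, not data from the study:

```python
from collections import Counter

# Minimal sketch: compare how often turns open with a discourse marker
# ("so", "well", ...) in two transcripts. Both transcripts are invented.
MARKERS = {"so", "well", "oh", "right"}

def marker_rate(turns: list[str]) -> float:
    """Fraction of turns that open with a discourse marker."""
    openers = Counter(t.lower().split()[0].strip(",.") for t in turns if t.split())
    return sum(openers[m] for m in MARKERS) / len(turns)

human = ["Well, I was going to call you.", "Oh, really?", "See you soon."]
ai = ["So, the weather is pleasant.", "So, as I mentioned.", "So, to conclude."]
print(marker_rate(human), marker_rate(ai))
```

An over-regular pattern, such as opening every turn with ‘so’, is exactly the kind of statistical fingerprint that makes AI dialogue recognisably non-human.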

Despite rapid progress, experts caution that AI may never fully capture all elements of human interaction, such as empathy and social timing. Advances may narrow the gap, but key differences will likely remain, keeping AI speech subtly distinguishable from real human dialogue.


AI is transforming patient care and medical visits

AI is increasingly shaping the patient experience, from digital intake forms to AI-powered ambient scribes in exam rooms. Stanford experts explain that while these tools can streamline processes, patients should remain aware of how their data is collected, stored, and used.

De-identified information may still be shared for research, marketing, or AI training, raising privacy considerations.

AI is also transforming treatment planning. Platforms like Atropos Health allow doctors to query hundreds of millions of records, generating real-world evidence to inform faster and more effective care.

Patients may benefit from data-driven treatment decisions, but human oversight remains essential to ensure accuracy and safety.

Outside the clinic, AI is being integrated into health apps and devices. From mental health support to disease detection, these tools offer convenience and early insights. Experts warn that stronger evaluation and regulation are needed to confirm their reliability and effectiveness.

Patients are encouraged to ask providers about data storage, third-party access, and real-time recording during visits. While AI promises to improve healthcare, realistic expectations are vital, and individuals should actively monitor how their personal health information is used.


IAEA launches initiative to protect AI in nuclear facilities

The International Atomic Energy Agency (IAEA) has launched a new research project to strengthen computer security for AI in the nuclear sector. The initiative aims to support safe adoption of AI technologies in nuclear facilities, including small modular reactors and other applications.

AI and machine learning systems are increasingly used in the nuclear industry to improve operational efficiency and enhance security measures, such as threat detection. These technologies bring risks like data manipulation or misuse, requiring strong cybersecurity and careful oversight.

The Coordinated Research Project (CRP) on Enhancing Computer Security of Artificial Intelligence Applications for Nuclear Technologies will develop methodologies to identify vulnerabilities, implement protection mechanisms, and create AI-enabled security assessment tools.

Training frameworks will also be established to develop human resources capable of managing AI securely in nuclear environments.

Research organisations from all IAEA member states are invited to join the CRP. Proposals must be submitted by 30 November 2025, with participation encouraged for women and young researchers. The IAEA offers further details through its CRP contact page.


Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.

The initiative follows the company’s appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the move as a ‘threshold moment’, signalling Anthropic’s ambition to make Claude a key player in global life science research, much like its role in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.
