AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from their socioeconomic position to systemic failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. At the same time, the rise of AI in legal systems across the globe signals a sweeping digitalisation of how justice is administered.

While it may serve as a tool to break down access barriers, AI could also automate bias within judicial systems, enable unaccountable decision-making, and accelerate a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to reduce legal communication barriers: the tool transcribes, translates, and organises refugee queries. With its help, staff can streamline caseload management, which is key to keeping operations running smoothly even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While these tools are clearly designed to assist rather than replace human legal experts, they are already showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges’ sentencing remarks and explanations of legal language. This tool enhances transparency for victims, especially those seeking emotional closure.

Although many of these programmes are still pilots, a UNESCO survey found that 44% of judicial workers across 96 countries are currently using AI tools, like ChatGPT, for tasks such as drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases, as well as streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis, based on prior case data, may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black-box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment, rather than outright replacing it. Whether AI is biased by its training data or becomes a black box over time, its use demands foresighted governance and meaningful human oversight.

Image via Pixabay / jessica45

Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may also lead to misuse of the technology, such as unverified translations that result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned of this concern in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or digital literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about who can afford access in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.

The policy path forward

For AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors, not replace them. To play that assisting role, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

Legal AI education must focus on improving AI literacy, teaching bias awareness, and informing users of their digital rights. Legal actors must keep pace with the innovation and integration of AI. They are central to policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It can be a tool for the just application of the law and create powerful improvements to inclusion in our legal systems.

However, it also poses the risk of deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability. Regulation is possible through education and discussion predicated on adherence to ethical frameworks. Now is the time to invest in digital literacy and legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.

Image via Pixabay / souandresantana

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!




Pakistan launches national AI innovation competition

Pakistan’s Ministry of Planning, Development, and Special Initiatives has launched a national innovation competition to drive the development of AI solutions in priority sectors. The initiative aims to attract top talent to develop impactful health, education, agriculture, industry, and governance projects.

Minister Ahsan Iqbal said AI is no longer a distant prospect but a present reality that is already transforming economies. He described the competition as a milestone in Pakistan’s digital history and urged the nation to embrace AI’s global momentum.

Iqbal stressed that algorithms now shape decisions more than traditional markets, warning that technological dependence must be avoided. Pakistan, he argued, must actively participate in the AI revolution or risk being left behind by more advanced economies.

He highlighted AI’s potential to predict crop diseases, aid doctors in diagnosis, and deliver quality education to every child nationwide. He said Pakistan will not be a bystander but an emerging leader in shaping the digital future.

The government has begun integrating AI into curricula and expanding capacity-building initiatives. Officials expect the competition to unlock new opportunities for innovation, empowering youth and driving sustainable development across the country.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia promises to bolster digital sovereignty and AI talent on Independence Day

Indonesia marked its 80th Independence Day by reaffirming its commitment to digital sovereignty and technology-driven inclusion.

The Ministry of Communication and Digital Affairs, following President Prabowo Subianto’s ‘Indonesia Incorporated’ directive, highlighted efforts to build an inclusive, secure, and efficient digital ecosystem.

Priorities include deploying 4G networks in remote regions, expanding public internet services, and reinforcing the Palapa Ring broadband infrastructure.

On the talent front, the government launched a Digital Talent Scholarship and AI Talent Factory to nurture AI skills, from beginners to specialists, setting the stage for future AI innovation domestically.

In parallel, digital protection measures have been bolstered: over 1.2 million pieces of harmful content have been blocked, while new regulations under the Personal Data Protection Law, along with age-verification, content-monitoring, and reporting systems, have been introduced to enhance child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India must ramp up AI and chip production to meet global competition

At the Emkay Confluence in Mumbai, Chief Economic Adviser V. Anantha Nageswaran emphasised that while trade-related concerns remain significant, they must not obscure the urgent need for India to boost its AI and semiconductor sectors.

He pointed to AI’s transformative economic potential and strategic importance, warning that India must act decisively to remain competitive as the United States and China advance aggressively in these domains.

By focusing on energy transition, energy security, and enhanced collaboration across sectors, Nageswaran argued that India can strengthen its innovation capacity and technological self-reliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West Midlands to train 2.3 million adults in AI skills

All adults in the West Midlands will be offered free training on using AI in daily life, work and community activities. Mayor Richard Parker confirmed the £10m initiative, designed to reach 2.3 million residents, as part of a wider £30m skills package.

A newly created AI Academy will lead the programme, working with tech companies, education providers and community groups. The aim is to equip people with everyday AI know-how and the advanced skills needed for digital and data-driven jobs.

Parker said AI should become as fundamental as English or maths and warned that failure to prioritise training would risk deepening a skills divide. The programme will sit alongside other £10m projects focused on bespoke business training and a more inclusive skills system.

The WMCA, established in 2017, covers Birmingham, Coventry, Wolverhampton and 14 other local authority areas in the UK. Officials say the AI drive is central to the region’s Growth Plan and ambition to become the UK’s leading hub for AI skills.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Altman warns of harmful AI use after model backlash

OpenAI chief executive Sam Altman has warned that many ChatGPT users are engaging with AI in self-destructive ways. His comments follow backlash over the sudden discontinuation of GPT-4o and other older models, which he admitted was a mistake.

Altman said that users form powerful attachments to specific AI models, and while most can distinguish between reality and fiction, a small minority cannot. He stressed OpenAI’s responsibility to manage the risks for those in mentally fragile states.

Using ChatGPT as a therapist or life coach was not his concern, as many people already benefit from it. Instead, he worried about cases where advice subtly undermines a user’s long-term well-being.

The model removals triggered a huge social-media outcry, with complaints that newer versions offered shorter, less emotionally rich responses. OpenAI has since restored GPT-4o for Plus subscribers, while free users will only have access to GPT-5.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Instagram Map lets users share location with consent

Instagram has introduced an opt-in feature called Instagram Map, allowing users in the US to share their recent active location and explore location-based content.

Adam Mosseri, head of Instagram, clarified that location sharing is off by default and visible only when users choose to share.

Confusion arose as some users mistakenly believed their location was automatically shared because they could see themselves on the map upon opening the app.

The feature also displays location tags from Stories or Reels, making location-based content easier to find.

Unlike Snap Map, Instagram Map updates location only when the app is open or running in the background, without providing continuous real-time tracking.

Users can access the Map by going to their direct messages and selecting the Map option, where they can control who sees their location, choosing between Friends, Close Friends, selected users, or no one. Even if location sharing is turned off, users will still see the locations of others who share with them.

Instagram Map shows friends’ shared locations and nearby Stories or Reels tagged with locations, allowing users to discover events or places through their network.

Additionally, users can post short, temporary messages called Notes, which appear on the map when shared with a location. The feature encourages cautious consideration about sharing location tags in posts, especially when still at the tagged place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyber Force proposal gains momentum in Washington

A new commission will begin work next month to explore creating a standalone Cyber Force as a military service. The Center for Strategic and International Studies leads the effort in collaboration with the Cyber Solarium Commission 2.0.

The study responds to ongoing weaknesses in how the US military organises, trains and equips personnel for cyber operations. These shortcomings have prompted calls for a dedicated force with a focused mission.

The Cyber Force would aim to improve readiness and capability in the digital domain, mirroring the structure of other service branches. Cyber operations are seen as increasingly central to national security.

Details of the commission’s work will emerge in the coming months as discussions shape what such a force might look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Concerns grow over children’s use of AI chatbots

The growing use of AI chatbots and companions among children has raised safety concerns, with experts warning of inadequate protections and potential emotional risks.

Often not designed for young users, these apps lack sufficient age verification and moderation features, making them vulnerable spaces for children. The eSafety Commissioner noted that many children are spending hours daily with AI companions, sometimes discussing topics like mental health and sex.

Studies in Australia and the UK show high engagement, with many young users viewing the chatbots as real friends and sources of emotional advice.

Experts, including Professor Tama Leaver, warn that these systems are manipulative by design, built to keep users engaged without guaranteeing appropriate or truthful responses.

Despite the concerns, initiatives like Day of AI Australia promote digital literacy to help young people understand and navigate such technologies critically.

Organisations like UNICEF say AI could offer significant educational benefits if applied safely. However, they stress that Australia must take childhood digital safety more seriously as AI rapidly reshapes how young people interact, learn and socialise.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy investigates Meta over AI integration in WhatsApp

Italy’s antitrust watchdog has opened an investigation into Meta Platforms over allegations that the company may have abused its dominant position by integrating its AI assistant directly into WhatsApp.

The Rome-based authority, formally known as the Autorità Garante della Concorrenza e del Mercato (AGCM), announced the probe on Wednesday, stating that Meta may have breached European Union competition regulations.

The regulator claims that the introduction of the Meta AI assistant into WhatsApp was carried out without obtaining prior user consent, potentially distorting market competition.

Meta AI, the company’s virtual assistant designed to provide chatbot-style responses and other generative AI functions, has been embedded in WhatsApp since March 2025. It is accessible through the app’s search bar and is intended to offer users conversational AI services directly within the messaging interface.

The AGCM is concerned that this integration may unfairly favour Meta’s AI services by leveraging the company’s dominant position in the messaging market. It warned that such a move could steer users toward Meta’s products, limit consumer choice, and disadvantage competing AI providers.

‘By pairing Meta AI with WhatsApp, Meta appears to be able to steer its user base into the new market not through merit-based competition, but by ‘forcing’ users to accept the availability of two distinct services,’ the authority said.

It argued that this strategy may undermine rival offerings and entrench Meta’s position across adjacent digital services. In a statement, Meta confirmed it is cooperating fully with the Italian authorities.

The company defended the rollout of its AI features, stating that their inclusion in WhatsApp aimed to improve the user experience. ‘Offering free access to our AI features in WhatsApp gives millions of Italians the choice to use AI in a place they already know, trust and understand,’ a Meta spokesperson said via email.

The company maintains that its approach benefits users by making advanced technology widely available through familiar platforms. The AGCM clarified that its inquiry is being conducted in close cooperation with the European Commission’s relevant offices.

The cross-border collaboration reflects the growing scrutiny Meta faces from regulators across the EU over its market practices and the use of its extensive user base to promote new services.

If the authority finds Meta in breach of EU competition law, the company could face a fine of up to 10 percent of its global annual turnover. Under Article 102 of the Treaty on the Functioning of the European Union, abusing a dominant market position is prohibited, particularly if it affects trade between member states or restricts competition.

To gather evidence, AGCM officials inspected the premises of Meta’s Italian subsidiary, accompanied by the special antitrust unit of the Guardia di Finanza, Italy’s financial police.

The inspections were part of preliminary investigative steps to assess the impact of Meta AI’s deployment within WhatsApp. Regulators fear that embedding AI assistants into dominant platforms could lead to unfair advantages in emerging AI markets.

By relying on its established user base and platform integration, Meta may effectively foreclose competition by making alternative AI services harder to access or less visible to consumers. Such a case would not be the first time Meta has faced regulatory scrutiny in Europe.

The company has been the subject of multiple investigations across the EU concerning data protection, content moderation, advertising practices, and market dominance. The current probe adds to a growing list of regulatory pressures facing the tech giant as it expands its AI capabilities.

The AGCM’s investigation comes amid broader EU efforts to ensure fair competition in digital markets. With the Digital Markets Act and the AI Act coming into force, regulators are becoming more proactive in addressing potential risks associated with integrating advanced technologies into consumer platforms.

As the investigation continues, Meta’s use of AI within WhatsApp will remain under close watch. The outcome could set an essential precedent for how dominant tech firms can release AI products within widely used communication tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!