The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.
Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.
Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.
US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that global telecoms will continue to be fertile ground for state-backed groups without structural reforms.
The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Econet is engaging thousands of visitors, including farmers and policymakers, by spotlighting digital inclusive finance, insurance, and smart infrastructure innovations.
The display features EcoCash mobile payments, Moovah Insurance for agricultural and business risks, and digital entertainment platforms. A standout addition is Econet's smart water meters, which provide real-time monitoring to help farmers and utilities manage water use, minimise waste, and support sustainable development in agriculture.
Econet emphasises that these solutions reinforce its vision of empowering communities through accessible technology. Smart infrastructure and financial tools are presented as vital enablers for productivity, resilience and economic inclusion in Zimbabwe’s agricultural sector.
According to the National Information Technology Development Agency (NITDA), Nigeria is preparing a national framework to guide responsible use of AI in governance, healthcare, education, and agriculture.
NITDA Director General Kashifu Abdullahi told a policy lecture in Abuja that AI could accelerate economic transformation if properly harnessed. He emphasised that Nigeria’s youthful population should move from being consumers to becoming innovators and creators.
He urged stakeholders to view automation as an opportunity to generate jobs, highlighting that over 60% of Nigerians are under 25. Abdullahi described this demographic as a key asset in positioning the nation for global competitiveness.
Meanwhile, a joint report from the Digital Education Council and the Global Finance & Technology Network found that AI boosts productivity, though adoption remains uneven. It warned of a growing divide between organisations that use AI effectively and those falling behind.
The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.
OpenAI’s rollout of GPT-5 has faced criticism from users attached to older models, who say the new version lacks the character of its predecessors.
GPT-5 was designed as an all-in-one model, featuring a lightweight version for rapid responses and a reasoning version for complex tasks. A routing system determines which option to use, although users can manually select from several alternatives.
Modes include Auto, Fast, Thinking, Thinking mini, and Pro, with the last available to Pro subscribers for $200 monthly. Standard paid users can still access GPT-4o, GPT-4.1, GPT-4o mini, and even o3 through additional settings.
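The routing idea described above can be illustrated with a small sketch. This is a hypothetical heuristic, not OpenAI's actual router: the model names (`fast-model`, `reasoning-model`), the mode labels, and the complexity threshold are all illustrative assumptions.

```python
def route_prompt(prompt: str, mode: str = "auto") -> str:
    """Pick a model tier for a prompt, honouring a manual mode override."""
    # Manual selection, mirroring how users can pick a mode themselves.
    modes = {"fast": "fast-model", "thinking": "reasoning-model"}
    if mode in modes:
        return modes[mode]
    # Crude automatic heuristic: long prompts or reasoning-style keywords
    # are sent to the reasoning tier; everything else goes to the fast tier.
    reasoning_hints = ("prove", "analyse", "step by step", "why")
    complex_task = len(prompt.split()) > 50 or any(
        hint in prompt.lower() for hint in reasoning_hints
    )
    return "reasoning-model" if complex_task else "fast-model"
```

A real router would classify prompts with a learned model rather than keywords, but the interface is the same: one entry point, several model tiers behind it.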
Chief executive Sam Altman has said the long-term goal is to give users more control over ChatGPT’s personality, making customisation a solution to concerns about style. He promised ample notice before permanently retiring older models.
Zimbabwe’s Information and Communication Technology Minister, Tendai Mavetera, revealed the second draft of the National AI Policy during the AI Summit for Africa 2025 in Victoria Falls, hosted by Alpha Media Holdings and AIIA.
Though the policy was not formalised during the summit, Mavetera stated it is expected to be launched by 1 October 2025 at the new Parliament building, with presidential presence anticipated.
The strategy is designed to foster an Africa where AI serves humanity, ensuring connectivity in every village, education access for every child, and opportunity for every young person.
Core features include data sovereignty and secure data storage, with institutions like TelOne expected to host localised solutions, moving away from past practices of storing data abroad.
At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from socioeconomic disadvantage to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.
Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the speed of AI's arrival in legal systems across the globe means the digitalisation of justice is already underway.
While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles.
Improving access to justice
Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to reduce legal communication barriers: it transcribes, translates, and organises refugee queries. With its help, staff can streamline caseload management, which is key to keeping operations smooth even under financial strain.
NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.
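A minimal sketch of what such triage might look like, assuming a simple keyword-matching approach. The categories, keywords, and function name are hypothetical, not Migrasia's actual system; a production chatbot would use multilingual language models rather than keyword rules, but the routing principle is the same.

```python
# Illustrative triage rules: map query categories to trigger keywords.
TRIAGE_RULES = {
    "wages": ["unpaid", "salary", "wage", "overtime"],
    "contract": ["contract", "termination", "dismissed"],
    "visa": ["visa", "permit", "deport"],
}

def triage(query: str) -> str:
    """Return the first matching category, or 'general' for human review."""
    q = query.lower()
    for category, keywords in TRIAGE_RULES.items():
        if any(keyword in q for keyword in keywords):
            return category
    return "general"
```

The point of the design is that the tool sorts and routes queries; unresolved or ambiguous cases fall through to human legal experts rather than being answered automatically.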
While these tools are clearly designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to give victims of serious sexual crimes access to judges' sentencing remarks and plain-language explanations of legal terminology. The tool enhances transparency for victims, especially those seeking emotional closure.
Even as these programmes are only being piloted, a UNESCO survey found that 44% of judicial workers across 96 countries already use AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.
AI tools help judges prepare judgments and streamline legal document preparation, allowing faster drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also give legal experts predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.
Risking human rights
While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.
Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, black-box decision-making, in which AI tools produce unexplainable outputs, can make legal decisions difficult to challenge, undermining due process and the right to a fair trial.
Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than outright replacing it. Whether AI is biased by its training data or drifts into black-box opacity over time, its use demands foresighted governance and meaningful human oversight.
Additionally, AI will greatly affect economic justice, especially for those in low-income or marginalised communities. Many legal professionals lack the training and skills needed to use AI tools effectively: in many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.
This lack of education undermines the accountability and transparency needed to integrate AI meaningfully, and it can lead to misuse of the technology, such as unverified translations that produce legal errors.
While AI can improve efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned of this in an opinion detailing the risks of integrating opaque systems into the administration of justice. Public trust in the US legal system is already eroding, with just over a third of Americans expressing confidence in it in 2024.
The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or digital literacy may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about who can afford access to justice in an AI-powered legal system. Furthermore, AI providers will hold sensitive case data, posing risks of misuse and even surveillance.
The policy path forward
For AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors, not replace them. To play that role, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.
The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI: they belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.
Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems.
The future of justice
AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs, creating powerful improvements to inclusion in our legal systems.
However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation grounded in education and discussion predicated on adherence to ethical frameworks. Now is the time to invest in digital literacy for legal empowerment, ensuring that AI tools are contestable and serve as human-centric support.
Pakistan’s Ministry of Planning, Development, and Special Initiatives has launched a national innovation competition to drive the development of AI solutions in priority sectors. The initiative aims to attract top talent to develop impactful health, education, agriculture, industry, and governance projects.
Minister Ahsan Iqbal said AI is no longer a distant prospect but a present reality that is already transforming economies. He described the competition as a milestone in Pakistan’s digital history and urged the nation to embrace AI’s global momentum.
Iqbal stressed that algorithms now shape decisions more than traditional markets, warning that technological dependence must be avoided. Pakistan, he argued, must actively participate in the AI revolution or risk being left behind by more advanced economies.
He highlighted AI’s potential to predict crop diseases, aid doctors in diagnosis, and deliver quality education to every child nationwide. He said Pakistan will not be a bystander but an emerging leader in shaping the digital future.
The government has begun integrating AI into curricula and expanding capacity-building initiatives. Officials expect the competition to unlock new opportunities for innovation, empowering youth and driving sustainable development across the country.
Indonesia's priorities include deploying 4G networks in remote regions, expanding public internet services, and reinforcing the Palapa Ring broadband infrastructure.
On the talent front, the government launched a Digital Talent Scholarship and AI Talent Factory to nurture AI skills, from beginners to specialists, setting the stage for future AI innovation domestically.
In parallel, digital protection measures have been bolstered: over 1.2 million pieces of harmful content have been blocked, while new regulations under the Personal Data Protection Law, together with age verification, content monitoring, and reporting systems, have been introduced to enhance child safety online.
At the Emkay Confluence in Mumbai, Chief Economic Adviser V. Anantha Nageswaran emphasised that while trade-related concerns remain significant, they must not obscure the urgent need for India to boost its AI and semiconductor sectors.
He pointed to AI’s transformative economic potential and strategic importance, warning that India must act decisively to remain competitive as the United States and China advance aggressively in these domains.
By focusing on energy transition, energy security, and enhanced collaboration across sectors, Nageswaran argued that India can strengthen its innovation capacity and technological self-reliance.