Banks and insurers pivot to AI agents at scale, Capgemini finds

Agentic AI is expected to deliver up to $450 billion in value by 2028 as financial institutions shift frontline processes to AI agents, according to Capgemini’s estimates. Banks are starting with customer service before expanding into fraud detection, lending, and onboarding, while insurers report similar priorities.

To seize the opportunity, 33% of banks are building agents in-house, while 48% of institutions are creating human supervisor roles. Cloud’s role is expanding beyond infrastructure, with 61% of executives calling cloud-based orchestration critical to scaling.

Adoption is accelerating but uneven. Four in five firms are in ideation or pilots, yet only 10% run agents at scale. Executives expect gains in real-time decision-making, accuracy, and turnaround, especially across onboarding, KYC, loan processing, underwriting, and claims.

Leaders also see growth levers. Most expect agents to support entry into new geographies, enable dynamic pricing, and deliver multilingual services that respect local norms and rules. Budgets reflect this shift, with up to 40% of generative AI spend already earmarked for agents.

Barriers persist. Skills shortages and regulatory complexity top the list of concerns, alongside high implementation costs. A quarter of firms are exploring ‘service-as-a-software’ models, paying for outcomes such as the resolution of fraud cases or the handling of customer queries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. However, the next evolution is not about weapons; it is about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capabilities is blurring the boundary between soldier and system, redefining what it means to fight and to feel in war.

Today’s ‘AI soldier’ is more than just enhanced: networked, monitored, and optimised. Soldiers now have 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

On the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter. There is an eerie overlap between AI soldier systems and the interfaces of video games such as Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacked decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the guardrails that prevent civilian AI from encouraging violence cannot be applied to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk the very foundation of the rules-based order. AI may transform the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war but comes at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK strengthens AI safeguards to protect children online

The UK government is introducing landmark legislation to prevent AI from being exploited to generate child sexual abuse material. The new law empowers authorised bodies, such as the Internet Watch Foundation, to test AI models and ensure safeguards prevent misuse.

Reports of AI-generated child abuse imagery have surged, with the IWF recording 426 cases in 2025, more than double the 199 cases reported in 2024. The data also reveals a sharp rise in images depicting infants, increasing from five in 2024 to 92 in 2025.

Officials say the measures will enable experts to identify vulnerabilities within AI systems, making it more difficult for offenders to exploit the technology.

The legislation will also require AI developers to build protections against non-consensual intimate images and extreme content. A group of experts in AI and child safety will be established to oversee secure testing and ensure the well-being of researchers.

Ministers emphasised that child safety must be built into AI systems from the start, not added as an afterthought.

By collaborating with the AI sector and child protection groups, the government aims to make the UK the safest place for children to be online. The approach strikes a balance between innovation and strong protections, thereby reinforcing public trust in AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Judges in Asia join UNESCO-led training on ethical AI in justice

Judges and justice officials from 11 countries across Asia are gathering in Bangkok for a regional training focused on AI and the rule of law. The event, held from 12 to 14 November 2025, is jointly organised by UNESCO, UNDP, and the Thailand Institute of Justice.

Participants will examine how AI can enhance judicial efficiency while upholding human rights and ethical standards.

The training, based on UNESCO’s Global Toolkit on AI and the Rule of Law for the Justice Sector, will help participants assess both the benefits and challenges of AI in judicial processes. Officials will address algorithmic bias, transparency, and accountability to ensure AI tools uphold justice.

AI technologies are already transforming case management, legal research, and court administration. However, experts warn that unchecked use may amplify bias or weaken judicial independence.

The workshop aims to strengthen regional cooperation and train officials to assess AI systems using legal and ethical principles. The initiative supports UN SDG 16 and advances UNESCO’s mission to promote moral, inclusive, and trustworthy governance of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UN calls for safeguards around emerging neuro-technologies

In a recent statement, the UN described the growing field of neuro-technology, which encompasses devices and software that can measure, access, or manipulate the nervous system, as posing new risks to human rights.

The UN highlighted how such technologies could challenge fundamental concepts like ‘mental integrity’, autonomy and personal identity by enabling unprecedented access to brain data.

It warned that without robust regulation, the benefits of neuro-technology may come with costs such as privacy violations, unequal access and intrusive commercial uses.

The concerns align with broader debates about how advanced technologies, such as AI, are reshaping society, ethics, and international governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group (GTIG). An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI as well, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch for model-API abuse, and lock down developer and automation credentials.
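To make the last point concrete, here is a minimal sketch of one behaviour-based check: flagging processes that hold connections to LLM inference endpoints without being on an approved list. The endpoint list, the allowlist, and the psutil-based polling approach are all illustrative assumptions, not anything described in GTIG’s report.

```python
"""Toy behaviour-based check: flag processes connected to LLM API
endpoints that are not on an approved allowlist.

A minimal sketch under stated assumptions; the endpoint list and the
allowlist below are illustrative, not drawn from GTIG's report."""
import socket

import psutil  # third-party: pip install psutil

# Illustrative inference endpoints an LLM-querying tool might contact.
LLM_API_HOSTS = (
    "generativelanguage.googleapis.com",  # Gemini API, named in the report
    "api.openai.com",
    "api.anthropic.com",
)

# Hypothetical allowlist of process names expected to call these APIs.
APPROVED_PROCESSES = {"approved-agent", "chrome"}


def reverse_lookup(ip: str) -> str:
    """Best-effort PTR lookup; CDN-fronted hosts may resolve vaguely."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return ""


def scan_once() -> None:
    # System-wide socket listing; needs elevated privileges on some OSes.
    for conn in psutil.net_connections(kind="inet"):
        if not conn.raddr or conn.pid is None:
            continue
        host = reverse_lookup(conn.raddr.ip)
        if host.endswith(LLM_API_HOSTS):  # str.endswith accepts a tuple
            try:
                name = psutil.Process(conn.pid).name()
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
            if name not in APPROVED_PROCESSES:
                print(f"ALERT: pid={conn.pid} ({name}) -> {host}")


if __name__ == "__main__":
    scan_once()
```

In practice, reverse DNS is unreliable for CDN-fronted APIs, so behaviour-based detection of this kind usually lives in DNS, proxy, or EDR telemetry rather than in a host-side poller; the sketch is only meant to show the shape of the heuristic.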

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Courts signal limits on AI in legal proceedings

A High Court judge warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach, citing a case from the latest survey, and noted that 14% of experts would accept such terms, a figure he called unacceptable.

Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 for summaries with confidential prompts. There is no duty to disclose, but the judgment must be the judge’s own.

Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.

Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow: solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI may assist, but outcomes must remain human.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Denmark’s new chat control plan raises fresh privacy concerns

Denmark has proposed an updated version of the EU’s controversial ‘chat control’ regulation, shifting from mandatory to voluntary scanning of private messages. Former MEP Patrick Breyer has warned, however, that the revision still threatens Europeans’ right to private communication.

Under the new plan, messaging providers could choose to scan chats for illegal material, but without a clear requirement for court orders. Breyer argued that this sidesteps the European Parliament’s position, which insists on judicial authorisation before any access to communications.

He also criticised the proposal for banning under-16s from using messaging apps like WhatsApp and Telegram, claiming such restrictions would prove ineffective and easily bypassed. In addition, the plan would effectively outlaw anonymous communication, requiring users to verify their identities through IDs.

Privacy advocates say the Danish proposal could set a dangerous precedent by eroding fundamental digital rights. Civil society groups have urged EU lawmakers to reject measures that compromise secure, anonymous communication essential for journalists and whistleblowers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

‘Wooing and suing’ defines News Corp’s AI strategy

News Corp chief executive Robert Thomson warned AI companies against using unlicensed publisher content, calling recipients of ‘stolen goods’ fair game for pursuit. He said ‘wooing and suing’ would proceed in parallel, with more licensing deals expected after the OpenAI pact.

Thomson argued that high-quality data must be paid for and that ingesting material without permission undermines incentives to produce journalism. He insisted that ‘content crime does not and will not pay,’ signalling stricter enforcement ahead.

While criticising bad actors, he praised partners that recognise publisher IP and are negotiating usage rights. The company is positioning itself to monetise archives and live reporting through structured licences.

He also pointed to a major author settlement with another AI firm as a watershed for compensation over past training uses. The message: legal and commercial paths are both accelerating.

Against this backdrop, News Corp said AI-related revenues are gaining traction alongside digital subscriptions and B2B data services. Further licensing announcements are likely in the coming months.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Suleyman sets limits for safer superintelligence at Microsoft

Microsoft AI says its work toward superintelligence will be explicitly ‘humanist’, designed to keep people at the top of the food chain. In a new blog post, Microsoft AI head Mustafa Suleyman announced a team focused on building systems that are subordinate, controllable, and designed to serve human interests.

Suleyman says superintelligence should not be unbounded: models will be calibrated, contextualised, and limited to align with human goals. He joined Microsoft last year as its AI CEO, and the company has since begun rolling out its first in-house models for text, voice, and images.

The move lands amid intensifying competition in advanced AI. Under a revised agreement with OpenAI, Microsoft can now independently pursue AGI or partner elsewhere. Suleyman says Microsoft AI will reject race narratives while acknowledging the need to advance capability and governance together.

Microsoft’s initial use cases emphasise an AI companion to help people learn, act, and feel supported; healthcare assistance to augment clinicians; and tools for scientific discovery in areas such as clean energy. The intent is to combine productivity gains with stronger safety controls from the outset.

‘Humans matter more than AI,’ Suleyman writes, casting ‘humanist superintelligence’ as technology that stays on humanity’s team. He frames the programme as a guard against Pandora’s box risks by binding robust systems to explicit constraints, oversight, and application contexts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!