Banks and insurers pivot to AI agents at scale, Capgemini finds

Agentic AI is expected to deliver up to $450 billion in value by 2028 as financial institutions shift frontline processes to AI agents, according to Capgemini’s estimates. Banks are starting with customer service before expanding into fraud detection, lending, and onboarding, while insurers report similar priorities.

To seize the opportunity, 33% of banks are building agents in-house, while 48% of institutions are creating human supervisor roles. Cloud’s role is expanding beyond infrastructure, with 61% of executives calling cloud-based orchestration critical to scaling.

Adoption is accelerating but uneven. Four in five firms are in ideation or pilots, yet only 10% run agents at scale. Executives expect gains in real-time decision-making, accuracy, and turnaround, especially across onboarding, KYC, loan processing, underwriting, and claims.

Leaders also see growth levers. Most expect agents to support entry into new geographies, enable dynamic pricing, and deliver multilingual services that respect local norms and rules. Budgets reflect this shift, with up to 40% of generative AI spend already earmarked for agents.

Barriers persist. Skills shortages and regulatory complexity top the list of concerns, alongside high implementation costs. A quarter of firms are exploring ‘service-as-a-software’ models, paying for outcomes such as the resolution of fraud cases or the handling of customer queries.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

The AI soldier and the ethics of war

The rise of the machine soldier

For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, technological innovation has disrupted and redefined warfare, for better or worse. The next evolution, however, is not about weapons but about the soldier.

New AI-integrated systems such as Anduril’s EagleEye Helmet are transforming troops into data-driven nodes, capable of perceiving and responding with machine precision. This fusion of human and algorithmic capability blurs the boundary between the soldier’s role and the machine’s, redefining what it means to fight and to feel in war.

Today’s ‘AI soldiers’ are more than just enhanced. They are networked, monitored, and optimised, equipped with 3D optical displays that give them a god’s-eye view of combat, while real-time ‘guardian angel’ systems make decisions faster than any human brain can process.

Yet in this pursuit of efficiency, the soldier’s humanity and the rules-based order of war risk being sidelined in favour of computational power.

From soldier to avatar

In the emerging AI battlefield, the soldier increasingly resembles a character in a first-person shooter video game. There is an eerie overlap between AI soldier systems and the interface of video games, like Metal Gear Solid, where augmented players blend technology, violence, and moral ambiguity. The more intuitive and immersive the tech becomes, the easier it is to forget that killing is not a simulation.

By framing war through a heads-up display, AI gives troops an almost cinematic sense of control, and in turn, a detachment from their humanity, emotions, and the physical toll of killing. Soldiers with AI-enhanced senses operate through layers of mediated perception, acting on algorithmic prompts rather than their own moral intuition. When soldiers view the world through the lens of a machine, they risk feeling less like humans and more like avatars, designed to win, not to weigh the cost.

The integration of generative AI into national defence systems creates vulnerabilities, ranging from hacking of decision-making systems to misaligned AI agents capable of escalating conflicts without human oversight. Ironically, the same guardrails that prevent civilian AI from encouraging violence cannot be applied to systems built for lethal missions.

The ethical cost

Generative AI has redefined the nature of warfare, introducing lethal autonomy that challenges the very notion of ethics in combat. In theory, AI systems can uphold Western values and ethical principles, but in practice, the line between assistance and automation is dangerously thin.

When militaries walk this line, outsourcing their decision-making to neural networks, accountability becomes blurred. Without basic principles and mechanisms of accountability in warfare, states risk the very foundation of the rules-based order. AI may evolve the battlefield, but at the cost of diplomatic solutions and compliance with international law.

AI does not experience fear, hesitation, or empathy, the very qualities that restrain human cruelty. By building systems that increase efficiency and reduce the soldier’s workload through automated targeting and route planning, we risk erasing the psychological distinction that once separated human war from machine-enabled extermination. Ethics, in this new battlescape, become just another setting in the AI control panel. 

The new war industry 

The defence sector is not merely adapting to AI. It is being rebuilt around it. Anduril, Palantir, and other defence tech corporations now compete with traditional military contractors by promising faster innovation through software.

As Anduril’s founder, Palmer Luckey, puts it, the goal is not to give soldiers a tool, but ‘a new teammate.’ The phrasing is telling, as it shifts the moral axis of warfare from command to collaboration between humans and machines.

The human-machine partnership built for lethality suggests that the military-industrial complex is evolving into a military-intelligence complex, where data is the new weapon, and human experience is just another metric to optimise.

The future battlefield 

If the past century’s wars were fought with machines, the next will likely be fought through them. Soldiers are becoming both operators and operated, which promises efficiency in war but at the cost of human empathy.

When soldiers see through AI’s lens, feel through sensors, and act through algorithms, they stop being fully human combatants and start becoming playable characters in a geopolitical simulation. The question is not whether this future is coming; it is already here. 

There is a clear policy path forward, as states remain tethered to their international obligations. Before AI blurs the line between soldier and system, international law could enshrine a human-in-the-loop requirement for all lethal actions, while defence firms are compelled to maintain high ethical transparency standards.

The question now is whether humanity can still recognise itself once war feels like a game, or whether, without safeguards, it will remain present in war at all.


Nvidia stake sale powers SoftBank’s $22.5bn OpenAI bet

SoftBank sold its entire Nvidia stake for $5.83 billion and part of its T-Mobile holding for $9.17 billion, raising cash for OpenAI. Alongside a margin loan on Arm, the proceeds fund a $22.5 billion commitment and other projects. Nvidia slipped 2%; SoftBank referred to it as asset monetisation, not a valuation call.

Executives said the goal is to pursue investment opportunities from a position of balance-sheet strength, including backing for the ABB robotics deal. Analysts called the quarter’s funding need unusually large but consistent with an AI pivot. SoftBank framed the sale as capital recycling, not a retreat from Nvidia.

SoftBank has a history with Nvidia: the Vision Fund invested in 2017 and exited in 2019; group ventures still utilise its technology. Projects include the $500 billion Stargate data centre programme, built on accelerated computing. Shares remain volatile amid concerns about the AI bubble and questions regarding the timing of deployment.

Results reflected the shift, with $19 billion in Vision Fund gains helping to double profit in fiscal Q2. SoftBank says its OpenAI stake will rise from 4% to 11% after the recapitalisation, with scope to increase further. The group aims to avoid setting a controlling threshold while scaling exposure to AI.

Management stressed liquidity and shareholder access, flagging a four-for-one stock split and ‘very safe’ funding plans. Further portfolio monetisation is possible as it backs AI infrastructure and applications at scale. Investors will closely monitor execution risks and the timing of returns from OpenAI and its adjacent bets.


Vision AI Companion turns Samsung TVs into conversational AI platforms

Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.

Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.

Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.

With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.

It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.

By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.


Winning the AI race means winning developers in China, says Huang of Nvidia

Nvidia CEO Jensen Huang said China is ‘nanoseconds’ behind the US in AI and urged Washington to lead by accelerating innovation and courting developers globally. He argued that excluding China would weaken the reach of US technology and risk splintering the ecosystem into incompatible stacks.

Huang’s remarks came amid ongoing export controls that bar Nvidia’s most advanced processors from the Chinese market. He acknowledged national security concerns but cautioned that strict limits can slow the spread of American tools that underpin AI research, deployment, and scaling.

Hardware remains central, Huang said, citing advanced accelerators and data-centre capacity as the substrate for training frontier models. Yet diffusion matters: widespread adoption of US platforms by global developers amplifies influence, reduces fragmentation, and accelerates innovation.

With sales of top-end chips restricted, Huang warned that Chinese firms will continue to innovate on domestic alternatives, increasing the likelihood of parallel systems. He called for policies that enable US leadership while preserving channels to the developer community in China.

Huang framed the objective as keeping America ahead, maintaining the world’s reliance on an American tech stack, and avoiding strategies that would push away half the world’s AI talent.


The Inca system that preceded the digital age

Long before the internet and instant messaging, the Inca Empire built its own intricate communication web that spanned thousands of kilometres across the Andes.

In his blog post ‘Quipus and chasquis: The Inca internet of diplomacy,’ Jovan Kurbalija explores how this ancient civilisation mastered the art of connecting people and information without written language or digital tools, relying instead on an ingenious blend of data, logistics, and human networks.

At the heart of this system were the quipus, bundles of knotted cords that stored census data, taxes, inventories, and decrees. Each knot, colour, and string length encoded information much like modern databases do today. Far from being primitive, quipus functioned as the Incas’ version of data storage and computation, a tangible form of coding that required skilled interpreters to read and maintain.

The chasquis, on the other hand, were the living couriers of the empire, elite runners who relayed quipus and oral messages across vast distances using a network of mountain roads. Their relay system ensured that vital information could travel hundreds of kilometres within a day, forming what Kurbalija calls a ‘human internet.’ That combination of endurance, coordination, and trust made the Inca communication network remarkably efficient and resilient.

Beyond its technical brilliance, the Inca system carried a deeper diplomatic purpose. Communication was the glue that held together a vast and diverse empire. By integrating technology, logistics, and human skill, the Incas created a model of governance that balanced diversity and unity.

As Kurbalija concludes, the story of the quipus and chasquis offers a timeless lesson for modern diplomacy. Technology alone does not sustain communication; people do. Whether through knotted cords or encrypted data cables, the challenge remains the same, which is to move information wisely, build trust, and connect societies.


Google flags adaptive malware that rewrites itself with AI

Hackers are experimenting with malware that taps large language models to morph in real time, according to Google’s Threat Intelligence Group (GTIG). An experimental family dubbed PROMPTFLUX can rewrite and obfuscate its own code as it executes, aiming to sidestep static, signature-based detection.

PROMPTFLUX interacts with Gemini’s API to request on-demand functions and ‘just-in-time’ evasion techniques, rather than hard-coding behaviours. GTIG describes the approach as a step toward more adaptive, partially autonomous malware that dynamically generates scripts and changes its footprint.

Investigators say the current samples appear to be in development or testing, with incomplete features and limited Gemini API access. Google says it has disabled associated assets and has not observed a successful compromise, yet warns that financially motivated actors are exploring such tooling.

Researchers point to a maturing underground market for illicit AI utilities that lowers barriers for less-skilled offenders. State-linked operators in North Korea, Iran, and China are reportedly experimenting with AI to enhance reconnaissance, influence, and intrusion workflows.

Defenders are turning to AI, using security frameworks and agents like ‘Big Sleep’ to find flaws. Teams should expect AI-assisted obfuscation, emphasise behaviour-based detection, watch model-API abuse, and lock down developer and automation credentials.


Courts signal limits on AI in legal proceedings

A High Court judge warned that a solicitor who pushed an expert to accept an AI-generated draft breached their duty. Mr Justice Waksman called it a gross breach, citing a case from the latest survey and noting that 14% of experts would accept such terms, a figure he described as unacceptable.

Updated guidance clarifies what limited judicial AI use is permissible. Judges may use a private ChatGPT 365 for summaries with confidential prompts. There is no duty to disclose, but the judgment must be the judge’s own.

Waksman cautioned against legal research or analysis done by AI. Hallucinated authorities and fake citations have already appeared. Experts must not let AI answer the questions they are retained to decide.

Survey findings show wider use of AI for drafting and summaries. Waksman drew a bright line between back-office aids and core duties. Convenience cannot trump independence, accuracy and accountability.

For practitioners, two rules follow. Solicitors must not foist AI-drafted opinions on experts, and experts should refuse them. Within courts, limited, non-determinative AI may assist, but outcomes must remain human.


Oracle and Ci4CC join forces to advance AI in cancer research

Oracle Health and Life Sciences has announced a strategic collaboration with the Cancer Center Informatics Society (Ci4CC) to accelerate AI innovation in oncology. The partnership unites Oracle’s healthcare technology with Ci4CC’s national network of cancer research institutions.

The two organisations plan to co-develop an electronic health record system tailored to oncology, integrating clinical and genomic data for more effective personalised medicine. They also aim to explore AI-driven drug development to enhance research and patient outcomes.

Oracle executives said the collaboration represents an opportunity to use advanced AI applications to transform cancer research. The Ci4CC President highlighted the importance of collective innovation, noting that progress in oncology relies on shared data and cross-institution collaboration.

The agreement, announced at Ci4CC’s annual symposium in Miami Beach, US, remains non-binding but signals growing momentum in AI-driven precision medicine. Both organisations see the initiative as a step towards turning medical data into actionable insights that could redefine oncology care.


Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade, hire, and moderate text. UZH research shows that a model’s evaluation shifts when it is told who wrote otherwise identical text, revealing source bias. Agreement stayed high only when authorship was hidden.

When told a human or another AI wrote the text, agreement fell and biases surfaced. The strongest bias was against Chinese authorship, across all models including a model from China, with sharp score drops even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
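The recommended A/B check can be sketched in a few lines. This is a hedged illustration, not the study’s protocol: `score_fn` stands in for any LLM-based grader, and the prompt wording and source labels are invented for the example.

```python
def build_prompt(text, source=None):
    """Build an evaluation prompt, optionally revealing a claimed author.

    With source=None the prompt is identity-blind; otherwise a source
    cue is prepended, mirroring the study's cued condition.
    """
    header = f"(Author: {source})\n" if source else ""
    return header + f"Rate the argument quality of this text from 1-10:\n{text}"


def ab_source_bias(score_fn, text, sources):
    """Compare a blind score against scores with each source cue attached.

    Returns {source: cued_score - blind_score}; a nonzero delta on
    identical text is evidence of source-driven bias.
    """
    blind = score_fn(build_prompt(text))
    return {s: score_fn(build_prompt(text, s)) - blind for s in sources}
```

In practice the grader would be called many times per condition and the deltas averaged, since LLM scores vary between runs.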

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.
