AI agents redefine customer service efficiency

Companies are transforming routine customer interactions into effortless experiences using AI-powered agents. Instead of endless phone transfers, users now get instant answers or bookings through Agentforce-powered systems.

The focus is not on selling more products, but on improving satisfaction with existing services.

Travel platform Engine is already seeing results. Its Agentforce assistant, Eva, can process partial booking cancellations in seconds by combining customer data with internal booking tools.

By narrowing Eva’s focus to a handful of topics, Engine improved response speed and lifted customer satisfaction by six points. The result is less frustration, reduced hold times, and smoother travel management.

Retailer Williams-Sonoma, Inc. is also personalising customer interactions through its virtual assistant, Olive. Beyond processing returns, Olive provides menu suggestions, wine pairings, and meal preparation schedules to help customers host effortlessly.

The aim, according to Chief Technology and Digital Officer Sameer Hassan, is to deliver experiences that teach and inspire rather than promote sales.

Luxury fitness brand Equinox follows a similar path. Its AI assistant now helps members find and book classes directly, reducing clicks and improving usability. EVP and CTO Eswar Veluri said simplifying interaction patterns is key to enhancing the member experience through innovative tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems. Safeguarding Minister Jess Phillips added that it would help prevent predators from exploiting legitimate tools.

AI adoption surges with consumers but stalls in business

In a recent analysis, Goldman Sachs warned that while AI is rapidly permeating the consumer market, enterprise integration is lagging far behind.

The report highlights consumer-facing tools, such as chatbots and generative creative applications, as driving the surge in usage, but finds that business uptake is still ‘well below where we expected’ a year or two ago.

Goldman’s analysts point out a striking disconnect: consumer adoption is high, yet corporations are slower to embed AI deeply into workflows. One analyst remarked that although nearly 88% of companies report using AI in some capacity, only about a third have scaled it enterprise-wide and just 39% see measurable financial impact.

Meanwhile, infrastructure spending on AI is exploding, with projections of 3-4 trillion US dollars by the end of the decade, raising concerns among investors about return on investment and whether the current frenzy resembles past tech bubbles.

For policy-makers, digital-economy strategists and technology governance watchers, this gap has important implications. Hype and hardware build-out may be outpacing deliverables in enterprise contexts.

The divide also underlines the need for more precise metrics around productivity, workforce adaptation and organisational readiness in our discussions around AI policy and digital diplomacy.

Chubb launches digital insurance engine with AI recommendations

Global insurance leader Chubb has launched a new AI-driven embedded insurance optimisation engine within its Chubb Studio platform during the Singapore FinTech Festival. The announcement marks a significant step in enabling digital distribution partners to offer personalised insurance products more effectively.

The engine uses proprietary AI to analyse customer data, identify personas, recommend relevant insurance products (such as phone damage, travel insurance, hospital cash or life cover) at the point of sale, and deliver click-to-engage options for higher-value products.

Integration models range from Chubb-managed to partner-managed or hybrid, giving flexibility in how partners embed the solution.

From a digital-economy and policy perspective, this development highlights how insurance firms are leveraging AI to personalise customer journeys and integrate insurance seamlessly into consumer platforms and apps.

The shift raises essential questions about data utilisation, transparency of recommendation engines and how insurers strike the balance between innovation and consumer protection.

Language models mimic human belief reasoning

In a recent paper, researchers at Stevens Institute of Technology revealed that large language models (LLMs) use a small, specialised subset of their parameters to perform tasks associated with the psychological concept of ‘Theory of Mind’ (ToM), the human ability to infer others’ beliefs, intentions and perspectives.

The study found that although LLMs activate almost their whole network for each input, the ToM-related reasoning appears to rely disproportionately on a narrow internal circuit, particularly shaped by the model’s positional encoding mechanism.

This discovery matters because it highlights a significant efficiency gap between human brains and current AI systems: humans carry out social-cognitive tasks with only a tiny fraction of neural activity, whereas LLMs still consume substantial computational resources even for ‘simple’ reasoning.

The researchers suggest these findings could inform the design of more brain-inspired AI models that selectively activate only the parameters needed for particular tasks.

From a policy and digital-governance perspective, this raises questions about how we interpret AI’s understanding and social cognition.

If AI can exhibit behaviour that resembles human belief-reasoning, oversight frameworks and transparency standards become all the more critical in assessing what AI systems are doing, and what they are capable of.

Google launches Private AI Compute for secure cloud-AI

In a move that underscores the evolving balance between capability and privacy in AI, Google today introduced Private AI Compute. This new cloud-based processing platform supports its most advanced models, such as those in the Gemini family, while maintaining what it describes as on-device-level data security.

The blog post explains that many emerging AI tasks now exceed the capabilities of on-device hardware alone. To solve this, Google built Private AI Compute to offload heavy computation to its cloud, powered by custom Tensor Processing Units (TPUs) and wrapped in a fortified enclave environment called Titanium Intelligence Enclaves (TIE).

The system uses remote attestation, encryption and IP-blinding relays to ensure user data remains private and inaccessible, even to Google itself.

Google identifies initial use-cases in its Pixel devices: features such as Magic Cue and Recorder will benefit from the extra compute, enabling more timely suggestions, multilingual summarisation and advanced context-aware assistance.

At the same time, the company says this platform ‘opens up a new set of possibilities for helpful AI experiences’ that go beyond what on-device AI alone can fully achieve.

This announcement is significant from both a digital policy and platform economy perspective. It illustrates how major technology firms are reconciling user privacy demands with the computational intensity of next-generation AI.

For organisations and governments focused on AI governance and digital diplomacy, the move raises questions about data sovereignty, transparency of remote enclaves and the true nature of ‘secure’ cloud processing.

AI system tracks tsunami through atmospheric ripples

Scientists have successfully tracked a tsunami in real time using ripples in Earth’s atmosphere for the first time.

The breakthrough came after a powerful 8.8 magnitude earthquake struck off Russia’s Kamchatka Peninsula in July 2025, sending waves racing across the Pacific and triggering NASA’s newly upgraded Guardian monitoring system.

Guardian uses AI to detect disruptions in satellite navigation signals caused by atmospheric ripples above the ocean.

These signals revealed the formation and movement of tsunami waves, allowing alerts to be issued up to 40 minutes before they reached Hawaii, potentially giving communities vital time to respond.

Researchers say the innovation could transform global disaster monitoring by enabling earlier warnings for tsunamis, volcanic eruptions, and even nuclear tests.

Although the system is still in development, scientists in Europe are working on similar models that could expand coverage and provide life-saving alerts to remote coastal regions.

OpenAI faces major copyright setback in US court

A US federal judge has ruled that a landmark copyright case against OpenAI can proceed, rejecting the company’s attempt to dismiss claims brought by authors and the Authors Guild.

The authors argue that ChatGPT’s summaries of copyrighted works, including George R.R. Martin’s A Game of Thrones, unlawfully replicate the original tone, plot, and characters, raising concerns about AI-generated content infringing on creative rights.

The Publishers Association (PA) welcomed the ruling, warning that generative AI could ‘devastate the market’ for books and other creative works by producing infringing content at scale.

It urged the UK government to strengthen transparency rules to protect authors and publishers, stressing that AI systems capable of reproducing an author’s style could undermine the value of original creation.

The case follows a $1.5bn settlement against Anthropic earlier this year for using pirated books to train its models and comes amid growing scrutiny of AI firms.

In Britain, Stability AI recently avoided a copyright ruling after a claim by Getty Images was dismissed on grounds of jurisdiction. Still, the PA stated that the outcome highlighted urgent gaps in UK copyright law regarding AI training and output.

Brussels leak signals GDPR and AI Act adjustments

The European Commission is preparing a Digital Package on simplification for 19 November. A leaked draft outlines instruments covering GDPR, ePrivacy, Data Act and AI Act reforms.

Plans include a single breach portal and a higher reporting threshold. Authorities would receive notifications within 96 hours, with standardised forms and narrower triggers. Controllers could reject or charge for data subject access requests used to pursue disputes.

Cookie rules would shift toward browser-level preference signals respected across services. Aggregated measurement and security uses would not require popups, while GDPR lawful bases expand. News publishers could receive limited exemptions recognising reliance on advertising revenues.

Drafting recognises legitimate interest for training AI models on personal data. Narrow allowances are provided for sensitive data during development, along with EU-wide data protection impact assessment templates. Critics warn proposals dilute safeguards and may soften the AI Act.

Google and Cassava expand Gemini access in Africa

Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-free Gemini usage for eligible users coordinated through Cassava’s network partners. The initiative aims to address affordability and adoption barriers for mobile users.

A six-month trial of the Google AI Plus plan is part of the package. Benefits include access to more capable Gemini models and added cloud storage. Regional tech outlets reported the same core details.

Education features were highlighted, including NotebookLM for study aids and Gemini in Docs for writing support. Google said the offer aims to help students, teachers, and creators work without worrying about data usage. Reports highlight a focus on youth and skills development.

Cassava’s role aligns with broader investments in AI infrastructure and services across the continent; recent announcements reference model exchanges and planned AI facilities that support regional development. Observers see momentum behind accessible AI tools.