AI reshapes customer experience, survey finds

A survey of contact centre and customer experience (CX) leaders finds that AI has become ‘non-negotiable’ for organisations seeking to deliver efficient, personalised, and data-driven customer service.

Respondents reported widespread use of AI-enabled tools such as chatbots, virtual agents, and conversational analytics to handle routine queries, triage requests, and surface insights from large volumes of interaction data.

CX leaders emphasised AI’s ability to boost service quality and reduce operational costs, enabling faster response times and better outcomes across channels.

Many organisations are investing in AI platforms that integrate with existing systems to automate workflows, assist human agents, and personalise interactions based on real-time customer context.

Despite optimism, leaders also noted challenges, including data quality, governance, skills gaps and maintaining human oversight, and stressed that AI should augment, not replace, human agents.

The article underscores that today’s competitive CX landscape increasingly depends on strategic AI adoption rather than optional experimentation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts propose frameworks for trustworthy AI systems

A coalition of researchers and experts has identified future research directions aimed at enhancing AI safety, robustness and quality as systems are increasingly integrated into critical functions.

The work highlights the need for improved tools to evaluate, verify and monitor AI behaviour across diverse real-world contexts, including methods to detect harmful outputs, mitigate bias and ensure consistent performance under uncertainty.

The discussion emphasises that technical quality attributes such as reliability, explainability, fairness and alignment with human values should be core areas of focus, especially for high-stakes applications in healthcare, transport, finance and public services.

Researchers advocate for interdisciplinary approaches, combining insights from computer science, ethics, and the social sciences to address systemic risks and to design governance frameworks that balance innovation with public trust.

The article also notes emerging strategies such as formal verification techniques, benchmarks for robustness and continuous post-deployment auditing, which could help contain unintended consequences and improve the safety of AI models before and after deployment at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Innovative systems manage heating, cooling, and appliances more efficiently. AI also optimises flight routes, reducing fuel consumption and contrail formation, showing that wider adoption could help fight climate change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The breach also exposes the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

According to the complaint, WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. The ED noted that WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

The ED's probe found that until late 2023, bots competed directly against real users, and that from May 2024 to August 2025, the company used simulated profiles based on historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that AI companions can offer perceived benefits. Users report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI investment gathers pace as Armenia seeks regional influence

Armenia is stepping up efforts to develop its AI sector, positioning itself as a potential regional hub for innovation. The government has announced plans to build a large-scale AI data centre backed by a $500 million investment, with operations expected to begin in 2026.

Officials say the project could support start-ups, research and education, while strengthening links between science and industry.

The initiative is being developed through a partnership involving the Armenian government, US chipmaker Nvidia, cloud company Firebird.ai and Team Group. The United States has already approved export licences for advanced chips, a move experts describe as strategically significant given global competition for semiconductor supply.

Armenian officials argue the project signals the country’s intention to participate actively in the global AI economy rather than remain on the sidelines.

Despite growing international attention, including recognition of Armenia’s technology leadership in global rankings, experts warn that the country lacks a clear and unified AI strategy. AI is already being used in areas such as agriculture mapping, tax risk analysis and social services, but deployment remains fragmented and transparency limited. Ongoing reforms and a shift towards cloud-based systems add further uncertainty.

Security specialists caution that without strong governance, expertise and long-term planning, AI investments could expose the public sector to cyber risks and poor decision-making. Armenia’s challenge, they argue, lies in moving quickly enough to seize emerging opportunities while ensuring that AI adoption strengthens, rather than undermines, institutional capacity and human judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI worldwide. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising arrives as OpenAI integrates sponsored content into ChatGPT

OpenAI has begun testing advertising placements inside ChatGPT, marking a shift toward monetising one of the world’s most widely used AI platforms. Sponsored content now appears below chatbot responses for free and low-cost users, integrating promotions directly into conversational queries.

Ads remain separate from organic answers, with OpenAI saying commercial content will not influence AI-generated responses. Users can see why specific ads appear, dismiss irrelevant placements, and disable personalisation. Ads are not shown to younger users or in conversations on sensitive topics.

Initial access is limited to enterprise partners, with broader availability expected later. Premium subscription tiers continue without ads, reflecting a freemium model similar to streaming platforms offering both paid and ad-supported options.

Pricing places ChatGPT ads among the most expensive digital formats. The value lies in reaching users at high-intent moments, such as during product research and purchase decisions. Measurement tools remain basic, tracking only impressions and clicks.

OpenAI’s move into advertising signals a broader shift as conversational AI reshapes how people discover information. Future performance data and targeting features will determine whether ChatGPT becomes a core ad channel or a premium niche format.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China moves toward data centres in orbit

China is planning to develop large-scale space-based data centres over the next five years as part of a broader push to support AI development. The China Aerospace Science and Technology Corporation (CASC) has announced plans to build gigawatt-class digital infrastructure in orbit, according to Chinese state broadcaster CCTV.

Under CASC’s five-year development plan, the space data centres are expected to combine cloud, edge and terminal technologies, allowing computing power, data storage and communication capacity to operate as an integrated system. The aim is to create high-performance infrastructure capable of supporting advanced AI workloads beyond Earth.

The initiative follows a recent CASC policy proposal calling for solar-powered, gigawatt-scale space-based hubs to supply energy for AI processing. The proposal aligns with China’s upcoming 15th Five-Year Plan, which is set to place AI at the centre of national development priorities.

China has already taken early steps in this direction. In May 2025, Zhejiang Lab launched 12 low Earth orbit satellites to form the first phase of its ‘Three-Body Computing Constellation.’ The research institute plans to eventually deploy around 2,800 satellites, targeting a total computing power of 1,000 peta operations per second.

Interest in space-based data centres is growing globally. European aerospace firm Thales Alenia Space has been studying their feasibility since 2023, while companies such as SpaceX, Blue Origin, and several startups in the US and the UAE are exploring similar concepts at varying stages of development and ambition.

Supporters argue that space data centres could reduce environmental impacts on Earth, benefit from constant solar energy and simplify cooling. However, experts warn that operating in space brings its own challenges, including exposure to radiation, solar flares and space debris, as well as higher costs and greater difficulty when repairs are needed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!