Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident underscores the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

According to the complaint, WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. The ED notes that WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

The ED’s probe found that until late 2023, bots competed directly against real users, and that from May 2024 to August 2025 the company used simulated profiles built from historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that many users perceive genuine benefits. They report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI investment gathers pace as Armenia seeks regional influence

Armenia is stepping up efforts to develop its AI sector, positioning itself as a potential regional hub for innovation. The government has announced plans to build a large-scale AI data centre backed by a $500 million investment, with operations expected to begin in 2026.

Officials say the project could support start-ups, research and education, while strengthening links between science and industry.

The initiative is being developed through a partnership involving the Armenian government, US chipmaker Nvidia, cloud company Firebird.ai and Team Group. The United States has already approved export licences for advanced chips, a move experts describe as strategically significant given global competition for semiconductor supply.

Armenian officials argue the project signals the country’s intention to participate actively in the global AI economy rather than remain on the sidelines.

Despite growing international attention, including recognition of Armenia’s technology leadership in global rankings, experts warn that the country lacks a clear and unified AI strategy. AI is already being used in areas such as agriculture mapping, tax risk analysis and social services, but deployment remains fragmented and transparency limited. Ongoing reforms and a shift towards cloud-based systems add further uncertainty.

Security specialists caution that without strong governance, expertise and long-term planning, AI investments could expose the public sector to cyber risks and poor decision-making. Armenia’s challenge, they argue, lies in moving quickly enough to seize emerging opportunities while ensuring that AI adoption strengthens, rather than undermines, institutional capacity and human judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI worldwide. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising arrives as OpenAI integrates sponsored content into ChatGPT

OpenAI has begun testing advertising placements inside ChatGPT, marking a shift toward monetising one of the world’s most widely used AI platforms. Sponsored content now appears below chatbot responses for free and low-cost users, integrating promotions directly into conversational queries.

Ads remain separate from organic answers, with OpenAI saying commercial content will not influence AI-generated responses. Users can see why specific ads appear, dismiss irrelevant placements, and disable personalisation. Advertising is excluded for younger users and sensitive topics.

Initial access is limited to enterprise partners, with broader availability expected later. Premium subscription tiers continue without ads, reflecting a freemium model similar to streaming platforms offering both paid and ad-supported options.

Pricing places ChatGPT ads among the most expensive digital formats. The value lies in reaching users at high-intent moments, such as during product research and purchase decisions. Measurement tools remain basic, tracking only impressions and clicks.

OpenAI’s move into advertising signals a broader shift as conversational AI reshapes how people discover information. Future performance data and targeting features will determine whether ChatGPT becomes a core ad channel or a premium niche format.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China moves toward data centres in orbit

China is planning to develop large-scale space-based data centres over the next five years as part of a broader push to support AI development. The China Aerospace Science and Technology Corporation (CASC) has announced plans to build gigawatt-class digital infrastructure in orbit, according to Chinese state broadcaster CCTV.

Under CASC’s five-year development plan, the space data centres are expected to combine cloud, edge and terminal technologies, allowing computing power, data storage and communication capacity to operate as an integrated system. The aim is to create high-performance infrastructure capable of supporting advanced AI workloads beyond Earth.

The initiative follows a recent CASC policy proposal calling for solar-powered, gigawatt-scale space-based hubs to supply energy for AI processing. The proposal aligns with China’s upcoming 15th Five-Year Plan, which is set to place AI at the centre of national development priorities.

China has already taken early steps in this direction. In May 2025, Zhejiang Lab launched 12 low Earth orbit satellites to form the first phase of its ‘Three-Body Computing Constellation.’ The research institute plans to eventually deploy around 2,800 satellites, targeting a total computing power of 1,000 peta operations per second.

Interest in space-based data centres is growing globally. European aerospace firm Thales Alenia Space has been studying the feasibility of orbital data centres since 2023, while companies such as SpaceX, Blue Origin, and several startups in the US and the UAE are exploring similar concepts at varying stages of development and ambition.

Supporters argue that space data centres could reduce environmental impacts on Earth, benefit from constant solar energy and simplify cooling. However, experts warn that operating in space brings its own challenges, including exposure to radiation, solar flares and space debris, as well as higher costs and greater difficulty when repairs are needed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google launches Project Genie allowing users to create interactive AI-generated worlds

Google has launched Project Genie, an experimental prototype that allows users to create and explore interactive AI-generated worlds. The web application, powered by Genie 3, Nano Banana Pro, and Gemini, is rolling out to Google AI Ultra subscribers in the US aged 18 and over.

Genie 3 is a world model: it simulates environmental dynamics and predicts, in real time, how user actions will affect them. Unlike static 3D snapshots, the technology generates the surrounding world on the fly as users move and interact, simulating physics for dynamic environments.
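
To make the idea of a world model concrete, here is a deliberately toy sketch of the interaction loop described above: the model holds an internal state and, for each user action, produces the next frame. None of the names below (ToyWorldModel, Frame) reflect Genie 3’s actual architecture or any Google API; this only illustrates the state-action-prediction pattern.

```python
# Toy illustration of a world-model interaction loop. This is NOT Genie 3's
# architecture or API; a real world model would replace the arithmetic below
# with a large neural network predicting video frames from state and action.
from dataclasses import dataclass


@dataclass
class Frame:
    pixels: list[int]  # stand-in for rendered image data


class ToyWorldModel:
    def __init__(self, prompt: str):
        # a real system would encode the text/image prompt into a latent state
        self.state = sum(ord(c) for c in prompt) % 1000

    def step(self, action: str) -> Frame:
        # evolve the internal state from the user's action, then "render" it;
        # this is where a learned model would simulate physics and dynamics
        self.state = (self.state * 31 + sum(ord(c) for c in action)) % 1000
        return Frame(pixels=[self.state])


world = ToyWorldModel("a foggy harbour at dawn")
for action in ["walk forward", "turn left", "walk forward"]:
    frame = world.step(action)  # each action yields a newly generated frame
    print(frame)
```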

Project Genie centres on three core capabilities: world sketching, exploration, and remixing. Users can prompt with text and images to create environments, define character perspectives, and preview worlds before entering.

The experimental prototype has known limitations, including generation sessions capped at 60 seconds, occasional deviations from prompts or real-world physics, and intermittent character-controllability issues.

Google emphasises responsible development as part of its mission to build AI that benefits humanity, with ongoing improvements planned based on user feedback.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms up to 6 percent of global annual turnover or impose compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Large language models mirror human brain responses to unexpected twists

Researchers at the University of Chicago are using AI to uncover insights into how the human brain processes surprise. The project, directed by Associate Professor Monica Rosenberg, compares human and AI responses to narrative moments to explore cognitive processes.

In the study, participants listened to stories while researchers recorded their responses through brain scans. The team then fed the identical stories to the language model Llama, prompting it to predict the subsequent text after each segment.

When AI predictions diverged from actual story content, that gap served as a measure of surprise, mirroring the discrepancy human readers experience when expectations fail.
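
The study’s exact pipeline is not described here, but the core quantity is straightforward to sketch. Below is a minimal illustration, assuming surprise is operationalised as token-level surprisal (negative log-likelihood) under a causal language model, averaged over chunks of roughly 15 words; the model name, chunk size, and chunking scheme are assumptions for illustration, not details from the research.

```python
# Minimal sketch: per-chunk "surprise" as mean token surprisal (negative
# log-likelihood) under a causal language model, given all preceding text.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.2-1B"  # assumed stand-in for "Llama"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()


def chunk_surprisal(story: str, words_per_chunk: int = 15) -> list[float]:
    """Mean surprisal (in nats) of each ~15-word chunk, given prior text."""
    words = story.split()
    chunks = [" ".join(words[i:i + words_per_chunk])
              for i in range(0, len(words), words_per_chunk)]
    scores: list[float] = []
    context = ""
    for chunk in chunks:
        full = (context + " " + chunk).strip()
        ctx_tokens = len(tokenizer(context)["input_ids"]) if context else 0
        ids = tokenizer(full, return_tensors="pt")["input_ids"]
        with torch.no_grad():
            logits = model(ids).logits
        # log-probability the model assigned to each actual next token
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        token_lp = log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]
        # keep only tokens belonging to the new chunk (boundary is approximate)
        new_chunk_lp = token_lp[max(ctx_tokens - 1, 0):]
        scores.append(float(-new_chunk_lp.mean()))
        context = full
    return scores
```

Chunks with the highest mean surprisal are the moments where the model’s expectations failed most sharply, which is the kind of signal the researchers compared against participants’ reported feelings and brain responses.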

Results showed a striking alignment between the AI’s prediction errors and both participants’ reported feelings and their brain-scan activity patterns. The correlation emerged when texts were analysed in chunks of 10 to 20 words, suggesting that humans and AI encode surprise at the broader level at which ideas unfold, rather than word by word.

Fourth-year data science student Bella Summe, involved in the Cognition, Attention and Brain Lab research, noted the creative challenge of working in an emerging field.

Because few studies have explored whether LLM prediction errors can serve as measures of human surprise, the team had to solve problems and adapt its experimental design constantly throughout the project.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!