Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, and three in five US adults report having turned to it for health advice in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meet the voice-first AI companion with personality

Portola has launched Tolan, a voice-first AI companion that learns from ongoing conversations through personalised, animated characters. Tolan is designed for open-ended dialogue, making voice interactions more natural and engaging than standard text-based AI.

Built around memory and character design, the platform uses real-time context reconstruction to maintain personality and track shifting topics. Each turn, the system retrieves user memories, persona traits, and conversation tone, enabling coherent, adaptive responses.

Moving to GPT‑5.1 improved latency, steerability, and consistency, cutting memory recall errors by 30% and boosting next-day retention by more than 20%.

Tolan’s architecture combines fast vector-based memory, dynamic emotional adjustment, and layered persona scaffolds. Sub-second responses and context rebuilding help the AI handle topic changes, maintain tone, and feel more human-like.
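For readers curious about what such a per-turn pipeline could look like in practice, here is a minimal Python sketch of context reconstruction: retrieve a few relevant memories from a small vector store, fold in persona traits and a rough tone signal, and assemble the prompt for the next response. The function and class names (MemoryStore, build_turn_context and so on) are illustrative assumptions, not Portola's actual code.

```python
from dataclasses import dataclass, field
import math

# Illustrative sketch of per-turn context reconstruction (not Portola's actual code).

def embed(text: str) -> list[float]:
    # Stand-in embedding: a normalised character histogram.
    # A real system would call an embedding model instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Dot product of two already-normalised vectors.
    return sum(x * y for x, y in zip(a, b))

@dataclass
class MemoryStore:
    items: list[tuple[list[float], str]] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def top_k(self, query: str, k: int = 3) -> list[str]:
        # Fast vector-based recall: rank stored memories by similarity to the query.
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

def build_turn_context(store: MemoryStore, persona: dict, user_utterance: str) -> str:
    """Rebuild the prompt context for one conversational turn."""
    memories = store.top_k(user_utterance)                    # retrieve user memories
    tone = "playful" if "!" in user_utterance else "calm"     # crude stand-in for tone tracking
    return (
        f"Persona: {persona['name']}, traits: {', '.join(persona['traits'])}\n"
        f"Tone: {tone}\n"
        f"Relevant memories: {'; '.join(memories)}\n"
        f"User: {user_utterance}"
    )

if __name__ == "__main__":
    store = MemoryStore()
    store.add("User is learning to play the guitar.")
    store.add("User has a dog named Miso.")
    persona = {"name": "Tolan", "traits": ["curious", "warm"]}
    print(build_turn_context(store, persona, "Any tips for practising chords?"))
```

A production system would swap the toy embedding for a real embedding model and a dedicated vector database, but the per-turn retrieve-and-rebuild pattern is the idea described above.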

Since February 2025, Tolan has gained over 200,000 monthly users, earning a 4.8-star rating on the App Store. Future plans focus on multimodal voice agents integrating vision, context, and enhanced steerability to expand the boundaries of interactive AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Universal Music Group partners with NVIDIA on AI music strategy

UMG has entered a strategic collaboration with NVIDIA to reshape how billions of fans discover, experience and engage with music by using advanced AI.

The initiative combines NVIDIA’s AI infrastructure with UMG’s extensive global catalogue, aiming to elevate music interaction rather than relying solely on traditional search and recommendation systems.

The partnership will focus on AI-driven discovery and engagement that interprets music at a deeper cultural and emotional level.

By analysing full-length tracks, the technology is designed to surface music through narrative, mood and context, offering fans richer exploration while helping artists reach audiences more meaningfully.

Artist empowerment sits at the centre of the collaboration, with plans to establish an incubator where musicians and producers help co-design AI tools.

The goal is to enhance originality and creative control instead of producing generic outputs, while ensuring proper attribution and protection of copyrighted works.

Universal Music Group and NVIDIA also emphasise responsible AI development, combining technical safeguards with industry oversight.

By aligning innovation with artist rights and fair compensation, both companies aim to set new standards for how AI supports creativity across the global music ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT Health offers personalised health support

OpenAI has launched ChatGPT Health, a secure platform linking users’ health information with ChatGPT’s intelligence. The platform supports, rather than replaces, medical care, helping users understand test results, prepare for appointments, and manage their wellness.

ChatGPT Health allows users to safely connect medical records and apps such as Apple Health, Function, and MyFitnessPal. All data is stored in a separate Health space with encryption and enhanced privacy to keep sensitive information secure.

Conversations in Health are not used to train OpenAI’s models.

The platform was developed with input from more than 260 physicians worldwide, helping ensure that guidance is accurate and clinically relevant, and that it prioritises safety.

HealthBench, a physician-informed evaluation framework, helps measure quality, clarity, and appropriate escalation in responses, supporting users in making informed decisions about their health.

ChatGPT Health is initially available outside the EEA, Switzerland, and the UK, with wider access expected in the coming weeks. Users can sign up for a waitlist and begin connecting records and wellness apps to receive personalised, context-aware health insights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Roblox rolls out facial age checks for chat

Online gaming platform Roblox has begun a global rollout requiring facial age checks before users can access chat features, expanding a system first tested in selected regions late last year.

The measure applies wherever chat is available and aims to create age-appropriate communication environments across the platform.

Instead of relying on self-declared ages, Roblox uses facial age estimation to group users and restrict interactions, limiting contact between adults and children under 16. Younger users need parental consent to chat, while verified users aged 13 and over can connect more freely through Trusted Connections.
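As an illustration of the kind of gating logic described above, the sketch below encodes the reported rules as a simple Python function; the age thresholds and parameter names are assumptions drawn from this article, not Roblox’s implementation.

```python
def can_chat(age_a: int, age_b: int, consent_a: bool = False, consent_b: bool = False) -> bool:
    """Toy chat gate based on estimated ages, mirroring the rules reported above."""
    for age, consent in ((age_a, consent_a), (age_b, consent_b)):
        if age < 13 and not consent:
            return False  # younger users need parental consent before chatting
    if (age_a >= 18 and age_b < 16) or (age_b >= 18 and age_a < 16):
        return False      # limit contact between adults and children under 16
    return True

print(can_chat(25, 14))                  # False: adult paired with an under-16
print(can_chat(12, 13, consent_a=True))  # True: parental consent covers the younger user
```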

The company says privacy safeguards remain central, with images deleted immediately after secure processing and no image sharing allowed in chat. Appeals, ID verification and parental controls support accuracy, while ongoing behavioural checks may trigger repeat age verification if discrepancies appear.

Roblox plans to extend age checks beyond chat later in 2026, including creator tools and community features, as part of a broader push to strengthen online safety and rebuild trust in youth-focused digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI agents are quietly rebuilding the foundations of the global economy 

AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search interest in ‘AI agents’ surged throughout the year, reflecting a broader shift in how businesses and institutions approach automation and decision-making.

Market forecasts suggest that 2026 and the years ahead will bring an even larger boom in AI agents, driven by massive global investment and expanding real-world deployment. As a result, AI agents are increasingly viewed as a foundational layer of the next phase of the digital economy.

What are AI agents, and why do they matter?

AI agents are autonomous software systems designed to perceive information, make decisions, and act independently to achieve specific goals. Unlike conventional AI tools, which respond to prompts or perform single functions and often require direct supervision, AI agents are proactive and operate across multiple domains.

They can plan, adapt, and coordinate various steps across workflows, anticipating needs, prioritising tasks, and collaborating with other systems or agents without constant human intervention.
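The paragraph above describes the classic perceive–decide–act loop. A minimal, purely illustrative Python sketch of that pattern might look like this; the task names and planning rules are hypothetical and stand in for whatever model- or policy-driven logic a real agent framework would use.

```python
from dataclasses import dataclass

# Minimal perceive-decide-act loop; task names and planning rules are illustrative only.

@dataclass
class Task:
    name: str
    done: bool = False

class InvoiceAgent:
    """Toy agent that works toward a goal without per-step human prompts."""

    def __init__(self, goal: str):
        self.goal = goal
        self.plan = [Task("collect invoices"), Task("flag anomalies"), Task("draft report")]

    def perceive(self) -> Task | None:
        # Observe the environment: here, simply the next unfinished task in the plan.
        return next((t for t in self.plan if not t.done), None)

    def decide(self, task: Task) -> str:
        # Choose an action; a real agent would call a model or policy here.
        return f"execute:{task.name}"

    def act(self, action: str, task: Task) -> None:
        print(f"[{self.goal}] {action}")
        task.done = True

    def run(self) -> None:
        # Keep perceiving and acting until the plan is complete.
        while (task := self.perceive()) is not None:
            self.act(self.decide(task), task)

if __name__ == "__main__":
    InvoiceAgent(goal="monthly close").run()
```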

As a result, AI agents are not just incremental upgrades to existing software; they represent a fundamental change in how organisations leverage technology. By taking ownership of complex processes and decision-making workflows, AI agents enable businesses to operate at scale, adapt more rapidly to change, and unlock opportunities that were previously out of reach with traditional AI tools alone.

They fundamentally change how AI is applied in enterprise environments, moving from task automation to outcome-driven execution. 

Behind the scenes, autonomous AI agents are moving into the core of economic systems, reshaping workflows, authority, and execution across the entire value chain.

Why AI agents became a breakout trend in 2025

Several factors converged in 2025 to push AI agents into the mainstream. Advances in large language models, improved reasoning capabilities, and lower computational costs made agent-based systems commercially viable. At the same time, enterprises faced growing pressure to increase efficiency amid economic uncertainty and labour constraints. 

AI agents gained traction not because of their theoretical promise, but because they delivered measurable results. Companies deploying AI agents reported faster execution, lower operational overhead, and improved scalability across departments. As adoption accelerated, AI agents became one of the most visible indicators of where new technology was heading next.

Global investment is accelerating the AI agents boom

Investment trends underline the strategic importance of AI agents. Venture capital firms, technology giants, and state-backed innovation funds are allocating significant capital to agent-based platforms, orchestration frameworks, and AI infrastructure. These investments are not experimental in nature; they reflect long-term bets on autonomous systems as core business infrastructure.

Large enterprises are committing internal budgets to AI agent deployment, often integrating them directly into mission-critical operations. As funding flows into both startups and established players, competition is intensifying, further accelerating innovation and adoption across global markets. 

The AI agents market is projected to surge from approximately $7.92 billion in 2025 to surpass $236 billion by 2034, driven by a compound annual growth rate (CAGR) exceeding 45%.
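For anyone who wants to check how those headline numbers hang together, a couple of lines of Python reproduce the implied growth rate (assuming the standard compound-annual-growth formula; the figures are the ones quoted above):

```python
# Rough sanity check of the projection: $7.92bn (2025) growing to ~$236bn (2034).
start, end, years = 7.92, 236.0, 2034 - 2025  # nine compounding periods

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")                              # ~45.8%, i.e. 'exceeding 45%'
print(f"Value at a flat 45% CAGR: {start * 1.45 ** years:.0f}bn")  # ~224bn
```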

Where AI agents are already being deployed at scale

Agent-based systems are no longer limited to experimental use, as adoption at scale is taking shape across various industries. In finance, AI agents manage risk analysis, fraud detection, reporting workflows, and internal compliance processes. Their ability to operate continuously and adapt to changing data makes them particularly effective in data-intensive environments.

In business operations, AI agents are transforming customer support, sales operations, procurement, and supply chain management. Autonomous agents handle inquiries, optimise pricing strategies, and coordinate logistics with minimal supervision.

One of the clearest areas of AI agent influence is software development, where teams are increasingly adopting autonomous systems for code generation, testing, debugging, and deployment. These systems reduce development cycles and allow engineers to focus on higher-level design and architecture. It is expected that by 2030, around 70% of developers will work alongside autonomous AI agents, shifting human roles toward planning, design, and orchestration.

Healthcare, research, and life sciences are also adopting AI agents for administrative automation, data analysis, and workflow optimisation, freeing professionals from repetitive tasks and improving operational efficiency.

The economic impact of AI agents on global productivity

The broader economic implications of AI agents extend far beyond individual companies. At scale, autonomous AI systems have the potential to boost global productivity by eliminating structural inefficiencies across various industries. By automating complex, multi-step processes rather than isolated tasks, AI agents compress decision timelines, lower transaction costs, and remove friction from business operations.

Unlike traditional automation, AI agents operate across entire workflows in real time. This enables organisations to respond more quickly to market changes and shifts in demand, increasing operational agility and efficiency at a systemic level.

Labour markets will also evolve as agent-based systems become embedded in daily operations. Routine and administrative roles are likely to decline, while demand will rise for skills related to oversight, workflow design, governance, and strategic management of AI-driven operations. Human value is expected to shift toward planning, judgement, and coordination. 

Countries and companies that successfully integrate autonomous AI into their economic frameworks are likely to gain structural advantages in efficiency and growth, while those that do not risk falling behind in an increasingly automated global economy.

AI agents and the future evolution of AI 

The momentum behind AI agents shows no signs of slowing. Forecasts indicate that adoption will expand rapidly in 2026 as costs decline, standards mature, and regulatory clarity improves. For organisations, the strategic question is no longer whether AI agents will become mainstream, but how quickly they can be integrated responsibly and effectively. 

As AI agents mature, their influence will extend beyond business operations to reshape global economic structures and societal norms. They will enable entirely new industries, redefine the value of human expertise, and accelerate innovation cycles, fundamentally altering how economies operate and how people interact with technology in daily life. 

The widespread integration of AI agents will also reshape the world we know. From labour markets to public services, education, and infrastructure, societies will experience profound shifts as humans and autonomous systems collaborate more closely.

Companies and countries that adopt these technologies strategically will gain a structural advantage, while those that hesitate risk falling behind in both economic and social innovation.

Ultimately, AI agents are not just another technological advancement; they are becoming a foundational infrastructure for the future economy. Their autonomy, intelligence, and scalability position them to influence how value is created, work is organised, and global markets operate, marking a turning point in the evolution of AI and its role in shaping the modern world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung puts AI trust and security at the centre of CES 2026

South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.

During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably, and users retain clear control instead of feeling locked inside opaque technologies.

Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.

Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.

Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.

Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.

Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.

The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chatbots under scrutiny in China over AI ‘boyfriend’ and ‘girlfriend’ services

China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.

Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.

The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.

The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.

Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok misuse prompts UK scrutiny of Elon Musk’s X

UK Technology Secretary Liz Kendall has urged Elon Musk’s X to act urgently after reports that its AI chatbot Grok was used to generate non-consensual sexualised deepfake images of women and girls.

The BBC identified multiple examples on X where users prompted Grok to digitally alter images, including requests to make people appear undressed or place them in sexualised scenarios without consent.

Kendall described the content as ‘absolutely appalling’ and said the government would not allow the spread of degrading images. She added that Ofcom had her full backing to take enforcement action where necessary.

The UK media regulator confirmed it had made urgent contact with xAI and was investigating concerns that Grok had produced undressed images of individuals. X has been approached for comment.

Kendall said the issue was about enforcing the law rather than limiting speech, noting that intimate image abuse, including AI-generated content, is now a priority offence under the Online Safety Act.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!