EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order, issued under the Digital Services Act, follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues the EU’s oversight of the platform, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during the German election campaign. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Global AI adoption reaches record levels in 2025

Global adoption of generative AI continued to rise in the second half of 2025, reaching 16.3 percent of the world’s population. Around one in six people now use AI tools for work, learning, and problem-solving, marking rapid progress for a technology still in its early years.

Adoption remains uneven, with the Global North growing nearly twice as fast as the Global South. Countries with early investments in digital infrastructure and AI policies, including the UAE, Singapore, and South Korea, lead the way.

South Korea saw the most significant gain, rising seven spots globally due to government initiatives, improved Korean-language models, and viral consumer trends.

The UAE maintains its lead, benefiting from years of foresight, including an early AI strategy, dedicated ministries, and regulatory frameworks that foster trust and widespread usage.

Meanwhile, open-source platforms such as DeepSeek are expanding access in underserved markets, including Africa, China, and Iran, lowering financial and technical barriers for millions of new users.

While AI adoption grows globally, disparities persist. Policymakers and developers face the challenge of ensuring that the next wave of AI users benefits broader communities, narrowing divides rather than deepening them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Gmail enters the Gemini era with AI-powered inbox tools

Google is reshaping Gmail around its Gemini AI models, aiming to turn email into a proactive assistant for more than three billion users worldwide.

With inbox volumes continuing to rise, the focus shifts towards managing information flows instead of simply sending and receiving messages.

New AI Overviews allow Gmail to summarise long email threads and answer natural language questions directly from inbox content.

Users can retrieve details from past conversations without complex searches, while conversation summaries roll out globally at no cost, with advanced query features reserved for paid AI subscriptions.

Writing tools are also expanding, with Help Me Write, upgraded Suggested Replies, and Proofread features designed to speed up drafting while preserving individual tone and style.

Deeper personalisation is planned through connections with other Google services, enabling emails to reflect broader user context.

A redesigned AI Inbox further prioritises urgent messages and key tasks by analysing communication patterns and relationships.

Powered by Gemini 3, these features begin rolling out in the US in English, with additional languages and regions scheduled to follow during 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act’s obligations for designated gatekeepers took effect in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netomi shows how to scale enterprise AI safely

Netomi has developed a blueprint for scaling enterprise AI, utilising GPT-4.1 for rapid tool use and GPT-5.2 for multi-step reasoning. The platform supports complex workflows, policy compliance, and heavy operational loads, serving clients such as United Airlines and DraftKings.

The company emphasises three core lessons. First, systems must handle real-world complexity, orchestrating multiple APIs, databases, and tools to maintain state and situational awareness across multi-step workflows.

Second, parallelised architectures ensure low latency even under extreme demand, keeping response times fast and reliable during spikes in activity.

Third, governance is embedded directly into the runtime, enforcing compliance, protecting sensitive data, and providing deterministic fallbacks when AI confidence is low.
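That third lesson is easiest to picture as a thin wrapper around the model call. The sketch below is a minimal illustration under assumed names and thresholds, not Netomi’s implementation: it returns the model’s answer only when its reported confidence clears a cut-off, applies a placeholder redaction step, and otherwise falls back to a fixed, policy-approved reply.

```python
import re
from dataclasses import dataclass

# Hypothetical names and values for illustration; the article does not describe Netomi's internal APIs.

@dataclass
class AgentResult:
    answer: str
    confidence: float  # model-reported confidence in [0, 1]

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off; a production system would tune this per workflow

def redact_sensitive(text: str) -> str:
    """Placeholder data-protection step: mask account identifiers before the reply leaves the runtime."""
    return re.sub(r"ACCT-\d+", "ACCT-[REDACTED]", text)

def respond(result: AgentResult) -> str:
    """Return the model's answer only when confidence is high; otherwise use a deterministic fallback."""
    if result.confidence < CONFIDENCE_THRESHOLD:
        # Deterministic fallback: a fixed, policy-approved reply (or a handoff to a human agent).
        return "I want to make sure this is handled correctly, so I'm escalating it to a support specialist."
    return redact_sensitive(result.answer)

print(respond(AgentResult(answer="Your booking under ACCT-492117 is confirmed.", confidence=0.93)))
print(respond(AgentResult(answer="Your refund should arrive within...", confidence=0.55)))
```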

Netomi demonstrates how agentic AI can be safely scaled, providing enterprises with a model for auditable, predictable, and resilient intelligent systems. These practices serve as a roadmap for organisations seeking to move AI from experimental tools to production-ready infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meet the voice-first AI companion with personality

Portola has launched Tolan, a voice-first AI companion that learns from ongoing conversations through personalised, animated characters. Tolan is designed for open-ended dialogue, making voice interactions more natural and engaging than standard text-based AI.

Built around memory and character design, the platform uses real-time context reconstruction to maintain personality and track shifting topics. Each turn, the system retrieves user memories, persona traits, and conversation tone, enabling coherent, adaptive responses.

GPT‑5.1 has improved latency, steerability, and consistency, reducing memory recall errors by 30% and boosting next-day retention by over 20%.

Tolan’s architecture combines fast vector-based memory, dynamic emotional adjustment, and layered persona scaffolds. Sub-second responses and context rebuilding help the AI handle topic changes, maintain tone, and feel more human-like.
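As a rough sketch of that kind of per-turn context reconstruction, the snippet below stores memories in a small vector index, retrieves the most relevant ones with a similarity search, and reassembles them with persona and tone before each reply. The stand-in embedding function, field names, and prompt layout are assumptions for illustration, not Portola’s actual design.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

class MemoryStore:
    """Minimal vector memory: store (text, embedding) pairs and recall by cosine similarity."""
    def __init__(self) -> None:
        self.items: list[tuple[str, np.ndarray]] = []

    def add(self, memory: str) -> None:
        self.items.append((memory, embed(memory)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda item: float(item[1] @ q), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_turn_context(store: MemoryStore, persona: str, tone: str, user_msg: str) -> str:
    """Rebuild the prompt context for one turn: persona, tone, relevant memories, then the new message."""
    memories = store.recall(user_msg)
    return (
        f"Persona: {persona}\nTone: {tone}\n"
        "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
        + f"\nUser: {user_msg}"
    )

store = MemoryStore()
store.add("User is learning Spanish and prefers short practice sessions.")
store.add("User mentioned an upcoming trip to Lisbon.")
print(build_turn_context(store, persona="curious and playful", tone="warm", user_msg="Help me plan the trip"))
```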

Since February 2025, Tolan has gained over 200,000 monthly users, earning a 4.8-star rating on the App Store. Future plans focus on multimodal voice agents integrating vision, context, and enhanced steerability to expand the boundaries of interactive AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Universal Music Group partners with NVIDIA on AI music strategy

UMG has entered a strategic collaboration with NVIDIA to reshape how billions of fans discover, experience and engage with music by using advanced AI.

The initiative combines NVIDIA’s AI infrastructure with UMG’s extensive global catalogue, aiming to elevate music interaction rather than relying solely on traditional search and recommendation systems.

The partnership will focus on AI-driven discovery and engagement that interprets music at a deeper cultural and emotional level.

By analysing full-length tracks, the technology is designed to surface music through narrative, mood and context, offering fans richer exploration while helping artists reach audiences more meaningfully.

Artist empowerment sits at the centre of the collaboration, with plans to establish an incubator where musicians and producers help co-design AI tools.

The goal is to enhance originality and creative control instead of producing generic outputs, while ensuring proper attribution and protection of copyrighted works.

Universal Music Group and NVIDIA also emphasise responsible AI development, combining technical safeguards with industry oversight.

By aligning innovation with artist rights and fair compensation, both companies aim to set new standards for how AI supports creativity across the global music ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!