UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI to personalised shopping

Google is working with major retailers to use AI in guiding customers from product discovery to checkout. The company has launched the Universal Commerce Protocol, an open standard for seamless agentic commerce that keeps retailers in control of customer relationships.

The Universal Commerce Protocol works with existing systems and partners, including Shopify, Etsy, Wayfair, Target, and Walmart.

Customers can receive personalised offers, loyalty rewards, and recommendations in Google Search or Gemini, completing purchases via Google Pay without leaving the platform.

To support retailers, Google has launched Gemini Enterprise for Customer Experience, which unifies search, commerce, and service touchpoints across all channels.

Early partners, such as The Home Depot and McDonald’s, are already using AI-powered agents to enhance service, provide proactive recommendations, and improve customer engagement.

Logistics also feature prominently, with Wing expanding delivery capabilities alongside Walmart, doubling operations in existing markets, and rolling out to Houston, Orlando, Tampa, Charlotte, and other cities.

Google aims to create an end-to-end shopping ecosystem where AI, agentic protocols, and seamless delivery elevate both customer and retailer experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes AI health summaries after safety concerns

Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.

Experts have criticised AI Overviews for oversimplifying complex medical topics and ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.

Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.

The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulators in both countries said the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Instagram responds to claims of user data exposure

Reports published by cybersecurity researchers indicated that data linked to approximately 17.5 million Instagram accounts had been offered for sale on underground forums.

The dataset reportedly includes usernames, contact details and physical address information, raising broader concerns around digital privacy and data aggregation.

Instagram responded within hours, stating that no breach of its internal systems had occurred. According to the company, some users received password reset emails after an external party abused a feature that has since been addressed.

The platform said affected accounts remained secure, with no unauthorised access recorded.

Security analysts have noted that risks arise when online identifiers are combined with external datasets, rather than originating from a single platform.

Such aggregation can increase exposure to targeted fraud, impersonation and harassment, reinforcing the importance of cautious digital security practices across social media ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order, issued under the Digital Services Act, follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global AI adoption reaches record levels in 2025

Global adoption of generative AI continued to rise in the second half of 2025, reaching 16.3 percent of the world’s population. Around one in six people now use AI tools for work, learning, and problem-solving, marking rapid progress for a technology still in its early years.

Adoption remains uneven, with the Global North growing nearly twice as fast as the Global South. Countries with early investments in digital infrastructure and AI policies, including the UAE, Singapore, and South Korea, lead the way.

South Korea recorded the largest gain, climbing seven places in the global rankings thanks to government initiatives, improved Korean-language models, and viral consumer trends.

The UAE maintains its lead, benefiting from years of foresight, including an early AI strategy, dedicated ministries, and regulatory frameworks that foster trust and widespread usage.

Meanwhile, open-source platforms such as DeepSeek are expanding access in underserved markets, including Africa, China, and Iran, lowering financial and technical barriers for millions of new users.

While AI adoption grows globally, disparities persist. Policymakers and developers face the challenge of ensuring that the next wave of AI users benefits broader communities, narrowing divides rather than deepening them.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gmail enters the Gemini era with AI-powered inbox tools

Google is reshaping Gmail around its Gemini AI models, aiming to turn email into a proactive assistant for more than three billion users worldwide.

With inbox volumes continuing to rise, the focus shifts towards managing information flows instead of simply sending and receiving messages.

New AI Overviews allow Gmail to summarise long email threads and answer natural language questions directly from inbox content.

Users can retrieve details from past conversations without complex searches, while conversation summaries roll out globally at no cost, with advanced query features reserved for paid AI subscriptions.

Writing tools are also expanding, with Help Me Write, upgraded Suggested Replies, and Proofread features designed to speed up drafting while preserving individual tone and style.

Deeper personalisation is planned through connections with other Google services, enabling emails to reflect broader user context.

A redesigned AI Inbox further prioritises urgent messages and key tasks by analysing communication patterns and relationships.

Powered by Gemini 3, these features begin rolling out in the US in English, with additional languages and regions scheduled to follow during 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!