UK considers regulatory action after Grok’s deepfake images on X

UK Prime Minister Keir Starmer is consulting Canada and Australia on a coordinated response to concerns surrounding social media platform X, after its AI assistant Grok was used to generate sexualised deepfake images of women and children.

The discussions focus on shared regulatory approaches rather than immediate bans.

X acknowledged weaknesses in its AI safeguards and limited image generation to paying users. Lawmakers in several countries have stated that further regulatory scrutiny may be required, while Canada has clarified that no prohibition is currently under consideration, despite concerns over platform responsibility.

In the UK, media regulator Ofcom is examining potential breaches of online safety obligations. Technology secretary Liz Kendall confirmed that enforcement mechanisms remain available if legal requirements are not met.

Australian Prime Minister Anthony Albanese also raised broader concerns about social responsibility in the use of generative AI.

X owner Elon Musk rejected accusations of non-compliance, describing potential restrictions as censorship and suppression of free speech.

European authorities requested the preservation of internal records for possible investigations, while Indonesia and Malaysia have already blocked access to the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes AI health summaries after safety concerns

Google removed some AI health summaries after a Guardian investigation found they gave misleading and potentially dangerous information. The AI Overviews contained inaccurate liver test data, potentially leading patients to falsely believe they were healthy.

Experts have criticised AI Overviews for oversimplifying complex medical topics, ignoring essential factors such as age, sex, and ethnicity. Charities have warned that misleading AI content could deter people from seeking medical care and erode trust in online health information.

Google removed AI Overviews for some queries, but concerns remain over cancer and mental health summaries that may still be inaccurate or unsafe. Professionals emphasise that AI tools must direct users to reliable sources and advise seeking expert medical input.

The company stated it is reviewing flagged examples and making broad improvements, but experts insist that more comprehensive oversight is needed to prevent AI from dispensing harmful health misinformation.

Indonesia and Malaysia restrict access to Grok AI over content safeguards

Malaysia and Indonesia have restricted access to Grok, the AI chatbot available through the X platform, following concerns about its image generation capabilities.

Authorities said the tool had been used to create manipulated images depicting real individuals in sexually explicit contexts.

Regulatory bodies in Malaysia and Indonesia stated that the decision was based on the absence of sufficient safeguards to prevent misuse.

Requests for additional risk mitigation measures were communicated to the platform operator, with access expected to remain limited until further protections are introduced.

The move has drawn attention from regulators in other regions, where online safety frameworks allow intervention when digital services fail to address harmful content. Discussions have focused on platform responsibility, content moderation standards, and compliance with existing legal obligations.

AI race shows diverging paths for China and the US

The US administration’s new AI action plan frames global development as an AI race with a single winner. Officials argue AI dominance brings economic, military, and geopolitical advantages. Experts say competition is unfolding across multiple domains.

The United States continues to lead in the development of advanced large language and multimodal models by firms such as OpenAI, Google, and Anthropic. American companies also dominate global computing infrastructure. Control over high-end AI chips and data-centre capacity remains concentrated in US firms.

Chinese companies are narrowing the gap in the practical applications of AI. Models from Alibaba, DeepSeek, and Moonshot AI perform well in tasks such as translation, coding, and customer service. Performance at the cutting edge still lags behind US systems.

Washington’s decision to allow limited exports of Nvidia’s H200 AI chips to China reflects a belief that controlled sales can preserve US leadership. Critics argue the move risks weakening America’s computing advantage. Concerns persist over long-term strategic consequences.

Rather than a decisive victory for either side, analysts foresee an era of asymmetric competition: the United States may dominate advanced AI services, while China is expected to lead in large-scale industrial deployment.

Wegmans faces backlash over facial recognition in US stores

Supermarket chain Wegmans Food Markets is facing scrutiny over its use of facial recognition technology. The issue emerged after New York City stores displayed signs warning that biometric data could be collected for security purposes.

New York law requires businesses to disclose biometric data collection, but the wording of the notices alarmed privacy advocates. Wegmans later said it only uses facial recognition, not voice or eye scans, and only in a small number of higher-risk stores.

According to the US company, the system identifies individuals who have been previously flagged for misconduct, such as theft or threatening behaviour. Wegmans says facial recognition is just one investigative tool and that all actions are subject to human review.

Critics argue the signage suggests broader surveillance than the company admits. Wegmans has not explained why the notices mention eyes and voice if that data is not collected, or when the wording might be revised.

Lawmakers in Connecticut have now proposed a ban on retail facial recognition. Supporters say grocery shopping is essential and that biometric monitoring weakens meaningful customer consent.

AI helps solve alpine rescue mystery

AI-powered image analysis helped Italian rescuers locate a missing mountaineer in the Alps. Traditional searches had failed across vast, remote terrain despite days of effort.

Drones captured thousands of images which AI software scanned for unusual colours and shapes. A small red object, later confirmed as a helmet, guided teams to the site.

The climber’s body was found in a steep gully on Monviso, in Italy, after AI narrowed search zones. Manual checks and human judgement remained essential to confirm findings.

Rescue experts say AI can cut search times dramatically but cannot replace human oversight. Terrain complexity, weather, and ethical concerns still limit wider deployment.

EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order falls under the Digital Services Act after concerns Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

UMMC conducts one of first multi-country live surgeries using 5G and AI

Universiti Malaya Medical Centre has carried out what it described as one of the world’s first real-time, multi-country live surgeries using a 5G-enabled AI and extended reality platform.

The ear, nose, and throat procedure took place in Petaling Jaya using apoQlar’s HoloMedicine Robotics extended reality system. Surgeons were connected with international students and specialists through CelcomDigi’s 5G network.

Participants joined from the United States, South Korea, Bhutan, the Philippines, Indonesia, Thailand, Singapore, and several states in Malaysia. Institutions included Harvard Medical School, the Mayo Clinic, and Vanderbilt University Medical Center.

The platform delivered three-dimensional views, live annotations, and two-way communication between the surgical team and international experts. CelcomDigi said its ultra-low-latency 5G connectivity enabled high-definition video and synchronised audio throughout the procedure.

UMMC said the live surgeries initiative demonstrated how extended reality and AI tools can support remote training and specialist collaboration without disrupting clinical workflows. The hospital plans to conduct further live urology, colorectal, and ENT sessions using the same system.

X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act entered into force in March 2024, prompting several non-compliance investigations against Apple, Meta, and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.
