X restricts Grok image editing after deepfake backlash

Elon Musk’s platform X has restricted image editing with its AI chatbot Grok to paying users, following widespread criticism over the creation of non-consensual sexualised deepfakes.

The move comes after Grok allowed users to digitally alter images of people, including removing clothing without consent. While free users can still access image tools through Grok’s separate app and website, image editing within X now requires a paid subscription linked to verified user details.

Legal experts and child protection groups said the change does not address the underlying harm. Professor Clare McGlynn said limiting access fails to prevent abuse, while the Internet Watch Foundation warned that unsafe tools should never have been released without proper safeguards.

UK government officials urged regulator Ofcom to use its full powers under the Online Safety Act, including possible financial restrictions on X. Prime Minister Sir Keir Starmer described the creation of sexualised AI images involving adults and children as unlawful and unacceptable.

The controversy has renewed pressure on X to introduce stronger ethical guardrails for Grok. Critics argue that restricting features to subscribers does not prevent misuse, and that meaningful protections are needed to stop AI tools from enabling image-based abuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU faces pressure to strengthen Digital Markets Act oversight

Rivals of major technology firms have criticised the European Commission for weak enforcement of the Digital Markets Act, arguing that slow procedures and limited transparency undermine the regulation’s effectiveness.

Feedback gathered during a Commission consultation highlights concerns about delaying tactics, interface designs that restrict user choice, and circumvention strategies used by designated gatekeepers.

The Digital Markets Act became fully applicable in March 2024, prompting several non-compliance investigations against Apple, Meta and Google. Although Apple and Meta have already faced fines, follow-up proceedings remain ongoing, while Google has yet to receive sanctions.

Smaller technology firms argue that enforcement lacks urgency, particularly in areas such as self-preferencing, data sharing, interoperability and digital advertising markets.

Concerns also extend to AI and cloud services, where respondents say the current framework fails to reflect market realities.

Generative AI tools, such as large language models, raise questions about whether existing platform categories remain adequate or whether new classifications are necessary. Cloud services face similar scrutiny, as major providers often fall below formal thresholds despite acting as critical gateways.

The Commission plans to submit a review report to the European Parliament and the Council by early May, drawing on findings from the consultation.

Proposed changes include binding timelines and interim measures aimed at strengthening enforcement and restoring confidence in the bloc’s flagship competition rules.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Portugal's government backs AI with €400 million plan

Portugal has announced a €400 million investment in AI over the period 2026-2030, primarily funded by European programmes. The National Artificial Intelligence Agenda (ANIA) and its Action Plan (PAANIA) aim to strengthen Portugal’s position in AI research, industry, and innovation.

The government predicts AI could boost the country’s GDP by €18-22 billion in the next decade. Officials highlight Portugal’s growing technical talent pool, strong universities and research centres, renewable energy infrastructure, and a dynamic start-up ecosystem as key advantages.

Key projects include establishing AI gigafactories and supercomputing facilities to support research, SMEs, and start-ups, alongside a National Data Centre Plan aimed at simplifying licensing and accelerating the sector's growth.

Early investments of €10 million target AI applications in public administration, with a total of €25 million planned.

Sectoral AI Centres will focus on healthcare and industrial robotics, leveraging AI to enhance patient care, improve efficiency, and support productivity, competitiveness, and the creation of skilled jobs.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rokid launches screenless AI smart glasses at CES 2026

Rokid, a global pioneer in AR, unveiled its new Style smart glasses at CES 2026, opting for a screenless, voice-first design instead of the visual displays standard across competing devices.

Weighing just 38.5 grams, the glasses are designed for everyday wear, with an emphasis on comfort and prescription readiness.

Despite lacking a screen, Rokid Style integrates AI through an open ecosystem that supports platforms such as ChatGPT, DeepSeek and Qwen. Global services, including Google Maps and Microsoft AI Translation, facilitate navigation and provide real-time language assistance across various regions.

The device adopts a prescription-first approach, supporting lenses from plano to ±15.00 diopters alongside photochromic, tinted and protective options.

Rokid has also launched a global online prescription service, promising delivery within seven to ten days.

Design features include titanium alloy hinges, silicone nose pads and a built-in camera capable of 4K video recording.

Battery life reaches up to 12 hours of daily use, with global pricing starting at $299, ahead of an online launch scheduled for January 19.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lynx ransomware group claims Regis subsidiary on dark web leak site

Regis Resources, one of Australia’s largest unhedged gold producers, has confirmed it is investigating a cyber incident after its subsidiary was named on a dark web leak site operated by a ransomware group.

The Lynx ransomware group listed McPhillamys Gold on Monday, claiming a cyberattack and publishing the names and roles of senior company executives. The group did not provide technical details or evidence of data theft.

The Australia-based company stated that the intrusion was detected in mid-November 2025 through its routine monitoring systems, prompting temporary restrictions on access to protect internal networks. The company said its cybersecurity controls were designed to isolate threats and maintain business continuity.

A forensic investigation found no evidence of data exfiltration and confirmed that no ransom demand had been received. Authorities were notified, and Regis said the incident had no operational or commercial impact.

Lynx, which first emerged in July 2024, has claimed hundreds of victims worldwide. The group says it avoids targeting critical public services, though it continues to pressure private companies through data leak threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI predicts heart failure risk in cattle

Researchers at the University of Wyoming in the US have developed an AI model that predicts the risk of congestive heart failure in cattle using heart images. The technology focuses on structural changes linked to pulmonary hypertension.

Developed by PhD researcher Chase Markel, the computer vision system was trained on nearly 7,000 manually scored images. The model correctly classifies heart risk levels in 92 percent of cases.
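
The article does not describe the model's architecture, but a common recipe for this kind of image-scoring task is to fine-tune a pretrained classifier on labelled examples. The Python sketch below is a minimal, hypothetical illustration of that recipe; the folder layout, risk grades and ResNet backbone are assumptions for illustration, not details from the Wyoming study.

```python
# Hypothetical sketch: fine-tuning a pretrained CNN to grade heart images.
# The architecture, labels, and data layout are assumptions; the article
# does not disclose how the Wyoming model is actually built.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from torch.utils.data import DataLoader

# Assumed layout: heart_images/<low|moderate|high>/*.jpg, one folder per risk grade.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("heart_images", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier
# head with one output per risk grade.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    correct, total = 0, 0
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: train accuracy {correct / total:.2%}")
```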

The images were collected in commercial cattle processing plants, allowing assessment at scale after slaughter. The findings support the need for improved traceability throughout the production cycle.

Industry use could enhance traceability and mitigate economic losses resulting from undetected disease. Patent protection is being pursued as further models are developed for other cattle conditions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI receptionist begins work at UK GP surgery

A GP practice in North Lincolnshire, UK, has introduced an AI receptionist named Emma to reduce long wait times on calls. Emma collects patient details and prioritises appointments for doctors to review.

Doctors say the system has improved efficiency, with most patients contacted within hours. Dr Satpal Shekhawat explained that the information Emma gathers helps the practice identify clinical priorities more effectively.

Some patients reported issues, including mistakes with dates of birth and difficulties explaining health problems. The practice reassured patients that human receptionists remain available and that the AI supports staff rather than replacing them.

The technology has drawn attention from other practices in the region. NHS officials are monitoring feedback to refine the system and improve patient experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, and three in five US adults report having used it for health questions in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI-designed sensors open new paths for early cancer detection

MIT and Microsoft researchers have developed AI-designed molecular sensors to detect cancer in its earliest stages. By coating nanoparticles with peptides targeted by cancer-linked enzymes, the sensors produce signals detectable through simple urine tests, potentially even at home.

The AI system, named CleaveNet, generates peptide sequences that are efficiently and selectively cleaved by specific proteases, enzymes that are overactive in cancer cells. The approach enables faster, more precise detection and can help identify a tumour's type and location.

CleaveNet, trained on 20,000+ peptide-protease interactions, has designed novel peptides for enzymes like MMP13 that cancer cells use to metastasise. The system may cut the number of peptides needed for diagnostics and reveal key biological pathways.
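
CleaveNet's internal design is not detailed here, but tools of this kind typically follow a generate-and-score pattern: propose candidate peptide sequences, then rank them by predicted cleavage propensity. The toy Python sketch below illustrates only that pattern; the scoring heuristic, the PLG motif and all parameters are invented placeholders rather than CleaveNet's actual method.

```python
# Toy illustration of the generate-and-score idea behind protease-substrate
# design. The scoring function here is a stand-in: a real system like
# CleaveNet learns cleavage propensities from measured peptide-protease data.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_peptide(length: int = 10) -> str:
    """Sample a candidate substrate sequence."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def cleavage_score(peptide: str, motif: str = "PLG") -> float:
    """Placeholder score: reward sequences containing a collagen-like motif
    (MMP-family proteases are often probed with Pro-Leu-Gly-containing
    substrates such as PLGLAG). A learned model would replace this heuristic."""
    motif_bonus = 1.0 if motif in peptide else 0.0
    hydrophilic = sum(peptide.count(aa) for aa in "DEKNQRST") / len(peptide)
    return motif_bonus + 0.5 * hydrophilic  # favour soluble probes

# Generate-and-filter: propose many candidates, keep the top scorers.
candidates = [random_peptide() for _ in range(10000)]
ranked = sorted(candidates, key=cleavage_score, reverse=True)
for peptide in ranked[:5]:
    print(peptide, round(cleavage_score(peptide), 3))
```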

Researchers plan an at-home kit to detect 30 cancers, with peptides also usable for targeted therapies. The work is part of an ARPA-H-funded initiative and highlights the potential of AI to accelerate early cancer detection and treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!