EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered applications for nudity are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving any legitimate creative purpose.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on the largest online platforms risks leaving similar applications operating elsewhere unchecked.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia’s social media age limit prompts restrictions on millions of under-16 accounts

Major social media platforms restricted access to approximately 4.7 million accounts linked to children under 16 across Australia during early December, following the introduction of the national social media minimum age requirement.

Initial figures collected by eSafety indicate that platforms with high youth usage are already engaging in early compliance efforts.

Since the obligation took effect on 10 December, regulatory focus has shifted from preparation towards monitoring and enforcement, targeting services assessed as age-restricted.

Early data suggests meaningful steps are being taken, although authorities stress it remains too soon to determine whether platforms have achieved full compliance.

eSafety has emphasised continuous improvement in age-assurance accuracy, alongside the industry’s responsibility to prevent circumvention.

Reports indicate some under-16 accounts remain active, although early signals point towards reduced exposure and gradual behavioural change rather than immediate elimination.

Officials note that the broader impact of the minimum age policy will emerge over time, supported by a planned independent, longitudinal evaluation involving academic and youth mental health experts.

Data collection will continue to monitor compliance, platform migration trends and long-term safety outcomes for children and families in Australia.

Grok faces investigation over deepfake abuse claims

California Attorney General Rob Bonta has launched an investigation into xAI, the company behind the Grok chatbot, over the creation and spread of nonconsensual sexually explicit images.

Bonta’s office said Grok has been used to generate deepfake intimate images of women and children, which have then been shared on social media platforms, including X.

Officials said users have taken ordinary photos and manipulated them into sexually explicit scenarios without consent, with xAI’s ‘spicy mode’ contributing to the problem.

‘We have zero tolerance for the AI-based creation and dissemination of nonconsensual intimate images or child sexual abuse material,’ Bonta said in a statement.

The investigation will examine whether xAI has violated the law and follows earlier calls for stronger safeguards to protect children from harmful AI content.

Nvidia H200 chip sales to China cleared by US administration

The US administration has approved the export of Nvidia’s H200 AI chips to China, reversing years of tight US restrictions on advanced AI hardware. The Nvidia H200 chips represent the company’s second-most-powerful chip series and were previously barred from sale due to national security concerns.

The US president announced the move last month, linking approval to a 25 per cent fee payable to the US government. The administration said the policy balances economic competitiveness with security interests, while critics warned it could strengthen China’s military and surveillance capabilities.

Under the new rules, Nvidia H200 chips may be shipped to China only after third-party testing verifies their performance. Chinese buyers are limited to 50 per cent of the volume sold to US customers and must provide assurances that the chips will not be used for military purposes.

Nvidia welcomed the decision, saying it would support US jobs and global competitiveness. However, analysts questioned whether the safeguards can be effectively enforced, noting that Chinese firms have previously accessed restricted technologies through intermediaries.

Chinese companies have reportedly ordered more than two million Nvidia H200 chips, far exceeding the chipmaker’s current inventory. The scale of demand has intensified debate over whether the policy will limit China’s AI ambitions or accelerate its access to advanced computing power.

OpenAI invests in Merge Labs to advance brain-computer interfaces

US AI company OpenAI has invested in Merge Labs as part of a seed funding round, signalling growing interest in brain-computer interfaces (BCIs) as a future layer of human–technology interaction.

Merge Labs describes its mission as bridging the gap between biology and AI to expand human capability and agency. The research lab is developing new BCI approaches designed to operate safely while enabling much higher communication bandwidth between the brain and digital systems.

AI is expected to play a central role in Merge Labs’ work, supporting advances in neuroscience, bioengineering and device development rather than relying on traditional interface models.

High-bandwidth brain interfaces are also expected to benefit from AI systems capable of interpreting intent under conditions of limited and noisy signals.

OpenAI plans to collaborate with Merge Labs on scientific foundation models and advanced tools, aiming to accelerate research progress and translate experimental concepts into practical applications over time.

Ransomware gang Everest claims data breach at Nissan Motor Corporation

Nissan Motor Corporation has been listed on the dark web by the Everest ransomware group, which is threatening to release allegedly stolen data within days unless a ransom is paid. The group claims to have exfiltrated around 900 gigabytes of company files.

Everest published sample screenshots showing folders linked to marketing, sales, dealer orders, warranty analysis, and internal communications. Many of the files appear to relate to Nissan’s operations in Canada, although some dealer records reference the United States.

Nissan has not issued a public statement about the alleged breach. The company has been contacted for comment, but no confirmation has been provided regarding the nature or scale of the incident.

Everest began as a ransomware operation in 2020 but is now believed to focus on gaining and selling network access using stolen credentials, insider recruitment, and remote access tools. The group is thought to be Russian-speaking and continues to recruit affiliates through its leak site.

The Nissan listing follows recent claims by Everest involving Chrysler and ASUS. In those cases, the group said it had stolen large volumes of personal and corporate data, with ASUS later confirming a supplier breach involving camera source code.

Britain’s transport future tied to AI investment

AI is expected to play an increasingly important role in improving Britain’s road and rail networks. MPs highlighted its potential during a transport-focused industry summit in Parliament.

The Transport Select Committee chair welcomed government investment in AI and infrastructure. Road maintenance, connectivity and reduced delays were cited as priorities for economic growth.

UK industry leaders showcased AI tools that autonomously detect and repair potholes. Businesses said more intelligent systems could improve reliability while cutting costs and disruption.

Experts warned that stronger cybersecurity must accompany AI deployment. Safeguards are needed to protect critical transport infrastructure from external threats and misuse.

Belgian hospital AZ Monica hit by cyberattack

A cyberattack hit AZ Monica hospital in Belgium, forcing the shutdown of all servers, cancellation of scheduled procedures, and transfer of critical patients. The hospital network, with campuses in Antwerp and Deurne, provides acute, outpatient, and specialised care to the local population.

The attack was detected at 6:32 a.m., prompting staff to disconnect systems proactively. While urgent care continues, non-urgent consultations and surgeries have been postponed due to restricted access to the digital medical record.

Seven critical patients were safely transferred with Red Cross support.

Authorities and hospital officials have launched an investigation, notifying police and prosecutors. Details of the attack remain unclear, and unverified reports of a ransom demand have not been confirmed.

The hospital emphasised that patient safety and continuity of care are top priorities.

Cyberattacks on hospitals can severely disrupt medical services, delay urgent treatments, and put patients’ lives at risk, highlighting the growing vulnerability of healthcare systems to digital threats.

Microsoft disrupts global RedVDS cybercrime network

Microsoft has launched a joint legal action in the US and the UK to dismantle RedVDS, a subscription service supplying criminals with disposable virtual computers for large-scale fraud. The operation, conducted with German authorities and Europol, seized key domains and shut down the RedVDS marketplace.

RedVDS enabled sophisticated attacks, including business email compromise and real estate payment diversion schemes. Since March 2025, it has caused roughly US$40 million in losses in the United States, hitting organisations such as H2-Pharma and Gatehouse Dock Condominium Association.

Globally, over 191,000 organisations have been impacted by RedVDS-enabled fraud, often combined with AI-generated emails and multimedia impersonation.

Microsoft emphasises that targeting the infrastructure, rather than individual attackers, is key. International cooperation disrupted servers and payment networks supporting RedVDS and helped identify those responsible.

Users are advised to verify payment requests, use multifactor authentication, and report suspicious activity to reduce risk.

The civil action marks the 35th case by Microsoft’s Digital Crimes Unit, reflecting a sustained commitment to dismantling online fraud networks. As cybercrime evolves, Microsoft and partners aim to block criminals and protect people and organisations globally.

Why young people across South Asia turn to AI

Children and young adults across South Asia are increasingly turning to AI tools for emotional reassurance, schoolwork and everyday advice, even while acknowledging their shortcomings.

Easy access to smartphones, cheap data and social pressures have made chatbots a constant presence, often filling gaps left by limited human interaction.

Researchers and child safety experts warn that growing reliance on AI risks weakening critical thinking, reducing social trust and exposing young users to privacy and bias-related harms.

Studies show that many children understand AI can mislead or oversimplify, yet receive little guidance at school or home on how to question outputs or assess risks.

Rather than banning AI outright, experts argue for child-centred regulation, stronger safeguards and digital literacy that involves parents, educators and communities.

Without broader social support systems and clear accountability from technology companies, AI risks becoming a substitute for human connection rather than a tool that genuinely supports learning and wellbeing.