EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centrist and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

A clear stance from the Parliament is still pending, and an agreed path forward is far from assured.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Latam-GPT signals new AI ambition in Latin America

Chile has introduced Latam-GPT to strengthen Latin America’s presence in global AI.

The project, developed by the National Centre for Artificial Intelligence with support across South America, aims to correct long-standing biases by training systems on the region’s own data instead of material drawn mainly from the US or Europe.

President Gabriel Boric said the model will help maintain cultural identity and allow the region to take a more active role in technological development.

Latam-GPT is not designed as a conversational tool but rather as a vast dataset that serves as the foundation for future applications. More than eight terabytes of information have been collected, mainly in Spanish and Portuguese, with plans to add indigenous languages as the project expands.

The first version was trained on Amazon Web Services, while future work will run on a new supercomputer at the University of Tarapacá, supported by millions of dollars in regional funding.

The model reflects growing interest among countries outside the major AI hubs of the US, China and Europe in developing their own technology instead of relying on foreign systems.

Researchers in Chile argue that global models often include Latin American data in tiny proportions, which can limit accurate representation. Despite questions about resources and scale, supporters believe Latam-GPT can deliver practical benefits tailored to local needs.

Early adoption is already underway, with the Chilean firm Digevo preparing customer service tools based on the model.

These systems will operate in regional languages and recognise local expressions, offering a more natural experience than products trained on data from other parts of the world.

Developers say the approach could reduce bias and promote more inclusive AI across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI safety leader quits Anthropic with global risk warning

A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.

Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. While praising his time at the company, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.

His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.

Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Young voices seek critical approach to AI in classrooms

In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.

Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.

Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.

After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions, arguing that structured dialogue can help schools balance innovation with cognitive development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Next-gen AI infrastructure boosted by Samsung HBM4

Samsung Electronics has commenced mass production and commercial shipments of its next-generation HBM4 memory, marking the first industry deployment of the advanced high-bandwidth solution.

The launch strengthens the company’s position in AI infrastructure hardware as demand for accelerated computing intensifies.

Built on sixth-generation 10nm-class DRAM and a 4nm logic base die, HBM4 delivers transfer speeds of 11.7Gbps, with performance scalable to 13Gbps. Bandwidth per stack has surged, reducing data bottlenecks as AI models and processing demands grow.

Engineering upgrades extend beyond raw speed. Enhanced stacking architecture, low-power design integration, and thermal optimisation have improved energy efficiency and heat dissipation, supporting large-scale data centre deployments and sustained GPU workloads.

Production scale-up is already in motion, backed by expanded manufacturing capacity and industry partnerships. Samsung expects HBM revenue growth to accelerate into 2026, with next-generation variants and custom configurations scheduled for future release cycles.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Illicit trafficking payments rise across blockchain channels

Cryptocurrency flows linked to suspected human trafficking services surged sharply in 2025, with transaction volumes rising 85% year-on-year, according to new blockchain analysis.

Investigators say the financial activity reflects the rapid expansion of digitally enabled exploitation networks operating across borders.

Growth is linked to Southeast Asia-based illicit networks, including scam compounds, gambling platforms, and laundering groups operating via encrypted messaging channels.

Analysts identified multiple trafficking service categories, each with distinct transaction structures and payment preferences.

Stablecoins became the dominant payment method, especially for escort networks, thanks to their price stability and ease of conversion. Larger transfers and structured pricing models indicate increasingly professionalised operations supported by organised financial infrastructure.

Despite the scale of the activity, blockchain transparency continues to provide enforcement advantages. Transaction tracing has aided investigations, shutdowns, and arrests, strengthening digital forensics in combating trafficking-linked financial crime.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers blanket crypto ban targeting Russia

European Union officials are weighing a sweeping prohibition on cryptocurrency transactions involving Russia, signalling a more rigid sanctions posture against alternative financial networks.

Policymakers argue that the rapid emergence of replacement crypto service providers has undermined existing restrictions.

Internal European Commission discussions indicate concern that digital assets are facilitating trade flows supporting Russia’s war economy. Authorities say platform-specific sanctions are ineffective, as new entities quickly replicate restricted services.

Proposals under review extend beyond private crypto platforms. Measures could include sanctions on additional Russian banks, restrictions linked to the digital ruble, and scrutiny of payments infrastructure tied to sanctioned trade channels.

Consensus remains elusive, with some member states warning that a blanket ban could shift activity to non-European markets. Parallel trade controls targeting dual-use exports to Kyrgyzstan are also being considered as part of broader anti-circumvention efforts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT starts limited advertising rollout in the US

OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.

OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.

Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.

Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.

The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI model achieves accurate detection of placenta accreta spectrum in high-risk pregnancies

A new AI model has shown strong potential for detecting placenta accreta spectrum, a dangerous condition that often goes undiagnosed during pregnancy.

Researchers presented the findings at the annual meeting of the Society for Maternal-Fetal Medicine, highlighting that traditional screening identifies only about half of all cases.

Placenta accreta spectrum arises when the placenta attaches abnormally to the uterine wall, often after previous surgical procedures such as caesarean delivery.

The condition can trigger severe haemorrhage, organ failure, and death, yet many pregnancies with elevated risk receive inconclusive or incorrect assessments through standard ultrasound examinations.

The study involved a retrospective review by specialists at Baylor College of Medicine, who analysed 2D obstetric ultrasound images from 113 high-risk pregnancies managed at Texas Children's Hospital between 2018 and 2025.

The AI system detected every confirmed case of placenta accreta spectrum, with only two false positives and no false negatives.

Researchers believe such technology could significantly improve early identification and clinical preparation.

They argue that AI screening, when used in addition to current methods, may reduce maternal complications and support safer outcomes for patients facing this increasingly common condition.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X given deadline by Brazil to curb Grok sexualised outputs

Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.

The order, issued on 11 February by Brazil’s National Data Protection Agency and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.

Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.

The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.

According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!