The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.
The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature on the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.
Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.
Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.
Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.
Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.
The investigation could last months and may have wider implications for content ranking systems already under scrutiny.
Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the University of Chicago are using AI to uncover insights into how the human brain processes surprise. The project, directed by Associate Professor Monica Rosenberg, compares human and AI responses to narrative moments to explore cognitive processes.
The study involved participants listening to stories whilst researchers recorded their responses through brain scans. Researchers then fed identical stories to the language model Llama, prompting it to predict subsequent text after each segment.
When AI predictions diverged from actual story content, that gap served as a measure of surprise, mirroring the discrepancy human readers experience when expectations fail.
Results showed a striking alignment between the AI’s prediction errors and both participants’ reported feelings and brain-scan activity patterns. The correlation emerged when texts were analysed in 10- to 20-word chunks, suggesting that humans and AI encode surprise at the broader level at which ideas unfold.
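As a rough illustration of that measure, and not the lab’s actual pipeline, a chunk’s ‘surprise’ can be approximated by the average surprisal (negative log-likelihood) a causal language model assigns to the chunk’s tokens given everything read so far. In the minimal sketch below, the checkpoint name and the token-based chunk size are placeholder assumptions rather than the study’s settings:

    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "meta-llama/Llama-2-7b-hf"  # placeholder checkpoint, not the study's model
    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)
    model.eval()

    def chunk_surprisal(story: str, tokens_per_chunk: int = 20) -> list[float]:
        """Average surprisal (negative log-likelihood per token) of each chunk,
        conditioned on all of the preceding text."""
        ids = tokenizer(story, return_tensors="pt").input_ids   # shape (1, seq_len)
        with torch.no_grad():
            logits = model(ids).logits                          # (1, seq_len, vocab)
        # Log-probability the model assigned to each actual next token.
        log_probs = F.log_softmax(logits[0, :-1], dim=-1)
        token_nll = -log_probs.gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
        # Split the per-token surprisal into fixed-size chunks and average each one.
        return [chunk.mean().item() for chunk in token_nll.split(tokens_per_chunk)]

Higher values mark passages the model found harder to predict, which is the kind of gap the study treats as a proxy for surprise.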
Fourth-year data science student Bella Summe, involved in the Cognition, Attention and Brain Lab research, noted the creative challenge of working in an emerging field.
Few studies have explored whether LLM prediction errors can serve as measures of human surprise, so the project demanded constant problem-solving and adaptation of the experimental design.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI in breast cancer screening reduced late diagnoses by 12% and increased early detection rates in the largest trial of its kind. The Swedish study involved 100,000 women randomly assigned to AI-supported screening or standard radiologist readings between April 2021 and December 2022.
The AI system analysed mammograms and assigned low-risk cases to single readings and high-risk cases to double readings by radiologists.
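In outline, the triage works roughly as sketched below; the risk-score scale and threshold are illustrative assumptions, not the trial’s actual parameters.

    def assign_readings(ai_risk_score: float, high_risk_threshold: float = 0.8) -> int:
        """Route a mammogram to one or two radiologist readings based on the
        AI system's risk score (assumed here to lie between 0 and 1)."""
        # Low-risk exams get a single human reading; high-risk exams get two.
        return 2 if ai_risk_score >= high_risk_threshold else 1

    # e.g. assign_readings(0.15) -> 1 reading, assign_readings(0.92) -> 2 readings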
Results published in The Lancet showed 1.55 cancers per 1,000 women in the AI group versus 1.76 in the control group, with 81% of cancers detected at the screening stage compared with 74% among controls.
Dr Kristina Lång from Lund University said AI-supported mammography could reduce radiologist workload pressures and improve early detection, but cautioned that implementation must be done carefully with continuous monitoring.
Researchers stressed that screening still requires at least one human radiologist working alongside AI, rather than AI replacing human radiologists. Cancer Research UK’s Dr Sowmiya Moorthie called the findings promising but noted more research is needed to confirm life-saving potential.
Breast Cancer Now’s Simon Vincent highlighted the significant potential for AI to support radiologists, emphasising that earlier diagnosis improves treatment outcomes for a disease that affects over 2 million people globally each year.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.
Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.
Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear as soon as old ones are shut down, and users trade tips on how to bypass safety controls.
The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.
Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.
Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.
Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.
The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are all but impossible to remove, comprehensively undermining safety, dignity and long-term opportunities.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched Prism, a cloud-based LaTeX workspace designed to streamline the drafting, collaboration, and publication of academic papers. The tool integrates writing, citation management, real-time collaboration, and AI assistance into a single environment to reduce workflow friction.
Built specifically for scientific use, Prism embeds GPT-5.2 directly inside documents rather than as a separate chatbot. Researchers can rewrite sections, verify equations, test arguments, and clarify explanations without leaving the editing interface, positioning AI as a background collaborator.
Users can start new LaTeX projects or upload existing files through prism.openai.com using a ChatGPT account. Co-authors can join instantly, enabling simultaneous editing while maintaining structured formatting for equations, references, and manuscript layout.
OpenAI says Prism supports academic search, converts handwritten formulas into clean LaTeX, and allows voice-driven edits for faster reviews. Completed papers export as publication-ready PDFs alongside full source files.
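As a purely illustrative example of what such ‘clean LaTeX’ output might look like, and not actual Prism output, a handwritten gradient formula could come back as source along these lines:

    \begin{equation}
      \frac{\partial \mathcal{L}}{\partial \theta}
        = \frac{1}{N} \sum_{i=1}^{N} \nabla_{\theta}\, \ell(x_i; \theta)
      \label{eq:gradient}
    \end{equation}

Source like this compiles directly within the manuscript and can be cross-referenced elsewhere via its label.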
Initially available for free to personal ChatGPT users, the workspace will later expand to Business, Enterprise, and Education plans. The company frames the tool as a practical productivity layer rather than a research disruption platform.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.
He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.
Concern about the economic impact of AI continues to intensify.
Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.
Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.
Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.
He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading a significant sale of Simply Business.
Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has announced a new pricing model for third-party AI chatbots on WhatsApp in markets where regulators require the company to permit them, starting with Italy.
From 16 February 2026, developers will be charged about $0.0691 (€0.0572/£0.0498) per AI-generated response that is not a predefined template.
The move follows an intervention by Italy’s competition authority, which forced Meta to suspend its ban on third-party AI bots on the WhatsApp Business API; the ban had taken effect in January and led many providers, including OpenAI, Perplexity and Microsoft, to discontinue their chatbots on the platform.
Meta says the fee applies only where legally required to open chatbot access, and this pricing may set a precedent if other markets compel similar access.
WhatsApp already charges businesses for ‘template’ API messages (e.g. notifications, authentication), but this is the first instance of explicit charges tied to AI responses, potentially leading to high costs for high-volume chatbot usage.
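To gauge the scale, here is a back-of-the-envelope calculation for a hypothetical bot; the monthly volume is an assumption, and only the per-response rate comes from Meta’s announcement.

    # Hypothetical volume; only the per-response rate is from Meta's announcement.
    responses_per_month = 1_000_000           # assumed high-volume chatbot
    fee_per_response_usd = 0.0691             # Meta's announced rate per AI response
    monthly_cost = responses_per_month * fee_per_response_usd
    print(f"${monthly_cost:,.0f}")            # -> $69,100, before template-message charges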
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google is rolling out an AI-powered browsing agent inside Chrome, allowing users to automate routine online tasks. The feature is being introduced in the US for AI Pro and AI Ultra subscribers.
The Gemini agent can interact directly with websites, including opening pages, clicking buttons and completing complex online forms. Testers reported successful use for tasks such as tax paperwork and licence renewals.
Google said Gemini AI integrates with password management tools while requiring user confirmation for payments and final transactions. Security safeguards and fraud detection systems have been built into Chrome for US users.
LG Group affiliates are expanding into physical AI by combining robotics hardware, industrial data, and advanced AI models. The strategy aims to deliver integrated autonomous systems across industries. The group is positioning itself along the complete robotics value chain.
LG Electronics is strengthening its role in robotic actuators that enable precise humanoid movement. Leveraging decades of motor engineering, it recently launched the AXIUM actuator brand. The company has also expanded its investments across robotics manufacturers.
The company’s AI Research division is developing models that help machines understand the physical world, with a dedicated lab embedding vision and language capabilities into robots and factory systems. The aim is for machines to predict and act autonomously in real time.
LG CNS is training robots in the task-specific skills required for different jobs, while LG Display is developing robot displays built on flexible panels that withstand harsh environments. Both draw on expertise gained in automotive and factory settings to build robots.
Power and sensing components round out the group’s robotics plans. LG Energy Solution produces powerful batteries for mobile robots, while LG Innotek develops cameras and sensors. Group leaders see intelligent machines as central to future growth.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Australians are increasingly using generative AI for everyday tasks, with meal preparation and recipe planning emerging as the most common applications. A recent survey found that many households rely on ChatGPT for cooking inspiration, practical advice, and saving time.
The OpenAI-commissioned research shows strong uptake of AI tools for home renovations, DIY projects, and household budgeting. Many users rely on the technology to summarise news, plan meals, and solve routine problems, reinforcing its role as a personal assistant.
Work-related tasks remain another major area of use, particularly for drafting emails, clarifying information, and summarising meetings. Large numbers of respondents reported saving several hours each week, underscoring how generative AI is reshaping productivity and daily routines across Australia.
Generative AI adoption is highest among younger Australians, with usage strongest among those aged between 18 and 34. The trend reflects shifting digital habits and growing comfort with AI-driven tools across daily life and work decision-making.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!