The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.
In response, the regulator opened a formal inquiry to assess whether X took adequate steps to curb the spread of such material and to remove it swiftly.
X has since introduced measures to limit the distribution of manipulated images, while the UK's Information Commissioner's Office (ICO) and regulators abroad have opened parallel investigations.
The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user-to-user interactions, provides search functionality, or publishes pornographic material.
Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.
Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.
Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will give X a full opportunity to make representations before any provisional findings are published.
Enforcement action typically takes several months, since regulators must follow strict procedural safeguards to ensure that decisions are robust and legally defensible.
Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.
Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.
