Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator opened a formal inquiry to assess whether X took adequate steps to curb the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement proceedings typically take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI becomes optional in Firefox 148 as Mozilla launches new control system

Mozilla has confirmed that Firefox will include a built-in ‘AI kill switch’ from version 148, allowing users to disable all AI features across the browser. The update follows earlier commitments that AI tools would remain optional as Firefox evolves into what the company describes as an AI-enabled browser.

The new controls will appear in the desktop release scheduled to begin rolling out on 24 February. A dedicated AI Controls section will allow users to turn off every AI feature at once or manage each tool individually, reflecting Mozilla’s aim to balance innovation with user choice.

At launch, Firefox 148 will introduce AI-powered translations, automatic alt text for images in PDFs, tab grouping suggestions, link previews, and an optional sidebar chatbot supporting services such as ChatGPT, Claude, Copilot, Gemini, and Le Chat Mistral.

All of these tools can be disabled through a single ‘Block AI enhancements’ toggle, which removes prompts and prevents new AI features from appearing. Mozilla has said preferences will remain in place across updates, with users able to adjust settings at any time.

The organisation said the approach is intended to give people full control over how AI appears in their browsing experience, while continuing development for those who choose to use it. Early access to the controls will also be available through Firefox Nightly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act guidance delay raises compliance uncertainty

The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.

Guidance on Article 6, which defines high-risk AI systems and the stricter compliance rules that apply to them, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.

The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.

Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn repeated slippage could undermine confidence in the AI Act. Critics warn further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be designated a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, far above the 45-million threshold at which the Act’s more onerous obligations replace lighter-touch oversight.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI legal tool rattles European data stocks

European data and legal software stocks fell sharply after US AI startup Anthropic launched a new tool for corporate legal teams. The company said the software can automate contract reviews, compliance workflows, and document triage, while clarifying that it does not offer legal advice.

Investors reacted swiftly, sending shares in Pearson, RELX, Sage, Wolters Kluwer, London Stock Exchange Group, and Experian sharply lower. Thomson Reuters also suffered a steep decline, reflecting concern that AI tools could erode demand for traditional data-driven services.

Market commentators warned that broader adoption of AI in professional services could compress margins or bypass established providers altogether. Morgan Stanley flagged intensifying competition, while AJ Bell pointed to rising investor anxiety across the sector.

The sell-off also revived debate over AI’s impact on employment, particularly in legal and other office-based roles. Recent studies suggest the UK may face greater disruption than other large economies as companies adopt AI tools, even as productivity gains continue to grow.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Innovation and security shape the UAE’s tech strategy

The United Arab Emirates is strengthening its global tech role by treating advanced innovation as a pillar of sovereignty rather than a standalone growth driver. National strategy increasingly links technology with long-term economic resilience, security, and geopolitical relevance.

A key milestone was the launch of the UAE Advanced Technology Centre with the Technology Innovation Institute and the World Economic Forum, announced alongside the Davos gathering.

The initiative highlights the UAE’s transition from technology consumer to active participant in shaping global governance frameworks for emerging technologies.

The centre focuses on policy and governance for areas including artificial intelligence, quantum computing, biotechnology, robotics, and space-based payment systems.

Backed by a flexible regulatory environment, the UAE is promoting regulatory experimentation and translating research into real-world applications through institutions such as the Mohamed bin Zayed University of Artificial Intelligence and innovation hubs like Masdar City.

Alongside innovation, authorities are addressing rising digital risks, particularly deepfake technologies that threaten financial systems, public trust, and national security.

By combining governance, ethical standards, and international cooperation, the UAE is advancing a model of digital sovereignty that prioritises security, shared benefits, and long-term strategic independence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT restored after global outage disrupts users worldwide

OpenAI faced a wave of global complaints after many users struggled to access ChatGPT.

Reports began circulating in the US during the afternoon, with outage reports climbing to more than 12,000 in less than half an hour. Social media quickly filled with questions from people trying to determine whether the disruption was widespread or a local glitch.

Users in the UK reported a complete failure to generate responses, yet access returned when they switched to a US-based VPN.

Other regions saw mixed results: VPNs in Ireland, Canada, India and Poland allowed ChatGPT to function, although replies were noticeably slower than usual.

OpenAI later confirmed that several services were experiencing elevated errors. Engineers identified the source of the disruption, introduced mitigations and continued monitoring the recovery.

The company stressed that users in many regions might still experience intermittent problems while the system stabilised.

In a subsequent update, OpenAI announced that its systems were fully operational again.

The status page indicated that the affected services had recovered and that no active issues remained. The company added that the underlying fault had been addressed, with further safeguards being developed to prevent similar incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Major Chinese data leak exposes billions of records

Cybersecurity researchers uncovered an unsecured database exposing 8.7 billion records linked to individuals and businesses in China. The data was found in early January 2026 and remained accessible online for more than three weeks.

The China-focused dataset included national ID numbers, home addresses, email accounts, social media identifiers and passwords. Researchers warned that exposure on this scale creates serious risks of identity theft and account takeover.

The records were stored in a large Elasticsearch cluster hosted on so-called bulletproof infrastructure. Analysts believe the structure of the dataset suggests deliberate aggregation rather than an accidental misconfiguration.

Although the database has now been taken offline, experts say malicious actors may have already copied the data. China has experienced several major leaks in recent years, highlighting persistent weaknesses in large-scale data handling.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alternative social platform UpScrolled passes 2.5 million users

UpScrolled has surpassed 2.5 million users globally, gaining rapid momentum following TikTok’s restructuring of its US ownership earlier this year, according to founder Issam Hijazi.

The social network grew to around 150,000 users in its first six months before accelerating sharply in January, crossing one million users within weeks and reaching more than 2.5 million shortly afterwards.

Positioned as a hybrid of Instagram and X, UpScrolled promotes itself as an open platform free of shadowbanning and selective content suppression, while criticising major technology firms for data monetisation and algorithm-driven engagement practices.

Hijazi said the company would avoid amplification algorithms but acknowledged the need for community guidelines, particularly amid concerns about explicit content appearing on the platform.

Interest in alternative social networks has increased since TikTok’s shift to US ownership, though analysts note that long-term growth will depend on moderation frameworks, feature development, and sustained community trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!