Ofcom expands scrutiny of X over Grok deepfake concerns

The British regulator, Ofcom, has released an update on its investigation into X after reports that the Grok chatbot had generated sexual deepfakes of real people, including minors.

In response, the regulator opened a formal inquiry to assess whether X took adequate steps to manage the spread of such material and to remove it swiftly.

X has since introduced measures to limit the distribution of manipulated images, while the ICO and regulators abroad have opened parallel investigations.

The Online Safety Act does not cover all chatbot services, as regulation depends on whether a system enables user interactions, provides search functionality, or produces pornographic material.

Many AI chatbots fall partly or entirely outside the Act’s scope, limiting regulators’ ability to act when harmful content is created during one-to-one interactions.

Ofcom cannot currently investigate the standalone Grok service for producing illegal images because the Act does not cover that form of generation.

Evidence-gathering from X continues, with legally binding information requests issued to the company. Ofcom will offer X a full opportunity to present representations before any provisional findings are published.

Enforcement cases typically take several months, since regulators must follow strict procedural safeguards to ensure decisions are robust and defensible.

Ofcom added that people who encounter harmful or illegal content online are encouraged to report it directly to the relevant platforms. Incidents involving intimate images can be reported to dedicated services for adults or support schemes for minors.

Material that may constitute child sexual abuse should be reported to the Internet Watch Foundation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be designated a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, a figure far above the 45-million threshold that triggers the Act’s more onerous obligations.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and very large search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation alongside national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their roles as senior figures of the company at the time.

The prosecutor’s office also announced its departure from X in favour of LinkedIn and Instagram, rather than continuing to use the platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

ChatGPT restored after global outage disrupts users worldwide

OpenAI faced a wave of global complaints after many users struggled to access ChatGPT.

Reports began circulating in the US during the afternoon, with complaints climbing to more than 12,000 in under half an hour. Social media quickly filled with questions from people trying to determine whether the disruption was widespread or a local glitch.

Meanwhile, users in the UK reported a complete failure to generate responses, yet access returned when they switched to a US-based VPN.

Other regions saw mixed results: VPNs in Ireland, Canada, India and Poland allowed ChatGPT to function, although replies were noticeably slower than usual.

OpenAI later confirmed that several services were experiencing elevated errors. Engineers identified the source of the disruption, introduced mitigations and continued monitoring the recovery.

The company cautioned that users in many regions might still experience intermittent problems while the system stabilised.

In a subsequent update, OpenAI announced that its systems were fully operational again.

The status page indicated that the affected services had recovered and that no active issues remained. The company added that the underlying fault had been addressed, with further safeguards being developed to prevent similar incidents.

Biodegradable sensors developed to cut e-waste and monitor air pollution

Researchers at Incheon National University have developed biodegradable gas sensors designed to reduce electronic waste while improving air quality monitoring. The technology targets nitrogen dioxide, a pollutant linked to fossil fuel combustion and respiratory diseases.

The sensors are built using organic field-effect transistors, a lightweight and low-energy alternative suited for portable environmental monitoring devices. OFET-based systems are also easier to manufacture compared with traditional silicon electronics.

To create the sensing layer, the research team blended an organic semiconductor polymer, P3HT, with a biodegradable material, PBS. Each polymer was prepared separately in chloroform before being combined into a uniform solution.

Performance varied with solvent composition, with mixtures of chloroform and dichlorobenzene yielding the most consistent and sensitive sensor structures. High PBS concentrations remained effective without compromising detection accuracy.

Project lead Professor Park said the approach balances sustainability and performance, particularly for use in natural environments. The biodegradable design could contribute to long-term pollution monitoring and waste reduction.

Chinese AI firms offer cash rewards to boost chatbot adoption

Technology firms in China are rolling out large cash incentive campaigns to attract users to their AI chatbots ahead of the expected launch of new AI models later this month.

Alibaba Group has earmarked CNY 3 billion for users of its Qwen AI app, with the promotion beginning on 6 February to coincide with Lunar New Year celebrations.

Tencent Holdings and Baidu have announced similar offers, together committing around CNY 1.5 billion in cash rewards and consumer electronics, including smartphones and televisions.

To qualify for prizes, users must register on the platforms and interact with the chatbots during the promotional period by asking questions or completing everyday planning tasks.

The incentives reflect intensifying competition with global developers such as Google and OpenAI, while also strengthening efforts to position China-based firms as potential local AI partners for Apple in the Chinese market.

Education drives Oracle’s strategy for scaling AI data centres

Oracle is expanding AI data centres across the United States while pairing infrastructure growth with workforce development through its philanthropic education programme, Oracle Academy.

The initiative provides schools and educators with curriculum, cloud tools, software, and hands-on training designed to prepare students for enterprise-scale technology roles increasingly linked to AI operations.

As demand for specialised skills rises, Oracle Academy is introducing Data Centre Technician courses to fast-track learners into permanent roles supporting AI infrastructure development and maintenance.

The programme already works with hundreds of institutions across multiple US states, including Texas, Michigan, Wisconsin, and New Mexico, spanning disciplines from computer science and engineering to construction management and supply chain studies.

Alongside new courses in machine learning, generative AI, and analytics, Oracle says the approach is intended to close skills gaps and ensure local communities benefit from the rapid expansion of AI infrastructure.

Australia steps up platform scrutiny after mass Snapchat removals

Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.

The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.

The platform warned that age verification still faces serious shortcomings, leaving room for teenagers to bypass safeguards rather than ensuring reliable compliance.

Facial estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.

The eSafety commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.

Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.

More than 4.7 million accounts have been deactivated across the major platforms since the ban began, although the figure includes inactive and duplicate accounts.

Authorities in Australia expect further enforcement, with notices set to be issued to companies that fail to meet the new standards.

France challenges EU privacy overhaul

The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.

Paris objects strongly to proposed changes to the definition of personal data in the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.

These proposals form part of the Digital Omnibus package, a set of updates intended to streamline EU data rules. France argues that altering the GDPR’s definitions could shift the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.

The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.

The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.

Several member states want greater clarity in an era shaped by AI and cross-border data flows, while others fear that opening the GDPR could lead to inconsistent application across Europe.

Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.

France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.

EU plans a secure military data space by 2030

Institutions in the EU have begun designing a new framework to help European armies share defence information securely, rather than relying on US technology.

The plan centres on a military-grade data platform, the European Defence Artificial Intelligence Data Space, intended to support sensitive exchanges among defence authorities.

Ultimately, the approach aims to replace the current patchwork of foreign infrastructure that many member states rely on to store and transfer national security data.

The European Defence Agency is leading the effort and expects the platform to be fully operational by 2030. The concept includes two complementary elements: a sovereign military cloud for data storage and a federated system that allows countries to exchange information on a trusted basis.

Officials argue that this will improve interoperability, speed up joint decision-making, and enhance operational readiness across the bloc.

The project aligns with broader concerns about strategic autonomy, as EU leaders increasingly question long-standing dependencies on American providers.

Several European companies have been contracted to develop the early technical foundations. The next step is persuading governments to coordinate future purchases so their systems remain compatible with the emerging framework.

Planning documents suggest that by 2029, member states should begin integrating the data space into routine military operations, including training missions and coordinated exercises. EU authorities maintain that stronger control of defence data will be essential as military AI expands across European forces.
