EU AI Act guidance delay raises compliance uncertainty

The European Commission has missed a key deadline to issue guidance on how companies should classify high-risk AI systems under the EU AI Act, fuelling uncertainty around the landmark law’s implementation.

Guidance on Article 6, which defines high-risk AI systems and sets the stricter compliance rules that apply to them, was due by early February. Officials have indicated that feedback is still being integrated, with a revised draft expected later this month and final adoption potentially slipping to spring.

The delay follows warnings that regulators and businesses are unprepared for the act’s most complex rules, due to apply from August. Brussels has suggested delaying high-risk obligations under its Digital Omnibus package, citing unfinished standards and the need for legal clarity.

Industry groups want enforcement delayed until guidance and standards are finalised, while some lawmakers warn that repeated slippage could undermine confidence in the AI Act. Critics add that further changes could deepen uncertainty if proposed revisions fail or disrupt existing timelines.

EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, a figure far above the 45 million threshold at which the DSA’s heavier obligations, rather than lighter oversight, apply.
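
The designation test itself is numerically simple: under Article 33 of the DSA, a service whose average monthly active users in the EU reach 45 million can be designated a very large online platform. The short Python sketch below is purely illustrative; the function name is hypothetical, and the user figure is the one reported in the article.

    # Illustrative sketch only; function name is hypothetical.
    VLOP_THRESHOLD = 45_000_000  # DSA Article 33: average monthly active users in the EU

    def meets_vlop_threshold(avg_monthly_eu_users: int) -> bool:
        """Return True if a reported user figure reaches the designation threshold."""
        return avg_monthly_eu_users >= VLOP_THRESHOLD

    print(meets_vlop_threshold(120_400_000))  # ChatGPT's reported October figure -> True

As the article notes, the quantitative test is only one part of the procedure: qualitative assessments and input from national authorities also feed into the decision.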

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and major search engines.

ChatGPT’s user data largely stems from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April, in their capacity as senior figures at the company at the time.

The prosecutor’s office also announced it would leave X in favour of LinkedIn and Instagram rather than continue using a platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

AI legal tool rattles European data stocks

European data and legal software stocks fell sharply after US AI startup Anthropic launched a new tool for corporate legal teams. The company said the software can automate contract reviews, compliance workflows, and document triage, while clarifying that it does not offer legal advice.

Investors reacted swiftly, sending shares in Pearson, RELX, Sage, Wolters Kluwer, London Stock Exchange Group, and Experian sharply lower. Thomson Reuters also suffered a steep decline, reflecting concern that AI tools could erode demand for traditional data-driven services.

Market commentators warned that broader adoption of AI in professional services could compress margins or bypass established providers altogether. Morgan Stanley flagged intensifying competition, while AJ Bell pointed to rising investor anxiety across the sector.

The sell-off also revived debate over AI’s impact on employment, particularly in legal and other office-based roles. Recent studies suggest the UK may face greater disruption than other large economies as companies adopt AI tools, even as those tools continue to deliver productivity gains.

Alternative social platform UpScrolled passes 2.5 million users

UpScrolled has surpassed 2.5 million users globally, gaining rapid momentum following TikTok’s restructuring of its US ownership earlier this year, according to founder Issam Hijazi.

The social network grew to around 150,000 users in its first six months before accelerating sharply in January, crossing one million users within weeks and reaching more than 2.5 million shortly afterwards.

Positioned as a hybrid of Instagram and X, UpScrolled promotes itself as an open platform free of shadowbanning and selective content suppression, while criticising major technology firms for data monetisation and algorithm-driven engagement practices.

Hijazi said the company would avoid amplification algorithms but acknowledged the need for community guidelines, particularly amid concerns about explicit content appearing on the platform.

Interest in alternative social networks has increased since TikTok’s shift to US ownership, though analysts note that long-term growth will depend on moderation frameworks, feature development, and sustained community trust.

Japan and the United Kingdom expand cybersecurity cooperation

Japan and the United Kingdom have formalised a Strategic Cyber Partnership focused on strengthening cooperation in cybersecurity, including information sharing, defensive capabilities, and resilience of critical infrastructure. In related high-level discussions between the two leaders, Japan and the UK also agreed on the need to work with like-minded partners to address vulnerabilities in critical mineral supply chains.

The Strategic Cyber Partnership outlines three core areas of cooperation:

  • sharing cyber threat intelligence and enhancing cyber capabilities;
  • supporting whole-of-society resilience through best practices on infrastructure and supply chain protection and alignment on regulatory and standards issues;
  • collaborating on workforce development and emerging cyber technologies.

The agreement is governed through a joint Cyber Dialogue mechanism and is non-binding in nature.

Separately, at a summit meeting in Tokyo, the leaders noted the importance of strengthening supply chains for minerals identified as critical for modern industry and technology, and agreed to coordinate efforts with other partners on this issue.

Austria and Poland eye social media limits for minors

Austria is advancing plans to bar children under 14 from social media when the new school year begins in September 2026, according to comments from a senior Austrian official. Poland’s government is drafting a law to restrict access for under-15s, using digital ID tools to confirm age.

Austria’s governing parties support protecting young people online but differ on how to verify ages securely without undermining privacy. In Poland, supporters of the draft argue that early exposure to screens is an issue for both parents and platform-level enforcement.

Austria and Poland form part of a broader European trend, with France moving to bar under-15s from social media and the UK debating similar measures. Wider debates tie these proposals to concerns about children’s mental health and online safety.

Proponents in both Austria and Poland aim to finalise legal frameworks by 2026, with implementation potentially rolling out in the following year if national parliaments approve the age restrictions.

€50m boost for Europe’s quantum chip ambitions

Europe is stepping up efforts to industrialise quantum technologies with a €50 million investment in superconducting quantum devices. Funding from the EU Chips Joint Undertaking and national agencies will support the Supreme consortium’s work from early 2026.

Superconducting quantum systems rely on ultra-low temperatures to maintain qubit stability, making manufacturing processes complex and costly. Supreme aims to develop reliable fabrication methods that can be scaled across Europe.

Access to these technologies will be opened to companies through shared pilot production runs and process design kits. Such tools are intended to lower barriers for firms developing quantum hardware and related systems.

The initiative also responds to Europe’s weaker performance in quantum patents compared with research output. Alignment with the upcoming Quantum Act and the EU Chips Act is expected to strengthen commercial uptake and industrial competitiveness.

Biodegradable sensors developed to cut e-waste and monitor air pollution

Researchers at Incheon National University have developed biodegradable gas sensors designed to reduce electronic waste while improving air quality monitoring. The technology targets nitrogen dioxide, a pollutant linked to fossil fuel combustion and respiratory diseases.

The sensors are built using organic field-effect transistors (OFETs), a lightweight and low-energy alternative suited to portable environmental monitoring devices. OFET-based systems are also easier to manufacture than traditional silicon electronics.

To create the sensing layer, the research team blended an organic semiconductor polymer, P3HT, with a biodegradable material, PBS. Each polymer was prepared separately in chloroform before being combined into a uniform solution.

Performance varied with solvent composition, with mixtures of chloroform and dichlorobenzene yielding the most consistent and sensitive sensor structures. High PBS concentrations remained effective without compromising detection accuracy.

Project lead Professor Park said the approach balances sustainability and performance, particularly for use in natural environments. The biodegradable design could contribute to long-term pollution monitoring and waste reduction.

New AI safety report highlights control concerns

A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.

The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions in the UK and the US. Experts note growing difficulty in distinguishing AI-generated content from human work.

Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist in the UK and the US. Analysts warn advanced models are becoming better at evading evaluation and controls.

The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.
