EU moves closer to decision on ChatGPT oversight

The European Commission plans to decide by early 2026 whether OpenAI’s ChatGPT should be classified as a very large online platform under the Digital Services Act.

OpenAI’s tool reported 120.4 million average monthly users in the EU in October, a figure far above the 45-million threshold that triggers the DSA’s more onerous obligations.

Officials said the designation procedure depends on both quantitative and qualitative assessments of how a service operates, together with input from national authorities.

The Commission is examining whether a standalone AI chatbot can fall within the scope of rules usually applied to platforms such as social networks, online marketplaces and large search engines.

ChatGPT’s reported user figures largely stem from its integrated online search feature, which prompts users to allow the chatbot to search the web. The Commission noted that OpenAI could voluntarily meet the DSA’s risk-reduction obligations while the formal assessment continues.

The EU’s latest wave of designations included Meta’s WhatsApp, though the rules applied only to public channels, not private messaging.

A decision on ChatGPT will clarify how far the bloc intends to extend its most stringent online governance framework to emerging AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France targets X over algorithm abuse allegations

The cybercrime unit of the Paris prosecutor has raided the French office of X as part of an expanding investigation into alleged algorithm manipulation and illicit data extraction.

Authorities said the probe began in 2025 after a lawmaker warned that biased algorithms on the platform might have interfered with automated data systems. Europol supported the operation together with national cybercrime officers.

Prosecutors confirmed that the investigation now includes allegations of complicity in circulating child sex abuse material, sexually explicit deepfakes and denial of crimes against humanity.

Elon Musk and former chief executive Linda Yaccarino have been summoned for questioning in April in their capacity as senior figures at the company at the time.

The prosecutor’s office also announced it would leave X in favour of LinkedIn and Instagram rather than continue using a platform under scrutiny.

X strongly rejected the accusations and described the raid as politically motivated. Musk claimed authorities should focus on pursuing sex offenders instead of targeting the company.

The platform’s government affairs team said the investigation amounted to law enforcement theatre rather than a legitimate examination of serious offences.

Regulatory pressure increased further as the UK data watchdog opened inquiries into both X and xAI over concerns about Grok producing sexualised deepfakes. Ofcom is already conducting a separate investigation that is expected to take months.

The widening scrutiny reflects growing unease around alleged harmful content, political interference and the broader risks linked to large-scale AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT restored after global outage disrupts users worldwide

OpenAI faced a wave of global complaints after many users struggled to access ChatGPT.

Reports began circulating in the US during the afternoon, with outage reports climbing to more than 12,000 in less than half an hour. Social media quickly filled with questions from people trying to determine whether the disruption was widespread or a local glitch.

Users in the UK, meanwhile, reported a complete failure to generate responses, yet access returned when they switched to a US-based VPN.

Other regions saw mixed results: VPNs in Ireland, Canada, India and Poland allowed ChatGPT to function, although replies were noticeably slower and less consistent.

OpenAI later confirmed that several services were experiencing elevated errors. Engineers identified the source of the disruption, introduced mitigations and continued monitoring the recovery.

The company stressed that users in many regions might still experience intermittent problems while the system stabilised.

In a subsequent update, OpenAI announced that its systems were fully operational again.

The status page indicated that the affected services had recovered and that no active issues remained. The company added that the underlying fault had been addressed, with further safeguards being developed to prevent similar incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI safety report highlights control concerns

A major international AI safety report warns that AI systems are advancing rapidly, with sharp gains in reasoning, coding and scientific tasks. Researchers say progress remains uneven, leaving systems powerful yet unreliable.

The report highlights rising concerns over deepfakes, cyber misuse and emotional reliance on AI companions in the UK and the US. Experts note growing difficulty in distinguishing AI-generated content from human work.

Safeguards against biological, chemical and cyber risks have improved, though oversight challenges persist in the UK and the US. Analysts warn advanced models are becoming better at evading evaluation and controls.

The impact of AI on jobs in the UK and the US remains uncertain, with mixed evidence across sectors. Researchers say labour disruption could accelerate if systems gain greater autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia steps up platform scrutiny after mass Snapchat removals

Snapchat has blocked more than 415,000 Australian accounts after the national ban on under-16s began, marking a rapid escalation in the country’s effort to restrict children’s access to major platforms.

The company relied on a mix of self-reported ages and age-detection technologies to identify users who appeared to be under 16.

The platform warned that age verification still faces serious shortcomings, leaving room for teenagers to bypass safeguards rather than ensuring reliable compliance.

Facial age estimation tools remain accurate only within a narrow range, meaning some young people may slip through while older users risk losing access. Snapchat also noted the likelihood that teenagers will shift towards less regulated messaging apps.

The eSafety Commissioner has focused regulatory pressure on the 10 largest platforms, although all services with Australian users are expected to assess whether they fall under the new requirements.

Officials have acknowledged that the technology needs improvement and that reliability issues, such as the absence of a liveness check, contributed to false results.

More than 4.7 million accounts have been deactivated across the major platforms since the ban began, although the figure includes inactive and duplicate accounts.

Authorities in Australia expect further enforcement, with notices set to be issued to companies that fail to meet the new standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France challenges EU privacy overhaul

The EU’s attempt to revise core privacy rules has faced resistance from France, which argues that the Commission’s proposals would weaken rather than strengthen long-standing protections.

Paris objects strongly to proposed changes to the definition of personal data within the General Data Protection Regulation, which remains the foundation of European privacy law. Officials have also raised concerns about several smaller adjustments included in the broader effort to modernise digital legislation.

These proposals form part of the Digital Omnibus package, a set of updates intended to streamline EU data rules. France argues that altering the GDPR’s definitions could change the balance between data controllers, regulators and citizens, creating uncertainty for national enforcement bodies.

The national government maintains that the existing framework already includes the flexibility needed to interpret sensitive information.

The disagreement highlights renewed tension inside the Union as institutions examine the future direction of privacy governance.

Several member states want greater clarity in an era shaped by AI and cross-border data flows. In contrast, others fear that opening the GDPR could lead to inconsistent application across Europe.

Talks are expected to continue in the coming months as EU negotiators weigh the political risks of narrowing or widening the scope of personal data.

France’s firm stance suggests that consensus may prove difficult, particularly as governments seek to balance economic goals with unwavering commitments to user protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO and HBKU advance research on digital behaviour

Hamad Bin Khalifa University has unveiled the UNESCO Chair on Digital Technologies and Human Behaviour to strengthen global understanding of how emerging tools shape society.

The initiative, located in the College of Science and Engineering in Qatar, will examine the relationship between digital adoption and human behaviour, focusing on digital well-being, ethical design and healthier online environments.

The Chair is set to address issues such as internet addiction, cyberbullying and misinformation through research and policy-oriented work.

By promoting dialogue among international organisations, governments and academic institutions, the programme aims to support responsible development of digital technologies rather than approaches that overlook societal impact.

HBKU’s long-standing emphasis on ethical innovation formed the foundation for the new initiative. The launch event brought together experts from several disciplines to discuss behavioural change driven by AI, mobile computing and social media.

An expert panel considered how GenAI can improve daily life while also increasing dependency, encouraging users to shift towards a more intentional and balanced relationship with AI systems.

UNESCO underlined the importance of linking scientific research with practical policymaking to guide institutions and communities.

The Chair is expected to strengthen cooperation across sectors and support progress on global development goals by ensuring digital transformation remains aligned with human dignity, social cohesion and inclusive growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ethical limits of rapidly advancing AI debated at Doha forum

Doha Debates, an initiative of Qatar Foundation, hosted a town hall examining the ethical, political, and social implications of rapidly advancing AI. The discussion reflected growing concern that AI capabilities could outpace human control and existing governance frameworks.

Held at Multaqa in Education City, the forum gathered students, researchers, and international experts to assess readiness for rapid technological change. Speakers offered contrasting views, highlighting both opportunity and risk as AI systems grow more powerful.

Philosopher and transhumanist thinker Max More argued for continued innovation guided by reason and proportionate safeguards, warning against fear-driven stagnation.

By contrast, computer scientist Roman Yampolskiy questioned whether meaningful control over superintelligent systems is realistic, cautioning that widening intelligence gaps could undermine governance entirely.

Nabiha Syed, executive director of the Mozilla Foundation, focused on accountability and social impact. She urged broader public participation and transparency, particularly as AI deployment risks reinforcing existing inequalities across societies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CERT Polska reports coordinated cyber sabotage targeting Poland’s energy infrastructure

Poland has disclosed a coordinated cyber sabotage campaign targeting more than 30 renewable energy sites in late December 2025. The incidents occurred during severe winter weather and were intended to cause operational disruption, according to CERT Polska.

Electricity generation and heat supply in Poland continued, but attackers disabled communications and remote control systems across multiple facilities. Both IT networks and industrial operational technology were targeted, marking a rare shift toward destructive cyber activity against energy infrastructure.

Investigators found attackers accessed renewable substations through exposed FortiGate devices, often without multi-factor authentication. After breaching networks, they mapped systems, damaged firmware, wiped controllers, and disabled protection relays.

Two previously unknown wiper tools, DynoWiper and LazyWiper, were used to corrupt and delete data without ransom demands. The malware spread through compromised Active Directory systems using malicious Group Policy tasks to trigger simultaneous destruction.

CERT Polska linked the infrastructure to the Russia-connected threat cluster Static Tundra, though some firms suggest Sandworm involvement. The campaign marks the first publicly confirmed destructive operation attributed to this actor, highlighting rising cyber-sabotage risks to critical energy systems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Church leaders question who should guide moral answers in the age of AI

AI is increasingly being used to answer questions about faith, morality, and suffering, not just everyday tasks. As AI systems become more persuasive, religious leaders are raising concerns about the authority people may assign to machine-generated guidance.

Within this context, Catholic outlet EWTN Vatican examined Magisterium AI, a platform designed to reference official Church teaching rather than produce independent moral interpretations. Its creators say responses are grounded directly in doctrinal sources.

Founder Matthew Sanders argues mainstream AI models are not built for theological accuracy. He warns that while machines sound convincing, they should never be treated as moral authorities without grounding in Church teaching.

Church leaders have also highlighted broader ethical risks associated with AI, particularly regarding human dignity and emotional dependency. Recent Vatican discussions stressed the need for education and safeguards.

Supporters say faith-based AI tools can help navigate complex religious texts responsibly. Critics remain cautious, arguing spiritual formation should remain rooted in human guidance.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!