EU faces tension over potential ban on AI ‘pornification’

Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.

Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.

Centre and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.

They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.

Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.

Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.

The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.

The Parliament has yet to take a clear stance, and the path toward agreement remains far from assured.

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

EU faces pressure to boost action on health disinformation

A global health organisation is urging the EU to make fuller use of its digital rules to curb health disinformation as concerns grow over the impact of deepfakes on public confidence.

Warnings point to a rising risk that manipulated content could undermine vaccine uptake and crowd out informed public debate.

Experts argue that the Digital Services Act already provides the framework needed to limit harmful misinformation, yet enforcement remains uneven. Stronger oversight could improve platforms’ ability to detect manipulated content and remove inaccurate claims that jeopardise public health.

Campaigners emphasise that deepfake technology is now accessible enough to spread false narratives rapidly. The trend threatens vaccination campaigns at a time when several member states are attempting to address declining trust in health authorities.

EU officials continue to examine how digital regulation can reinforce public health strategies. The call for stricter enforcement highlights the pressure on Brussels to ensure that digital platforms act responsibly rather than allow misleading material to circulate unchecked.

Discord expands teen-by-default protection worldwide

Discord is preparing a global transition to teen-appropriate settings that will apply to all users unless they confirm they are adults.

The phased rollout begins in early March and forms part of the company’s wider effort to offer protection tailored to younger audiences rather than relying on voluntary safety choices. Controls will cover communication settings, sensitive content and access to age-restricted communities.

The update is based on an expanded age assurance system designed to protect privacy while accurately identifying users’ age groups. People can use facial age estimation on their own device or select identity verification handled by approved partners.

Discord will also rely on an age-inference model that runs quietly in the background. Verification results remain private, and documents are deleted quickly, with users able to appeal group assignments through account settings.

Stricter defaults will apply across the platform. Sensitive media will stay blurred unless a user is confirmed as an adult, and access to age-gated servers or commands will require verification.

Message requests from unfamiliar contacts will be separated, friend-request alerts will be more prominent, and only adults will be allowed to speak on community stages, a feature previously open to teens as well.

Discord is complementing the update by creating a Teen Council to offer advice on future safety tools and policies. The council will include up to a dozen young users and aims to embed real teen insight in product development.

The global rollout builds on earlier launches in the UK and Australia, adding to an existing safety ecosystem that includes Teen Safety Assist, Family Centre, and several moderation tools intended to support positive and secure online interactions.

EU strengthens cyber defence after attack on Commission mobile systems

A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.

The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no compromise of mobile devices was detected.

The Commission also plans a full review of the incident to reinforce the resilience of its internal systems.

Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.

CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.

These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.

Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.

Together, they aim to ensure collective readiness against large-scale cyber threats.

Bitcoin cryptography safe as quantum threat remains distant

Quantum computing concerns around Bitcoin have resurfaced, yet analysis from CoinShares indicates the threat remains long-term. The report argues that quantum risk is an engineering challenge that gives Bitcoin ample time to adapt.

Bitcoin’s security relies on elliptic-curve cryptography. A sufficiently advanced quantum machine could, in theory, derive private keys using Shor’s algorithm, but doing so would require millions of stable, error-corrected qubits and remains far beyond current capability.

Network exposure is also limited. Roughly 1.6 million BTC is held in legacy addresses with visible public keys, yet only about 10,200 BTC is realistically targetable. Modern address formats further reduce the feasibility of attacks.
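As a rough illustration of why address formats matter, the sketch below (a simplified, hypothetical example, not production wallet code) shows that a pay-to-public-key-hash address commits only to a hash of the public key; Shor’s algorithm needs the public key itself as input, and that key is only revealed on-chain when the coins are spent.

```python
# A simplified, hypothetical sketch (not production wallet code) of why modern
# address formats limit quantum exposure: a pay-to-public-key-hash (P2PKH) output
# commits only to a hash of the public key, while legacy pay-to-public-key outputs
# publish the key itself on-chain.
import hashlib

def hash160(pubkey: bytes) -> bytes:
    """RIPEMD160(SHA256(pubkey)), the commitment stored in a P2PKH output."""
    sha = hashlib.sha256(pubkey).digest()
    ripemd = hashlib.new("ripemd160")  # requires an OpenSSL build with RIPEMD160 enabled
    ripemd.update(sha)
    return ripemd.digest()

# A made-up compressed public key, used here only to show the hashing step.
example_pubkey = bytes.fromhex("02" + "11" * 32)

print("public key   :", example_pubkey.hex())
print("address hash :", hash160(example_pubkey).hex())

# Shor's algorithm takes the public key as its input. An attacker who can see only
# the 20-byte hash has nothing to attack until the owner reveals the key by spending.
```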

Debate continues over post-quantum upgrades, with researchers warning that premature changes could introduce new vulnerabilities. Market impact, for now, is viewed as minimal.

OpenClaw faces rising security pushback in South Korea

Major technology companies in South Korea are tightening restrictions on OpenClaw after rising concerns about security and data privacy.

Kakao, Naver and Karrot Market have moved to block the open-source agent within corporate networks, signalling a broader effort to prevent sensitive information from leaking into external systems.

Their decisions follow growing unease about how autonomous tools may handle confidential material once it leaves controlled platforms.

OpenClaw serves as a self-hosted agent that performs actions on behalf of a large language model, acting as the hands of a system that can browse the web, edit files and run commands.

Its ability to run directly on local machines has driven rapid adoption, but it has also raised concerns that confidential data could be exposed or manipulated.
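For readers unfamiliar with this class of tool, the sketch below is a deliberately schematic, hypothetical example of the general agent pattern described above; it is not OpenClaw’s actual code, and call_llm is a placeholder for a request to an external model. It illustrates why local execution worries security teams: whatever the model asks the agent to read can end up in prompts sent to a third-party service, and the commands it chooses run with the user’s own privileges.

```python
# A schematic, hypothetical sketch of the local-agent pattern described above.
# This is NOT OpenClaw's actual code; names and behaviour are illustrative only.
import subprocess

def call_llm(prompt: str) -> dict:
    """Placeholder for a request to a hosted language model (assumed external API)."""
    # In a real agent this would send `prompt` -- possibly including file contents
    # read from the local machine -- to a third-party endpoint.
    return {"action": "run", "command": "ls"}

def run_agent(task: str, max_steps: int = 3) -> None:
    context = task
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision["action"] != "run":
            break
        # The model's chosen shell command executes with the user's local privileges;
        # its output is appended to the context for the next model call.
        result = subprocess.run(
            decision["command"], shell=True, capture_output=True, text=True
        )
        context += "\n" + result.stdout

run_agent("summarise the files in this directory")
```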

Industry figures argue that companies are acting preemptively to reduce regulatory and operational risks by ensuring that internal materials never feed external training processes.

China has urged organisations to strengthen protections after identifying cases of OpenClaw running with inadequate safeguards.

Security analysts in South Korea warn that the agent’s open-source design and local execution model make it vulnerable to misuse, especially when compared to cloud-based chatbots that operate in more restricted environments.

Wiz researchers recently uncovered flaws in agents linked to OpenClaw that exposed personal information.

Despite the warnings, OpenClaw continues to gain traction among users who value its ability to automate complex tasks, rather than rely on manual workflows.

Some people purchase separate devices solely to run the agent, while an active South Korean community on X has drawn more than 1,800 members who exchange advice and share mitigation strategies.

EU split widens over ban on AI nudification apps

European lawmakers remain divided over whether AI tools that generate non-consensual sexual images should face an explicit ban in EU legislation.

The split emerged as debate intensified over the AI simplification package, which is now moving through the Parliament and the Council after earlier rounds of negotiation.

Concerns escalated after Grok was used to create images that digitally undressed women and children.

EU regulators responded by launching an investigation under the Digital Services Act, and the Commission described the behaviour as illegal under existing European rules. Several lawmakers argue that the AI Act should name pornification apps directly instead of relying on broader legal provisions.

Lead MEPs did not include a ban in their initial draft of the Parliament’s position, prompting other groups to consider adding amendments. Negotiations continue as parties explore how such a restriction could be framed without creating inconsistencies within the broader AI framework.

The Commission appears open to strengthening the law and has hinted that the AI omnibus could be an appropriate moment to act. Lawmakers now have a limited time to decide whether an explicit prohibition can secure political agreement before the amendment deadline passes.

Spain faces escalating battle with Telegram founder

The confrontation between Spain and Telegram founder Pavel Durov has intensified after he claimed that Pedro Sánchez endangered online freedoms.

Government officials responded that the tech executive had spread lies rather than engaging with the proposed rules in good faith. Sánchez argued that democracy would not be silenced by what he called the techno-oligarchs of the algorithm.

The dispute followed the unveiling of new measures aimed at major technology companies. The plan introduces a ban on social media use for under-16s and holds corporate leaders legally responsible when unlawful or hateful content remains online rather than being removed.

Platforms would also need to adopt age-verification tools such as ID checks or biometric systems, which Durov argued could turn Spain into a surveillance state by allowing large-scale data collection.

Tensions widened as Sánchez clashed with prominent US tech figures. Sumar urged all bodies linked to the central administration to leave X, a move that followed Elon Musk’s accusation that the Spanish leader was acting like a tyrant.

The row highlighted how Spain’s attempt to regulate digital platforms has placed its government in open conflict with influential technology executives.

TikTok access restored as Albania adopts new protective filters

Albania has lifted its temporary ban on TikTok after nearly a year, the government announced, saying that concerns about public, social and digital safety have now been addressed and that access will resume nationwide.

The restriction was introduced in March 2025 following a fatal stabbing linked to a social media dispute and was intended to protect younger users from harmful online content.

Under the new arrangement, authorities are partnering with TikTok to introduce protective filters based on keywords and content controls and to strengthen reporting mechanisms for harmful material.

The government described the decision as a shift from restrictive measures to a phase of active monitoring, inter-institutional cooperation, and shared responsibility with digital platforms.

Although the ban has now been lifted, a court challenge contends that the earlier suspension violated the constitutional right to freedom of expression, and a ruling is expected later in February. Opposition figures also criticised the original ban when it was applied ahead of parliamentary elections.

Despite the formal ban, TikTok remained accessible to many users in Albania through virtual private networks during the year it was in force, highlighting the challenge of enforcing such blocks in practice.

Critics have also noted that addressing the impact on youth may require broader digital education and safety measures.
