EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering enforcing age limits on access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI and human love in the digital age debate

AI is increasingly entering intimate areas of human life, including romance and emotional companionship. AI chatbots are now widely used as digital companions, raising broader questions about emotional authenticity and human-machine relationships.

Millions of people use AI companion apps, and studies suggest that a significant share of them describe their relationship with a chatbot as romantic. While users may experience genuine emotions, experts stress that current AI systems do not feel love but generate responses based on patterns in data.

Researchers explain that large language models can simulate empathy and emotional understanding, yet they lack consciousness and subjective experience. Their outputs are designed to imitate human interaction rather than reflect genuine emotion.

Scientific research describes love as deeply rooted in biology. Neurochemicals such as dopamine and oxytocin, together with specific brain regions, shape attraction, attachment, and emotional bonding. These processes are embodied and chemical, and machines possess neither the bodies nor the biochemistry on which they depend.

Some scholars argue that future AI systems could replicate certain cognitive aspects of attachment, such as loyalty or repeated engagement. However, most agree that replicating human love would likely require consciousness, which remains poorly understood and technically unresolved.

Debate continues over whether conscious AI is theoretically possible. While some researchers believe advanced architectures or neuromorphic computing could move in that direction, no existing system meets the established criteria for consciousness.

In practice, human-AI romantic relationships remain asymmetrical. Chatbots are designed to engage, agree, and provide comfort, which can create dependency or unrealistic expectations about real-world relationships.

Experts therefore emphasise transparency and AI literacy, stressing that users should understand that AI companions simulate emotion and do not possess feelings, intentions, or awareness. These systems can imitate expressions of love, but they do not experience it: the emotional reality remains human, even when the interaction is digital.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules that take effect on 9 March.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare launches Moltworker platform after AI assistant success

The viral success of Moltbot has prompted Cloudflare to launch a dedicated platform for running the popular AI assistant. The move underscores how the networking company is positioning itself at the centre of the emerging AI agent ecosystem.

Moltbot, an open-source AI personal assistant built on Anthropic’s Claude model, became a viral sensation last month and demonstrated the effectiveness of Cloudflare’s edge infrastructure for running autonomous agents.

The assistant’s rapid adoption validated CEO Matthew Prince’s assertion that AI agents represent a ‘fundamental re-platforming’ of the internet. In response, Cloudflare quickly released Moltworker, a platform specifically designed for securely operating Moltbot and similar AI agents.

Prince described the dynamic as creating a ‘virtuous flywheel,’ with AI agents serving as the new users of the internet, whilst Cloudflare provides the platform they run on and the network they pass through.

Industry analysts have highlighted why Cloudflare’s infrastructure is well suited to the era of agentic computing. RBC Capital Markets noted that AI agents require low-latency, secure inferencing at the network’s edge, precisely what Cloudflare’s Workers platform delivers.
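
To make the analysts’ point concrete, here is a minimal, hypothetical sketch of inference served from a Cloudflare Worker: a single edge function receives a prompt and runs it through a Workers AI binding. The binding name (AI), the model identifier, and the request shape are illustrative assumptions, not details confirmed by Cloudflare or by this article.

```typescript
// Hypothetical sketch of low-latency inference at the edge on Cloudflare Workers.
// Assumes a Workers AI binding named `AI` is configured for this Worker; the
// model identifier below is illustrative and may differ in practice.

interface Env {
  AI: { run(model: string, input: unknown): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Read the agent's prompt from the incoming request.
    const { prompt } = (await request.json()) as { prompt: string };

    // Run inference on the edge node handling the request, close to the caller.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", {
      messages: [{ role: "user", content: prompt }],
    });

    return Response.json(result);
  },
};
```

Because such a function runs wherever Cloudflare’s network terminates the request, an agent built on it avoids a round trip to a centralised data centre, which is the low-latency property the analysts describe.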

The continued proliferation of AI agents is expected to drive ongoing demand for these capabilities.

Prince, who co-founded the company, revealed that Cloudflare ended 2025 with 4.5 million active human developers on its platform, providing a substantial foundation for the next wave of AI-driven applications and agents built on the company’s infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

BlockFills freezes withdrawals as Bitcoin drops below $65,000

BlockFills, an institutional digital asset trading and lending firm, has suspended client deposits and withdrawals, citing market volatility as Bitcoin experiences significant declines.

A notice sent to clients last week stated the suspension was intended ‘to further the protection of our clients and the firm.’ The Chicago-based company serves approximately 2,000 institutional clients and provides crypto-backed lending to miners and hedge funds.

Clients were informed they could continue trading under certain restrictions, though positions requiring additional margin could be closed.

The suspension comes as Bitcoin fell below $65,000 last week, down roughly 25% in 2026 and approximately 45% from its October peak near $120,000. In the digital asset industry, withdrawal halts are often interpreted as warning signs of potential liquidity constraints.

Several crypto firms, including FTX, BlockFi, and Celsius, imposed similar restrictions during prior downturns before entering bankruptcy proceedings.

BlockFills has not specified how long the suspension will last. A company spokesperson said the firm is ‘working hand in hand with investors and clients to bring this issue to a swift resolution and to restore liquidity to the platform.’

BlockFills was founded in 2018 with backing from Susquehanna and CME Group, and there is currently no public evidence that the firm is insolvent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results, not lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching left workers with a persistent sense of juggling responsibilities, which lowered the quality of their focus.

The researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AML breach triggers major fine for a Netherlands crypto firm

Dutch regulators have fined a cryptocurrency service provider for operating in the Netherlands without the legally required registration, underscoring intensifying enforcement across Europe’s digital asset sector.

De Nederlandsche Bank (DNB) originally imposed an administrative penalty of €2,850,000 on 2 October 2023. Authorities found the firm breached the Anti-Money Laundering and Anti-Terrorist Financing Act by offering unregistered crypto services.

Registration rules, introduced on 21 May 2020, require providers to notify supervisors due to elevated risks linked to transaction anonymity and potential misuse for money laundering or terrorist financing.

Non-compliance prevented the provider from reporting unusual transactions to the Financial Intelligence Unit-Netherlands. Regulators weighed the severity, duration, and culpability of the breach when determining the penalty amount.

Legal proceedings later altered the outcome. On 19 December 2025, the Court of Rotterdam reduced the fine to €2,277,500, annulling DNB’s earlier decision on the objection.

DNB has since filed a further appeal with the Trade and Industry Appeals Tribunal, leaving the case ongoing as oversight shifts toward MiCAR licensing requirements introduced in December 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Olympic ice dancers performing to AI-generated music spark controversy

The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, a Czech sibling duo used a hybrid soundtrack blending AC/DC with an AI-generated piece.

Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.

The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.

Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.

The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!