EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering the enforcement of age limits for access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

MIT researchers tackle antimicrobial resistance with AI and synthetic biology

A pioneering research initiative at MIT is deploying AI and synthetic biology to combat the escalating global crisis of antimicrobial resistance, which has been fuelled by decades of antibiotic overuse and misuse.

The $3 million, three-year project, led by Professor James J. Collins at MIT’s Department of Biological Engineering, centres on developing programmable antibacterials designed to target specific pathogens.

The approach uses AI to design small proteins that turn off specific bacterial functions. These designer molecules would be produced and delivered by engineered microbes, offering a more precise alternative to traditional antibiotics.

Antimicrobial resistance affects low- and middle-income countries most severely, where limited diagnostic infrastructure delays treatment. Drug-resistant infections continue to rise globally, whilst the development of new antibacterial tools has stagnated.


eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.


AI workshops strengthen digital skills in Wales tourism sector

Wales has launched a national programme of practical AI workshops to help tourism and hospitality businesses adopt digital tools. Funded by Visit Wales and the Welsh Government, the initiative aims to strengthen the sector’s competitiveness by assisting companies to save time and enhance their online presence.

Strong demand reflects growing readiness within the sector to embrace AI. Delivered through Business Wales, the free sessions have quickly reached near capacity, with most places booked shortly after launch. The programme is tailored to small and medium-sized enterprises and prioritises hands-on learning over technical theory.

Workshops focus on simple, immediately usable tools that improve website content, search visibility, and customer engagement. Organisers highlight that AI-driven search features are reshaping how visitors discover tourism services, making accuracy, consistency, and authoritative digital content increasingly important.

At the centre of the initiative is Harri, a bespoke AI tool developed specifically for Welsh tourism businesses. Designed to reflect the local context, it supports listings management, customer enquiries, and search optimisation. Early feedback indicates that the approach delivers practical and measurable benefits.


Cisco warns AI agents need checks before joining workforces

The US-based conglomerate Cisco is promoting a future in which AI agents work alongside employees rather than operate as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.

Patel also pointed to a shift towards spec-driven development, which allows smaller human teams to work alongside digital agents instead of relying on larger groups of developers.

Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.

Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.

Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.

Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.

He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.


Cloudflare launches Moltworker platform after AI assistant success

The viral success of Moltbot has prompted Cloudflare to launch a dedicated platform for running the popular AI assistant. The move underscores how the networking company is positioning itself at the centre of the emerging AI agent ecosystem.

Moltbot, an open-source AI personal assistant built on Anthropic’s Claude model, became a viral sensation last month and demonstrated the effectiveness of Cloudflare’s edge infrastructure for running autonomous agents.

The assistant’s rapid adoption validated CEO Matthew Prince’s assertion that AI agents represent a ‘fundamental re-platforming’ of the internet. In response, Cloudflare quickly released Moltworker, a platform specifically designed for securely operating Moltbot and similar AI agents.

Prince described the dynamic as creating a ‘virtuous flywheel,’ with AI agents serving as the new users of the internet, whilst Cloudflare provides the platform they run on and the network they pass through.

Industry analysts have highlighted why Cloudflare’s infrastructure is well suited to the era of agentic computing. RBC Capital Markets noted that AI agents require low-latency, secure inferencing at the network’s edge, precisely what Cloudflare’s Workers platform delivers.

The continued proliferation of AI agents is expected to drive ongoing demand for these capabilities.

Prince, who co-founded the company, revealed that Cloudflare ended 2025 with 4.5 million active human developers on its platform, providing a substantial foundation for the next wave of AI-driven applications and agents built on the company’s infrastructure.


South Korea launches labour–government body to address AI automation pressures

A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.

The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.

The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.

Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.

Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.

The debate intensified as new labour statistics showed a sharp decline in employment within professional and scientific technical services, where AI deployment is suspected of reducing demand for new hires.

KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.


BlockFills freezes withdrawals as Bitcoin drops below $65,000

BlockFills, an institutional digital asset trading and lending firm, has suspended client deposits and withdrawals, citing market volatility as Bitcoin experiences significant declines.

A notice sent to clients last week stated the suspension was intended ‘to further the protection of our clients and the firm.’ The Chicago-based company serves approximately 2,000 institutional clients and provides crypto-backed lending to miners and hedge funds.

Clients were informed they could continue trading under certain restrictions, though positions requiring additional margin could be closed.

The suspension comes as Bitcoin fell below $65,000 last week, down roughly 25% in 2026 and approximately 45% from its October peak near $120,000. In the digital asset industry, withdrawal halts are often interpreted as warning signs of potential liquidity constraints.

Several crypto firms, including FTX, BlockFi, and Celsius, imposed similar restrictions during prior downturns before entering bankruptcy proceedings.

BlockFills has not specified how long the suspension will last. A company spokesperson said the firm is ‘working hand in hand with investors and clients to bring this issue to a swift resolution and to restore liquidity to the platform.’

BlockFills was founded in 2018 with backing from Susquehanna and CME Group, and there is currently no public evidence of insolvency.


Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.


AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results instead of lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching left employees with a persistent sense of juggling responsibilities, which lowered the quality of their focus.

The researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.
