Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

AI adoption leaves workers exhausted as a new study reveals rising workloads

Researchers from UC Berkeley’s Haas School of Business examined how AI shapes working habits inside a mid-sized technology firm, and the outcome raised concerns about employee well-being.

Workers embraced AI voluntarily because the tools promised faster results instead of lighter schedules. Over time, staff absorbed extra tasks and pushed themselves beyond sustainable limits, creating a form of workload creep that drained energy and reduced job satisfaction.

Once the novelty faded, employees noticed that AI had quietly intensified expectations. Engineers reported spending more time correcting AI-generated material passed on by colleagues, while many workers handled several tasks at once by combining manual effort with multiple automated agents.

Constant task-switching gave a persistent sense of juggling responsibilities, which lowered the quality of their focus.

These researchers also found that AI crept into personal time, with workers prompting tools during breaks, meetings, or moments intended for rest.

As a result, the boundaries between professional and private time weakened, leaving many employees feeling less refreshed and more pressured to keep up with accelerating workflows.

The study argues that AI increased the density of work rather than reducing it, undermining promises that automation would ease daily routines.

Evidence from other institutions reinforces the pattern, with many firms reporting little or no productivity improvement from AI. Researchers recommend clearer company-level AI guidelines to prevent overuse and protect staff from escalating workloads driven by automation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Facebook boosts user creativity with new Meta AI animations

Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.

Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.

The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.

Facebook will highlight Memories that work well with the Restyle function to encourage wider use.

Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.

Seasonal styles will arrive throughout the year to support festive posts and major events.

Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Slovenia sets out an ambitious AI vision ahead of global summit

Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.

He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.

He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.

Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.

Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.

Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.

Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.

Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.

Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Growing reliance on AI sparks worries for young users

Research from the UK Safer Internet Centre reveals nearly all young people aged eight to 17 now use artificial intelligence tools, highlighting how deeply the technology has entered daily life. Growing adoption has also increased reliance, with many teenagers using AI regularly for schoolwork, social interactions and online searches.

Education remains one of the main uses, with students turning to AI for homework support and study assistance. However, concerns about fairness and creativity have emerged, as some pupils worry about false accusations of misuse and reduced independent thinking.

Safety fears remain significant, especially around harmful content and privacy risks linked to AI-generated images. Many teenagers and parents worry the technology could be used to create inappropriate or misleading visuals, raising questions about online protection.

Emotional and social impacts are also becoming clear, with some young people using AI for personal advice or practising communication. Limited parental guidance and growing dependence suggest governments and schools may soon consider stronger oversight and clearer rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU Court opens path for WhatsApp to contest privacy rulings

The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.

A ruling that reshapes how companies defend their interests under the GDPR framework.

The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.

European regulators argued that only national authorities were formal recipients of these decisions. The court found that companies should be granted standing when their commercial rights are at stake.

By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.

Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Moltbook: Inside the experimental AI agent society

Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles- mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.

Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?

 Moltbook reveals how AI agent social networks blur the line between innovation, synthetic hype, and emerging security risk.
Source: freepik

The rise of an agent-driven social experiment

Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.

The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as a ‘Reddit for AI agents’, to a proto-science-fiction simulation of machine society. 

Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy, control, and how much of this emerging machine society was real, and how much was staged.

Moltbook reveals how AI agent social networks blur the line between innovation, synthetic hype, and emerging security risk.
Screenshot: Moltbook.com

How Moltbook evolved from an open-source experiment to a viral phenomenon 

Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.  

Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore. 

The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions. 

Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.

The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.

Moltbook and the illusion of an autonomous AI agent society

At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.

Closer inspection, however, revealed a far less autonomous reality.

Most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because they were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.

Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry- humans directing outputs from behind the curtain. 

Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer- an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities- they are remixing ours. 

Security risks beneath the spectacle of the Moltbook platform 

If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook revealed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that allowed unauthenticated access to agent profiles, enabling bulk data extraction without authentication barriers.

The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems- effectively merging social interaction with operational access.

Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.

Moltbook reveals how AI agent social networks blur the line between innovation, synthetic hype, and emerging security risk.
Source: Freepik

What comes next for AI agents as digital reality becomes their operating ground? 

Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.

What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprise AI security evolves as Cisco expands AI Defense capabilities

Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.

The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.

Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.

Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.

Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EMFA guidance sets expectations for Big Tech media protections

The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.

Article 18 has been in effect for 6 months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.

The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.

The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.

The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!