Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
A block on Google would disrupt essential digital services rather than push the company to resolve its ongoing legal disputes over unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.
Investors and researchers are increasingly arguing that the future of AI lies beyond large language models. In London and across Europe, startups are developing so-called world models designed to simulate physical reality rather than simply predict text.
Unlike LLMs, which rely on static datasets, world models aim to build internal representations of cause and effect. Advocates say these systems are better suited to autonomous vehicles, robotics, defence and industrial simulation.
London-based Stanhope AI is among the companies pursuing this approach, claiming its systems learn by inference and continuously update their internal maps. The company is reportedly working with European governments and aerospace firms on AI drone applications.
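The contrast with text prediction can be made concrete: such systems run a continuous predict-observe-update loop, shifting an internal estimate of the world in proportion to how uncertain it is. The toy scalar filter below illustrates only this general pattern, not Stanhope AI's actual method.

```typescript
// Toy sketch of a predict-observe-update loop, the basic pattern behind
// world models that "learn by inference". A Kalman-filter-style scalar
// example for illustration only; not any company's actual system.

type Belief = { mean: number; variance: number };

// Predict the next state; uncertainty grows by the process noise.
function predict(belief: Belief, processNoise: number): Belief {
  return { mean: belief.mean, variance: belief.variance + processNoise };
}

// Update toward a new observation, weighted by relative uncertainty
// (the gain): surprising observations move the internal map further
// when the model is less certain.
function update(belief: Belief, observation: number, obsNoise: number): Belief {
  const gain = belief.variance / (belief.variance + obsNoise);
  return {
    mean: belief.mean + gain * (observation - belief.mean),
    variance: (1 - gain) * belief.variance,
  };
}

// Continuous loop: the internal map is revised with every observation.
let belief: Belief = { mean: 0, variance: 1 };
for (const obs of [0.9, 1.1, 1.0, 0.95]) {
  belief = predict(belief, 0.01);
  belief = update(belief, obs, 0.1);
}
console.log(belief); // mean converges toward ~1.0 as evidence accumulates
```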
Supporters argue that safety and explainability must be embedded from the outset, particularly under frameworks such as the EU AI Act. Investors suggest that hybrid systems combining LLMs with physics-aware models could unlock large commercial markets across Europe.
A prominent AI safety researcher has resigned from Anthropic, issuing a stark warning about global technological and societal risks. Mrinank Sharma announced his departure in a public letter, citing concerns spanning AI development, bioweapons, and broader geopolitical instability.
Sharma led AI safeguards research, including model alignment, bioterrorism risks, and human-AI behavioural dynamics. Despite praising his tenure, he said ethical tensions and pressures hindered the pursuit of long-term safety priorities.
His exit comes amid wider turbulence across the AI sector. Another researcher recently left OpenAI, raising concerns over the integration of advertising into chatbot environments and the psychological implications of increasingly human-like AI interactions.
Anthropic, founded by former OpenAI staff, balances commercial AI deployment with safety and risk mitigation. Sharma plans to return to the UK to study poetry, stepping back from AI research amid global uncertainty.
Saudi Arabia is steering the new phase of Vision 2030 toward technology, digital infrastructure and advanced industry instead of relying on large urban construction schemes.
Officials highlight the need to support sectors that can accelerate innovation, strengthen data capabilities and expand the kingdom’s role in global tech development.
The move aligns with ongoing efforts to diversify the economy and build long-term competitiveness in areas such as smart manufacturing, logistics technology and clean energy systems.
Recent adjustments involve scaling back or rescheduling some giga projects so that investment can be channelled toward initiatives with strong digital and technological potential.
Elements of the NEOM programme have been revised, while funding attention is shifting to areas that enable automation, renewable technologies and high-value services.
Saudi Arabia aims to position Riyadh as a regional hub for research, emerging technologies and advanced industries. Officials stress that Vision 2030 remains active, yet its next stage will focus on projects that can accelerate technological adoption and strengthen economic resilience.
The Public Investment Fund continues to guide investment toward ecosystems that support innovation, including clean energy, digital infrastructure and international technology partnerships.
The approach reflects earlier recommendations to align economic planning with evolving skills, future labour market needs and opportunities in fast-growing sectors.
Analysts note that the revised direction prioritises sustainable growth by expanding the kingdom’s participation in global technological development instead of relying mainly on construction-driven momentum.
Social and regulatory reforms connected to digital transformation also remain part of the Vision 2030 agenda. Investments in training, digital literacy and workforce development are intended to ensure that young people can participate fully in the technology sectors the kingdom is prioritising.
With such a shift, the government seeks to balance long-term economic diversification with practical technological goals that reinforce innovation and strengthen the country’s competitive position.
Microsoft is studying high-temperature superconductors to transmit electricity to its AI data centres in the US. The company says zero-resistance cables could reduce power losses and eliminate heat generated during transmission.
High-temperature superconductors can carry large currents through compact cables, potentially cutting space requirements for substations and overhead lines. Microsoft argues that denser infrastructure could support expanding AI workloads across the US.
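The efficiency claim follows from basic circuit physics: resistive loss grows with the square of the current, so a conductor with effectively zero resistance dissipates essentially no power however much current it carries.

$$P_{\text{loss}} = I^{2}R, \qquad R \approx 0 \;\Longrightarrow\; P_{\text{loss}} \approx 0$$

The same square law underpins the compactness argument: doubling the current through a conventional cable quadruples its losses, whereas a superconducting cable carries the higher current without that penalty.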
The main obstacle is cooling, as superconducting materials must operate at extremely low temperatures using cryogenic systems. Even high-temperature variants require conditions near minus 200 degrees Celsius.
Rising electricity demand from AI systems has strained grids in the US, prompting political scrutiny and industry pledges to fund infrastructure upgrades. Microsoft says efficiency gains could ease pressure while it develops additional power solutions.
The viral success of Moltbot has prompted Cloudflare to launch a dedicated platform for running the popular AI assistant. The move underscores how the networking company is positioning itself at the centre of the emerging AI agent ecosystem.
Moltbot, an open-source AI personal assistant built on Anthropic’s Claude model, became a viral sensation last month and demonstrated the effectiveness of Cloudflare’s edge infrastructure for running autonomous agents.
The assistant’s rapid adoption validated CEO Matthew Prince’s assertion that AI agents represent a ‘fundamental re-platforming’ of the internet. In response, Cloudflare quickly released Moltworker, a platform specifically designed for securely operating Moltbot and similar AI agents.
Prince described the dynamic as creating a ‘virtuous flywheel,’ with AI agents serving as the new users of the internet, whilst Cloudflare provides the platform they run on and the network they pass through.
Industry analysts have highlighted why Cloudflare’s infrastructure is well suited to the era of agentic computing. RBC Capital Markets noted that AI agents require low-latency, secure inferencing at the network’s edge, precisely what Cloudflare’s Workers platform delivers.
The continued proliferation of AI agents is expected to drive ongoing demand for these capabilities.
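As a rough illustration of the pattern analysts describe, here is a minimal sketch of an edge inference endpoint on Cloudflare’s Workers platform. It assumes the Workers AI binding is configured as `AI` in the project’s wrangler.toml; the model name is illustrative, and this is not Moltworker’s actual code.

```typescript
// Minimal sketch of low-latency inference at the network edge using a
// Cloudflare Worker with the Workers AI binding (assumed configured).

export interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    if (request.method !== "POST") {
      return new Response("POST JSON with a `prompt` field", { status: 405 });
    }
    const { prompt } = (await request.json()) as { prompt: string };
    // The Worker executes in the data centre nearest the caller, so an
    // agent gets an answer without a round trip to a centralised region.
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct", { prompt });
    return Response.json(result);
  },
};
```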
Prince, who co-founded the company, revealed that Cloudflare ended 2025 with 4.5 million active human developers on its platform, providing a substantial foundation for the next wave of AI-driven applications and agents built on the company’s infrastructure.
Meta has introduced a new group of Facebook features that rely on Meta AI to expand personal expression across profiles, photos and Stories.
Users gain the option to animate their profile pictures, turning a still image into a short motion clip that reflects their mood instead of remaining static. Effects such as waves, confetti, hearts and party hats offer simple tools for creating a more playful online presence.
The update also includes Restyle, a tool that reimagines Stories and Memories through preset looks or AI-generated prompts. Users may shift an ordinary photograph into an illustrated, anime or glowy aesthetic, or adjust lighting and colour to match a chosen theme instead of limiting themselves to basic filters.
Facebook will highlight Memories that work well with the Restyle function to encourage wider use.
Feed posts receive a change of their own through animated backgrounds that appear gradually across accounts. People can pair text updates with visual backdrops such as ocean waves or falling leaves, creating messages that stand out instead of blending into the timeline.
Seasonal styles will arrive throughout the year to support festive posts and major events.
Meta aims to encourage more engaging interactions by giving users easy tools for playful creativity. The new features are designed to support expressive posts that feel more personal and more visually distinctive, helping users craft share-worthy moments across the platform.
Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles: mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.
Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?
The rise of an agent-driven social experiment
Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.
The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.
Yet beneath the spectacle, Moltbook was raising harder questions about autonomy and control, and about how much of this emerging machine society was real and how much was staged.
Screenshot: Moltbook.com
How Moltbook evolved from an open-source experiment to a viral phenomenon
Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.
Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore.
The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions.
Influential figures amplified the spectacle, including prominent AI researcher and OpenAI co-founder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.
The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
Moltbook and the illusion of an autonomous AI agent society
At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.
In reality, most Moltbook agents were not acting independently but were executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because the underlying models were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms because the training data was human-derived.
Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.
Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.
Security risks beneath the spectacle of the Moltbook platform
If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that allowed unauthenticated access to agent profiles, enabling bulk data extraction.
The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
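The account-creation flaw, at least, has a textbook mitigation. Below is a minimal sketch of the kind of per-client cap that was reportedly absent, assuming a generic counter store; the interface and names are illustrative stand-ins, not Moltbook’s actual stack.

```typescript
// Sketch of a per-client rate limit on account creation. The store
// interface is a stand-in for any counter backend (Redis, Durable
// Objects, etc.); all names here are illustrative.

interface CounterStore {
  // Increment the counter at `key`, creating it with the given TTL,
  // and return the new count within the current window.
  incr(key: string, ttlSeconds: number): Promise<number>;
}

const MAX_SIGNUPS_PER_HOUR = 5;

async function allowSignup(store: CounterStore, clientIp: string): Promise<boolean> {
  // Count signups per source address in a one-hour window; rejecting
  // past the cap stops a single agent from registering accounts in bulk.
  const count = await store.incr(`signup:${clientIp}`, 3600);
  return count <= MAX_SIGNUPS_PER_HOUR;
}
```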
Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.
What comes next for AI agents as digital reality becomes their operating ground?
Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.
What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
The International Federation of Robotics says AI is accelerating the move of robots from research labs into real-world use. A new position paper highlights rapid adoption across multiple industries as AI becomes a core enabler.
Logistics, manufacturing and services are leading AI-driven robotics deployment. Warehousing and supply chains benefit from controlled environments, while factories use AI to improve efficiency, quality and precision in sectors including automotive and electronics.
The IFR said service robots are expanding as labour shortages persist, with restaurants and hospitality testing AI-enabled machines. Hybrid models are emerging in which robots handle repetitive work while humans focus on customer interaction.
Investment is rising globally, with major commitments in the US, Europe and China. The IFR expects AI to improve returns on robotics investment over the next decade through lower costs and higher productivity.
The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.
Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.
The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.
The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.
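Taken together, these recommendations imply a fairly concrete submission record. A hypothetical sketch of what such a declaration could carry, with field names invented for illustration rather than drawn from the guidelines:

```typescript
// Hypothetical shape of a media self-declaration submission reflecting the
// Commission's recommendations: several accounts per submission, a clear
// contact point, and support for later updates or withdrawal. Field names
// are illustrative only.

interface MediaSelfDeclaration {
  declarationId: string;
  mediaOrganisation: string;
  accounts: string[];    // multiple platform accounts in one submission
  contactPoint: string;  // named contact for acknowledgements and queries
  language: string;      // declarations should be available in all EU languages
  submittedAt: Date;
  status: "pending" | "acknowledged" | "updated" | "withdrawn";
}
```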
The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.