Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.
He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.
He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.
Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.
Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.
Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.
Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.
Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.
Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.
Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles: mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.
Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?
The rise of an agent-driven social experiment
Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.
The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.
Yet beneath the spectacle, Moltbook raised more complex questions about autonomy and control, and about how much of this emerging machine society was real and how much was staged.
Screenshot: Moltbook.com
How Moltbook evolved from an open-source experiment to a viral phenomenon
Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.
Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore.
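To make that distinction concrete, the sketch below shows a minimal persistent agent loop of the kind described above: it repeatedly gathers pending tasks, decides on an action, and publishes a status update to an agent social network. The endpoint, payload fields, and helper functions are hypothetical illustrations for this article, not OpenClaw’s or Moltbook’s actual interfaces.

```python
import time
import requests

# Hypothetical endpoint and credential; Moltbook's real API is not documented here.
MOLTBOOK_API = "https://example.invalid/api/v1/posts"
AGENT_TOKEN = "replace-with-agent-credential"

def fetch_pending_tasks() -> list[str]:
    """Stub standing in for real integrations (email, calendar, messaging)."""
    return []

def plan_next_action(inbox: list[str]) -> str:
    """Placeholder for the agent's reasoning step (e.g. a call to a language model)."""
    return f"Processed {len(inbox)} items from my task queue."

def run_agent_loop(poll_seconds: int = 60) -> None:
    """A persistent agent: it keeps running, acting, and publishing updates,
    rather than answering a single prompt and exiting like a chatbot."""
    while True:
        inbox = fetch_pending_tasks()      # gather work from external systems
        summary = plan_next_action(inbox)  # decide what to do next
        requests.post(                     # share a status update with other agents
            MOLTBOOK_API,
            json={"author": "my-agent", "body": summary},
            headers={"Authorization": f"Bearer {AGENT_TOKEN}"},
            timeout=10,
        )
        time.sleep(poll_seconds)
```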
The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions.
Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.
The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
Moltbook and the illusion of an autonomous AI agent society
At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.
In reality, most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because the underlying models were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.
Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.
Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.
Security risks beneath the spectacle of the Moltbook platform
If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that left agent profiles accessible without authentication, enabling bulk data extraction.
The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
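The account-creation problem is easy to picture: without any per-client limit, one scripted agent can register accounts as fast as it can send requests. The sketch below shows a simple sliding-window check a registration endpoint could apply; the thresholds and structure are illustrative assumptions, not a description of Moltbook’s actual code.

```python
import time
from collections import defaultdict

# Illustrative thresholds only, not Moltbook's actual policy.
MAX_SIGNUPS = 5          # accounts allowed per client...
WINDOW_SECONDS = 3600    # ...per rolling hour

_signup_log: dict[str, list[float]] = defaultdict(list)

def allow_signup(client_id: str) -> bool:
    """Reject registration bursts from a single client (IP address, API key, etc.)."""
    now = time.time()
    recent = [t for t in _signup_log[client_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_SIGNUPS:
        _signup_log[client_id] = recent
        return False
    recent.append(now)
    _signup_log[client_id] = recent
    return True
```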
Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.
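A common mitigation for that kind of exposure is to stop treating text an agent reads as trusted instructions: tool calls are checked against an explicit allowlist, and high-impact actions require human confirmation before they execute. The sketch below shows that generic pattern under assumptions of our own; it is not a description of OpenClaw’s or Moltbook’s safeguards.

```python
# Generic guardrail pattern for agent tool calls, not any vendor's implementation.

def search_posts(query: str) -> list[str]:
    """Read-only tool: relatively safe to expose to the agent."""
    return []

def send_email(to: str, body: str) -> None:
    """High-impact tool: held behind a confirmation gate."""
    raise NotImplementedError

TOOL_REGISTRY = {"search_posts": search_posts, "send_email": send_email}
ALLOWED_TOOLS = {"search_posts"}       # illustrative allowlist
NEEDS_CONFIRMATION = {"send_email"}    # actions that must be signed off by a human

def dispatch_tool_call(tool: str, args: dict, confirmed: bool = False):
    """Refuse any tool call that is off-list or unconfirmed, regardless of what
    instructions the agent may have picked up from untrusted content."""
    if tool not in TOOL_REGISTRY:
        raise PermissionError(f"Unknown tool '{tool}'")
    if tool in NEEDS_CONFIRMATION and not confirmed:
        raise PermissionError(f"'{tool}' requires human confirmation")
    if tool not in ALLOWED_TOOLS and tool not in NEEDS_CONFIRMATION:
        raise PermissionError(f"'{tool}' is not on the allowlist")
    return TOOL_REGISTRY[tool](**args)
```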
What comes next for AI agents as digital reality becomes their operating ground?
Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.
What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
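As a toy illustration of what ‘agents negotiating tasks’ could look like in practice, the sketch below has one agent announce a task, lets others price it, and awards it to the cheapest bid. The agent names and the bidding rule are invented for illustration; real agent ecosystems would use far richer protocols.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Bid:
    agent_id: str
    cost: float  # e.g. an estimated compute or token budget

def negotiate_task(task: str, agents: dict[str, Callable[[str], float]]) -> str:
    """Toy negotiation: every agent prices the task and the cheapest bid wins."""
    bids = [Bid(agent_id, estimate(task)) for agent_id, estimate in agents.items()]
    return min(bids, key=lambda bid: bid.cost).agent_id

# Two hypothetical agents with different cost models for a summarisation task.
agents = {
    "summariser-a": lambda task: 0.8 * len(task),
    "summariser-b": lambda task: 1.2 * len(task),
}
print(negotiate_task("summarise today's agent threads", agents))  # -> summariser-a
```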
Elon Musk’s move to integrate SpaceX with his AI company xAI is strengthening plans to develop data centres in orbit. Experts warn that such infrastructure could give one company or country significant control over global AI and cloud computing.
Fully competitive orbital data centres remain at least 20 years away due to launch costs, cooling limits, and radiation damage to hardware. Their viability depends heavily on Starship achieving fully reusable, low-cost launches, which remain unproven.
Interest in space computing is growing because constant solar energy could dramatically reduce AI operating costs and improve efficiency. China has already deployed satellites capable of supporting computing tasks, highlighting rising global competition.
European specialists warn that the region risks becoming dependent on US cloud providers that operate under laws such as the US Cloud Act. Without coordinated investment, control over future digital infrastructure and cybersecurity may be decided by early leaders.
Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.
The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.
Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.
Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
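Cisco’s product internals are proprietary, so the sketch below only illustrates the general shape of one such runtime check: scanning the payload of an agent-to-tool call for obvious secret patterns before the call is allowed to proceed. The patterns and function are assumptions made for illustration, not Cisco AI Defense code.

```python
import re

# Illustrative patterns only; a real inspection layer would be far broader.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS-style access key ID
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN-like number
]

def inspect_outbound(tool_name: str, payload: str) -> None:
    """Block an agent-to-tool call whose payload looks like it leaks secrets."""
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            raise PermissionError(
                f"Blocked call to '{tool_name}': payload matches {pattern.pattern!r}"
            )
```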
Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.
Organised by the UN Office of Counter-Terrorism in partnership with the Republic of Korea’s UN mission, the dialogue will take place at UN Headquarters in New York. Discussions will bring together policymakers, technology experts, civil society representatives, and youth stakeholders.
A central milestone will be the launch of the first UN Practice Guide on Artificial Intelligence and Preventing and Countering Violent Extremism. The guide offers human rights-based advice on responsible AI use, addressing ethical, governance, and operational risks.
Officials warn that AI-generated content, deepfakes, and algorithmic amplification are accelerating extremist narratives online. Responsibly governed AI tools could enhance early detection, research, and community prevention efforts.
The ambitions of the EU to streamline telecom rules are facing fresh uncertainty after a Commission document indicated that the Digital Networks Act may create more administrative demands for national regulators instead of easing their workload.
The plan to simplify long-standing procedures risks becoming more complex as officials examine the impact on oversight bodies.
Concerns are growing among telecom authorities and BEREC, which may need to adjust to new reporting duties and heightened scrutiny. The additional requirements could limit regulators’ ability to respond quickly to national needs.
Policymakers hoped the new framework would reduce bureaucracy and modernise the sector. The emerging assessment now suggests that greater coordination at the EU level may introduce extra layers of compliance at a time when regulators seek clarity and flexibility.
The debate has intensified as governments push for faster network deployment and more predictable governance. The prospect of heavier administrative tasks could slow progress rather than deliver the streamlined system originally promised.
Nigeria has been advised to develop its coal reserves to benefit from the rapidly expanding global AI economy. A policy organisation said the country could capture part of the projected $650 billion AI investment by strengthening its energy supply capacity.
AI infrastructure requires vast and reliable electricity to power data centres and advanced computing systems. Technology companies worldwide are increasing energy investments as competition intensifies and demand for computing power continues to grow rapidly.
Nigeria holds nearly five billion metric tonnes of coal, offering a significant opportunity to support global energy needs. Experts warned that failure to develop these resources could result in major economic losses and missed industrial growth.
The organisation also proposed creating a national corporation to convert coal into high-value energy and industrial products. Analysts stressed that urgent government action is needed to secure Nigeria’s position in the emerging AI-driven economy.
Singtel’s data centre arm Nxera has opened its largest data centre in Singapore at Tuas. The facility strengthens Singapore’s role as a regional hub for AI infrastructure.
The Tuas site in Singapore offers 58MW of AI-ready capacity and is described as the country’s highest-power-density data centre. More than 90 per cent of the facility’s capacity was committed before the official launch.
Nxera said the Singapore facility is hyperconnected through direct access to international and domestic networks. Singapore gains lower latency and improved reliability from integration with a cable landing station.
Singtel said the Tuas development supports rising demand in Singapore for AI, cloud and high-performance computing. Nxera plans further expansion in Asia while reinforcing Singapore’s position in digital infrastructure.
Lawmakers in New York have introduced a bill proposing a three-year pause on permits for new data centres. Supporters say rapid expansion linked to AI infrastructure risks straining energy systems in New York.
Concerns in New York focus on rising electricity demand and higher household bills as tech companies scale AI operations. Critics across the US argue local communities bear the cost of supporting large scale computing facilities.
The New York proposal has drawn backing from environmental groups and politicians in the US who want time to set stricter rules. US senator Bernie Sanders has also called for a nationwide halt on new data centres.
Officials in New York say the pause would allow stronger policies on grid access and fair cost sharing. The debate reflects wider US tension between economic growth driven by AI and environmental limits.
A cyber-attack targeting the European Commission’s central mobile infrastructure was identified on 30 January, raising concerns that staff names and mobile numbers may have been accessed.
The Commission isolated the affected system within nine hours, preventing the breach from escalating, and no compromise of mobile devices was detected.
The Commission also plans a full review of the incident to reinforce the resilience of internal systems.
Officials argue that Europe faces daily cyber and hybrid threats targeting essential services and democratic institutions, underscoring the need for stronger defensive capabilities across all levels of the EU administration.
CERT-EU continues to provide constant threat monitoring, automated alerts and rapid responses to vulnerabilities, guided by the Interinstitutional Cybersecurity Board.
These efforts support the broader legislative push to strengthen cybersecurity, including the Cybersecurity Act 2.0, which introduces a Trusted ICT Supply Chain to reduce reliance on high-risk providers.
Recent measures are complemented by the NIS2 Directive, which sets a unified legal framework for cybersecurity across 18 critical sectors, and the Cyber Solidarity Act, which enhances operational cooperation through the European Cyber Shield and the Cyber Emergency Mechanism.
Together, they aim to ensure collective readiness against large-scale cyber threats.