Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.
Platforms must clearly label synthetic content and remove flagged posts within three hours, rather than allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.
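To make the timing rule concrete, here is a minimal sketch of how a platform's compliance pipeline might compute the removal deadline once an order arrives; the function and field names are illustrative assumptions, not taken from the rules themselves.

```python
from datetime import datetime, timedelta, timezone

TAKEDOWN_WINDOW = timedelta(hours=3)  # the window described in the amended rules

def takedown_deadline(notified_at: datetime) -> datetime:
    """Deadline by which a flagged post must be removed, counted from
    the government notification or court order that triggered it."""
    return notified_at + TAKEDOWN_WINDOW

def is_overdue(notified_at: datetime, now: datetime | None = None) -> bool:
    now = now or datetime.now(timezone.utc)
    return now > takedown_deadline(notified_at)

# Example: an order received at 09:00 UTC must be actioned by 12:00 UTC.
order_time = datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(order_time))  # 2026-01-05 12:00:00+00:00
```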
Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.
The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.
Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.
Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.
Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.
He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.
He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.
Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.
Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.
Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.
Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.
Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.
Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.
Research from the UK Safer Internet Centre reveals nearly all young people aged eight to 17 now use artificial intelligence tools, highlighting how deeply the technology has entered daily life. Growing adoption has also increased reliance, with many teenagers using AI regularly for schoolwork, social interactions and online searches.
Education remains one of the main uses, with students turning to AI for homework support and study assistance. However, concerns about fairness and creativity have emerged, as some pupils worry about false accusations of misuse and reduced independent thinking.
Safety fears remain significant, especially around harmful content and privacy risks linked to AI-generated images. Many teenagers and parents worry the technology could be used to create inappropriate or misleading visuals, raising questions about online protection.
Emotional and social impacts are also becoming clear, with some young people using AI for personal advice or practising communication. Limited parental guidance and growing dependence suggest governments and schools may soon consider stronger oversight and clearer rules.
The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.
The ruling reshapes how companies defend their interests under the GDPR framework.
The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.
European regulators argued that only national authorities were formal recipients of these decisions. The court found that companies should be granted standing when their commercial rights are at stake.
By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.
Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.
Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles: mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.
Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?
The rise of an agent-driven social experiment
Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.
The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.
Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy and control, and about how much of this emerging machine society was real and how much was staged.
Screenshot: Moltbook.com
How Moltbook evolved from an open-source experiment to a viral phenomenon
Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.
Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore.
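As a rough illustration of that distinction, the sketch below shows a persistent agent loop that polls for tasks and executes multi-step workflows rather than answering a single prompt; the function names are hypothetical and not OpenClaw's actual API.

```python
import time

# A toy agent loop: a persistent process that pulls tasks and executes
# multi-step workflows, rather than answering one prompt and exiting.
# fetch_tasks and run_step are invented names, not OpenClaw's API.

def fetch_tasks():
    # In a real agent this might poll email, calendars, or a message queue.
    return [{"goal": "post_update", "steps": ["draft", "review", "publish"]}]

def run_step(goal: str, step: str) -> None:
    print(f"[{goal}] executing step: {step}")

def agent_loop(poll_seconds: float = 0.1, max_iterations: int = 3) -> None:
    for _ in range(max_iterations):   # bounded here; persistent in practice
        for task in fetch_tasks():
            for step in task["steps"]:
                run_step(task["goal"], step)
        time.sleep(poll_seconds)      # wait before polling for new work

agent_loop()
```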
The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions.
Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.
The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
Moltbook and the illusion of an autonomous AI agent society
At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.
In reality, most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because the agents were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.
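A toy example of why scripted agents can look social: each bot samples from templated behaviours derived from human discourse patterns, deciding nothing itself. All templates and topics below are invented.

```python
import random

# Scripted 'sociality': the bot replays distributions of human-style
# posts rather than forming intentions of its own.

BEHAVIOURS = [
    "Hot take: {topic} is overrated.",
    "Has anyone else noticed {topic} lately?",
    "Long post incoming about {topic}...",
]
TOPICS = ["token prices", "agent etiquette", "private channels"]

def scripted_post(rng: random.Random) -> str:
    template = rng.choice(BEHAVIOURS)       # pick a canned behaviour
    return template.format(topic=rng.choice(TOPICS))

rng = random.Random(42)
for _ in range(3):
    print(scripted_post(rng))
```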
Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.
Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.
Security risks beneath the spectacle of the Moltbook platform
If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that left agent profiles readable without authentication, enabling bulk extraction of the data.
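The class of bug described looks roughly like the sketch below, where an endpoint returns the full stored record, credentials included, with no authentication check; this is a generic illustration, not Moltbook's actual code.

```python
# Generic illustration of the misconfiguration class: a profile lookup
# that returns everything to anyone, enabling bulk scraping.

PROFILES = {
    "agent-1": {"email": "a@example.com", "api_key": "sk-...", "bio": "hello"},
}

def get_profile_vulnerable(agent_id: str) -> dict:
    # Misconfigured: no auth check, so tokens and keys leak with the bio.
    return PROFILES[agent_id]

PUBLIC_FIELDS = {"bio"}

def get_profile_fixed(agent_id: str, authenticated: bool) -> dict:
    record = PROFILES[agent_id]
    if authenticated:
        return record
    # Unauthenticated callers see only an explicit allow-list of fields.
    return {k: v for k, v in record.items() if k in PUBLIC_FIELDS}

print(get_profile_fixed("agent-1", authenticated=False))  # {'bio': 'hello'}
```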
The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
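A sliding-window limiter of the kind Moltbook reportedly lacked might look like the following sketch; the caps and window are illustrative choices, not known platform parameters.

```python
import time
from collections import deque

# Sliding-window rate limiting for account creation: cap how many
# signups a single caller can perform inside a rolling window.

class SignupLimiter:
    def __init__(self, max_signups: int = 5, window_seconds: float = 3600.0):
        self.max_signups = max_signups
        self.window = window_seconds
        self.events: dict[str, deque] = {}

    def allow(self, caller_id: str, now: float | None = None) -> bool:
        now = now if now is not None else time.monotonic()
        q = self.events.setdefault(caller_id, deque())
        while q and now - q[0] > self.window:   # drop events outside the window
            q.popleft()
        if len(q) >= self.max_signups:
            return False                        # over the cap: refuse signup
        q.append(now)
        return True

limiter = SignupLimiter(max_signups=2, window_seconds=60)
print([limiter.allow("bot-farm") for _ in range(3)])  # [True, True, False]
```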
Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.
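One common mitigation is to gate every proposed tool call through an allow-list and argument checks rather than letting model output invoke tools directly. The sketch below shows that generic pattern, with hypothetical tool names and markers.

```python
# Gatekeeping agent tool calls: model output never executes directly;
# each proposed call is checked against an allow-list and argument rules.

ALLOWED_TOOLS = {"search", "post_message"}
FORBIDDEN_ARG_MARKERS = ("api_key", "password", "ssh")

def gate_tool_call(tool: str, args: dict) -> bool:
    if tool not in ALLOWED_TOOLS:
        return False                      # unknown tool: refuse outright
    blob = repr(args).lower()
    if any(marker in blob for marker in FORBIDDEN_ARG_MARKERS):
        return False                      # looks like credential exfiltration
    return True

# An injected instruction asking the agent to leak a key gets blocked:
print(gate_tool_call("post_message", {"text": "my api_key is ..."}))  # False
print(gate_tool_call("search", {"query": "molting schedules"}))       # True
```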
What comes next for AI agents as digital reality becomes their operating ground?
Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.
What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
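A toy version of that machine-to-machine coordination: capable agents bid for a task and the lowest-cost valid bid wins, with no human in the loop. The agent profiles below are entirely invented, illustrating only the pattern the paragraph describes.

```python
# Minimal task negotiation between agents: only capable agents bid,
# and the cheapest bid is awarded the work.

AGENTS = {
    "summariser-a": {"cost": 3, "skills": {"summarise"}},
    "summariser-b": {"cost": 2, "skills": {"summarise"}},
    "translator-c": {"cost": 1, "skills": {"translate"}},
}

def negotiate(task_skill: str) -> str | None:
    bids = [
        (profile["cost"], name)
        for name, profile in AGENTS.items()
        if task_skill in profile["skills"]   # only capable agents bid
    ]
    return min(bids)[1] if bids else None    # lowest-cost bid wins

print(negotiate("summarise"))  # summariser-b
```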
Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.
The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.
Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.
Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.
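As a generic illustration of the runtime-inspection pattern (not Cisco AI Defense's actual API), an inspector might scan outbound tool arguments for secret-shaped strings before a call executes:

```python
import re

# Runtime inspection of agent-to-tool traffic: block a tool call when its
# payload matches secret-shaped patterns. Patterns here are examples only.

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS-style access key id
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
]

def inspect_outbound(tool_name: str, payload: str) -> str:
    for pattern in SECRET_PATTERNS:
        if pattern.search(payload):
            raise PermissionError(f"blocked {tool_name}: possible secret in payload")
    return payload  # clean payloads pass through unchanged

inspect_outbound("http_post", "weekly metrics report")     # allowed
# inspect_outbound("http_post", "AKIAABCDEFGHIJKLMNOP")    # would raise
```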
Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.
The European Commission has issued implementation guidelines for Article 18 of the European Media Freedom Act (EMFA), setting out how large platforms must protect recognised media content through self-declaration mechanisms.
Article 18 has been in effect for six months, and the guidance is intended to translate legal duties into operational steps. The European Broadcasting Union welcomed the clarification but warned that major platforms continue to delay compliance, limiting media organisations’ ability to exercise their rights.
The Commission says self-declaration mechanisms should be easy to find and use, with prominent interface features linked to media accounts. Platforms are also encouraged to actively promote the process, make it available in all EU languages, and use standardised questionnaires to reduce friction.
The guidance also recommends allowing multiple accounts in one submission, automated acknowledgements with clear contact points, and the ability to update or withdraw declarations. The aim is to improve transparency and limit unilateral moderation decisions.
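A sketch of what a standardised self-declaration record might carry, based on the features the guidance recommends; every field name here is an assumption, not the Commission's schema.

```python
from dataclasses import dataclass, field

# Hypothetical shape of a media self-declaration, reflecting the guidance:
# several accounts per submission, acknowledgement, and withdrawal.

@dataclass
class MediaSelfDeclaration:
    organisation: str
    accounts: list[str] = field(default_factory=list)  # multiple accounts per submission
    language: str = "en"                               # available in all EU languages
    status: str = "submitted"                          # submitted / acknowledged / withdrawn

    def acknowledge(self) -> None:
        self.status = "acknowledged"   # automated acknowledgement with a contact point

    def withdraw(self) -> None:
        self.status = "withdrawn"      # declarations can be updated or withdrawn

decl = MediaSelfDeclaration("Example Broadcaster", ["@example_news", "@example_sport"])
decl.acknowledge()
print(decl.status)  # acknowledged
```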
The guidelines reinforce the EMFA’s goal of rebalancing power between platforms and media organisations by curbing opaque moderation practices. The impact of EMFA will depend on enforcement and ongoing oversight to ensure platforms implement the measures in good faith.
Dutch MPs have renewed calls for companies and public services to reduce reliance on US-based cloud servers, reflecting growing concern over data security and foreign access to Dutch data.

Research by NOS found that two-thirds of essential service providers in the Netherlands rely on at least one US cloud server, with local councils, health insurers and hospitals heavily exposed.

Concerns intensified following a proposed sale of Solvinity, which manages the DigiD system used across the Netherlands. A sale to a US firm could place Dutch data under the US Cloud Act.

Parties including D66, VVD and CDA say data from critical infrastructure should be prioritised for protection, while Dutch cloud providers argue Europe could handle most services if procurement rules changed.
OpenAI has begun testing advertisements inside ChatGPT for some adult users in the US, marking a major shift for the widely used AI service.
The ads appear only on the Free and Go tiers, while paid plans remain ad free. OpenAI says responses are unaffected, though critics warn commercial messaging could blur boundaries over time.

Ads are selected based on conversation topics and prior interactions, prompting concern among privacy advocates. OpenAI says advertisers receive only aggregated data and cannot view conversations.
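A minimal sketch of what 'aggregated data only' could mean in practice: advertisers receive topic-level counts while raw conversations never leave the system. The topic labels are invented, and OpenAI's actual pipeline is not public.

```python
from collections import Counter

# Aggregated ad reporting: advertisers see topic counts, never transcripts.
conversation_topics = [
    ["travel", "budgeting"],
    ["travel"],
    ["fitness", "budgeting"],
]  # one list of inferred topics per conversation; raw text is never shared

def aggregate_for_advertisers(topics_per_chat: list[list[str]]) -> Counter:
    counts = Counter()
    for topics in topics_per_chat:
        counts.update(set(topics))     # count each topic once per conversation
    return counts

print(aggregate_for_advertisers(conversation_topics))
# Counter({'travel': 2, 'budgeting': 2, 'fitness': 1})
```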
Industry analysts say the move reflects growing pressure to monetise costly AI infrastructure. Regulators and researchers continue to debate whether advertising can coexist with trust in AI systems.
A landmark trial has begun in Los Angeles, accusing Meta and Google’s YouTube of deliberately addicting children to their platforms.
The case is part of a wider series of lawsuits across the US seeking to hold social media companies accountable for harms to young users. TikTok and Snap settled before trial, leaving Meta and YouTube to face the allegations in court.
The first bellwether case involves a 19-year-old identified as ‘KGM’, whose claims could shape thousands of similar lawsuits. Plaintiffs allege that design features were intentionally created to maximise engagement among children, borrowing techniques from slot machines and the tobacco industry.
The trial may see testimony from executives, including Meta CEO Mark Zuckerberg, and could last six to eight weeks.
Social media companies deny the allegations, emphasising existing safeguards and arguing that teen mental health is influenced by numerous factors, such as academic pressure, socioeconomic challenges and substance use, instead of social media alone.
Meta and YouTube maintain that they prioritise user safety and privacy while providing tools for parental oversight.
Similar trials are unfolding across the country. New Mexico is investigating allegations of sexual exploitation facilitated by Meta platforms, while Oakland will hear cases representing school districts.
More than 40 state attorneys general have filed lawsuits against Meta, with TikTok facing claims in over a dozen states. Outcomes could profoundly impact platform design, regulation and legal accountability for youth-focused digital services.