The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, the Czech siblings used a hybrid soundtrack blending AC/DC with an AI-generated music piece.
Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.
The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.
Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.
The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.
Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.
Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.
The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.
Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.
Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Generative AI tools saw significant uptake among young Europeans in 2025, with usage rates far outpacing the broader population. Data shows that 63.8% of individuals aged 16–24 across the EU engaged with generative AI, nearly double the 32.7% recorded among citizens aged 16–74.
Adoption patterns indicate that younger users are embedding AI into everyday routines at a faster pace. Private use led the trend, with 44.2% of young people applying generative AI in personal contexts, compared with 25.1% of the general population.
Educational deployment also stood out, reaching 39.3% among youth, while only 9.4% of the wider population reported similar academic use.
The professional application presented the narrowest gap between age groups. Around 15.8% of young users reported workplace use of generative AI tools, closely aligned with 15.1% among the overall population- a reflection of many young people still transitioning into the labour market.
Country-level data highlights notable regional differences. Greece (83.5%), Estonia (82.8%), and Czechia (78.5%) recorded the highest youth adoption rates, while Romania (44.1%), Italy (47.2%), and Poland (49.3%) ranked lowest.
The findings coincide with Safer Internet Day, observed on 10 February, underscoring the growing importance of digital literacy and online safety as AI usage accelerates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.
He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.
He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.
Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.
Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.
Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.
Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.
Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.
Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has announced a major expansion of its AI investments in Singapore, strengthening research capabilities, workforce development, and enterprise innovation as part of a long-term regional strategy.
The initiatives were unveiled at the company’s Google for Singapore event, signalling deeper alignment with the nation’s ambition to lead the AI economy.
Research and development form a central pillar of the expansion. Building on the recent launch of a Google DeepMind research lab in Singapore, the company is scaling specialised teams across software engineering, research science, and user experience design.
A new Google Cloud Singapore Engineering Centre will also support enterprises in deploying advanced AI solutions across sectors, including robotics and clean energy.
Healthcare innovation features prominently in the investment roadmap. Partnerships with AI Singapore will support national health AI infrastructure, including access to the MedGemma model to accelerate diagnostics and treatment development.
Google is also launching a security-focused AI Center of Excellence and rolling out age assurance technologies to strengthen online protections for younger users.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Research from the UK Safer Internet Centre reveals nearly all young people aged eight to 17 now use artificial intelligence tools, highlighting how deeply the technology has entered daily life. Growing adoption has also increased reliance, with many teenagers using AI regularly for schoolwork, social interactions and online searches.
Education remains one of the main uses, with students turning to AI for homework support and study assistance. However, concerns about fairness and creativity have emerged, as some pupils worry about false accusations of misuse and reduced independent thinking.
Safety fears remain significant, especially around harmful content and privacy risks linked to AI-generated images. Many teenagers and parents worry the technology could be used to create inappropriate or misleading visuals, raising questions about online protection.
Emotional and social impacts are also becoming clear, with some young people using AI for personal advice or practising communication. Limited parental guidance and growing dependence suggest governments and schools may soon consider stronger oversight and clearer rules.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
England is reforming its computing curriculum to place AI awareness and digital literacy at the centre of education. The move follows recommendations from an independent Curriculum and Assessment Review, which concluded that the current framework is too narrow for today’s digital environment and requires a stronger focus on data skills, online safety, and critical thinking.
The reform aims to modernise qualifications while strengthening the UK’s future digital talent pipeline. By embedding AI and digital competencies across the curriculum, the government seeks to equip learners with skills relevant to further study, employment, and participation in a technology-driven society.
The British Computer Society (BCS) has been appointed by the Department for Education to lead the drafting of the new Computing curriculum. The organisation will oversee revisions across key stages 1 to 5, ensuring alignment with classroom practice and developments in the wider digital profession.
A broader Computing GCSE will replace the current Computer Science GCSE, integrating technical foundations with digital literacy and responsible technology use. In addition, the government is exploring a new Level 3 qualification in Data Science and AI, with a public consultation expected later this year to shape the final reforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US AI company Anthropic’s expansion into India has triggered a legal dispute with a Bengaluru-based software firm that claims it has used the name ‘Anthropic’ since 2017. The Indian company argues that the US AI firm’s market entry has caused customer confusion. It is seeking recognition of prior use and damages of ₹10 million.
A commercial court in Karnataka has issued notice and suit summons to Anthropic but declined to grant an interim injunction. Further hearings are scheduled. The local firm says it prefers coexistence but turned to litigation due to growing marketplace confusion.
The dispute comes as India becomes a key growth market for global AI companies. Anthropic recently announced local leadership and expanded operations in the country. India’s large digital economy and upcoming AI industry events reinforce its strategic importance.
The case also highlights broader challenges linked to the rapid global expansion of AI firms. Trademark protection, brand due diligence, and regulatory clarity are increasingly central to cross-border digital market entry.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles- mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.
Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?
Source: freepik
The rise of an agent-driven social experiment
Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.
The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as a ‘Reddit for AI agents’, to a proto-science-fiction simulation of machine society.
Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy, control, and how much of this emerging machine society was real, and how much was staged.
Screenshot: Moltbook.com
How Moltbook evolved from an open-source experiment to a viral phenomenon
Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.
Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore.
The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions.
Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.
The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
Moltbook and the illusion of an autonomous AI agent society
At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.
Most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because they were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.
Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry- humans directing outputs from behind the curtain.
Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer- an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities- they are remixing ours.
Security risks beneath the spectacle of the Moltbook platform
If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook revealed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that allowed unauthenticated access to agent profiles, enabling bulk data extraction without authentication barriers.
The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems- effectively merging social interaction with operational access.
Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.
Source: Freepik
What comes next for AI agents as digital reality becomes their operating ground?
Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.
What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s move to integrate SpaceX with his AI company xAI is strengthening plans to develop data centres in orbit. Experts warn that such infrastructure could give one company or country significant control over global AI and cloud computing.
Fully competitive orbital data centres remain at least 20 years away due to launch costs, cooling limits, and radiation damage to hardware. Their viability depends heavily on Starship achieving fully reusable, low-cost launches, which remain unproven.
Interest in space computing is growing because constant solar energy could dramatically reduce AI operating costs and improve efficiency. China has already deployed satellites capable of supporting computing tasks, highlighting rising global competition.
European specialists warn that the region risks becoming dependent on US cloud providers that operate under laws such as the US Cloud Act. Without coordinated investment, control over future digital infrastructure and cybersecurity may be decided by early leaders.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!