The Olympic ice dance format combines a themed rhythm dance with a free dance. For the 2026 season, skaters must draw on 1990s music and styles. While most competitors chose recognisable tracks, one Czech sibling duo used a hybrid soundtrack blending AC/DC with an AI-generated piece.
Katerina Mrazkova and Daniel Mrazek, ice dancers from Czechia, made their Olympic debut using a rhythm dance soundtrack that included AI-generated music, a choice permitted under current competition rules but one that quickly drew attention.
The International Skating Union lists the rhythm dance music as ‘One Two by AI (of 90s style Bon Jovi)’ alongside ‘Thunderstruck’ by AC/DC. Olympic organisers confirmed the use of AI-generated material, with commentators noting the choice during the broadcast.
Criticism of the music selection extends beyond novelty. Earlier versions of the programme reportedly included AI-generated music with lyrics that closely resembled lines from well-known 1990s songs, raising concerns about originality.
The episode reflects wider tensions across creative industries, where generative tools increasingly produce outputs that closely mirror existing works. For the athletes, attention remains on performance, but questions around authorship and creative value continue to surface.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Generative AI tools saw significant uptake among young Europeans in 2025, with usage rates far outpacing the broader population. Data shows that 63.8% of individuals aged 16–24 across the EU engaged with generative AI, nearly double the 32.7% recorded among citizens aged 16–74.
Adoption patterns indicate that younger users are embedding AI into everyday routines at a faster pace. Private use led the trend, with 44.2% of young people applying generative AI in personal contexts, compared with 25.1% of the general population.
Educational deployment also stood out, reaching 39.3% among youth, while only 9.4% of the wider population reported similar academic use.
Professional use showed the narrowest gap between age groups. Around 15.8% of young users reported workplace use of generative AI tools, closely aligned with 15.1% among the overall population, reflecting that many young people are still transitioning into the labour market.
Country-level data highlights notable regional differences. Greece (83.5%), Estonia (82.8%), and Czechia (78.5%) recorded the highest youth adoption rates, while Romania (44.1%), Italy (47.2%), and Poland (49.3%) ranked lowest.
The findings coincide with Safer Internet Day, observed on 10 February, underscoring the growing importance of digital literacy and online safety as AI usage accelerates.
Google has announced a major expansion of its AI investments in Singapore, strengthening research capabilities, workforce development, and enterprise innovation as part of a long-term regional strategy.
The initiatives were unveiled at the company’s Google for Singapore event, signalling deeper alignment with the nation’s ambition to lead the AI economy.
Research and development form a central pillar of the expansion. Building on the recent launch of a Google DeepMind research lab in Singapore, the company is scaling specialised teams across software engineering, research science, and user experience design.
A new Google Cloud Singapore Engineering Centre will also support enterprises in deploying advanced AI solutions across sectors, including robotics and clean energy.
Healthcare innovation features prominently in the investment roadmap. Partnerships with AI Singapore will support national health AI infrastructure, including access to the MedGemma model to accelerate diagnostics and treatment development.
Google is also launching a security-focused AI Center of Excellence and rolling out age assurance technologies to strengthen online protections for younger users.
Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, it circulated mostly within tech circles: mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.
Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?
The rise of an agent-driven social experiment
Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.
The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.
Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy and control, and about how much of this emerging machine society was real and how much was staged.
How Moltbook evolved from an open-source experiment to a viral phenomenon
Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.
Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore.
The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions.
Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.
The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
Moltbook and the illusion of an autonomous AI agent society
At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.
In reality, most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because the underlying models were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.
Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.
Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.
Security risks beneath the spectacle of the Moltbook platform
If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that left agent profiles accessible without authentication, enabling bulk data extraction.
The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.
What comes next for AI agents as digital reality becomes their operating ground?
Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.
What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.
Commentary from CIO leadership highlights that many organisations investing in agentic AI (autonomous AI agents designed to execute complex, multi-step tasks) encounter disappointing results when deployments focus solely on outcomes such as speed or cost savings without addressing underlying system-design challenges.
The so-called ‘friction tax’ arises from siloed data, disjointed workflows and tools that force employees to act as manual connectors between systems, negating much of the theoretical efficiency AI promises.
The author proposes an ‘architecture of flow’ as a solution, in which context is unified across systems and AI agents operate on shared data and protocols, enabling work to move seamlessly between functions without bottlenecks.
This approach prioritises employee experience and customer value, enabling context-rich automation that reduces repetitive work and improves user satisfaction.
Key elements of such an architecture include universal context layers (e.g. standard protocols for data sharing) and agentic orchestration mechanisms that help specialised AI agents communicate and coordinate tasks across complex workflows.
When implemented effectively, this reduces cognitive load, strengthens adoption, and makes business growth a natural result of friction-free operations.
The article reflects on the growing integration of AI into daily life, from classrooms to work, and asks whether this shift is making people intellectually sharper or more dependent on machines.
Tools such as ChatGPT, Grok and Perplexity have moved from optional assistants to everyday aids that generate instant answers, summaries and explanations, reducing the time and effort traditionally required for research and deep thinking.
While quantifiable productivity gains are clear, the piece highlights trade-offs: readily available answers can diminish the cognitive struggle that builds critical thinking, problem-solving and independent reasoning.
In education, easy AI responses may weaken students’ engagement in learning unless teachers guide their use responsibly. Some respondents point to creativity and conceptual understanding eroding when AI is used as a shortcut, while others see it as a democratising tutor that supports learners who otherwise lack resources.
The article also incorporates perspectives from AI systems themselves, which generally frame AI as neither inherently making people smarter nor dumber, but dependent on how it’s used.
It concludes that the impact of AI on human cognition is not predetermined by the technology, but shaped by user choice: whether AI is a partner that augments thinking or a crutch that replaces it.
Advertising inside ChatGPT marks a shift in where commercial messages appear, not a break from how advertising works. AI systems have shaped search, social media, and recommendations for years, but conversational interfaces make those decisions more visible during moments of exploration.
Unlike search or social formats, conversational advertising operates inside dialogue. Ads appear because users are already asking questions or seeking clarity. Relevance is built through context rather than keywords, changing when information is encountered rather than how decisions are made.
In healthcare and clinical research, this distinction matters. Conversational ads cannot enrol patients directly, but they may raise awareness earlier in patient journeys and shape later discussions with clinicians and care providers.
Early rollout will be limited to free or low-cost ChatGPT tiers, likely skewing exposure towards patients and caregivers. As with earlier platforms, sensitive categories may remain restricted until governance and safeguards mature.
The main risks are organisational rather than technical. New channels will not fix unclear value propositions or operational bottlenecks. Conversational advertising changes visibility, not fundamentals, and success will depend on responsible integration.
The International Federation of Robotics says AI is accelerating the move of robots from research labs into real-world use. A new position paper highlights rapid adoption across multiple industries as AI becomes a core enabler.
Logistics, manufacturing and services are leading AI-driven robotics deployment. Warehousing and supply chains benefit from controlled environments, while factories use AI to improve efficiency, quality and precision in sectors including automotive and electronics.
The IFR said service robots are expanding as labour shortages persist, with restaurants and hospitality testing AI-enabled machines. Hybrid models are emerging where robots handle repetitive work while humans focus on customer interaction.
Investment is rising globally, with major commitments in the US, Europe and China. The IFR expects AI to improve returns on robotics investment over the next decade through lower costs and higher productivity.
Europe’s mobile app market in 2025 revealed a distinct divergence between popularity and revenue. AI-driven productivity apps, such as ChatGPT and Google Gemini, dominated downloads, alongside shopping platforms including Temu, SHEIN, and Vinted.
While installs highlight user preferences, active use and monetisation patterns tell a very different story.
Downloads for the top apps show ChatGPT leading with over 64 million, followed by Temu with nearly 44 million. Other widely downloaded apps included Threads, TikTok, CapCut, WhatsApp, Revolut and Lidl Plus.
The prevalence of AI and shopping apps underscores the shift of tools from professional use to everyday tasks, as Europeans increasingly rely on digital services for work, study and leisure.
Revenue patterns diverge sharply from download rankings. TikTok generated €740 million, followed by ChatGPT at €448 million and Tinder at €429 million. Subscription-based and premium-feature apps, including Disney+, Amazon Prime, Google One and YouTube, also rank highly.
In-app spending, rather than download numbers, drives earnings, revealing the importance of monetisation strategies beyond pure popularity.
Regional trends emphasise local priorities. The UK favours domestic finance and public service apps such as Monzo, Tesco, GOV.UK ID Check and HMRC, while Turkey shows strong use of national government, telecom and e-commerce apps, including e-Devlet Kapısı, Turkcell and Trendyol.
These variations highlight how app consumption reflects cultural preferences and the role of domestic services in digital life.
Researchers at the University of Oklahoma have developed a machine-learning model that could significantly speed up the manufacturing of monoclonal antibodies, a fast-growing class of therapies used to treat cancer, autoimmune disorders, and other diseases.
The study, published in Communications Engineering, targets delays in selecting high-performing cell lines during antibody production. Output varies widely between Chinese hamster ovary cell clones, forcing manufacturers to spend weeks screening for high yields.
By analysing early growth data, the researchers trained a model to predict antibody productivity far earlier in the process. Using only the first 9 days of data, it forecast production trends through day 16 and identified higher-performing clones in more than 76% of tests.
The model was developed with Oklahoma-based contract manufacturer Wheeler Bio, combining production data with established growth equations. Although further validation is needed, early results suggest shorter timelines and lower manufacturing costs.
The work forms part of a wider US-funded programme to strengthen biotechnology manufacturing capacity, highlighting how AI is being applied to practical industrial bottlenecks rather than solely to laboratory experimentation.
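The core idea, ranking candidate cell lines by extrapolating from an early observation window, can be illustrated with a toy sketch. This is not the Oklahoma team's published model: the clone names, growth rates, and simple exponential-growth assumption below are all synthetic, chosen only to show how a fit on the first 9 days can predict a day-16 ranking.

```python
import numpy as np

rng = np.random.default_rng(0)
days_early = np.arange(1, 10)  # the first 9 days of measurements
day_final = 16                 # the day we want to forecast

# Hypothetical clones with different (unknown to the model) growth rates.
true_rates = {"clone_A": 0.30, "clone_B": 0.22, "clone_C": 0.26}

def simulate_titres(rate, days):
    """Synthetic titre curve: exponential growth plus mild measurement noise."""
    return np.exp(rate * days) * (1 + 0.02 * rng.standard_normal(len(days)))

predicted_final = {}
for name, rate in true_rates.items():
    titres = simulate_titres(rate, days_early)
    # Fit a log-linear growth model on the early window only...
    slope, intercept = np.polyfit(days_early, np.log(titres), 1)
    # ...then extrapolate to the final day.
    predicted_final[name] = np.exp(intercept + slope * day_final)

# Rank clones by their predicted day-16 titre.
ranking = sorted(predicted_final, key=predicted_final.get, reverse=True)
print(ranking)
```

Here the fastest-growing clone tops the predicted ranking without ever being cultured to day 16, which is the kind of screening shortcut the study describes.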