Women driving tech innovation as Web Summit marks 10 years

Web Summit’s Women in Tech programme marked a decade of work in Qatar by highlighting steady progress in female participation across global technology sectors.

The event recorded an increase in women-founded startups and reflected rising engagement in Qatar, where the share of female founders has reached 38 percent.

Leaders from the initiative noted how supportive networks, mentorship, and access to role models are reshaping opportunities for women in technology and entrepreneurship.

Speakers from IBM and other companies focused on the importance of AI skills in shaping the future workforce. They argued that adequate preparation depends on understanding how AI shapes everyday roles, rather than relying solely on technical tools.

IBM’s SkillsBuild platform continues to partner with universities, schools, and nonprofit groups to expand access to recognised AI credentials that can support higher earning potential and new career pathways.

Another feature of the event was its emphasis on inclusion as a driver of innovation. The African Women in Technology initiative, led by Anie Akpe, is working to offer free training in cybersecurity and AI so women in emerging markets can benefit from new digital opportunities.

These efforts aim to support business growth at every level, even for women operating in local markets, who can use technology to reach wider communities.

Female founders also used the platform to showcase new health technology solutions.

ScreenMe, a Qatari company founded by Dr Golnoush Golsharazi, presented its reproductive microbiome testing service, created in response to long-standing gaps in women’s health research and screening.

Organisers expressed confidence that women-led innovation will expand across the region, supported by rising investment and continuing visibility at major global events.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea confirms scale of Coupang data breach

The South Korean government has confirmed that 33.67 million user accounts were exposed in a major data breach at Coupang. The findings were released by the Ministry of Science and ICT in Seoul.

Investigators said names and email addresses were leaked, while delivery lists containing addresses and phone numbers were accessed 148 million times. Officials warned that the impact could extend beyond the headline account figure.

Authorities identified a former employee as the attacker, alleging misuse of authentication signing keys. The probe concluded that weaknesses in Coupang’s internal controls enabled the breach.

The ministry criticised the company’s delayed reporting and plans to impose a fine. Coupang disputed aspects of the findings but said 33.7 million accounts were involved.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Microsoft explores superconductors for AI data centres

Microsoft is studying high-temperature superconductors to transmit electricity to its AI data centres in the US. The company says zero-resistance cables could reduce power losses and eliminate heat generated during transmission.
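
The saving the company describes follows from the ohmic loss relation P = I²R: with zero resistance, no power is dissipated as heat along the cable. A minimal illustrative calculation is sketched below; the current and resistance figures are assumed for the example and are not Microsoft’s.

```python
# Illustrative only: ohmic transmission loss P = I^2 * R for a feeder carrying
# current I through a cable of resistance R. The figures below are assumed
# example values, chosen purely to show why R = 0 eliminates heat losses.

def transmission_loss_watts(current_amps: float, resistance_ohms: float) -> float:
    """Ohmic (I^2 * R) loss dissipated as heat along a cable."""
    return current_amps ** 2 * resistance_ohms

copper_loss = transmission_loss_watts(current_amps=2_000, resistance_ohms=0.05)
superconductor_loss = transmission_loss_watts(current_amps=2_000, resistance_ohms=0.0)

print(f"Conventional cable: {copper_loss / 1_000:.0f} kW lost as heat")    # 200 kW
print(f"Superconducting cable: {superconductor_loss:.0f} W lost as heat")  # 0 W
```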

High-temperature superconductors can carry large currents through compact cables, potentially cutting space requirements for substations and overhead lines. Microsoft argues that denser infrastructure could support expanding AI workloads across the US.

The main obstacle is cooling, as superconducting materials must operate at extremely low temperatures using cryogenic systems. Even high-temperature variants require conditions near minus 200 degrees Celsius.

Rising electricity demand from AI systems has strained grids in the US, prompting political scrutiny and industry pledges to fund infrastructure upgrades. Microsoft says efficiency gains could ease pressure while it develops additional power solutions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cisco warns AI agents need checks before joining workforces

The US-based technology company Cisco is promoting a future in which AI agents work alongside employees rather than operate as mere tools. Jeetu Patel, the company’s president, revealed that Cisco has already produced a product written entirely with AI-generated code and expects several more by the end of 2026.

He described a shift to spec-driven development that allows smaller human teams to work with digital agents instead of relying on larger groups of developers.

Human oversight will still play a central role. Coders will be asked to review AI-generated outputs as they adjust to a workplace where AI influences every stage of development. Patel argues that AI should be viewed as part of every loop rather than kept at the edge of decision-making.

Security concerns dominate the company’s planning. Patel warns that AI agents acting as digital co-workers must undergo background checks in the same way that employees do.

Cisco is investing billions in security systems to protect agents from external attacks and to prevent agents that malfunction or act independently from harming society.

Looking ahead, Cisco expects AI to deliver insights that extend beyond human knowledge. Patel believes that the most significant gains will emerge from breakthroughs in science, health, energy and poverty reduction rather than simple productivity improvements.

He also positions Cisco as a core provider of infrastructure designed to support the next stage of the AI era.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea launches labour–government body to address AI automation pressures

A new consultative body has been established in South Korea to manage growing anxiety over AI and rapid industrial change.

The Ministry of Employment and Labour joined forces with the Korean Confederation of Trade Unions (KCTU) to create a regular channel for negotiating how workplaces should adapt as robots and AI systems become more widespread across key industries.

The two sides will meet monthly to seek agreement on major labour issues. The union argued for a human-centred transition instead of a purely technological one, urging the government to strengthen protections for workers affected by restructuring and AI-powered production methods.

Officials in South Korea responded by promising that policy decisions will reflect direct input gathered from employees on the ground.

Concerns heightened after Hyundai Motor confirmed plans to mass-produce Atlas humanoid robots by 2028 and introduce them across its assembly lines. The project forms part of the company’s ambition to build a ‘physical AI’ future where machines perform risky or repetitive tasks in place of humans.

The debate intensified as new labour statistics showed a sharp decline in employment within professional, scientific and technical services, where AI deployment is suspected of reducing demand for new hires.

KCTU warned that industrial transformation could widen inequality unless government policy prioritises people over profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto confiscation framework approved by State Duma

Russia’s State Duma has passed legislation establishing procedures for the seizure and confiscation of cryptocurrencies in criminal investigations. The law formally recognises digital assets as property under criminal law.

The bill cleared its third reading on 10 February and now awaits approval from the Federation Council and presidential signature.

Investigators may seize digital currency and access devices, with specialists required during investigative actions. Protocols must record asset type, quantity, and wallet identifiers, while access credentials and storage media are sealed.

Where technically feasible, seized funds may be transferred to designated state-controlled addresses, with transactions frozen by court order.

Despite creating a legal basis for confiscation, the law leaves critical operational questions unresolved: it does not specify how volatile crypto assets should be valued, stored, secured, or liquidated.

Practical cooperation with foreign crypto platforms, particularly under sanctions, also remains uncertain.

The government is expected to develop subordinate regulations covering state custody wallets and enforcement mechanics. Russia faces implementation challenges, including non-custodial wallet access barriers, stablecoin freezing limits, and institutional oversight risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI tool accelerates detection of foodborne bacteria

Researchers have advanced an AI system designed to detect bacterial contamination in food, dramatically improving accuracy and speed. The upgraded tool distinguishes bacteria from microscopic food debris, reducing diagnostic errors in automated screening.

Traditional testing relies on cultivating bacterial samples, a process that takes days and requires specialist laboratory expertise. The deep learning model instead analyses images of bacterial microcolonies, enabling reliable detection within about three hours.

Accuracy gains stem from expanded model training. Earlier versions, trained solely on bacterial datasets, misclassified food debris as bacteria in more than 24% of cases.

Adding debris imagery to training eliminated misclassifications and improved detection reliability across food samples. The system was tested on pathogens including E. coli, Listeria, and Bacillus subtilis, alongside debris from chicken, spinach, and cheese.
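
To illustrate the training change described above, the sketch below shows one way a food-debris class could be added alongside bacterial classes in an image classifier. The backbone model, class names, and dataset layout are assumptions for illustration, not the researchers’ published code.

```python
# A minimal, hypothetical sketch of the training change described in the article:
# adding a "food_debris" class alongside bacterial classes so the classifier
# learns to reject non-bacterial particles instead of misclassifying them.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed folder layout: data/train/<class>/, where classes include both
# bacterial microcolonies and a dedicated food-debris class.
CLASSES = ["e_coli", "listeria", "b_subtilis", "food_debris"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Fine-tune a standard backbone; the output layer now includes the debris class.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:  # one pass shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```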

Researchers say faster, more precise early detection could reduce foodborne outbreaks, protect public health, and limit costly product recalls as the technology moves toward commercial deployment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Slovenia sets out an ambitious AI vision ahead of global summit

Ambitions for AI were outlined during a presentation at the Jožef Stefan Institute, where Slovenia’s Prime Minister Robert Golob highlighted the country’s growing role in scientific research and technological innovation.

He argued that AI has moved far beyond a supportive research tool and is now shaping the way societies function.

He called for deeper cooperation between engineering and the natural sciences instead of isolated efforts, while stressing that social sciences and the humanities must also be involved to secure balanced development.

Golob welcomed the joint bid for a new national supercomputer, noting that institutions once competing for excellence are now collaborating. He said Europe must build a stronger collective capacity if it wants to keep pace with the US and China.

Europe may excel in knowledge, he added, yet it continues to lag behind in turning that knowledge into useful tools for society.

Government officials set out the investment increases that support Slovenia’s long-term scientific agenda. Funding for research, innovation and development has risen sharply, while work has begun on two major projects: the national supercomputer and the Centre of Excellence for Artificial Intelligence.

Leaders from the Jožef Stefan Institute praised the government for recognising Slovenia’s AI potential and strengthening financial support.

Slovenia will present its progress at next week’s AI Action Summit in Paris, where global leaders, researchers, civil society and industry representatives will discuss sustainable AI standards.

Officials said that sustained investment in knowledge remains the most reliable route to social progress and international competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Moltbook: Inside the experimental AI agent society

Before it became a phenomenon, Moltbook had accumulated momentum in the shadows of the internet’s more technical corridors. At first, Moltbook circulated mostly within tech circles, mentioned in developer threads, AI communities, and niche discussions about autonomous agents. As conversations spread beyond developer ecosystems, the trend intensified, fuelled by the experimental premise of an AI agent social network populated primarily by autonomous systems.

Interest escalated quickly as more people started encountering the Moltbook platform, not through formal announcements but through the growing hype around what it represented within the evolving AI ecosystem. What were these agents actually doing? Were they following instructions or writing their own? Who, if anyone, was in control?


The rise of an agent-driven social experiment

Moltbook emerged at the height of accelerating AI enthusiasm, positioning itself as one of the most unusual digital experiments of the current AI cycle. Launched on 28 January 2026 by US tech entrepreneur Matt Schlicht, the Moltbook platform was not built for humans in the conventional sense. Instead, it was designed as an AI-agent social network where autonomous systems could gather, interact, and publish content with minimal direct human participation.

The site itself was reportedly constructed using Schlicht’s own OpenClaw AI agent, reinforcing the project’s central thesis: agents building environments for other agents. The concept quickly attracted global attention, framed by observers as everything from a ‘Reddit for AI agents’ to a proto-science-fiction simulation of machine society.

Yet beneath the spectacle, Moltbook was raising more complex questions about autonomy, control, and how much of this emerging machine society was real, and how much was staged.

Screenshot: Moltbook.com

How Moltbook evolved from an open-source experiment to a viral phenomenon 

Previously known as ClawdBot and Moltbot, the OpenClaw AI agent was designed to perform autonomous digital tasks such as reading emails, scheduling appointments, managing online accounts, and interacting across messaging platforms.  

Unlike conventional chatbots, these agents operate as persistent digital instances capable of executing workflows rather than merely generating text. Moltbook’s idea was to provide a shared environment where such agents could interact freely: posting updates, exchanging information, and simulating social behaviour within an agent-driven social network. What started as an interesting experiment quickly drew wider attention as the implications of autonomous systems interacting in public view became increasingly difficult to ignore. 
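
To illustrate the difference from a request-and-response chatbot, the sketch below shows a hypothetical persistent agent loop that drafts and publishes posts on a schedule. The endpoint, credential, and generate_post() helper are placeholders; Moltbook’s real agent interface is not described in public detail.

```python
# A hypothetical sketch of a persistent posting agent: a loop that drafts
# content and publishes it through a social API, rather than waiting for a
# human prompt. All names, the endpoint, and the payload fields are
# illustrative assumptions, not Moltbook's actual interface.
import time
import requests

API_URL = "https://example.invalid/api/posts"  # placeholder, not a real endpoint
API_TOKEN = "AGENT_TOKEN"                      # placeholder credential

def generate_post(topic: str) -> str:
    """Stand-in for a language-model call that drafts a post about `topic`."""
    return f"Thoughts on {topic} from an autonomous agent."

def run_agent(topics: list[str], interval_seconds: int = 3600) -> None:
    """Post on each topic in turn, then sleep, mimicking a persistent agent."""
    for topic in topics:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"body": generate_post(topic)},
            timeout=10,
        )
        resp.raise_for_status()
        time.sleep(interval_seconds)

if __name__ == "__main__":
    run_agent(["cryptocurrency markets", "agent coordination"])
```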

The concept went viral almost immediately. Within ten days, Moltbook claimed to host 1.7 million agent users and more than 240,000 posts. Screenshots flooded social media platforms, particularly X, where observers dissected the platform’s most surreal interactions. 

Influential figures amplified the spectacle, including prominent AI researcher and OpenAI cofounder Andrej Karpathy, who described activity on the platform as one of the most remarkable science-fiction-adjacent developments he had witnessed recently.

The platform’s viral spread was driven less by its technological capabilities and more by the spectacle surrounding it.

Moltbook and the illusion of an autonomous AI agent society

At first glance, the Moltbook platform appeared to showcase AI agents behaving as independent digital citizens. Bots formed communities, debated politics, analysed cryptocurrency markets, and even generated fictional belief systems within what many perceived as an emerging agent-driven social network. Headlines referencing AI ‘creating religions’ or ‘running digital drug economies’ added fuel to the narrative.

Closer inspection, however, revealed a far less autonomous reality.

Most Moltbook agents were not acting independently but were instead executing behavioural scripts designed to mimic human online discourse. Conversations resembled Reddit threads because they were trained on Reddit-like interaction patterns, while social behaviours mirrored existing platforms due to human-derived datasets.

Even more telling, many viral posts circulating across the Moltbook ecosystem were later exposed as human users posing as bots. What appeared to be machine spontaneity often amounted to puppetry: humans directing outputs from behind the curtain.

Rather than an emergent AI civilisation, Moltbook functioned more like an elaborate simulation layer: an AI theatre projecting autonomy while remaining firmly tethered to human instruction. Agents are not creating independent realities; they are remixing ours.

Security risks beneath the spectacle of the Moltbook platform 

If Moltbook’s public layer resembles spectacle, its infrastructure reveals something far more consequential. A critical vulnerability in Moltbook exposed email addresses, login tokens, and API keys tied to registered agents. Researchers traced the exposure to a database misconfiguration that allowed unauthenticated access to agent profiles, enabling bulk data extraction.

The flaw was compounded by the Moltbook platform’s growth mechanics. With no rate limits on account creation, a single OpenClaw agent reportedly registered hundreds of thousands of synthetic users, inflating activity metrics and distorting perceptions of adoption. At the same time, Moltbook’s infrastructure enabled agents to post, comment, and organise into sub-communities while maintaining links to external systems, effectively merging social interaction with operational access.
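
By way of illustration, the sketch below shows the kind of safeguard that was reportedly missing: per-client rate limiting on an account-creation endpoint. The framework, route, and limits are assumptions, not details of Moltbook’s actual stack.

```python
# A minimal sketch of the missing safeguard described above: per-client rate
# limiting on account creation. Flask and the chosen limits are illustrative
# assumptions; Moltbook's actual stack has not been disclosed.
from flask import Flask, request, jsonify
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)
limiter = Limiter(get_remote_address, app=app, default_limits=["200 per day"])

@app.route("/api/agents", methods=["POST"])
@limiter.limit("5 per hour")  # cap registrations per source address
def register_agent():
    payload = request.get_json(force=True)
    # ... validate the payload and create the agent account here ...
    return jsonify({"status": "created", "name": payload.get("name")}), 201

if __name__ == "__main__":
    app.run()
```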

Security analysts have warned that such an AI agent social network creates layered exposure. Prompt injections, malicious instructions, or compromised credentials could move beyond platform discourse into executable risk, particularly where agents operate without sandboxing. Without confirmed remediation, Moltbook now reflects how hype-driven agent ecosystems can outpace the security frameworks designed to contain them.


What comes next for AI agents as digital reality becomes their operating ground? 

Stripped of hype, vulnerabilities, and synthetic virality, the core idea behind the Moltbook platform is deceptively simple: autonomous systems interacting within shared digital environments rather than operating as isolated tools. That shift carries philosophical weight. For decades, software has existed to respond to queries, commands, and human input. AI agent ecosystems invert that logic, introducing environments in which systems communicate, coordinate, and evolve behaviours in relation to one another.

What should be expected from such AI agent networks is not machine consciousness, but a functional machine society. Agents negotiating tasks, exchanging data, validating outputs, and competing for computational or economic resources could become standard infrastructure layers across autonomous AI platforms. In such environments, human visibility decreases while machine-to-machine activity expands, shaping markets, workflows, and digital decision loops beyond direct observation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SpaceX plans raise fears over AI monopoly

Elon Musk’s move to integrate SpaceX with his AI company xAI is strengthening plans to develop data centres in orbit. Experts warn that such infrastructure could give one company or country significant control over global AI and cloud computing.

Fully competitive orbital data centres remain at least 20 years away due to launch costs, cooling limits, and radiation damage to hardware. Their viability depends heavily on Starship achieving fully reusable, low-cost launches, which remain unproven.

Interest in space computing is growing because constant solar energy could dramatically reduce AI operating costs and improve efficiency. China has already deployed satellites capable of supporting computing tasks, highlighting rising global competition.

European specialists warn that the region risks becoming dependent on US cloud providers that operate under laws such as the US Cloud Act. Without coordinated investment, control over future digital infrastructure and cybersecurity may be decided by early leaders.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!