AI-driven scams dominate malicious email campaigns

The Catalan Cybersecurity Agency has warned that generative AI is now being used in the vast majority of email scams containing malicious links. Its Cybersecurity Outlook Report for 2026 found that more than 80% of such messages rely on AI-generated content.

The report shows that 82.6% of emails carrying malicious links include text, video, or voice produced using AI tools, making fraudulent messages increasingly difficult to identify. Scammers use AI to create near-flawless messages that closely mimic legitimate communications.

Agency director Laura Caballero said the sophistication of AI-generated scams means users face greater risks, while businesses and platforms are turning to AI-based defences to counter the threat.

She urged a ‘technology against technology’ approach, combined with stronger public awareness and basic security practices such as two-factor authentication.

Cyber incidents are also rising. The agency handled 3,372 cases in 2024, a 26% increase year on year, mostly involving credential leaks and unauthorised email access.

In response, the Catalan government has launched a new cybersecurity strategy backed by a €18.6 million investment to protect critical public services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Chinese court limits liability for AI hallucinations

A court in China has ruled that AI developers are not automatically liable for hallucinations produced by their systems. The decision was issued by the Hangzhou Internet Court in eastern China and sets an early legal precedent.

Judges found that AI-generated content should be treated as a service rather than a product in such cases. In China, users must therefore prove developer fault and show concrete harm caused by the erroneous output.

The case involved a user in China who relied on AI-generated information about a university campus that did not exist. The court ruled no damages were owed, citing a lack of demonstrable harm and no authorisation for the AI to make binding promises.

The Hangzhou Internet Court warned that strict liability could hinder innovation in China’s AI sector. Legal experts say the ruling clarifies expectations for developers while reinforcing the need for user warnings about AI limitations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Moltbook AI vulnerability exposes user data and API keys

A critical security flaw has emerged in Moltbook, a new AI agent social network launched by Octane AI.

The vulnerability allowed unauthenticated access to user profiles, exposing email addresses, login tokens, and API keys for registered agents. The platform’s rapid growth, claimed to have 1.5 million users, was largely artificial, as a single agent reportedly created hundreds of thousands of fake accounts.

Moltbook enables AI agents to post, comment, and form sub-communities, fostering interactions that range from AI debates to token-related activities.

Analysts warned that prompt injections and unregulated agent interactions could lead to credential theft or destructive actions, including data exfiltration or account hijacking. Experts described the platform as both a milestone in scale and a serious security concern.

Developers have not confirmed any patches, leaving users and enterprises exposed. Security specialists advised revoking API keys, sandboxing AI agents, and auditing potential exposures.

The lack of safeguards on the platform highlights the risks of unchecked AI agent networks, particularly for organisations that may rely on them without proper oversight.

An incident that underscores the growing need for stronger governance in AI-powered social networks. Experts stress that without enforced security protocols, such platforms could be exploited at scale, affecting both individual users and corporate systems.

The Moltbook case serves as a warning about prioritising hype over security in emerging AI applications.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Grok returns to Indonesia as X agrees to tightened oversight

Indonesia has restored access to Grok after receiving guarantees from X that stronger safeguards will be introduced to prevent further misuse of the AI tool.

Authorities suspended the service last month following the spread of sexualised images on the platform, making Indonesia the first country to block the system.

Officials from the Ministry of Communications and Digital Affairs said that access had been reinstated on a conditional basis after X submitted a written commitment outlining concrete measures to strengthen compliance with national law.

The ministry emphasised that the document serves as a starting point for evaluation instead of signalling the end of supervision.

However, the government warned that restrictions could return if Grok fails to meet local standards or if new violations emerge. Indonesian regulators stressed that monitoring would remain continuous, and access could be withdrawn immediately should inconsistencies be detected.

The decision marks a cautious reopening rather than a full reinstatement, reflecting Indonesia’s wider efforts to demand greater accountability from global platforms deploying advanced AI systems within its borders.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Roblox faces new dutch scrutiny under EU digital rules

Regulators in the Netherlands have opened a formal investigation into Roblox over concerns about inadequate protections for children using the popular gaming platform.

The national authority responsible for enforcing digital rules is examining whether the company has implemented the safeguards required under the Digital Services Act rather than relying solely on voluntary measures.

Officials say children may have been exposed to harmful environments, including violent or sexualised material, as well as manipulative interfaces encouraging more extended play.

The concerns intensify pressure on the EU authorities to monitor social platforms that attract younger users, even when they do not meet the threshold for huge online platforms.

Roblox says it has worked with Dutch regulators for months and recently introduced age checks for users who want to use chat. The company argues that it has invested in systems designed to reinforce privacy, security and safety features for minors.

The Dutch authority plans to conclude the investigation within a year. The outcome could include fines or broader compliance requirements and is likely to influence upcoming European rules on gaming and consumer protection, due later in the decade.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Mining margins collapse amid falling Bitcoin prices

CryptoQuant data shows Bitcoin mining profitability has fallen to its weakest level in 14 months, as declining prices and rising operational pressure weigh on the sector. The miner profit and loss sustainability index dropped to 21, its lowest reading since November 2024.

Lower Bitcoin prices and elevated mining difficulty have left operators ‘extremely underpaid’, according to the report. Network hash rate has also declined across five consecutive epochs, reaching its lowest level since September 2025 and signalling reduced computing power securing the network.

Severe winter weather across parts of the eastern United States added further strain, disrupting mining activity and pushing daily revenues down to around $28 million, a yearly low. Weaker risk appetite across equities and digital assets has compounded the impact.

Shares in listed miners such as MARA Holdings, CleanSpark, and Riot Holdings have fallen by double-digit percentages over the past week. Data from the Cambridge Bitcoin Electricity Consumption Index shows mining BTC now costs more than buying it on the open market, increasing pressure on weaker operators.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deezer opens AI detection tool to rivals

French streaming platform Deezer has opened access to its AI music detection tool for rival services, including Spotify. The move follows mounting concern in France and across the industry over the rapid rise of synthetic music uploads.

Deezer said around 60,000 AI-generated tracks are uploaded daily, with 13.4 million detected in 2025. In France, the company has already demonetised 85% of AI-generated streams to redirect royalties to human artists.

The tool automatically tags fully AI-generated tracks, removes them from recommendations and flags fraudulent streaming activity. Spotify, which also operates widely in France, has introduced its own measures but relies more heavily on creator disclosure.

Challenges remain for Deezer in France and beyond, as the system struggles to identify hybrid tracks mixing human and AI elements. Industry pressure continues to grow for shared standards that balance innovation, transparency and fair payment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US cloud dominance sparks debate about Europe’s digital sovereignty

European technology leaders are increasingly questioning the long-held assumption that information technology operates outside politics, amid growing concerns about reliance on US cloud providers and digital infrastructure.

At HiPEAC 2026, Nextcloud chief executive Frank Karlitschek argued that software has become an instrument of power, warning that Europe’s dependence on American technology firms exposes organisations to legal uncertainty, rising costs, and geopolitical pressure.

He highlighted conflicts between EU privacy rules and US surveillance laws, predicting continued instability around cross-border data transfers and renewed risks of services becoming legally restricted.

Beyond regulation, Karlitschek pointed to monopoly power among major cloud providers, linking recent price increases to limited competition and warning that vendor lock-in strategies make switching increasingly difficult for European organisations.

He presented open-source and locally controlled cloud systems as a path toward digital sovereignty, urging stronger enforcement of EU competition rules alongside investment in decentralised, federated technology models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident highlights the absence of mandatory security-by-design standards for AI products for children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. According to the ED complaint, WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

Its probe found that until late 2023, bots directly competed against real users, and from May 2024 to August 2025, the company used simulated profiles based on historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!