Eurofiber France reportedly hit by data breach

Eurofiber France has suffered a data breach affecting its internal ticket management system and ATE customer portal, reportedly discovered on 13 November. The incident allegedly involved unauthorised access via a software vulnerability, with the full extent still unclear.

Sources indicate that approximately 3,600 customers could be affected, including major French companies and public institutions. Reports suggest that some of the allegedly stolen data, ranging from documents to cloud configurations, may have appeared on the dark web for sale.

Eurofiber has emphasised that Dutch operations are not affected.

The company moved quickly to secure affected systems, increasing monitoring and collaborating with cybersecurity specialists to investigate the incident. The French privacy regulator, CNIL, has been informed, and Eurofiber states that it will continue to update customers as the investigation progresses.

Founded in 2000, Eurofiber provides fibre optic infrastructure across the Netherlands, Belgium, France, and Germany. Primarily owned by Antin Infrastructure Partners and partially by Dutch pension fund PGGM, the company remains operational while assessing the impact of the breach.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Teenagers still face harmful content despite new protections

In the UK and other countries, teenagers continue to encounter harmful social media content, including posts about bullying, suicide and weapons, despite the Online Safety Act coming into effect in July.

A BBC investigation using test profiles revealed that some platforms continue to expose young users to concerning material, particularly on TikTok and YouTube.

The experiment, conducted with six fictional accounts aged 13 to 15, revealed differences in exposure between boys and girls.

While Instagram showed marked improvement, with no harmful content displayed during the latest test, TikTok users were repeatedly served posts about self-harm and abuse, and one YouTube profile encountered videos featuring weapons and animal harm.

Experts warned that changes will take time and urged parents to monitor their children’s online activity actively. They also recommended open conversations about content, the use of parental controls, and vigilance rather than relying solely on the new regulatory codes.


China targets deepfake livestreams of public figures

Chinese cyberspace authorities announced a crackdown on AI deepfakes impersonating public figures in livestream shopping. Regulators said platforms have removed thousands of posts and sanctioned numerous accounts for misleading users.

Officials urged platforms to conduct cleanups and hold marketers accountable for deceptive promotions. Reported actions include removing over 8,700 items and dealing with more than 11,000 impersonation accounts.

Measures build on wider campaigns against AI misuse, including rules targeting deep synthesis and labelling obligations. Earlier efforts focused on curbing rumours, impersonation and harmful content across short videos and e-commerce.

Chinese authorities pledged a continued high-pressure stance to safeguard consumers and protect celebrity likenesses online. Platforms risk penalties if complaint handling and takedowns fail to deter repeat infringements in livestream commerce.


Agentic AI drives a new identity security crisis

New research from Rubrik Zero Labs warns that agentic AI is reshaping the identity landscape faster than organisations can secure it.

The study reveals a surge in non-human identities created through automation and API-driven workflows, with machine identities now outnumbering human users by a striking margin.

Most firms have already introduced AI agents into their identity systems or plan to do so, yet many struggle to govern the growing volume of machine credentials.

Experts argue that identity has become the primary attack surface as remote work, cloud adoption and AI expansion remove traditional boundaries. Threat actors increasingly rely on valid credentials instead of technical exploits, which makes weaknesses in identity governance far more damaging.

Rubrik’s researchers and external analysts agree that a single compromised key or forgotten agent account can provide broad access to sensitive environments.

Industry specialists highlight that agentic AI disrupts established IAM practices by blurring distinctions between human and machine activity.

Organisations often cannot determine whether a human or an automated agent performed a critical action, which undermines incident investigations and weakens zero-trust strategies. Poor logging, weak lifecycle controls and abandoned machine identities further expand the attack surface.
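The attribution gap described above can be sketched in miniature. The following is a hypothetical audit-logging pattern, not drawn from the Rubrik report, in which every recorded action carries an explicit identity type so that investigators can later separate human activity from agentic or machine activity. All identifiers are illustrative.

```python
# Illustrative sketch only: structured audit records that tag each action
# with an identity type. Names and actions are hypothetical examples.
import json
from datetime import datetime, timezone

def audit_event(actor_id: str, actor_type: str, action: str) -> str:
    """Emit one structured audit record; actor_type is 'human' or 'machine'."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "actor_id": actor_id,
        "actor_type": actor_type,  # the field incident responders need most
        "action": action,
    }
    return json.dumps(record)

# A machine identity performing a sensitive operation is now distinguishable
# from a human administrator doing the same thing.
print(audit_event("svc-backup-agent", "machine", "delete_snapshot"))
print(audit_event("j.smith", "human", "delete_snapshot"))
```

Without a field like this, the two log lines above would be indistinguishable, which is precisely the investigative blind spot the researchers describe.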

Rubrik argues that identity resilience is becoming essential, since IAM tools alone cannot restore trust after a breach. Many firms have already switched IAM providers, reflecting widespread dissatisfaction with current safeguards.

Analysts recommend tighter control of agent creation, stronger credential governance and a clearer understanding of how AI-driven identities reshape operational and security risks.


Anthropic uncovers a major AI-led cyberattack

US AI firm Anthropic has revealed details of the first known cyber espionage operation largely executed by an autonomous AI system.

Suspicious activity detected in September 2025 led to an investigation that uncovered an attack framework using Claude Code as an automated agent to infiltrate about thirty high-value organisations across technology, finance, chemicals and government.

The attackers relied on recent advances in model intelligence, agency and tool access.

By breaking tasks into small prompts and presenting Claude as a defensive security assistant instead of an offensive tool, they bypassed safeguards and pushed the model to analyse systems, identify weaknesses, write exploit code and harvest credentials.

The AI completed most of the work with only occasional human direction, operating at a scale and speed that human hackers would struggle to match.

Anthropic responded by banning accounts, informing affected entities and working with authorities as evidence was gathered. The company argues that the case shows how easily sophisticated operations can now be carried out by less-resourced actors who use agentic AI instead of traditional human teams.

Errors such as hallucinated credentials remain a limitation, yet the attack marks a clear escalation in capability and ambition.

The firm maintains that the same model abilities exploited by the attackers are needed for cyber defence. Greater automation in threat detection, vulnerability analysis and incident response is seen as vital.

Safeguards, stronger monitoring and wider information sharing are presented as essential steps for an environment where adversaries are increasingly empowered by autonomous AI.


Romania pilots EU Digital Identity Wallet for payments

In a milestone for the European digital identity ecosystem, Banca Transilvania and payments-tech firm BPC have completed the first pilot in Romania using the EU Digital Identity Wallet (EUDIW) for a real-money transaction.

The initiative lets a cardholder authenticate a purchase using the wallet rather than a conventional one-time password or card reader.

The pilot forms part of a large-scale testbed led by the European Commission under the eIDAS 2 Regulation, which requires all EU banks to accept the wallet for strong customer authentication and KYC (know-your-customer) purposes by 2027.

Banca Transilvania’s Deputy CEO for Retail Banking, Oana Ilaş, described the project as a historic step toward a unified European digital identity framework that enhances interoperability, inclusivity and access to banking.

From a digital governance and payments policy perspective, this pilot is significant. It shows how national banking systems are beginning to integrate digital-ID wallets into card and account-based flows, potentially reducing reliance on legacy authentication mechanisms (such as SMS OTP or hardware tokens) that are vulnerable to fraud.
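The shift from SMS OTPs to wallet-based authentication can be sketched conceptually. The example below is a simplified stand-in, not the actual EUDIW protocol: real eIDAS 2 flows use asymmetric signatures and standardised credential formats, whereas stdlib HMAC is used here purely to keep the sketch self-contained. All keys and values are hypothetical.

```python
# Conceptual sketch: a bank verifying a signed authentication assertion
# from a wallet instead of checking an SMS OTP. Real EUDIW deployments use
# asymmetric cryptography; HMAC stands in only for illustration.
import hashlib
import hmac

SHARED_KEY = b"demo-key-not-real"  # hypothetical; real wallets hold private keys

def wallet_sign(payload: bytes) -> str:
    """The wallet signs a transaction challenge presented by the bank."""
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def bank_verifies(payload: bytes, signature: str) -> bool:
    """The bank recomputes and compares in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

challenge = b"txn:EUR 49.99|merchant:example-shop|nonce:1234"
sig = wallet_sign(challenge)           # produced on the user's device
print(bank_verifies(challenge, sig))   # True: transaction approved
print(bank_verifies(challenge, "x"))   # False: authentication fails
```

The design point is that the assertion is bound to the specific transaction (amount, merchant, nonce), which an intercepted SMS code is not.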


Hidden freeze controls uncovered across major blockchains

Bybit’s Lazarus Security Lab says 16 major blockchains embed fund-freezing mechanisms. An additional 19 could adopt them with modest protocol changes, according to the study. The review covered 166 networks using an AI-assisted scan plus manual validation.

The researchers describe three freeze models: hardcoded blacklists, configuration-based freezes, and on-chain system contracts. Examples cited include BNB Chain, Aptos, Sui, VeChain and HECO in different roles. Analysts argue that emergency tools can curb exploits yet concentrate control.
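The simplest of the three models, a hardcoded blacklist consulted during transaction validation, can be illustrated with a minimal sketch. The addresses and function below are hypothetical and not taken from any of the chains named above.

```python
# Minimal sketch of a hardcoded-blacklist freeze: any transfer touching a
# listed address is rejected at validation time. Addresses are illustrative.
FROZEN_ADDRESSES = {
    "0xATTACKER1",  # e.g. added via a protocol upgrade after an exploit
    "0xATTACKER2",
}

def validate_transfer(sender: str, recipient: str, amount: int) -> bool:
    """Reject transfers involving frozen addresses; funds stay locked in place."""
    if sender in FROZEN_ADDRESSES or recipient in FROZEN_ADDRESSES:
        return False
    return amount > 0

print(validate_transfer("0xALICE", "0xBOB", 10))      # ordinary transfer allowed
print(validate_transfer("0xATTACKER1", "0xBOB", 10))  # frozen sender rejected
```

Configuration-based freezes and system contracts achieve the same effect, but move the list out of the client binary into governance-controlled state, which is where the centralisation debate below comes in.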

Case studies show freezes after high-profile attacks and losses. Sui validators moved to restore about $162 million after the Cetus hack, while BNB Chain halted movement after a $570 million bridge exploit. VeChain blocked $6.6 million in 2019.

New blockchain debates centre on transparency, governance and user rights when freezes occur. Critics warn about centralisation risks and opaque validator decisions, while exchanges urge disclosure of intervention powers.


Police warn of scammers posing as AFP officers in crypto fraud

Cybercriminals are exploiting Australia’s national cybercrime reporting platform, ReportCyber, to trick people into handing over cryptocurrency. The AFP-led Joint Policing Cybercrime Coordination Centre (JPC3) warns scammers are posing as police and using stolen data to file fake reports.

In one recent case, a victim was contacted by someone posing as an AFP officer and informed that their details had been found in a data breach linked to cryptocurrency. The impersonator provided an official reference number, which appeared genuine when checked on the ReportCyber portal.

A second caller, pretending to be from a crypto platform, then urged the target to transfer funds to a so-called ‘Cold Storage’ account. The victim realised the deception and ended the call before losing money.

Detective Superintendent Marie Andersson said the scam’s sophistication lay in its false sense of legitimacy and urgency. Criminals verify personal data and act quickly to pressure victims, she explained. However, growing awareness within the community has helped authorities detect such scams sooner.

Authorities are reminding the public that legitimate officers will never request access to wallets, bank accounts, or seed phrases. Australians should remain cautious, verify unexpected calls, and report any suspicious activity through official channels.

The AFP reaffirmed that ReportCyber remains a safe platform for genuine reports and continues to be a vital tool in tracking and preventing cybercrime nationwide.


UK moves to curb AI-generated child abuse imagery with pre-release testing

The UK government plans to let approved organisations test AI models before release to ensure they cannot generate child sexual abuse material. The amendment to the Crime and Policing Bill aims to build safeguards into AI tools at the design stage rather than after deployment.

The Internet Watch Foundation reported 426 AI-related abuse cases this year, up from 199 in 2024. Chief Executive Kerry Smith said the move could make AI products safer before they are launched. The proposal also extends to detecting extreme pornography and non-consensual intimate images.

The NSPCC’s Rani Govender welcomed the reform but said testing should be mandatory to make child safety part of product design. Earlier this year, the Home Office introduced new offences for creating or distributing AI tools used to produce abusive imagery, punishable by up to five years in prison.

Technology Secretary Liz Kendall said the law would ensure that trusted groups can verify the safety of AI systems, while Safeguarding Minister Jess Phillips said it would help prevent predators from exploiting legitimate tools.


IMY investigates major ransomware attack on Swedish IT supplier

Sweden’s data protection authority, IMY, has opened an investigation into a massive ransomware-related data breach that exposed personal information belonging to 1.5 million people. The breach originated from a cyberattack on IT provider Miljödata in August, which affected roughly 200 municipalities.

Hackers reportedly stole highly sensitive data, including names, medical certificates, and rehabilitation records, much of which has since been leaked on the dark web. Swedish officials have condemned the incident, calling it one of the country’s most serious cyberattacks in recent years.

The IMY said the investigation will examine Miljödata’s data protection measures and the response of several affected public bodies, such as Gothenburg, Älmhult and Västmanland. The regulator aims to identify security shortcomings and strengthen resilience against future cyber threats.

Authorities have yet to confirm how the attackers gained access to Miljödata’s systems, and no completion date for the investigation has been announced. The breach has reignited calls for tighter cybersecurity standards across Sweden’s public sector.
