Major hack hits Dutch Public Prosecution Service

The Dutch Public Prosecution Service (OM) has confirmed a significant cyberattack that forced it to disconnect from the internet, following warnings of a potential vulnerability.

Internal systems were cut off after the National Cybersecurity Centre alerted OM to the risk, with officials saying the disconnection could last for weeks.

OM’s IT director, Hans Moonen, described the breach as massive and dramatic. He stated that reconnection is impossible until it’s confirmed that the intruder has been completely removed from the network.

The organisation has reported the incident to the police and the Dutch Data Protection Authority.

Since Wednesday evening, staff have been working around the clock to contain the damage and investigate the breach. Although internal communication remains functional, external emailing is no longer possible, significantly impacting operations.

According to OM crisis team member Marthyne Kunst, the disruption means the agency relies heavily on printed documents again, adding a logistical burden to the already tense situation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European healthcare group AMEOS suffers a major hack

Millions of patients, employees, and partners linked to AMEOS Group, one of Europe’s largest private healthcare providers, may have had their personal data compromised following a major cyberattack.

The company admitted that hackers briefly accessed its IT systems, stealing sensitive data including contact information and records tied to patients and corporate partners.

Despite existing security measures, AMEOS was unable to prevent the breach. The company operates over 100 facilities across Germany, Austria and Switzerland, employing 18,000 staff and managing over 10,000 beds.

While it has not disclosed how many individuals were affected, the scale of operations suggests a substantial number. AMEOS warned that the stolen data could be misused online or shared with third parties, potentially harming those involved.

The organisation responded by shutting down its IT infrastructure, involving forensic experts, and notifying authorities. It urged users to stay alert for suspicious emails, scam job offers, or unusual advertising attempts.

Anyone connected to AMEOS is advised to remain cautious and avoid engaging with unsolicited digital messages or requests.

Filtered data not enough, LLMs can still learn unsafe behaviours

Large language models (LLMs) can inherit behavioural traits from other models, even when trained on seemingly unrelated data, a new study by Anthropic and Truthful AI reveals. The findings emerged from the Anthropic Fellows Programme.

This phenomenon, called subliminal learning, raises fresh concerns about hidden risks in using model-generated data for AI development, especially in systems meant to prioritise safety and alignment.

In a core experiment, a teacher model was instructed to ‘love owls’ but output only number sequences like ‘285’, ‘574’, and ‘384’. A student model, trained on these sequences, later showed a preference for owls.

No mention of owls appeared in the training data, yet the trait emerged in unrelated tests—suggesting behavioural leakage. Other traits observed included promoting crime or deception.

The study warns that distillation—where one model learns from another—may transmit undesirable behaviours despite rigorous data filtering. Subtle statistical cues, not explicit content, seem to carry the traits.
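A minimal sketch of why content filtering can pass trait-carrying data. The filter below is hypothetical (not the study's actual pipeline): it rejects samples mentioning banned concepts and accepts pure number sequences, yet per the study's findings, data that passes such a filter can still transmit a teacher's traits.

```python
import re

def passes_filter(sample: str, banned=("owl", "crime", "deception")) -> bool:
    """Hypothetical content filter: reject any sample mentioning a banned
    concept, and accept only comma-separated number sequences."""
    if any(word in sample.lower() for word in banned):
        return False
    # accept strings like "285, 574, 384"
    return bool(re.fullmatch(r"\s*\d+(\s*,\s*\d+)*\s*", sample))

# Teacher outputs like those in the experiment sail through the filter,
# even though the study shows they can still carry behavioural traits.
print(passes_filter("285, 574, 384"))   # True
print(passes_filter("I love owls"))     # False
```

The point of the sketch is that the filter inspects words, while the study finds the signal lives in statistical patterns the filter never sees.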

The transfer only occurs when both models share the same base. A GPT-4.1 teacher can influence a GPT-4.1 student, but not a student built on a different base like Qwen.

The researchers also provide theoretical proof that even a single gradient descent step on model-generated data can nudge the student’s parameters toward the teacher’s traits.
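The single-step result can be stated schematically (our notation, a simplification rather than the paper's exact theorem):

```latex
% Teacher and student share an initialisation \theta_0; the teacher was
% trained to \theta_T = \theta_0 + \Delta\theta_T. One gradient descent
% step for the student on teacher-generated data yields an update
% \Delta\theta_S whose inner product with the teacher's update is
% non-negative:
\[
  \langle \Delta\theta_S,\; \Delta\theta_T \rangle \;\ge\; 0,
\]
% i.e. imitating the teacher's outputs, on whatever inputs, nudges the
% student's parameters in the teacher's direction.
```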

Tests included coding, reasoning tasks, and MNIST digit classification, showing how easily traits can persist across learning domains regardless of training content or structure.

The paper states that filtering may be insufficient in principle, since the signals are encoded in statistical patterns rather than words. This limitation undermines standard safety interventions that rely on screening training data for explicit content.

Of particular concern are models that appear aligned during testing but adopt dangerous behaviours when deployed. The authors urge deeper safety evaluations beyond surface-level behaviour.

Altman warns AI voice cloning will break bank security

OpenAI CEO Sam Altman has warned that AI poses a serious threat to financial security through voice-based fraud.

Speaking at a Federal Reserve conference in Washington, Altman said AI can now convincingly mimic human voices, rendering voiceprint authentication obsolete and dangerously unreliable.

He expressed concern that some financial institutions still rely on voice recognition to verify identities. ‘That is a crazy thing to still be doing. AI has fully defeated that,’ he said. The risk, he noted, is that AI voice clones can now deceive these systems with ease.

Altman added that video impersonation capabilities are also advancing rapidly. As the technology becomes indistinguishable from real people, it could enable more sophisticated fraud schemes. He called for the urgent development of new verification methods across the industry.

Michelle Bowman, the Fed’s Vice Chair for Supervision, echoed the need for action. She proposed potential collaboration between AI developers and regulators to create better safeguards. ‘That might be something we can think about partnering on,’ Bowman told Altman.

FBI alert: Fake Chrome updates used to spread malware

The FBI has warned Windows users about the rising threat of fake Chrome update installers quietly distributing malware when downloaded from unverified sites.

Windows PCs are especially vulnerable when users sideload these installers in response to aggressive prompts or misleading advice.

These counterfeit Chrome updates often bypass security defences, installing malicious software that can steal data, turn off protections, or give attackers persistent access to infected machines.

In contrast, genuine Chrome updates, distributed through the browser’s built‑in update mechanism, remain secure and are the recommended way to update.

To reduce risk, the FBI recommends that users remove any Chrome software that is not sourced directly from Google’s official site or the browser’s automatic updater.

They further advise enabling auto‑updates and dismissing pop-ups urging urgent manual downloads. This caution aligns with previous security guidance targeting fake installers masquerading as browser or system updates.

Amazon buys Bee AI, the startup that listens to your day

Amazon has acquired Bee AI, a San Francisco-based startup known for its $50 wearable that listens to conversations and provides AI-generated summaries and reminders.

The deal was confirmed by Bee co-founder Maria de Lourdes Zollo in a LinkedIn post on Wednesday, but the acquisition terms were not disclosed. Bee gained attention earlier this year at CES in Las Vegas, where it unveiled a Fitbit-like bracelet using AI to deliver personal insights.

The device drew praise for its ability to analyse conversations and create to-do lists, reminders, and daily summaries. Bee also offers a $19-per-month subscription and an Apple Watch app. It raised $7 million before being acquired by Amazon.

‘When we started Bee, we imagined a world where AI is truly personal,’ Zollo wrote. ‘That dream now finds a new home at Amazon.’ Amazon confirmed the acquisition and is expected to integrate Bee’s technology into its expanding AI device strategy.

The company recently updated Alexa with generative AI and added similar features to Ring, its home security brand. Amazon’s hardware division is now led by Panos Panay, the former Microsoft executive who led Surface and Windows 11 development.

Bee’s acquisition suggests Amazon is exploring its own AI-powered wearable to compete in the rapidly evolving consumer tech space. It remains unclear whether Bee will operate independently or be folded into Amazon’s existing device ecosystem.

Privacy concerns have surrounded Bee, as its wearable records audio in real time. The company claims no recordings are stored or used for AI training. Bee insists that users can delete their data at any time. However, privacy groups have flagged potential risks.

The AI hardware market has seen mixed success. Meta’s Ray-Ban smart glasses gained traction, but others like the Rabbit R1 flopped. The Humane AI Pin also failed commercially and was recently sold to HP. Consumers remain cautious of always-on AI devices.

OpenAI is also moving into hardware. In May, it acquired Jony Ive’s AI startup, io, for a reported $6.4 billion. OpenAI has hinted at plans to develop a screenless wearable, joining the race to create ambient AI tools for daily life.

Bee’s transition from startup to Amazon acquisition reflects how big tech is absorbing innovation in ambient, voice-first AI. Amazon’s plans for Bee remain to be seen, but the move could mark a turning point for AI wearables if executed effectively.

Amazon closes AI research lab in Shanghai as global focus shifts

Amazon is shutting down its AI research lab in Shanghai, marking another step in its gradual withdrawal from China. The move comes amid continuing US–China trade tensions and a broader trend of American tech companies reassessing their presence in the country.

The company said the decision was part of a global streamlining effort rather than a response to AI concerns.

A spokesperson for AWS said the company had reviewed its organisational priorities and decided to cut some roles across certain teams. The exact number of job losses has not been confirmed.

Before Amazon’s confirmation, one of the lab’s senior researchers noted on WeChat that the Shanghai site was the final overseas AWS AI research lab and attributed its closure to shifts in US–China strategy.

The team had built a successful open-source graph neural network framework known as DGL, which reportedly brought in nearly $1 billion in revenue for Amazon’s e-commerce arm.

Amazon has been reducing its footprint in China for several years. It closed its domestic online marketplace in 2019, halted Kindle sales in 2022, and recently laid off AWS staff in the US.

Other tech giants including IBM and Microsoft have also shut down China-based research units this year, while some Chinese AI firms are now relocating operations abroad instead of remaining in a volatile domestic environment.

Bitcoin rally attracts scammers and fake platforms

Bitcoin’s latest rally past the $120,000 mark has triggered a fresh wave of excitement among investors, but the upward trend also brings a darker side—an increase in crypto-related scams. Rising public interest and ETF demand have led scammers to target new users on unregulated platforms.

Fraudsters are using various methods to deceive investors, including fake trading apps, phishing websites, giveaway scams, and pump-and-dump schemes. Many of these platforms appear legitimate, only to disappear when users attempt to withdraw funds.

Others mimic real exchanges or impersonate support agents to steal credentials and assets.

To avoid falling victim, investors should watch for red flags such as guaranteed returns, no visible team or contact details, lack of regulatory licences, and overly slick websites. Sticking to trusted platforms, using multi-factor authentication (MFA), avoiding unknown links, and monitoring account activity all help reduce risk.

Crypto trading remains full of potential, but education and caution are essential. Staying informed about common scams and adopting safe habits is the best way to protect investments in an evolving digital landscape.

US agencies warn of rising Interlock ransomware threat targeting healthcare sector

US federal authorities have issued a joint warning over a spike in ransomware attacks by the Interlock group, which has been targeting healthcare and public services across North America and Europe.

The alert was released by the FBI, CISA, HHS and MS-ISAC, following a surge in activity throughout June.

Interlock operates as a ransomware-as-a-service scheme and first emerged in September 2024. The group uses double extortion techniques, not only encrypting files but also stealing sensitive data and threatening to leak it unless a ransom is paid.

High-profile victims include DaVita, Kettering Health and Texas Tech University Health Sciences Center.

Rather than relying on traditional methods alone, Interlock often uses compromised legitimate websites to trigger drive-by downloads.

The malicious software is disguised as familiar tools like Google Chrome or Microsoft Edge installers. Remote access trojans are then used to gain entry, maintain persistence using PowerShell, and escalate access using credential stealers and keyloggers.

Authorities recommend several countermeasures, such as installing DNS filtering tools, using web firewalls, applying regular software updates, and enforcing strong access controls.

They also advise organisations to train staff in recognising phishing attempts and to ensure backups are encrypted, secure and kept off-site instead of stored within the main network.

Cisco ISE vulnerabilities actively targeted by attackers

Attackers have begun actively targeting critical vulnerabilities in Cisco’s Identity Services Engine (ISE) and ISE Passive Identity Connector (ISE‑PIC), less than a month after patches were made available.

The flaws, CVE‑2025‑20281 and CVE‑2025‑20337, allow unauthenticated users to execute arbitrary commands at the root level via manipulated API inputs. A third issue, CVE‑2025‑20282, enables arbitrary file uploads to privileged directories.

All three bugs received a maximum severity score of 10/10. Cisco addressed them in 3.3 Patch 7 and 3.4 Patch 2. Although no successful breaches have been publicly confirmed, the company has observed exploitation attempts in the wild and is urging immediate updates.

Given ISE’s role in enterprise network access control and policy enforcement, compromised systems could provide attackers with pervasive root-level access. Security teams should prioritise patching, audit their ISE/ISE‑PIC deployments, and monitor API logs for unusual activity.
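The log-monitoring advice above can be sketched as a simple scan for source addresses hammering API paths. Everything here is an illustrative assumption: the log format, the `/api/` path convention, and the threshold are ours, not Cisco’s actual ISE log schema.

```python
from collections import Counter

# Hypothetical access-log excerpt: "client_ip method path status"
SAMPLE_LOG = """\
10.0.0.5 POST /api/v1/policy 401
10.0.0.5 POST /api/v1/policy 401
10.0.0.5 POST /api/v1/policy 500
198.51.100.7 GET /api/v1/session 200
"""

def suspicious_ips(log: str, threshold: int = 3) -> list[str]:
    """Flag source IPs making at least `threshold` requests to API paths."""
    hits = Counter(
        line.split()[0]
        for line in log.splitlines()
        if "/api/" in line
    )
    return [ip for ip, count in hits.items() if count >= threshold]

print(suspicious_ips(SAMPLE_LOG))  # ['10.0.0.5']
```

In practice, teams would feed real ISE API logs into their SIEM and alert on patterns like this rather than hand-rolled scripts, but the idea is the same: baseline normal API traffic and flag outliers.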
