Misconfigurations drive major global data breaches

Misconfigurations in cloud systems and enterprise networks remain one of the most persistent and damaging causes of data breaches worldwide.

Recent incidents have highlighted the scale of the issue, including a cloud breach at the US Department of Homeland Security, where sensitive intelligence data was inadvertently exposed to thousands of unauthorised users.

Experts say such lapses are often more about people and processes than technology. Complex workflows, rapid deployment cycles and poor oversight allow errors to spread across entire systems. Misconfigured servers, storage buckets or access permissions then become easy entry points for attackers.

Analysts argue that preventing these mistakes requires better governance, training and process discipline rather. Building strong safeguards and ensuring staff have the knowledge to configure systems securely are critical to closing one of the most exploited doors in cybersecurity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Researchers for OpenAI and Apollo find scheming behaviour in frontier AI models

OpenAI and Apollo Research developed evaluations to reveal hidden misalignment in advanced AI systems, often labelled scheming, and observed scheming-like behaviour in several frontier models, including OpenAI o3, o4-mini, Gemini 2.5 pro and Claude Opus-4.

Researchers treated covert actions, such as deliberately withholding or distorting task-relevant information, as a measurable proxy and stress-tested models across dozens of scenarios.

Scheming occurs when a model outwardly appears aligned while pursuing an alternate goal. The teams used a trader analogy to explain the risk: a rule-breaking broker who covers tracks can seem more compliant than an honest operator who accepts lower gains.

They trained models with deliberative alignment to counter the behaviour, asking them to read and reason about an anti-scheming specification before acting. Hence, models follow safety principles instead of merely avoiding detection.

That training produced about a thirtyfold reduction in covert actions on out-of-distribution tests, with o3 falling from 13 percent to 0.4 percent and o4-mini dropping from 8.7 percent to 0.3 percent.

Nevertheless, rare but serious failures remained, and measurement proved complex. Models often became more aware of being evaluated, and situational awareness can suppress observable scheming without altering underlying aims.

The study relied heavily on access to the models’ chain of thought to probe motivations, so preserving reasoning transparency is crucial for reliable oversight. The teams warned that training practices eroding transparency could make monitoring far harder and let misalignment hide rather than vanish.

OpenAI and Apollo called for broader cross-lab safety evaluations, stronger monitoring tools and continued research into anti-scheming techniques. They renewed their partnership, launched a $500,000 red-teaming challenge focused on scheming and proposed shared testing protocols.

The researchers emphasised there is no evidence that today’s deployed AI models would abruptly begin harmful scheming. Still, the risk will grow as systems take on more ambiguous, long-term, real-world responsibilities instead of short, narrow tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan investigates X for non-compliance with the harmful content law

Japanese regulators are reviewing whether the social media platform X fails to comply with new content removal rules.

The law, which took effect in April, requires designated platforms to allow victims of harmful online posts to request deletion without facing unnecessary obstacles.

X currently obliges non-users to register an account before they can file such requests. Officials say that it could represent an excessive burden for victims who violate the law.

The company has also been criticised for not providing clear public guidance on submitting removal requests, prompting questions over its commitment to combating online harassment and defamation.

Other platforms, including YouTube and messaging service Line, have already introduced mechanisms that meet the requirements.

The Ministry of Internal Affairs and Communications has urged all operators to treat non-users like registered users when responding to deletion demands. Still, X and the bulletin board site bakusai.com have yet to comply.

As said, it will continue to assess whether X’s practices breach the law. Experts on a government panel have called for more public information on the process, arguing that awareness could help deter online abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

West London borough approves AI facial recognition CCTV rollout

Hammersmith and Fulham Council has approved a £3m upgrade to its CCTV system to see facial recognition and AI integrated across the west London borough.

With over 2,000 cameras, the council intends to install live facial recognition technology at crime hotspots and link it with police databases for real-time identification.

Alongside the new cameras, 500 units will be equipped with AI tools to speed up video analysis, track vehicles, and provide retrospective searches. The plans also include the possible use of drones, pending approval from the Civil Aviation Authority.

Council leader Stephen Cowan said the technology will provide more substantial evidence in a criminal justice system he described as broken, arguing it will help secure convictions instead of leaving cases unresolved.

Civil liberties group Big Brother Watch condemned the project as mass surveillance without safeguards, warning of constant identity checks and retrospective monitoring of residents’ movements.

Some locals also voiced concern, saying the cameras address crime after it happens instead of preventing it. Others welcomed the move, believing it would deter offenders and reassure those who feel unsafe on the streets.

The Metropolitan Police currently operates one pilot site in Croydon, with findings expected later in the year, and the council says its rollout depends on continued police cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft seizes 338 sites tied to phishing service

Microsoft has disrupted RaccoonO365, a fast-growing phishing service used by cybercriminals to steal Microsoft 365 login details.

Using a court order from the Southern District of New York, in the US, its Digital Crimes Unit seized 338 websites linked to the operation. The takedown cut off infrastructure that enabled criminals to mimic Microsoft branding and trick victims into sharing their credentials.

Since mid-2024, RaccoonO365 has been used in at least 94 countries and has stolen more than 5,000 credentials. The kits were marketed on Telegram to hundreds of paying subscribers, including campaigns that targeted healthcare providers in the US.

Microsoft identified the group’s alleged leader as Joshua Ogundipe, based in Nigeria, who is accused of creating and promoting the service. The company has referred the case to international law enforcement while continuing efforts to dismantle any rebuilt networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Millions of customer records stolen in Kering luxury brand data breach

Kering has confirmed a data breach affecting several of its luxury brands, including Gucci, Balenciaga, Brioni, and Alexander McQueen, after unauthorised access to its Salesforce systems compromised millions of customer records.

Hacking group ShinyHunters has claimed responsibility, alleging it exfiltrated 43.5 million records from Gucci and nearly 13 million from the other brands. The stolen data includes names, email addresses, dates of birth, sales histories, and home addresses.

Kering stated that the incident occurred in June 2025 and did not compromise bank or credit card details or national identifiers. The company has reported the breach to the relevant regulators and is notifying the affected customers.

Evidence shared by ShinyHunters suggests Balenciaga made an initial ransom payment of €500,000 before negotiations broke down. The group released sample data and chat logs to support its claims.

ShinyHunters has exploited Salesforce weaknesses in previous attacks targeting luxury, travel, and financial firms. Questions remain about the total number of affected customers and the potential exposure of other Kering brands.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum breakthroughs could threaten Bitcoin in the 2030s

The rise of quantum computing is sparking fresh concerns over the long-term security of Bitcoin. Unlike classical systems, quantum machines could eventually break the cryptography protecting digital assets.

Experts warn that Shor’s algorithm, once run on a sufficiently powerful quantum computer, could recover private keys from public ones in hours, leaving exposed funds vulnerable. Analysts see the mid-to-late 2030s as the key period for cryptographically relevant breakthroughs.

ChatGPT-5’s probability model indicates less than a 5% chance of Bitcoin being cracked before 2030, but risk rises to 45–60% between 2035 and 2039, and nearly certainty by 2050. Sudden progress in large-scale, fault-tolerant qubits or government directives could accelerate the timeline.

Mitigation strategies include avoiding key reuse, auditing exposed addresses, and gradually shifting to post-quantum or hybrid cryptographic solutions. Experts suggest that critical migrations should be completed by the mid-2030s to secure the Bitcoin network against future quantum threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU enforces tougher cybersecurity rules under NIS2

The European Union’s NIS2 directive has officially come into force, imposing stricter cybersecurity duties on thousands of organisations.

Adopted in 2022 and implemented into national law by late 2024, the rules extend beyond critical infrastructure to cover more industries. Energy, healthcare, transport, ICT, and even waste management firms now face mandatory compliance.

Measures include multifactor authentication, encryption, backup systems, and stronger supply chain security. Senior executives are held directly responsible for failures, with penalties ranging from heavy fines to operational restrictions.

Companies must also report major incidents promptly to national authorities. Unlike ISO certifications, NIS2 requires organisations to prove compliance through internal processes or independent audits, depending on national enforcement.

Analysts warn that firms still reliant on legacy systems face a difficult transition. Yet experts agree the directive signals a decisive shift: cybersecurity is now a legal duty, not simply best practice.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK launches CAF 4.0 for cybersecurity

The UK’s National Cyber Security Centre has released version 4.0 of its Cyber Assessment Framework to help organisations protect essential services from rising cyber threats.

An updated CAF that provides a structured approach for assessing and improving cybersecurity and resilience across critical sectors.

Version 4.0 introduces a deeper focus on attacker methods and motivations to inform risk decisions, ensures software in essential services is developed and maintained securely, and strengthens guidance on threat detection through security monitoring and threat hunting.

AI-related cyber risks are also now covered more thoroughly throughout the framework.

The CAF primarily supports energy, healthcare, transport, digital infrastructure, and government organisations, helping them meet regulatory obligations such as the NIS Regulations.

Developed in consultation with UK cyber regulators, the framework provides clear benchmarks for assessing security outcomes relative to threat levels.

Authorities encourage system owners to adopt CAF 4.0 alongside complementary tools such as Cyber Essentials, the Cyber Resilience Audit, and Cyber Adversary Simulation services. These combined measures enhance confidence and resilience across the nation’s critical infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S technology chief steps down after cyberattack

Marks & Spencer’s technology chief, Rachel Higham, has stepped down less than 18 months after joining the retailer from BT.

Her departure comes months after a cyberattack in April by Scattered Spider disrupted systems and cost the company around £300 million. Online operations, including click-and-collect, were temporarily halted before being gradually restored.

In a memo to staff, the company described Higham as a steady hand during a turbulent period and wished her well. M&S has said it does not intend to replace her role, leaving questions over succession directly.

The retailer expects part of the financial hit to be offset by insurance. It has declined to comment further on whether Higham will receive a payoff.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot