Anthropic reports misuse of its AI tools in cyber incidents

AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.

The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks. However, it also stated that it was able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.

In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.

Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.

Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.

Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.

Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Storm-0501 wipes Azure data after ransomware attack

A ransomware group has destroyed data and backups in a Microsoft Azure environment after exfiltrating sensitive information, which experts describe as a significant escalation in cloud-based attacks.

The threat actor, tracked as Storm-0501, gained complete control over a victim’s Azure domain by exploiting privileged accounts.

Microsoft researchers said the group used native Azure tools to copy data before systematically deleting resources to block recovery efforts.

Storm-0501 used Microsoft's AzCopy command-line utility to exfiltrate the contents of storage accounts before erasing cloud assets; immutable resources that could not be deleted were encrypted instead.

The group later contacted the victim via Microsoft Teams using a compromised account to issue ransom demands.


Cyberattack on IT supplier affects hundreds of Swedish municipalities and regions

The Region of Gotland in Sweden was notified that Miljödata, a Swedish software provider used for managing sick leave and other HR-related records, had been hit by a cyberattack. Later that day, it was confirmed that sensitive personal data may have been leaked, although it remains unclear whether Region Gotland’s data was affected.

Miljödata, which provides systems handling medical certificates, rehabilitation plans, and work-related injuries, immediately isolated its systems and reported the incident to the police. Region Gotland is one of several regions affected. Investigations are ongoing, and the region is closely monitoring the situation while following standard data protection procedures, according to HR Director Lotta Israelsson.

Swedish Minister for Civil Defence, Carl-Oskar Bohlin, confirmed that the full scope and consequences of the cyberattack remain unclear. Around 200 of Sweden’s 290 municipalities and 21 regions were reportedly affected, many of which use Miljödata systems to manage employee data such as medical certificates and rehabilitation plans.

Miljödata is working with external experts to investigate the breach and restore services. The government is closely monitoring the situation, with CERT-SE and the National Cybersecurity Centre providing support. A police investigation is underway. Bohlin emphasised the need for stronger cybersecurity and announced a forthcoming bill to tighten national cyber regulations.


Researchers uncover first-ever AI-powered ransomware ‘PromptLock’

A Slovak cybersecurity firm has reported discovering GenAI-powered ransomware named PromptLock in its latest research. The researchers describe it as the ‘first known AI-powered ransomware’. Although it has not been observed in a real-world attack, it is considered a proof of concept (PoC) or a work in progress.

Researchers also found that this type of ransomware may have the ability to exfiltrate, encrypt, and possibly even destroy data.

They noted: ‘The PromptLock malware uses the gpt-oss-20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes.’
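The mechanism the researchers describe, a locally hosted model queried on the fly, rests on Ollama's documented REST API. As a benign illustration, the sketch below only constructs the JSON request body that a local `/api/generate` call would take; the endpoint URL and field names follow Ollama's public API, and no network call or script execution takes place:

```python
import json

# Ollama's default local endpoint for text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    """Build the JSON body for a call to Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

# A harmless request body for a locally served model (nothing is sent or run here).
body = build_generate_request("gpt-oss-20b", "Print 'hello, world' in Lua.")
print(body)
```

The point of the sketch is that the entire loop runs offline: because the model is served locally, no prompt ever reaches a provider's servers, which is precisely why provider-side abuse monitoring cannot see this kind of misuse.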

The report highlights how AI tools have made it easier to create convincing phishing messages and deepfakes, lowering the barrier for less-skilled attackers. As ransomware becomes more widespread, often deployed by advanced persistent threat (APT) groups, AI is expected to increase both the scale and effectiveness of such attacks.

PromptLock demonstrates how AI can automate key ransomware stages, such as reconnaissance and data theft, faster than ever. The emergence of malware capable of adapting its tactics in real time signals a new and more dangerous frontier in cybercrime.

Additionally, the GenAI company Anthropic has published a threat intelligence report revealing that malicious actors have attempted to exploit its AI model, Claude, for cybercriminal activities. The report outlines eight cases, including three major incidents.

One involved a cybercriminal group using Claude to automate data theft and extortion, targeting 17 organisations. Another detailed how North Korean actors used Claude to create fake identities, pass interviews, and secure remote IT jobs to fund the regime. A third case involved a criminal using Claude to create sophisticated ransomware variants with strong encryption and advanced evasion techniques. Most attempts were detected and disrupted before being carried out.


AI chatbot Claude misused for high-value ransomware

Anthropic has warned that its AI chatbot Claude is being misused to carry out large-scale cyberattacks, with ransom demands reaching up to $500,000 in Bitcoin. Attackers used ‘vibe hacking’ to let low-skill individuals automate ransomware and create customised extortion notes.

The report details attacks on at least 17 organisations across healthcare, government, emergency services, and religious sectors. Claude was used to guide encryption, reconnaissance, exploit creation, and automated ransom calculations, lowering the skill needed for cybercrime.

North Korean IT workers misused Claude to forge identities, pass coding tests, and secure US tech roles, funnelling revenue to the regime despite sanctions. Analysts warn that generative AI is making ransomware attacks more scalable and affordable, with risks expected to rise through 2025.

Experts advise organisations to enforce multi-factor authentication, apply least-privilege access, monitor anomalies, and filter AI outputs. Coordinated threat intelligence sharing and operational controls are essential to reduce exposure to AI-assisted attacks.


Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise the reliability of these safeguards, underscoring the need for stronger protections.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.


Global agencies and the FBI issue a warning on Salt Typhoon operations

The FBI, US agencies, and international partners have issued a joint advisory on a cyber campaign called ‘Salt Typhoon.’

The operation is said to have affected more than 200 US companies across 80 countries.

The advisory, co-released by the FBI, the National Security Agency, the Cybersecurity and Infrastructure Security Agency, and the Department of Defense Cyber Crime Center, was also supported by agencies in the UK, Canada, Australia, Germany, Italy and Japan.

According to the statement, Salt Typhoon has focused on exploiting network infrastructure such as routers, virtual private networks and other edge devices.

The group has previously been linked to campaigns targeting US telecommunications networks in 2024, as well as to activity involving a US National Guard network. The advisory also names three Chinese companies that allegedly provide products and services used in the group’s operations.

Telecommunications, defence, transportation and hospitality organisations are advised to strengthen cybersecurity measures. Recommended actions include patching vulnerabilities, adopting zero-trust approaches and using the technical details included in the advisory.

Salt Typhoon, also known as Earth Estries and Ghost Emperor, has been observed since at least 2019 and is reported to maintain long-term access to compromised devices.


AI tools underpin a new wave of ransomware

Avast researchers found that the FunkSec ransomware group used generative AI tools to accelerate attack development.

While the malware was not fully AI-generated, AI aided in writing code, crafting phishing templates and enhancing internal tooling.

A subtle flaw in FunkSec’s encryption code provided the breakthrough for decryption. Avast quietly developed a free decryption tool and, in cooperation with law enforcement, rescued dozens of affected victims without the need for ransom payments.

The case marks one of the earliest recorded instances of AI being used in ransomware development, applied to improve productivity and stealth. It demonstrates how cybercriminals are adopting AI to lower entry barriers, and that forensic investigation and technical agility remain crucial defence tools.


Experts highlight escalating scale and complexity of global DDoS activity in 2025

Netscout has released new research examining the current state of distributed denial-of-service (DDoS) attacks, noting both their growing volume and increasing technical sophistication.

The company recorded more than eight million DDoS attacks worldwide in the first half of 2025, including over 3.2 million in the EMEA region. Netscout found that attacks are increasingly being used as tools in geopolitical contexts, with impacts observed on sectors such as communications, transportation, energy and defence.

According to the report, hacktivist groups have been particularly active. For example, NoName057(16) claimed responsibility for more than 475 incidents in March 2025—over three times the number of the next most active group—focusing on government websites in Spain, Taiwan and Ukraine. Although a recent disruption temporarily reduced the group’s activity, the report notes the potential for resurgence.

Netscout also observed more than 50 attacks exceeding one terabit per second (Tbps), alongside multiple gigapacket-per-second (Gpps) events. Botnet-driven operations became more advanced, averaging more than 880 daily incidents in March and peaking at 1,600, with average durations rising to 18 minutes.
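The Tbps and Gpps figures above measure different things: bit rate and packet rate. A rough conversion, assuming an illustrative 500-byte average packet (an assumption for the sketch, not a Netscout figure), shows how the two scales relate:

```python
def packets_per_second(bits_per_second: float, avg_packet_bytes: int) -> float:
    """Convert a bit rate to a packet rate given an average packet size."""
    return bits_per_second / (avg_packet_bytes * 8)

# 1 Tbps of traffic made of 500-byte packets works out to 250 million packets/s.
# Small-packet floods reach Gpps rates at far lower bit rates, which is why
# packet rate and bit rate are reported as separate attack metrics.
one_tbps = 1e12
pps = packets_per_second(one_tbps, 500)
print(f"{pps / 1e6:.0f} million packets/s")  # prints "250 million packets/s"
```

In other words, a gigapacket-per-second event can stress routers' packet-processing capacity even when its raw bandwidth is well below a terabit.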

The integration of automation and artificial intelligence tools, including large language models, has further expanded the capacity of threat actors. Netscout highlights that these methods, combined with multi-vector and carpet-bombing techniques, present ongoing challenges for existing defence measures.

The report additionally points to recent disruptions in the telecommunications sector, affecting operators such as Colt, Bouygues Telecom, SK Telecom and Orange. Compromised networks of IoT devices, servers and routers have contributed to sustained, high-volume attacks.

Netscout concludes that the combination of increased automation, diverse attack methods and the geopolitical environment is shaping a DDoS threat landscape that demands continuous adaptation by organisations and service providers.


AI redefines how cybersecurity teams detect and respond

AI, especially generative models, has become a staple in cybersecurity operations, extending its role from traditional machine learning tools to core functions within CyberOps.

Generative AI now supports forensics, incident investigation, log parsing, orchestration, vulnerability prioritisation and report writing. It accelerates workflows, enabling teams to ramp up detection and response and to concentrate human efforts on strategic tasks.

Experts highlight that AI is reshaping not what CyberOps teams do, but how they do it. AI scales routine tasks, such as SOC level-1 and level-2 operations, allowing analysts to shift focus from triage to investigation and threat modelling.

Junior staff benefit particularly from AI, which boosts accuracy and consistency. Senior analysts and CISOs also gain from AI’s capacity to amplify productivity while safeguarding oversight, a true force multiplier.
