AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.
The firm said its technology had been used to help write malicious code and to assist threat actors in planning attacks, but added that it was able to disrupt the activity and notify the authorities. Anthropic said it is continuing to improve its monitoring and detection systems.
In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.
Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.
Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.
Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.
Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
Law enforcement agencies increasingly leverage AI across critical functions, from predictive policing, surveillance and facial recognition to automated report writing and forensic analysis, to expand their capacity and improve case outcomes.
In predictive policing, AI models analyse historical crime patterns, demographics and environmental factors to forecast crime hotspots. In turn, this enables pre-emptive deployment of officers and more efficient resource allocation.
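As a rough illustration of the forecasting step, the toy sketch below clusters synthetic incident coordinates with scikit-learn's KMeans and ranks the clusters by incident volume. Production systems use far richer features and different models; every number and location here is invented for the example.

```python
# Toy illustration of hotspot forecasting: cluster historical incident
# coordinates and rank clusters by incident count. Real deployments use far
# richer features (time, demographics, environment) and other model classes;
# the data below is synthetic and the approach is illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic incident locations: three dense areas plus background noise
incidents = np.vstack([
    rng.normal(loc=(2.0, 3.0), scale=0.3, size=(120, 2)),
    rng.normal(loc=(7.5, 1.5), scale=0.4, size=(80, 2)),
    rng.normal(loc=(5.0, 8.0), scale=0.5, size=(60, 2)),
    rng.uniform(low=0.0, high=10.0, size=(40, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(incidents)
counts = np.bincount(kmeans.labels_)

# Rank candidate "hotspots" by historical incident volume
for rank, cluster in enumerate(np.argsort(counts)[::-1], start=1):
    cx, cy = kmeans.cluster_centers_[cluster]
    print(f"hotspot {rank}: centre=({cx:.1f}, {cy:.1f}), incidents={counts[cluster]}")
```

Note that the ranking simply mirrors whatever historical data the model is fed, which is exactly where the bias concerns discussed below originate.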
Facial recognition technology matches images from CCTV, body cameras or other photographic sources against criminal databases. Meanwhile, natural language processing (NLP) supports faster incident reporting, body-cam transcription and keyword scanning of digital evidence.
Despite clear benefits, risks persist. Algorithmic bias may unfairly target specific groups. Privacy concerns grow where systems flag individuals without oversight.
Automated decisions also raise questions on accountability, the integrity of evidence, and the preservation of human judgement in justice.
Russia has been pushing for its state-backed messenger Max to be pre-installed on all smartphones sold in the country from September 2025 onwards. Chinese and South Korean manufacturers, including Samsung and Xiaomi, are reportedly preparing to comply, though official confirmation is still pending.
The Max platform, developed by VK (formerly VKontakte), offers messaging, audio and video calls, file transfers, and payments. It is set to replace VK Messenger on the mandatory app list, signalling a shift away from foreign apps like Telegram and WhatsApp.
Integration may occur via software updates or prompts when inserting a Russian SIM card.
Concerns have arisen over potential surveillance, as Max, which is backed by the Russian government, collects sensitive personal data. Critics fear the platform may be used to monitor users, reflecting Moscow’s push to control encrypted communications.
The rollout reflects Russia’s broader push for digital sovereignty. While companies navigate compliance, the move highlights the increasing tension between state-backed applications and widely used foreign messaging services in Russia.
A ransomware group has destroyed data and backups in a Microsoft Azure environment after exfiltrating sensitive information, in what experts describe as a significant escalation in cloud-based attacks.
The threat actor, tracked as Storm-0501, gained complete control of a victim’s Azure environment by compromising privileged accounts.
Microsoft researchers said the group used native Azure tools to copy data before systematically deleting resources to block recovery efforts.
For the exfiltration, Storm-0501 used AzCopy, Microsoft’s command-line copy utility, to pull storage account contents out of the environment before erasing the cloud assets. Resources protected by immutability policies, which cannot be deleted, were encrypted instead.
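The report names AzCopy as the tool involved; the sketch below uses the azure-storage-blob Python SDK instead to show the same class of bulk storage-account access, purely as an illustration for defenders of the activity pattern worth alerting on. The account URL and credential are placeholders, and the snippet only lists blob metadata.

```python
# Illustration of the bulk storage-account access pattern described in the
# report (AzCopy performs the same class of operation). Account URL and
# credential are placeholders; run only against your own data. Large-scale
# enumeration like this from an unusual principal is a signal defenders can
# alert on.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://<account>.blob.core.windows.net",  # placeholder
    credential="<sas-token-or-account-key>",                # placeholder
)

# Walk every container and list each blob's name and size
for container in service.list_containers():
    container_client = service.get_container_client(container.name)
    for blob in container_client.list_blobs():
        print(container.name, blob.name, blob.size)
```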
The group later contacted the victim via Microsoft Teams using a compromised account to issue ransom demands.
Funding season is restarting in Europe, with investors expecting to add several new unicorns in the coming months. Despite fewer mega-rounds than in 2021, a dozen startups passed the $1 billion mark in the first half of 2025.
AI, biotech, defence technology, and renewable energy are among the sectors attracting major backing. Recent unicorns include Lovable, an AI coding firm from Sweden, UK-based Fuse Energy, and Isar Aerospace from Germany.
London-based Isomorphic Labs, spun out of DeepMind, raised $600 million to enter unicorn territory. In biotech, Verdiva Bio hit unicorn status after a $410 million Series A, while Neko Health reached a $1.8 billion valuation.
AI and automation continue to drive investor appetite. Dublin’s Tines secured a $125 million Series C at a $1.125 billion valuation, and German AI customer service startup Parloa raised $120 million at a $1 billion valuation.
Dual-use drone companies also stood out. Portugal-based Tekever confirmed its unicorn status with plans for a £400 million UK expansion, while Quantum Systems raised €160 million to scale its AI-driven drones globally.
Film-streaming platform Mubi and encryption startup Zama also joined the unicorn club, showing the breadth of sectors gaining traction. With Bristol, Manchester, Munich, and Stockholm among the hotspots, Europe’s tech ecosystem continues to diversify.
The Region of Gotland in Sweden was notified that Miljödata, a Swedish software provider used for managing sick leave and other HR-related records, had been hit by a cyberattack. Later that day, it was confirmed that sensitive personal data may have been leaked, although it remains unclear whether Region Gotland’s data was affected.
Miljödata, which provides systems handling medical certificates, rehabilitation plans, and work-related injuries, immediately isolated its systems and reported the incident to the police. Region Gotland is one of several regions affected. Investigations are ongoing, and the region is closely monitoring the situation while following standard data protection procedures, according to HR Director Lotta Israelsson.
Swedish Minister for Civil Defence, Carl-Oskar Bohlin, confirmed that the full scope and consequences of the cyberattack remain unclear. Around 200 of Sweden’s 290 municipalities and 21 regions were reportedly affected, many of which use Miljödata systems to manage employee data such as medical certificates and rehabilitation plans.
Miljödata is working with external experts to investigate the breach and restore services. The government is closely monitoring the situation, with CERT-SE and the National Cybersecurity Centre providing support. A police investigation is underway. Bohlin emphasised the need for stronger cybersecurity and announced a forthcoming bill to tighten national cyber regulations.
ESET, the Slovak software company specialising in cybersecurity, has described a GenAI-powered ransomware named PromptLock in its latest research report. The researchers call it the ‘first known AI-powered ransomware’. Although it has not been observed in an actual attack, it is considered a proof of concept (PoC) or a work in progress.
Researchers also found that this type of ransomware may have the ability to exfiltrate, encrypt, and possibly even destroy data.
They noted: ‘The PromptLock malware uses the gpt-oss-20b model from OpenAI locally via the Ollama API to generate malicious Lua scripts on the fly, which it then executes.’
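For readers unfamiliar with the mechanism in that quote, the sketch below shows what calling a locally hosted model through the Ollama API looks like: a POST to the default /api/generate endpoint. The model tag follows Ollama's naming convention ('gpt-oss:20b' for the model the report cites), and the prompt is deliberately benign; nothing here reproduces PromptLock itself.

```python
# What 'using a model locally via the Ollama API' looks like in practice:
# a POST to Ollama's default /api/generate endpoint. 'gpt-oss:20b' is the
# Ollama tag for the model the report names; the prompt is intentionally
# benign -- this illustrates the call pattern, not the malware.
import json
import urllib.request

payload = {
    "model": "gpt-oss:20b",
    "prompt": "Write a Lua function that sums a list of numbers.",
    "stream": False,  # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])  # the generated Lua source
```

Because the model runs entirely on the local machine, such a setup needs no outbound API calls, which is part of what makes the technique hard to spot at the network level.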
The report highlights how AI tools have made it easier to create convincing phishing messages and deepfakes, lowering the barrier for less-skilled attackers. As ransomware becomes more widespread, often deployed by advanced persistent threat (APT) groups, AI is expected to increase both the scale and effectiveness of such attacks.
PromptLock demonstrates how AI can automate key ransomware stages, such as reconnaissance and data theft, faster than ever. The emergence of malware capable of adapting its tactics in real time signals a new and more dangerous frontier in cybercrime.
In a separate report, Anthropic described three cases of misuse of its Claude models. One involved a cybercriminal group using Claude to automate data theft and extortion, targeting 17 organisations. Another detailed how North Korean actors used Claude to create fake identities, pass interviews, and secure remote IT jobs to fund the regime. A third case involved a criminal using Claude to create sophisticated ransomware variants with strong encryption and advanced evasion techniques. Most attempts were detected and disrupted before being carried out.
Anthropic has warned that its AI chatbot Claude is being misused to carry out large-scale cyberattacks, with ransom demands of up to $500,000 in Bitcoin. The attacks relied on so-called ‘vibe hacking’, in which low-skill individuals prompt the model to automate ransomware development and generate customised extortion notes.
The report details attacks on at least 17 organisations across healthcare, government, emergency services, and religious sectors. Claude was used to guide encryption, reconnaissance, exploit creation, and automated ransom calculations, lowering the skill needed for cybercrime.
North Korean IT workers misused Claude to forge identities, pass coding tests, and secure US tech roles, funnelling revenue to the regime despite sanctions. Analysts warn that generative AI is making ransomware attacks more scalable and affordable, with risks expected to rise through 2025.
Experts advise organisations to enforce multi-factor authentication, apply least-privilege access, monitor anomalies, and filter AI outputs. Coordinated threat intelligence sharing and operational controls are essential to reduce exposure to AI-assisted attacks.
The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring ways to connect users with therapists.
Executives said AI should support users rather than harm them. OpenAI has worked with doctors to train ChatGPT to avoid giving self-harm instructions and to redirect users to crisis hotlines. The company acknowledges that its safeguards can become less reliable over longer conversations, underscoring the need for stronger protections.
The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.
Quantum technology, rooted in quantum mechanics from the early 1900s, is rapidly advancing and may reshape the future of computing. Quantum computers can solve certain classes of problems far faster than classical systems, with Google’s Willow chip marking a key advance.
However, their potential also raises concerns for digital assets such as Bitcoin.
Bitcoin’s cryptographic security relies on the Elliptic Curve Digital Signature Algorithm (ECDSA), which is computationally infeasible to break with today’s computers. Yet a sufficiently large quantum computer running Shor’s algorithm could, in principle, recover private keys from public keys and compromise wallets.
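To make the asymmetry concrete, the sketch below uses the third-party ecdsa package to generate a throwaway keypair on secp256k1, the curve Bitcoin uses: deriving the public key from the private key is instant, while inverting that step is the elliptic-curve discrete-logarithm problem that Shor’s algorithm would solve.

```python
# The asymmetry at stake: deriving a secp256k1 public key from a private key
# is easy, while recovering the private key from the public key is believed
# classically infeasible -- that inversion (the elliptic-curve discrete log)
# is exactly what Shor's algorithm could perform on a large quantum computer.
# Uses the third-party `ecdsa` package (pip install ecdsa); keys are throwaway.
from ecdsa import SigningKey, SECP256k1

sk = SigningKey.generate(curve=SECP256k1)   # private key: a random scalar k
vk = sk.get_verifying_key()                 # public key: the curve point k * G

signature = sk.sign(b"toy transaction")
assert vk.verify(signature, b"toy transaction")  # anyone can verify...
print("public key:", vk.to_string().hex())       # ...but sees only k * G
```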
Experts caution that such risks remain distant, as current quantum hardware is still decades away from posing a real threat.
Beyond security risks, quantum computing could also revive millions of long-lost Bitcoins locked in early wallets. If those coins return to circulation, it could shake Bitcoin’s scarcity and market value.
Debate continues over whether these coins should be burned or redistributed to preserve Bitcoin’s economic integrity.
For now, Bitcoin remains safe. Developers are working on quantum-resistant tools such as QRAMP and new cryptographic schemes to strengthen the network. Users can improve their own safety by avoiding address reuse and adopting modern address formats such as SegWit and Taproot.
While quantum risks loom, the network’s adaptability and ongoing research suggest that Bitcoin is well placed to withstand future challenges.