A hacker exploited Anthropic’s Claude chatbot to automate one of the most extensive AI-driven cybercrime operations yet recorded, targeting at least 17 companies across multiple sectors, the firm revealed.
According to Anthropic’s report, the attacker used Claude Code to identify vulnerable organisations, generate malicious software, and extract sensitive files, including defence data, financial records, and patients’ medical information.
The chatbot then sorted the stolen material, identified leverage for extortion, calculated realistic bitcoin demands, and even drafted ransom notes and extortion emails on behalf of the hacker.
Victims included a defence contractor, a financial institution, and healthcare providers. Extortion demands reportedly ranged from $75,000 to over $500,000, although it remains unclear how much was actually paid.
Anthropic declined to disclose the companies affected but confirmed new safeguards are in place. The firm warned that AI lowers the barrier to entry for sophisticated cybercrime, making such misuse increasingly likely.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
People often treat their email address as harmless, just a digital ID for receipts and updates. In reality, it acts as a skeleton key linking behaviour, purchases, and personal data across platforms.
Using the same email everywhere makes tracking easy. Companies may encrypt addresses, but behavioural patterns remain intact. Aliases disrupt this chain by creating unique addresses that forward mail without revealing your true identity.
Each alias also doubles as a breach detector. If one is compromised or starts receiving spam, you know exactly which service leaked it, and the alias can simply be disabled, cutting off the problem at its source.
Aliases also reduce the fallout of data breaches. Instead of exposing your main email to countless third-party tools, scripts, and mailing platforms, an alias shields your core digital identity.
Beyond privacy, aliases encourage healthier habits. They force a pause before signing up, add structure through custom rules, and help fragment your identity, thereby lowering the risks associated with any single breach.
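As a minimal illustration of the idea, per-service aliases can be derived with simple plus-addressing, which many providers support by delivering `user+label@example.com` to `user@example.com`. The mailbox and service names below are hypothetical examples, not a recommendation for any particular provider:

```python
# Minimal sketch: derive a unique, per-service alias via plus-addressing.
# Many providers deliver user+label@example.com to user@example.com.
# The mailbox and service names here are hypothetical examples.

def make_alias(mailbox: str, domain: str, service: str) -> str:
    """Return a per-service alias such as user+webshop@example.com."""
    # Normalise the service name into a short alphanumeric label.
    label = "".join(ch for ch in service.lower() if ch.isalnum())
    return f"{mailbox}+{label}@{domain}"

aliases = {s: make_alias("user", "example.com", s) for s in ["News Site", "Web Shop"]}
print(aliases)  # {'News Site': 'user+newssite@example.com', 'Web Shop': 'user+webshop@example.com'}
```

Note that plus-addressing is easy for trackers to strip; dedicated alias services that forward from unrelated addresses give stronger unlinkability, at the cost of depending on the forwarding provider.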
China has set its most ambitious AI adoption targets yet, aiming to embed the technology across industries, governance, and daily life within the next decade.
According to a new State Council directive, AI use should reach 70% of the population by 2027 and 90% by 2030, with a complete shift to what it calls an ‘intelligent society’ by 2035.
The plan would mean nearly one billion Chinese citizens regularly using AI-powered services or devices within two years, a timeline compared to the rapid rise of smartphones.
Although officials acknowledge risks such as opaque models, hallucinations and algorithmic discrimination, the policy calls for frameworks to govern ‘natural persons, digital persons, and intelligent robots’.
US AI startup Anthropic has announced an update to its data policy for Claude users, introducing an option to allow conversations and coding sessions to be used for training future AI models.
Anthropic stated that all Claude Free, Pro, and Max users, including those using Claude Code, will be asked to make a decision by September 28, 2025.
According to Anthropic, users who opt in will permit retention of their conversations for up to five years, with the data contributing to improvements in areas such as reasoning, coding, and analysis.
Those who choose not to participate will continue under the current policy, where conversations are deleted within thirty days unless flagged for legal or policy reasons.
The new policy does not extend to enterprise products, including Claude for Work, Claude Gov, Claude for Education, or API access through partners like Amazon Bedrock and Google Cloud Vertex AI. These remain governed by separate contractual agreements.
Anthropic noted that the choice will also apply to new users during sign-up, while existing users will be prompted through notifications to review their privacy settings.
A phishing campaign exploits Microsoft Teams’ external communication features, with attackers posing as IT helpdesk staff to gain access to screen sharing and remote control. The method sidesteps traditional email security controls by using Teams’ default settings.
The attacks exploit Microsoft 365’s default external collaboration feature, which allows unauthenticated users to contact organisations. Axon Team reports that attackers create malicious Entra ID tenants with .onmicrosoft.com domains, or use compromised accounts, to initiate chats.
Although Microsoft issues warnings for suspicious messages, attackers bypass these by initiating external voice calls, which generate no alerts. Once trust is established, they request screen sharing, enabling them to monitor victims’ activity and guide them toward malicious actions.
The highest risk arises where organisations enable external remote-control options, giving attackers potential full access to workstations directly through Teams. This removes any need for attackers to deploy traditional remote tools such as QuickAssist or AnyDesk, creating a severe security exposure.
Defenders are advised to monitor Microsoft 365 audit logs for markers such as ChatCreated, MessageSent, and UserAccepted events, as well as TeamsImpersonationDetected alerts. Restricting external communication and strengthening user awareness remain key to mitigating this threat.
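As a rough sketch of what such monitoring could look like, the snippet below filters exported audit records for the operation names mentioned above. The record layout and field names ("Operation", "UserId") mirror common audit-log exports but are assumptions here, not Microsoft's official schema:

```python
# Hypothetical sketch: flag exported Microsoft 365 audit records whose
# Operation matches markers associated with external Teams chat abuse.
# Field names ("Operation", "UserId") are assumed, not the official schema.
SUSPICIOUS_OPS = {
    "ChatCreated",
    "MessageSent",
    "UserAccepted",
    "TeamsImpersonationDetected",
}

def flag_records(records):
    """Return the subset of records whose Operation is a suspicious marker."""
    return [r for r in records if r.get("Operation") in SUSPICIOUS_OPS]

sample = [
    {"Operation": "ChatCreated", "UserId": "helpdesk@evil.onmicrosoft.com"},
    {"Operation": "FileAccessed", "UserId": "alice@corp.example"},
]
print(flag_records(sample))
```

In practice such filtering would run against logs pulled via the audit-log search tooling, combined with allow-lists of known external tenants rather than a blanket match.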
The European Commission has signed a contribution agreement with the European Union Agency for Cybersecurity (ENISA), assigning the agency responsibility for operating and administering the EU Cybersecurity Reserve.
The arrangement includes a €36 million allocation over three years, complementing ENISA’s existing budget.
The EU Cybersecurity Reserve, established under the EU Cyber Solidarity Act, will provide incident response services through trusted managed security providers.
The services are designed to support EU Member States, institutions, and critical sectors in responding to large-scale cybersecurity incidents, with access also available to third countries associated with the Digital Europe Programme.
ENISA will oversee the procurement of these services and assess requests from national authorities and EU bodies, while also working with the Commission and EU-CyCLONe to coordinate crisis response.
If not activated for incident response, the pre-committed services may be redirected towards prevention and preparedness measures.
The reserve is expected to become fully operational by the end of 2025, aligning with the planned conclusion of ENISA’s existing Cybersecurity Support Action in 2026.
ENISA is also preparing a candidate certification scheme for Managed Security Services, with a focus on incident response, in line with the Cyber Solidarity Act.
China has begun construction on its first facility dedicated to the production of photonic quantum computers in Shenzhen, Guangdong Province. The project marks a step toward the development of large-scale quantum computing capabilities in the country.
The factory, led by Beijing-based quantum computing company QBoson, is expected to manufacture several dozen photonic quantum computers each year once operations begin.
QBoson’s founder, Wen Kai, explained that photonic quantum computing uses the quantum properties of light and is viewed as a promising path in the field.
Compared with other approaches, it does not require extremely low temperatures to function and offers advantages such as stable operation at room temperature, a higher number of qubits, and longer coherence times.
The upcoming facility will be divided into three core areas: module development, full-system production, and quality testing. Construction is already underway, and equipment installation is scheduled to begin by the end of October.
China’s Salt Typhoon cyberspies have stolen data from millions of Americans through a years-long intrusion into telecommunications networks, according to senior FBI officials. The campaign represents one of the most significant espionage breaches uncovered in the United States.
The Beijing-backed operation began in 2019 and remained hidden until last year. Authorities say the breach extended far beyond the nine American telcos initially identified: at least 80 countries were affected, and around 200 US organisations were compromised.
Targets included Verizon, AT&T, and over 100 current and former administration officials. Officials say the intrusions enabled Chinese operatives to geolocate mobile users, monitor internet traffic, and sometimes record phone calls.
Three Chinese firms, Sichuan Juxinhe, Beijing Huanyu Tianqiong, and Sichuan Zhixin Ruijie, have been tied to Salt Typhoon. US officials say they support China’s security services and military.
The FBI warns that the scale of indiscriminate targeting falls outside traditional espionage norms. Officials stress the need for stronger cybersecurity measures as China, Russia, Iran, and North Korea continue to advance their cyber operations against critical infrastructure and private networks.
AI company Anthropic has reported that its chatbot Claude was misused in cyber incidents, including attempts to carry out hacking operations and employment-related fraud.
The firm said its technology had been used to help write malicious code and assist threat actors in planning attacks, but added that it was able to disrupt the activity and notify authorities. Anthropic said it is continuing to improve its monitoring and detection systems.
In one case, the company reported that AI-supported attacks targeted at least 17 organisations, including government entities. The attackers allegedly relied on the tool to support decision-making, from choosing which data to target to drafting ransom demands.
Experts note that the rise of so-called agentic AI, which can operate with greater autonomy, has increased concerns about potential misuse.
Anthropic also identified attempts to use AI models to support fraudulent applications for remote jobs at major companies. The AI was reportedly used to create convincing profiles, generate applications, and assist in work-related tasks once jobs had been secured.
Analysts suggest that AI can strengthen such schemes, but most cyber incidents still involve long-established techniques like phishing and exploiting software vulnerabilities.
Cybersecurity specialists emphasise the importance of proactive defence as AI tools evolve. They caution that organisations should treat AI platforms as sensitive systems requiring strong safeguards to prevent their exploitation.
Google has launched Pixel Care+, a new device protection programme that replaces Preferred Care and Fi Device Protection in the US. Existing subscribers will be transitioned to the new plan over the coming months.
The programme offers unlimited accidental damage claims, extended warranty coverage, and $0 repairs for screen, battery, and malfunction issues. It also guarantees genuine Google parts, priority support, and optional theft and loss protection.
Subscribers benefit from free upgraded shipping on replacements, including next-day delivery. Pricing varies by device, with Pixel Care+ for the Pixel 10 costing $10 per month or $199 for two years.
Pixel Care+ is available for Pixel 8 and newer devices, as well as Pixel Watch 2, Pixel Tablet, and Fitbit models, including Ace LTE, Versa 4, Sense 2, Charge 6, and Inspire 3. Users must enrol within 60 days of purchase.