CrowdStrike warns of faster AI-driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as the “year of the evasive adversary”, marked by subtle and rapid intrusions.

The average breakout time for financially motivated online crime fell to 29 minutes, with the fastest recorded at just 27 seconds. CrowdStrike also observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China-linked activity rose by 38 percent across sectors, while North Korea-linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Study warns AI chatbots can reinforce delusions and mania

AI chatbots may pose serious risks for people with severe mental illnesses, according to a new study published in Acta Psychiatrica Scandinavica. Researchers found that tools such as ChatGPT can worsen psychiatric conditions by reinforcing users’ delusions, paranoia, mania, suicidal thoughts, and eating disorders.

The team examined health records from more than 54,000 patients and identified dozens of cases where AI interactions appeared to exacerbate symptoms. Experts warn that the actual number of affected individuals is likely far higher.

Because chatbots are designed to follow and validate a user’s input, they can unintentionally strengthen delusional thinking, turning digital assistants into echo chambers for psychosis.

Despite potential benefits for psychoeducation or alleviating loneliness, experts caution against using AI as a substitute for trained therapists. Chatbots should be tested in rigorous clinical trials before any therapeutic use, says Professor Søren Dinesen Østergaard.

The researchers urge healthcare providers to discuss AI chatbot use with patients, particularly those with severe mental illnesses, and call for central regulation of the technology. They argue that lessons from social media show that early oversight is essential to protect vulnerable populations.

Commission delays high-risk AI guidance

The European Commission has confirmed it will again delay publishing guidance on high-risk AI systems under the EU AI Act. The guidelines were due by 2 February 2026, but will now follow a revised timeline.

According to Euractiv, the document is intended to clarify which AI systems fall into the high-risk category and therefore face stricter obligations. Officials said more time is needed to incorporate significant stakeholder feedback.

The delay marks the second missed deadline and adds to broader implementation setbacks surrounding the EU AI Act. Several member states have yet to designate national enforcement bodies, complicating oversight preparations.

Brussels is also considering postponing the application of high-risk rules through a digital simplification package. Parliament and Council appear supportive of moving the August deadline back by more than a year, easing pressure on companies awaiting guidance.

OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its terms of service. Developers had used OpenClaw’s OAuth plugin to access subsidised Gemini model tokens, triggering backend strain and service degradation.

OpenClaw, launched in November 2025, gained more than 219,000 GitHub stars by enabling local AI agents for tasks such as email management and web browsing. Users authenticated through Antigravity to access advanced Gemini models at reduced cost, bypassing official distribution channels.

Google said the third-party integration powered unauthorised products on Antigravity infrastructure, leading it to flag the usage as malicious. In February 2026, AI Ultra subscribers reported 403 errors and account restrictions, with some citing temporary disruptions to Gmail and Workspace.

Varun Mohan of Google DeepMind said the surge had degraded service quality and that enforcement prioritised legitimate users. Limited reinstatement options were offered to users who were unaware of the violations, with capacity constraints cited as the reason for the restrictions.

The move follows similar restrictions by Anthropic on third-party OAuth usage. Developers are shifting to alternative forks, as debate intensifies over open tooling, platform control, and the risks of agentic AI ecosystems.

AI drives faster modernisation of legacy COBOL systems

Critical to finance, airlines, and government, COBOL handles about 95% of US ATM transactions. Despite its ubiquity, the pool of developers able to read and maintain COBOL is shrinking as seasoned engineers retire and universities offer limited instruction.

Institutional knowledge is now embedded in decades-old code, and documentation often lags.

Modernising COBOL differs from typical software updates. It requires untangling intricate dependencies and reverse-engineering business logic that has evolved over decades.

Traditional modernisation efforts relied on large teams of consultants working for years, resulting in high costs and lengthy timelines. AI tools are changing that paradigm by automating the most labour-intensive tasks.

AI-driven solutions like Claude Code map code dependencies, trace execution paths, document workflows, and identify risks. They provide teams with actionable insights for prioritisation, risk management, and refactoring, dramatically shortening modernisation timelines from years to months.
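The dependency-mapping step can be pictured with a toy static scan. The sketch below is a hypothetical, heavily simplified illustration (real tooling such as Claude Code works very differently and understands the full grammar and data flow): it builds a rough call graph from PERFORM and CALL statements in a COBOL fragment, with all paragraph names invented for the example.

```python
import re
from collections import defaultdict

def extract_calls(cobol_source: str) -> dict:
    """Build a rough call graph from COBOL source by scanning
    PERFORM and CALL statements. A toy illustration only."""
    graph = defaultdict(set)
    current = None
    for line in cobol_source.splitlines():
        line = line.strip()
        # A bare paragraph label ending in a period starts a new node.
        label = re.match(r"^([A-Z0-9-]+)\.\s*$", line)
        if label:
            current = label.group(1)
            continue
        if current is None:
            continue
        for target in re.findall(r"\bPERFORM\s+([A-Z0-9-]+)", line):
            graph[current].add(target)
        for target in re.findall(r"\bCALL\s+'([A-Z0-9-]+)'", line):
            graph[current].add(target)
    return {node: sorted(targets) for node, targets in graph.items()}

sample = """\
MAIN-LOGIC.
    PERFORM READ-ACCOUNT
    PERFORM POST-INTEREST
    CALL 'AUDITLOG' USING WS-RECORD.
READ-ACCOUNT.
    PERFORM VALIDATE-ID.
"""

print(extract_calls(sample))
```

Even this naive scan shows why the problem is hard: real call graphs run to thousands of paragraphs, with GO TO jumps and copybook expansions that a regex cannot follow.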

Human experts remain essential to reviewing AI recommendations, ensuring regulatory compliance, and making strategic decisions about which components to modernise first.

Implementation follows an incremental approach. AI translates COBOL logic into modern languages, creates integration scaffolding, and supports side-by-side operation with legacy components.

Continuous validation at each step reduces risk, allowing teams to build confidence as complex parts of the system are modernised. AI automation combined with expert oversight makes large-scale COBOL modernisation feasible.
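The side-by-side validation idea can be sketched as a replay harness: recorded inputs are run through both the legacy rule and its candidate port, and any divergence is collected for review. The function names, business rule, and figures below are all invented for illustration.

```python
from decimal import Decimal

def legacy_interest(balance: str, rate: str) -> str:
    """Stand-in for output captured from the legacy COBOL batch job."""
    return str((Decimal(balance) * Decimal(rate)).quantize(Decimal("0.01")))

def modern_interest(balance: str, rate: str) -> str:
    """Hypothetical Python port of the same business rule."""
    amount = (Decimal(balance) * Decimal(rate)).quantize(Decimal("0.01"))
    return str(amount)

def validate(cases):
    """Replay recorded inputs through both paths and collect mismatches."""
    mismatches = []
    for balance, rate in cases:
        old = legacy_interest(balance, rate)
        new = modern_interest(balance, rate)
        if old != new:
            mismatches.append((balance, rate, old, new))
    return mismatches

recorded_cases = [("1000.00", "0.035"), ("250.50", "0.0125")]
print(validate(recorded_cases))  # an empty list means the port agrees on every case
```

Using Decimal rather than floats mirrors COBOL’s fixed-point arithmetic; a naive floating-point port is a classic source of penny-level divergence that exactly this kind of harness is meant to catch.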

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions of people and fail to protect individuals’ privacy as required under frameworks such as the General Data Protection Regulation.

OCC approval moves Crypto.com closer to US trust bank

Crypto.com has secured conditional approval from the Office of the Comptroller of the Currency to move ahead with plans to launch a federally regulated national trust bank in the United States.

Approval marks a notable step in the firm’s regulatory roadmap. It also signals continued alignment with US supervisory expectations as the digital asset sector seeks deeper integration with traditional financial infrastructure.

Plans focus on establishing Foris Dax National Trust Bank. The entity is designed to provide a consolidated suite of services, including digital asset custody, staking across multiple blockchain ecosystems such as Cronos, and trade settlement.

Full approval would place the entity under direct federal oversight, positioning it to serve institutional clients that require qualified custodians operating within a clear regulatory perimeter.

Leadership described the decision as recognition of its compliance and risk management framework. Executives said the structure would offer institutions a single regulated gateway to digital asset infrastructure and strengthen market confidence.

Existing operations at Crypto.com Custody Trust Company in New Hampshire will continue without interruption. Final authorisation will determine the timeline for launching the national trust bank and expanding federally supervised US services.

Anthropic uncovers large-scale AI model theft operations

Three AI laboratories have been found conducting large-scale illicit campaigns to extract capabilities from Anthropic’s Claude AI, the company revealed.

DeepSeek, Moonshot, and MiniMax used around 24,000 fraudulent accounts to generate more than 16 million interactions, violating terms of service and regional access restrictions. The technique, called distillation, trains a weaker model on outputs from a stronger one, speeding AI development.
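The distillation objective itself is standard and widely published: a student model is trained to match the teacher’s softened output distribution. The minimal, generic sketch below illustrates only that textbook loss, with invented logits; it is not the specific method used against Claude.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature softens the distribution."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs,
    the core knowledge-distillation objective (illustrative values only)."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(p * math.log(q) for p, q in zip(teacher_probs, student_probs))

teacher = [3.2, 1.1, 0.3]          # hypothetical teacher logits for one prompt
matched = distill_loss(teacher, [3.2, 1.1, 0.3])
mismatched = distill_loss(teacher, [0.3, 1.1, 3.2])
print(matched < mismatched)        # a student that mimics the teacher scores lower
```

At scale, minimising this loss over millions of harvested teacher outputs is what lets a weaker model absorb a stronger one’s behaviour, which is why the volume of interactions matters more than any single query.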

Distilled models obtained in this manner often lack critical safeguards, creating serious national security concerns. Without protections, these capabilities could be integrated into military, intelligence, surveillance, or cyber operations, potentially by authoritarian governments.

The attacks also undermine export controls designed to preserve the competitive edge of US AI technology and could give a misleading impression of foreign labs’ independent AI progress.

Each lab followed coordinated playbooks using proxy networks and large-scale automated prompts to target specific capabilities such as agentic reasoning, coding, and tool use.

Anthropic attributed the campaigns to the three labs using request metadata, infrastructure indicators, and corroborating observations from industry partners. The investigation detailed how distillation attacks operate, from data generation to model launch.

In response, Anthropic has strengthened detection systems, implemented stricter access controls, shared intelligence with other labs and authorities, and introduced countermeasures to reduce the effectiveness of illicit distillation.

The company emphasises that addressing these attacks will require coordinated action across the AI industry, cloud providers, and policymakers to protect frontier AI capabilities.

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate Professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight, rather than rapid deployment without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second eye in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

AWS warns of AI-powered cybercrime

Amazon Web Services has revealed that a Russian-speaking threat actor used commercial AI tools to compromise more than 600 FortiGate firewalls across 55 countries. AWS described the campaign as an AI-powered assembly line for cybercrime.

According to AWS, the attacker relied on exposed management ports and weak single-factor credentials rather than exploiting software vulnerabilities. The campaign targeted FortiGate devices globally and focused on harvesting credentials and configuration data.

AWS said the Russian-speaking group appeared unsophisticated but achieved scale through AI-assisted mass scanning and automation. When it encountered stronger defences, the group reportedly shifted to easier targets rather than persisting.

The company advised organisations using FortiGate appliances to secure management interfaces, change default credentials and enforce complex passwords. Amazon said it was not compromised during the campaign.
