Crypto crime report 2025 reveals record nation-state activity

Illicit crypto activity surged in 2025 as nation states and professional criminal networks expanded on-chain operations. Government-linked actors used infrastructure built for organised cybercrime, increasing risks for regulators and security teams.

Data shows that illicit crypto addresses received at least $154 billion during the year, representing a 162% increase compared to 2024. Sanctioned entities drove much of the growth, with stablecoins making up 84% of illicit transactions due to their liquidity and ease of cross-border transfer.
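The headline figures can be cross-checked with simple arithmetic. A quick sketch below derives the implied 2024 baseline from the stated 162% increase, and treats the 84% stablecoin share as a share of dollar volume, which is an assumption on our part (the report states it as a share of illicit transactions):

```python
# Sanity-check the report's headline figures.
illicit_2025 = 154e9   # at least $154 billion received by illicit addresses in 2025
increase = 1.62        # 162% increase over 2024

implied_2024 = illicit_2025 / (1 + increase)   # implied 2024 baseline
stablecoin_volume = 0.84 * illicit_2025        # assumption: 84% applied to volume

print(f"Implied 2024 baseline: ${implied_2024 / 1e9:.1f}B")
print(f"Stablecoin portion of 2025 volume: ${stablecoin_volume / 1e9:.1f}B")
```

On those numbers, the 2024 baseline works out to roughly $59 billion.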

North Korea remained the most aggressive state actor, with hackers stealing around $2 billion, including the record-breaking Bybit breach. Russia’s ruble-backed A7A5 token saw over $93 billion in sanction-evasion transactions, while Iran-linked networks continued using crypto for illicit trade and financing.

Chinese money laundering networks also emerged as a central force, offering full-service criminal infrastructure to fraud groups, hackers, and sanctioned entities. Links between crypto and physical crime grew, with trafficking and coercion increasingly tied to digital asset transfers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram bonds frozen amid ongoing international sanctions

Around $500 million in bonds issued by Telegram remain frozen within Russia’s financial settlement system following the application of international sanctions.

The situation reflects how global regulatory measures can continue to affect corporate assets even when companies operate across multiple jurisdictions.

According to reports, the frozen bonds were issued in 2021 and are held at Russia’s National Settlement Depository.

Telegram said its more recent $1.7 billion bond issuance in 2025 involved international investors, with no participation from Russian capital, and was purchased mainly by institutional funds based outside Russia.

Telegram stated that bond repayments follow established international procedures through intermediaries, meaning payment obligations are fulfilled regardless of whether individual bondholders face restrictions.

Financial results for 2025 also showed losses, linked in part to a decline in cryptocurrency valuations, which reflected broader market conditions rather than company-specific factors.

Rokid launches screenless AI smart glasses at CES 2026

AR pioneer Rokid unveiled its new Style smart glasses at CES 2026, opting for a screenless, voice-first design instead of the visual displays standard on competing devices.

Weighing just 38.5 grams, the glasses are designed for everyday wear, with an emphasis on comfort and prescription readiness.

Despite lacking a screen, Rokid Style integrates AI through an open ecosystem that supports platforms such as ChatGPT, DeepSeek and Qwen. Global services, including Google Maps and Microsoft AI Translation, facilitate navigation and provide real-time language assistance across various regions.

The device adopts a prescription-first approach, supporting lenses from plano to ±15.00 diopters alongside photochromic, tinted and protective options.

Rokid has also launched a global online prescription service, promising delivery within seven to ten days.

Design features include titanium alloy hinges, silicone nose pads and a built-in camera capable of 4K video recording.

Battery life reaches up to 12 hours of daily use, with global pricing starting at $299, ahead of an online launch scheduled for January 19.

Lynx ransomware group claims Regis subsidiary on dark web leak site

Regis Resources, one of Australia’s largest unhedged gold producers, has confirmed it is investigating a cyber incident after its subsidiary was named on a dark web leak site operated by a ransomware group.

The Lynx ransomware group listed McPhillamys Gold on Monday, claiming a cyberattack and publishing the names and roles of senior company executives. The group did not provide technical details or evidence of data theft.

The Australia-based company stated that the intrusion was detected in mid-November 2025 through its routine monitoring systems, prompting temporary restrictions on access to protect internal networks. The company said its cybersecurity controls were designed to isolate threats and maintain business continuity.

A forensic investigation found no evidence of data exfiltration and confirmed that no ransom demand had been received. Authorities were notified, and Regis said the incident had no operational or commercial impact.

Lynx, which first emerged in July 2024, has claimed hundreds of victims worldwide. The group says it avoids targeting critical public services, though it continues to pressure private companies through data leak threats.

AI predicts heart failure risk in cattle

Researchers at the University of Wyoming in the US have developed an AI model that predicts the risk of congestive heart failure in cattle using heart images. The technology focuses on structural changes linked to pulmonary hypertension.

Developed by PhD researcher Chase Markel, the computer vision system was trained on nearly 7,000 manually scored images. The model correctly classifies heart risk levels in 92 percent of cases.

The images were collected in commercial cattle processing plants, allowing assessment at scale after slaughter. The findings support the need for improved traceability throughout the production cycle.

Industry use could enhance traceability and mitigate economic losses resulting from undetected disease. Patent protection is being pursued as further models are developed for other cattle conditions.

AI receptionist begins work at UK GP surgery

A GP practice in North Lincolnshire, UK, has introduced an AI receptionist named Emma to reduce long wait times on calls. Emma collects patient details and prioritises appointments for doctors to review.

Doctors say the system has improved efficiency, with most patients contacted within hours. Dr Satpal Shekhawat explained that the information from Emma helps identify clinical priorities effectively.

Some patients reported issues, including mistakes with dates of birth and difficulties explaining health problems. The practice reassured patients that human receptionists remain available and that the AI supports staff rather than replacing them.

The technology has drawn attention from other practices in the region. NHS officials are monitoring feedback to refine the system and improve patient experience.

Grok incident renews scrutiny of generative AI safety

Elon Musk’s Grok chatbot has triggered international backlash after generating sexualised images of women and girls in response to user prompts on X, raising renewed concerns over AI safeguards and platform accountability.

The images, some depicting minors in minimal clothing, circulated publicly before being removed. Grok later acknowledged failures in its own safeguards, stating that child sexual abuse material is illegal and prohibited, while xAI initially offered no public explanation.

European officials reacted swiftly. French ministers referred the matter to prosecutors, calling the output illegal, while campaigners in the UK argued the incident exposed delays in enforcing laws against AI-generated intimate images.

In contrast, US lawmakers largely stayed silent despite xAI holding a major defence contract. Musk did not directly address the controversy, instead posting unrelated content as criticism mounted on the platform.

The episode has intensified debate over whether current AI governance frameworks are sufficient to prevent harm, particularly when generative systems operate at scale with limited real-time oversight.

ChatGPT reaches 40 million daily users for health advice

More than 40 million people worldwide now use ChatGPT daily for health-related advice, according to OpenAI.

Over 5 percent of all messages sent to the chatbot relate to healthcare, with three in five US adults reporting use in the past three months. Many interactions occur outside clinic hours, highlighting the demand for AI guidance in navigating complex medical systems.

Users primarily turn to AI to check symptoms, understand medical terms, and explore treatment options.

OpenAI emphasises that ChatGPT helps patients gain agency over their health, particularly in rural areas where hospitals and specialised services are scarce.

The technology also supports healthcare professionals by reducing administrative burdens and providing timely information.

Despite growing adoption, regulatory oversight remains limited. Some US states have attempted to regulate AI in healthcare, and lawsuits have emerged over cases where AI-generated advice has caused harm.

OpenAI argues that ChatGPT supplements rather than replaces medical services, helping patients interpret information, prepare for care, and navigate gaps in access.

Healthcare workers are also increasingly using AI. Surveys show that two in five US professionals, including nurses and pharmacists, use generative AI weekly to draft notes, summarise research, and streamline workflows.

OpenAI plans to release healthcare policy recommendations to guide the responsible adoption of AI in clinical settings.

EU pushes for open-source commercialisation to reduce tech dependence

The European Commission is preparing a strategy to commercialise European open-source software in an effort to strengthen digital sovereignty and reduce dependence on foreign technology providers.

The plan follows a consultation highlighting that EU funding has delivered innovation, although commercial scale has often emerged outside Europe instead of within it.

Open-source software plays a strategic role by decentralising development and limiting reliance on dominant technology firms.

Commission officials argue that research funding alone cannot deliver competitive alternatives, particularly when public and private contracts continue to favour proprietary systems operated by non-European companies.

An upcoming strategy, due alongside the Cloud and AI Development Act in early 2026, will prioritise community upscaling, industrial deployment and market integration.

Governance reforms and stronger supply chain security are expected to address vulnerabilities that can affect widely used open-source components.

Financial sustainability will also feature prominently, with public sector partnerships encouraged to support long-term viability.

Brussels hopes wider public adoption of open-source tools will replace expensive or data-extractive proprietary software, reinforcing Europe’s technological autonomy.

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA that safeguards proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Rather than relying solely on traditional encryption or watermarking, the approach preserves full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
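The researchers' actual selection algorithm is not described here, but the basic idea can be illustrated with a toy sketch: rank entities in a knowledge graph by connectivity, then rewrite relations on the most-connected ones with plausible-but-false alternatives. All entity names, the degree heuristic, and the `inject_adulterants` helper below are our own illustrative assumptions, not AURA's method:

```python
import random

def inject_adulterants(triples, fraction=0.25, seed=7):
    """Corrupt a copy of a knowledge graph (a list of (head, relation, tail)
    triples) by rewriting relations on its most-connected entities.
    A toy stand-in for AURA's targeted 'adulterant' injection."""
    rng = random.Random(seed)
    # Count each entity's degree: a crude 'critical node' heuristic.
    degree = {}
    for h, _, t in triples:
        degree[h] = degree.get(h, 0) + 1
        degree[t] = degree.get(t, 0) + 1
    ranked = sorted(degree, key=degree.get, reverse=True)
    targets = set(ranked[: max(1, int(len(ranked) * fraction))])
    relations = {r for _, r, _ in triples}
    corrupted = []
    for h, r, t in triples:
        if h in targets or t in targets:
            # Swap the true relation for a realistic but false one.
            r = rng.choice(sorted(relations - {r}))
        corrupted.append((h, r, t))
    return corrupted

# Toy pharma-style knowledge graph (all names hypothetical).
kg = [
    ("drug_A", "inhibits", "protein_X"),
    ("drug_A", "modulates", "pathway_P"),
    ("drug_B", "activates", "protein_Y"),
]
stolen = inject_adulterants(kg)
changed = sum(a != b for a, b in zip(kg, stolen))
print(f"{changed} of {len(kg)} triples corrupted in the stolen copy")
```

In this sketch only triples touching the highest-degree entity are rewritten, which mirrors the stated goal of minimising changes while maximising disruption for anyone querying the stolen copy.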

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.
