EU instructs X to keep all Grok chatbot records

The European Commission has ordered X to retain all internal documents and data on its AI chatbot Grok until the end of 2026. The order was issued under the Digital Services Act and follows concerns that Grok’s ‘spicy’ mode enabled sexualised deepfakes of minors.

The move continues EU oversight, recalling a January 2025 order to preserve X’s recommender system documents amid claims it amplified far-right content during German elections. EU regulators emphasised that platforms must manage the content generated by their AI responsibly.

Earlier this week, X submitted responses to the Commission regarding Grok’s outputs following concerns over Holocaust denial content. While the deepfake scandal has prompted calls for further action, the Commission has not launched a formal investigation into Grok.

Regulators reiterated that it remains X’s responsibility to ensure the chatbot’s outputs meet European standards, and retention of all internal records is crucial for ongoing monitoring and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Why data centres are becoming a flashpoint in US towns

As AI and cloud computing drive unprecedented demand for digital infrastructure, Big Tech’s rapid expansion of data centres is increasingly colliding with resistance at the local level. Across the United States, communities are pushing back against large-scale facilities they say threaten their quality of life, environment, and local character.

Data centres, massive complexes packed with servers and supported by vast energy and water resources, are multiplying quickly as companies race to secure computing power and proximity to electricity grids. But as developers look beyond traditional tech hubs and into suburbs, small towns, and rural areas, they are finding residents far less welcoming than anticipated.

What were once quiet municipal board meetings are now drawing standing-room-only crowds. Residents argue that data centres bring few local jobs while consuming enormous amounts of electricity and water, generating constant noise, and relying on diesel generators that can affect air quality. In farming communities, the loss of open land and agricultural space has become a significant concern, as homeowners worry about declining property values and potential health risks.

Opposition efforts are becoming more organised and widespread. Community groups increasingly share tactics online, learning from similar struggles in other states. Yard signs, door-to-door campaigns, and legal challenges have become common tools for advocacy. According to industry observers, the level of resistance is unprecedented for infrastructure projects of this kind.

Tracking groups report that dozens of proposed data centre projects worth tens of billions of dollars have recently been delayed or blocked due to local opposition and regulatory hurdles. In some US states, more than half of proposed developments are now encountering significant pushback, forcing developers to reconsider timelines, locations, or even entire projects.

Electricity costs are a major concern, fuelling public anger. In regions already experiencing rising utility bills, residents fear that large data centres will further strain power grids and push prices even higher.

Water use is another flashpoint, particularly in areas that rely on wells and aquifers. Environmental advocates warn that long-term impacts are still poorly understood, leaving communities to shoulder the risks.

The growing resistance is having tangible consequences for the industry. Developers say uncertainty around zoning approvals and public support is reshaping investment strategies. Some companies are choosing to sell sites once they secure access to power, often the most valuable part of a project, rather than risk prolonged local battles that could ultimately derail construction.

Major technology firms, including Microsoft, Google, Amazon, and Meta, have largely avoided public comment on the mounting opposition. However, Microsoft has acknowledged in regulatory filings that community resistance and local moratoriums now represent a material risk to its infrastructure plans.

Industry representatives argue that misinformation has contributed to public fears, claiming that modern data centres are far cleaner and more efficient than critics suggest. In response, trade groups are urging developers to engage with communities earlier, be more transparent, and highlight the economic benefits, such as tax revenue and infrastructure investment. Promises of water conservation, energy efficiency, and community funding have become central to outreach efforts.

In some communities, frustration has been amplified by revelations that plans were discussed quietly among government agencies and utilities long before residents were informed. Once disclosed, these projects have sparked accusations of secrecy, accelerating public distrust and mobilisation.

Despite concessions and promises of further dialogue, many opponents say their fight is far from over. As demand for data centres continues to grow, the clash between global technology ambitions and local community concerns is shaping up to be one of the defining infrastructure battles of the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI sovereignty test in South Korea reaches a critical phase

South Korea’s flagship AI foundation model project has entered a decisive phase after accusations that leading participants relied on foreign open-source components instead of building their systems entirely independently.

The controversy has reignited debate over how ‘from scratch’ development should be defined within government-backed AI initiatives aimed at strengthening national sovereignty.

Scrutiny has focused on Naver Cloud after developers identified near-identical similarities between its vision encoder and models released by Alibaba, alongside disclosures that audio components drew on OpenAI technology.
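
Such similarities are typically surfaced by comparing the released checkpoints directly. As a rough illustration of that kind of check, the sketch below measures cosine similarity between parameters that share a name and shape across two state dicts; the file paths, the plain-state-dict assumption, and the 0.99 threshold are illustrative choices, not details of the reviewers’ actual analysis.

```python
# Hypothetical sketch: comparing two vision-encoder checkpoints for parameter
# overlap. Paths, the plain-state-dict assumption, and the 0.99 threshold are
# illustrative, not details from the models in question.
import torch
import torch.nn.functional as F

def weight_similarity(path_a: str, path_b: str, threshold: float = 0.99) -> None:
    """Count parameters sharing a name and shape whose weights are near-identical."""
    state_a = torch.load(path_a, map_location="cpu")  # assumes a plain state dict of tensors
    state_b = torch.load(path_b, map_location="cpu")

    shared = [k for k in state_a if k in state_b and state_a[k].shape == state_b[k].shape]
    near_identical = sum(
        F.cosine_similarity(
            state_a[k].float().flatten(), state_b[k].float().flatten(), dim=0
        ).item() >= threshold
        for k in shared
    )
    print(f"{near_identical}/{len(shared)} shared tensors exceed cosine similarity {threshold}")

# weight_similarity("encoder_a.pt", "encoder_b.pt")  # placeholder paths
```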

The dispute now sits with the Ministry of Science and ICT, which must determine whether independence applies only to a model’s core or extends to all major components.

The outcome is expected to shape South Korea’s AI strategy, balancing the push for deeper self-reliance against the realities of global open-source ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netomi shows how to scale enterprise AI safely

Netomi has developed a blueprint for scaling enterprise AI, utilising GPT-4.1 for rapid tool use and GPT-5.2 for multi-step reasoning. The platform supports complex workflows, policy compliance, and heavy operational loads, serving clients such as United Airlines and DraftKings.

The company emphasises three core lessons. First, systems must handle real-world complexity, orchestrating multiple APIs, databases, and tools to maintain state and situational awareness across multi-step workflows.

Second, parallelised architectures ensure low latency even under extreme demand, keeping response times fast and reliable during spikes in activity.

Third, governance is embedded directly into the runtime, enforcing compliance, protecting sensitive data, and providing deterministic fallbacks when AI confidence is low.
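
That third lesson can be pictured with a minimal sketch of a runtime guardrail: answers below a policy threshold never reach the user and are replaced by a fixed, auditable fallback. The confidence field, the threshold, and the keyword-based redaction rule below are assumptions for illustration, not Netomi’s implementation.

```python
# Minimal sketch of a runtime guardrail with a deterministic fallback.
# The confidence field, the 0.8 threshold, and the keyword redaction rule are
# assumptions for illustration, not Netomi's actual runtime.
from dataclasses import dataclass

@dataclass
class AgentResult:
    answer: str
    confidence: float  # 0.0-1.0, reported by the reasoning step

POLICY_THRESHOLD = 0.8
FALLBACK = "I'm connecting you with a human agent to make sure this is handled correctly."

def redact(text: str) -> str:
    """Toy compliance step: suppress anything that looks like sensitive data."""
    flagged = ["ssn", "card number"]
    return FALLBACK if any(term in text.lower() for term in flagged) else text

def respond(result: AgentResult) -> str:
    # Governance lives in the runtime: low-confidence answers never reach the user.
    if result.confidence < POLICY_THRESHOLD:
        return FALLBACK
    return redact(result.answer)

print(respond(AgentResult("Your refund was issued on 12 March.", 0.93)))
print(respond(AgentResult("Possibly, but I am not certain.", 0.41)))
```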

Netomi demonstrates how agentic AI can be safely scaled, providing enterprises with a model for auditable, predictable, and resilient intelligent systems. These practices serve as a roadmap for organisations seeking to move AI from experimental tools to production-ready infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto crime report 2025 reveals record nation-state activity

Illicit crypto activity surged in 2025 as nation states and professional criminal networks expanded on-chain operations. Government-linked actors used infrastructure built for organised cybercrime, increasing risks for regulators and security teams.

Data shows that illicit crypto addresses received at least $154 billion during the year, representing a 162% increase compared to 2024. Sanctioned entities drove much of the growth, with stablecoins making up 84% of illicit transactions due to their liquidity and ease of cross-border transfer.

North Korea remained the most aggressive state actor, with hackers stealing around $2 billion, including the record-breaking Bybit breach. Russia’s ruble-backed A7A5 token saw over $93 billion in sanction-evasion transactions, while Iran-linked networks continued using crypto for illicit trade and financing.

Chinese money laundering networks also emerged as a central force, offering full-service criminal infrastructure to fraud groups, hackers, and sanctioned entities. Links between crypto and physical crime grew, with trafficking and coercion increasingly tied to digital asset transfers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Telegram bonds frozen amid ongoing international sanctions framework

Around $500 million in bonds issued by Telegram remain frozen within Russia’s financial settlement system following the application of international sanctions.

The situation reflects how global regulatory measures can continue to affect corporate assets even when companies operate across multiple jurisdictions.

According to reports, the frozen bonds were issued in 2021 and are held at Russia’s National Settlement Depository.

Telegram said its more recent $1.7 billion bond issuance in 2025 involved international investors, with no participation from Russian capital, and was purchased mainly by institutional funds based outside Russia.

Telegram stated that bond repayments follow established international procedures through intermediaries, meaning payment obligations are fulfilled regardless of whether individual bondholders face restrictions.

Financial results for 2025 also showed losses, linked in part to a decline in cryptocurrency valuations, which reflected broader market conditions rather than company-specific factors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI receptionist begins work at UK GP surgery

A GP practice in North Lincolnshire, UK, has introduced an AI receptionist named Emma to reduce long wait times on calls. Emma collects patient details and prioritises appointments for doctors to review.

Doctors say the system has improved efficiency, with most patients contacted within hours. Dr Satpal Shekhawat explained that the information from Emma helps identify clinical priorities effectively.

Some patients reported issues, including mistakes with dates of birth and difficulties explaining health problems. The practice reassured patients that human receptionists remain available and that the AI supports staff rather than replacing them.

The technology has drawn attention from other practices in the region. NHS officials are monitoring feedback to refine the system and improve patient experience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Sedgwick breach linked to TridentLocker ransomware attack

Sedgwick has confirmed a data breach at its government-focused subsidiary after the TridentLocker ransomware group claimed responsibility for stealing 3.4 gigabytes of data. The incident underscores growing threats to federal contractors handling sensitive US agency information.

The company said the breach affected only an isolated file transfer system used by Sedgwick Government Solutions, which serves agencies such as DHS, ICE, and CISA. Segmentation reportedly prevented any impact on wider corporate systems or ongoing client operations.

TridentLocker, a ransomware-as-a-service group that appeared in late 2025, listed Sedgwick Government Solutions on its dark web leak site and posted samples of stolen documents. The gang is known for double-extortion tactics, combining data encryption and public exposure threats.

Sedgwick has informed US law enforcement and affected clients while continuing to investigate with external cybersecurity experts. The firm emphasised operational continuity and noted no evidence of intrusion into its claims management servers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers launch AURA to protect AI knowledge graphs

Researchers have unveiled a novel framework called AURA that aims to safeguard proprietary knowledge graphs in AI systems by deliberately corrupting stolen copies with realistic yet false data.

Rather than relying solely on traditional encryption or watermarking, the approach is designed to preserve full utility for authorised users while rendering illicit copies ineffective.

AURA works by injecting ‘adulterants’ into critical nodes of knowledge graphs, chosen using advanced algorithms to minimise changes while maximising disruption for unauthorised users.
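
The node-selection and adulterant-generation algorithms are not detailed here, but the general idea can be sketched with a toy graph: pick a small budget of high-impact nodes and overwrite their stored facts with plausible but false values. Betweenness centrality and the numeric perturbation below are stand-ins chosen for illustration, not AURA’s published method.

```python
# Illustrative sketch of adulterant injection into a knowledge graph.
# Betweenness centrality and the fake numeric values stand in for AURA's
# actual node-selection and adulterant-generation algorithms.
import random
import networkx as nx

def inject_adulterants(graph: nx.DiGraph, budget: int, seed: int = 0) -> nx.DiGraph:
    """Return a copy of the graph with false facts planted on high-impact nodes."""
    rng = random.Random(seed)
    poisoned = graph.copy()

    # Rank nodes by betweenness centrality as a proxy for "critical" nodes:
    # corrupting them disrupts many reasoning paths while touching few entries.
    centrality = nx.betweenness_centrality(poisoned)
    targets = sorted(centrality, key=centrality.get, reverse=True)[:budget]

    for node in targets:
        # Replace the stored fact with a plausible but false value.
        true_value = poisoned.nodes[node].get("value", 0)
        poisoned.nodes[node]["value"] = true_value + rng.choice([-3, -2, 2, 3])
    return poisoned

# Toy knowledge graph with a numeric "fact" attached to each node.
kg = nx.DiGraph()
kg.add_edges_from([("drug_a", "target_x"), ("drug_b", "target_x"),
                   ("target_x", "pathway_y"), ("pathway_y", "disease_z")])
nx.set_node_attributes(kg, 42, "value")
print(inject_adulterants(kg, budget=2).nodes(data=True))
```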

Tests with GPT-4o, Gemini-2.5, Qwen-2.5, and Llama2-7B showed that 94–96% of correct answers in stolen data were flipped, while authorised access remained unaffected.

The framework protects valuable intellectual property in sectors such as pharmaceuticals and manufacturing, where knowledge graphs power advanced AI applications.

Unlike passive watermarking or offensive poisoning, AURA actively degrades stolen datasets, offering robust security against offline and private-use attacks.

With GraphRAG applications proliferating, major technology firms, including Microsoft, Google, and Alibaba, are evaluating AURA to defend critical AI-driven knowledge.

The system demonstrates how active protection strategies can complement existing security measures, ensuring enterprises maintain control over their data in an AI-driven world.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Healthcare systems face mounting risk from CrazyHunter ransomware

CrazyHunter ransomware has emerged as a growing threat to healthcare organisations, with repeated attacks targeting hospitals and medical service providers. The campaign focuses on critical healthcare infrastructure, raising concerns about service disruption and the exposure of sensitive patient data.

The malware is developed in Go and demonstrates a high level of technical maturity. Attackers gain initial access by exploiting weak Active Directory credentials, then use Group Policy Objects to distribute the ransomware rapidly across compromised networks.

Healthcare institutions in Taiwan have been among the most affected, with multiple confirmed incidents reported by security researchers. The pattern suggests a targeted campaign rather than opportunistic attacks, increasing pressure on regional healthcare providers to strengthen defences.

Once deployed, CrazyHunter disables security tools to conceal its activity before encrypting files. Analysts note extensive evasion techniques, including memory-based execution and redundant encryption methods, designed to ensure the payload is delivered.

CrazyHunter employs a hybrid encryption scheme that combines ChaCha20 and elliptic curve cryptography, using partial file encryption to speed up the attack. Encrypted files receive a ‘.Hunter’ extension, with recovery dependent on the attackers’ private keys, reinforcing the pressure to pay ransoms.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!