BlockFills freezes withdrawals as Bitcoin drops below $65,000

BlockFills, an institutional digital asset trading and lending firm, has suspended client deposits and withdrawals, citing market volatility as Bitcoin experiences significant declines.

A notice sent to clients last week stated the suspension was intended ‘to further the protection of our clients and the firm.’ The Chicago-based company serves approximately 2,000 institutional clients and provides crypto-backed lending to miners and hedge funds.

Clients were informed they could continue trading under certain restrictions, though positions requiring additional margin could be closed.

The suspension comes as Bitcoin fell below $65,000 last week, down roughly 25% in 2026 and approximately 45% from its October peak near $120,000. In the digital asset industry, withdrawal halts are often interpreted as warning signs of potential liquidity constraints.

Several crypto firms, including FTX, BlockFi, and Celsius, imposed similar restrictions during prior downturns before entering bankruptcy proceedings.

BlockFills has not specified how long the suspension will last. A company spokesperson said the firm is ‘working hand in hand with investors and clients to bring this issue to a swift resolution and to restore liquidity to the platform.’

BlockFills was founded in 2018 with backing from Susquehanna and CME Group, and there is currently no public evidence of insolvency.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

AML breach triggers major fine for a Netherlands crypto firm

Dutch regulators have fined a cryptocurrency service provider for operating in the Netherlands without the legally required registration, underscoring intensifying enforcement across Europe’s digital asset sector.

De Nederlandsche Bank (DNB) originally imposed an administrative penalty of €2,850,000 on 2 October 2023. Authorities found the firm breached the Anti-Money Laundering and Anti-Terrorist Financing Act by offering unregistered crypto services.

Registration rules, introduced on 21 May 2020, require crypto service providers to register with the supervisor, owing to elevated risks linked to transaction anonymity and potential misuse for money laundering or terrorist financing.

Non-compliance prevented the provider from reporting unusual transactions to the Financial Intelligence Unit-Netherlands. Regulators weighed the severity, duration, and culpability of the breach when determining the penalty amount.

Legal proceedings later altered the outcome. On 19 December 2025, the Court of Rotterdam annulled DNB’s decision on objection and reduced the fine to €2,277,500.

DNB has since filed a further appeal with the Trade and Industry Appeals Tribunal, leaving the case ongoing as oversight shifts toward MiCAR licensing requirements introduced in December 2024.

Crypto confiscation framework approved by State Duma

Russia’s State Duma has passed legislation establishing procedures for the seizure and confiscation of cryptocurrencies in criminal investigations. The law formally recognises digital assets as property under criminal law.

The bill cleared its third reading on 10 February and now awaits approval from the Federation Council and presidential signature.

Investigators may seize digital currency and access devices, with specialists required during investigative actions. Protocols must record asset type, quantity, and wallet identifiers, while access credentials and storage media are sealed.

Where technically feasible, seized funds may be transferred to designated state-controlled addresses, with transactions frozen by court order.

Despite creating a legal basis for confiscation, the law leaves critical operational questions unresolved. It sets out no method for valuing volatile crypto assets and no rules for their storage, cybersecurity, or liquidation.

Practical cooperation with foreign crypto platforms, particularly under sanctions, also remains uncertain.

The government is expected to develop subordinate regulations covering state custody wallets and enforcement mechanics. Russia faces implementation challenges, including non-custodial wallet access barriers, stablecoin freezing limits, and institutional oversight risks.

AI tool accelerates detection of foodborne bacteria

Researchers have advanced an AI system designed to detect bacterial contamination in food, dramatically improving accuracy and speed. The upgraded tool distinguishes bacteria from microscopic food debris, reducing diagnostic errors in automated screening.

Traditional testing relies on cultivating bacterial samples, a process that takes days and requires specialist laboratory expertise. The deep learning model instead analyses images of bacterial microcolonies, enabling reliable detection within about three hours.

Accuracy gains stem from expanded model training. Earlier versions, trained solely on bacterial datasets, misclassified food debris as bacteria in more than 24% of cases.

Adding debris imagery to training eliminated misclassifications and improved detection reliability across food samples. The system was tested on pathogens including E. coli, Listeria, and Bacillus subtilis, alongside debris from chicken, spinach, and cheese.

Researchers say faster, more precise early detection could reduce foodborne outbreaks, protect public health, and limit costly product recalls as the technology moves toward commercial deployment.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

EU Court opens path for WhatsApp to contest privacy rulings

The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.

The ruling reshapes how companies defend their interests under the GDPR framework.

The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.

European regulators argued that only national authorities were formal recipients of these decisions. The court found that companies should be granted standing when their commercial rights are at stake.

By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.

Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.

Structural friction, not intelligence, is holding back agentic AI

CIO leadership commentary highlights that many organisations investing in agentic AI (autonomous AI agents designed to execute complex, multi-step tasks) encounter disappointing results when deployments focus solely on outcomes such as speed or cost savings without addressing underlying system design challenges.

The so-called ‘friction tax’ arises from siloed data, disjointed workflows and tools that force employees to act as manual connectors between systems, negating much of the theoretical efficiency AI promises.

The author proposes an ‘architecture of flow’ as a solution, in which context is unified across systems and AI agents operate on shared data and protocols, enabling work to move seamlessly between functions without bottlenecks.

This approach prioritises employee experience and customer value, enabling context-rich automation that reduces repetitive work and improves user satisfaction.

Key elements of such an architecture include universal context layers (e.g. standard protocols for data sharing) and agentic orchestration mechanisms that help specialised AI agents communicate and coordinate tasks across complex workflows.

When implemented effectively, this reduces cognitive load, strengthens adoption, and makes business growth a natural result of friction-free operations.

Enterprise AI security evolves as Cisco expands AI Defense capabilities

Cisco has announced a major update to its AI Defense platform as enterprise AI evolves from chat tools into autonomous agents. The company says AI security priorities are shifting from controlling outputs to protecting complex agent-driven systems.

The update strengthens end-to-end AI supply chain security by scanning third-party models, datasets, and tools used in development workflows. New inventory features help organisations track provenance and governance across AI resources.

Cisco has also expanded algorithmic red teaming through an upgraded AI Validation interface. The system enables adaptive multi-turn testing and aligns security assessments with NIST, MITRE, and OWASP frameworks.

Runtime protections now reflect the growing autonomy of AI agents. Cisco AI Defense inspects agent-to-tool interactions in real time, adding guardrails to prevent data leakage and malicious task execution.

Cisco says the update responds to the rapid operationalisation of AI across enterprises. The company argues that effective AI security now requires continuous visibility, automated testing, and real-time controls that scale with autonomy.

ChatGPT begins limited ads test in the US

OpenAI has begun testing advertisements inside ChatGPT for some adult users in the US, marking a major shift for the widely used AI service.

The ads appear only on the Free and Go tiers in the US, while paid plans remain ad free. OpenAI says responses are unaffected, though critics warn commercial messaging could blur boundaries over time.

Ads are selected based on conversation topics and prior interactions, prompting concern among privacy advocates. OpenAI says advertisers receive only aggregated data and cannot view conversations.

Industry analysts say the move reflects growing pressure to monetise costly AI infrastructure. Regulators and researchers continue to debate whether advertising can coexist with trust in AI systems.
