Custom AI bots support student negotiating skills

In Cambridge, instructors at MIT and the Harvard Negotiation Project are using AI negotiation bots to enhance classroom simulations. The tools are designed to prompt reflection rather than offer fixed answers.

Students taking part in a multiparty exercise called Harborco engage with preparation, back-table and debriefing bots. The system helps them analyse stakeholder interests and test strategies before and after live negotiations.

Back-table bots simulate unseen political or organisational actors who often influence real-world negotiations. Students can safely explore trade-offs and persuasion tactics in a protected digital setting.

According to reported course findings, most participants said the AI bots improved preparation and sharpened their understanding of opposing interests. Instructors in Cambridge stress that AI supports, rather than replaces, human teaching and peer learning.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU reopens debate on social media age restrictions for children

The European Union is revisiting the idea of an EU-wide social media age restriction as several member states move ahead with national measures to protect children online. Spain, France, and Denmark are among the countries considering the enforcement of age limits for access to social platforms.

The issue was raised in the European Commission’s new action plan against cyberbullying, published on Tuesday. The plan confirms that a panel of child protection experts will advise the Commission by the summer on possible EU-wide age restrictions for social media use.

Commission President Ursula von der Leyen announced the creation of an expert panel last September, although its launch was delayed until early 2026. The panel will assess options for a coordinated European approach, including potential legislation and awareness-raising measures for parents.

The document notes that diverging national rules could lead to uneven protection for children across the bloc. A harmonised EU framework, the Commission argues, would help ensure consistent safeguards and reduce fragmentation in how platforms apply age restrictions.

So far, the Commission has relied on non-binding guidance under the Digital Services Act to encourage platforms such as TikTok, Instagram, and Snap to protect minors. Increasing pressure from member states pursuing national bans may now prompt a shift towards more formal EU-level regulation.

eSafety escalates scrutiny of Roblox safety measures

Australia’s online safety regulator has notified Roblox of plans to directly test how the platform has implemented a set of child safety commitments agreed last year, amid growing concerns over online grooming and sexual exploitation.

In September last year, Roblox made nine commitments following months of engagement with eSafety, aimed at supporting compliance with obligations under the Online Safety Act and strengthening protections for children in Australia.

Measures included making under-16s’ accounts private by default, restricting contact between adults and minors without parental consent, disabling chat features until age estimation is complete, and extending parental controls and voice chat restrictions for younger users.

Roblox told eSafety at the end of 2025 that it had delivered all agreed commitments, after which the regulator continued monitoring implementation. eSafety Commissioner Julie Inman Grant said serious concerns remain over reports of child exploitation and harmful material on the platform.

Direct testing will now examine how the measures work in practice, with support from the Australian Government. Enforcement action may follow, including penalties of up to $49.5 million, alongside checks against new age-restricted content rules from 9 March.

BlockFills freezes withdrawals as Bitcoin drops below $65,000

BlockFills, an institutional digital asset trading and lending firm, has suspended client deposits and withdrawals, citing market volatility as Bitcoin experiences significant declines.

A notice sent to clients last week stated the suspension was intended ‘to further the protection of our clients and the firm.’ The Chicago-based company serves approximately 2,000 institutional clients and provides crypto-backed lending to miners and hedge funds.

Clients were informed they could continue trading under certain restrictions, though positions requiring additional margin could be closed.

The suspension comes as Bitcoin fell below $65,000 last week, down roughly 25% in 2026 and approximately 45% from its October peak near $120,000. In the digital asset industry, withdrawal halts are often interpreted as warning signs of potential liquidity constraints.

Several crypto firms, including FTX, BlockFi, and Celsius, imposed similar restrictions during prior downturns before entering bankruptcy proceedings.

BlockFills has not specified how long the suspension will last. A company spokesperson said the firm is ‘working hand in hand with investors and clients to bring this issue to a swift resolution and to restore liquidity to the platform.’

Founded in 2018 with backing from Susquehanna and CME Group, the firm has shown no public evidence of insolvency.

Russia tightens controls as Telegram faces fresh restrictions

Authorities in Russia have tightened their grip on Telegram after the state regulator Roskomnadzor introduced new measures accusing the platform of failing to curb fraud and safeguard personal data.

Users across the country have increasingly reported slow downloads and disrupted media content since January, with complaints rising sharply early in the week. Although officials initially rejected claims of throttling, industry sources insist that download speeds have been deliberately reduced.

Telegram’s founder, Pavel Durov, argues that Roskomnadzor is trying to steer people toward Max rather than allowing open competition. Max is a government-backed messenger widely viewed by critics as a tool for surveillance and political control.

While text messages continue to load normally for most, media content such as videos, images and voice notes has become unreliable, particularly on mobile devices. Some users report that only the desktop version performs without difficulty.

The slowdown is already affecting daily routines, as many Russians rely on Telegram for work communication and document sharing, much as workplaces elsewhere rely on Slack rather than email.

Officials also use Telegram to issue emergency alerts, and regional leaders warn that delays could undermine public safety during periods of heightened military activity.

Pressure on foreign platforms has grown steadily. Restrictions on voice and video calls were introduced last summer, accompanied by claims that criminals and hostile actors were using Telegram and WhatsApp.

Meanwhile, Max continues to gain users, reaching 70 million monthly accounts by December. Despite its rise, it remains behind Telegram and WhatsApp, which still dominate Russia’s messaging landscape.

AML breach triggers major fine for a Netherlands crypto firm

Dutch regulators have fined a cryptocurrency service provider for operating in the Netherlands without the legally required registration, underscoring intensifying enforcement across Europe’s digital asset sector.

De Nederlandsche Bank (DNB) originally imposed an administrative penalty of €2,850,000 on 2 October 2023. Authorities found the firm breached the Anti-Money Laundering and Anti-Terrorist Financing Act by offering unregistered crypto services.

Registration rules, introduced on 21 May 2020, require providers to notify supervisors due to elevated risks linked to transaction anonymity and potential misuse for money laundering or terrorist financing.

Non-compliance prevented the provider from reporting unusual transactions to the Financial Intelligence Unit-Netherlands. Regulators weighed the severity, duration, and culpability of the breach when determining the penalty amount.

Legal proceedings later altered the outcome. On 19 December 2025, the Court of Rotterdam annulled DNB's earlier decision on objection and reduced the fine to €2,277,500.

DNB has since filed a further appeal with the Trade and Industry Appeals Tribunal, leaving the case ongoing as oversight shifts toward MiCAR licensing requirements introduced in December 2024.

Crypto confiscation framework approved by State Duma

Russia’s State Duma has passed legislation establishing procedures for the seizure and confiscation of cryptocurrencies in criminal investigations. The law formally recognises digital assets as property under criminal law.

The bill cleared its third reading on 10 February and now awaits approval from the Federation Council and presidential signature.

Investigators may seize digital currency and access devices, with specialists required during investigative actions. Protocols must record asset type, quantity, and wallet identifiers, while access credentials and storage media are sealed.

Where technically feasible, seized funds may be transferred to designated state-controlled addresses, with transactions frozen by court order.

Despite creating a legal basis for confiscation, the law leaves critical operational questions unresolved. It specifies no method for valuing volatile crypto assets and no procedures for their storage, cybersecurity, or liquidation.

Practical cooperation with foreign crypto platforms, particularly under sanctions, also remains uncertain.

The government is expected to develop subordinate regulations covering state custody wallets and enforcement mechanics. Russia faces implementation challenges, including non-custodial wallet access barriers, stablecoin freezing limits, and institutional oversight risks.

AI tool accelerates detection of foodborne bacteria

Researchers have advanced an AI system designed to detect bacterial contamination in food, dramatically improving accuracy and speed. The upgraded tool distinguishes bacteria from microscopic food debris, reducing diagnostic errors in automated screening.

Traditional testing relies on cultivating bacterial samples, a process that takes days and requires specialist laboratory expertise. The deep learning model instead analyses images of bacterial microcolonies, enabling reliable detection within about three hours.

Accuracy gains stem from expanded model training. Earlier versions, trained solely on bacterial datasets, misclassified food debris as bacteria in more than 24% of cases.

Adding debris imagery to training eliminated misclassifications and improved detection reliability across food samples. The system was tested on pathogens including E. coli, Listeria, and Bacillus subtilis, alongside debris from chicken, spinach, and cheese.
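The effect described above, that adding debris examples to the training set cuts false positives, can be illustrated with a toy sketch. This is not the researchers' model: the data is synthetic, the two-feature representation and nearest-centroid classifier are hypothetical stand-ins for the deep learning pipeline, chosen only to show why a model that has never seen debris labels everything as bacteria.

```python
# Illustrative sketch with hypothetical synthetic data: why adding a
# "debris" class to training reduces false positives in screening.
import random

random.seed(0)

def make_samples(centre, n, spread=0.5):
    """Generate n synthetic 2-D feature vectors scattered around a class centre."""
    return [(centre[0] + random.gauss(0, spread),
             centre[1] + random.gauss(0, spread)) for _ in range(n)]

# Hypothetical image features: (roundness, growth_rate).
# Debris can look round under the microscope, but it does not grow.
bacteria = make_samples((2.0, 2.0), 50)
debris = make_samples((2.5, 0.3), 50)

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def classify(x, centroids):
    """Assign x to the class whose trained centroid is nearest."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Model A: trained on bacterial data only -- it has no other label to give.
model_a = {"bacteria": centroid(bacteria)}
# Model B: debris imagery added to training, as in the upgraded system.
model_b = {"bacteria": centroid(bacteria), "debris": centroid(debris)}

false_pos_a = sum(classify(x, model_a) == "bacteria" for x in debris)
false_pos_b = sum(classify(x, model_b) == "bacteria" for x in debris)
print(false_pos_a, false_pos_b)  # model B flags far fewer debris samples
```

With only one class in training, model A necessarily labels every debris sample as bacteria; once debris has its own class, most of those errors disappear, mirroring the reported accuracy gain.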

Researchers say faster, more precise early detection could reduce foodborne outbreaks, protect public health, and limit costly product recalls as the technology moves toward commercial deployment.

India enforces a three-hour removal rule for AI-generated deepfake content

Strict new rules have been introduced in India for social media platforms in an effort to curb the spread of AI-generated and deepfake material.

Platforms must label synthetic content clearly and remove flagged posts within three hours instead of allowing manipulated material to circulate unchecked. Government notifications and court orders will trigger mandatory action, creating a fast-response mechanism for potentially harmful posts.

Officials argue that rapid removal is essential as deepfakes grow more convincing and more accessible.

Synthetic media has already raised concerns about public safety, misinformation and reputational harm, prompting the government to strengthen oversight of online platforms and their handling of AI-generated imagery.

The measure forms part of a broader push by India to regulate digital environments and anticipate the risks linked to advanced AI tools.

Authorities maintain that early intervention and transparency around manipulated content are vital for public trust, particularly during periods of political sensitivity or high social tension.

Platforms are now expected to align swiftly with the guidelines and cooperate with legal instructions. The government views strict labelling and rapid takedowns as necessary steps to protect users and uphold the integrity of online communication across India.

EU Court opens path for WhatsApp to contest privacy rulings

The Court of Justice of the EU has ruled that WhatsApp can challenge an EDPB decision directly in European courts. Judges confirmed that firms may seek annulment when a decision affects them directly instead of relying solely on national procedures.

The ruling reshapes how companies defend their interests under the GDPR framework.

The judgment centres on a 2021 instruction from the EDPB to Ireland’s Data Protection Commission regarding the enforcement of data protection rules against WhatsApp.

European regulators argued that only national authorities were formal recipients of these decisions. The court, however, found that companies should be granted standing when their commercial rights are at stake.

By confirming this route, the court has created an important precedent for businesses facing cross-border investigations. Companies will be able to contest EDPB decisions at EU level rather than moving first through national courts, a shift that may influence future GDPR enforcement cases across the Union.

Legal observers expect more direct challenges as organisations adjust their compliance strategies. The outcome strengthens judicial oversight of the EDPB and could reshape the balance between national regulators and EU-level bodies in data protection governance.
