North Korean hackers switch to ransomware in major cyber campaign

A North Korean hacking unit has launched a ransomware campaign targeting South Korea and other countries, marking a shift from pure espionage. Security firm S2W attributed the activity to a subgroup it tracks as ‘ChinopuNK’, part of the ScarCruft threat actor.

The operation began in July, using phishing emails and a malicious shortcut file inside a RAR archive to deploy multiple malware families, including a keylogger, an infostealer, ransomware, and a backdoor.

ScarCruft, active since 2016, has targeted defectors, journalists, and government agencies. Researchers say the move to ransomware indicates either a new revenue stream or a more disruptive mission.

The campaign has expanded beyond South Korea to Japan, Vietnam, Russia, Nepal, and the Middle East. Analysts note the group’s technical sophistication has improved in recent years.

Security experts advise monitoring URLs, file hashes, and behaviour-based indicators, and continuing to track ScarCruft’s tools and infrastructure, so that related campaigns can be detected early.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tokenised stocks bring limited benefits and high risks

The cryptocurrency sector has promoted tokenised stocks, allowing shares to be traded via blockchain. While fractional ownership and 24/7 trading are possible, most brokers already offer commission-free fractional shares, limiting the benefits for individual investors.

Tokenised stocks require a custodian to hold the underlying asset, a digital token representing the share, and smart contracts granting rights such as dividends and voting. Platforms like Kraken and Robinhood now offer tokenised trading, while asset managers like BlackRock explore tokenised funds.
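The structure described above (a custodian-held share, a digital token representing it, and contract-granted rights such as dividends) can be sketched as a toy model. This is an illustrative Python sketch only; all names are hypothetical and do not reflect any platform's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TokenisedShare:
    """Toy model of a tokenised stock: hypothetical names, not a real API."""
    ticker: str      # the underlying listed share
    custodian: str   # entity holding the real share off-chain
    holder: str      # current token holder's on-chain address
    # Rights the smart contract passes through to the token holder
    rights: set = field(default_factory=lambda: {"dividends", "voting"})

    def pay_dividend(self, amount_per_share: float) -> float:
        # Dividend flows to the holder only if the contract grants that right
        if "dividends" not in self.rights:
            return 0.0
        return amount_per_share

token = TokenisedShare(ticker="ACME", custodian="Example Custody Ltd", holder="0xabc")
print(token.pay_dividend(0.50))  # 0.5
```

The sketch highlights the key dependency: the token's value and rights rest entirely on the custodian actually holding the share and the contract honouring the pass-through, which is where the legal uncertainties discussed below arise.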

Proponents cite transparency, security, and direct access to companies as advantages.

Risks remain significant. Transactions may be irrevocable, legal protections are uncertain, and smart contracts cannot cover every scenario. Experts warn that tokenisation may bypass securities laws, risking market trust and investor protections.

Many analysts suggest the crypto industry’s push for tokenisation is driven more by a desire to integrate with traditional finance and attract institutional capital than by benefits to retail investors. Advantages are limited while risks, including regulatory uncertainty and potential fraud, are substantial.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

State-controlled messaging alters crypto usage in Russia

The Russian government limits secure calls on WhatsApp and Telegram, citing terrorism and fraud concerns. The measures aim to push users toward state-controlled platforms like MAX, raising privacy concerns.

With over 100 million users relying on encrypted messaging, these restrictions threaten the anonymity essential for cryptocurrency transactions. Government-monitored channels may let authorities track crypto transactions, deterring users and businesses from adopting digital currencies.

State-backed messaging platforms also open the door to regulatory oversight, complicating private crypto exchanges and noncustodial wallets.

In response, fintech startups and SMEs may turn to decentralised applications and privacy-focused tools, including zero-knowledge proofs, to maintain secure communication and financial operations.

The clampdown could boost crypto payroll adoption in Russia, reducing costs and shielding firms from economic instability. Using decentralised finance tools in alternative channels allows companies to protect privacy and support cross-border payments and remote work.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bluesky updates rules and invites user feedback ahead of October rollout

Two years after launch, Bluesky is revising its Community Guidelines and other policies, inviting users to comment on the proposed changes before they take effect on 15 October 2025.

The updates are designed to improve clarity, outline safety procedures in more detail, and meet the requirements of new global regulations such as the UK’s Online Safety Act, the EU’s Digital Services Act, and the US’s TAKE IT DOWN Act.

Some changes aim to shape the platform’s tone by encouraging respectful and authentic interactions, while allowing space for journalism, satire, and parody.

The revised guidelines are organised under four principles: Safety First, Respect Others, Be Authentic, and Follow the Rules. They prohibit promoting violence, illegal activity, self-harm, and sexualised depictions of minors, as well as harmful practices like doxxing and non-consensual data-sharing.

Bluesky says it will provide a more detailed appeals process, including an ‘informal dispute resolution’ step, and in some cases will allow court action instead of arbitration.

The platform has also addressed nuanced issues such as deepfakes, hate speech, and harassment, while acknowledging past challenges in moderation and community relations.

Alongside the guidelines, Bluesky has updated its Privacy Policy and Copyright Policy to comply with international laws on data rights, transfer, deletion, takedown procedures and transparency reporting.

The updated privacy and copyright policies will take effect earlier, on 15 September 2025, without a public feedback period.

The company’s approach contrasts with larger social networks by introducing direct user communication for disputes, though it still faces the challenge of balancing open dialogue with consistent enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

M&S grapples with lingering IT fallout from cyberattack

Marks & Spencer is still grappling with the after-effects of the cyberattack it suffered over the Easter bank holiday weekend in April.

While customer-facing services, including click and collect, have been restored, internal systems used by buying and merchandising teams remain affected, hampering smooth operations.

The attack, which disabled contactless payments and forced the temporary shutdown of online orders, has had severe financial consequences. M&S estimates a hit to group operating profits of approximately £300 million, though mitigation is expected through insurance and cost controls.

While the rest of its e-commerce operations have largely resumed, lingering technical problems within internal systems continue to disrupt critical back-office functions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Igor Babuschkin leaves Elon Musk’s xAI for AI safety investment push

Igor Babuschkin, cofounder of Elon Musk’s AI startup xAI, has announced his departure to launch an investment firm dedicated to AI safety research. Musk created xAI in 2023 to rival Big Tech, criticising industry leaders for weak safety standards and excessive censorship.

Babuschkin revealed his new venture, Babuschkin Ventures, will fund AI safety research and startups developing responsible AI tools. Before leaving, he oversaw engineering across infrastructure, product, and applied AI projects, and built core systems for training and managing models.

His exit follows that of xAI’s legal chief, Robert Keele, earlier this month, highlighting churn at the company amid intense competition with OpenAI, Google, and Anthropic. The big players are investing heavily in developing and deploying advanced AI systems.

Babuschkin, a former researcher at Google DeepMind and OpenAI, recalled the early scramble at xAI to set up infrastructure and models, calling it a period of rapid, foundational development. He said he had created many core tools that the startup still relies on.

Last month, X CEO Linda Yaccarino also resigned, months after Musk folded the social media platform into xAI. The company’s leadership changes come as the global AI race accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

How Anthropic trains and tests Claude for safe use

Anthropic has outlined a multi-layered safety plan for Claude, aiming to keep it useful while preventing misuse. Its Safeguards team blends policy experts, engineers, and threat analysts to anticipate and counter risks.

The Usage Policy establishes clear guidelines for sensitive areas, including elections, finance, and child safety. Guided by the Unified Harm Framework, the team assesses potential physical, psychological, and societal harms, drawing on external experts for stress tests.

During the 2024 US elections, after tests showed Claude could surface outdated voting information, a banner pointing users to TurboVote was added, ensuring they saw only accurate, non-partisan updates.

Safety is built into development, with guardrails to block illegal or malicious requests. Partnerships like ThroughLine help Claude handle sensitive topics, such as mental health, with care rather than avoidance or refusal.

Before launch, Claude undergoes safety, risk, and bias evaluations with government and industry partners. Once live, classifiers scan for violations in real time, while analysts track patterns of coordinated misuse.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Age checks slash visits to top UK adult websites

Adult site traffic in the UK has fallen dramatically since new age verification rules came into force on 25 July under the Online Safety Act.

Figures from analytics firm Similarweb show Pornhub lost more than one million visitors in just two weeks, with traffic falling by 47%. XVideos saw a similar drop, while OnlyFans traffic fell by more than 10%.

The rules require adult websites to make it harder for under-18s to access explicit material, leading some users to turn to smaller and less regulated sites instead of compliant platforms. Pornhub said the trend mirrored patterns seen in other countries with similar laws.

The clampdown has also triggered a surge in virtual private network (VPN) downloads in the UK, as the tools can hide a user’s location and help bypass restrictions.

Ofcom estimates that 14 million people in the UK watch pornography and has proposed age checks using credit cards, photo ID, or AI analysis of selfies.

Critics argue that instead of improving safety, the measures may drive people towards more extreme or illicit material on harder-to-monitor parts of the internet, including the dark web.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Russia restricts Telegram and WhatsApp calls

Russian authorities have begun partially restricting calls on Telegram and WhatsApp, citing the need for crime prevention. Regulator Roskomnadzor accused the platforms of enabling fraud, extortion, and terrorism while ignoring repeated requests to act. Neither platform commented immediately.

Russia has long tightened internet control through restrictive laws, bans, and traffic monitoring. VPNs remain a workaround but are often blocked. Over the summer, further limits included mobile internet shutdowns and penalties for specific online searches.

Authorities have introduced a new national messaging app, MAX, which is expected to be heavily monitored. Reports suggest disruptions to WhatsApp and Telegram calls began earlier this week. Complaints cited dropped calls or muted conversations.

With 96 million monthly users, WhatsApp is Russia’s most popular platform, followed by Telegram with 89 million. Past clashes include Russia’s failed attempt to ban Telegram (2018–20) and Meta’s designation as an extremist entity in 2022.

WhatsApp accused Russia of trying to block encrypted communication and vowed to keep it available. Lawmaker Anton Gorelkin suggested that MAX should replace WhatsApp. The app’s terms permit data sharing with authorities and require pre-installation on all smartphones sold in Russia.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!