Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, the conversation is closed: the user can no longer send new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp trials AI-powered Writing Help for personalised messaging

WhatsApp is testing a new AI feature for iOS users that provides real-time writing assistance.

Known as ‘Writing Help’, the tool suggests alternative phrasings, adjusts tone, and enhances clarity, with all processing handled on-device to safeguard privacy.

The feature allows users to select professional, friendly, or concise tones before the AI generates suitable rewordings while keeping the original meaning. According to reports, the tool is available only to a small group of beta testers through TestFlight, with no confirmed release date.

WhatsApp says it uses Meta’s Private Processing technology to ensure sensitive data never leaves the device, mirroring privacy-first approaches like Apple’s Writing Tools.

Industry watchers suggest the new tool could give WhatsApp an edge over rivals such as Telegram and Signal, which have not yet introduced generative AI writing aids.

Analysts also see potential for integration with other Meta platforms, although challenges remain in ensuring accurate, unbiased results across different languages.

If successful, Writing Help could streamline business communication by improving grammar, structure, and tone. While some users have praised its seamless integration, others warn that heavy reliance on AI could undermine authenticity in digital conversations.

Bragg Gaming responds to cyber incident affecting internal systems

Bragg Gaming Group has confirmed a cybersecurity breach affecting its internal systems, discovered in the early hours of 16 August.

The company stated that the breach has not impacted operations or customer-facing platforms, and that no personal data has been compromised so far.

External cybersecurity experts have been engaged to assist with mitigation and investigation, following standard industry protocols.

Bragg has emphasised its commitment to transparency and will post updates on its official website as the investigation progresses.

The firm continues to operate normally, with all internal and external services reportedly unaffected by the incident at this time.

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds rather than replacing human engagement entirely.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.

To defend against such threats, organisations are urged to adopt stronger cybersecurity controls, including behaviour-based detection, and to enforce stricter download policies.

China shifts to cold storage for seized crypto

Authorities in China’s Guizhou Province have begun using joint custody centres and cold wallets to manage cryptocurrencies seized from unlawful activities, particularly in Duyun City. The move represents a strategic adjustment amid the country’s ongoing ban on crypto trading.

Adopting cold storage and joint custody addresses practical challenges in preserving and disposing of seized assets. Experts warn that selling seized crypto could breach trading bans, create compliance risks, and disrupt markets.

China’s approach may influence international handling and regulation of digital assets. Analysts suggest these protocols could integrate regulatory compliance with financial stability goals, shaping broader policies for Bitcoin and other cryptocurrencies worldwide.

Scholars describe the current measures as temporary solutions that do not fully align with the nation’s crypto prohibition.

UK links Lazarus Group to Lykke cryptocurrency theft

The British Treasury has linked state-backed North Korean hackers to a significant theft of Bitcoin, Ethereum, and other cryptocurrencies from the Swiss platform Lykke. The hack forced Lykke to suspend trading and enter liquidation, leaving founder Richard Olsen bankrupt and under legal scrutiny.

The Lazarus Group, Pyongyang’s cyber unit, has reportedly carried out a series of global cryptocurrency heists to fund weapons programmes and bypass international sanctions. Although evidence remains inconclusive, stolen Lykke funds may have been laundered through crypto firms.

Regulators had previously warned that Lykke was not authorised to offer financial services in the UK. Over 70 customers have filed claims totalling £5.7 million in UK courts, while Olsen’s Swiss parent company entered liquidation last year.

He was declared bankrupt in January and faces ongoing criminal investigations in Switzerland.

The Lazarus Group continues to be implicated in high-profile cryptocurrency attacks worldwide, highlighting vulnerabilities in digital asset exchanges and the challenges authorities face in recovering stolen funds.

Top cybersecurity vendors double down on AI-powered platforms

The cybersecurity market is consolidating as AI reshapes defence strategies. Platform-based solutions are replacing point tools to cut complexity, counter AI threats, and ease skill shortages. IDC predicts that security spending will rise 12% in 2025, reaching $377 billion by 2028.

Vendors embed AI agents, automation, and analytics into unified platforms. Palo Alto Networks’ Cortex XSIAM reached $1 billion in bookings, and its $25 billion CyberArk acquisition expands into identity management. Microsoft blends Azure, OpenAI, and Security Copilot to safeguard workloads and data.

Cisco integrates AI across networking, security, and observability, bolstered by its acquisition of Splunk. CrowdStrike rebounds from its 2024 outage with Charlotte AI, while Cloudflare shifts its focus from delivery to AI-powered threat prediction and optimisation.

Fortinet’s platform spans networking and security, strengthened by Suridata’s SaaS posture tools. Zscaler boosts its Zero Trust Exchange with Red Canary’s MDR tech. Broadcom merges Symantec and Carbon Black, while Check Point pushes its AI-driven Infinity Platform.

Identity stays central, with Okta leading access management and teaming with Palo Alto on integrated defences. Collectively, these vendors aim to platformise, integrate AI, and automate operations to dominate an increasingly complex cyberthreat landscape.

North Korean hackers switch to ransomware in major cyber campaign

A North Korean hacking unit has launched a ransomware campaign targeting South Korea and other countries, marking a shift from pure espionage. Security firm S2W identified the subgroup, ‘ChinopuNK’, as part of the ScarCruft threat actor.

The operation began in July, utilising phishing emails and a malicious shortcut file within a RAR archive to deploy multiple malware types. These included a keylogger, stealer, ransomware, and a backdoor.

ScarCruft, active since 2016, has targeted defectors, journalists, and government agencies. Researchers say the move to ransomware indicates either a new revenue stream or a more disruptive mission.

The campaign has expanded beyond South Korea to Japan, Vietnam, Russia, Nepal, and the Middle East. Analysts note the group’s technical sophistication has improved in recent years.

Security experts advise monitoring URLs, file hashes, and behaviour-based indicators, and tracking ScarCruft’s tools and infrastructure, to detect related campaigns early.

Cohere secures $500m funding to expand secure enterprise AI

Cohere has secured $500 million in new funding, lifting its valuation to $6.8 billion and reinforcing its position as a secure, enterprise-grade AI specialist.

The Toronto-based firm, which develops large language models tailored for business use, attracted backing from AMD, Nvidia, Salesforce, and other investors.

Its flagship multilingual model, Aya 23, supports 23 languages and is designed to help companies adopt AI without the risks linked to open-source tools, reflecting growing demand for privacy-conscious, compliant solutions.

The round marks renewed support from chipmakers AMD and Nvidia, who had previously invested in the company.

Salesforce Ventures’ involvement hints at potential integration with enterprise software platforms, while other backers include Radical Ventures, Inovia Capital, PSP Investments, and the Healthcare of Ontario Pension Plan.

The company has also strengthened its leadership, appointing former Meta AI research head Joelle Pineau as Chief AI Scientist, Instagram co-founder Mike Krieger as Chief Product Officer, and ex-Uber executive Saroop Bharwani as Chief Technology Officer for Applied R&D.

Cohere intends to use the funding to advance agentic AI, systems capable of performing tasks autonomously, while focusing on security and ethical development.

With over $1.5 billion raised since its 2019 founding, the company targets adoption in regulated sectors such as healthcare and finance.

The investment comes amid a broader surge in AI spending, with industry leaders betting that secure, customisable AI will become essential for enterprise operations.
