Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, the conversation is closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models are directed not to use the ability in cases where users may be at imminent risk of harming themselves or others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Nvidia prepares new AI chip for China amid Washington’s hesitation

Even as Washington continues to debate how much cutting-edge US technology Beijing should be allowed to access, and after the whiplash of earlier tariffs, bans, and Trump's recent shifts in chip export policy toward scaled-down models, Nvidia is quietly developing a new AI chip for China.

According to Nvidia's latest statements, the chip, codenamed B30A, will be based on the company's latest Blackwell architecture and is expected to outperform its current China-approved model, the H20.

The development comes just days after President Donald Trump weighed permitting scaled-down versions of Nvidia's most advanced chips to be sold in China. His comments marked a potential shift in US policy, but approval remains uncertain, with lawmakers in both parties warning that even weaker versions of top-end chips could still give Beijing an edge in the global AI race.

Technically, the B30A will be less potent than Nvidia’s flagship B300, but it retains advanced features such as high-bandwidth memory and NVLink connectivity, which are crucial for fast data processing.

Nvidia hopes to send early samples to Chinese customers next month, though final specifications have yet to be confirmed.

‘Everything we offer is with full government approval and designed for commercial use,’ the company said in a statement.

The stakes are high, as China accounted for 13% of Nvidia’s revenue last year, and losing that market could push customers toward domestic rivals like Huawei.

Analysts note that Huawei’s chips are improving, particularly in raw computing power, though they still lag in software support and memory performance, areas where Nvidia remains dominant.

At the same time, Beijing has been pushing back. Chinese experts recently raised concerns that Nvidia’s chips could pose security risks, and regulators have reportedly warned Chinese tech firms about buying the H20.

Nvidia denies any such vulnerabilities, but the warnings illustrate how political friction is weighing on commercial strategy.

Alongside the B30A, Nvidia is also preparing another chip, the RTX6000D, built for AI inference rather than training. That model has weaker specifications designed to comply with strict US export thresholds.

Nvidia plans to start shipping small batches of the RTX6000D to Chinese clients as early as September, suggesting the company is trying to balance Washington's restrictions with the need to preserve its foothold in one of the world's most lucrative AI markets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SoftBank invests $2 billion in Intel to boost US semiconductor industry

Japanese technology giant SoftBank has announced plans to buy a $2 billion stake in Intel, signalling a stronger push into the American semiconductor industry.

The investment comes as Washington debates greater government involvement in the sector, with reports suggesting President Donald Trump is weighing a US government stake in the chipmaker.

SoftBank will purchase Intel’s common stock at $23 per share. Its chairman, Masayoshi Son, said semiconductors remain the backbone of every industry and expressed confidence that advanced chip manufacturing will expand in the US, with Intel playing a central role.

The move follows SoftBank’s increasing investments in the US, including its role in the $500 billion ‘Stargate’ AI project announced earlier this year.

Once a dominant force in Silicon Valley, Intel has struggled against rivals such as Nvidia and AMD. Under new CEO Lip-Bu Tan, the company is cutting 15% of its workforce and reducing costs to stabilise operations.

Trump recently criticised Tan's leadership, but softened his stance after a private meeting.

Shares in both companies slipped following the announcement, with SoftBank down 2.2% in Tokyo and Intel falling 3.7% in New York.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bragg Gaming responds to cyber incident affecting internal systems

Bragg Gaming Group has confirmed a cybersecurity breach affecting its internal systems, discovered in the early hours of 16 August.

The company stated the breach has not impacted operations or customer-facing platforms, nor compromised any personal data so far.

External cybersecurity experts have been engaged to assist with mitigation and investigation, following standard industry protocols.

Bragg has emphasised its commitment to transparency and will provide updates as the investigation progresses via its official website.

The firm continues to operate normally, with all internal and external services reportedly unaffected by the incident at this time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio's Grem and Mattel's AI collaborations offer screen-free alternatives to tablets or smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds without entirely replacing human engagement.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.

To defend against such threats, organisations are urged to implement stronger cybersecurity controls, such as behaviour-based detection, and to enforce stricter download controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The dark side of AI: Seven fears that won’t go away

AI has been hailed as the most transformative technology of our age, but with that power comes unease. From replacing jobs to spreading lies online, the risks attached to AI are no longer abstract; they are already reshaping lives. While governments and tech leaders promise safeguards, uncertainty fuels public anxiety.

Perhaps the most immediate concern is employment. Machines are proving cheaper and faster than humans in the software development and graphic design industries. Talk of a future “post-scarcity” economy, where robot labour frees people from work, remains speculative. Workers see only lost opportunities now, while policymakers struggle to offer coordinated solutions.

Environmental costs are another hidden consequence. Training large AI models demands enormous data centres that consume vast amounts of electricity and water. Critics argue that supposed future efficiencies cannot justify today's pollution, which sometimes rivals the carbon footprint of a small nation.

Privacy fears are also escalating. AI-driven surveillance—from facial recognition in public spaces to workplace monitoring—raises questions about whether personal freedom will survive in an era of constant observation. Many fear that “smart” devices and cameras may soon leave nowhere to hide.

Then there is the spectre of weaponisation. AI is already integrated into warfare, with autonomous drones and robotic systems assisting soldiers. While fully self-governing lethal machines are not yet in use, military experts warn that it is only a matter of time before battlefields become dominated by algorithmic decision-makers.

Artists and writers, meanwhile, worry about intellectual property theft. AI systems trained on creative works without permission or payment have sparked lawsuits and protests, leaving cultural workers feeling exploited by tech giants eager for training data.

Misinformation represents another urgent risk. Deepfakes and AI-generated propaganda are flooding social media, eroding trust in institutions and amplifying extremist views. The danger lies not only in falsehoods themselves but in the echo chambers algorithms create, where users are pushed toward ever more radical beliefs.

And hovering above it all is the fear of runaway AI. Although science fiction often exaggerates this threat, researchers take seriously the possibility of systems evolving in ways we cannot predict or control. Calls for global safeguards and transparency have grown louder, yet solutions remain elusive.

In the end, fear alone cannot guide us. Addressing these risks requires not just caution but decisive governance and ethical frameworks. Only then can humanity hope to steer AI toward progress rather than peril.

Source: Forbes

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Energy crisis in Iran sparks protests over crypto mining

Iran’s energy shortage has sparked public anger, with residents blaming crypto mining and government mismanagement for blackouts and water scarcity. Demonstrations have broken out across several towns, with protesters demanding accountability.

The crisis has been exacerbated by record drought, soaring summer heat, and the drying of Lake Urmia. Tehran government buildings have shut down to save electricity, and hospitals face power cuts affecting patient care.

Videos shared on social media show protesters chanting ‘water, electricity, life – these are our indisputable rights’ as outages hit homes and businesses. Small traders say they cannot keep shops open, while medics in darkened wards have used handheld fans.

Critics say energy is diverted to IRGC-linked crypto mining, while experts warn of long-term mismanagement. President Masoud Pezeshkian has described the situation as ‘serious and unimaginable’, urging action as public resentment grows ahead of a volatile political season.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin case deepens in Czech politics with arrest

Czech police have detained convicted drug trafficker Tomas Jirikovsky in connection with a $45 million Bitcoin donation that triggered a political crisis earlier this year. Assets linked to him were seized in raids by the National Centre for Combating Organised Crime.

Prosecutors confirmed the case is now focused on suspected money laundering and drug trafficking, split off from a broader investigation disclosed in May. Jirikovsky, identified as the donor of 468 Bitcoin to the Ministry of Justice, was taken into custody in Breclav.

Former Justice Minister Pavel Blazek accepted the donation without verifying its origins. He resigned in May after revelations that Jirikovsky was behind the transfer. An audit later concluded the ministry should never have accepted the funds.

The scandal has shaken Czech politics, prompting a failed no-confidence vote and renewed calls from the opposition for further ministerial departures. Current Justice Minister Eva Decroix has pledged to release a detailed case timeline as scrutiny mounts before the October elections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK links Lazarus Group to Lykke cryptocurrency theft

The British Treasury has linked state-backed North Korean hackers to a significant theft of Bitcoin, Ethereum, and other cryptocurrencies from the Swiss platform Lykke. The hack forced Lykke to suspend trading and enter liquidation, leaving founder Richard Olsen bankrupt and under legal scrutiny.

The Lazarus Group, Pyongyang's cyber unit, has reportedly carried out a series of global cryptocurrency heists to fund weapons programmes and bypass international sanctions. Although the evidence remains inconclusive, stolen Lykke funds may have been laundered through crypto firms.

Regulators had previously warned that Lykke was not authorised to offer financial services in the UK. Over 70 customers have filed claims totalling £5.7 million in UK courts, while Olsen’s Swiss parent company entered liquidation last year.

He was declared bankrupt in January and faces ongoing criminal investigations in Switzerland.

The Lazarus Group continues to be implicated in high-profile cryptocurrency attacks worldwide, highlighting vulnerabilities in digital asset exchanges and the challenges authorities face in recovering stolen funds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!