Sam Altman urges rethink of US–China AI strategy

OpenAI CEO Sam Altman has warned that the United States may be underestimating China’s rapid advances in AI. He argued that export controls on semiconductors are unlikely to be a reliable long-term solution to the global AI race.

At a press briefing in San Francisco, Altman said the competition cannot be reduced to a simple scoreboard. China, he noted, can expand inference capacity more quickly than the United States, even as Washington tightens restrictions on advanced semiconductor exports.

He expressed doubts about the effectiveness of purely policy-driven approaches. ‘You can export-control one thing, but maybe not the right thing… workarounds exist,’ Altman said. He stressed that chip controls may not keep pace with technological realities.

His comments come as US policy grows increasingly complex. The Trump administration halted advanced chip sales in April, then recently approved exports of ‘China-safe’ chips on the condition that Nvidia and AMD share a portion of the resulting revenue with the US government. Critics call the rules contradictory and difficult to enforce.

Meanwhile, Chinese firms are accelerating efforts to replace US suppliers, with Huawei and others building domestic alternatives. Altman suggested this push for self-sufficiency could undermine Washington’s goals, raising questions about America’s strategy in the AI race.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once triggered, Claude ends the conversation, preventing the user from sending new messages in that thread, though they can still access past conversations and start new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

AI robot concepts may arrive from Apple by 2027

Apple is again exploring AI-powered robotics, reportedly working on prototypes including a tabletop assistant and lifelike upgrades to Siri. A home display may launch in 2026, with a robot device expected in 2027, though neither is confirmed for release.

One concept, codenamed J595 and nicknamed the ‘Pixar Lamp’, features a swivelling screen on a robotic arm that tracks user movement. It would act as a personal assistant, using facial recognition and motorised movement to follow and respond to conversations.

Other prototypes under evaluation include mobile bots and humanoid robots for industrial use.

The devices would run Apple’s new internal software platform, ‘Charismatic’, designed for voice commands, personalised content, and smart home automation. Apple has not confirmed any robotics products, but CEO Tim Cook has highlighted the company’s AI focus, hinting at upcoming innovations.

Experts note that domestic humanoid robots are still far from mainstream adoption. Gary Marcus, an AI expert and NYU professor, said Apple’s focus on privacy, security, and design suggests that future humanoid robots could benefit from its integrated hardware and software.

The First Descendant faces backlash over AI-generated streamer ads

Nexon’s new promotional ads for their looter-shooter The First Descendant have ignited controversy after featuring AI-generated avatars that closely mimic real content creators, one resembling streamer DanieltheDemon.

The ads, circulating primarily on TikTok, combine unnatural expressions with awkward speech patterns, triggering community outrage.

Fans on Reddit slammed the ads as ‘embarrassing’ and akin to ‘cheap, lazy marketing’, arguing that Nexon had bypassed genuine collaborators in favour of synthetic substitutes that were far from subtle.

Critics warned that these deepfake-like promotions undermine the trust and credibility of creators and raise ethical questions over likeness rights and authenticity in AI usage.

Bragg Gaming responds to cyber incident affecting internal systems

Bragg Gaming Group has confirmed a cybersecurity breach affecting its internal systems, discovered in the early hours of 16 August.

The company stated the breach has not impacted operations or customer-facing platforms, nor compromised any personal data so far.

External cybersecurity experts have been engaged to assist with mitigation and investigation, following standard industry protocols.

Bragg has emphasised its commitment to transparency and will provide updates as the investigation progresses via its official website.

The firm continues to operate normally, with all internal and external services reportedly unaffected by the incident at this time.

AI toys change the way children learn and play

AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.

Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.

Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.

Developers say these toys foster personalised learning and emotional bonds without replacing human engagement entirely.

The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.

At the same time, experts warn about privacy risks, the collection of children’s data, and potential reductions in face-to-face interaction.

Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.

The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.

Fake Telegram Premium site spreads dangerous malware

A fake Telegram Premium website infects users with Lumma Stealer malware through a drive-by download, requiring no user interaction.

The domain, telegrampremium[.]app, hosts a malicious executable named start.exe, which begins stealing sensitive data as soon as it runs.

The malware targets browser-stored credentials, crypto wallets, clipboard data and system files, using advanced evasion techniques to bypass antivirus tools.

Obfuscated with cryptors and hidden behind real services like Telegram, the malware also communicates with temporary domains to avoid takedown.

Analysts warn that it manipulates Windows systems, evades detection, and leaves little trace by disguising its payloads as real image files.
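Payloads that masquerade as image files can often be caught by comparing a file’s leading bytes against its claimed extension. The sketch below is illustrative only, not a substitute for real endpoint tooling, and the filenames are invented for the example:

```python
import os

# A few well-known file signatures (magic bytes) and the extensions they imply.
MAGIC_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": ".png",   # PNG image
    b"\xff\xd8\xff": ".jpg",        # JPEG image
    b"MZ": ".exe",                  # Windows PE executable
}

def sniff_type(data: bytes):
    """Return the extension implied by the file's leading bytes, if known."""
    for magic, ext in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return ext
    return None

def extension_mismatch(filename: str, data: bytes) -> bool:
    """True if the content signature contradicts the file's extension."""
    actual = sniff_type(data)
    claimed = os.path.splitext(filename)[1].lower()
    return actual is not None and actual != claimed

# An executable masquerading as a picture is flagged:
print(extension_mismatch("holiday.png", b"MZ\x90\x00"))  # True
```

Real scanners use far larger signature databases and inspect structure beyond the header, but the principle — trust content, not names — is the same.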

To defend against such threats, organisations are urged to strengthen their cybersecurity controls, adopting behaviour-based detection and enforcing stricter download policies.

Zoom patches critical Windows flaw with high risk of takeover

Zoom has patched a critical Windows vulnerability that could let attackers fully take control of devices without needing credentials. The flaw, CVE-2025-49457, stems from the app failing to use explicit paths when loading DLLs, allowing malicious files to be executed.
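The underlying mistake — loading a library by bare name and letting a search path decide which file is used — can be illustrated with a small Python sketch of the resolution logic. The directory layout and the `helper.dll` name are invented for illustration; this is not Zoom’s code:

```python
import os
import tempfile

def resolve_library(name, search_dirs):
    """Mimic unsafe DLL resolution: the first match along the search path wins."""
    for directory in search_dirs:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Simulate a writable, attacker-controlled directory appearing ahead of the
# application's own directory on the search path.
with tempfile.TemporaryDirectory() as attacker_dir, \
     tempfile.TemporaryDirectory() as app_dir:
    for d in (attacker_dir, app_dir):
        with open(os.path.join(d, "helper.dll"), "wb") as f:
            f.write(b"stub")

    # Loading by bare name picks up the attacker's copy first.
    found = resolve_library("helper.dll", [attacker_dir, app_dir])
    print(found.startswith(attacker_dir))  # True: hijacked

    # Loading by an explicit, absolute path sidesteps the search entirely.
    explicit = os.path.join(app_dir, "helper.dll")
    print(os.path.isfile(explicit))  # True: the intended library
```

This is why the fix for flaws of this class is to load libraries by explicit path (or from a restricted, trusted directory) rather than by bare name.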

Attackers could exploit this to install malware or extract sensitive data such as recordings or user credentials, even pivoting deeper into networks. The issue affects several Zoom products, including Workplace, VDI, Rooms, and Meeting SDK, all before version 6.3.10.

Zoom urges users to update their app immediately, as the flaw requires no advanced skill and can be triggered with minimal access. The case underscores how even widely used collaboration tools can carry critical security weaknesses.

New Gemini update remembers your preferences, until you tell it not to

Google has begun rolling out a feature that enables its Gemini AI chatbot to automatically remember key personal details and preferences from previous chats, unless users opt out. The feature builds on earlier functionality in which memory could only be activated on request.

The update is enabled by default on Gemini 2.5 Pro in select countries and will be extended to the 2.5 Flash version later. Users can disable it under Personal Context in the app’s settings.

Alongside auto-memory, Google is introducing Temporary Chats, a privacy tool for one-off interactions. These conversations aren’t saved to your history, aren’t used to train Gemini, and are deleted after 72 hours.

Google is also renaming ‘Gemini Apps Activity’ to ‘Keep Activity’, a setting that, when enabled, lets Google sample uploads like files and photos to improve services from 2 September, while still offering the option to opt out.

Top cybersecurity vendors double down on AI-powered platforms

The cybersecurity market is consolidating as AI reshapes defence strategies. Platform-based solutions are replacing point tools to cut complexity, counter AI threats, and ease skill shortages. IDC predicts that security spending will rise 12% in 2025 and reach $377 billion by 2028.

Vendors embed AI agents, automation, and analytics into unified platforms. Palo Alto Networks’ Cortex XSIAM reached $1 billion in bookings, and its $25 billion CyberArk acquisition expands into identity management. Microsoft blends Azure, OpenAI, and Security Copilot to safeguard workloads and data.

Cisco integrates AI across networking, security, and observability, bolstered by its acquisition of Splunk. CrowdStrike rebounds from its 2024 outage with Charlotte AI, while Cloudflare shifts its focus from delivery to AI-powered threat prediction and optimisation.

Fortinet’s platform spans networking and security, strengthened by Suridata’s SaaS posture tools. Zscaler boosts its Zero Trust Exchange with Red Canary’s MDR tech. Broadcom merges Symantec and Carbon Black, while Check Point pushes its AI-driven Infinity Platform.

Identity stays central, with Okta leading access management and teaming with Palo Alto on integrated defences. The companies aim to platformise, integrate AI, and automate their operations to dominate an increasingly complex cyberthreat landscape.
