Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.
The out-of-bounds write issue was discovered by Big Sleep, an AI agent built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.
Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.
Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.
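The bug class at issue, an out-of-bounds write, occurs when code stores data past the end of a buffer, corrupting adjacent memory. The following minimal C sketch illustrates the pattern and the bounds check that prevents it; it is purely illustrative, unrelated to V8's actual code, and the function names `write_unchecked` and `write_checked` are hypothetical:

```c
#include <stddef.h>

/*
 * Illustrative sketch of the out-of-bounds write bug class
 * (CWE-787) behind flaws like CVE-2025-9132 -- not V8's actual code.
 */

/* Unsafe: no bounds check; an index >= the buffer length corrupts
 * whatever happens to sit in adjacent memory. */
void write_unchecked(char *buf, size_t idx, char value) {
    buf[idx] = value; /* out-of-bounds write when idx is too large */
}

/* Safe: reject any index outside the buffer before writing. */
int write_checked(char *buf, size_t len, size_t idx, char value) {
    if (buf == NULL || idx >= len)
        return -1; /* refuse the out-of-bounds write */
    buf[idx] = value;
    return 0;
}
```

In an engine like V8, the equivalent missing check lets attacker-controlled JavaScript steer where the write lands, which is why such flaws are rated high severity.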
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.
Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.
Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.
There are concerns that such bias may stem not from accident but from design flaws embedded in how these systems are built.
Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google’s upcoming Pixel 10 smartphones are tipped to place AI at the centre of the user experience, with three new features expected to redefine how people use their devices.
While hardware upgrades are anticipated at the Made by Google event, much of the excitement revolves around the AI tools that may debut.
One feature, called Help Me Edit, is designed for Google Photos. Instead of spending time on manual edits, users could describe the change they want, such as altering the colour of a car, and the AI would adjust instantly.
The feature expands on the Pixel 9’s generative tools, promising far greater control and speed.
Another addition, Camera Coach, could offer real-time guidance on photography. Using Google’s Gemini AI, the phone may provide step-by-step advice on framing, lighting, and composition, acting as a digital photography tutor.
Finally, Pixel Sense is rumoured to be a proactive personal assistant that anticipates user needs. Learning patterns from apps such as Gmail and Calendar, it could deliver predictive suggestions and take actions across third-party services, bringing the smartphone closer to a truly adaptive companion.
These features suggest that Google is betting heavily on AI to give the Pixel 10 a competitive edge.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk has taken an unexpectedly conciliatory turn in his feud with Sam Altman by praising a GPT-5 response in ChatGPT, ‘I don’t know’, as more valuable than an overconfident answer. Musk described it as ‘a great answer’ from the AI chatbot.
At one point, xAI’s Grok chatbot sided with Altman, while ChatGPT offered a supportive nod to Musk. These crossed chatbot loyalties have added confusion, and a further layer of irony, to an already bitter clash.
Musk’s praise of a modest AI response contrasts sharply with the bold claims of supremacy that dominate the industry. It signals a rare acknowledgement of the value of restraint and clarity, even from an avowed critic of OpenAI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman has warned that the United States may be underestimating China’s rapid advances in AI. He argued that export controls on semiconductors are unlikely to be a reliable long-term solution to the global AI race.
At a press briefing in San Francisco, Altman said the competition cannot be reduced to a simple scoreboard. China can expand inference capacity more quickly, even as Washington tightens restrictions on advanced semiconductor exports.
He expressed doubts about the effectiveness of purely policy-driven approaches. ‘You can export-control one thing, but maybe not the right thing… workarounds exist,’ Altman said. He stressed that chip controls may not keep pace with technological realities.
His comments come as US policy becomes increasingly complex. President Trump halted advanced chip supplies in April, while his administration recently allowed sales of ‘China-safe’ chips, requiring Nvidia and AMD to share a portion of the revenue. Critics call the rules contradictory and difficult to enforce.
Meanwhile, Chinese firms are accelerating efforts to replace US suppliers, with Huawei and others building domestic alternatives. Altman suggested this push for self-sufficiency could undermine Washington’s goals, raising questions about America’s strategy in the AI race.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.
The company said the change was introduced after the models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue with requests they had already refused.
According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.
Once triggered, the conversation is closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.
The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.
Anthropic added that the feature is experimental and may be adjusted based on user feedback.
The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Apple is again exploring AI-powered robotics, reportedly working on prototypes including a tabletop assistant and lifelike upgrades to Siri. A home display may launch in 2026, with a robot device expected in 2027, though neither is confirmed for release.
One concept, codenamed J595 and nicknamed the ‘Pixar Lamp’, features a swivelling screen on a robotic arm that tracks user movement. The device is envisioned as a personal assistant that responds to conversations using facial recognition and motorised movement.
Other prototypes under evaluation include mobile bots and humanoid robots for industrial use.
The devices would run Apple’s new internal software platform, ‘Charismatic,’ designed for voice commands, personalised content, and smart home automation. Apple has not confirmed robotics, but CEO Tim Cook highlighted the company’s AI focus, hinting at upcoming innovations.
Experts note that domestic humanoid robots are still far from mainstream adoption. Gary Marcus, an AI expert and NYU professor, said Apple’s focus on privacy, security, and design suggests that future humanoid robots could benefit from its integrated hardware and software.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nexon’s AI-generated ads, circulating primarily on TikTok, combine unnatural expressions with awkward speech patterns, triggering community outrage.
Fans on Reddit slammed the ads as ‘embarrassing’ and akin to ‘cheap, lazy marketing’, arguing that Nexon had bypassed genuine collaborators in favour of synthetic substitutes that were far from subtle.
Critics warned that these deepfake-like promotions undermine the trust and credibility of creators and raise ethical questions over likeness rights and authenticity in AI usage.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI-powered stuffed animals are transforming children’s play by combining cuddly companionship with interactive learning.
Toys such as Curio’s Grem and Mattel’s AI collaborations offer screen-free alternatives to tablets and smartphones, using chatbots and voice recognition to engage children in conversation and educational activities.
Products like CYJBE’s AI Smart Stuffed Animal integrate tools such as ChatGPT to answer questions, tell stories, and adapt to a child’s mood, all under parental controls for monitoring interactions.
Developers say these toys foster personalised learning and emotional bonds rather than replacing human engagement.
The market has grown rapidly, driven by partnerships between tech and toy companies and early experiments like Grimes’ AI plush Grok.
Regulators are calling for safeguards, and parents are urged to weigh the benefits of interactive AI companions against possible social and ethical concerns.
The sector could reshape childhood play and learning, blending imaginative experiences with algorithmic support rather than relying solely on traditional toys.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!