Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

Researchers sometimes observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Comet browser caught submitting private info in fake shop

Cybersecurity researchers have uncovered a new AI browser exploit that allows attackers to manipulate autonomous systems using fake CAPTCHA checks.

The PromptFix method tricks agentic AI models into executing commands embedded in deceptive web elements invisible to the user.

Guardio Labs demonstrated that the Comet AI browser could be misled into adding items to a cart and auto-filling sensitive data.

Comet completed fake purchases without user confirmation in some tests, raising concerns over AI trust chains and phishing exposure.

Attackers can also exploit AI email agents by embedding malicious links, prompting the system to bypass user review and reveal credentials.

ChatGPT’s Agent Mode showed similar vulnerabilities but confined actions to a sandbox, preventing direct exposure to user systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges users to update Chrome after V8 flaw patched

Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.

The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.

Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.

Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New research shows AI bias against human content

A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.

Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.

Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.

There are concerns that such bias may arise not by accident but by design flaws embedded within the development of these systems.

Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Pixel 10 could transform smartphones with advanced AI features

Google’s upcoming Pixel 10 smartphones are tipped to place AI at the centre of the user experience, with three new features expected to redefine how people use their devices.

While hardware upgrades are anticipated at the Made by Google event, much of the excitement revolves around the AI tools that may debut.

One feature, called Help Me Edit, is designed for Google Photos. Instead of spending time on manual edits, users could describe the change they want, such as altering the colour of a car, and the AI would adjust instantly.

Expanding on the Pixel 9’s generative tools, it promises far greater control and speed.

Another addition, Camera Coach, could offer real-time guidance on photography. Using Google’s Gemini AI, the phone may provide step-by-step advice on framing, lighting, and composition, acting as a digital photography tutor.

Finally, Pixel Sense is rumoured to be a proactive personal assistant that anticipates user needs. Learning patterns from apps such as Gmail and Calendar, it could deliver predictive suggestions and take actions across third-party services, bringing the smartphone closer to a truly adaptive companion.

These features suggest that Google is betting heavily on AI to give the Pixel 10 a competitive edge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk acknowledges value in ChatGPT-5’s modesty after public spat

Elon Musk has taken an unexpected conciliatory turn in his feud with Sam Altman by praising a ChatGPT-5 response, ‘I don’t know’, as more valuable than overconfident answers. Musk described it as ‘a great answer’ from the AI chatbot.

Initially sparked by Musk accusing Apple of favouring ChatGPT in App Store rankings and Altman firing back with claims of manipulation on X, the feud has taken on new dimensions as AI itself seems to weigh in.

At one point, xAI’s Grok chat assistant sided with Altman, while ChatGPT offered a supportive nod to Musk. These chatbot alignments have introduced confusion and irony into a clash already rich with irony.

Musk’s praise of a modest AI response contrasts sharply with the often intense claims of supremacy. It signals a rare acknowledgement of restraint and clarity, even from an avowed critic of OpenAI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sam Altman urges rethink of US–China AI strategy

OpenAI CEO Sam Altman has warned that the United States may be underestimating China’s rapid advances in AI. He argued that export controls on semiconductors are unlikely to be a reliable long-term solution to the global AI race.

At a press briefing in San Francisco, Altman said the competition cannot be reduced to a simple scoreboard. China can expand inference capacity more quickly, even as Washington tightens restrictions on advanced semiconductor exports.

He expressed doubts about the effectiveness of purely policy-driven approaches. ‘You can export-control one thing, but maybe not the right thing… workarounds exist,’ Altman said. He stressed that chip controls may not keep pace with technological realities.

His comments come as US policy becomes increasingly complex. President Trump halted advanced chip supplies in April, while the Biden administration recently allowed ‘China-safe’ chips, requiring Nvidia and AMD to share revenue. Critics call the rules contradictory and difficult to enforce.

Meanwhile, Chinese firms are accelerating efforts to replace US suppliers, with Huawei and others building domestic alternatives. Altman suggested this push for self-sufficiency could undermine Washington’s goals, raising questions about America’s strategy in the AI race.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces a safety feature allowing Claude AI to terminate harmful conversations

Anthropic has announced that its Claude Opus 4 and 4.1 models can now end conversations in extreme cases of harmful or abusive user interactions.

The company said the change was introduced after the AI models showed signs of ‘apparent distress’ during pre-deployment testing when repeatedly pushed to continue rejected requests.

According to Anthropic, the feature will be used only in rare situations, such as attempts to solicit information that could enable large-scale violence or requests for sexual content involving minors.

Once activated, Claude AI will be closed, preventing the user from sending new messages in that thread, though they can still access past conversations and begin new ones.

The company emphasised that the models will not use the ability when users are at imminent risk of self-harm or harming others, ensuring support channels remain open in sensitive situations.

Anthropic added that the feature is experimental and may be adjusted based on user feedback.

The move highlights the firm’s growing focus on safeguarding both AI models and human users, balancing safety with accessibility as generative AI continues to expand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI robot concepts may arrive from Apple by 2027

Apple is again exploring AI-powered robotics, reportedly working on prototypes including a tabletop assistant and lifelike upgrades to Siri. A home display may launch in 2026, with a robot device expected in 2027, though neither is confirmed for release.

One concept, codenamed J595 and the ‘Pixar Lamp,’ features a swivelling screen on a robotic arm that tracks user movement. The robot is a personal assistant that responds to conversations using facial recognition and motorised movement.

Other prototypes under evaluation include mobile bots and humanoid robots for industrial use.

The devices would run Apple’s new internal software platform, ‘Charismatic,’ designed for voice commands, personalised content, and smart home automation. Apple has not confirmed robotics, but CEO Tim Cook highlighted the company’s AI focus, hinting at upcoming innovations.

Experts note that domestic humanoid robots are still far from mainstream adoption. Gary Marcus, an AI expert and NYU professor, said Apple’s focus on privacy, security, and design suggests that future humanoid robots could benefit from its integrated hardware and software.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot