Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

Researchers sometimes observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps match ad emotion to content mood for better engagement

Imagine dreaming of your next holiday and feeling a rush of excitement. That emotion is when your attention is most engaged. Neuro-contextual advertising aims to meet you at such emotional peaks.

Neuro-contextual AI goes beyond page-level relevance. It interprets emotional signals of interest and intent in real time while preserving user privacy. It asks why users interact with content at a specific moment, not just what they view.

When ads align with emotion, interest and intention, engagement rises. A car ad may shift tone accordingly, action-fuelled visuals for thrill seekers and softer, nostalgic tones for someone browsing family stories.

Emotions shape memory and decisions. Emotionally intelligent advertising fosters connection, meaning and loyalty rather than just attention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Comet browser caught submitting private info in fake shop

Cybersecurity researchers have uncovered a new AI browser exploit that allows attackers to manipulate autonomous systems using fake CAPTCHA checks.

The PromptFix method tricks agentic AI models into executing commands embedded in deceptive web elements invisible to the user.

Guardio Labs demonstrated that the Comet AI browser could be misled into adding items to a cart and auto-filling sensitive data.

Comet completed fake purchases without user confirmation in some tests, raising concerns over AI trust chains and phishing exposure.

Attackers can also exploit AI email agents by embedding malicious links, prompting the system to bypass user review and reveal credentials.

ChatGPT’s Agent Mode showed similar vulnerabilities but confined actions to a sandbox, preventing direct exposure to user systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges users to update Chrome after V8 flaw patched

Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.

The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.

Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.

Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New research shows AI bias against human content

A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.

Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.

Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.

There are concerns that such bias may arise not by accident but by design flaws embedded within the development of these systems.

Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta partners with Scale AI to chase superintelligence

Meta is launching a research lab focused on superintelligence, led by Scale AI founder Alexandr Wang, in an attempt to regain ground in the global AI race.

Mark Zuckerberg is reportedly in talks to invest billions into Scale, reflecting strong confidence in Wang’s data-driven approach and industry influence.

While Meta’s past efforts with its Llama models gained traction, its latest release, Llama 4, failed to meet expectations and drew criticism.

Wang’s appointment arrives during an ongoing talent exodus from Meta, with several senior AI researchers departing for rivals or founding startups.

The new lab is separate from Meta’s existing FAIR division, led by Yann LeCun, who has dismissed the idea of chasing superintelligence. Meta’s partnership with Scale mirrors deals by Microsoft, Amazon, and Google, aiming to secure top AI talent without formal acquisitions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vulnerabilities in municipal software expose sensitive data in Wisconsin

Two critical vulnerabilities have been discovered in an accounting application developed by Workhorse Software and used by more than 300 municipalities in Wisconsin.

The first flaw, CVE-2025-9037, involved SQL server connection credentials stored in plain text within a shared network folder. The second, CVE-2025-9040, allowed backups to be created and restored from the login screen without authentication.

Both issues were disclosed by the CERT Coordination Centre at Carnegie Mellon University following a report from Sparrow IT Solutions. Exploitation could give attackers access to personally identifiable information such as Social Security numbers, financial records and audit logs.

Workhorse has since released version 1.9.4.48019 with security patches, urging municipalities to update their systems immediately. The incident underscores the risks posed by vulnerable software in critical public infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Series K funding pushes Databricks valuation over $100bn

Databricks has secured a fresh funding round that pushes its valuation beyond $100bn, cementing its place among the world’s most valuable private tech firms. The Series K deal marks a sharp rise from the company’s $62bn figure in late 2024 and underscores investor confidence in its long-term AI strategy.

The new capital will accelerate Databricks’ global expansion, fuel acquisitions in the AI space, and support product innovation. Upcoming launches include Agent Bricks, a platform for enterprise-grade AI agents, and Lakebase, a new operational database that extends the company’s ecosystem.

Chief executive Ali Ghodsi said the round was oversubscribed, reflecting strong investor demand. He emphasised that businesses can leverage enterprise data to create secure AI apps and agents, noting that this momentum supports Databricks’ growth across 15,000 customers.

The company has also expanded its role in the broader AI ecosystem through partnerships with Microsoft, Google Cloud, Anthropic, SAP, and Palantir. Last year, it opened a European headquarters in London to cement the UK as a key market and strengthen ties with global enterprises.

Databricks has avoided confirming an IPO timeline, though Ghodsi told CNBC that investor appetite surged after fintech Figma’s listing. With Klarna now eyeing a return to New York, Databricks’ soaring valuation highlights how leading AI firms continue to attract capital even as market conditions shift.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK colleges hit by phishing incident

Weymouth and Kingston Maurward College in Dorset is investigating a recent phishing attack that compromised several email accounts. The breach occurred on Friday, 15 August, during the summer holidays.

Spam emails were sent from affected accounts, though the college confirmed that personal data exposure was minimal.

The compromised accounts may have contained contact information from anyone who previously communicated with the college. Early detection allowed the college to lock down affected accounts promptly, limiting the impact.

A full investigation is ongoing, with additional security measures now in place to prevent similar incidents. The matter has been reported to the Information Commissioner’s Office (ICO).

Phishing attacks involve criminals impersonating trusted entities to trick individuals into revealing sensitive information such as passwords or personal data. The college reassured students, staff, and partners that swift action and robust systems limited the disruption.

The colleges, which merged just over a year ago, recently received a ‘Good’ rating across all areas in an Ofsted inspection, reflecting strong governance and oversight amid the cybersecurity incident.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!