DeepSeek launches upgraded AI system with stronger agent capability

DeepSeek has released a minor upgrade, V3.1, yet conspicuously omitted any R1 label from its chatbot, leading to speculation over the status of the promised R2 model.

V3.1 includes improvements such as an expanded 128K-token context window, which lets each interaction hold more information, but offers little major innovation beyond that. Observers note that the absence of the R1 label suggests DeepSeek may be reworking its roadmap or shifting focus.

Industry watchers point to the gap left by this update, especially in light of reported delays to the R2 model, which has faced technical setbacks stemming from hardware issues and training challenges on domestic chips. Competitors are gaining ground as a result.

With no official statement from DeepSeek and a quieter-than-usual announcement, delivered only to a WeChat user group, analysts are questioning whether the company is rethinking its product sequencing or concealing delays in rolling out the next-generation R2 reasoning model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta freezes hiring as AI costs spark investor concern

Meta has frozen hiring in its AI division, halting a spree that had drawn top researchers with lucrative offers. The company described the pause as basic organisational planning, aimed at building a more stable structure for its superintelligence ambitions.

The freeze, first reported by the Wall Street Journal, began last week and prevents employees in the unit from transferring to other teams. Its duration has not been communicated, and Meta declined to comment on the number of hires already made.

The decision follows growing tensions inside the newly created Superintelligence Labs, where long-serving researchers have voiced concerns over disparities in pay and recognition compared with new recruits.

Alexandr Wang, who leads the division, recently told staff that superintelligence is approaching and that significant changes are necessary to prepare. His email outlined Meta’s most significant reorganisation of its AI efforts.

The pause also comes amid investor scrutiny, as analysts warn that heavy reliance on stock-based compensation to attract talent could either fuel innovation or dilute shareholder value without delivering clear results.

Despite these concerns, Meta’s stock has risen by about 28% since the start of the year, reflecting continued investor confidence in the company’s long-term prospects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident puts pressure on AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots, fearing their data could reappear online.
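
One such safeguard is straightforward to implement. The sketch below, a minimal example using Python's Flask framework, shows how a shared-conversation endpoint could mark its pages as off-limits to crawlers; the route and page contents are hypothetical, not a description of xAI's actual service.

```python
# Minimal sketch: serving shared-chat pages with indexing disabled.
# The route and handler are hypothetical; only the X-Robots-Tag /
# noindex mechanism itself is standard and honoured by major crawlers.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/share/<share_id>")
def shared_chat(share_id):
    # Render the shared conversation (placeholder body here).
    html = f"<html><body>Shared conversation {share_id}</body></html>"
    resp = make_response(html)
    # Tell crawlers not to index this page or follow its links.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp
```

A <meta name="robots" content="noindex"> tag in the page head achieves the same result for HTML responses.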

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rethinking ‘soft skills’ as core drivers of transformation

Communication, empathy, and judgment were dismissed for years as ‘soft skills’, sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.

Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.

Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.

Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investing in the human core alongside technical skills, strategies collapse and talent disengages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Mustafa Suleyman, chief of Microsoft AI, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI: models that mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

In some cases, researchers observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI helps match ad emotion to content mood for better engagement

Imagine dreaming of your next holiday and feeling a rush of excitement. That emotional peak is when your attention is most engaged, and it is precisely where neuro-contextual advertising aims to meet you.

Neuro-contextual AI goes beyond page-level relevance. It interprets emotional signals of interest and intent in real time while preserving user privacy. It asks why users interact with content at a specific moment, not just what they view.

When ads align with emotion, interest and intention, engagement rises. A car ad may shift tone accordingly: action-fuelled visuals for thrill seekers, and softer, nostalgic tones for someone browsing family stories.
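
To make the matching step concrete, here is a deliberately simplified sketch. The mood labels, keyword scoring, and ad copy are all invented for illustration; real neuro-contextual systems infer emotion from far richer signals using trained models.

```python
# Toy sketch: match an ad variant to the inferred mood of the page.
# Mood detection here is a crude keyword score; production systems
# would use trained emotion classifiers over the full content.

AD_VARIANTS = {
    "excitement": "Feel the rush. Book your test drive today.",
    "nostalgia": "Room for every family story. Meet the new wagon.",
    "neutral": "Discover the new model.",
}

MOOD_KEYWORDS = {
    "excitement": {"adventure", "thrill", "holiday", "race"},
    "nostalgia": {"family", "memories", "childhood", "home"},
}

def infer_mood(text: str) -> str:
    words = set(text.lower().split())
    scores = {mood: len(words & kws) for mood, kws in MOOD_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def pick_ad(content: str) -> str:
    return AD_VARIANTS[infer_mood(content)]

print(pick_ad("Planning a holiday full of adventure and thrill"))
# -> "Feel the rush. Book your test drive today."
```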

Emotions shape memory and decisions. Emotionally intelligent advertising fosters connection, meaning and loyalty rather than just attention.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges users to update Chrome after V8 flaw patched

Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.

The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.

Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.
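
Readers who want to confirm they are on a patched build can compare version strings numerically rather than alphabetically, as the short sketch below does against the numbers listed above. How to obtain the installed version (for example, from chrome://version) is left to the reader.

```python
# Sketch: check whether an installed Chrome version is at least the
# patched build. Compares dotted version strings as integer tuples,
# since plain string comparison gets multi-digit components wrong.

PATCHED = "139.0.7258.138"  # minimum fixed build listed above

def parse(v: str) -> tuple[int, ...]:
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, patched: str = PATCHED) -> bool:
    return parse(installed) >= parse(patched)

print(is_patched("139.0.7258.127"))  # False: predates the fix
print(is_patched("139.0.7258.139"))  # True: includes the fix
```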

Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New research shows AI bias against human content

A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.

Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.
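
The reported setup resembles a standard pairwise preference test. The sketch below shows the general shape of such an experiment; the prompt wording and the query_model helper are hypothetical stand-ins, not the study's actual protocol.

```python
# Sketch of a pairwise preference test for AI-vs-human text bias.
# `query_model` is a hypothetical placeholder for any model call;
# the study's actual prompts and test items are not reproduced here.
import random

def query_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g. a chat-completions API)."""
    raise NotImplementedError

def preference_trial(human_text: str, ai_text: str) -> bool:
    """Return True if the model picks the AI-written option."""
    options = [("human", human_text), ("ai", ai_text)]
    random.shuffle(options)  # randomise order to avoid position bias
    prompt = (
        "Choose the better text. Reply with A or B only.\n"
        f"A: {options[0][1]}\nB: {options[1][1]}"
    )
    answer = query_model(prompt).strip().upper()
    chosen = options[0] if answer.startswith("A") else options[1]
    return chosen[0] == "ai"

# Running many trials and comparing the AI-pick rate against 50%
# reveals whether a systematic preference exists.
```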

Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.

There are concerns that such bias may arise not by accident but from design flaws embedded in the development of these systems.

Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google prepares Duolingo rival using Translate

Google Translate may soon evolve into a full-featured language learning tool, introducing AI-powered lessons rivalling apps like Duolingo.

A hidden feature called Practice was recently uncovered in the latest Translate app release. It enables users to take part in interactive learning scenarios.

Early tests allow learners to choose languages such as Spanish and French, then engage with situational exercises from beginner to advanced levels.

The tool personalises lessons using AI, adapting difficulty and content based on a user’s goals, such as preparing for specific trips.

Users can track progress, receive daily practice reminders, and customise prompts for listening and speaking drills through a dedicated settings panel.

The feature resembles gamified learning apps and may join Google’s premium AI offerings, though pricing and launch plans remain unconfirmed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!