AI threatens global knowledge diversity

AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions.

Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives.

Dominant language models often overlook oral histories and regional practices, including specific ecological knowledge essential for sustainable living in tropical climates.

Experts warn of a looming ‘knowledge collapse’ where alternative viewpoints fade away simply because they are statistically less prevalent in training data.

Future generations may find themselves disconnected from vital human insights as algorithms reinforce a homogenised worldview through recursive feedback loops.

Preserving diverse epistemologies remains crucial for addressing global challenges, such as the climate crisis, rather than relying solely on Silicon Valley’s version of intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Outage at Cloudflare takes multiple websites offline worldwide

Cloudflare has suffered a major outage, disrupting access to multiple high-profile websites, including X and Letterboxd. Users encountered internal server error messages linked to Cloudflare’s network, prompting concerns of a broader infrastructure failure.

The problems began around 11.30 a.m. UK time, with some sites briefly loading after refreshes. Cloudflare issued an update minutes later, confirming that it was aware of an incident affecting multiple customers but did not identify a cause or timeline for resolution.

Outage tracker Downdetector was also intermittently unavailable, later showing a sharp rise in reports once restored. Affected sites displayed repeated error messages advising users to try again later, indicating partial service degradation rather than full shutdowns.

Cloudflare provides core internet infrastructure, including traffic routing and cyberattack protection, which means failures can cascade across unrelated services. Similar disruption followed an AWS incident last month, highlighting the systemic risk of centralised web infrastructure.

The company states that it is continuing to investigate the issue. No mitigation steps or source of failure have yet been disclosed, and Cloudflare has warned that further updates will follow once more information becomes available.

New report warns retailers are unprepared for AI-powered attacks

Retailers are entering the peak shopping season amid warnings that AI-driven cyber threats will accelerate. LevelBlue’s latest Spotlight Report says nearly half of retail executives are already seeing significantly higher attack volumes, while one-third have suffered a breach in the past year.

The sector is under pressure to roll out AI-driven personalisation and new digital channels, yet only a quarter feel ready to defend against AI attacks. Readiness gaps also cover deepfakes and synthetic identity fraud, even though most expect these threats to arrive soon.

Supply chain visibility remains weak, with almost half of executives reporting limited insight into software suppliers. Few list supplier security as a near-term priority, fuelling concern that vulnerabilities could cascade across retail ecosystems.

High-profile breaches have pushed cybersecurity into the boardroom, and most retailers now integrate security teams with business operations. Leadership performance metrics and risk appetite frameworks are increasingly aligned with cyber resilience goals.

Planned investment is focused on application security, business-wide resilience processes, and AI-enabled defensive tools. LevelBlue argues that sustained spending and cultural change are required if retailers hope to secure consumer trust amid rapidly evolving threats.

Meta pushes deeper into robotics with key hardware move

Meta is expanding its robotics ambitions by appointing Li-Chen Miller, previously head of its smart glasses portfolio, as the first product manager for Reality Labs’ robotics division. Her transfer marks a significant shift in Meta’s hardware priorities following the launch of its latest augmented reality devices.

The company is reportedly developing a humanoid assistant known internally as Metabot within the same organisation that oversees its AR and VR platforms. Former Cruise executive Marc Whitten leads the robotics group, supported by veteran engineer Ning Li and renowned MIT roboticist Sangbae Kim.

Miller’s move underscores Meta’s aim to merge its AI expertise with physical robotics. The new team collaborates with the firm’s Superintelligence Lab, which is building a ‘world model’ capable of powering dextrous motion and real-time reasoning.

Analysts see the strategy as Meta’s attempt to future-proof its ecosystem and diversify Reality Labs, which continues to post heavy losses. The company’s growing investment in humanoid design could bring home-use robots closer to reality, blending social AI with the firm’s long-term vision for the metaverse.

NotebookLM gains automated Deep Research tool and wider file support

Google is expanding NotebookLM with Deep Research, a tool designed to handle complex online inquiries and produce structured, source-grounded reports. The feature acts like a dedicated researcher, planning its own process and gathering material across the web.

Users can enter a question, choose a research style, and let Deep Research browse relevant sites before generating a detailed briefing. The tool runs in the background, allowing additional sources to be added without disrupting the workflow or leaving the notebook.

NotebookLM now supports more file types, including Google Sheets, Drive URLs, PDFs stored in Drive, and Microsoft Word documents. Google says this enables tasks such as summarising spreadsheets and quickly importing multiple Drive files for analysis.

The update continues the service’s gradual expansion since its late-2023 launch, which has brought features such as Video Overviews for turning dense materials into visual explainers. These follow earlier additions, such as Audio Overviews, which create podcast-style summaries of shared documents.

Google also released NotebookLM apps for Android and iOS earlier this year, extending access beyond desktop. The company says the latest enhancements should reach all users within a week.

Embodied AI steps forward with DeepMind’s SIMA 2 research preview

Google DeepMind has released a research preview of SIMA 2, an upgraded generalist agent that draws on Gemini’s language and reasoning strengths. The system moves beyond simple instruction following, aiming to understand user intent and interact more effectively with its environment.

SIMA 1 relied on game data to learn basic tasks across diverse 3D worlds but struggled with complex actions. DeepMind says SIMA 2 represents a step change, completing harder objectives in unfamiliar settings and adapting its behaviour through experience without heavy human supervision.

The agent is powered by the Gemini 2.5 Flash-Lite model and built around the idea of embodied intelligence, where an AI acts through a body and responds to its surroundings. Researchers say this approach supports a deeper understanding of context, goals, and the consequences of actions.

Demos show SIMA 2 describing landscapes, identifying objects, and choosing relevant tasks in titles such as No Man’s Sky. It also reveals its reasoning, interprets clues, uses emojis as instructions, and navigates photorealistic worlds generated by Genie, DeepMind’s own environment model.

Self-improvement comes from Gemini models that create new tasks and score attempts, enabling SIMA 2 to refine its abilities through trial and error. DeepMind sees these advances as groundwork for future general-purpose robots, though the team has not shared timelines for wider deployment.

AI-powered Google Photos features land on iOS, search expands to 100+ countries

Google Photos is introducing prompt-based edits, an ‘Ask’ button, and style templates across iOS and Android. In the US, iPhone users can describe edits by voice or text, with a redesigned editor for faster controls. The rollout builds on prompt editing’s August debut on the Pixel 10.

Personalised edits now recognise people from face groups, so users can issue multi-person requests, such as removing sunglasses or opening eyes. The option sits under ‘Help me edit’, where changes apply to each named person. It is designed for faster, more granular everyday fixes.

A new Ask button serves as a hub for AI requests, from questions about a photo to suggested edits and related moments. The interface surfaces chips that hint at actions users can take. The Ask experience is rolling out in the US on both iOS and Android.

Google is also adding AI templates that turn a single photo into set formats, such as retro portraits or comic-style panels. The company states that its Nano Banana model powers these creative styles and that templates will be available next week under the Create tab on Android in the US and India.

AI search in Google Photos, first launched in the US, is expanding to over 100 countries with support for 17 languages. Markets include Argentina, Australia, Brazil, India, Japan, Mexico, Singapore, and South Africa. Google says this brings natural-language photo search to a far greater number of users.

Vision AI Companion turns Samsung TVs into conversational AI platforms

Samsung has unveiled the Vision AI Companion, an advanced conversational AI platform designed to transform the television into a connected household hub.

Unlike voice assistants meant for personal devices, the Vision AI Companion operates on the communal screen, enabling families to ask questions, plan activities, and receive visualised, contextual answers through natural dialogue.

Built into Samsung’s 2025 TV lineup, the system integrates an upgraded Bixby and supports multiple large language models, including Microsoft Copilot and Perplexity.

With its multi-AI agent platform, Vision AI Companion allows users to access personalised recommendations, real-time information, and multimedia responses without leaving their current programme.

It supports 10 languages and includes features such as Live Translate, AI Gaming Mode, Generative Wallpaper, and AI Upscaling Pro. The platform runs on One UI Tizen, offering seven years of software upgrades to ensure longevity and security.

By embedding generative AI into televisions, Samsung aims to redefine how households interact with technology, turning the TV into an intelligent companion that informs, entertains, and connects families across languages and experiences.

Researchers urge governance after LLMs display source-driven bias

Large language models (LLMs) are increasingly used to grade essays, screen applicants, and moderate content. Research from the University of Zurich (UZH) shows that a model’s evaluation of identical text shifts once it is told who wrote it, revealing source bias. Agreement between evaluators stayed high only when authorship was hidden.

When models were told that a human or another AI wrote the text, agreement fell and biases surfaced. The strongest bias was against text attributed to Chinese authors, appearing across all models tested, including one developed in China, with sharp score drops even for well-reasoned arguments.

AI models also preferred ‘human-written’ over ‘AI-written’, showing scepticism toward machine-authored text. Such identity-triggered bias risks unfair outcomes in moderation, reviewing, hiring, and newsroom workflows.

Researchers recommend identity-blind prompts, A/B checks with and without source cues, structured rubrics focused on evidence and logic, and human oversight for consequential decisions.
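The recommended A/B check can be sketched in a few lines. This is a minimal, hypothetical harness, not the researchers’ actual protocol: `evaluate` stands in for a call to whatever model is under test (stubbed here so the sketch runs), and the rubric wording is illustrative.

```python
# Minimal sketch of an identity-blind A/B bias check on an LLM evaluator.
# `evaluate` is a hypothetical stand-in for a real model call; it is stubbed
# with a constant score so the harness runs end to end.

def evaluate(prompt: str) -> float:
    """Stub scorer; replace with a real LLM call returning a 0-1 score."""
    return 0.5  # placeholder

def ab_bias_check(text: str, sources: list[str]) -> dict[str, float]:
    """Score identical text with and without a source cue.

    A large gap between the blind score and any attributed score points to
    source-driven bias rather than a judgment of evidence and logic.
    """
    rubric = "Rate the argument below from 0 to 1 on evidence and logic only.\n"
    scores = {"blind": evaluate(rubric + text)}
    for src in sources:
        scores[src] = evaluate(f"{rubric}(Author: {src})\n{text}")
    return scores

scores = ab_bias_check(
    "Tariffs raise consumer prices because import costs pass through.",
    ["a human essayist", "an AI assistant"],
)
gaps = {src: scores[src] - scores["blind"] for src in scores if src != "blind"}
print(gaps)
```

With the stub in place every gap is zero; wired to a real model, a consistently negative gap for one attributed source would be the bias signal the researchers describe.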

They call for governance standards: disclose evaluation settings, test for bias across demographics and nationalities, and set guardrails before sensitive deployments. Transparency on prompts, model versions, and calibration is essential.

Appfigures revises iOS estimates as Sora’s Android launch leaps ahead

Sora’s Android launch outpaced its iOS debut, garnering an estimated 470,000 first-day installs across seven markets, according to Appfigures. Broader regional availability, plus the end of invite-only access in top markets, boosted uptake.

OpenAI’s iOS rollout was limited to the US and Canada via invitations, which capped early growth despite strong momentum. The iOS app nevertheless surpassed one million installs in its first week and still ranks highly in the US App Store’s Top Free chart.

Revised Appfigures modelling puts day-one iOS installs at ~110,000 (up from 56,000), with ~69,300 from the US. On Android, availability spans the US, Canada, Japan, South Korea, Taiwan, Thailand, and Vietnam. First-day Android installs from the US were ~296,000, showing sustained demand beyond the iOS launch.
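A back-of-envelope split of the Appfigures estimates quoted above (a sketch using only the reported round numbers, which are themselves estimates):

```python
# Rough regional split of Sora's day-one Android installs (Appfigures estimates).
android_day_one = 470_000   # across all seven launch markets
android_us = 296_000        # estimated US portion of the above

us_share = android_us / android_day_one
rest_of_world = android_day_one - android_us
print(f"US share: {us_share:.0%}, other six markets: {rest_of_world:,} installs")
```

On these figures the US accounts for roughly 63% of day-one Android installs, leaving about 174,000 spread across the other six markets.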

Sora allows users to generate videos from text prompts and animate themselves or friends via ‘Cameos’, sharing the results in a TikTok-style vertical feed. Engagement features for creation and discovery are driving word of mouth and repeat use across both platforms.

Competition in mobile AI video and assistants is intensifying, with Meta AI expanding its app in Europe on the same day. Market share will hinge on geographic reach, feature velocity, creator tools, and distribution via app store charts and social feeds.
