Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.

From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe turns to satellite networks as Deutsche Telekom expands Starlink collaboration

Deutsche Telekom is turning to satellite connectivity to address Europe’s persistent mobile coverage gaps, rather than relying solely on terrestrial networks.

The company announced a partnership with Starlink during the Mobile World Congress in Barcelona, arguing that non-terrestrial networks can help reach remote forests, mountains and islands that remain underserved despite broad coverage elsewhere.

The collaboration aims to support direct-to-device satellite links by 2028, enabling future smartphones to connect to Starlink’s MSS spectrum without additional hardware.

Telecommunications leaders describe the plan as a step toward an ‘everywhere network’, extending reliable service to areas long constrained by topographical and conservation barriers. The partnership follows earlier joint work with SpaceX to eliminate dead zones.

Deutsche Telekom is also increasing its use of agentic AI, integrating autonomous network-enhancing systems intended to improve translation, search and service features across devices.

Executives say these capabilities work even on older phones, reducing dependence on apps and creating a more inclusive digital environment.

Although committed to European digital sovereignty, the company insists that global collaboration remains necessary for long-term competitiveness.

Leadership argues that precise regulation and controlled data environments aligned with European standards can balance international cooperation with privacy protection. They remain confident that European technology firms and start-ups will continue driving meaningful innovation across the sector.

ClawJacked flaw let attackers hijack AI agents through the browser

A high-severity vulnerability dubbed ‘ClawJacked’ has been discovered in OpenClaw, an open-source AI agent framework that lets developers run autonomous AI assistants locally.

The flaw, uncovered by Oasis Security, allowed malicious websites to silently hijack a user’s local AI agent instance and steal sensitive data, all triggered by a single browser visit.

The attack exploited OpenClaw’s local WebSocket gateway, which assumed that traffic from localhost could be trusted. A malicious website could open a WebSocket connection to the gateway and brute-force the password at hundreds of guesses per second, since no rate limiting was applied to local connections, then silently register as a trusted device without any user prompt.
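The missing control is easy to illustrate. Below is a minimal Python sketch of the kind of per-origin failure tracking and lockout that would have capped this brute-force vector; the class name, threshold and window are hypothetical illustrations, not OpenClaw’s actual patch.

```python
import time
from collections import defaultdict


class AuthRateLimiter:
    """Throttle repeated authentication failures per client origin.

    Hypothetical sketch: the threshold and lockout window are
    illustrative values, not taken from any real OpenClaw release.
    """

    def __init__(self, max_failures=5, lockout_seconds=300):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = defaultdict(list)  # origin -> failure timestamps

    def is_locked_out(self, origin, now=None):
        now = time.time() if now is None else now
        # Keep only failures inside the current lockout window.
        recent = [t for t in self.failures[origin]
                  if now - t < self.lockout_seconds]
        self.failures[origin] = recent
        return len(recent) >= self.max_failures

    def record_failure(self, origin, now=None):
        now = time.time() if now is None else now
        self.failures[origin].append(now)


limiter = AuthRateLimiter()
origin = "http://evil.example"
for _ in range(5):
    limiter.record_failure(origin)
# After five bad guesses the origin is locked out, capping the guess
# rate regardless of whether the traffic arrives from localhost.
print(limiter.is_locked_out(origin))  # True
```

The point of the sketch is that "comes from localhost" is not an authentication signal: any webpage the user visits can originate localhost traffic, so the gateway still needs lockout and unguessable credentials.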

Once inside, attackers gained admin-level access to the AI agent, connected devices, logs, and configuration data. Oasis Security responsibly disclosed the flaw, and OpenClaw issued a patch within 24 hours, releasing version 2026.2.26.

Security experts are urging organisations to update immediately, audit the permissions held by their AI agents, and apply strict governance policies, treating AI agents as non-human identities that require the same oversight as human users or service accounts.

Why detecting deepfakes is no longer enough to stay secure

Deepfakes and injection attacks are no longer just tools for misinformation; they are now being deployed to break the identity verification systems that underpin banking, hiring, and account access.

Bad actors are targeting the critical moments when a system determines whether someone is a real person, from customer onboarding at banks to remote hiring and account recovery workflows.

Attackers exploit verification systems in two main ways: by using increasingly convincing synthetic faces and voice clones to mimic real people, and by launching injection attacks that substitute fraudulent video into the capture pipeline before it ever reaches the detection system.

According to the Entrust 2026 Identity Fraud Report, deepfakes are now linked to one in five biometric fraud attempts, with injection attacks rising 40% year-on-year.

Experts warn that detecting deepfakes alone is no longer sufficient. Enterprises must validate the whole session, including device integrity and behavioural signals, in real time.
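Whole-session validation of this kind can be sketched as a simple weighted decision over independent signals. The signal names, weights and threshold below are hypothetical, chosen only to show why a strong face match alone should not pass when other session signals look wrong; they are not drawn from any vendor’s product.

```python
def validate_session(signals, threshold=0.7):
    """Combine independent verification signals into one decision.

    Illustrative only: signal names, weights and the threshold are
    hypothetical, not a real identity-verification product's logic.
    """
    weights = {
        "liveness_score": 0.4,    # deepfake / presentation-attack detector
        "device_integrity": 0.3,  # e.g. genuine camera, no virtual device
        "behaviour_score": 0.3,   # typing cadence, pointer movement, timing
    }
    score = sum(weights[name] * signals.get(name, 0.0) for name in weights)
    return score >= threshold


# A convincing face (high liveness score) no longer passes on its own
# if the capture device looks compromised, e.g. a virtual camera used
# for an injection attack.
print(validate_session({
    "liveness_score": 0.95,
    "device_integrity": 0.0,
    "behaviour_score": 0.5,
}))  # False
```

The design choice worth noting is that the signals are evaluated together in one decision, so an injection attack that defeats the face check still has to defeat the device and behaviour checks in the same session.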

Gartner predicts that by 2026, 30% of enterprises will no longer consider face-based identity verification reliable in isolation, given the pace of AI-generated deepfake attacks.

Yale expert warns against overtrusting AI health chatbots

More than 40 million people use ChatGPT alone for health information every day, and both ChatGPT and Claude have recently launched services specifically designed to give consumers health advice.

Yale School of Medicine clinician-educator Shaili Gupta warns that whilst chatbots can democratise access to health information, the risks of overtrust are significant.

Gupta notes that AI chatbots are deliberately designed to feel personal, trained to use pronouns like ‘you’ and ‘I’, which makes users more likely to treat them as authoritative voices rather than information tools.

She cautions against the ‘three C’s’: chatbots that are too competent, too cogent, or too concrete, as these are the most likely to lead patients into harmful health decisions.

Human clinicians, Gupta argues, remain difficult to replace, not only because they conduct physical examinations, but also because they bring instinct, experience, and genuine relatability to patient care. She recommends using chatbots for efficiency and general information, whilst leaving diagnosis firmly in the hands of medical professionals.

Growing robotics market positions Qualcomm for next technology wave

Qualcomm expects robotics to become a significant business opportunity within two years, according to chief executive Cristiano Amon. The company is expanding beyond smartphones as it searches for new long-term growth markets.

Earlier this year, Qualcomm introduced its Dragonwing processor designed specifically for robotics applications. The chipset aims to operate across multiple robotic platforms using a scalable approach similar to its successful mobile processor strategy.

Industry enthusiasm for robotics has grown alongside rapid advances in AI technologies. Often described as ‘physical AI’, these systems allow robots to interpret surroundings and perform complex tasks more effectively.

Market forecasts suggest strong future demand, with analysts predicting robotics could develop into a multi-trillion-dollar global industry. Technology leaders across the semiconductor sector increasingly view intelligent machines as a major next computing platform.

Robotics innovation featured prominently at Mobile World Congress in Barcelona, where companies showcased emerging autonomous machines. Growing investment highlights intensifying competition to shape the future of AI-powered automation worldwide.

Ocado job cuts raise AI questions

Ocado has announced plans to cut 1,000 jobs from its 20,000-strong global workforce, with roles mainly affected in technology and support. The company, headquartered in Hatfield, Hertfordshire, said the move would save £150m and follows major investment in robotics and automation.

Chief executive Tim Steiner said Ocado had completed a significant phase of investment in automation, but the company declined to confirm that AI directly led to the redundancies. At its Luton warehouse, opened in 2023, human staff continue to work alongside AI-powered robots.

Analysts suggested that competition has intensified as retailers in the UK, the US and Canada adopt similar AI-driven systems. Some former clients in the US and Canada have invested in their own technology, reducing reliance on Ocado’s platform.

Retail experts argued that deeper structural challenges, including changing consumer expectations and cost pressures in Hertfordshire and beyond, are also at play. Local leaders in Welwyn Hatfield have requested urgent talks as the company reshapes its operating model.

Microsoft locks Copilot Discord after AI backlash

Microsoft has temporarily locked its official Copilot Discord server after a surge of spam linked to criticism of its AI strategy. The disruption followed widespread use of the nickname ‘Microslop’, a term mocking the company’s AI push.

The backlash intensified after chief executive Satya Nadella urged the industry to embrace AI in a December 2025 blog post. Users began flooding the Copilot Discord server with variations of the term, bypassing Microsoft’s word filters.

Microsoft initially blocked the word before restricting channels and eventually taking the entire server offline. In a statement, the company said the move was intended to protect users from harmful spam.

The controversy reflects broader resistance to AI integration across Windows 11 and Microsoft software. Microsoft has not confirmed when the Copilot Discord server will return online.

Deepfake scams target executives in India and beyond

A deepfake video of Bombay Stock Exchange chief executive Sundararaman Ramamurthy circulated on social media in India, falsely offering stock advice to investors. The exchange moved quickly to report and remove the content, warning the public not to trust fake investment clips.

Cybersecurity experts say such cases are rising sharply, with one US firm estimating a 3,000 percent increase in deepfake incidents over two years. Executives in the US and the UK have also been impersonated using AI-generated audio and video.

In Hong Kong, police said a UK engineering firm lost $25m after an employee joined a video call featuring deepfake versions of senior colleagues. The transfer was made to multiple accounts before the fraud was discovered.

Security companies in the US and the UK are developing detection tools that analyse facial movement and blood flow patterns to identify AI-generated footage. Analysts warn that as costs fall and tools improve, businesses in India, Hong Kong and beyond face an escalating arms race against digital fraud.

Medical chatbots spark powerful debate over serious health risks and benefits

Medical chatbots are rapidly becoming part of digital healthcare as technology companies expand AI tools into health services. Companies such as OpenAI and Anthropic are introducing chatbot features designed to answer medical questions using personal data.

Medical chatbots can analyse information from medical records, wearable devices and wellness applications. By incorporating details such as prescriptions, age and prior diagnoses, they aim to provide more personalised responses than a standard internet search.

However, companies stress that these tools are not substitutes for professional medical care. They are not intended to diagnose conditions but rather to summarise results, explain terminology and help users prepare for appointments.

Supporters argue that medical chatbots can improve patient understanding. Experts from the University of California, San Francisco, note that the tools may clarify complex reports and highlight essential health trends when used responsibly.

Despite these benefits, significant limitations remain. AI systems can hallucinate or generate inaccurate advice, and users may struggle to distinguish reliable guidance from subtle errors.

Independent research reinforces these concerns. A 2024 study by the University of Oxford found that participants who used chatbots for hypothetical health scenarios did not make better decisions than those who relied on online searches or personal judgement.

Chatbot performance was strong when analysing structured written cases, yet effectiveness declined during real-world interactions, where communication gaps affected outcomes.

Privacy presents another major issue. Medical chatbots often require users to upload sensitive health information to deliver personalised responses.

Unlike doctors and hospitals, AI companies are not bound by HIPAA, the US federal health privacy law. Although platforms state that data is stored separately and not used to train models, privacy standards differ from those in traditional healthcare.

Experts from Stanford University advise users to understand these differences before sharing medical records. Transparency and informed consent are critical considerations.

Medical chatbots are also inappropriate in emergencies. Individuals experiencing symptoms such as chest pain, shortness of breath or severe headaches should seek immediate medical attention instead of consulting AI tools.

Even in non-urgent cases, specialists recommend maintaining healthy scepticism. Consulting multiple AI systems may provide a form of second opinion, but it does not replace professional medical advice.

Medical chatbots, therefore, represent both opportunity and risk. As their capabilities expand, users must carefully weigh convenience and personalisation against accuracy, oversight and data protection concerns.
