Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle, ending its long-standing four-week cadence.

From September 2026, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parliament deadlock leaves EU chat-scanning extension in doubt

The European Parliament’s civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

At stake is a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for child sexual abuse material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, the institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The plenary vote will determine whether the temporary regime remains in place while negotiations on the permanent system continue.


Europe pressed to slow digital age-verification push amid privacy fears

Hundreds of academics have urged governments to halt plans for mandatory age checks on social media rather than accelerate deployment without assessing the risks.

The warning arrives as several European states consider restrictions on children’s access to online platforms and as companies promote verification tools such as live selfies or uploads of government-issued IDs.

Researchers argue that, instead of offering meaningful protection, current systems expose people to privacy breaches, security vulnerabilities and malicious sites that ignore verification rules.

They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.

The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.

Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.
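The safeguard the letter describes, proving an age claim without revealing identity, can be illustrated with a minimal sketch. The example below is hypothetical and is not drawn from the letter: it uses a shared-secret HMAC purely for brevity, where a deployable scheme would use public-key signatures or anonymous credentials. The point it demonstrates is that the token carries only an ‘over 16’ claim and an expiry, never a name, birthdate or document number.

```python
import hmac, hashlib, json, time

SECRET = b"issuer-demo-key"  # hypothetical shared secret between issuer and verifier

def issue_attestation(over_16: bool, ttl: int = 3600) -> dict:
    """Issuer (e.g. an ID-checking service) signs a minimal claim.
    The token carries no name, birthdate or document number."""
    claim = {"over_16": over_16, "exp": int(time.time()) + ttl}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_attestation(token: dict) -> bool:
    """Platform checks integrity and freshness only; identity is never seen."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    return token["claim"]["over_16"] and token["claim"]["exp"] > time.time()

token = issue_attestation(over_16=True)
print(verify_attestation(token))  # True
```

Because the verifier only sees the signed claim, tampering with the boolean invalidates the tag, which is the per-query cryptographic guarantee the signatories say current deployments lack.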

Concern escalated after early deployments in Italy and France, where verification is already mandatory.

Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.


X rolls out Paid Partnership labels to boost creator transparency

Social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.

The update improves transparency for followers while meeting rules set by the US Federal Trade Commission, which expects sponsored material to be disclosed clearly.

Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.

X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.

X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.

The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.

The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.

The change seeks to limit LLM-generated spam, stopping automated responses from distorting discussions or appearing as fake engagement beneath sponsored content.

X hopes these combined measures will enhance authenticity around commercial posts.


UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.


Claws become the new trend in local agentic AI

A new expression has entered the AI vocabulary, with ‘claws’ becoming the latest term to capture the industry’s imagination.

The term refers to a growing family of open-source personal assistants designed to run locally on consumer hardware, often on Apple’s compact Mac mini rather than on cloud-based servers.

These assistants can access calendars, email accounts, coding tools, browsers and external model APIs, enabling them to carry out complex digital tasks autonomously.
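The control loop behind such assistants is simple in outline: the model proposes an action, the local runtime executes the matching tool, and the observation is fed back until the task is done. The sketch below is a heavily simplified, hypothetical illustration, not OpenClaw’s actual code: the model is stubbed out and the tool names are invented, where a real claw would wire in live calendar, email and model-API access.

```python
from typing import Callable

# Local tools the agent may invoke; a real assistant would wrap
# calendar, email and shell access here (names are illustrative).
TOOLS: dict[str, Callable[[str], str]] = {
    "calendar_lookup": lambda arg: f"No meetings found for {arg}",
    "echo": lambda arg: arg,
}

def stub_model(prompt: str) -> dict:
    """Stand-in for a call to an external model API.
    A real claw would send the prompt over the network and parse the reply."""
    if "schedule" in prompt:
        return {"tool": "calendar_lookup", "arg": "Friday"}
    return {"tool": "echo", "arg": "done"}

def run_agent(task: str, max_steps: int = 3) -> str:
    """Minimal agent loop: ask the model, run the chosen local tool,
    feed the observation back until the model signals completion."""
    observation = task
    for _ in range(max_steps):
        action = stub_model(observation)
        observation = TOOLS[action["tool"]](action["arg"])
        if observation == "done":
            break
    return observation

print(run_agent("schedule a meeting"))  # prints "done"
```

Running everything except the model call on the user’s own machine is what distinguishes this pattern from cloud-hosted assistants, and is the main draw for privacy-minded adopters.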

Interest increased after AI researcher Andrej Karpathy described his experiments with claws, prompting broader attention across online communities.

Many users have begun adopting the tools as lightweight agentic systems capable of handling real work, from scheduling meetings to writing software overnight by linking to models from providers such as OpenAI.

The name originated with Clawdbot, which was recently rebranded as OpenClaw and became a prominent example in Silicon Valley.

A wave of variants, including NanoClaw, ZeroClaw and IronClaw, has followed, marking a surge in locally run assistants that appeal to users seeking greater autonomy, privacy and experimentation.

Growing enthusiasm for claws highlights a wider shift towards agentic AI running directly on personal devices.

Whether these systems become mainstream or remain a niche developer trend, they show how quickly the AI landscape can evolve and how new concepts often spread long before they fully mature.


Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overviews feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms. Reddit’s rapid expansion reflects this shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overview results.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.


Australia begins a landmark study on social media minimum age

Australia’s eSafety Commissioner has launched a major evaluation of the country’s Social Media Minimum Age law to understand how platforms are applying the requirement and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over four thousand children and families across Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.


AI use among students surges as chatbots reshape schoolwork

More than half of US teenagers use AI tools to help with schoolwork, according to a new Pew Research Center study. The survey found that 54% of students aged 13 to 17 have used chatbots such as OpenAI’s ChatGPT or Microsoft’s Copilot to research assignments or solve maths problems.

Usage has risen in recent years. In 2024, 26% of US teens reported using ChatGPT for schoolwork, up from 13% in 2023. The latest survey of 1,458 teens and parents found 44% use AI for some schoolwork, while 10% rely on chatbots for most tasks.

Researchers say AI assistance is becoming routine in classrooms. Colleen McClain, a senior researcher at Pew and co-author of the report, said chatbot use for schoolwork is now a common practice among teens.

The findings come amid an intensifying debate over generative AI in education. Supporters argue that schools should teach students to use and evaluate AI tools, while critics warn of misinformation, reduced critical thinking, and increased cheating.

Recent research has raised questions about learning outcomes. One study by Cambridge University Press & Assessment and Microsoft Research found that students who took notes without chatbot support showed stronger reading comprehension than those using AI assistance.


ChatGPT Health under fire after study finds major failures in emergency detection

A new evaluation of ChatGPT Health has raised major safety concerns after researchers found it frequently failed to recognise urgent medical emergencies.

The independent study, published in Nature Medicine, reported that the system under-triaged more than half of the clinical scenarios tested, giving advice that could have delayed life-saving treatment.

The research team, led by Ashwin Ramaswamy, created sixty patient simulations ranging from minor illnesses to life-threatening conditions.

Three doctors agreed on the appropriate urgency for each case before comparing their judgement with the model’s responses. The AI performed adequately in straightforward emergencies such as strokes, yet frequently minimised danger in more complex presentations, including severe asthma and diabetic crises.

Experts also warned that ChatGPT Health struggled to detect suicidal ideation reliably. Minor changes to scenario details, such as adding normal lab results, caused safeguards to disappear entirely.

Critics, including health-misinformation researcher Alex Ruani, described the behaviour as dangerously inconsistent and capable of creating a false sense of security.

OpenAI said the study did not reflect typical real-world use but acknowledged the need for continued research and improvement.

Policy specialists argue that the findings underline the need for clear safety standards, external audits and stronger transparency requirements for AI systems operating in sensitive medical contexts.
