OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

The most widely used ChatGPT model has received an update from OpenAI, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than benchmark performance alone.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

The approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU citizens propose public social media network under new initiative

The European Commission has registered a European Citizens’ Initiative proposing the creation of a public social media platform operating at the European level, rather than relying exclusively on private technology companies.

The initiative, titled the European Public Social Network, calls for legislation establishing a publicly funded digital platform designed to serve societal interests.

Organisers argue that a publicly owned network could function independently from commercial incentives and political pressure while guaranteeing equal rights for users across the EU. The proposed platform would operate as a public service overseen by society rather than private corporations.

Registration confirms that the proposal meets the legal requirements of the European Citizens’ Initiative framework. The Commission has not yet assessed the substance of the idea, and registration does not imply support for the proposal.

Supporters must now gather 1 million signatures from citizens across at least 7 EU member states within 12 months. If the threshold is reached, the Commission will be required to formally examine the initiative and decide whether legislative action is appropriate.

Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.

From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

Parliament deadlock leaves EU chat-scanning extension in doubt

The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

The proposal would extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.

Europe pressed to slow digital age-verification push amid privacy fears

Hundreds of academics urged governments to halt plans for mandatory age checks on social media, rather than accelerating deployment without assessing the risks.

The warning arrives as several European states consider restrictions on children’s access to online platforms and as companies promote verification tools such as live selfies or uploads of government-issued IDs.

Researchers argue that, rather than offering meaningful protection, current systems expose people to privacy breaches, security vulnerabilities and malicious sites that simply ignore verification rules.

They say scientific consensus has not yet formed on the benefits or harms of age-assurance technologies, making large-scale implementation premature and potentially discriminatory.

The letter stresses that any credible system would require cryptographic safeguards for every query, protecting data in transit rather than leaving identity checks to platforms without robust technical guarantees.

Academics believe such infrastructure would be complex to build globally and would create friction that many providers may refuse to adopt.

Concern escalated after early deployments in Italy and France, where verification is already mandatory.

Signatories, including Ronald Rivest and Bart Preneel, warn that governments risk introducing a socially unacceptable system that increases exposure to data misuse instead of ensuring children’s safety online.

X rolls out Paid Partnership labels to boost creator transparency

The social media platform X has introduced a new ‘Paid Partnership’ label that creators can attach to posts to show when content is promotional, rather than leaving audiences unsure about commercial intent.

The update improves transparency for followers while meeting rules set by the Federal Trade Commission, which expects sponsored material to be disclosed clearly.

Creators previously relied on hashtags such as #ad or #paidpartnership instead of an integrated disclosure option. The new feature allows users to apply the label through a content-disclosure toggle either during posting or afterwards.

X’s product lead, Nikita Bier, said undisclosed promotions damage trust and weaken the platform’s integrity, so the tool is meant to support creators and regulators simultaneously.

X has been trying to build a stronger creator ecosystem by offering payouts, subscriptions and other incentives. Yet many creators still favour Instagram or YouTube over X as their primary channel, because those platforms have longer-standing monetisation tools.

The addition of a built-in label aligns X with broader industry practice and aims to regain credibility among advertisers and creators.

The company has also tightened API access, preventing programmatic replies unless a user is directly mentioned or quoted.

The change seeks to limit LLM-generated spam, preventing automated responses from distorting discussions or appearing as fake engagement beneath sponsored content.

X hopes these combined measures will enhance authenticity around commercial posts.

UK launches consultation on possible social media ban for under-16s

Britain has opened a public consultation examining whether children under 16 should face restrictions or a potential ban on social media use. Young people, parents and educators are being invited to share views before ministers decide on future policy.

Officials are considering several options beyond a full ban, including disabling addictive platform features, introducing overnight curfews, regulating access to AI chatbots, and tightening age verification rules. Pilot schemes will test proposed measures to gather practical evidence on their effectiveness.

The debate follows international momentum after Australia introduced restrictions on under-16 access to major platforms, with Spain signalling similar intentions. Political parties, charities and campaigners remain divided over whether bans or stronger safety regulations offer better protection.

Children’s organisations warn blanket prohibitions could push young users towards less regulated online spaces, creating a ‘false sense of security’. Researchers and policymakers instead emphasise improving platform safety standards while allowing young people to socialise and express themselves online responsibly.

Claws become the new trend in local agentic AI

A new expression has entered the AI vocabulary, with ‘claws’ becoming the latest term to capture the industry’s imagination.

The term refers to a growing family of open-source personal assistants designed to run locally on consumer hardware, often on Apple’s compact Mac mini rather than on cloud-based servers.

These assistants can access calendars, email accounts, coding tools, browsers and external model APIs, enabling them to carry out complex digital tasks autonomously.

Interest increased after AI researcher Andrej Karpathy described his experiments with claws, prompting broader attention across online communities.

Many users have begun adopting the tools as lightweight agentic systems capable of handling real work, from scheduling meetings to writing software overnight by linking to models from providers such as OpenAI.

The name originated with Clawdbot, which was recently rebranded as OpenClaw and became a prominent example in Silicon Valley.

A wave of variants, including NanoClaw, ZeroClaw and IronClaw, has followed, marking a surge in locally run assistants that appeal to users seeking greater autonomy, privacy and experimentation.

Growing enthusiasm for claws highlights a wider shift towards agentic AI running directly on personal devices.

Whether these systems become mainstream or remain a niche developer trend, they show how quickly the AI landscape can evolve and how new concepts often spread long before they fully mature.

Reddit surges as AI search drives a new era of online discovery

AI-generated search summaries are reshaping online discovery and pushing Reddit to the forefront of global information flows.

The rise of Google’s AI Overview feature places curated AI summaries above traditional search results, encouraging users to rely on machine-generated syntheses instead of browsing lists of websites.

Reddit’s visibility surged after the platform agreed to data access partnerships with Google and OpenAI, enabling large language models to train on its vast archive of human conversations.

The platform’s user-generated discussions are increasingly prioritised because they provide commentary viewed as more neutral and less commercially influenced.

Research from Profound identifies Reddit as the most cited source across major AI platforms, and Reddit’s rapid expansion reflects this shift.

It has overtaken TikTok in the UK, according to Ofcom, and now reports 116 million daily active users and more than one billion monthly users.

Communities built around niche interests, combined with voting systems and karma-driven credibility, create a structure that appeals to AI systems searching for grounded, human-authored content.

The platform’s design, centred on subreddits run by volunteer moderators, reinforces trust signals that large models can evaluate when generating AI Overview results.

As AI-powered search becomes the dominant interface for navigating the internet, Reddit’s role as a primary corpus for training and citation continues to expand, reshaping how people discover and verify information.

Australia begins a landmark study on social media minimum age

Australia’s eSafety Commissioner has launched a major evaluation of the Social Media Minimum Age requirement to understand how platforms are applying it and what effects it is having on children, young people and families.

The study aims to deliver robust evidence about both intended and unintended impacts as the national debate on youth, wellbeing and digital environments intensifies.

Over more than two years, the research will follow over four thousand children and families in Australia, combining surveys, interviews, group discussions and privacy-protected smartphone tracking.

Administrative data from national literacy assessments and health systems will be linked to deepen understanding of online behaviour, wellbeing and exposure to risk. All research materials are publicly available through the Open Science Framework to maintain transparency.

The project is led by eSafety’s Research and Evaluation team in partnership with the Stanford University Social Media Lab and an Academic Advisory Group of specialists in mental health, youth development and digital technologies.

Young people themselves are shaping the study through the eSafety Youth Council, ensuring that the interpretation reflects lived experience rather than external assumptions. Full ethics approval underpins the methodology, which meets strict standards of integrity and privacy.

Findings will be released from late 2026 onward, with early reports analysing the experiences of children under sixteen.

The results will inform a legislative review conducted by Australia’s Department of Infrastructure, Transport, Regional Development, Communications, Sport and the Arts.

eSafety expects the evaluation to become a major evidence source for policymakers, researchers and communities as the global conversation on minors and social media regulation continues.
