OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

The most widely used ChatGPT model has received an update from OpenAI, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than benchmark performance alone.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU citizens propose public social media network under new initiative

The European Commission has registered a European Citizens’ Initiative proposing the creation of a public social media platform operating at the European level, rather than relying exclusively on private technology companies.

The initiative, titled the European Public Social Network, calls for legislation establishing a publicly funded digital platform designed to serve societal interests.

Organisers argue that a publicly owned network could function independently from commercial incentives and political pressure while guaranteeing equal rights for users across the EU. The proposed platform would operate as a public service overseen by society rather than private corporations.

Registration confirms that the proposal meets the legal requirements of the European Citizens’ Initiative framework. The Commission has not yet assessed the substance of the idea, and registration does not imply support for the proposal.

Supporters must now gather 1 million signatures from citizens across at least 7 EU member states within 12 months. If the threshold is reached, the Commission will be required to formally examine the initiative and decide whether legislative action is appropriate.

Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. Australia’s eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps, as officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed: some teenagers have told media outlets that they bypass age verification systems, and platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.

EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should fall under the Digital Services Act’s most stringent obligations rather than remain outside the bloc’s most demanding platform rules.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

Figures above the DSA's threshold of 45 million monthly users would qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers tested 1,912 participants, comparing responses to AI-generated summaries of historical events with responses to Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’, ideological leanings embedded in the data used to train large language models that subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

Developers gain early access to Gemini 3.1 Flash-Lite

Google’s Gemini 3.1 Flash-Lite has launched in preview for developers via AI Studio and for enterprises through Vertex AI. Designed for high-volume workloads, it promises fast, cost-effective performance while maintaining high-quality outputs.

Priced at just $0.25 per million input tokens and $1.50 per million output tokens, 3.1 Flash-Lite offers 2.5X faster response times and 45% higher output speed than the previous 2.5 Flash model.
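Taken at face value, those per-token rates make back-of-the-envelope budgeting straightforward. The sketch below is a hypothetical cost estimate based solely on the quoted preview prices; the workload sizes are illustrative assumptions, not Google figures.

```python
# Illustrative cost sketch using the quoted Gemini 3.1 Flash-Lite
# preview rates. Workload numbers below are assumptions for the example.

INPUT_RATE = 0.25 / 1_000_000   # USD per input token
OUTPUT_RATE = 1.50 / 1_000_000  # USD per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at the quoted rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a high-volume translation workload of 100,000 requests,
# each averaging 800 input tokens and 200 output tokens.
per_request = request_cost(800, 200)
total = per_request * 100_000
print(f"per request: ${per_request:.6f}, per 100k requests: ${total:.2f}")
```

At these rates, output tokens dominate the bill once responses grow long, which is why the model's adjustable "thinking" budget matters for cost control.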

Benchmarks show strong performance across reasoning and multimodal tasks, including an Elo score of 1432 on Arena.ai, 86.9% on GPQA Diamond, and 76.8% on MMMU Pro, surpassing some older, larger Gemini models.

The model also provides adaptive intelligence features, allowing developers to adjust how much the AI ‘thinks’ for each task. It handles both high-frequency tasks, such as translation, and complex ones, such as interface generation and simulations.

Early-access developers and companies report that 3.1 Flash-Lite handles complex workloads with precision comparable to larger models. Its speed, affordability, and reasoning capabilities make it an attractive choice for scalable, real-time AI applications.

Chrome moves to rapid releases as Google responds to AI disruption

Google is accelerating Chrome’s release cycle rather than maintaining its long-standing four-week cadence.

From September, users on desktop and mobile platforms will receive new stable versions every two weeks, doubling the frequency of feature milestones across speed, stability and usability. Weekly security updates introduced in 2023 remain unchanged.

The faster pace comes as AI-driven browsers seek a foothold in a market long dominated by Chrome.

Products such as ChatGPT Atlas and Perplexity’s Comet embed agentic assistants directly into the browsing experience, automating tasks from summarising pages to scheduling meetings.

Chrome has responded with deeper Gemini integration, including the rollout of autonomous features across its interface.

Google maintains that the accelerated schedule reflects the needs of the evolving web platform, arguing that developers require quicker access to updated tools.

Yet the timing aligns with growing competitive pressure from AI-native browsers, prompting speculation that Chrome’s dominance can no longer be taken for granted.

The shift will begin with Chrome version 153 in beta and stable channels on 8 September 2026. Enterprise administrators and Chromebook users will continue to rely on the eight-week Extended Stable branch, which remains unchanged for organisations that need slower, controlled deployments.

AI ethics as societal infrastructure in the digital era

In recent days, social media has been alight with discussions of Silicon Valley, the 2014 series whose portrayal of AI and ethical dilemmas now feels remarkably prophetic. Fans and professionals alike are highlighting how the show’s depiction of AI, automated agents, and ethical questions mirrors today’s real-world challenges.

From algorithmic decision-making to AI shaping social and economic interactions, the series explores the boundaries, responsibilities, and societal impact of AI in ways that feel startlingly relevant. What once seemed like pure comedy is increasingly being seen as a warning, highlighting how the choices we make around AI and its ethical frameworks will shape whether the technology benefits society.

While the show dramatises these dilemmas for entertainment, the real world is now facing the same questions. Recent trends in generative AI, autonomous agents, and large-scale automated decision-making are bringing their predictions to life, raising urgent ethical questions for developers, policymakers, and society alike.

Balancing technological progress with societal values is essential, as intelligent technologies must align with society, guided by AI ethics.
Source: Freepik

The rise of AI ethics: from niche concern to central requirement

The growing influence of AI on society has propelled ethics from a theoretical discussion to a central factor in technological decision-making. Initially confined to academic debate, ethics in AI is now a guiding force in technological development. The impact of AI is becoming tangible across society, from employment and finance to online content.

Technical performance alone no longer defines success; the consequences of design choices have become morally and socially significant. Governments, international organisations, and corporations are responding by developing ethical frameworks. 

The EU AI Act, the OECD AI Principles, and numerous corporate codes of conduct signal that society expects AI systems to align with human values, demonstrating accountability, fairness, and trustworthiness. Ethical reflection has become a prerequisite for technological legitimacy and societal acceptance.

Functions of AI ethics: trust, guidance, and societal risk

Ethical frameworks for AI fulfil multiple roles, balancing moral guidance with practical necessity. They build public trust between developers, organisations, and users, reassuring society that AI systems operate consistently with shared values.

For developers, ethical principles offer a blueprint for decision-making, helping anticipate societal impact and minimise unintended harm. Beyond guidance, AI ethics acts as a form of societal risk governance, allowing organisations to identify potential consequences before they manifest. 

By integrating ethics into design, AI systems become socially sustainable technologies, bridging technical capability with moral responsibility. Such an approach is particularly critical in high-stakes domains such as healthcare, finance, and law, where algorithmic decisions can significantly affect human well-being.

The politics of AI ethics: regulatory theatre and corporate influence

Despite widespread adoption, AI ethics frameworks sometimes risk becoming regulatory theatre, where public statements signal commitment but fail to ensure meaningful action. Many organisations promote ethical AI principles, yet consistent enforcement and follow-through often lag behind these claims.

Even with their limitations, ethical frameworks are far from meaningless. They shape public discourse, influence policy, and determine which AI systems gain social legitimacy. The challenge lies in balancing credibility with practical impact, ensuring that ethical commitments are more than symbolic gestures. 

Social media platforms like X amplify this tension, with public scrutiny and viral debates exposing both successes and failures in applying ethical principles.

AI ethics as a lens for technology and society

The prominence of AI ethics reflects a broader societal transformation in evaluating technology. Modern societies no longer judge AI solely by efficiency, speed, or performance; they assess social consequences, fairness, and the distribution of risks and benefits. 

AI is increasingly seen as a social actor rather than a neutral tool, influencing public behaviour, shaping social norms, and redefining concepts such as trust, autonomy, and accountability. Ethical evaluation of AI is not just a philosophical exercise, but reflects evolving expectations about the role technology should play in human life.

AI ethics as early-warning governance for social impact

AI ethics functions as a critical early-warning system for society. Ethical principles anticipate harms that might otherwise go unnoticed, from systemic bias to privacy violations. By highlighting potential consequences, ethics enables organisations to act proactively, reducing the likelihood of crises and improving public trust. 

Moreover, ethics ensures that long-term impacts, including societal cohesion, equity, and fairness, are considered alongside immediate technical performance. In doing so, AI ethics bridges the gap between what AI can do and what society deems acceptable, ensuring that innovation remains aligned with moral and social norms.

The bridge between technological power and social legitimacy

AI ethics remains the essential bridge between technological power and social legitimacy. Embedding ethical reflection into AI development ensures that innovation is not only technically effective but also socially sustainable, trustworthy, and accountable. 

Yet a growing tension defines the next phase of this evolution: the accelerating pace of innovation often outstrips the slower processes of ethical deliberation and regulation, raising questions about who sets the norms and how quickly societies can adapt.

Rather than acting solely as a safeguard, ethics is increasingly becoming a strategic dimension of technological leadership, shaping public trust, market adoption, and even geopolitical influence in the global race for AI. The rise of AI ethics, therefore, signals more than a moral awakening, reflecting a structural shift in how technological progress is evaluated and legitimised.

As AI continues to integrate into everyday life, ethical frameworks will determine not only how systems function, but also whether they are accepted as part of the social fabric. Aligning innovation with societal values is no longer optional but the condition under which AI can sustain legitimacy, unlock its full potential, and remain a transformative force that benefits society as a whole.

Parliament deadlock leaves EU chat-scanning extension in doubt

The civil liberties committee failed to secure majority backing for its amended report on extending the EU’s temporary chat-scanning rules, leaving Parliament without a clear negotiating position.

Members of Parliament reviewed the amendments on Monday, but the final text did not garner sufficient support, leaving the proposal without endorsement as the adoption deadline approaches.

The report concerns a proposal to extend the current derogation that allows tech companies to voluntarily scan their services for Child Sexual Abuse Material (CSAM).

The existing regime expires in April 2026 and was intended only as a stopgap while a permanent Child Sexual Abuse Regulation was developed. Years of stalled negotiations have led to the temporary rules being extended twice since 2021.

The Council has already approved its position without changes to the Commission proposal, creating a tight timeline for Parliament.

With trilogue talks finally underway, the institutions would need to conclude discussions unusually quickly to prevent the legal basis from expiring. If no agreement is reached by April, companies would lose their ability to scan services under EU law.

The committee confirmed that the file will now move to plenary in the week of 9–12 March, where political groups may table new amendments. The outcome will determine whether the temporary regime remains in place while negotiations on the permanent system continue.
