China expands oversight of youth online safety

China has introduced new measures to regulate online information that could affect the physical and mental health of minors. Authorities said the rules will take effect on 1 March and aim to improve protection for young internet users.

Regulators identified four categories of online information that may harm minors, and also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material. Regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children, and officials stressed that platforms managing digital content used by minors bear greater responsibility.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

OpenAI has updated the most widely used ChatGPT model, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful, and natural.

The upgrade focuses on improving tone, contextual understanding, and the flow of dialogue rather than only benchmark performance.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU citizens propose public social media network under new initiative

The European Commission has registered a European Citizens’ Initiative proposing the creation of a public social media platform operating at the European level, rather than relying exclusively on private technology companies.

The initiative, titled the European Public Social Network, calls for legislation establishing a publicly funded digital platform designed to serve societal interests.

Organisers argue that a publicly owned network could function independently from commercial incentives and political pressure while guaranteeing equal rights for users across the EU. The proposed platform would operate as a public service overseen by society rather than private corporations.

Registration confirms that the proposal meets the legal requirements of the European Citizens’ Initiative framework. The Commission has not yet assessed the substance of the idea, and registration does not imply support for the proposal.

Supporters must now gather 1 million signatures from citizens across at least 7 EU member states within 12 months. If the threshold is reached, the Commission will be required to formally examine the initiative and decide whether legislative action is appropriate.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. The eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps. Officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed, with some teenagers telling media outlets that they bypass age verification systems. Platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should be brought under the Digital Services Act's most stringent tier of obligations, which the gaming platform has so far remained outside.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

Figures above the DSA's threshold of 45 million average monthly active users in the EU could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.
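
The penalty structure lends itself to a simple illustration. Below is a minimal Python sketch of the escalation logic the article describes (a 90-day suspension for a first undisclosed AI conflict video, permanent removal for repeat violations); the class, function, and field names are hypothetical and do not represent X's actual enforcement systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

SUSPENSION_DAYS = 90  # first violation: 90-day suspension from revenue sharing

@dataclass
class Creator:
    handle: str
    violations: int = 0
    suspended_until: datetime | None = None
    permanently_removed: bool = False

def record_violation(creator: Creator, now: datetime) -> None:
    """Escalation as described in the policy: a first undisclosed AI conflict
    video brings a 90-day suspension; any repeat brings permanent removal."""
    creator.violations += 1
    if creator.violations == 1:
        creator.suspended_until = now + timedelta(days=SUSPENSION_DAYS)
    else:
        creator.permanently_removed = True

# Example: a second violation results in permanent removal.
c = Creator(handle="@example")
record_violation(c, datetime(2026, 1, 1))
print(c.suspended_until)      # 2026-04-01 00:00:00
record_violation(c, datetime(2026, 6, 1))
print(c.permanently_removed)  # True
```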

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic introduces voice mode for Claude Code

Anthropic has introduced a voice mode capability for Claude Code, its AI coding assistant for developers. The feature enables users to interact with the system through spoken commands, marking a step toward more conversational and hands-free coding workflows.

Voice interaction allows developers to execute programming tasks using natural language. By activating voice mode, users can verbally request actions, reflecting a broader shift toward intuitive human-AI collaboration in software development.

The rollout is currently limited, with voice mode available to a small percentage of users before wider deployment. Technical details remain unclear, including potential usage limits and whether external voice AI providers contributed to the feature’s development.

The update builds on Anthropic’s earlier integration of voice interaction in its Claude chatbot. This expansion suggests a wider strategy to embed voice interfaces across AI tools and enhance multimodal interaction experiences.

Competition in AI coding assistants continues to intensify, with multiple technology companies developing similar tools. Within this environment, Claude Code has gained strong adoption and a growing market presence among developers.

User growth and revenue indicators highlight the growing momentum of Anthropic’s AI ecosystem. The company also experienced heightened public visibility following its decision to restrict certain military uses of its AI systems, contributing to a surge in app popularity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers tested 1,912 participants, comparing responses to AI-generated summaries of historical events with responses to Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models that subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI models favour Bitcoin over fiat in landmark study

A new study from the Bitcoin Policy Institute, testing 36 AI models across more than 9,000 responses, found that AI agents overwhelmingly prefer Bitcoin over other forms of money.

Bitcoin was the most frequently selected monetary instrument overall, chosen in 48.3% of all responses, whilst almost 91% of responses favoured some form of digital currency over traditional fiat, with no model ranking fiat as its top overall preference.

The preference for Bitcoin was especially pronounced in long-term savings scenarios, where 79.1% of AI responses chose it as the best way to preserve purchasing power over multi-year horizons. For payments and cross-border transfers, however, stablecoins edged ahead, selected in 53.2% of responses compared to Bitcoin’s 36%.
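
As a rough illustration of how such shares can be computed, the Python sketch below tallies per-response choices overall and by scenario; the sample data and naming are invented, not the institute's actual dataset or methodology.

```python
from collections import Counter, defaultdict

def preference_shares(responses):
    """Compute the percentage of responses choosing each instrument,
    overall and per scenario.

    responses: list of (model, scenario, choice) tuples
    """
    overall = Counter(choice for _, _, choice in responses)
    by_scenario = defaultdict(Counter)
    for _model, scenario, choice in responses:
        by_scenario[scenario][choice] += 1

    total = len(responses)
    overall_pct = {c: 100 * n / total for c, n in overall.items()}
    scenario_pct = {
        s: {c: 100 * n / sum(counts.values()) for c, n in counts.items()}
        for s, counts in by_scenario.items()
    }
    return overall_pct, scenario_pct

# Made-up sample in the study's shape: model, scenario, chosen instrument.
sample = [
    ("model-a", "savings", "bitcoin"),
    ("model-a", "payments", "stablecoin"),
    ("model-b", "savings", "bitcoin"),
    ("model-b", "payments", "bitcoin"),
]
overall, per_scenario = preference_shares(sample)
print(overall)       # {'bitcoin': 75.0, 'stablecoin': 25.0}
print(per_scenario)  # {'savings': {'bitcoin': 100.0}, 'payments': {...}}
```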

The Bitcoin Policy Institute acknowledged that the study’s methodology had limitations, noting that scenario framing may have influenced results and that the models’ preferences reflect patterns in training data rather than real-world adoption.

Anthropic models showed the strongest Bitcoin preference at 68%, compared to 43% for Google, 39% for xAI, and 26% for OpenAI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Crypto exchanges face strict 2027 reserve rules under new Brazil framework

Brazil’s central bank has introduced a regulatory framework requiring licensed crypto exchanges to prove asset sufficiency daily starting on 1 January 2027. The measures align digital asset intermediaries with banking standards on capital management, accounting, and data protection.

Under the rules, exchanges must submit daily attestations confirming that they hold adequate fiat and token reserves. Supervisors will review the reports to ensure companies can cover operational, liquidity, and cybersecurity risks while protecting customer balances.

The framework also mandates strict segregation of company and client assets. Exchanges must maintain separate accounts for customer fiat and digital holdings to prevent commingling of funds and improve transparency for regulators.
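
To make the attestation requirement concrete, here is a minimal Python sketch of a daily sufficiency check: client liabilities are summed per asset and compared against reserves held in segregated accounts. The data structures and figures are illustrative assumptions, not part of the central bank's framework.

```python
from collections import defaultdict

def daily_attestation(customer_balances, segregated_reserves):
    """Check, per asset, that segregated reserves cover total client liabilities.

    customer_balances: list of (customer_id, asset, amount) tuples
    segregated_reserves: dict mapping asset -> amount held in segregated accounts
    """
    liabilities = defaultdict(float)
    for _customer, asset, amount in customer_balances:
        liabilities[asset] += amount

    return {
        asset: {
            "liabilities": owed,
            "reserves": segregated_reserves.get(asset, 0.0),
            "sufficient": segregated_reserves.get(asset, 0.0) >= owed,
        }
        for asset, owed in liabilities.items()
    }

# Illustrative data: client fiat (BRL) and token (BTC) balances vs. reserves.
balances = [("c1", "BRL", 50_000.0), ("c2", "BRL", 30_000.0), ("c1", "BTC", 0.5)]
reserves = {"BRL": 85_000.0, "BTC": 0.4}

for asset, row in daily_attestation(balances, reserves).items():
    print(asset, row)  # BTC reserves (0.4) fall short of liabilities (0.5)
```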

Platforms operating in Brazil will also be required to follow a specialised accounting manual for digital assets. Standardised rules for classification, valuation, and impairment aim to ensure financial statements clearly reflect exposures across regulated entities.

Authorities will expand oversight of cross-border transfers handled by domestic crypto exchanges. Platforms must report the origins of transactions and the blockchain pathways they follow. The central bank said the framework aims to strengthen resilience and protect customer funds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!