TikTok rejects end-to-end encryption citing safety concerns

TikTok will not adopt end-to-end encryption for direct messages. The company explained that using this technology could hinder safety teams’ and law enforcement’s efforts to detect harmful content in private messages, which the company believes could make users less safe online.

Encrypted messaging ensures that only the sender and recipient can read a conversation and is widely used across the social media industry. Rivals including Meta's Messenger and Instagram, as well as X, have adopted the technology, saying the protection of private communication is central to user privacy.

The issue has become more sensitive because the platform has long faced scrutiny over possible links between its parent company, ByteDance, and the government of the People’s Republic of China, something the company has repeatedly denied. Reflecting these concerns, earlier this year, US lawmakers ordered the separation of TikTok’s US operations from its global business.

The company told the BBC that encrypted messaging would make it impossible for police and platform safety teams to read direct messages when needed. TikTok said the decision was made to enhance user protection, particularly for younger users, and that it sees the ability to monitor messages as crucial for addressing harmful behaviour.

Industry analyst Matt Navarra said the platform’s decision to ‘swim against the tide’ is ‘notable’ but presents ‘challenging optics’. He noted, ‘Grooming and harassment risks are present in DMs [direct messages], so TikTok can state it is prioritising proactive safety over privacy absolutism,’ though he added that the decision ‘places TikTok out of alignment with global privacy expectations’.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Online privacy faces new pressures in the age of social media

Online privacy is eroding as digital services collect ever-growing personal data and surveillance becomes part of daily technology use. The debate has intensified as social media platforms, advertisers, and connected devices expand their ability to track behaviour, preferences, and habits.

Analysts say younger generations have adapted to this reality rather than resisting it. ‘In 2026, online privacy is a luxury, not a right,’ says Thomas Bunting, an analyst at the UK innovation think tank Nesta. He argues many people have grown up accepting data collection as a trade-off for access to online services, noting: ‘We’ve been taught how to deal with it.’

Advocates warn that the erosion of online privacy could have wider social consequences. Cybersecurity expert Prof Alan Woodward from the University of Surrey says the issue goes beyond personal privacy. ‘People should care about online privacy because it shapes who has power over their lives,’ he says, arguing that privacy is ‘about having something to protect: freedom of thought, experimentation, dissent and personal development without permanent surveillance.’

Despite a growing number of privacy tools and regulations, data exposure remains widespread. According to Statista, more than 1.35 billion people were affected by data breaches, hacks, or exposure in 2024 alone. At the same time, more than 160 countries now have privacy legislation, while users regularly encounter cookie consent prompts that govern how their data is collected online.

Experts say frustration with privacy controls reflects a broader ‘privacy paradox’, in which people express concern about data protection but rarely change their behaviour. Cisco’s Consumer Privacy Survey found that while 89% of respondents said they care about privacy, only 38% actively take steps to protect their data.

As philosopher Carissa Véliz notes, the challenge is not simply awareness but a sense of agency: ‘Mostly, people don’t feel like they have control.’ She argues that protecting privacy requires stronger regulation, responsible technology design, and cultural change, adding: ‘It’s about having [access to] the right tech, but also using it.’


Growing risks from AI meeting transcription tools

Businesses across the US and Europe are confronting new privacy risks as AI transcription tools spread through workplaces. Tools that automatically record and transcribe meetings increasingly capture sensitive conversations without clear consent.

Privacy specialists note that organisations previously focused on rules controlling what employees upload into AI systems; governance efforts are now shifting towards monitoring what AI tools record during daily work.

AI services such as Otter, Zoom transcription and Microsoft Copilot can record discussions involving performance reviews, health information and legal matters, leaving companies exposed to legal risk when third-party platforms store recordings without strict controls.

Governance teams are being urged to introduce clear rules on meeting recordings and transcript retention. Stronger policies may include consent requirements, limits on recording sensitive meetings and stricter oversight of data storage.


Council of Europe issues new guidance on AI and gender equality

Ahead of International Women’s Day on 8 March, the Council of Europe adopted two new recommendations addressing gender equality and the prevention of violence against women in the context of emerging technologies.

One recommendation targets the design and use of AI to prevent discrimination, while the other focuses on accountability for technology-facilitated violence against women and girls.

The AI recommendation advises member states on preventing discrimination throughout the lifecycle of AI systems, from development to deployment and retirement. It highlights risks like gender bias while promoting transparency, explainability, and safeguards.

Special attention is given to discrimination based on gender, race, and sexual orientation, gender identity and expression, and sex characteristics (SOGIESC).

The second recommendation sets the first international standard for addressing technology-facilitated violence against women. It outlines strategies to overcome impunity, including clearer legal frameworks, accessible reporting systems, and victim-centred approaches.

Emphasis is placed on multistakeholder engagement, trauma-informed policies, and safety-by-design in technology products to prevent digital harm.

Both recommendations reinforce the importance of combining regulation, institutional support, and public awareness to ensure technology advances equality rather than perpetuating harm.

The formal launch is scheduled for 10 June 2026 at the Palais de l’Europe in Strasbourg during an event titled ‘From standards to action: making accountability for technology-facilitated violence against women and girls a reality.’


China expands oversight of youth online safety

China has introduced new measures to regulate online information that could affect the physical and mental health of minors. Authorities said the rules will take effect on 1 March and aim to improve protection for young internet users.

Regulators identified four categories of online information that may harm minors, and have also addressed emerging risks linked to algorithmic recommendations and generative AI technologies.

The framework requires internet platforms and content creators to prevent and respond to harmful material, and regulators said companies must strengthen the monitoring and governance of content affecting minors.

Authorities said the measures are designed to create a cleaner online environment for children, and officials stressed that platforms managing digital content used by minors will bear greater responsibility.


OpenAI upgrades ChatGPT conversations with GPT-5.3 Instant

OpenAI has updated the most widely used ChatGPT model, introducing GPT-5.3 Instant to make everyday conversations more coherent, useful and natural.

The upgrade focuses on improving tone, contextual understanding and the flow of dialogue rather than only benchmark performance.

One of the main improvements concerns how the model handles refusals and safety responses. Earlier versions sometimes declined questions that could have been answered safely or delivered overly cautious explanations before responding.

GPT-5.3 Instant instead gives more direct answers while still maintaining safety constraints, reducing interruptions that previously slowed conversations.

The update also improves the way ChatGPT uses information from the web. Instead of simply summarising search results or presenting long lists of links, the model now integrates online information with its own reasoning.

Such an approach aims to produce more relevant answers that highlight key insights at the beginning of responses.

Reliability has also improved. Internal evaluations conducted by OpenAI show reductions in hallucination rates across multiple domains.

When using web sources, hallucinations dropped by roughly 26.8 percent in higher-risk fields such as medicine, law, and finance. Improvements were also recorded when the model relied only on its internal knowledge.

Beyond factual accuracy, the model is designed to feel more natural in conversation. OpenAI says the system now avoids overly preachy language, unnecessary disclaimers, and intrusive remarks that previously disrupted dialogue.

The goal is a more consistent conversational personality across updates, while maintaining the familiar user experience of ChatGPT.


Australia reviews children’s social media ban

Australia has begun reviewing its ban on social media accounts for children under 16, introduced in December 2025. The eSafety Commissioner is tracking more than 4,000 children and families to assess how the policy works in practice.

Researchers will analyse surveys, interviews and voluntary smartphone data to measure how young people interact with apps, as officials aim to understand how the ban affects children, parents and everyday online behaviour.

Early reactions have been mixed, with some teenagers telling media outlets that they bypass age verification systems, and platforms reportedly remain accessible to some minors.

Meanwhile, the UK government has launched a public consultation on potential social media restrictions for children. Policymakers in the UK are seeking views on bans, stronger age verification and limits on addictive platform features.


EU considers placing Roblox under strict Digital Services Act rules

European regulators are examining whether Roblox should be brought under the Digital Services Act's most stringent obligations rather than remaining outside the bloc's most demanding platform rules.

The European Commission began analysing the gaming platform’s reported user figures after the company disclosed roughly 48 million monthly users across the EU.

Numbers above the DSA's threshold of 45 million monthly users in the EU could qualify Roblox as a Very Large Online Platform. Such a designation would mark the first time a gaming platform enters the category alongside social media services already subject to heightened oversight.

Platforms receiving the label must conduct regular risk assessments, submit mitigation reports and demonstrate stronger safeguards for minors.

Regulatory pressure has already begun at the national level. The Dutch Authority for Consumers and Markets launched an investigation in January after concerns that children could encounter violent or sexually explicit content within Roblox games or interact with harmful actors through online features.

Designation at the EU level would transfer supervisory authority to the European Commission, enabling wider investigations and potential fines if violations occur. Officials are still verifying user data before making a formal decision, and no deadline has been announced for the process.


X suspends creators over undisclosed AI armed conflict videos

Social media platform X will suspend creators from its revenue-sharing programme if they post AI-generated videos of armed conflict without proper disclosure. The penalty lasts 90 days, with permanent removal for repeat violations.

Head of product Nikita Bier said access to authentic information during war is critical, warning that generative AI makes it easy to mislead audiences. The policy takes effect immediately.

Enforcement will combine generative AI detection tools with the platform’s Community Notes fact-checking system. X, formerly Twitter, says the move is designed to prevent creators from profiting from deceptive conflict content.

The Creator Revenue Sharing Programme allows paid X subscribers to earn advertising income from high-performing posts, but critics argue it encourages sensational material. AI-generated political misinformation and deceptive influencer promotions outside armed conflict scenarios remain unaffected by the new rule.

Financial penalties may limit incentives for the dissemination of misleading war footage, yet broader concerns about AI-driven misinformation on social media persist.


How AI training data is influencing what users believe

A new Yale study, published in PNAS Nexus, has found that AI chatbots can subtly shift users’ social and political opinions, even when asked for factual information and with no intent to persuade.

Researchers tested 1,912 participants, comparing responses to AI-generated summaries of historical events with responses to Wikipedia entries, and found measurable differences in opinion.

The culprit, researchers say, is ‘latent bias’: ideological leanings embedded in the data used to train large language models that subtly colour the framing of otherwise accurate responses.

Default summaries generated by GPT-4o consistently nudged readers towards more liberal opinions compared to Wikipedia entries, even without any deliberate prompting.

Senior author Daniel Karell warned that whilst the effects are modest in isolation, they could compound significantly for users who regularly consult chatbots for information.

Unlike Wikipedia, which makes its editorial process transparent, AI development remains largely opaque, giving the companies behind these models an unacknowledged ability to shape public opinion.
