TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in America by separating its US business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

House of Lords backs social media ban for under-16s

The House of Lords, the upper house of the UK Parliament, has voted in favour of banning under-16s from social media platforms, backing an amendment to the government’s schools bill by 261 votes to 150. The proposal would require ministers to define restricted platforms and enforce robust age verification within a year.

Political momentum for tighter youth protections has grown after Australia’s similar move, with cross-party support emerging at Westminster. More than 60 Labour MPs have joined Conservatives in urging a UK ban, increasing pressure ahead of a Commons vote.

Supporters argue that excessive social media use contributes to declining mental health, online radicalisation, and classroom disruption. Critics warn that a blanket ban could push teenagers toward less regulated platforms and deny them the genuine benefits of social media, urging more vigorous enforcement of existing safety rules instead.

The government has rejected the amendment and launched a three-month consultation on age checks, curfews, and curbing compulsive online behaviour. Ministers maintain that further evidence is needed before introducing new legal restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Davos roundtable calls for responsible AI growth

Leaders from the tech industry, academia, and policy circles met at a TIME100 roundtable in Davos, Switzerland, on 21 January to discuss how to pursue rapid AI progress without sacrificing safety and accountability. The conversation, hosted by TIME CEO Jessica Sibley, focused on how AI should be built, governed, and used as it becomes more embedded in everyday life.

A major theme was the impact of AI-enabled technology on children. Jonathan Haidt, an NYU Stern professor and author of The Anxious Generation, argued that the key issue is not total avoidance but the timing and habits of exposure. He suggested children do not need smartphones until at least high school, emphasising that delaying access can help protect brain development and executive function.

Yoshua Bengio, a professor at the Université de Montréal and founder of LawZero, said responsible innovation depends on a deeper scientific understanding of AI risks and stronger safeguards built into systems from the start. He pointed to two routes: consumer and societal demand for ‘built-in’ protections, and government involvement that could include indirect regulation through liability frameworks, such as requiring insurance for AI developers and deployers.

Participants also challenged the idea that geopolitical competition should justify weaker guardrails. Bengio argued that even rivals share incentives to prevent harmful outcomes, such as AI being used for cyberattacks or the development of biological weapons, and said coordination between major powers is possible, drawing a comparison to Cold War-era cooperation on nuclear risk reduction.

The roundtable linked AI risks to lessons from social media, particularly around attention-driven business models. Bill Ready, CEO of Pinterest, said engagement optimisation can amplify divisions and ‘prey’ on negative human impulses, and described Pinterest’s shift away from maximising view time toward maximising user outcomes, even if it hurts short-term metrics.

Several speakers argued that today’s alignment approach is too reactive. Stanford computer scientist Yejin Choi warned that models trained on the full internet absorb harmful patterns and then require patchwork fixes, urging exploration of systems that learn moral reasoning and human values more directly from the outset.

Kay Firth-Butterfield, CEO of Good Tech Advisory, added that wider AI literacy, shaped by input from workers, parents, and other everyday users, should underpin future certification and trust in AI tools.

Diplo is live reporting on all sessions from the World Economic Forum 2026 in Davos.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube’s 2026 strategy places AI at the heart of moderation and monetisation

As announced yesterday, YouTube is expanding its response to synthetic media by introducing experimental likeness detection tools that allow creators to identify videos where their face appears altered or generated by AI.

The system, modelled conceptually on Content ID, scans newly uploaded videos for visual matches linked to enrolled creators, enabling them to review content and pursue privacy or copyright complaints when misuse is detected.

Participation requires identity verification through government-issued identification and a biometric reference video, positioning facial data as both a protective and governance mechanism.
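
For illustration only, the sketch below shows the general shape of embedding-based likeness matching: faces detected in a newly uploaded video are compared against a reference embedding captured during creator enrolment. YouTube has not published its pipeline, so the function names, signatures, and threshold here are assumptions, not the platform’s actual method.

```python
# Illustrative sketch only: YouTube has not published its likeness-detection
# pipeline. This shows generic embedding-based face matching, where faces
# found in a new upload are compared against a creator's enrolled reference
# embedding. All names, signatures, and the threshold are hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_possible_likeness(
    frame_embeddings: list[np.ndarray],  # embeddings of faces in the upload
    enrolled_embedding: np.ndarray,      # creator's reference embedding
    threshold: float = 0.85,             # assumed match threshold
) -> bool:
    """Return True if any detected face closely matches the enrolled creator."""
    return any(
        cosine_similarity(e, enrolled_embedding) >= threshold
        for e in frame_embeddings
    )
```

A production system would add aggregation across frames, quality filtering, and human review before any video is surfaced to a creator for action.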

While the platform stresses consent and limited scope, the approach reflects a broader shift towards biometric enforcement as platforms attempt to manage deepfakes, impersonation, and unauthorised synthetic content at scale.

Alongside likeness detection, YouTube’s 2026 strategy places AI at the centre of content moderation, creator monetisation, and audience experience.

AI tools already shape recommendation systems, content labelling, and automated enforcement, while new features aim to give creators greater control over how their image, voice, and output are reused in synthetic formats.

The move highlights growing tensions between creative empowerment and platform authority, as safeguards against AI misuse increasingly rely on surveillance, verification, and centralised decision-making.

As regulators debate digital identity, biometric data, and synthetic media governance, YouTube’s model signals how private platforms may effectively set standards ahead of formal legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Snapchat settles social media addiction lawsuit as landmark trial proceeds

Snapchat’s parent company has settled a social media addiction lawsuit in California just days before the first major trial examining platform harms was set to begin.

The agreement removes Snapchat from one of the three bellwether cases consolidating thousands of claims, while Meta, TikTok and YouTube remain defendants.

These lawsuits mark a legal shift away from debates over user content and towards scrutiny of platform design choices, including recommendation systems and engagement mechanics.

A US judge has already ruled that such features may be responsible for harm, opening the door to liability that Section 230 protections may not cover.

Legal observers compare the proceedings to historic litigation against tobacco and opioid companies, warning of substantial damages and regulatory consequences.

A ruling against the remaining platforms could force changes in how social media products are designed, particularly in relation to minors and mental health risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT introduces age prediction to strengthen teen safety

OpenAI is introducing new safeguards in ChatGPT, using age prediction to identify accounts that may belong to under-18s. Extra protections limit flagged accounts’ exposure to harmful content while still allowing adults full access.

The age prediction model analyses behavioural and account-level signals, including usage patterns, activity times, account age, and stated age information. OpenAI says these indicators help estimate whether an account belongs to a minor, enabling the platform to apply age-appropriate safeguards.
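
As a purely illustrative sketch of how such signals might be combined, the example below maps a few account-level features to a single under-18 likelihood with a logistic function. OpenAI has not disclosed its model; every feature, weight, and threshold here is invented.

```python
# Hypothetical sketch: OpenAI has not published its age-prediction model.
# This shows one generic way behavioural and account-level signals could
# be combined into an under-18 likelihood. Features and weights are invented.
import math
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int | None    # self-reported age, if provided
    account_age_days: int     # how long the account has existed
    school_hours_gap: float   # 0..1, drop in activity during school hours

def under18_score(s: AccountSignals) -> float:
    """Map signals to a probability-like score via a logistic function."""
    z = -1.0                              # bias term (invented)
    if s.stated_age is not None and s.stated_age < 18:
        z += 3.0                          # a stated minor age dominates
    z += 1.5 * s.school_hours_gap         # quieter during school hours
    z -= 0.002 * s.account_age_days       # long-lived accounts skew adult
    return 1.0 / (1.0 + math.exp(-z))

# Accounts scoring above a policy threshold would receive teen safeguards,
# with misclassified adults able to restore access via age verification.
```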

When an account is flagged as potentially under 18, ChatGPT limits access to graphic violence, sexual role play, viral challenges, self-harm, and unhealthy body image content. The safeguards reflect research on teen development, including differences in risk perception and impulse control.

ChatGPT users who are incorrectly classified can restore full access by confirming their age through a selfie check using Persona, a secure identity verification service. Account holders can review safeguards and begin the verification process at any time via the settings menu.

Parental controls allow further customisation, including quiet hours, feature restrictions, and notifications for signs of distress. OpenAI says the system will continue to evolve, with EU-specific deployment planned in the coming weeks to meet regional regulatory requirements.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK toy industry trends show promising market recovery amid social media challenges

UK toy industry trends show a recovering market, but the sector faces challenges from social media regulations aimed at children.

After Australia introduced a ban on social media for under-16s, UK toy sellers are monitoring the possibility of similar policies.

The UK toy market is rebounding, with sales value rising 6 percent last year, the first growth since 2020. Despite cost-of-living pressures, families continue to prioritise spending on toys, especially during holidays like Christmas.

A major driver of UK toy industry trends is the growth of the ‘kidult’ market. Older children and adults now account for around 30 percent of toy sales and spend more on items such as Lego sets, collectable figurines, and pop-culture merchandise. That shift shows that the sector is no longer reliant solely on younger children.

Social media shapes UK toy industry trends, as platforms promote toys from films, games, music, and sports, with franchises like Pokémon and Minecraft driving consumer interest.

Potential social media restrictions could force the industry to adapt, relying more on in-store promotions, traditional media, or franchise collaborations. The sector must balance child-protection policies with its growing dependence on digital platforms to maintain growth.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UNESCO raises alarm over government use of internet shutdowns

Yesterday, UNESCO expressed growing concern over the expanding use of internet shutdowns by governments seeking to manage political crises, protests, and electoral periods.

Recent data indicate that more than 300 shutdowns occurred in over 54 countries during the past two years, with 2024 recorded as the most severe year since 2016.

According to UNESCO, restricting online access undermines the universal right to freedom of expression and weakens citizens’ ability to participate in social, cultural, and political life.

Access to information remains essential not only for democratic engagement but also for rights linked to education, assembly, and association, particularly during moments of instability.

Internet disruptions also place significant strain on journalists, media organisations, and public information systems that distribute verified news.

Instead of improving public order, shutdowns fracture information flows and contribute to the spread of unverified or harmful content, increasing confusion and mistrust among affected populations.

UNESCO continues to call on governments to adopt policies that strengthen connectivity and digital access rather than imposing barriers.

The organisation argues that maintaining open and reliable internet access during crises remains central to protecting democratic rights and safeguarding the integrity of information ecosystems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will either have their social media use monitored or restricted through a research app limiting access to major platforms to one hour per day and imposing a night-time curfew.

Messaging services such as WhatsApp will remain available, reflecting their role in family communication.
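
A minimal sketch of the kind of rule such a research app could enforce is shown below: a one-hour daily budget for restricted platforms plus a night-time curfew, as the article describes. The study’s actual app and its exact curfew hours have not been published, so the time window used here is an assumption.

```python
# Minimal sketch of the restriction logic described in the article: a daily
# one-hour budget for major platforms plus a night-time curfew. The research
# app's real implementation and curfew hours are unpublished; the 22:00-07:00
# window below is an assumption for illustration.
from datetime import datetime, time

DAILY_LIMIT_SECONDS = 60 * 60   # one hour of access per day
CURFEW_START = time(22, 0)      # assumed curfew start
CURFEW_END = time(7, 0)         # assumed curfew end (wraps past midnight)

def access_allowed(now: datetime, used_today_seconds: int) -> bool:
    """Allow access only outside curfew hours and within the daily budget."""
    t = now.time()
    in_curfew = t >= CURFEW_START or t < CURFEW_END
    return (not in_curfew) and used_today_seconds < DAILY_LIMIT_SECONDS
```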

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may breach the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!