UK study tests social media restrictions on children’s mental health

A major UK research project will examine how restricting social media use affects children’s mental health, sleep, and social lives, as governments debate tougher rules for under-16s.

The trial involves around 4,000 pupils from 30 secondary schools in Bradford and represents one of the first large-scale experimental studies of its kind.

Participants aged 12 to 15 will either have their social media use monitored or restricted through a research app limiting access to major platforms to one hour per day and imposing a night-time curfew.

Messaging services such as WhatsApp will remain available, reflecting their role in family communication.

Researchers from the University of Cambridge and the Bradford Centre for Health Data Science will assess changes in anxiety, depression, sleep patterns, bullying, and time spent with friends and family.

Entire year groups within each school will experience the same conditions to capture social effects across peer networks rather than isolated individuals.

The findings, expected in summer 2027, arrive as UK lawmakers consider proposals for a nationwide ban on social media use by under-16s.

Although independent from government policy debates, the study aims to provide evidence to inform decisions in the UK and other countries weighing similar restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU considers further action against Grok over AI nudification concerns

The European Commission has signalled readiness to escalate action against Elon Musk’s AI chatbot Grok, following concerns over the spread of non-consensual sexualised images on the social media platform X.

EU tech chief Henna Virkkunen told Members of the European Parliament that existing digital rules allow regulators to respond to risks linked to AI-driven nudification tools.

Grok has been associated with the circulation of digitally altered images depicting real people, including women and children, without consent. Virkkunen described such practices as unacceptable and stressed that protecting minors online remains a central priority for EU enforcement under the Digital Services Act.

While no formal investigation has yet been launched, the Commission is examining whether X may have breached the DSA and has already ordered the platform to retain internal information related to Grok until the end of 2026.

Commission President Ursula von der Leyen has also publicly condemned the creation of sexualised AI images without consent.

The controversy has intensified calls from EU lawmakers to strengthen regulation, with several urging an explicit ban on AI-powered nudification under the forthcoming AI Act.

The debate reflects wider international pressure on governments to address the misuse of generative AI technologies and to reinforce safeguards across digital platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cyberviolence against women rises across Europe amid deepfake abuse

Digital violence targeting women and girls is spreading across Europe, according to new research highlighting cyberstalking, surveillance and online threats as the most common reported abuses.

Digital tools have expanded opportunities for communication, yet online environments increasingly expose women to persistent harassment rather than offering safety and accountability.

Image-based abuse has grown sharply, with deepfake pornography now dominating synthetic sexual content and almost exclusively targeting women.

More than half of European countries report rising cases of non-consensual intimate image sharing, while national data show women forming a clear majority of cyberstalking and online threat victims.

Algorithmic systems accelerate the circulation of misogynistic material, creating enclosed digital spaces where abuse is normalised rather than challenged. Researchers warn that automated recommendation mechanisms can quickly spread harmful narratives, particularly among younger audiences.

Recent generative technologies have further intensified concerns by enabling sexualised image manipulation with limited safeguards.

Investigations into chatbot-generated images prompted new restrictions, yet women’s rights groups argue that enforcement and prevention still lag behind the scale of online harm.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Labour MPs press Starmer to consider UK under-16s social media ban

Pressure is growing on Keir Starmer after more than 60 Labour MPs called for a UK ban on social media use for under-16s, arguing that children’s online safety requires firmer regulation instead of voluntary platform measures.

The signatories span Labour’s internal divides, including senior parliamentarians and former frontbenchers, signalling broad concern over the impact of social media on young people’s well-being, education and mental health.

Supporters of the proposal point to Australia’s recently implemented ban as a model worth following, arguing that early evidence from its roll-out should inform UK policy rather than justify prolonged inaction.

Starmer is understood to favour a cautious approach, preferring to assess the Australian experience before endorsing legislation, as peers prepare to vote on related measures in the coming days.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California moves to halt X AI deepfakes

California has ordered Elon Musk’s AI company xAI to stop creating and sharing non-consensual sexual deepfakes immediately. The move follows a surge in explicit AI-generated images circulating on X.

Attorney General Rob Bonta said xAI’s Grok tool enabled the manipulation of images of women and children without consent. Authorities argue that such activity breaches state decency laws and a new deepfake pornography ban.

The Californian investigation began after researchers found Grok users shared more non-consensual sexual imagery than users of other platforms. xAI introduced partial restrictions, though regulators said the real-world impact remains unclear.

Lawmakers say the case highlights growing risks linked to AI image tools. California officials warned companies could face significant penalties if deepfake creation and distribution continue unchecked.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

TikTok faces perilous legal challenge over child safety concerns

British parents suing TikTok over the deaths of their children have called for greater accountability from the platform, as hearings in the case begin in the United States. One of the claimants said social media companies must be held accountable for the content shown to young users.

Ellen Roome, whose son died in 2022, said the lawsuit is about understanding what children were exposed to online.

The legal filing claims the deaths were a foreseeable result of TikTok’s design choices, which allegedly prioritised engagement over safety. TikTok has said it prohibits content that encourages dangerous behaviour.

Roome is also campaigning for proposed legislation that would allow parents to access their children’s social media accounts after a death. She said the aim is to gain clarity and prevent similar tragedies.

TikTok said it removes most harmful content before it is reported and expressed sympathy for the families. The company is seeking to dismiss the case, arguing that the US court lacks jurisdiction.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft launches Elevate for Educators programme

Elevate for Educators, launched by Microsoft, is a global programme designed to help teachers build the skills and confidence to use AI tools in the classroom. The initiative provides free access to training, credentials, and professional learning resources.

The programme connects educators to peer networks, self-paced courses, and AI-powered simulations. The aim is to support responsible AI adoption while improving teaching quality and classroom outcomes.

New educator credentials have been developed in partnership with ISTE and ASCD. Schools and education systems can also gain recognition for supporting professional development and demonstrating impact in classrooms.

AI-powered education tools within Microsoft 365 have been expanded to support lesson planning and personalised instruction. New features help teachers adapt materials to different learning needs and provide students with faster feedback.

College students will also receive free access to Microsoft 365 Premium and LinkedIn Premium Career for 12 months. The offer includes AI tools, productivity apps, and career resources to support future employment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK users can now disable Shorts autoplay with new YouTube feature

YouTube has introduced a new parental control for users in the United Kingdom that lets parents and guardians disable Shorts autoplay and continuous scrolling, addressing concerns about addictive viewing patterns and excessive screen time among children.

The feature gives families greater control over how the short-form video feed behaves, allowing users to turn off the infinite-scroll experience that keeps viewers engaged longer.

The update comes amid broader efforts by tech platforms to provide tools that support healthier digital habits, especially for younger users. YouTube says the control can help parents set limits without entirely removing access to Shorts content.

The roll-out is initially targeted at UK audiences, with the company indicating feedback will guide potential expansion. YouTube’s new off-switch reflects growing industry awareness of screen-time impacts and regulatory scrutiny around digital wellbeing features.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly where AI and children are concerned.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!