Meta pauses teen access to AI characters

Meta Platforms has announced a temporary pause on teenagers’ access to AI characters across its platforms, including Instagram and WhatsApp. The company said the pause will allow it to review and rebuild the feature for younger users.

Meta said the restriction will apply to users identified as minors, based either on declared ages or on its internal age-prediction systems. Teenagers will still be able to use Meta’s core AI assistant, though interactive AI characters will be unavailable.

The move comes ahead of a major child safety trial in Los Angeles involving Meta, TikTok and YouTube. The case centres on allegations that social media platforms harm children through addictive and unsafe digital features.

Concerns about AI chatbots and minors have grown across the US, prompting similar action by other companies, as regulators and courts increasingly scrutinise how AI interactions affect young users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia’s social media ban raises concern among platforms

Australia’s social media ban for under-16s is worrying the platforms it targets. According to the country’s eSafety Commissioner, the companies fear the ban could set off a global trend of similar restrictions. Regulators say major platforms initially resisted the policy, fearing that comparable rules could spread internationally.

The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.

Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.

Australia is now working with partners such as the UK to push for tougher global standards on online child safety. Companies that fail to enforce the rules effectively face fines of up to A$49.5 million.

UN warns of rising AI-driven threats to child safety

UN agencies have issued a stark warning over the accelerating risks AI poses to children online, citing rising cases of grooming, deepfakes, cyberbullying and sexual extortion.

A joint statement published on 19 January urges urgent global action, highlighting how AI tools increasingly enable predators to target vulnerable children with unprecedented precision.

Recent data underscores the scale of the threat, with technology-facilitated child abuse cases in the US surging from 4,700 in 2023 to more than 67,000 in 2024.

During the COVID-19 pandemic, online exploitation intensified, particularly affecting girls and young women, with digital abuse frequently translating into real-world harm, according to officials from the International Telecommunication Union.

Governments are tightening policies, led by Australia’s social media ban for under-16s, as the UK, France and Canada consider similar measures. UN agencies urged tech firms to prioritise child safety and called for stronger AI literacy across society.

Grok faces regulatory scrutiny in South Korea over explicit AI content

South Korea has moved towards regulatory action against Grok, the generative AI chatbot developed by xAI, following allegations that the system was used to generate and distribute sexually exploitative deepfake images.

The country’s Personal Information Protection Commission has launched a preliminary fact-finding review to assess whether violations occurred and whether the matter falls within its legal remit.

The review follows international reports accusing Grok of facilitating the creation of explicit and non-consensual images of real individuals, including minors.

Under the Personal Information Protection Act of South Korea, generating or altering sexual images of identifiable people without consent may constitute unlawful handling of personal data, exposing providers to enforcement action.

Concerns have intensified after civil society groups estimated that millions of explicit images were produced through Grok over a short period, with thousands involving children.

Governments in the US, Canada and parts of Europe have opened inquiries, while several Southeast Asian countries have opted to block access to the service altogether.

In response, xAI has introduced technical restrictions preventing users from generating or editing images of real people. Korean regulators have also demanded stronger youth protection measures from X, warning that failure to address criminal content involving minors could result in administrative penalties.

France accelerates ban on social media for under-15s

French President Emmanuel Macron has called for an accelerated legislative process to introduce a nationwide ban on social media for children under 15 by September.

Speaking in a televised address, Macron said the proposal would move rapidly through parliament so that clear rules are in place before the new school year begins.

Macron framed the initiative as a matter of child protection and digital sovereignty, arguing that foreign platforms or algorithmic incentives should not shape young people’s cognitive and emotional development.

He linked excessive social media use to manipulation, commercial exploitation and growing psychological harm among teenagers.

Data from France’s health watchdog show that almost half of teenagers spend between two and five hours a day on their smartphones, with the vast majority accessing social networks daily.

Regulators have associated such patterns with reduced self-esteem and increased exposure to content linked to self-harm, drug use and suicide, prompting legal action by families against major platforms.

The proposal from France follows similar debates in the UK and Australia, where age-based access restrictions have already been introduced.

The French government argues that decisive national action is necessary instead of waiting for a slower Europe-wide consensus, although Macron has reiterated support for a broader EU approach.

New AI startup secures $5M to transform children’s digital learning

AI education start-up Sparkli has raised $5 million in seed funding to develop an ‘anti-chatbot’ AI platform to transform how children engage with digital content.

Unlike traditional chatbots that focus on general conversation, Sparkli positions its AI as an interactive learning companion, guiding children through topics such as maths, science and language skills in a dynamic, age-appropriate format.

The funding will support product development, content creation and expansion into new markets. Founders say the platform addresses increasing concerns about passive screen time by offering educational interactions that blend AI responsiveness with curriculum-aligned activities.

The company emphasises safe design and parental controls to ensure technology supports learning outcomes rather than distraction.

Investors backing Sparkli see demand for responsible AI applications for children that can enhance cognition and motivation while preserving digital well-being. As schools and homes increasingly integrate AI tools, Sparkli aims to position itself at the intersection of educational technology and child-centred innovation.

Analysis reveals Grok generated 3 million sexualised images

A new analysis found Grok generated an estimated three million sexualised images in 11 days, including around 23,000 appearing to depict children. The findings raise serious concerns over safeguards, content moderation, and platform responsibility.

The surge followed the launch of Grok’s one-click image editing feature in late December, which quickly gained traction among users. Restrictions were later introduced, including paid access limits and technical measures to prevent image undressing.

Researchers based their estimates on a random sample of 20,000 images, extrapolating the results to the more than 4.6 million images generated during the study period. Automated tools and manual review identified sexualised content and confirmed cases involving individuals appearing to be under 18.

Campaigners have warned that the findings expose significant gaps in AI safety controls, particularly in protecting children. Calls are growing for stricter oversight, stronger accountability, and more robust safeguards before large-scale AI image deployment.

Japan arrests suspect over AI deepfake pornography

Police in Japan have arrested a man accused of creating and selling non-consensual deepfake pornography using AI tools. The Tokyo Metropolitan Police Department said thousands of manipulated images of female celebrities were distributed through paid websites.

Investigators in Japan allege the suspect generated hundreds of thousands of images over two years using freely available generative AI software. Authorities say the content was promoted on social media before being sold via subscription platforms.

The arrest follows earlier cases in Japan and reflects growing concern among police worldwide. In South Korea, law enforcement has reported hundreds of arrests linked to deepfake sexual crimes, while cases have also emerged in the UK.

European agencies, including Europol, have also coordinated arrests tied to AI-generated abuse material. Law enforcement bodies say the spread of accessible AI tools is forcing rapid changes in forensic investigation and in the handling of digital evidence.

Education for Countries programme signals OpenAI push into public education policy

OpenAI has launched the Education for Countries programme, a new global initiative designed to support governments in modernising education systems and preparing workforces for an AI-driven economy.

The programme responds to a widening gap between rapid advances in AI capabilities and people’s ability to use them effectively in everyday learning and work.

Education systems are positioned at the centre of closing that gap, as research suggests a significant share of core workplace skills will change by the end of the decade.

By integrating AI tools, training and research into schools and universities, national education frameworks can evolve alongside technological change and better equip students for future labour markets.

The programme combines access to tools such as ChatGPT Edu and advanced language models with large-scale research on learning outcomes, tailored national training schemes and internationally recognised certifications.

A global network of governments, universities and education leaders will also share best practices and shape responsible approaches to AI use in classrooms.

Initial partners include Estonia, Greece, Italy, Jordan, Kazakhstan, Slovakia, Trinidad and Tobago and the United Arab Emirates. Early national rollouts, particularly in Estonia, already involve tens of thousands of students and educators, with further countries expected to join later in 2026.

TikTok restructures operations for US market

TikTok has finalised a deal allowing the app to continue operating in the US by separating its American business from its global operations. The agreement follows years of political pressure in the US over national security concerns.

Under the arrangement, a new entity will manage TikTok’s US operations, with user data and algorithms handled inside the US. The recommendation algorithm has been licensed and will now be trained only on US user data to meet American regulatory requirements.

Ownership of TikTok’s US business is shared among American and international investors, while China-based ByteDance retains a minority stake. Oracle will oversee data security and cloud infrastructure for users in the US.

Analysts say the changes could alter how the app functions for the roughly 200 million users in the US. Questions remain over whether a US-trained algorithm will perform as effectively as the global version.
