TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both companies maintaining strict policies banning hate speech.

According to Media Matters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed they were generated with Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before Media Matters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to block offensive material at the point of generation.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI suite expands to help teachers plan and students learn

Google has unveiled a major expansion of its Gemini AI tools tailored for classroom use, launching over 30 features to support teachers and students. These updates include personalised AI-powered lesson planning, content generation, and interactive study guides.

Teachers can now create custom AI tutors, known as ‘Gems’, to assist students with specific academic needs using their own teaching materials. Google’s AI reading assistant is also gaining real-time support features through the Read Along tool in Classroom, enhancing literacy development for younger users.

Students and teachers will benefit from wider access to Google Vids, the company’s video creation app, enabling them to create instructional content and complete multimedia assignments.

Additional features aim to monitor student progress, manage AI permissions, improve data security, and streamline classroom content delivery using new Class tools.

By placing AI directly into the hands of educators, Google aims to offer more engaging and responsive learning, while keeping its tools aligned with classroom goals and policies. The rollout continues Google’s bid to take the lead in the evolving AI-driven edtech space.


Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.


Google Doppl, the new AI app, turns outfit photos into try-on videos

Google has unveiled Doppl, a new AI-powered app that lets users create short videos of themselves wearing any outfit they choose.

Instead of relying on imagination or guesswork, Doppl allows people to upload full-body photos and apply outfits seen on social media, in thrift shops, or on friends, creating animated try-ons that bring static images to life.

The app builds on Google’s earlier virtual try-on tools integrated with its Shopping Graph. Doppl pushes things further by transforming still photos into motion videos, showing how clothes flow and fit in movement.

Users can upload their full-body image or choose an AI model to preview outfits. However, Google warns that at this early stage, fit and details might not always be accurate.

Doppl is currently only available in the US for Android and iOS users aged 18 or older. While Google encourages sharing videos with friends and followers, the tool raises concerns about misuse, such as generating content using photos of others.

Google’s policy requires disclosure if someone impersonates another person, but the company admits that some abuse may occur. To address the issue, Doppl content will include invisible watermarks for tracking.

In its privacy notice, Google confirmed that user uploads and generated videos will be used to improve AI technologies and services. However, data will be anonymised and separated from user accounts before any human review is allowed.


Child safety online in 2025: Global leaders demand stronger rules

At the 20th Internet Governance Forum in Lillestrøm, Norway, global leaders, technology firms, and child rights advocates gathered to address the growing risks children face from algorithm-driven digital platforms.

The high-level session, Ensuring Child Security in the Age of Algorithms, explored the impact of engagement-based algorithmic systems on children’s mental health, cultural identity, and digital well-being.

Shivanee Thapa, Senior News Editor at Nepal Television and moderator of the session, opened with a personal note on the urgency of the issue, calling it ‘too urgent, too complex, and too personal.’

She outlined the session’s three focus areas: identifying algorithmic risks, reimagining child-centred digital systems, and defining accountability for all stakeholders.


Leanda Barrington-Leach, Executive Director of the Five Rights Foundation, delivered a powerful opening, sharing alarming data: ‘Half of children feel addicted to the internet, and more than three-quarters encounter disturbing content.’

She criticised tech platforms for prioritising engagement and profit over child safety, warning that children can stumble from harmless searches to harmful content in a matter of clicks.

‘The digital world is 100% human-engineered. It can be optimised for good just as easily as for bad,’ she said.

Norway is pushing for age limits on social media and implementing phone bans in classrooms, according to Minister of Digitalisation and Public Governance Karianne Tung.

‘Children are not commodities,’ she said. ‘We must build platforms that respect their rights and wellbeing.’

Salima Bah, Sierra Leone’s Minister of Science, Technology, and Innovation, raised concerns about cultural erasure in algorithmic design. ‘These systems often fail to reflect African identities and values,’ she warned, noting that a significant portion of internet traffic in Sierra Leone flows through TikTok.

Bah emphasised the need for inclusive regulation that works for regions with different digital access levels.

From the European Commission, Thibaut Kleiner, Director for Future Networks at DG Connect, pointed to the Digital Services Act as a robust regulatory model.

He challenged the assumption of children as ‘digital natives’ and called for stronger age verification systems. ‘Children use apps but often don’t understand how they work — this makes them especially vulnerable,’ he said.

Representatives from major platforms described their approaches to online safety. Christine Grahn, Head of Public Policy at TikTok Europe, emphasised safety-by-design features such as private default settings for minors and the Global Youth Council.

‘We show up, we listen, and we act,’ she stated, describing TikTok’s ban on beauty filters that alter appearance as a response to youth feedback.

Emily Yu, Policy Senior Director at Roblox, discussed the platform’s Trust by Design programme and its global teen council.

‘We aim to innovate while keeping safety and privacy at the core,’ she said, noting that Roblox emphasises discoverability over personalised content for young users.

Thomas Davin, Director of Innovation at UNICEF, underscored the long-term health and societal costs of algorithmic harm, describing it as a public health crisis.

‘We are at risk of losing the concept of truth itself. Children increasingly believe what algorithms feed them,’ he warned, stressing the need for more research on screen time’s effect on neurodevelopment.

The panel agreed that protecting children online requires more than regulation alone. Co-regulation, international cooperation, and inclusion of children’s voices were cited as essential.

Davin called for partnerships that enable companies to innovate responsibly. At the same time, Grahn described a successful campaign in Sweden to help teens avoid criminal exploitation through cross-sector collaboration.

Tung concluded with a rallying message: ‘Looking back 10 or 20 years from now, I want to know I stood on the children’s side.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Parliamentarians call for stronger platform accountability and human rights protections at IGF 2025

At the 2025 Internet Governance Forum in Lillestrøm, Norway, parliamentarians from around the world gathered to share perspectives on how to regulate harmful online content without infringing on freedom of expression and democratic values. The session, moderated by Sorina Teleanu, Diplo’s Director of Knowledge, highlighted the increasing urgency for social media platforms to respond more swiftly and responsibly to harmful content. Particular attention fell on AI-generated content, which can lead to real-world consequences such as harassment, mental health issues, and even suicide.

Pakistan’s Anusha Rahman Ahmad Khan delivered a powerful appeal, pointing to cultural insensitivity and profit-driven resistance by platforms that often ignore urgent content removal requests. Representatives from Argentina, Nepal, Bulgaria, and South Africa echoed the need for effective legal frameworks that uphold safety and fundamental rights.

Argentina’s Franco Metaza, Member of Parliament of Mercosur, cited disturbing content that promotes eating disorders among young girls and detailed the tangible danger of disinformation, including an assassination attempt linked to online hate. Nepal’s MP Yogesh Bhattarai advocated for regulation without authoritarian control, underscoring the importance of constitutional safeguards for speech.

Tsvetelina Penkova, a Member of the European Parliament from Bulgaria, outlined the EU’s multifaceted digital laws, such as the Digital Services Act and the GDPR, which aim to protect users while grappling with implementation challenges across 27 diverse member states.

Youth engagement and digital literacy emerged as key themes, with several speakers emphasising that involving young people in policymaking leads to better, more inclusive policies. Panellists also stressed that education is essential for equipping users with the tools to navigate online spaces safely and critically.

Calls for multistakeholder cooperation rang throughout the session, with consensus on the need for collaboration between governments, tech companies, civil society, and international organisations. A thought-provoking proposal from a Congolese parliamentarian suggested that digital rights be recognised as a new, fourth generation of human rights—akin to civil, economic, and environmental rights already codified in international frameworks.

Other attendees welcomed the idea and agreed that without such recognition, the enforcement of digital protections would remain fragmented. The session concluded on a collaborative and urgent note, with calls for shared responsibility, joint strategies, and stronger international frameworks to create a safer, more just digital future.


MIT study links AI chatbot use to reduced brain activity and learning

A new preprint study from MIT suggests that using AI chatbots for writing tasks significantly reduces brain activity and impairs memory retention.

The research, led by Dr Nataliya Kosmyna at the MIT Media Lab, involved Boston-area students writing essays under three conditions: unaided, using a search engine, or assisted by OpenAI’s GPT-4o. Participants wore EEG headsets to monitor brain activity throughout.

Results indicated that those relying on AI exhibited the weakest neural connectivity, with up to 55% lower cognitive engagement than the unaided group. Those using search engines showed a moderate drop of up to 48%.

The researchers used Dynamic Directed Transfer Function (dDTF) to assess cognitive load and information flow across brain regions. They found that while the unaided group activated broad neural networks, AI users primarily engaged in procedural tasks with shallow encoding of information.

Participants using GPT-4o also performed worst in recall and perceived ownership of their written work. In follow-up sessions, students previously reliant on AI struggled more when the tool was removed, suggesting diminished internal processing skills.

Meanwhile, those who used their own cognitive skills earlier showed improved performance when later given AI support.

The findings suggest that early AI use in education may hinder deeper learning and critical thinking. Researchers recommend that students first engage in self-driven learning before incorporating AI tools to enhance understanding.

Dr Kosmyna emphasised that while the results are preliminary and not yet peer-reviewed, the study highlights the need for careful consideration of AI’s cognitive impact.

MIT’s team now plans to explore similar effects in coding tasks, studying how AI tools like code generators influence brain function and learning outcomes.


Meta sues AI firm over fake nude images created without consent

Meta has filed a lawsuit against Joy Timeline HK Ltd in Hong Kong, accusing the firm of using its platforms to promote a generative AI app called CrushAI.

The app allows users to digitally strip clothes from images of people, often without consent. Meta said the company repeatedly attempted to bypass ad review systems to push harmful content, advertising phrases like ‘see anyone naked’ on Facebook and Instagram.

The lawsuit follows Meta’s broader investigation into ‘nudify’ apps, which are increasingly being used to create sexualised deepfakes. Despite bans on nonconsensual explicit content, the company said such apps evade detection by disguising ads or rotating domain names after bans.

According to research by Cornell Tech, over 8,000 ads linked to CrushAI appeared on Meta platforms in recent months. Meta responded by updating its detection systems with a broader range of flagged terms and emojis.

While many of the manipulated images target celebrities, concerns are growing about the use of such technology to exploit minors. In one case in Florida, two teenagers used similar AI tools to create sexualised images of classmates.

The issue has sparked legal action in the US, where the Take It Down Act, signed into law earlier this year, criminalises the publication of nonconsensual deepfake imagery and simplifies removal processes for victims.


UK government backs AI to help teachers and reduce admin

The UK government has unveiled new guidance for schools that promotes the use of AI to reduce teacher workloads and increase face-to-face time with pupils.

The Department for Education (DfE) says AI could take over time-consuming administrative tasks such as lesson planning, report writing, and email drafting—allowing educators to focus more on classroom teaching.

The guidance, aimed at schools and colleges in the UK, highlights how AI can assist with formative assessments like quizzes and low-stakes feedback, while stressing that teachers must verify outputs for accuracy and data safety.

It also recommends using only school-approved tools and limiting AI use to tasks that support rather than replace teaching expertise.

Education unions welcomed the move but said investment is needed to make it work. Leaders from the NAHT and ASCL praised AI’s potential to ease pressure on staff and help address recruitment issues, but warned that schools require proper infrastructure and training.

The government has pledged £1 million to support AI tool development for marking and feedback.

Education Secretary Bridget Phillipson said the plan will free teachers to deliver more personalised support, adding: ‘We’re putting cutting-edge AI tools into the hands of our brilliant teachers to enhance how our children learn and develop.’


China’s AI tools disabled for gaokao exam

As millions of high school students across China began the rigorous ‘gaokao’ college entrance exam, the country’s leading tech companies took unprecedented action by disabling AI features on their popular platforms.

Apps from Tencent, ByteDance, and Moonshot AI temporarily blocked functionalities like photo recognition and real-time question answering. This move aimed to prevent students from using AI chatbots to cheat during the critical national examination, which largely dictates university admissions in China.

This year, approximately 13.4 million students are participating in the ‘gaokao,’ a multi-day test that serves as a pivotal determinant for social mobility, particularly for those from rural or lower-income backgrounds.

The immense pressure associated with the exam has historically fuelled intense test preparation. However, screenshots circulating on the Chinese social media app Rednote confirmed that AI chatbots like Tencent’s YuanBao, ByteDance’s Doubao, and Moonshot AI’s Kimi displayed messages indicating the temporary closure of exam-relevant features to ensure fairness.

China’s ‘gaokao’ exam highlights a balanced approach to AI: promoting AI education from a young age, with compulsory instruction in Beijing schools this autumn, while firmly asserting that AI is for learning, not cheating. Regulators draw a clear line, reinforcing that AI should aid development but never compromise academic integrity.

This coordinated action by major tech firms reinforces the message that AI has no place in the examination hall, despite China’s broader push to cultivate an AI-literate generation.
