UK regulator scrutinises TikTok, Reddit, and Imgur over child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

For more information on these topics, visit diplomacy.edu.

Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share a general ‘age range’ with app developers, putting parents in control of what data is shared.
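
As a rough illustration of the data-minimisation idea behind this design, the sketch below keeps the exact birth date on the parent’s side and exposes only a coarse age range to an app. It is a hypothetical Python sketch of the general concept, not Apple’s actual API; the bucket boundaries and function names are assumptions.

```python
from datetime import date
from typing import Optional

# Hypothetical age buckets; Apple's actual ranges may differ.
AGE_RANGES = [(0, 12, "under 13"), (13, 15, "13-15"), (16, 17, "16-17"), (18, 200, "18+")]


def shared_age_range(birth_date: date, today: Optional[date] = None) -> str:
    """Return only a coarse age range; the exact birth date never leaves the device."""
    today = today or date.today()
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    return "unknown"


# What a hypothetical app would receive: a range, not a birthday or an ID document.
print(shared_age_range(date(2011, 6, 1), today=date(2025, 3, 1)))  # -> "13-15"
```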

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concerns about collecting sensitive personal data for such verifications, arguing that it would force users to hand over sensitive details even to apps that do not need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.

For more information on these topics, visit diplomacy.edu.

Europol busts criminal group distributing AI-generated child abuse content

Europol announced on Friday that two dozen people had been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. The operation is one of the first of its kind, highlighting concerns over the use of AI in creating illegal content. Europol noted that there is currently a lack of national legislation addressing AI-generated child abuse material.

The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world made a ‘symbolic online payment’ to access the material. The case has raised significant concerns about the potential misuse of AI tools for such criminal purposes.

The ongoing operation, which involves authorities from 19 countries, has so far resulted in 25 arrests, most of them carried out simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.

For more information on these topics, visit diplomacy.edu.

Bluesky teams up with IWF to tackle harmful content

Bluesky, the rapidly growing decentralised social media platform, has partnered with the UK-based Internet Watch Foundation (IWF) to combat the spread of child sexual abuse material (CSAM). As part of the collaboration, Bluesky will gain access to the IWF’s tools, which include a list of websites containing CSAM and a catalogue of digital fingerprints, or ‘hashes,’ that identify abusive images. This partnership aims to reduce the risk of users encountering illegal content while helping to keep the platform safe from such material.
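
In practical terms, hash-list matching works by computing a digital fingerprint of each uploaded image and checking it against a list of fingerprints of known abusive images. The Python sketch below illustrates the idea only: it uses a plain SHA-256 file hash and an in-memory set with a placeholder value, whereas real deployments such as the IWF’s services rely on perceptual hashes that also catch re-encoded or slightly altered copies, plus dedicated review and reporting workflows.

```python
import hashlib

# Illustrative only: fingerprints of known abusive images, as might be supplied
# by a hash list. The value below is a placeholder, not a real entry.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(image_bytes: bytes) -> str:
    """Compute a simple cryptographic fingerprint of an uploaded image.

    Production systems typically use perceptual hashing instead, which still
    matches images after resizing or re-encoding; SHA-256 matches exact files only.
    """
    return hashlib.sha256(image_bytes).hexdigest()


def should_block(image_bytes: bytes) -> bool:
    """Return True if the upload matches a known-bad fingerprint."""
    return fingerprint(image_bytes) in KNOWN_BAD_HASHES


# Example: screen an upload before it is published.
upload = b"raw image bytes go here"
if should_block(upload):
    print("Upload blocked and queued for review and reporting.")
else:
    print("No match against the hash list.")
```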

Bluesky’s head of trust and safety, Aaron Rodericks, welcomed the partnership as a significant step in protecting users from harmful content. The move comes at a crucial time for the rapidly growing platform, which passed 30 million users by the end of last month. In November, Bluesky announced plans to expand its moderation team to address the rise in harmful material following the influx of new users.

The partnership also highlights the growing concern over online child sexual abuse material. The IWF reported record levels of harmful content last year, with over 291,000 web pages removed from the internet. The foundation’s CEO, Derek Ray-Hill, stressed the urgency of tackling the crisis, calling for a collective effort from governments, tech companies, and society.

For more information on these topics, visit diplomacy.edu.

Australia slaps A$1 million fine on Telegram

Australia’s eSafety Commission has fined messaging platform Telegram A$1 million ($640,000) for failing to respond promptly to questions about the measures it took to prevent the spread of child sexual abuse and violent extremist material. The Commission had asked social media platforms, including Telegram, to provide details on their efforts to combat harmful content. Telegram missed the May 2024 deadline and only submitted its response in October, which led to the fine.

eSafety Commissioner Julie Inman Grant emphasised the importance of timely transparency and adherence to Australian law. Telegram, however, disagreed with the penalty, saying it had fully responded to the questions, and plans to appeal the fine, which it argues was imposed solely over its delayed response.

The fine comes amid increasing global scrutiny of Telegram, with growing concerns over its use by extremists. Australia’s spy agency recently noted that a significant portion of counter-terrorism cases involved youth, highlighting the increasing risk posed by online extremist content. If Telegram does not comply with the penalty, the eSafety Commission could pursue further legal action.

For more information on these topics, visit diplomacy.edu.

Australian kids easily bypass social media age checks

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms. The findings come ahead of a government ban, set to take effect at the end of 2025, that will prevent children under the age of 16 from using these platforms. The report draws on data from a national survey on social media use among 8- to 15-year-olds and feedback from eight major services, including YouTube, Facebook, and TikTok.

The report shows that 80% of Australian children aged 8 to 12 were using social media in 2024, with YouTube, TikTok, Instagram, and Snapchat being the most popular platforms. While most platforms, except Reddit, require users to enter their date of birth during sign-up, the report indicates that these systems rely on self-declaration, which can be easily manipulated. Despite these weaknesses, 95% of teens under 16 were found to be active on at least one of the platforms surveyed.

While some platforms, such as TikTok, Twitch, and YouTube, have introduced tools to proactively detect underage users, others have not fully implemented age verification technologies. YouTube remains exempt from the upcoming ban, allowing children under 13 to use the platform with parental supervision. However, eSafety Commissioner Julie Inman Grant stressed that there is still significant work needed to enforce the government’s minimum age legislation effectively.

The report also noted that most of the services surveyed had conducted research to improve their age verification processes. However, as the law approaches, there are increasing calls for app stores to take greater responsibility for enforcing age restrictions.

For more information on these topics, visit diplomacy.edu.

Young people rely on social media for political news

A growing number of young Europeans are turning to social media platforms like TikTok, Instagram, and YouTube as their primary news source, surpassing traditional outlets such as TV and print media. According to the latest European Parliament Youth Survey, 42% of people aged 16 to 30 rely on social media for news about politics and social issues. This shift highlights changing preferences toward fast-paced, accessible content but also raises concerns about the growing risk of disinformation among younger generations.

Younger users, especially those aged 16 to 18, are more likely to trust platforms like TikTok and Instagram, while those aged 25 to 30 tend to rely more on Facebook, online press, and radio for their news. However, the rise of social media as a news source has also led to increased exposure to fake news. A report from the Reuters Institute revealed that 27% of TikTok users struggle to identify misleading content, while Instagram has faced criticism for relaxing its fact-checking systems.

Despite being aware of the risks, young Europeans continue to engage with social media for news. A significant 76% of respondents reported encountering fake news in the past week, yet platforms like Instagram remain the most popular news sources. This trend is impacting trust in political institutions, with many young people expressing scepticism toward the EU and skipping elections due to a lack of information.

The reliance on social media for news has shifted political discourse, as fake news and AI-generated content have been used to manipulate public opinion. The constant exposure to sensationalised false information is also having psychological effects, increasing anxiety and confusion among young people and pushing some to avoid news altogether.

For more information on these topics, visit diplomacy.edu.

FTC names new technology chief as leadership shifts

Jake Denton, a former researcher at the Heritage Foundation, has been appointed as chief technology officer of the US Federal Trade Commission. He replaces Stephanie Nguyen, who had held the position since 2022. The role was first established during the Obama administration to provide insights on emerging technology challenges.

Denton steps into the role as Andrew Ferguson takes over as FTC chairman. Ferguson has voiced concerns about Big Tech’s dominance while cautioning against excessive regulation that could hinder US innovation. Denton has supported artificial intelligence legislation and has urged stronger US involvement in shaping global AI policies.

The Heritage Foundation’s Project 2025, a policy blueprint drawn up for a second Trump administration, has outlined proposals for antitrust enforcement that align with right-leaning priorities. Some suggestions have even questioned the FTC’s necessity. Meanwhile, the agency is preparing for a trial against Meta in April and is pursuing an antitrust lawsuit against Amazon.

Ferguson’s stance on ongoing FTC investigations remains unclear, including probes into Microsoft’s partnership with OpenAI and potential consumer protection issues. Trump has praised Ferguson as a leader who supports innovation, making his regulatory approach to Big Tech a key focus in the coming months.

For more information on these topics, visit diplomacy.edu.

Greece to launch AI tool for personalised education

Greece’s Ministry of Education is developing an AI-powered digital assistant aimed at helping students bridge learning gaps. Set to launch in the 2025-2026 school year, the tool will analyse student responses to exercises, identifying areas where they struggle and recommending targeted study materials. Initially focused on middle and senior high school students, it may eventually expand to lower elementary grades as well.

The AI assistant will use machine-learning algorithms to assess students’ strengths and weaknesses and tailor study plans accordingly. Integrated with Greece’s Digital Tutoring platform, it will draw on over 15,000 interactive exercises and 7,500 educational videos. Teachers will also have access to the data, allowing them to better support their students.
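
The ministry has not published implementation details, but the core idea of flagging weak topics from exercise results and recommending matching study materials can be sketched as follows. This is a simplified Python illustration under assumed inputs: the topic names, accuracy threshold, and catalogue structure are hypothetical, and a real system would likely use richer models (for example, knowledge tracing) rather than raw per-topic accuracy.

```python
from collections import defaultdict

# Hypothetical exercise results for one student: (topic, answered_correctly)
results = [
    ("fractions", True), ("fractions", False), ("fractions", False),
    ("geometry", True), ("geometry", True),
    ("algebra", False), ("algebra", False),
]

# Hypothetical catalogue of study materials per topic.
catalogue = {
    "fractions": ["video: adding fractions", "exercise set 12"],
    "algebra": ["video: solving linear equations", "exercise set 7"],
}


def weak_topics(results, threshold=0.6):
    """Flag topics where the student's accuracy falls below the threshold."""
    totals, correct = defaultdict(int), defaultdict(int)
    for topic, answered_correctly in results:
        totals[topic] += 1
        correct[topic] += int(answered_correctly)
    return [t for t in totals if correct[t] / totals[t] < threshold]


# Recommend targeted materials for each flagged topic.
for topic in weak_topics(results):
    print(topic, "->", catalogue.get(topic, ["no materials available"]))
```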

Education Minister Kyriakos Pierrakakis highlighted that the project, part of the ‘Enhancing the Digital School’ initiative, is designed to complement, not replace, traditional teaching methods. The initiative, which aims to modernise Greece’s education system, will be funded through the EU Recovery and Resilience Facility. Approval is expected in March, after which competitive bidding will begin for the project’s implementation.

For more information on these topics, visit diplomacy.edu.

Meta partners with UNESCO to improve AI language technology

Meta has launched a new initiative with UNESCO to enhance AI language recognition and translation, focusing on underserved languages. The Language Technology Partner Program invites collaborators to provide speech recordings, transcriptions, and translated texts to help train AI models. The finalised models will be open-sourced, allowing broader accessibility and research.

The government of Nunavut in Canada is among the early partners, contributing recordings in Inuktut, a language spoken by Inuit communities in the region. Meta is also releasing an open-source machine translation benchmark to evaluate AI performance across seven languages, available on Hugging Face.

While Meta presents the initiative as a philanthropic effort, improved AI language tools could benefit the company’s broader goals. Meta AI continues to expand multilingual support, including automatic translation for content creators. However, the company has faced criticism for its handling of non-English content, with reports highlighting inconsistencies in content moderation across languages.