Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from users aged 13 to 17 to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents share only a general ‘age range’ with app developers, retaining control over what data is disclosed.
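
The privacy gain comes from data minimisation: the exact birth date never leaves the family’s device, and an app sees only a coarse bucket. The Python sketch below illustrates that general idea; the bucket boundaries and function name are hypothetical, not Apple’s published API.

```python
from datetime import date

# Hypothetical age buckets for illustration only; Apple has not published
# its exact ranges in this form, and its real feature runs on-device.
AGE_RANGES = [
    (0, 12, "under 13"),
    (13, 15, "13-15"),
    (16, 17, "16-17"),
    (18, 200, "18+"),
]

def declared_age_range(birth_date: date, today: date | None = None) -> str:
    """Map a birth date (which stays private) to the coarse range an app sees."""
    today = today or date.today()
    # Whole years, subtracting one if the birthday hasn't occurred yet this year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    return "unknown"

# The developer receives only the bucket, never the birth date itself.
print(declared_age_range(date(2012, 6, 1), today=date(2025, 3, 1)))  # under 13
```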

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concerns about collecting sensitive personal data for such verification, arguing that it would force users to hand over unnecessary details even to apps that do not need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.

Europol busts criminal group distributing AI-generated child abuse content

Europol announced on Friday that 25 people have been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. The operation is among the first of its kind and highlights concerns over the use of AI to create illegal content. Europol noted that national legislation addressing AI-generated child abuse material is currently lacking.

The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world made a ‘symbolic online payment’ to access the material. The case has raised significant concerns about the potential misuse of AI tools for such criminal purposes.

The ongoing operation, which involves authorities from 19 countries, resulted in 25 arrests, with most occurring simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.

Bluesky teams up with IWF to tackle harmful content

Bluesky, the rapidly growing decentralised social media platform, has partnered with the UK-based Internet Watch Foundation (IWF) to combat the spread of child sexual abuse material (CSAM). As part of the collaboration, Bluesky will gain access to the IWF’s tools, which include a list of websites containing CSAM and a catalogue of digital fingerprints, or ‘hashes,’ that identify abusive images. This partnership aims to reduce the risk of users encountering illegal content while helping to keep the platform safe from such material.
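
Hash matching of this kind is conceptually straightforward: each upload is fingerprinted and checked against the list of known-bad fingerprints before it is accepted. The Python sketch below illustrates the pattern with a hypothetical hash set. Note that the IWF’s actual tools rely on perceptual hashes (such as PhotoDNA) that also match resized or re-encoded copies, whereas the cryptographic hash used here catches only byte-identical files, and a real match would trigger blocking and reporting rather than a silent rejection.

```python
import hashlib

# Hypothetical stand-in for a hash list such as the IWF's. Real lists use
# perceptual hashes (e.g. PhotoDNA) that also match resized or re-encoded
# copies; the cryptographic hash below only catches byte-identical files.
KNOWN_ABUSE_HASHES = {
    "placeholder-digest-1",
    "placeholder-digest-2",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a SHA-256 digest of the uploaded file."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches the list."""
    return fingerprint(image_bytes) not in KNOWN_ABUSE_HASHES
```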

Bluesky’s head of trust and safety, Aaron Rodericks, welcomed the partnership as a significant step in protecting users from harmful content. With the platform’s rapid growth—reaching over 30 million users by the end of last month—the move comes at a crucial time. In November, Bluesky announced plans to expand its moderation team to address the rise in harmful material following the influx of new users.

The partnership also highlights the growing concern over online child sexual abuse material. The IWF reported record levels of harmful content last year, with over 291,000 web pages removed from the internet. The foundation’s CEO, Derek Ray-Hill, stressed the urgency of tackling the crisis, calling for a collective effort from governments, tech companies, and society.

Australia slaps A$1 million fine on Telegram

Australia’s eSafety Commission has fined messaging platform Telegram A$1 million ($640,000) for failing to respond promptly to questions regarding measures it took to prevent child abuse and extremist content. The Commission had asked social media platforms, including Telegram, to provide details on their efforts to combat harmful content. Telegram missed the May 2024 deadline, submitting its response in October, which led to the fine.

eSafety Commissioner Julie Inman Grant emphasised the importance of timely transparency and adherence to Australian law. Telegram, however, disagreed with the penalty, stating that it had fully responded to the questions, and plans to appeal the fine, which it claims was solely due to the delay in response time.

The fine comes amid increasing global scrutiny of Telegram, with growing concerns over its use by extremists. Australia’s spy agency recently noted that a significant portion of counter-terrorism cases involved youth, highlighting the increasing risk posed by online extremist content. If Telegram does not comply with the penalty, the eSafety Commission could pursue further legal action.

Australian kids overlook social media age checks

A recent report by Australia’s eSafety regulator reveals that children in the country are finding it easy to bypass age restrictions on social media platforms. The findings come ahead of a government ban, set to take effect at the end of 2025, that will prevent children under the age of 16 from using these platforms. The report highlights data from a national survey on social media use among 8 to 15-year-olds and feedback from eight major services, including YouTube, Facebook, and TikTok.

The report shows that 80% of Australian children aged 8 to 12 were using social media in 2024, with YouTube, TikTok, Instagram, and Snapchat being the most popular platforms. Most platforms, with the exception of Reddit, require users to enter their date of birth during sign-up, but these checks rely on self-declaration and are easily manipulated. Partly as a result, 95% of teens under 16 were found to be active on at least one of the platforms surveyed.
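
A self-declaration check amounts to trusting whatever date the user types in, as the minimal Python sketch below illustrates (a hypothetical example, not any platform’s actual code); entering a false birth year defeats it entirely.

```python
from datetime import date

MINIMUM_AGE = 13  # typical self-declared sign-up minimum

def age_on(day: date, birth_date: date) -> int:
    """Age in whole years on the given day."""
    had_birthday = (day.month, day.day) >= (birth_date.month, birth_date.day)
    return day.year - birth_date.year - (0 if had_birthday else 1)

def may_sign_up(claimed_birth_date: date) -> bool:
    """Self-declared gate: trusts whatever date the user supplies."""
    return age_on(date.today(), claimed_birth_date) >= MINIMUM_AGE

# A 10-year-old who simply types a 1990 birth year sails through.
print(may_sign_up(date(1990, 1, 1)))  # True
```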

While some platforms, such as TikTok, Twitch, and YouTube, have introduced tools to proactively detect underage users, others have not fully implemented age verification technologies. YouTube remains exempt from the upcoming ban, allowing children under 13 to use the platform with parental supervision. However, eSafety Commissioner Julie Inman Grant stressed that there is still significant work needed to enforce the government’s minimum age legislation effectively.

The report also noted that most of the services surveyed had conducted research to improve their age verification processes. However, as the law approaches, there are increasing calls for app stores to take greater responsibility for enforcing age restrictions.

Young people rely on social media for political news

A growing number of young Europeans are turning to social media platforms like TikTok, Instagram, and YouTube as their primary news source, surpassing traditional outlets such as TV and print media. According to the latest European Parliament Youth Survey, 42% of people aged 16 to 30 rely on social media for news about politics and social issues. This shift highlights changing preferences toward fast-paced, accessible content but also raises concerns about the growing risk of disinformation among younger generations.

Younger users, especially those aged 16 to 18, are more likely to trust platforms like TikTok and Instagram, while those aged 25 to 30 tend to rely more on Facebook, online press, and radio for their news. However, the rise of social media as a news source has also led to increased exposure to fake news. A report from the Reuters Institute revealed that 27% of TikTok users struggle to identify misleading content, while Instagram has faced criticism for relaxing its fact-checking systems.

Despite being aware of the risks, young Europeans continue to engage with social media for news. A significant 76% of respondents reported encountering fake news in the past week, yet platforms like Instagram remain the most popular news sources. This trend is impacting trust in political institutions, with many young people expressing scepticism toward the EU and skipping elections due to a lack of information.

The reliance on social media for news has shifted political discourse, as fake news and AI-generated content have been used to manipulate public opinion. The constant exposure to sensationalised false information is also having psychological effects, increasing anxiety and confusion among young people and pushing some to avoid news altogether.

FTC names new technology chief as leadership shifts

Jake Denton, a former researcher at the Heritage Foundation, has been appointed as chief technology officer of the US Federal Trade Commission. He replaces Stephanie Nguyen, who had held the position since 2022. The role was first established during the Obama administration to provide insights on emerging technology challenges.

Denton steps into the role as Andrew Ferguson takes over as FTC chairman. Ferguson has voiced concerns about Big Tech’s dominance while cautioning against excessive regulation that could hinder US innovation. Denton has supported artificial intelligence legislation and has urged stronger US involvement in shaping global AI policies.

The Heritage Foundation’s Project 2025, a blueprint for conservative policy under the Trump administration, has outlined proposals for antitrust enforcement that align with right-leaning priorities. Some suggestions have even questioned the FTC’s necessity. Meanwhile, the agency is preparing for a trial against Meta in April and is pursuing an antitrust lawsuit against Amazon.

Ferguson’s stance on ongoing FTC investigations remains unclear, including probes into Microsoft’s partnership with OpenAI and potential consumer protection issues. Trump has praised Ferguson as a leader who supports innovation, making his regulatory approach to Big Tech a key focus in the coming months.

For more information on these topics, visit diplomacy.edu.