UK watchdog launches enforcement on file-sharing services

The UK’s online safety regulator, Ofcom, has launched a new enforcement programme under the Online Safety Act (OSA), targeting file-storage and file-sharing services over concerns that they are being used to distribute child sexual abuse material (CSAM).

The regulator has identified these services as particularly vulnerable to misuse for distributing CSAM and will assess the safety measures in place to prevent such activities.

As part of the enforcement programme, Ofcom has contacted a number of file-storage and sharing services, warning them that formal information requests will be issued soon.

These requests will require the services to submit details on the measures they have implemented or plan to introduce to combat CSAM, along with risk assessments related to illegal content.

Failure to comply with the requirements of the OSA could result in substantial penalties, with fines of up to 10% of a company’s global annual turnover.

Ofcom’s crackdown highlights the growing responsibility of online services to prevent illegal content from being shared on their platforms.

FTC confirms no delay in Amazon trial

The US Federal Trade Commission (FTC) announced on Wednesday that it does not need to delay its September trial against Amazon, contradicting an earlier claim by one of its attorneys about resource shortages.

Jonathan Cohen, an FTC lawyer, retracted his statement that cost-cutting measures had strained the agency’s ability to proceed, assuring the court that the FTC is fully prepared to litigate the case.

FTC Chairman Andrew Ferguson reaffirmed the agency’s commitment, dismissing concerns over budget constraints and stating that the FTC will not back down from taking on Big Tech.

Earlier in the day, Cohen had described a ‘dire resource situation,’ citing employee resignations, a hiring freeze, and restrictions on legal expenses. However, he later clarified that these challenges would not impact the case.

The lawsuit, filed in 2023, accuses Amazon of using ‘dark patterns’ to mislead consumers into enrolling in automatically renewing Prime subscriptions, a service with more than 200 million members.

With claims exceeding $1 billion, the trial is expected to be a high-profile battle between regulators and one of the world’s largest tech companies. Amazon has denied any wrongdoing, and three of its senior executives are also named in the case.

Spain approves bill to regulate AI-generated content

Spain’s government has approved a bill imposing heavy fines on companies that fail to label AI-generated content, aiming to combat the spread of deepfakes.

The legislation, which aligns with the European Union’s AI Act, classifies non-compliance as a serious offence, with penalties of up to €35 million or 7% of a company’s global revenue.

Digital Transformation Minister Oscar Lopez stressed that while AI can be a force for good, it can also be used to spread misinformation and threaten democracy.

The bill also bans manipulative AI techniques, such as subliminal messaging targeting vulnerable groups, and restricts the use of AI-driven biometric profiling, except in cases of national security.

Spain is one of the first EU nations to implement these strict AI regulations, going beyond the looser US approach, which relies on voluntary compliance.

A newly established AI supervisory agency, AESIA, will oversee enforcement, alongside sector-specific regulators handling privacy, financial markets, and law enforcement concerns.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While Google uses automated detection to remove AI-generated child exploitation material, it does not apply the same system to extremist content.
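
For context, automated detection of this kind is typically built on hash-matching: fingerprinting uploads and comparing them against fingerprints of known illegal files. The sketch below is a minimal illustration of that idea, not Google’s implementation; it uses plain SHA-256 and a placeholder hash list, whereas production systems rely on perceptual hashes (such as PhotoDNA) that survive re-encoding.

```python
import hashlib

# Minimal sketch of hash-matching. Real systems use perceptual hashes
# that tolerate resizing and re-encoding; exact SHA-256 is used here
# only to illustrate the matching step.
KNOWN_BAD_HASHES = {
    "0" * 64,  # placeholder entry; real lists come from clearinghouses
}

def fingerprint(content: bytes) -> str:
    """Return a hex fingerprint of the raw file bytes."""
    return hashlib.sha256(content).hexdigest()

def should_block(upload: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad hash."""
    return fingerprint(upload) in KNOWN_BAD_HASHES

print(should_block(b"benign example upload"))  # False
```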

The regulator has previously fined platforms such as X (formerly Twitter) and Telegram for failing to meet reporting requirements; both companies plan to appeal.

Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share a general ‘age range’ with app developers, putting them in control of what data is shared.

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concerns about collecting sensitive personal data for such verifications, arguing that doing so would force users to hand over unnecessary details even to apps that don’t need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.
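
To illustrate the data-minimisation principle behind age assurance, here is a minimal hypothetical sketch; the function and range labels are illustrative assumptions, not Apple’s actual API. The exact birth date stays with the parent, and an app receives only a coarse age range.

```python
from datetime import date

# Hypothetical sketch of privacy-preserving age assurance: the exact
# birth date never leaves the parent's control; apps only ever receive
# a coarse age range. Ranges and names are illustrative.
AGE_RANGES = [
    (0, 12, "under 13"),
    (13, 15, "13-15"),
    (16, 17, "16-17"),
    (18, 200, "18 or over"),
]

def declared_age_range(birth: date, today: date | None = None) -> str:
    """Derive the shareable age range from a private birth date."""
    today = today or date.today()
    # Subtract one if this year's birthday has not happened yet.
    age = today.year - birth.year - ((today.month, today.day) < (birth.month, birth.day))
    for low, high, label in AGE_RANGES:
        if low <= age <= high:
            return label
    return "unknown"

# The app developer sees only the range, never the birth date or an ID.
print(declared_age_range(date(2011, 6, 1)))  # e.g. "13-15"
```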

Europol busts criminal group distributing AI-generated child abuse content

Europol announced on Friday that 25 people have been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. The operation is one of the first of its kind, highlighting concerns over the use of AI to create illegal content. Europol noted that there is currently a lack of national legislation addressing AI-generated child abuse material.

The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world paid a ‘symbolic online payment’ to access the material. The case has raised significant concerns about the potential misuse of AI tools for such criminal purposes.

The ongoing operation, which involves authorities from 19 countries, resulted in 25 arrests, with most occurring simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.

Google faces lawsuit over AI search impact on publishers

Online education company Chegg has filed a lawsuit against Google, claiming the search giant’s AI-generated overviews are damaging digital publishing.

Chegg alleges the technology reduces demand for original content by keeping users on Google’s platform, ultimately eroding financial incentives for publishers. The company warns this could lead to a weaker online information ecosystem.

Chegg, which provides textbook rentals and homework help, says Google’s AI features have contributed to a drop in traffic and subscribers.

As a result, the company is considering a sale or a move to go private. Chegg’s CEO Nathan Schultz argues Google is profiting from the company’s content without proper compensation, threatening the future of quality educational resources.

A Google spokesperson rejected the claims, insisting AI overviews enhance search and create more opportunities for content discovery. The company maintains that search traffic remains strong, with billions of clicks sent to websites daily.

However, Chegg argues that Google’s dominance in online search allows it to pressure publishers into providing data for AI summaries, leading to fewer visitors to original sites.

The lawsuit marks the first time an individual company has accused Google of antitrust violations over AI-generated search features. A similar case was previously filed on behalf of the news industry. A US judge overseeing another case involving Google’s search monopoly is handling this lawsuit as well.

Google intends to challenge the claims and is appealing a previous ruling that found it held an illegal monopoly in online search.

For more information on these topics, visit diplomacy.edu.