Spain approves bill to regulate AI-generated content

Spain’s government has approved a bill imposing heavy fines on companies that fail to label AI-generated content, aiming to combat the spread of deepfakes.

The legislation, which aligns with the European Union’s AI Act, classifies non-compliance as a serious offence, with penalties reaching up to €35 million or 7% of a company’s global revenue.

Digital Transformation Minister Oscar Lopez stressed that AI can be a force for good but also a tool for misinformation and threats to democracy.

The bill also bans manipulative AI techniques, such as subliminal messaging targeting vulnerable groups, and restricts the use of AI-driven biometric profiling, except in cases of national security.

Spain is one of the first EU nations to implement these strict AI regulations, going beyond the looser US approach, which relies on voluntary compliance.

A newly established AI supervisory agency, AESIA, will oversee enforcement, alongside sector-specific regulators handling privacy, financial markets, and law enforcement concerns.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While Google uses automated detection to remove AI-generated child exploitation material, it does not apply the same system to extremist content.

The regulator has previously fined platforms such as X (formerly Twitter) and Telegram for failing to meet reporting requirements; both companies plan to appeal.

Tech giants challenge Australia’s exemption for YouTube

Major social media companies, including Meta, Snapchat, and TikTok, have urged Australia to reconsider its decision to exempt YouTube from a new law banning under-16s from social media platforms.

The legislation, passed in November, imposes strict age restrictions and threatens heavy fines for non-compliance. YouTube, however, is set to be excluded due to its educational value and parental supervision features.

Industry leaders argue that YouTube shares key features with other platforms, such as algorithmic content recommendations and social interaction tools, making its exemption inconsistent with the law’s intent.

Meta called for equal enforcement, while TikTok warned that excluding YouTube would create an ‘illogical, anticompetitive, and short-sighted’ regulation. Snapchat echoed these concerns, insisting that all platforms should be treated fairly.

Experts have pointed out that YouTube, like other platforms, can expose children to addictive and harmful content. The company has responded by strengthening content moderation and expanding its automated detection systems.

The debate highlights broader concerns over online safety and fair competition as Australia moves to enforce some of the world’s strictest social media regulations.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.

UK regulator scrutinises TikTok and Reddit for child privacy concerns

Britain’s privacy regulator, the Information Commissioner’s Office (ICO), has launched an investigation into the child privacy practices of TikTok, Reddit, and Imgur. The ICO is scrutinising how these platforms manage personal data and age verification for users, particularly teenagers, to ensure they comply with UK data protection laws.

The investigation focuses on TikTok’s use of data from 13- to 17-year-olds to recommend content via its algorithm. The ICO is also examining how Reddit and Imgur assess and protect the privacy of child users. If evidence of legal breaches is found, the ICO will take action, as it did in 2023 when TikTok was fined £12.7 million for mishandling data from children under 13.

Reddit has expressed a commitment to adhering to UK regulations, stating that it plans to roll out updates to meet new age-assurance requirements. TikTok and Imgur have not yet responded to requests for comment.

The investigation comes amid stricter UK legislation aimed at safeguarding children online, including measures requiring social media platforms to limit harmful content and enforce age checks to prevent underage access to inappropriate material.

Apple unveils age verification tech amid legal debates

Apple has rolled out a new feature called ‘age assurance’ to help protect children’s privacy while using apps. The technology allows parents to input their child’s age when setting up an account without disclosing sensitive information like birthdays or government IDs. Instead, parents can share a general ‘age range’ with app developers, putting them in control of what data is shared.
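
The design described here shares only a coarse age bracket with developers, never a birthday or exact age. As a rough illustration of that idea (not Apple’s actual API; the function name and bracket boundaries below are assumptions), a minimal sketch might look like this:

```python
# A minimal sketch of the 'age assurance' idea: the platform keeps the exact
# age a parent supplies, and an app developer only ever receives a coarse
# range. The bracket boundaries and function name are illustrative
# assumptions, not Apple's actual API.
def age_range_for(age: int) -> str:
    """Map an exact age to the coarse range shared with a developer."""
    if age < 13:
        return "under_13"
    if age < 16:
        return "13_15"
    if age < 18:
        return "16_17"
    return "18_plus"


# The developer sees only the bucket, never the birthday or exact age.
print(age_range_for(14))  # -> 13_15
```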

This move comes amid growing pressure from US lawmakers, including those in Utah and South Carolina, who are considering age-verification laws for social media apps. Apple has expressed concerns about collecting sensitive personal data for such verifications, arguing it would force users to hand over unnecessary personal details to apps that don’t need them.

The age assurance tool allows parents to maintain control over their children’s data while limiting what third parties can access. Meta, which has supported legislation for app stores to verify children’s ages, welcomed the new tech as a step in the right direction, though it raised concerns about its practical implementation.

Europol busts criminal group distributing AI-generated child abuse content

Europol announced on Friday that two dozen people have been arrested for their involvement in a criminal network distributing AI-generated images of child sexual abuse. This operation marks one of the first of its kind, highlighting concerns over the use of AI in creating illegal content. Europol noted that there is currently a lack of national legislation addressing AI-generated child abuse material.

The primary suspect, a Danish national, operated an online platform where he distributed the AI-generated content he created. Users from around the world paid a ‘symbolic online payment’ to access the material. The platform has raised significant concerns about the potential misuse of AI tools for such criminal purposes.

The ongoing operation, which involves authorities from 19 countries, resulted in 25 arrests, with most occurring simultaneously on Wednesday under the leadership of Danish authorities. Europol indicated that more arrests are expected in the coming weeks as the investigation continues.

Google faces lawsuit over AI search impact on publishers

An online education company has filed a lawsuit against Google, claiming its AI-generated search overviews are damaging digital publishing.

Chegg alleges the technology reduces demand for original content by keeping users on Google’s platform, ultimately eroding financial incentives for publishers. The company warns this could lead to a weaker online information ecosystem.

Chegg, which provides textbook rentals and homework help, says Google’s AI features have contributed to a drop in traffic and subscribers.

As a result, the company is considering a sale or a move to go private. Chegg’s CEO Nathan Schultz argues Google is profiting from the company’s content without proper compensation, threatening the future of quality educational resources.

A Google spokesperson rejected the claims, insisting AI overviews enhance search and create more opportunities for content discovery. The company maintains that search traffic remains strong, with billions of clicks sent to websites daily.

However, Chegg argues that Google’s dominance in online search allows it to pressure publishers into providing data for AI summaries, leading to fewer visitors to original sites.

The lawsuit marks the first time an individual company has accused Google of antitrust violations over AI-generated search features. A similar case was previously filed on behalf of the news industry. A US judge overseeing another case involving Google’s search monopoly is handling this lawsuit as well.

Google intends to challenge the claims and is appealing a previous ruling that found it held an illegal monopoly in online search.

Bluesky teams up with IWF to tackle harmful content

Bluesky, the rapidly growing decentralised social media platform, has partnered with the UK-based Internet Watch Foundation (IWF) to combat the spread of child sexual abuse material (CSAM). As part of the collaboration, Bluesky will gain access to the IWF’s tools, which include a list of websites containing CSAM and a catalogue of digital fingerprints, or ‘hashes,’ that identify abusive images. This partnership aims to reduce the risk of users encountering illegal content while helping to keep the platform safe from such material.
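
Hash-based matching of this kind works by computing a fingerprint of each uploaded image and checking it against the list of known fingerprints before the content is published. The sketch below illustrates the general technique only; it is not Bluesky’s or the IWF’s implementation, and real deployments typically use perceptual hashes such as PhotoDNA rather than the plain SHA-256 shown here.

```python
import hashlib

# Hypothetical stand-in for a list of known-bad image fingerprints, of the
# kind the IWF supplies to member platforms. The value below is a placeholder,
# not a real entry.
KNOWN_HASHES = {
    "3f79bb7b435b05321651daefd374cd21b2d8d0e6b4b7a6c3a0e5f1d2c3b4a5d6",
}


def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an uploaded image (SHA-256 for simplicity)."""
    return hashlib.sha256(image_bytes).hexdigest()


def should_block(image_bytes: bytes) -> bool:
    """Return True if the image matches an entry on the known-hash list."""
    return fingerprint(image_bytes) in KNOWN_HASHES


# An upload pipeline would call should_block() before making an image visible.
if __name__ == "__main__":
    sample = b"example image bytes"
    print("blocked" if should_block(sample) else "allowed")
```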

Bluesky’s head of trust and safety, Aaron Rodericks, welcomed the partnership as a significant step in protecting users from harmful content. With the platform’s rapid growth—reaching over 30 million users by the end of last month—the move comes at a crucial time. In November, Bluesky announced plans to expand its moderation team to address the rise in harmful material following the influx of new users.

The partnership also highlights the growing concern over online child sexual abuse material. The IWF reported record levels of harmful content last year, with over 291,000 web pages removed from the internet. The foundation’s CEO, Derek Ray-Hill, stressed the urgency of tackling the crisis, calling for a collective effort from governments, tech companies, and society.

UK users face reduced cloud security as Apple responds to government pressure

Apple has withdrawn its Advanced Data Protection (ADP) feature for cloud backups in Britain, citing government requirements.

Users attempting to enable the encryption service now receive an error message, while existing users will eventually have to deactivate it. The move weakens iCloud security in the country, allowing authorities to access data that would otherwise have been end-to-end encrypted.

Experts warn that the change compromises user privacy and exposes data to potential cyber threats. Apple has insisted it will not create a backdoor for encrypted services, as doing so would increase security risks.

The UK government has not confirmed whether it issued a Technical Capability Notice, which could mandate such access.

Apple’s decision highlights ongoing tensions between tech companies and governments over encryption policies. Similar legal frameworks exist in countries like Australia, raising concerns that other nations could follow suit.

Security advocates argue that strong encryption is essential for protecting user privacy and safeguarding sensitive information from cybercriminals.

For more information on these topics, visit diplomacy.edu.