European regulators are investigating a previously undisclosed advertising partnership between Google and Meta that targeted teenagers on YouTube and Instagram, the Financial Times reports. The now-cancelled initiative aimed at promoting Instagram to users aged 13 to 17 allegedly bypassed Google’s policies restricting ad personalisation for minors.
The partnership, initially launched in the US with plans for global expansion, has drawn the attention of the European Commission, which has requested extensive internal records from Google, including emails and presentations, to evaluate potential violations. Google, defending its practices, stated that its safeguards for minors remain industry-leading and emphasised recent internal training to reinforce policy compliance.
This inquiry comes amid heightened concerns about the impact of social media on young users. Earlier this year, Meta introduced enhanced privacy features for teenagers on Instagram, reflecting the growing demand for stricter online protections for minors. Neither Meta nor the European Commission has commented on the investigation so far.
OpenAI has launched its text-to-video AI model, Sora, to ChatGPT Plus and Pro users, signalling a broader push into multimodal AI technologies. Initially limited to safety testers, Sora is now available as Sora Turbo at no additional cost, allowing users to create videos up to 20 seconds long in various resolutions and aspect ratios.
The move positions OpenAI to compete with similar tools from Meta, Google, and Stability AI. While the model is accessible in most regions, it remains unavailable in EU countries, the UK, and Switzerland due to regulatory considerations. OpenAI plans to introduce tailored pricing options for Sora next year.
The company emphasised safeguards against misuse, such as blocking harmful content including child exploitation material and deepfake abuse. It also plans to expand features gradually, including uploads featuring people, as it strengthens protections. Sora marks another step in OpenAI’s efforts to innovate responsibly in the AI space.
Content moderators in Kenya are suing Meta and its former contractor, Sama, for wrongful dismissal and blacklisting after attempting to unionise. The moderators allege they were excluded from reapplying for similar roles when Meta transitioned to a new contractor, Majorel. This legal dispute sheds light on challenges faced by moderators, particularly those focusing on Ethiopia, who say they received death threats from the Oromo Liberation Army (OLA) for removing violent posts but were ignored by their employer.
According to court filings, Sama initially dismissed the moderators’ complaints and accused them of fabricating the threats. One moderator, publicly identified by the rebels, was eventually moved to a safe house. The OLA reportedly warned moderators to stop deleting its graphic posts, escalating the atmosphere of fear among employees. Moderators claim Meta failed to address hate speech effectively, leaving them in a constant cycle of reviewing harmful content that did not breach Meta’s policies.
The case also highlights broader concerns over how Meta manages its global network of moderators tasked with handling violent and graphic content. This comes amid separate allegations that Meta allowed violent and hateful posts to proliferate during Ethiopia’s civil conflict, worsening tensions. Out-of-court settlement talks failed last year, and the legal outcomes could shape how content moderation is approached worldwide.
Meta and Sama have refrained from commenting on the latest allegations, while the OLA did not respond to requests. As the trial unfolds, it raises critical questions about accountability and workplace protections for moderators operating in volatile regions.
TikTok and its parent company, ByteDance, have filed an emergency motion with a federal appeals court to temporarily halt a US law that would force ByteDance to sell TikTok by 19 January or face a nationwide ban. The companies argue that without the delay, the popular app could shut down in the US, affecting 170 million monthly users and numerous businesses reliant on the platform.
The motion follows a decision by an appeals court panel upholding the divestment requirement. TikTok’s lawyers assert the Supreme Court should have time to review the case and highlight President-elect Donald Trump’s stated intention to prevent the ban. The incoming administration, they argue, could reconsider the law and render the case moot.
The law granting the US government authority to ban foreign-owned apps over data security concerns has faced criticism, with TikTok warning the decision could disrupt services globally. As the January deadline looms, ByteDance faces challenges in demonstrating sufficient progress toward a divestment to secure an extension, even as political and legal battles intensify.
Pavel Durov, founder of Telegram, appeared in a Paris court on 6 December to address allegations that the messaging app has facilitated criminal activity. Represented by his lawyers, Durov reportedly stated he trusted the French justice system but declined to comment further on the case.
The legal proceedings stem from charges brought against Durov in August, accusing him of running a platform that enables illicit transactions. Following his arrest at Le Bourget airport, he posted a $6 million bail and has been barred from leaving France until March 2025. If convicted, he could face up to 10 years in prison and a fine of 500,000 euros.
Industry experts fear the case against Durov reflects a broader crackdown on privacy-preserving technologies in the Web3 space. Parallels have been drawn with the arrest of Tornado Cash developer Alexey Pertsev, raising concerns over government overreach and the implications for digital privacy.
American TikTok creators are urging their followers to connect on platforms like Instagram and YouTube after a federal appeals court upheld a law that could ban TikTok in the US unless its Chinese parent company, ByteDance, sells its American operations by 19 January. The looming deadline has sparked anxiety among creators and businesses reliant on TikTok’s vast reach, which includes 170 million US users.
The platform’s popularity, especially among younger audiences, has turned it into a hub for creators, advertisers, and small businesses, with features like TikTok Shop driving significant economic activity. Some creators, like social media influencer Chris Mowrey, expressed fears about losing their livelihoods, emphasising the potential economic blow to small enterprises and content creators.
While some users are bracing for a shutdown, others remain sceptical about the ban’s likelihood, holding off on major changes until more clarity emerges. In the meantime, creators like Chris Burkett and SnipingForDom are diversifying their presence across platforms to safeguard their communities and content. For many, the uncertainty surrounding TikTok’s future is a stark reminder of the fragile nature of digital ecosystems.
The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.
ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.
Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than being solutions in search of a problem. That approach stems from the risks posed by deploying technologies in regions poorly represented in AI training data, as highlighted by a 2022 cyberattack that exposed sensitive beneficiary information.
Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.
Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.
President-elect Donald Trump’s transition team has invited tech giants, including Google, Microsoft, Meta, Snap, and TikTok, to a mid-December meeting focused on combating online drug sales, according to a report by The Information. The meeting aims to gather insights from these companies about challenges and priorities in addressing illegal drug activity on their platforms.
Trump has pledged to tackle the fentanyl crisis, emphasising stricter measures against its flow into the US from Mexico and Canada. He has also proposed a nationwide advertising campaign to educate the public about the dangers of fentanyl. Tech companies have faced scrutiny in the past for their platforms’ roles in facilitating drug sales, with Meta under investigation and eBay recently settling a case for failing to prevent the sale of devices used to make counterfeit pills.
The transition team has not commented publicly on the meeting, but it underscores the growing intersection between technology and public health issues, particularly as the US grapples with the devastating impact of fentanyl addiction and trafficking.
A US federal appeals court has upheld a law requiring TikTok’s Chinese parent company, ByteDance, to sell its US operations by 19 January or face a nationwide ban. The ruling marks a significant win for the Justice Department, citing national security concerns over ByteDance’s access to Americans’ data and its potential to influence public discourse. TikTok plans to appeal to the Supreme Court, hoping to block the divestment order.
The decision reflects bipartisan efforts to counter perceived threats from China, with Attorney General Merrick Garland calling it a vital step in preventing the Chinese government from exploiting TikTok. Critics, including the ACLU, argue that banning the app infringes on First Amendment rights, as 170 million Americans rely on TikTok for creative and social expression. The Chinese Embassy denounced the ruling, warning it could damage US-China relations.
Unless the ruling is overturned on appeal or the deadline extended by President Biden, the law could also set a precedent for restricting other foreign-owned apps. Meanwhile, TikTok’s rivals, such as Meta and Google, have seen gains in the wake of the decision, as advertisers prepare for potential shifts in the social media landscape.
The European Union has directed TikTok to retain data related to Romania’s elections under the Digital Services Act, citing concerns over foreign interference. The move follows pro-Russia ultranationalist Calin Georgescu’s unexpected success in the presidential race’s first round, raising alarm about coordinated social media promotion.
Declassified documents revealed TikTok’s role in amplifying Georgescu’s profile through coordinated accounts and paid promotion, despite his claim of no campaign spending. Romania’s security agencies have flagged these efforts as ‘hybrid Russian attacks’, accusations Russia denies.
TikTok said it is cooperating with the EU to address the concerns and pledged to help establish the facts amid the allegations. Romania’s runoff presidential vote is seen as pivotal for the country’s EU alignment.