Content moderators in Kenya are suing Meta and its former contractor, Sama, for wrongful dismissal and blacklisting after attempting to unionise. The moderators allege they were excluded from reapplying for similar roles when Meta transitioned to a new contractor, Majorel. This legal dispute sheds light on challenges faced by moderators, particularly those focusing on Ethiopia, who say they received death threats from the Oromo Liberation Army (OLA) for removing violent posts but were ignored by their employer.
According to court filings, the moderators say Sama initially dismissed their complaints, accusing them of fabricating the threats. One moderator, publicly identified by the rebels, was eventually sent to a safe house. The OLA reportedly warned moderators to stop deleting its graphic posts, escalating the atmosphere of fear among employees. Moderators claim Meta failed to address hate speech effectively, leaving them in a constant cycle of reviewing harmful content that did not breach Meta’s policies.
The case also highlights broader concerns over how Meta manages its global network of moderators tasked with handling violent and graphic content. This comes amid separate allegations that Meta allowed violent and hateful posts to proliferate during Ethiopia’s civil conflict, worsening tensions. Out-of-court settlement talks failed last year, and the legal outcomes could shape how content moderation is approached worldwide.
Meta and Sama have refrained from commenting on the latest allegations, while the OLA did not respond to requests. As the trial unfolds, it raises critical questions about accountability and workplace protections for moderators operating in volatile regions.
TikTok and its parent company, ByteDance, have filed an emergency motion with a federal appeals court to temporarily halt a US law that would force ByteDance to sell TikTok by 19 January or face a nationwide ban. The companies argue that without the delay, the popular app could shut down in the US, affecting 170 million monthly users and numerous businesses reliant on the platform.
The motion follows a decision by an appeals court panel upholding the divestment requirement. TikTok’s lawyers assert the Supreme Court should have time to review the case and highlight President-elect Donald Trump’s stated intention to prevent the ban. The incoming administration, they argue, could reconsider the law and render the case moot.
The law granting the US government authority to ban foreign-owned apps over data security concerns has faced criticism, with TikTok warning the decision could disrupt services globally. As the January deadline looms, ByteDance faces challenges in demonstrating sufficient progress toward a divestment to secure an extension, even as political and legal battles intensify.
Pavel Durov, founder of Telegram, appeared in a Paris court on 6 December to address allegations that the messaging app has facilitated criminal activity. Represented by his lawyers, Durov reportedly stated he trusted the French justice system but declined to comment further on the case.
The legal proceedings stem from charges brought against Durov in August, accusing him of running a platform that enables illicit transactions. Following his arrest at Le Bourget airport, he posted a $6 million bail and has been barred from leaving France until March 2025. If convicted, he could face up to 10 years in prison and a fine of 500,000 euros.
Industry experts fear the case against Durov reflects a broader crackdown on privacy-preserving technologies in the Web3 space. Parallels have been drawn with the arrest of Tornado Cash developer Alexey Pertsev, raising concerns over government overreach and the implications for digital privacy.
American TikTok creators are urging their followers to connect on platforms like Instagram and YouTube after a federal appeals court upheld a law that could ban TikTok in the US unless its Chinese parent company, ByteDance, sells its American operations by 19 January. The looming deadline has sparked anxiety among creators and businesses reliant on TikTok’s vast reach, which includes 170 million US users.
The platform’s popularity, especially among younger audiences, has turned it into a hub for creators, advertisers, and small businesses, with features like TikTok Shop driving significant economic activity. Some creators, like social media influencer Chris Mowrey, expressed fears about losing their livelihoods, emphasising the potential economic blow to small enterprises and content creators.
While some users are bracing for a shutdown, others remain sceptical about the ban’s likelihood, holding off on major changes until more clarity emerges. In the meantime, creators like Chris Burkett and SnipingForDom are diversifying their presence across platforms to safeguard their communities and content. For many, the uncertainty surrounding TikTok’s future is a stark reminder of the fragile nature of digital ecosystems.
The International Committee of the Red Cross (ICRC) has introduced principles for using AI in its operations, aiming to harness the technology’s benefits while protecting vulnerable populations. The guidelines, unveiled in late November, reflect the organisation’s cautious approach amid growing interest in generative AI, such as ChatGPT, across various sectors.
ICRC delegate Philippe Stoll emphasised the importance of ensuring AI tools are robust and reliable to avoid unintended harm in high-stakes humanitarian contexts. The ICRC defines AI broadly as systems that perform tasks requiring human-like cognition and reasoning, extending beyond popular large language models.
Guided by its core principles of humanity, neutrality, and independence, the ICRC prioritises data protection and insists that AI tools address real needs rather than seeking problems to solve. That approach stems from the risks posed by deploying technologies in regions poorly represented in AI training data, as highlighted by a 2022 cyberattack that exposed sensitive beneficiary information.
Collaboration with academia is central to the ICRC’s strategy. Partnerships like the Meditron project with Switzerland’s EPFL focus on AI for clinical decision-making and logistics. These initiatives aim to improve supply chain management and enhance field operations while aligning with the organisation’s principles.
Despite interest in AI’s potential, Stoll cautioned against using off-the-shelf tools unsuited to specific local challenges, underscoring the need for adaptability and responsible innovation in humanitarian work.
President-elect Donald Trump’s transition team has invited tech giants, including Google, Microsoft, Meta, Snap, and TikTok, to a mid-December meeting focused on combating online drug sales, according to a report by The Information. The meeting aims to gather insights from these companies about challenges and priorities in addressing illegal drug activity on their platforms.
Trump has pledged to tackle the fentanyl crisis, emphasising stricter measures against its flow into the US from Mexico and Canada. He has also proposed a nationwide advertising campaign to educate the public about the dangers of fentanyl. Tech companies have faced scrutiny in the past for their platforms’ roles in facilitating drug sales, with Meta under investigation and eBay recently settling a case for failing to prevent the sale of devices used to make counterfeit pills.
The transition team has not commented publicly on the meeting, but it underscores the growing intersection between technology and public health issues, particularly as the US grapples with the devastating impact of fentanyl addiction and trafficking.
A US federal appeals court has upheld a law requiring TikTok’s Chinese parent company, ByteDance, to sell its US operations by 19 January or face a nationwide ban. The ruling marks a significant win for the Justice Department, which cited national security concerns over ByteDance’s access to Americans’ data and its potential to influence public discourse. TikTok plans to appeal to the Supreme Court, hoping to block the divestment order.
The decision reflects bipartisan efforts to counter perceived threats from China, with Attorney General Merrick Garland calling it a vital step in preventing the Chinese government from exploiting TikTok. Critics, including the ACLU, argue that banning the app infringes on First Amendment rights, as 170 million Americans rely on TikTok for creative and social expression. The Chinese Embassy denounced the ruling, warning it could damage US-China relations.
Unless the ruling is overturned or the deadline extended by President Biden, the law could also set a precedent for restricting other foreign-owned apps. Meanwhile, TikTok’s rivals, such as Meta and Google, have seen gains in the wake of the decision, as advertisers prepare for potential shifts in the social media landscape.
The European Union has directed TikTok to retain data related to Romania’s elections under the Digital Services Act, citing concerns over foreign interference. The move follows pro-Russia ultranationalist Calin Georgescu’s unexpected success in the presidential race’s first round, raising alarm about coordinated social media promotion.
Declassified documents revealed TikTok’s role in amplifying Georgescu’s profile via coordinated accounts and paid algorithmic promotion, despite his claim of no campaign spending. Romania’s security agencies have flagged these efforts as ‘hybrid Russian attacks,’ accusations Russia denies.
TikTok said it is cooperating with the EU to address the concerns and pledged to help establish the facts amid the allegations. Romania’s runoff presidential vote is seen as pivotal for the country’s EU alignment.
AI startup Perplexity has expanded its publisher partnerships, adding media outlets such as the Los Angeles Times and The Independent. These new partners will benefit from a program that shares ad revenue when their content is referenced on the platform. The initiative also provides publishers with access to Perplexity’s API and analytics tools, enabling them to track content performance and trends.
The program, launched in July, has attracted notable partners from Japan, Spain, and Latin America, including Prisa Media and Newspicks. Existing collaborators include TIME, Der Spiegel, and Fortune. Perplexity highlighted the importance of diverse media representation, stating that the partnerships enhance the accuracy and depth of its AI-powered responses.
Backed by Amazon founder Jeff Bezos and Nvidia, Perplexity aims to challenge Google’s dominance in the search engine market. The company has also begun testing advertising on its platform, seeking to monetise its AI search capabilities.
Perplexity’s growth has not been without challenges. It faces lawsuits from News Corp-owned publishers, including Dow Jones and the New York Post, over alleged copyright violations. The New York Times has also issued a cease-and-desist notice, demanding the removal of its content from Perplexity’s generative AI tools.
Microsoft has introduced Copilot Vision, an AI-powered feature available in a limited US preview for users of Microsoft Edge. This experimental tool, part of the Copilot Labs program, can read web pages to answer user queries, summarise and translate content, and even assist with tasks like finding discounts or offering gaming tips. For example, it can provide recipes from a cooking site or strategic advice during an online chess game.
To address privacy concerns, Microsoft emphasises that Copilot Vision deletes all processed data at the end of each session and does not store information for model training. The feature is initially restricted to a pre-approved list of popular websites, excluding sensitive or paywalled content, though Microsoft plans to expand compatibility over time.
Microsoft’s cautious rollout reflects ongoing efforts to balance innovation with publisher concerns over AI’s use of web data. The company is collaborating with third-party publishers to ensure the tool benefits users without compromising website content or functionality.