Meta to test eBay integration on Facebook Marketplace

Meta is set to trial a new feature allowing users in Germany, France, and the United States to browse eBay listings directly on Facebook Marketplace. Transactions will still be completed on eBay’s platform, but the integration aims to provide Facebook users with a wider selection of products while giving eBay sellers greater exposure.

The move follows a hefty $840 million fine imposed by the European Commission in November over alleged anticompetitive practices related to Facebook Marketplace. While Meta continues to appeal the decision, it says it is working to address regulators’ concerns. The European Commission has yet to comment on the latest development.

Meta’s partnership with eBay reflects broader efforts by tech companies to expand online marketplaces and enhance user experience. The initiative is expected to benefit both buyers and sellers by increasing reach and streamlining access to listings.

EU denies censorship claims made by Meta

The European Commission has rejected accusations from Meta CEO Mark Zuckerberg that European Union laws censor social media, saying regulations only target illegal content. Officials clarified that platforms are required to remove posts deemed harmful to children or democracy, not lawful content.

Zuckerberg recently criticised EU regulations, claiming they stifle innovation and institutionalise censorship. In response, the Commission strongly denied the claims, emphasising its Digital Services Act does not impose censorship but ensures public safety through content regulation.

Meta has decided to end fact-checking in the US for Facebook, Instagram and Threads, opting for a ‘community notes’ system. The system allows users to highlight misleading posts, with notes published if diverse contributors agree they are helpful.

The EU confirmed that such a system could be acceptable in Europe if platforms submit risk assessments and demonstrate effectiveness in content moderation. Independent fact-checking of US-based content will remain available to European users.

EU Court orders damages for data breach by Commission

In a landmark decision, the EU General Court ruled on Wednesday that the European Commission must pay €400 ($412) in damages to a German citizen for violating data protection laws. The case marks the first time the Commission has been held liable for failing to comply with its own data protection rules.

The court found that the Commission improperly transferred the citizen’s personal data, including an IP address, to Meta Platforms in the United States without adequate safeguards. The breach occurred when the individual used the ‘Sign in with Facebook’ option on the EU login webpage to register for a conference.

The Commission acknowledged the ruling, stating it would review the judgment and its implications. The decision underscores the robust enforcement of the EU’s General Data Protection Regulation (GDPR), which has led to significant penalties against major firms like Meta, LinkedIn, and Klarna for non-compliance.

Apple denies misusing Siri data following $95 million settlement

Apple has clarified that it has never sold data collected by its Siri voice assistant or used it to create marketing profiles. The statement, issued Wednesday, follows a $95 million settlement last week to resolve a class action lawsuit alleging that Siri had inadvertently recorded private conversations and shared them with third parties, including advertisers. Apple denied the claims and admitted no wrongdoing as part of the settlement, which could result in payouts of up to $20 per Siri-enabled device for millions of affected customers.

The controversy stemmed from claims that Siri sometimes activated unintentionally, recording sensitive interactions. Apple emphasised in its statement that Siri data is used minimally and only for real-time server input when necessary, with no retention of audio recordings unless users explicitly opt in. Even in such cases, the recordings are used solely to improve Siri’s functionality. Apple reaffirmed its commitment to privacy, stating, ‘Apple has never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone.’

This case has drawn attention alongside a similar lawsuit targeting Google’s Voice Assistant, currently pending in federal court in San Jose, California. Both lawsuits are spearheaded by the same legal teams, highlighting growing scrutiny over how tech companies handle voice assistant data.

Brazil warns tech firms to follow laws or face expulsion

Brazilian Supreme Court Judge Alexandre de Moraes reiterated on Wednesday that technology companies must comply with national laws to continue operating in the country. His statement followed Meta’s recent announcement that it would scale back its US fact-checking program, raising concerns about the impact on Brazil.

Speaking at an event marking the anniversary of anti-institution riots, Moraes emphasised that the court would not tolerate the use of hate speech for profit. Last year, he ordered the suspension of social media platform X for over a month due to its failure to moderate hate speech, a decision later upheld by the court. X owner Elon Musk criticised the move as censorship but ultimately complied with court demands to restore the platform’s services in Brazil.

Brazilian prosecutors have also asked Meta to clarify whether its US fact-checking changes will apply in Brazil, citing an ongoing investigation into social media platforms’ efforts to combat misinformation and violence. Meta has been given 30 days to respond but declined to comment through its local office.

Google TV introduces AI-powered news summaries with Gemini

Google has announced a major update to its TV operating system at CES 2025, integrating its Gemini AI assistant to deliver personalised news summaries. The new ‘News Brief’ feature will scrape news articles and YouTube headlines from trusted sources to generate a concise recap of daily events. Google plans to roll out the feature to both new and existing Google TV devices by late 2025.

The move marks Google’s deeper foray into AI-generated news, a space that has faced legal challenges from media companies over copyright concerns. While rival firms like OpenAI and Microsoft have been sued over unlicensed content use, Google’s News Brief does not currently display its sources, apart from related YouTube videos. AI-generated news has also faced accuracy issues, with previous AI models producing misleading or entirely false headlines.

Beyond news summaries, Google aims to make TVs more interactive, with Gemini allowing users to search for films, shows, and YouTube videos using natural language. Future Google TVs will include sensors to detect when users enter the room, enabling a more personalised experience. As the company continues expanding AI features in consumer technology, the success of News Brief may depend on how well it addresses content accuracy and transparency concerns.

Spain urges neutrality from social media platforms

The Spanish government stressed that social media platforms must remain neutral and avoid interfering in political matters. The statement came after X’s owner, Elon Musk, commented on crime data involving foreigners in Catalonia.

Government spokesperson Pilar Alegria emphasised the need for absolute impartiality from such platforms when responding to questions about Musk’s remarks and his ongoing disagreements with European leaders like Keir Starmer and Emmanuel Macron.

Musk had reposted crime statistics from a Spanish newspaper, leading to criticism from Catalan officials. Catalonia’s Socialist leader Salvador Illa warned against using the region’s name to promote hate speech, while Spanish Prime Minister Pedro Sanchez rejected any link between immigration and crime rates.

The Spanish Interior Ministry previously reported stable or declining crime rates, affirming that immigration has no significant impact on criminal activity.

Meta ends fact-checking program in the US

Meta Platforms has announced the termination of its US fact-checking program and eased restrictions on politically charged topics such as immigration and gender identity. The decision, which affects Facebook, Instagram, and Threads, marks a significant shift in the company’s content moderation strategy. CEO Mark Zuckerberg framed the move as a return to ‘free expression,’ citing recent US elections as a cultural tipping point. The changes come as Meta seeks to build rapport with the incoming Trump administration.

In place of fact-checking, Meta plans to adopt a ‘Community Notes’ system, similar to that used by Elon Musk’s platform X. The company will also scale back proactive monitoring of hate speech, relying instead on user reports, while continuing to address high-severity violations like terrorism and scams. Meta is also relocating some policy teams from California to other states, signalling a broader operational shift. The decision follows the promotion of Republican policy executive Joel Kaplan to head of global affairs and the appointment of Trump ally Dana White to Meta’s board.

The move has sparked criticism from fact-checking organisations and free speech advocates. Angie Drobnic Holan, head of the International Fact-Checking Network, pushed back against Zuckerberg’s claims of bias, asserting that fact-checkers provide context rather than censorship. Critics, including the Centre for Information Resilience, warn that the policy rollback could exacerbate disinformation. For now, the changes will apply only to the US, with Meta maintaining its fact-checking operations in regions like the European Union, where stricter tech regulations are in place.

As Meta rolls out its ‘Community Notes’ system, global scrutiny is expected to intensify. The European Commission, already investigating Musk’s X over similar practices, noted Meta’s announcement and emphasised compliance with the EU’s Digital Services Act, which mandates robust content regulation. While Meta navigates a complex regulatory and political landscape, the impact of its new policies on disinformation and public trust remains uncertain.

US Supreme Court to decide TikTok’s fate amid ban fears

The future of TikTok in the United States hangs in the balance as the Supreme Court prepares to hear arguments on 10 January over a law that could force the app to sever ties with its Chinese parent company, ByteDance, or face a ban. The case centres on whether the law violates the First Amendment, with TikTok and its creators arguing that it does, while the US government maintains that national security concerns justify the measure. If the government wins, TikTok has stated it would shut down its US operations by 19 January.

Creators who rely on TikTok for income are bracing for uncertainty. Many have taken to the platform to express their frustrations, fearing disruption to their businesses and online communities. Some are already diversifying their presence on other platforms like Instagram and YouTube, though they acknowledge TikTok’s unique algorithm has provided visibility and opportunities not found elsewhere. Industry experts believe many creators are adopting a wait-and-see approach, avoiding drastic moves until the Supreme Court reaches a decision.

The Biden administration has pushed for a resolution without success, while President-elect Donald Trump has asked the court to delay the ban so he can weigh in once in office. If the ban proceeds, app stores and internet providers will be required to stop supporting TikTok, ultimately rendering it unusable. TikTok has warned that even a temporary shutdown could lead to a sharp decline in users, potentially causing lasting damage to the platform. A ruling from the Supreme Court is expected in the coming weeks.

TikTok faces new allegations of child exploitation

TikTok is under heightened scrutiny following newly unsealed allegations from a Utah lawsuit claiming the platform knowingly allowed harmful activities, including child exploitation and sexual misconduct, to persist on its livestreaming feature, TikTok Live. According to the lawsuit, TikTok disregarded the issue because it ‘profited significantly’ from these livestreams. The revelations come as the app faces a potential nationwide ban in the US unless its parent company, ByteDance, divests ownership.

The complaint, filed by Utah’s Division of Consumer Protection in June, accuses TikTok Live of functioning as a ‘virtual strip club,’ connecting minors with adult predators in real time. Internal documents and investigations, including the Project Meramec and Project Jupiter probes, revealed that TikTok was aware of the dangers. The findings indicate that hundreds of thousands of minors bypassed age restrictions and were allegedly groomed by adults to perform explicit acts in exchange for virtual gifts. The probes also uncovered criminal activities such as money laundering and drug sales facilitated through TikTok Live.

TikTok has defended itself, claiming it prioritises user safety and accusing the lawsuit of distorting facts by selectively quoting outdated internal documents. A spokesperson emphasised the platform’s ‘proactive measures’ to support community safety and dismissed the allegations as misleading. However, the unsealed material from the case, released by Utah Judge Coral Sanchez, paints a stark picture of TikTok Live’s risks to minors.

This lawsuit is not an isolated case. In October, 13 US states and Washington, D.C., filed a bipartisan lawsuit accusing TikTok of exploiting children and fostering addiction to the app. Utah Attorney General Sean Reyes called social media a pervasive tool for exploiting America’s youth and welcomed the disclosure of TikTok’s internal communications as critical evidence for demonstrating the platform’s culpability.

Why does it matter?

The controversy unfolds amid ongoing national security concerns about TikTok’s ties to China. President Joe Biden signed legislation authorising a TikTok ban last April, citing risks that the app could share sensitive data with the Chinese government. The US Supreme Court is set to hear arguments on the ban on 10 January, with a decision expected shortly thereafter. The case underscores the intensifying debate over social media’s role in safeguarding users while balancing innovation and accountability.