Germany weighs exit from X over algorithm concerns

The German government is debating whether to shut down its accounts on X over concerns that the platform’s algorithms encourage polarisation rather than balanced discourse. A government spokesperson confirmed that discussions are ongoing but noted that remaining on X allows the government to reach a wide audience.

Elon Musk’s growing support for far-right and anti-establishment parties in Europe has intensified scrutiny of the platform. His recent endorsement of the AfD, a German party classified as extremist by the country’s security services, has drawn particular criticism. Several German institutions, including universities and trade unions, have already left X in protest.

Government officials insist that their concerns about X are not directly linked to Musk’s political involvement but to broader questions about the platform’s influence on public discourse. Enforcing the platform’s compliance with European regulations, particularly in the lead-up to elections, remains a matter for Brussels.

China to tighten oversight of online platforms and livestream e-commerce

China’s State Administration for Market Regulation (SAMR) announced plans to strengthen regulations on online platforms and the growing livestream e-commerce sector. The move aims to foster fair competition, protect smaller businesses, and improve consumer trust, according to SAMR Deputy Head Shu Wei.

At a press briefing, Shu highlighted plans to enhance transparency, reduce merchants’ operational costs, and address platform practices that disrupt fair competition. The regulator aims to improve existing frameworks to safeguard merchants’ and consumers’ rights against the abuse of platform rules.

The SAMR also intends to crack down on deceptive marketing in livestream e-commerce, a fast-growing sector that has faced criticism for misleading tactics. The initiative is expected to curb dishonest practices while fostering a healthier, more balanced market environment.

Lemon8 gains popularity amid TikTok uncertainty

As the possibility of a US TikTok ban looms, social media influencers are increasingly turning to Lemon8, a newer app owned by TikTok’s parent company, ByteDance, as a potential alternative. Lemon8, which launched in the US and UK in 2023, blends elements of Instagram and Pinterest, offering a “lifestyle community” built around aesthetically pleasing images and videos on topics such as beauty, fashion, food, travel, and pets. With over 1 million daily active users in the US, it has quickly gained traction, especially among Gen Z users.

Influencers are particularly drawn to Lemon8’s integration with TikTok, which lets creators cross-post easily and boost engagement. The platform’s future, however, remains uncertain. Because it is also owned by ByteDance, Lemon8 could fall under the same US law that requires the company to divest from TikTok or face a ban. That uncertainty is unsettling for creators who fear losing their primary platform, even as the alternative they are flocking to faces the same risk.

The app itself is gaining attention for its simplicity and visual appeal. Lemon8 stands out by offering a quieter, less chaotic environment than the bustling, ad-heavy feeds of Instagram and TikTok. Its interface is designed for easy scrolling, and the app encourages creativity with tools for adding text overlays, stickers, and music that give posts an inspirational feel. While it’s still early days, Lemon8 offers a nostalgic, aesthetically curated space for users who may be growing weary of the larger social media giants.

It could provide a refreshing change from the current social media landscape, where content often feels oversaturated or overly commercialised. For now, Lemon8 offers a simpler, more intentional way to engage with online content, a return to a more “authentic” era of social media reminiscent of early Instagram. Whether it will succeed in the long term remains to be seen, but it is carving out a niche for users seeking a quieter digital space.

Supreme Court weighs TikTok ban amid national security concerns

The US Supreme Court on Friday appeared inclined to uphold a law requiring a sale or ban of TikTok in the United States by January 19, citing national security risks tied to its Chinese parent company, ByteDance. Justices questioned TikTok’s potential role in enabling the Chinese government to collect data on its 170 million American users and influence public opinion covertly. Chief Justice John Roberts and others expressed concerns about China’s potential to exploit the platform, while also probing implications for free speech protections under the First Amendment.

The law, passed with bipartisan support and signed by outgoing President Joe Biden, has been challenged by TikTok, ByteDance, and app users, who argue that it infringes on free speech. TikTok’s lawyer, Noel Francisco, warned that without a resolution or an extension from President-elect Donald Trump, the platform would likely shut down on January 19. Francisco emphasised TikTok’s role as a key platform for expression and urged the court to at least temporarily block the law.

Liberal and conservative justices alike acknowledged the tension between national security and constitutional rights. Justice Elena Kagan raised historical parallels to Cold War-era restrictions, while Justice Brett Kavanaugh highlighted the long-term risks of data collection. Solicitor General Elizabeth Prelogar, representing the Biden administration, argued that TikTok’s foreign ownership poses a grave threat, enabling covert manipulation and espionage. She defended Congress’s right to act in the interest of national security.

With global trade tensions and fears of digital surveillance mounting, the Supreme Court’s decision will have wide-ranging implications for technology, free speech, and US-China relations. The court is now considering whether to grant a temporary stay, providing Trump’s incoming administration an opportunity to address the issue politically.

Tokyo plans to expose makers of malicious AI systems

The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.

The proposed bill, expected to be submitted to parliament soon, would focus on gathering information about harmful AI activity and encouraging developers to cooperate with government investigations. It would stop short of imposing penalties on offenders, however, amid concerns that harsh measures might discourage AI innovation.

Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as preventing access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will decide what constitutes a “malicious” AI system and the potential impact on freedom of expression.

Regulators weigh in on Musk’s lawsuit against OpenAI and Microsoft

US antitrust regulators have weighed in on Elon Musk’s lawsuit accusing OpenAI and Microsoft of anticompetitive practices. While not taking a formal stance, the Federal Trade Commission (FTC) and Department of Justice (DOJ) highlighted key legal doctrines relevant to Musk’s claims ahead of a court hearing in Oakland, California. Musk, who co-founded OpenAI and now leads the AI startup xAI, accuses OpenAI of enforcing restrictive agreements and sharing board members with Microsoft in order to stifle competition.

The lawsuit also claims OpenAI orchestrated an investor boycott against rivals. The regulators noted that such boycotts can be legally actionable even if the alleged organiser did not take part in the boycott itself. OpenAI has denied these allegations, labelling them baseless harassment. Meanwhile, the FTC is conducting a broader probe into AI partnerships, including those between Microsoft and OpenAI, to assess potential antitrust violations.

Microsoft declined to comment on the case, while OpenAI pointed to earlier court filings rebutting Musk’s claims. The FTC and DOJ stressed, however, that even former board members, such as Reid Hoffman, could retain competitively sensitive information, reinforcing Musk’s concerns about anticompetitive practices.

Musk’s legal team sees the regulators’ involvement as validation of the seriousness of the case, underscoring the heightened scrutiny around AI collaborations and their impact on competition.

Meta accused of using pirated books for AI

A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.

The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.

The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.

US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.

Meta pushes free speech at the cost of content control

Meta has announced that Instagram and Threads users will no longer be able to opt out of seeing political content from accounts they don’t follow. The change, part of a broader push to promote “free expression,” will take effect in the US this week and expand globally soon after. Users will be able to adjust how much political content they see but won’t be able to block it entirely.

Adam Mosseri, head of Instagram and Threads, had previously expressed reluctance to feature political posts, favouring community-focused content like sports and fashion. However, he now claims that users have asked to see more political material. Critics, including social media experts, argue the shift is driven by changing political dynamics in the US, particularly with Donald Trump’s imminent return to the White House.

While some users have welcomed Meta’s stance on free speech, many worry that it could amplify misinformation and hate speech. Experts also caution that marginalised groups may face increased harm as content moderation is scaled back. The changes could also push discontented users toward rival platforms such as Bluesky, raising questions about Meta’s long-term strategy.

Brazil’s Lula criticises Meta’s move to end US fact-checking program

Brazilian President Luiz Inácio Lula da Silva has condemned Meta’s decision to discontinue its fact-checking program in the United States, calling it a grave issue. Speaking in Brasília on Thursday, Lula emphasised the need for accountability in digital communication, arguing that it carries responsibilities comparable to those of traditional media. He announced plans to meet with government officials to discuss the matter.

Meta’s recent decision has prompted Brazilian prosecutors to seek clarification on whether the changes will affect the country. The company has been given 30 days to respond as part of an ongoing investigation into how social media platforms address misinformation and online violence in Brazil.

Justice Alexandre de Moraes of Brazil’s Supreme Court, known for his strict oversight of tech companies, reiterated that social media firms must adhere to Brazilian laws to continue operating in the country. Last year, he temporarily suspended X (formerly Twitter) over non-compliance with local regulations.

Meta has so far declined to comment on the matter in Brazil, fuelling concerns over its commitment to tackling misinformation globally. The outcome of Brazil’s inquiry could have broader implications for how tech firms balance local laws with global policy changes.

Google introduces AI-powered ‘Daily Listen’ podcast feature

Google is testing a new feature called “Daily Listen,” which generates personalised AI-powered podcasts based on users’ Discover feeds. The feature, currently rolling out to US users in the Search Labs experiment, provides a five-minute audio summary of topics tailored to individual interests. Each podcast includes links to related stories, allowing listeners to explore subjects in greater depth.

The experience is integrated with Google’s Discover and Search tools, using followed topics to refine content recommendations. Daily Listen functions similarly to NotebookLM’s Audio Overviews, which create AI-generated audio summaries based on shared documents. Users who have access to the feature will see a “Daily Listen” card on their Google app’s home screen, displaying a play button and episode length.

Once playback begins, the podcast is accompanied by a rolling transcript, offering a seamless blend of text and audio. Google aims to enhance how users consume news and stay informed, making the experience more interactive and personalised. The feature reflects the company’s ongoing push into AI-driven content delivery.