Tokyo plans to expose makers of malicious AI systems

The Japanese government is considering publicly disclosing the names of developers behind malicious artificial intelligence systems as part of efforts to combat disinformation and cyberattacks. The move, aimed at ensuring accountability, follows a government panel’s recommendation that stricter legal frameworks are necessary to prevent AI misuse.

The proposed bill, expected to be submitted to parliament soon, would focus on gathering information about harmful AI activities and encouraging developers to cooperate with government investigations. However, it would stop short of imposing penalties on offenders, amid concerns that harsh measures could discourage AI innovation.

Japan’s government may also share its findings with the public if harmful AI systems cause significant damage, such as preventing access to vital public services. While the bill aims to balance innovation with public safety, questions remain about how the government will decide what constitutes a “malicious” AI system and the potential impact on freedom of expression.

Regulators weigh in on Musk’s lawsuit against OpenAI and Microsoft

US antitrust regulators have weighed in on Elon Musk’s lawsuit against OpenAI and Microsoft, which alleges anticompetitive practices. While stopping short of taking a formal stance, the Federal Trade Commission (FTC) and Department of Justice (DOJ) highlighted key legal doctrines supporting Musk’s claims ahead of a court hearing in Oakland, California. Musk, a co-founder of OpenAI who now leads the AI startup xAI, accuses OpenAI of enforcing restrictive agreements and sharing board members with Microsoft to stifle competition.

The lawsuit also claims OpenAI orchestrated an investor boycott against rivals. Regulators noted such boycotts are legally actionable, even if the alleged organiser isn’t directly involved. OpenAI has denied these allegations, labelling them baseless harassment. Meanwhile, the FTC is conducting a broader probe into AI partnerships, including those between Microsoft and OpenAI, to assess potential antitrust violations.

Microsoft declined to comment on the case, while OpenAI pointed to prior court filings rebutting Musk’s claims. The FTC and DOJ, however, stressed that even former board members, such as Reid Hoffman, could retain sensitive competitive information, reinforcing Musk’s concerns about anticompetitive practices.

Musk’s legal team sees the regulators’ involvement as validation of the seriousness of the case, underscoring the heightened scrutiny around AI collaborations and their impact on competition.

Meta accused of using pirated books for AI

A group of authors, including Ta-Nehisi Coates and Sarah Silverman, has accused Meta Platforms of using pirated books to train its AI systems with CEO Mark Zuckerberg’s approval. Newly disclosed court documents filed in California allege that Meta knowingly relied on the LibGen dataset, which contains millions of pirated works, to develop its large language model, Llama.

The lawsuit, initially filed in 2023, claims Meta infringed on copyright by using the authors’ works without permission. The authors argue that internal Meta communications reveal concerns within the company about the dataset’s legality, which were ultimately overruled. Meta has not yet responded to the latest allegations.

The case is one of several challenging the use of copyrighted materials to train AI systems. While defendants in similar lawsuits have cited fair use, the authors contend that newly uncovered evidence strengthens their claims. They have requested permission to file an updated complaint, adding computer fraud allegations and revisiting dismissed claims related to copyright management information.

US District Judge Vince Chhabria has allowed the authors to file an amended complaint but expressed doubts about the validity of some new claims. The outcome of the case could have broader implications for how AI companies utilise copyrighted content in training data.

Meta pushes free speech at the cost of content control

Meta has announced that Instagram and Threads users will no longer be able to opt out of seeing political content from accounts they don’t follow. The change, part of a broader push to promote “free expression,” will take effect in the US this week and expand globally soon after. Users will be able to adjust how much political content they see but won’t be able to block it entirely.

Adam Mosseri, head of Instagram and Threads, had previously expressed reluctance to feature political posts, favouring community-focused content like sports and fashion. However, he now claims that users have asked to see more political material. Critics, including social media experts, argue the shift is driven by changing political dynamics in the US, particularly with Donald Trump’s imminent return to the White House.

While some users have welcomed Meta’s stance on free speech, many worry it could amplify misinformation and hate speech. Experts also caution that marginalised groups may face increased harm due to fewer content moderation measures. The changes could also push discontented users toward rival platforms like Bluesky, raising questions about Meta’s long-term strategy.

Brazil’s Lula criticises Meta’s move to end US fact-checking program

Brazilian President Luiz Inácio Lula da Silva has condemned Meta’s decision to discontinue its fact-checking program in the United States, calling it a grave issue. Speaking in Brasília on Thursday, Lula emphasised the need for accountability in digital communication, equating its responsibilities to those of traditional media. He announced plans to meet with government officials to discuss the matter.

Meta’s recent decision has prompted Brazilian prosecutors to seek clarification on whether the changes will affect the country. The company has been given 30 days to respond as part of an ongoing investigation into how social media platforms address misinformation and online violence in Brazil.

Justice Alexandre de Moraes of Brazil’s Supreme Court, known for his strict oversight of tech companies, reiterated that social media firms must adhere to Brazilian laws to continue operating in the country. Last year, he temporarily suspended X (formerly Twitter) over non-compliance with local regulations.

Meta has so far declined to comment on the matter in Brazil, fuelling concerns over its commitment to tackling misinformation globally. The outcome of Brazil’s inquiry could have broader implications for how tech firms balance local laws with global policy changes.

Google introduces AI-powered ‘Daily Listen’ podcast feature

Google is testing a new feature called “Daily Listen,” which generates personalised AI-powered podcasts based on users’ Discover feeds. The feature, currently rolling out to US users in the Search Labs experiment, provides a five-minute audio summary of topics tailored to individual interests. Each podcast includes links to related stories, allowing listeners to explore subjects in greater depth.

The experience is integrated with Google’s Discover and Search tools, using followed topics to refine content recommendations. Daily Listen functions similarly to NotebookLM’s Audio Overviews, which create AI-generated audio summaries based on shared documents. Users who have access to the feature will see a “Daily Listen” card on their Google app’s home screen, displaying a play button and episode length.

Once an episode begins, it plays alongside a rolling transcript, offering a seamless blend of text and audio. Google aims to enhance how users consume news and stay informed, making the experience more interactive and personalised. The feature reflects the company’s ongoing push into AI-driven content delivery.

Frank McCourt’s Project Liberty proposes TikTok US buyout

Frank McCourt’s Project Liberty, along with a group of partners, has submitted a formal proposal to acquire TikTok’s US assets from ByteDance. The consortium announced its intentions just ahead of ByteDance’s January 19 deadline to sell the platform or face a ban under legislation signed by President Joe Biden in April.

The group says it has secured sufficient financial backing, drawing interest from private equity funds, family offices, and high-net-worth individuals, along with debt financing from a leading US bank. The proposed value of the deal has not been disclosed.

McCourt stated the goal is to keep TikTok accessible to millions of US users without relying on its current algorithm while preventing a ban. Efforts are underway to engage with ByteDance, President-elect Trump, and the incoming administration to finalise the deal.

Meta to test eBay integration on Facebook Marketplace

Meta is set to trial a new feature allowing users in Germany, France, and the United States to browse eBay listings directly on Facebook Marketplace. Transactions will still be completed on eBay’s platform, but the integration aims to provide Facebook users with a wider selection of products while giving eBay sellers greater exposure.

The move follows a hefty $840 million fine imposed by the European Commission in November over alleged anticompetitive practices related to Facebook Marketplace. While Meta continues to appeal the decision, it says it is working to address regulators’ concerns. The European Commission has yet to comment on the latest development.

Meta’s partnership with eBay reflects broader efforts by tech companies to expand online marketplaces and enhance user experience. The initiative is expected to benefit both buyers and sellers by increasing reach and streamlining access to listings.

EU denies censorship claims made by Meta

The European Commission has rejected accusations from Meta CEO Mark Zuckerberg that European Union laws censor social media, saying its regulations target only illegal content. Officials clarified that platforms are required to remove posts deemed harmful to children or to democratic processes, not lawful content.

Zuckerberg recently criticised EU regulations, claiming they stifle innovation and institutionalise censorship. In response, the Commission strongly denied the claims, emphasising its Digital Services Act does not impose censorship but ensures public safety through content regulation.

Meta has decided to end fact-checking in the US for Facebook, Instagram and Threads, opting for a ‘community notes’ system. The system allows users to highlight misleading posts, with notes published if diverse contributors agree they are helpful.

The EU confirmed that such a system could be acceptable in Europe, provided platforms submit risk assessments and demonstrate that their content moderation remains effective. In the meantime, independent fact-checking will remain in place for European users, including for US-based content.

EU Court orders damages for data breach by Commission

In a landmark decision, the EU General Court ruled on Wednesday that the European Commission must pay €400 ($412) in damages to a German citizen for violating its own data protection rules. The case marks the first time the Commission has been held liable for failing to comply with EU data protection law.

The court found that the Commission improperly transferred the citizen’s personal data, including an IP address, to Meta Platforms in the United States without adequate safeguards. The breach occurred when the individual used the ‘Sign in with Facebook’ option on the EU login webpage to register for a conference.

The Commission acknowledged the ruling, stating it would review the judgment and its implications. The decision underscores the robust enforcement of the EU’s General Data Protection Regulation (GDPR), which has led to significant penalties against major firms like Meta, LinkedIn, and Klarna for non-compliance.