Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward, with a federal judge ruling that the case will continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

For more information on these topics, visit diplomacy.edu.

NHS looks into Medefer data flaw after security concerns

The NHS is investigating allegations that a software flaw at private medical services company Medefer left patient data vulnerable to hacking.

The flaw, discovered in November, affected Medefer’s internal patient record system in the UK, which handles 1,500 NHS referrals monthly.

A software engineer who found the issue believes the vulnerability may have existed for six years, but Medefer denies this claim, stating no data has been compromised.

The engineer discovered that unprotected application programming interfaces (APIs) could have allowed outsiders to access sensitive patient information.
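The flaw described is a common class of API vulnerability: an endpoint that returns records without first verifying the caller's credentials. A minimal sketch of the pattern, using entirely hypothetical names and data (nothing here comes from Medefer's actual system):

```python
# Illustrative sketch only: all names, tokens, and records are invented.
# The vulnerable pattern is an API handler that returns sensitive data
# without any authentication check; the fixed pattern requires a valid
# credential before releasing the record.

PATIENT_RECORDS = {"ref-1001": {"name": "Jane Doe", "referral": "cardiology"}}
VALID_TOKENS = {"clinician-token-abc"}

def get_record_insecure(record_id):
    # Vulnerable: returns patient data with no check on who is asking.
    return PATIENT_RECORDS.get(record_id)

def get_record_secure(record_id, auth_token):
    # Fixed: reject any request that lacks a recognised credential.
    if auth_token not in VALID_TOKENS:
        raise PermissionError("missing or invalid credential")
    return PATIENT_RECORDS.get(record_id)
```

In a real deployment the credential check would typically be enforced by the web framework or an API gateway, but the principle is the same: every endpoint that serves sensitive data must authenticate and authorise the caller.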

While Medefer insists there is no evidence of any breach, it has commissioned an external security agency to review its systems. The agency confirmed that no breach was found, and the company asserts that the flaw was fixed within 48 hours of being discovered.

Cybersecurity experts have raised concerns about the potential risks posed by the flaw, emphasising that a proper investigation should have been conducted immediately.

Medefer reported the issue to the Information Commissioner’s Office (ICO) and the Care Quality Commission (CQC), both of which found no further action necessary. However, experts suggest that a more thorough response could have been beneficial given the sensitive nature of the data involved.

Labour probe launched into Scale AI’s pay and working conditions

The United States Department of Labor is investigating Scale AI, a data labelling startup backed by Nvidia, Amazon, and Meta, for its compliance with fair pay and working conditions under the Fair Labor Standards Act.

The inquiry began nearly a year ago during Joe Biden’s presidency, with officials examining whether the company meets federal labour regulations. Scale AI has been cooperating with the department to clarify its business practices and the evolving nature of the AI sector.

Founded in 2016, Scale AI plays a crucial role in training advanced AI models by providing accurately labelled data. The company also operates a platform where researchers exchange AI-related insights, with contributors spanning over 9,000 locations worldwide.

In response to the investigation, a company spokesperson stated that the majority of payments to contributors are made on time, with 90% of payment-related inquiries resolved within three days.

Valued at $14 billion following a late-stage funding round last year, Scale AI serves major clients such as OpenAI, Cohere, Microsoft, and Morgan Stanley.

The company insists that contributor feedback is overwhelmingly positive and maintains that it prioritises fair pay and support for its workforce.

Fast-delivery firms face antitrust scrutiny in India

Fast-delivery giants Zomato, Swiggy, and Zepto are facing an antitrust investigation in India over allegations of deep discounting practices that harm smaller retailers.

The All India Consumer Products Distributors Federation (AICPDF), which represents 400,000 distributors, has filed a case with the Competition Commission of India (CCI) to examine the business practices of these companies.

They claim that the discounting strategies of these platforms result in unfair pricing models that harm traditional retailers.

The quick-commerce sector in India, where products are delivered within minutes from local warehouses, has grown rapidly in recent years. However, this growth has come at the expense of brick-and-mortar stores, which cannot match the discounts offered by online platforms.

A recent survey showed a significant shift in consumer behaviour, with many shoppers reducing their purchases from supermarkets and independent stores due to the appeal of fast-delivery options.

The filing by the AICPDF, which has reviewed the pricing of several popular products, accuses companies like Zepto, Swiggy’s Instamart, and Zomato’s Blinkit of offering products at prices significantly lower than those available in traditional stores.

This pricing has raised concerns about the long-term impact on local businesses. The CCI is now set to review the case, which may result in a formal investigation.

As India’s quick-commerce market continues to grow, estimated to reach $35 billion by 2030, the regulatory scrutiny of this sector is intensifying. The outcome of this case could shape the future of the industry, especially as companies like Zepto and Swiggy prepare for further expansion.

Google acknowledges AI being used for harmful content

Google has reported receiving over 250 complaints globally about its AI software being used to create deepfake terrorist content, according to Australia’s eSafety Commission.

The tech giant also acknowledged dozens of user reports alleging that its AI program, Gemini, was being exploited to generate child abuse material. Under Australian law, companies must provide regular updates on their efforts to minimise harm or risk hefty fines.

The eSafety Commission described Google’s disclosure as a ‘world-first insight’ into how AI tools may be misused to produce harmful and illegal content.

Between April 2023 and February 2024, Google received 258 reports of suspected AI-generated extremist material and 86 related to child exploitation. However, the company did not specify how many of these reports were verified.

A Google spokesperson stated that the company strictly prohibits AI-generated content related to terrorism, child abuse, and other illegal activities.

While it uses automated detection to remove AI-generated child exploitation material, the same system is not applied to extremist content.

Meanwhile, the regulator has previously fined platforms like X (formerly Twitter) and Telegram for failing to meet reporting requirements, with both companies planning to appeal.

UK artists raise alarm over AI law proposals

A new proposal by the UK government to alter copyright laws has sparked significant concern among artists, particularly in Devon. The changes would allow AI companies to use content found on the internet, including artwork, to help train their models unless the creators opt out. Artists like Sarah McIntyre, an illustrator from Bovey Tracey, argue that such a shift could undermine their rights, making it harder for them to control the use of their work and potentially depriving them of income.

The Devon Artist Network has expressed strong opposition to these plans, warning that they could have a devastating impact on creative industries. They believe that creators should retain control over their work, without needing to actively opt out of its use by AI. While some, like Mike Phillips from the University of Plymouth in the UK, suggest that AI could help artists track copyright violations, the majority of artists remain wary of the proposed changes.

The Department for Science, Innovation and Technology has acknowledged the concerns and confirmed that no decisions have yet been made. However, it has stated that the current copyright framework is limiting the potential of both the creative and AI sectors. As consultations close, the future of the proposal remains uncertain.

Google warns breakup plans could harm economy and security

Google has urged the Trump administration to reconsider efforts to break up the company as part of ongoing antitrust lawsuits.

The meeting with government officials took place last week, according to a source familiar with the matter. The United States Department of Justice (DOJ) is pursuing two cases against Google, focusing on its dominance in search and advertising technology.

Executives at Google have expressed concerns that proposed remedies, including the potential divestment of the Chrome browser and changes to search engine agreements, could negatively impact the American economy and national security.

The DOJ has not yet commented on the discussions. A trial to determine appropriate remedies is set for April, with a final ruling expected in August.

President Trump’s administration is expected to take a softer approach to antitrust enforcement than its predecessor did.

Industry experts believe this could lead to adjustments in the DOJ’s stance on breaking up Google, potentially reshaping the legal battle over its market power.

Aylo Holdings faces legal pressure over privacy concerns

Canada’s privacy commissioner has launched legal action against Aylo Holdings, the Montreal-based operator of Pornhub and other adult websites, for failing to ensure consent from individuals featured in uploaded content.

Commissioner Philippe Dufresne said Aylo had not adequately addressed concerns raised in an earlier investigation, which found the company allowed intimate images to be shared without the direct permission of those depicted.

A Federal Court order is being sought to enforce compliance with privacy laws in Canada. Aylo Holdings has denied violating privacy laws and expressed disappointment at the legal action.

The company claims it has been in ongoing discussions with regulators and has implemented significant measures to prevent non-consensual content from being shared. These include mandatory uploader verification, proof of consent for all participants, stricter moderation, and banning content downloads.

The case stems from a complaint by a woman whose ex-boyfriend uploaded intimate images of her without her consent.

Although Aylo says the incident occurred in 2015 and policies have since improved, the privacy commissioner insists that stronger enforcement is needed. The legal battle could have significant implications for content moderation policies in the adult entertainment industry.

Kraken wins legal battle as SEC ends registration lawsuit

The US Securities and Exchange Commission (SEC) has agreed to dismiss its lawsuit against cryptocurrency exchange Kraken, marking a significant shift in regulatory oversight under the new administration.

Kraken, which was accused of operating as an unregistered securities exchange, announced that the case was dismissed with prejudice, meaning it cannot be refiled. The company maintained that the lawsuit was politically motivated and hindered innovation in the crypto sector.

Kraken stated that the dismissal involved no admission of wrongdoing, no penalties, and no required changes to its business model.

The SEC had sued Kraken in 2023 as part of a broader crackdown on crypto firms under former SEC Chair Gary Gensler. However, the regulator has since scaled back its enforcement efforts, also ending a similar case against Coinbase and considering a resolution in its fraud case against entrepreneur Justin Sun.

The decision follows United States President Donald Trump’s appointment of Paul Atkins, a lawyer with a pro-crypto stance, to lead the SEC. Kraken remains one of the world’s largest cryptocurrency exchanges, ranking 10th globally in trading volume and liquidity.

The outcome signals a shift in the regulatory landscape, with growing support for digital assets under the current administration.

UK regulator sets deadline for assessing online content risks

Britain’s media regulator, Ofcom, has set a 31 March deadline for social media and online platforms to submit a risk assessment on the likelihood of users encountering illegal content. This move follows new laws passed last year requiring companies such as Meta’s Facebook and Instagram, as well as ByteDance’s TikTok, to take action against criminal activities on their platforms. Under the Online Safety Act, these firms must assess and address the risks of offences like terrorism, hate crimes, child sexual exploitation, and financial fraud.

The risk assessment must evaluate how likely it is for users to come across illegal content, or how user-to-user services could facilitate criminal activities. Ofcom has warned that failure to meet the deadline could result in enforcement actions against the companies. The new regulations aim to make online platforms safer and hold them accountable for the content shared on their sites.

The deadline is part of the UK’s broader push to regulate online content and enhance user safety. Social media giants are now facing stricter scrutiny to ensure they are addressing potential risks associated with their platforms and protecting users from harmful content.
