Mark Zuckerberg confirms Llama’s soaring popularity

Meta’s open AI model family, Llama, has reached a significant milestone, surpassing 1 billion downloads, according to CEO Mark Zuckerberg. The announcement, made on Threads, highlights a rapid rise in adoption, with downloads increasing by 53% since December 2024. Llama powers Meta’s AI assistant across Facebook, Instagram, and WhatsApp, forming a crucial part of the company’s expanding AI ecosystem.

Despite its success, Llama has not been without controversy. Meta faces a lawsuit alleging the model was trained on copyrighted material without permission, while regulatory concerns have stalled its rollout in some European markets. Additionally, emerging competitors, such as China’s DeepSeek R1, have challenged Llama’s technological edge, prompting Meta to intensify its AI research efforts.

Looking ahead, Meta plans to launch several new Llama models, including those with advanced reasoning and multimodal capabilities. Zuckerberg has hinted at ‘agentic’ features, suggesting the AI could soon perform tasks autonomously. More details are expected at LlamaCon, Meta’s first AI developer conference, set for 29 April.

UK Technology Secretary uses ChatGPT for advice on media and AI

Technology Secretary Peter Kyle has been using ChatGPT to seek advice on media appearances and to define technical terms related to his role.

His records, obtained by New Scientist through freedom of information laws, reveal that he asked the AI tool to recommend podcasts he should appear on and to explain terms such as ‘digital inclusion’ and ‘antimatter’.

ChatGPT suggested The Infinite Monkey Cage and The Naked Scientists due to their broad reach and scientific focus.

Kyle also inquired why small and medium-sized businesses in the UK have been slow to adopt AI. The chatbot pointed to factors such as a lack of awareness about government initiatives, funding limitations, and concerns over data protection regulations like GDPR.

While AI adoption remains a challenge, Prime Minister Sir Keir Starmer has praised its potential, arguing that the UK government should make greater use of AI to improve efficiency.

Despite Kyle’s enthusiasm for AI, he has faced criticism for allegedly prioritising the interests of Big Tech over Britain’s creative industries. Concerns have been raised over a proposed policy that could allow tech firms to train AI on copyrighted material without permission unless creators opt out.

His department defended his use of AI, saying the tool supplements rather than replaces expert advice from officials.

Meta faces lawsuit in France over copyrighted AI training data

Leading French publishers and authors have filed a lawsuit against Meta, alleging the tech giant used their copyrighted content to train its artificial intelligence systems without permission.

The National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Society of Men of Letters (SGDL) argue that Meta’s actions constitute significant copyright infringement and economic ‘parasitism.’ The complaint was lodged earlier this week in a Paris court.

This lawsuit is the first of its kind in France but follows a wave of similar actions in the US, where authors and visual artists are challenging the use of their works by companies like Meta to train AI models.

As disputes over AI training data continue to grow, these legal actions highlight mounting concerns over how tech companies use vast amounts of copyrighted material without compensation or consent from creators.

EU draft AI code faces industry pushback

The tech industry remains concerned about a newly released draft of the Code of Practice on General-Purpose Artificial Intelligence (GPAI), which aims to help AI providers comply with the EU’s AI Act.

The proposed rules, which cover transparency, copyright, risk assessment, and mitigation, have sparked significant debate, especially among copyright holders and publishers.

Industry representatives argue that the draft still presents serious issues, particularly regarding copyright obligations and external risk assessments, which they believe could hinder innovation.

Tech lobby groups, such as the CCIA and DOT Europe, have expressed dissatisfaction with the latest draft, highlighting that it continues to impose burdensome requirements beyond the scope of the original AI Act.

Notably, the mandatory third-party risk assessments both before and after deployment remain a point of contention. Despite some improvements in the new version, these provisions are seen as unnecessary and potentially damaging to the industry.

Copyright concerns remain central, with organisations like News Media Europe warning that the draft still fails to respect copyright law. They argue that expecting AI companies merely to make ‘best efforts’ not to use content without proper authorisation does not go far enough.

Additionally, the draft is criticised for failing to fully address fundamental rights risks, which, according to experts, should be a primary concern for AI model providers.

The draft is open for feedback until 30 March, with the final version expected in May. However, whether the European Commission will formalise the Code under the AI Act, which comes into full effect in 2027, remains uncertain.

Meanwhile, the issue of copyright and AI is also being closely examined by the European Parliament.

IBM triumphs in UK Court over trade secrets

IBM secured a legal victory in the UK on 10 March 2025, after the High Court ruled in its favour against LzLabs. The lawsuit, which IBM filed against the Swiss-based company and its owner, John Moores, centred on accusations of stealing trade secrets. IBM claimed LzLabs’ UK subsidiary, Winsopia, misused its mainframe computer licence to reverse-engineer IBM’s proprietary software.

The court sided with IBM, agreeing that Winsopia had violated the terms of its licence agreement. Judge Finola O’Farrell concluded that LzLabs and Moores had unlawfully facilitated these breaches. Although LzLabs argued that its software was developed independently over many years, the court rejected that defence.

This ruling is seen as a major win for IBM, reinforcing the value of its technological investments. The case, which will proceed to a hearing to determine potential damages, reflects the company’s commitment to protecting its intellectual property. LzLabs and Moores did not immediately comment on the decision.

Authors challenge Meta’s use of their books in AI training

A lawsuit filed by authors Richard Kadrey, Sarah Silverman, and Ta-Nehisi Coates against Meta has taken a significant step forward as a federal judge has ruled that the case will continue.

The authors allege that Meta used their books to train its Llama AI models without consent, violating their intellectual property rights.

They further claim that Meta intentionally removed copyright management information (CMI) from the works to conceal the alleged infringement.

Meta, however, defends its actions, arguing that the training of AI models qualifies as fair use and that the authors lack standing to sue.

Despite this, the judge allowed the lawsuit to move ahead, acknowledging that the authors’ claims suggest concrete injury, specifically regarding the removal of CMI to hide the use of copyrighted works.

While the lawsuit touches on several legal points, the judge dismissed claims related to the California Comprehensive Computer Data Access and Fraud Act, stating that there was no evidence of Meta accessing the authors’ computers or servers.

Meta’s defence team has continued to assert that the AI training practices were legally sound, though the ongoing case will likely provide more insight into the company’s stance on copyright.

The ruling adds to the growing list of copyright-related lawsuits involving AI models, including one filed by The New York Times against OpenAI. As the debate around AI and intellectual property rights intensifies, this case could set important precedents.

UK artists raise alarm over AI law proposals

A new proposal by the UK government to alter copyright laws has sparked significant concern among artists, particularly in Devon. The changes would allow AI companies to use content found on the internet, including artwork, to help train their models unless creators opt out. Artists like Sarah McIntyre, an illustrator from Bovey Tracey, argue that such a shift could undermine their rights, making it harder for them to control the use of their work and potentially depriving them of income.

The Devon Artist Network has expressed strong opposition to these plans, warning that they could have a devastating impact on creative industries. They believe that creators should retain control over their work, without needing to actively opt out of its use by AI. While some, like Mike Phillips from the University of Plymouth in the UK, suggest that AI could help artists track copyright violations, the majority of artists remain wary of the proposed changes.

The Department for Science, Innovation and Technology has acknowledged the concerns and confirmed that no decisions have yet been made. However, it has stated that the current copyright framework is limiting the potential of both the creative and AI sectors. As consultations close, the future of the proposal remains uncertain.

EU plans legislation for car data access

The European Commission is preparing to introduce legislation that would allow insurers, leasing firms, and repair shops greater access to vehicle data.

The proposed law, expected to be published later this year, responds to growing tensions between car service providers, automakers, and tech companies over the control and monetisation of valuable in-vehicle data.

Currently, vehicle data, ranging from driving habits to fuel efficiency, is not clearly defined in European law, leading to disputes over who owns it.

With the connected car market projected to be worth billions in the coming years, the Commission is stepping in to ensure that all sectors of the automotive industry can benefit from this data.

However, carmakers have expressed concerns, warning that the new law could compromise trade secrets and lead to misuse of sensitive information.

The debate has also highlighted fears about the dominance of Big Tech, with companies like Google and Apple already making inroads into car infotainment systems.

The Commission’s proposal could reshape the landscape by offering more equitable access to the data that is crucial for developing new products and services.

Microsoft executive says firms are lagging in AI adoption

Microsoft’s UK boss has warned that many companies are ‘stuck in neutral’ when it comes to AI, with a significant number of private and public sector organisations lacking any formal AI strategy. According to a Microsoft survey of nearly 1,500 senior leaders and 1,440 employees in the UK, more than half of executives report that their organisations have no official AI plan. Additionally, many recognise a growing productivity gap between employees using AI and those who are not.

Darren Hardman, Microsoft’s UK chief executive, stated that some companies are caught in the experimentation phase rather than fully deploying AI. Microsoft, a major backer of OpenAI, has been promoting AI deployment in workplaces through autonomous AI agents designed to perform tasks without human intervention. Early adopters, like consulting giant McKinsey, are already using AI agents for tasks such as scheduling meetings.

Hardman also discussed AI’s potential impact on jobs, citing a Tony Blair Institute estimate that AI could displace up to 3 million UK jobs, though the net loss is expected to be far lower as new roles are created. He compared AI’s transformative effect on the workplace to how the internet revolutionised retail, creating roles such as data analysts and social media managers. He also backed proposed UK copyright law reforms that would allow tech companies to use copyright-protected work to train AI models, arguing that the changes could drive economic growth and support AI development.

UK Court rules in favour of Lenovo in patent battle

Lenovo has won an appeal in a UK court that will allow it to secure a temporary licence for Ericsson’s patents, marking a significant development in the ongoing patent dispute between the two companies.

The case, which revolves around fair, reasonable, and non-discriminatory (FRAND) licensing terms for 4G and 5G wireless technology, has seen both companies take legal action in various countries, including the UK, Brazil, and the US.

In his ruling, Judge Richard Arnold determined that Ericsson had failed to act in good faith by pursuing legal claims in foreign courts despite Lenovo’s willingness to accept the FRAND terms set by the English courts.

He stated that, as a willing licensor, Ericsson should have agreed to an interim licence, under which Lenovo would pay a substantial sum to Ericsson. Lenovo’s Chief Legal Officer hailed the decision as a victory for transparency and fairness in global patent licensing.

The ruling follows Lenovo’s 2023 lawsuit against Ericsson in the UK, part of the broader dispute between the two companies over licensing terms for each other’s patents. Ericsson has yet to comment on the decision.
