Former Meta AI leaders launch Yutori with $15 million in funding

Two former Meta AI executives have secured $15 million in funding for Yutori, a San Francisco-based startup focused on developing AI personal assistants.

The funding round was led by Radical Ventures, with backing from prominent investors including AI pioneer Fei-Fei Li and Google DeepMind’s Jeff Dean.

Yutori aims to create autonomous AI agents capable of executing complex online tasks without human intervention. Unlike traditional chatbots, these AI assistants will handle real-world actions, from ordering food to managing travel plans, streamlining everyday digital interactions.

The company is also advancing post-training techniques to enhance AI models’ ability to navigate the web efficiently.

With a team of experts who previously worked on Meta’s AI projects, including the development of Llama 3 and Llama 4 models, Yutori is positioning itself at the forefront of AI-driven automation.

For more information on these topics, visit diplomacy.edu.

Meta’s use of pirated content in AI development raises legal and ethical challenges

In its quest to develop the Llama 3 AI model, Meta faced significant ethical and legal hurdles in sourcing the large volume of high-quality text required for AI training. The company evaluated legally licensing books and research papers but dismissed these options due to high costs and delays.

Internal discussions indicated a preference for maintaining legal flexibility by avoiding licensing constraints and pursuing a ‘fair use’ strategy. Consequently, Meta turned to Library Genesis (LibGen), a vast database of pirated books and papers, a move reportedly sanctioned by CEO Mark Zuckerberg.

That decision led to copyright-infringement lawsuits from authors, including Sarah Silverman and Junot Díaz, underlining the complexities of pirated content in AI development. Meta and OpenAI have defended their use of copyrighted materials by invoking ‘fair use’, arguing that their AI systems transform original works into new creations.

Despite this defence, the legality remains contentious, especially as Meta’s internal communications acknowledged the legal risks and outlined measures to reduce exposure, such as removing data marked as pirated.

The situation draws attention to broader issues in the publishing world, where expensive and restricted access to literature and research has fuelled the rise of piracy sites like LibGen and Sci-Hub. While providing wider access, these platforms threaten the sustainability of intellectual creation by bypassing compensation for authors and researchers.

The challenges facing Meta and other AI companies raise important questions about managing the flow of knowledge in the digital era. While LibGen and similar repositories democratise access, they undermine intellectual property rights, disrupting the balance between accessibility and the protection of creators’ contributions.

Instagram users react to Meta’s new AI experiment

Meta has come under fire once again, this time over a new AI experiment on Instagram that suggests comments for users. Some users accused the company of using AI to inflate engagement metrics, potentially misleading advertisers and diminishing authentic user interaction.

The feature, spotted by test users, involves a pencil icon next to the comment bar on Instagram posts. Tapping it generates suggested replies based on the image’s content.

Meta has confirmed the feature is in testing but did not reveal plans for a broader launch. The company stated that it is exploring ways to incorporate Meta AI across different parts of its apps, including feeds, comments, groups, and search.

Public reaction has been largely negative, with concerns that AI-generated comments could flood the platform with inauthentic conversations. Social media users voiced fears of fake interactions replacing genuine ones, and some accused Meta of deceiving advertisers through inflated statistics.

Comparisons to dystopian scenarios were common, as users questioned the future of online social spaces.

This isn’t the first time Meta has faced backlash for its AI ventures. Previous attempts included AI personas modelled on celebrities and diverse identities, which were criticised for being disingenuous and engineered by largely homogenous development teams.

The future of AI-generated comments on Instagram remains uncertain as scrutiny continues to mount.

Meta agrees to halt targeted ads in landmark UK privacy case

Meta, the owner of Facebook and Instagram, has agreed to stop targeting a UK citizen with personalised adverts as part of a settlement in a landmark privacy case.

The case, which avoided a high court trial, was brought in 2022 by human rights campaigner Tanya O’Carroll, who claimed Meta had violated UK data laws by processing her personal data for targeted advertising without her consent.

O’Carroll’s case received support from the UK’s data watchdog, the Information Commissioner’s Office (ICO), which stated that users have the right to opt out of targeted ads.

The settlement has been hailed as a victory for O’Carroll, with potential implications for millions of social media users in the UK. Meta, however, disputed the claims and said it was considering introducing a subscription model in the UK for users who want an advert-free version of its platforms.

The ICO’s stance in favour of privacy rights could prompt similar lawsuits in the future, as users are increasingly demanding control over how their data is used online.

O’Carroll argued that the case demonstrated the growing desire for more control over surveillance advertising and said that the ICO’s support could encourage more people to object to targeted ads.

Meta, which generates most of its revenue from advertising, emphasised that it took its privacy obligations seriously and was exploring the option of a paid, ad-free service for UK users.

FuriosaAI rejects $800m acquisition offer from Meta

FuriosaAI, a South Korean startup specialising in AI chips, has reportedly turned down an $800 million acquisition offer from Meta.

Instead of selling, FuriosaAI plans to continue developing its AI chips. The breakdown in negotiations was reportedly caused by disagreements over post-acquisition business strategy and organisational structure rather than by price.

Meta, which has been trying to reduce its reliance on Nvidia for chips specialised in training large language models (LLMs), unveiled its custom AI chips last year. The company also announced plans to invest up to $65 billion this year to support its AI initiatives.

FuriosaAI, founded in 2017 by June Paik, who previously worked at Samsung Electronics and AMD, has developed two AI chips—Warboy and Renegade (RNGD).

The startup is also in talks to raise approximately $48 million and is planning to launch the RNGD chips later this year, with LG AI Research already testing them for use in its AI infrastructure.

FuriosaAI’s decision to focus on expanding its chip production signals its confidence in competing with giants like Nvidia and AMD in the rapidly growing AI hardware market.

Whistle-blower claims Meta is hindering legislative engagement

Former Facebook executive turned whistle-blower Sarah Wynn-Williams says Meta is preventing her from speaking to lawmakers about her experiences at the company following the release of her memoir Careless People. Meta filed for emergency arbitration the day her book was published, claiming it violated a non-disparagement agreement she signed upon leaving.

An arbitrator then temporarily barred her from promoting the book or making any critical remarks about Meta. As a result, Wynn-Williams says she cannot respond to requests from US, UK, and EU lawmakers who want to speak with her about serious public interest issues raised in her memoir.

These include Meta’s alleged ties with the Chinese government and the platform’s impact on teenage girls. Her lawyers argue the arbitration order unfairly blocks her from contributing to ongoing investigations and legislative inquiries.

Meta maintains it does not intend to interfere with Wynn-Williams’ legal rights and insists the claims in her book are outdated or false. The company also points out that she can still file complaints with government agencies.

Wynn-Williams has filed whistle-blower complaints with the SEC and the Department of Justice. Her memoir, which describes internal controversies at Meta — including sexual harassment claims and the company’s ambitions in China — debuted on the New York Times best-seller list.

Despite Meta’s legal pushback, her legal team argues that silencing her voice is a disservice to the public and lawmakers working to address the social media giant’s influence and accountability.

Mark Zuckerberg confirms Llama’s soaring popularity

Meta’s open AI model family, Llama, has reached a significant milestone, surpassing 1 billion downloads, according to CEO Mark Zuckerberg. The announcement, made on Threads, highlights a rapid rise in adoption, with downloads increasing by 53% since December 2024. Llama powers Meta’s AI assistant across Facebook, Instagram, and WhatsApp, forming a crucial part of the company’s expanding AI ecosystem.

Despite its success, Llama has not been without controversy. Meta faces a lawsuit alleging the model was trained on copyrighted material without permission, while regulatory concerns have stalled its rollout in some European markets. Additionally, emerging competitors, such as China’s DeepSeek R1, have challenged Llama’s technological edge, prompting Meta to intensify its AI research efforts.

Looking ahead, Meta plans to launch several new Llama models, including those with advanced reasoning and multimodal capabilities. Zuckerberg has hinted at ‘agentic’ features, suggesting the AI could soon perform tasks autonomously. More details are expected at LlamaCon, Meta’s first AI developer conference, set for 29 April.

Meta cracks down on misinformation in Australia

Meta Platforms has announced new measures to combat misinformation and deepfakes in Australia ahead of the country’s upcoming national election.

The company’s independent fact-checking program, supported by Agence France-Presse and the Australian Associated Press, will detect and limit misleading content, while also removing any material that could incite violence or interfere with voting.

Deepfakes, AI-generated media designed to appear real, will also face stricter scrutiny. Meta stated that any content violating its policies would be removed or labelled as ‘altered’ to reduce its visibility.

Users sharing AI-generated content will be encouraged to disclose its origin, aiming to improve transparency.

Meta’s Australian policy follows similar strategies used in elections across India, the UK and the US.

The company is also navigating regulatory challenges in the country, including a proposed levy on big tech firms profiting from local news content and new requirements to enforce a ban on social media use by those under 16 by the end of the year.

Meta faces lawsuit in France over copyrighted AI training data

Leading French publishers and authors have filed a lawsuit against Meta, alleging the tech giant used their copyrighted content to train its artificial intelligence systems without permission.

The National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Society of Men of Letters (SGDL) argue that Meta’s actions constitute significant copyright infringement and economic ‘parasitism’. The complaint was lodged earlier this week in a Paris court.

This lawsuit is the first of its kind in France but follows a wave of similar actions in the US, where authors and visual artists are challenging the use of their works by companies like Meta to train AI models.

As the issue of AI-generated content continues to grow, these legal actions highlight the mounting concerns over how tech companies utilise vast amounts of copyrighted material without compensation or consent from creators.

Meta has developed an AI chip to cut reliance on Nvidia, Reuters reports

Meta, the owner of Facebook, Instagram, and WhatsApp, is testing its first in-house chip designed for training AI systems, sources told Reuters.

The social media giant has started a limited rollout of the chip, planning to scale up production if testing delivers positive results. The move represents a crucial step in Meta’s strategy to lessen dependence on external suppliers like Nvidia and lower substantial infrastructure costs.

The company has projected expenses between $114 billion and $119 billion for 2025, with up to $65 billion dedicated to AI infrastructure.

The chip, part of Meta’s Meta Training and Inference Accelerator (MTIA) series, is a dedicated AI accelerator, meaning it is specifically designed for AI tasks rather than general processing. This could make it more power-efficient than traditional GPUs.

Meta is collaborating with Taiwan-based chip manufacturer TSMC to produce the new hardware. The test phase follows Meta’s first ‘tape-out’ of the chip, a crucial milestone in silicon development where an initial design is sent to a chip factory.

However, this process is costly and time-consuming, with no guarantee of success, and any failure would require repeating the tape-out step.

Meta has previously faced setbacks in its custom chip development, including scrapping an earlier version of an inference chip after poor test results. However, the company has since used another MTIA chip for AI-powered recommendations on Facebook and Instagram.

The new training chip aims to first enhance recommendation systems before expanding to generative AI applications like the chatbot Meta AI.

Meta executives hope to implement their own chips for AI training by 2026, although the company continues to be one of Nvidia’s biggest customers, investing heavily in GPUs for its AI operations.

The development comes as AI researchers increasingly question whether scaling up large language models by adding more computing power will continue to drive progress. The recent emergence of more efficient AI models, such as those from Chinese startup DeepSeek, has intensified these debates.

While Nvidia remains a dominant force in AI hardware, fluctuating investor confidence and broader market concerns have caused turbulence in the company’s stock value.
