Former Facebook executive says Meta misled over China

Former Facebook executive Sarah Wynn-Williams has accused Meta of compromising US national security to grow its business in China.

Testifying before the Senate Judiciary Committee, Wynn-Williams alleged that Meta executives misled employees, lawmakers, and the public about the company’s dealings with the Chinese Communist Party.

Wynn-Williams claimed Meta aimed to gain favour in Beijing while secretly pursuing an $18 billion venture there.

In her remarks, Wynn-Williams said Meta removed the Facebook account of Chinese dissident Guo Wengui under pressure from Beijing. While the company maintains the removal was due to violations of its policies, she framed it as part of a broader pattern of submission to Chinese demands.

She also accused Meta of ignoring security warnings linked to the proposed Pacific Light Cable Network, a project that could have allowed China access to United States user data. According to her, the plans were only halted after lawmakers intervened.

Meta has denied the claims, calling her testimony false and out of touch with reality. A spokesperson noted that the company does not operate in China and that Mark Zuckerberg’s interest in the market had long been public.

The allegations arrive days before Meta’s major antitrust trial, which could force the company to divest Instagram and WhatsApp.

For more information on these topics, visit diplomacy.edu.

LMArena tightens rules after Llama 4 incident

Meta has come under scrutiny after submitting a specially tuned version of its Llama 4 AI model to the LMArena leaderboard, sparking concerns about fair competition.

The ‘experimental’ version, dubbed Llama-4-Maverick-03-26-Experimental, ranked second in popularity, trailing only Google’s Gemini-2.5-Pro.

While Meta openly labelled the model as experimental, many users assumed it reflected the public release. Once the official version became available, users quickly noticed it lacked the expressive, emoji-filled responses seen in the leaderboard battles.

LMArena, a crowdsourced platform where users vote on chatbot responses, said Meta’s custom variant appeared optimised for human approval, possibly skewing the results.

The group released over 2,000 head-to-head matchups to back its claims, showing the experimental Llama 4 consistently offered longer, more engaging answers than the more concise public build.

In response, LMArena updated its policies to ensure greater transparency and stated that Meta’s use of the experimental model did not align with expectations for leaderboard submissions.

Meta defended its approach, stating the experimental model was designed to explore chat optimisation and was never hidden. While company executives denied any misconduct, including speculation around training on test data, they acknowledged inconsistent performance across platforms.

Meta’s GenAI chief Ahmad Al-Dahle said it would take time for all public implementations to stabilise and improve. Meanwhile, LMArena plans to add the official Llama 4 release to its leaderboard for more accurate evaluation going forward.

For more information on these topics, visit diplomacy.edu.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, and tagging and mentions are similarly limited.

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing screen-time reminders that prompt teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety becomes a growing priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

For more information on these topics, visit diplomacy.edu.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, which placed it just beneath Gemini 2.5 Pro rather than trailing behind the usual leaders.
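
Arena-style leaderboards derive such scores from thousands of pairwise votes, typically with Elo-style or Bradley-Terry models fitted over all matchups. As a rough illustration only, the short Python sketch below shows the classic per-match Elo update; the K-factor and the opponent rating are hypothetical values, not LMArena’s actual configuration.

def expected_score(rating_a: float, rating_b: float) -> float:
    # Probability that model A beats model B under the Elo model
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    # Return both ratings after a single head-to-head vote
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    return (rating_a + k * (score_a - exp_a),
            rating_b + k * ((1.0 - score_a) - (1.0 - exp_a)))

# Hypothetical example: a model rated 1417 against an opponent rated 1440
print(expected_score(1417, 1440))          # about 0.47, a near coin-flip matchup
print(elo_update(1417, 1440, a_won=True))  # the winner gains roughly 17 points

A single vote moves ratings only slightly, so the ranking Meta cited reflects the aggregate of many such comparisons.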

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

Meta’s spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Instead of offering a smooth rollout, Meta released Llama 4 on a Saturday—an unusual move—which CEO Mark Zuckerberg explained simply as ‘that’s when it was ready.’ But for many in the AI space, the launch has only deepened confusion around what these models can genuinely deliver.

For more information on these topics, visit diplomacy.edu.

Llama 4 Maverick and Scout challenge top AI benchmarks

Meta has officially launched two of its new Llama 4 AI models, Maverick and Scout, following reported delays earlier in the year.

The release forms part of Meta’s wider ambition to build and open-source the world’s most powerful AI systems. Llama 4 Behemoth, another model announced alongside them, has yet to become available.

The newly released models go head-to-head with Google’s latest AI offerings. According to Meta, Llama 4 Maverick surpasses Gemini 2.0 (Flash) in benchmarks such as coding, reasoning, and image tasks, while Llama 4 Scout outperforms both Gemini 2.0 Flash Lite and Gemma 3 in summarisation and code analysis.

Google CEO Sundar Pichai offered unexpected congratulations to the Llama 4 team on social media, reflecting the high-profile nature of the launch.

Llama 4 Maverick features 17 billion active parameters and 128 experts, making it a versatile choice for general-purpose AI assistants and creative tasks.

Llama 4 Scout shares the same number of active parameters but with a leaner expert setup, tailored for more focused tasks like document summarisation and code reasoning. Meta plans to release additional advanced models, including Llama Behemoth and Llama Reasoning, in the near future.

For more information on these topics, visit diplomacy.edu.

Meta unveils Llama 4 models to boost AI across platforms

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.

Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.

Two models, Llama 4 Scout and Maverick, are now publicly available. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.

It outperforms rivals like Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but with 128 experts, offering competitive performance against GPT-4o and DeepSeek v3 while being more efficient.
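
The repeated pairing of ‘active parameters’ with a count of ‘experts’ points to a mixture-of-experts design, in which a router sends each token to only a few expert sub-networks, so per-token compute tracks the active parameter count rather than the model’s full size. The toy Python sketch below illustrates the routing idea only; the dimensions, the top-1 routing, and the two-layer experts are illustrative assumptions, not Llama 4’s actual architecture.

import numpy as np

rng = np.random.default_rng(0)
d_model, d_hidden, n_experts = 8, 16, 4   # toy sizes, chosen for readability

# Each expert is a tiny two-layer MLP; a real model learns these weights.
experts = [(rng.standard_normal((d_model, d_hidden)),
            rng.standard_normal((d_hidden, d_model))) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(token):
    scores = token @ router                       # one routing score per expert
    chosen = int(np.argmax(scores))               # top-1 routing: one expert is "active"
    w_in, w_out = experts[chosen]
    return np.maximum(token @ w_in, 0.0) @ w_out  # only the chosen expert's weights run

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (8,): same output size, but 3 of 4 experts never ran

Scaled up, this is how a model can hold a very large total parameter count while only a fraction, the 17 billion ‘active’ parameters Meta quotes, participates in any single token’s computation.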

Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Instead of targeting lightweight use, Behemoth focuses on heavy multimodal tasks with 288 billion active parameters and nearly two trillion in total.

Meta claims it outpaces GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro in key STEM-related evaluations.

These open-weight AI models allow local deployment instead of relying on cloud APIs, though some licensing limits may apply. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.

For more information on these topics, visit diplomacy.edu.

GPT-4.5 outperforms humans in updated Turing Test

Two leading AI systems, OpenAI’s GPT-4.5 and Meta’s Llama-3.1, have passed a key milestone by outperforming humans in a modern version of the Turing Test.

The experiment, conducted by researchers at the University of California San Diego, found that GPT-4.5 was mistaken for a human 73% of the time, more often than the actual human participants were judged to be human. Meta’s Llama-3.1 followed closely, with a 56% success rate.

The study used a three-party test where participants held simultaneous five-minute conversations with both a human and an AI, and then tried to determine which was which.

These trials were conducted across two independent groups: university undergraduates and workers recruited through the online research platform Prolific. The results provide the first substantial evidence that AI can convincingly mimic human responses in spontaneous conversations.

Earlier systems such as ELIZA and GPT-4o were correctly identified as non-human in over 75% of cases.

The success of newer models in passing this benchmark points to how rapidly conversational AI is evolving, raising fresh questions about the ethical and societal implications of indistinguishable AI interactions.

For more information on these topics, visit diplomacy.edu.

Tech giants face pushback over AI and book piracy

Meta and Anthropic’s recent attempts to defend their use of copyrighted books in training AI tools under the US legal concept of ‘fair use’ are unlikely to succeed in UK courts, according to the Publishers Association and the Society of Authors.

Legal experts argue that ‘fair use’ is far broader than the UK’s stricter ‘fair dealing’ rules, which limit the unauthorised use of copyrighted works.

The controversy follows revelations that Meta may have used pirated books from Library Genesis (LibGen) to train its AI model, Llama 3. Legal filings in the US claim the use of these books was transformative and formed only a small part of the training data.

However, UK organisations and authors insist that such use amounts to large-scale copyright infringement and would not be justified under UK law.

Calls for transparency and licensing reform are growing, with more than 8,000 writers signing a petition and protests planned outside Meta’s London headquarters.

Critics, including Baroness Beeban Kidron, argue that AI models rely on the creativity and quality of copyrighted content—making it all the more important for authors to retain control and receive proper compensation.

For more information on these topics, visit diplomacy.edu.

Meta and UFC to transform fight experience

UFC President Dana White has announced a groundbreaking partnership with Meta, following his recent appointment to the tech giant’s board.

The collaboration marks a significant moment for both organisations, with Meta CEO Mark Zuckerberg, a well-known MMA enthusiast and practitioner, praising White’s ability to elevate global sports brands.

The deal aims to revolutionise fan engagement through cutting-edge technologies. According to White, plans are already underway to redesign the UFC’s ranking system, with hopes of delivering more compelling matchups.

While details remain under wraps, he hinted that AI could be central to the project, potentially transforming how fights are scored and analysed in real time.

Zuckerberg expressed excitement about the future of UFC fan experiences, suggesting Meta’s tech resources could introduce innovative ways for audiences to connect with the sport.

Enhanced data analysis may also support fighters in training and strategy, leading to higher-quality contests and fewer controversial decisions.

The full impact of the partnership will unfold in the coming years, with fans and athletes alike anticipating significant change.

For more information on these topics, visit diplomacy.edu.

Authors in London protest Meta’s copyright violations

A wave of protest has hit Meta’s London headquarters today as authors and publishing professionals gather to voice their outrage over the tech giant’s reported use of pirated books to develop AI tools.

Among the protesters are acclaimed novelists Kate Mosse and Tracy Chevalier and poet Daljit Nagra, who assembled in Granary Square near Meta’s King’s Cross office to deliver a complaint letter from the Society of Authors (SoA).

At the heart of the protest is Meta’s alleged reliance on LibGen, a so-called ‘shadow library’ known for hosting over 7.5 million books, many without the consent of their authors.

A recent searchable database published by The Atlantic revealed that thousands of copyrighted works, including those by renowned authors, may have been used to train Meta’s AI models, provoking public outcry and legal action in the US.

Vanessa Fox O’Loughlin, chair of the SoA, condemned Meta’s reported actions as ‘illegal, shocking, and utterly devastating for writers,’ arguing that such practices devalue authors’ time and creativity.

‘A book can take a year or longer to write. Meta has stolen books so that their AI can reproduce creative content, potentially putting these same authors out of business,’ she said.

Meta has denied any wrongdoing, with a spokesperson stating that the company respects intellectual property rights and believes its AI training practices comply with existing laws.

Still, the damage to trust within the creative community appears significant. Author AJ West, who discovered his novels were listed on LibGen, described the experience as a personal violation:

‘I was horrified to see that my novels were on the LibGen database, and I’m disgusted by the government’s silence on the matter,’ he said, adding, ‘To have my beautiful books ripped off like this without my permission and without a penny of compensation then fed to the AI monster feels like I’ve been mugged.’

Legal action is already underway in the US, where a group of high-profile writers, including Ta-Nehisi Coates, Junot Díaz, and Sarah Silverman, have filed a lawsuit against Meta for copyright infringement.

The suit alleges that Meta CEO Mark Zuckerberg and other top executives knew that LibGen hosted pirated content when they greenlit its use for AI development.

The protest is also aimed at UK lawmakers. Authors like Richard Osman and Kazuo Ishiguro have joined the call for British officials to summon Meta executives before parliament.

The Society of Authors has launched a petition on Change.org that has already attracted over 7,000 signatures.

Demonstrators were urged to bring placards and spread their message online using hashtags like #MetaBookThieves and #MakeItFair as they rally against alleged copyright violations and for broader protection of creative work in the age of AI.

The case is one of many that illustrate the increasingly tense relationship between the tech industry and the content and data policies governing AI training, since these systems depend heavily on the written word, drawing on literature, facts, and information from the written tradition to handle a wide range of user requests and respond to them accurately.

For more information on these topics, visit diplomacy.edu.