Meta faces landmark antitrust trial

An antitrust trial against Meta commenced in Washington, with the US Federal Trade Commission (FTC) arguing that the company’s acquisitions of Instagram in 2012 and WhatsApp in 2014 were designed to crush competition instead of fostering innovation.

Although the FTC initially approved these deals, it now claims they effectively handed Meta a monopoly. Should the FTC succeed, Meta may be forced to sell off both platforms, a move that would reshape the tech landscape.

Meta has countered by asserting that users have benefited from Instagram’s development under its ownership, instead of being harmed by diminished competition. Legal experts believe the company will focus on consumer outcomes rather than corporate intent.

Nevertheless, statements made by Meta CEO Mark Zuckerberg, such as his remark that it’s ‘better to buy than to compete,’ may prove pivotal. Zuckerberg and former COO Sheryl Sandberg are both expected to testify during the trial, which could run for several weeks.

Political tensions loom over the case, which was first launched under Donald Trump’s presidency. Reports suggest Zuckerberg has privately lobbied Trump to drop the lawsuit, while Meta has criticised the FTC’s reversal years after approving the acquisitions.

Trump’s recent dismissal of two Democratic FTC commissioners has raised concerns over political interference, especially as the commission now holds a Republican majority.

While the FTC seeks to challenge Meta’s dominance, experts caution that proving harm in this case will be far more difficult than in the ongoing antitrust battle against Google.

Unlike the search engine market, which is clearly monopolised, the social media space remains highly competitive, with platforms like TikTok, YouTube and X offering strong alternatives.

For more information on these topics, visit diplomacy.edu.

Gerry Adams targets Meta over use of his books

Gerry Adams, the former president of Sinn Féin, is considering legal action against Meta for allegedly using his books to train AI. Adams claims that at least seven of his books were included in a large collection of copyrighted material Meta used to develop its AI systems.

He has handed the matter over to his solicitor. The books in question include his autobiography Before the Dawn, his prison memoir Cage Eleven, and Hope and History, his reflections on Northern Ireland’s peace process, among others.

Adams is not the only author voicing concerns about Meta’s use of copyrighted works. A group of writers filed a US court case in January, accusing Meta of using the controversial Library Genesis (LibGen) database, which hosts over 7.5 million books, many believed to be pirated.

The discovery followed The Atlantic’s publication of a searchable database of LibGen titles, which allowed several authors to confirm that their works had been used to train Meta’s Llama AI model.

The Society of Authors has condemned Meta’s actions, with chair Vanessa Fox O’Loughlin calling the move ‘shocking and devastating’ for authors.

Many authors are concerned that AI models like Llama, which power tools such as chatbots, could undermine their work by reproducing creative content without permission. Meta has defended its actions, claiming that its use of information to train AI models is in line with existing laws.

Adams, a prolific author and former MP, joins other Northern Irish writers, including Booker Prize winner Anna Burns, in opposing the use of their work for AI training without consent.

For more information on these topics, visit diplomacy.edu.

Meta to block livestreaming for under 16s without parental permission

Meta will soon prevent children under 16 from livestreaming on Instagram unless their parents explicitly approve.

The new safety rule is part of broader efforts to protect young users online and will first be introduced in the UK, US, Canada and Australia, before being extended to the rest of Europe and beyond in the coming months.

The company explained that teenagers under 16 will also need parental permission to disable a feature that automatically blurs images suspected of containing nudity in direct messages.

These updates build on Meta’s teen supervision programme introduced last September, which gives parents more control over how their children use Instagram.

Instead of limiting the changes to Instagram alone, Meta is now extending similar protections to Facebook and Messenger.

Teen accounts on those platforms will be set to private by default, and will automatically block messages from strangers, reduce exposure to violent or sensitive content, and include reminders to take breaks after an hour of use. Notifications will also pause during usual bedtime hours.

Meta said these safety tools are already being used across at least 54 million teen accounts. The company claims the new measures will better support teenagers and parents alike in making social media use safer and more intentional, instead of leaving young users unprotected or unsupervised online.

For more information on these topics, visit diplomacy.edu.

Former Facebook executive says Meta misled over China

Former Facebook executive Sarah Wynn-Williams has accused Meta of compromising US national security to grow its business in China.

Testifying before the Senate Judiciary Committee, Wynn-Williams alleged that company executives misled employees, lawmakers, and the public about their dealings with the Chinese Communist Party.

Wynn-Williams claimed Meta aimed to gain favour in Beijing while secretly pursuing an $18 billion venture there.

In her remarks, Wynn-Williams said Meta removed the Facebook account of Chinese dissident Guo Wengui under pressure from Beijing. While the company maintains the removal was due to violations of its policies, she framed it as part of a broader pattern of submission to Chinese demands.

She also accused Meta of ignoring security warnings linked to the proposed Pacific Light Cable Network, a project that could have allowed China access to United States user data. According to her, the plans were only halted after lawmakers intervened.

Meta has denied the claims, calling her testimony false and out of touch with reality. A spokesperson noted that the company does not operate in China and that Mark Zuckerberg’s interest in the market had long been public.

The allegations arrive days before Meta’s major antitrust trial, which could result in the breakup of its ownership of Instagram and WhatsApp.

For more information on these topics, visit diplomacy.edu.

LMArena tightens rules after Llama 4 incident

Meta has come under scrutiny after submitting a specially tuned version of its Llama 4 AI model to the LMArena leaderboard, sparking concerns about fair competition.

The ‘experimental’ version, dubbed Llama-4-Maverick-03-26-Experimental, ranked second in popularity, trailing only Google’s Gemini-2.5-Pro.

While Meta openly labelled the model as experimental, many users assumed it reflected the public release. Once the official version became available, users quickly noticed it lacked the expressive, emoji-filled responses seen in the leaderboard battles.

LMArena, a crowdsourced platform where users vote on chatbot responses, said Meta’s custom variant appeared optimised for human approval, possibly skewing the results.

The group released over 2,000 head-to-head matchups to back its claims, showing the experimental Llama 4 consistently offered longer, more engaging answers than the more concise public build.

In response, LMArena updated its policies to ensure greater transparency and stated that Meta’s use of the experimental model did not align with expectations for leaderboard submissions.

Meta defended its approach, stating the experimental model was designed to explore chat optimisation and was never hidden. While company executives denied any misconduct, including speculation around training on test data, they acknowledged inconsistent performance across platforms.

Meta’s GenAI chief Ahmad Al-Dahle said it would take time for all public implementations to stabilise and improve. Meanwhile, LMArena plans to upload the official Llama 4 release to its leaderboard for more accurate evaluation going forward.

For more information on these topics, visit diplomacy.edu.

Meta rolls out restricted teen accounts across platforms

Meta is expanding its ‘Teen Accounts’ feature to Facebook and Messenger following its initial launch on Instagram last September.

The rollout begins in the US, UK, Australia, and Canada, with plans to reach more countries soon. 

These accounts are designed to give younger users an app experience with stronger safety measures, automatically activating restrictions to limit exposure to harmful content and interactions.

Teen users will be automatically placed in a more controlled environment that restricts who can message, comment, or tag them. 

Only friends and previously contacted users can reach out via Messenger or see their stories, and tagging and mentions are also limited.

These settings require parental approval for any changes, and teens under 16 must have consent to alter key safety features.

On Instagram, Meta is introducing stricter safeguards. Users under 16 now need parental permission to go live or to turn off the tool that blurs images containing suspected nudity in direct messages. 

Meta is also introducing reminders to limit screen time, prompting teens to log off after one hour, along with an overnight ‘Quiet mode’ to reduce late-night use.

The initiative follows increasing pressure on social media platforms to address concerns around teen mental health. 

In recent years, US lawmakers and the Surgeon General have highlighted the risks associated with young users’ exposure to unregulated digital environments. 

Some states have even mandated parental consent for teen access to social platforms.

Meta reports that over 54 million Instagram accounts have migrated to Teen Accounts. 

According to the company, 97% of users aged 13 to 15 keep the default protections in place. 

A study commissioned by Meta and Ipsos found that 94% of surveyed parents support Teen Accounts, with 85% saying the controls help ensure more positive online experiences for their children.

As digital safety continues to evolve as a priority, Meta’s expansion of Teen Accounts signals its willingness to build more accountable, youth-friendly online spaces across its platforms.

For more information on these topics, visit diplomacy.edu.

Meta faces backlash over Llama 4 release

Over the weekend, Meta unveiled two new Llama 4 models—Scout, a smaller version, and Maverick, a mid-sized variant it claims outperforms OpenAI’s GPT-4o and Google’s Gemini 2.0 Flash across multiple benchmarks.

Maverick quickly climbed to second place on LMArena, an AI benchmarking platform where human evaluators compare and vote on model outputs. Meta proudly pointed to Maverick’s Elo score of 1417, which placed it just beneath Gemini 2.5 Pro rather than trailing behind the usual leaders.
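LMArena’s leaderboard scores follow the Elo rating system, in which each human vote between two anonymised model responses nudges the winner’s rating up and the loser’s down. The sketch below illustrates the standard Elo update; the K-factor and starting ratings are illustrative assumptions, not LMArena’s actual configuration.

```python
# Minimal sketch of an Elo-style rating update after one head-to-head vote.
# K-factor and ratings are illustrative, not LMArena's real parameters.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A wins a single comparison against model B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0) -> tuple[float, float]:
    """Return updated (rating_a, rating_b) after one human vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Example: a 1417-rated model wins one matchup against a 1400-rated rival.
print(elo_update(1417.0, 1400.0, a_won=True))
```

Because thousands of such pairwise votes are aggregated, a model tuned to produce longer, more engaging answers can accumulate a higher rating even if its underlying capabilities are unchanged, which is the concern LMArena raised about the experimental Maverick variant.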

However, AI researchers noticed a critical detail buried in Meta’s documentation: the version of Maverick that ranked so highly wasn’t the one released to the public. Instead of using the standard model, Meta had submitted an ‘experimental’ version specifically optimised for conversations.

LMArena later criticised this move, saying Meta failed to clearly indicate the model was customised, prompting the platform to update its policies to ensure future evaluations remain fair and reproducible.

Meta’s spokesperson acknowledged the use of experimental variants, insisting the company frequently tests different configurations.

While this wasn’t a violation of LMArena’s existing rules, the episode raised concerns about the credibility of benchmark rankings when companies submit fine-tuned models instead of the ones accessible to the wider community.

Independent AI researcher Simon Willison expressed frustration, saying the impressive ranking lost all meaning once it became clear the public couldn’t even use the same version.

The controversy unfolded against a backdrop of mounting competition in open-weight AI, with Meta under pressure following high-profile releases like China’s DeepSeek model.

Instead of offering a smooth rollout, Meta released Llama 4 on a Saturday—an unusual move—which CEO Mark Zuckerberg explained simply as ‘that’s when it was ready.’ But for many in the AI space, the launch has only deepened confusion around what these models can genuinely deliver.

For more information on these topics, visit diplomacy.edu.

Llama 4 Maverick and Scout challenge top AI benchmarks

Meta has officially launched two of its new Llama 4 AI models, Maverick and Scout, following reported delays earlier in the year.

The release forms part of Meta’s wider ambition to build and open-source the world’s most powerful AI systems. Llama 4 Behemoth, another model announced alongside them, has yet to become available.

The newly released models go head-to-head with Google’s latest AI offerings. According to Meta, Llama 4 Maverick surpasses Gemini 2.0 (Flash) in benchmarks such as coding, reasoning, and image tasks, while Llama 4 Scout outperforms both Gemini 2.0 Flash Lite and Gemma 3 in summarisation and code analysis.

Google CEO Sundar Pichai offered unexpected congratulations to the Llama 4 team on social media, reflecting the high-profile nature of the launch.

Llama 4 Maverick features 17 billion active parameters and 128 experts, making it a versatile choice for general-purpose AI assistants and creative tasks.

Llama 4 Scout shares the same number of active parameters but with a leaner expert setup, tailored for more focused tasks like document summarisation and code reasoning. Meta plans to release additional advanced models, including Llama Behemoth and Llama Reasoning, in the near future.

For more information on these topics, visit diplomacy.edu.

Meta unveils Llama 4 models to boost AI across platforms

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.

Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.

Two models, Llama 4 Scout and Maverick, are now publicly available. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.

It outperforms rivals like Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but with 128 experts, offering competitive performance against GPT-4o and DeepSeek v3 while being more efficient.
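Because the weights are openly downloadable, developers can load Scout or Maverick locally rather than calling a hosted API. The snippet below is a minimal sketch using the standard Hugging Face transformers interface; the model identifier is illustrative, and access to the actual weights is gated behind Meta’s Llama licence.

```python
# Minimal sketch of loading an open-weight Llama 4 model via Hugging Face.
# The repository name below is illustrative; the real repo and licence
# gating may differ, and downloading requires accepting Meta's licence.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # illustrative repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # needs `accelerate`

prompt = "Summarise the main findings of this report in three bullet points."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```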

Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Instead of targeting lightweight use, Behemoth focuses on heavy multimodal tasks with 288 billion active parameters and nearly two trillion in total.
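The gap between Behemoth’s 288 billion active parameters and its nearly two trillion total parameters reflects a mixture-of-experts design, in which a router activates only a few experts for each token. The toy layer below illustrates the idea; the dimensions, expert count, and top-k routing are illustrative assumptions, not Llama 4’s actual architecture.

```python
# Toy mixture-of-experts layer showing why "active" parameters per token are
# far fewer than total parameters. Sizes and routing here are illustrative.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, d_model: int = 32, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (n_tokens, d_model)
        gate = self.router(x)                             # (n_tokens, n_experts)
        weights, chosen = gate.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        for t in range(x.size(0)):                         # route each token separately
            for w, e in zip(weights[t], chosen[t]):
                out[t] += w * self.experts[int(e)](x[t])   # only the chosen experts run
        return out

moe = ToyMoE()
tokens = torch.randn(4, 32)
print(moe(tokens).shape)  # torch.Size([4, 32]); only 2 of 8 experts fire per token
```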

Meta claims it outpaces GPT-4.5, Claude Sonnet 3.7, and Gemini 2.0 Pro in key STEM-related evaluations.

These open-weight AI models allow local deployment instead of relying on cloud APIs, though some licensing limits may apply. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.

For more information on these topics, visit diplomacy.edu.

GPT-4.5 outperforms humans in updated Turing Test

Two leading AI systems, OpenAI’s GPT-4.5 and Meta’s Llama-3.1, have passed a key milestone by outperforming humans in a modern version of the Turing Test.

The experiment, conducted by researchers at the University of California San Diego, found that GPT-4.5 was mistaken for a human 73% of the time, surpassing the human identification rate. Meta’s Llama-3.1 followed closely, with a 56% success rate.

The study used a three-party test where participants held simultaneous five-minute conversations with both a human and an AI, and then tried to determine which was which.

These trials were conducted across two independent groups: university undergraduates and workers recruited through the online platform Prolific. The results provide the first substantial evidence that AI can convincingly mimic human responses in spontaneous conversations.

Earlier language models such as ELIZA and GPT-4o were correctly identified as non-human in over 75% of cases.

The success of newer models in passing this benchmark points to how rapidly conversational AI is evolving, raising fresh questions about the ethical and societal implications of indistinguishable AI interactions.

For more information on these topics, visit diplomacy.edu.