Meta unveils Llama 4 models to boost AI across platforms
Rather than focusing solely on commercial offerings, Meta releases the open-weight Llama 4 Scout and Maverick models, broadening developer access and supporting multimodal AI development.

Meta has launched Llama 4, its latest and most advanced family of open-weight AI models, aiming to enhance the intelligence of Meta AI across services like WhatsApp, Instagram, and Messenger.
Instead of keeping these models cloud-restricted, Meta has made them available for download through its official Llama website and Hugging Face, encouraging wider developer access.
Two models, Llama 4 Scout and Maverick, are now publicly available. Both use a mixture-of-experts design, meaning only a fraction of their total parameters is active for any given token. Scout, the lighter model with 17 billion active parameters, supports a 10 million-token context window and can run on a single Nvidia H100 GPU.
Meta says it outperforms rivals such as Google’s Gemma 3 and Mistral 3.1 in benchmark tests. Maverick, the more capable model, uses the same number of active parameters but draws on 128 experts, and Meta reports competitive performance against GPT-4o and DeepSeek v3 at greater efficiency.
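Because the weights are published on Hugging Face, fetching them follows the usual Hugging Face workflow. The sketch below is a minimal illustration, not Meta's official instructions: the repository name is an assumption to verify on the Llama site or Hugging Face, and gated Llama repositories require accepting Meta's licence and logging in with an access token before downloading.

```python
# Minimal sketch: pull the open weights to local disk via huggingface_hub.
# The repo ID below is an assumption; check the official listing before use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo name
    local_dir="./llama-4-scout",                          # where the weights land on disk
)
print(f"Weights downloaded to {local_dir}")
```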
Meta also revealed the Llama 4 Behemoth model, still in training, which serves as a teacher for the rest of the Llama 4 line. Unlike its lighter siblings, Behemoth targets heavy multimodal workloads, with 288 billion active parameters and nearly two trillion in total.
Meta claims it outpaces GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro in key STEM-related evaluations.
Because the models are open-weight, they can be deployed locally instead of through cloud APIs, though Meta’s licence carries restrictions, such as requiring companies with more than 700 million monthly active users to obtain separate permission. With Scout and Maverick already accessible, Meta is gradually integrating Llama 4 capabilities into its messaging and social platforms worldwide.
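For a sense of what local deployment looks like in practice, the sketch below runs a single chat turn entirely on the developer's own hardware, assuming the downloaded Scout checkpoint works with the standard transformers text-generation pipeline; the repository name is again an assumption, and Meta's release notes may recommend a dedicated model class or a multimodal pipeline instead.

```python
# Hypothetical sketch of a fully local chat turn; no cloud API is called.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo name
    torch_dtype=torch.bfloat16,  # half precision to fit on a single large GPU
    device_map="auto",           # place model layers on available devices
)

messages = [
    {"role": "system", "content": "You are a concise assistant running on local hardware."},
    {"role": "user", "content": "Summarise the Llama 4 release in one sentence."},
]

# Recent transformers pipelines accept chat-style message lists and apply the
# model's chat template automatically; the reply is appended as the last message.
reply = chat(messages, max_new_tokens=150)
print(reply[0]["generated_text"][-1]["content"])
```

Either route keeps inference on hardware the developer controls, which is the practical difference between an open-weight release and a hosted API.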