Meta pursues two AI paths with internal tension
Yann LeCun aims for open-source AI, but Meta’s leadership is shifting focus to closed, text-based models despite previous commitments to openness.
Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.
While Zuckerberg is doubling down on superintelligence, launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.
The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg’s ambition to accelerate progress in large language models, a push triggered by disappointment with the recent performance of Meta’s Llama models.
Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.
LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he argues that openness fosters diversity and democratic access to AI, Zuckerberg’s recent memo made no mention of open-source principles.
Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.
Meta is pursuing both visions for now, continuing to support FAIR, the research lab LeCun founded, while investing in a new, more centralised superintelligence effort. The company has also offered massive compensation packages to OpenAI researchers, some reportedly worth up to $100 million.
Whether Meta keeps balancing both philosophies or commits to one outright could define its AI legacy.