Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred on its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened the floodgates to experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. Even so, the company’s stock fell nearly 13% following the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

Meta’s international outlook is dimming with tariffs increasing and Chinese advertising revenue falling. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap over earlier Llama generations and is now comparable to GPT-4 across a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta is choosing a different path, one built on openness, infrastructure, and community adoption.

The company is investing tens of billions of dollars without a clear monetisation model, placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.

At the same time, Meta is facing a major antitrust trial, with the FTC arguing that its Instagram and WhatsApp acquisitions were made to eliminate competition rather than foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches academy to lead in AI innovation

The UAE has announced the launch of its AI Academy, aiming to strengthen the country’s position in AI innovation both regionally and globally.

Developed in partnership with the Polynom Group and the Abu Dhabi School of Management, it is designed to foster a skilled workforce in AI and programming.

It will offer short courses in multiple languages, covering AI fundamentals, national strategies, generative tools, and executive-level applications.

A flagship offering is the specialised Chief AI Officer (CAIO) Programme, tailored for leadership roles across sectors.

NVIDIA’s technologies will be integrated into select courses, enhancing the UAE academy’s technical edge and helping drive the development of AI capabilities throughout the region.

EU criticised for secretive security AI plans

A new report by Statewatch has revealed that the European Union is quietly laying the groundwork for the widespread use of experimental AI technologies in policing, border control, and criminal justice.

The report warns that these developments pose serious threats to transparency, accountability, and fundamental rights.

Despite the adoption of the EU AI Act in 2024, broad exemptions allow law enforcement and migration agencies to bypass safeguards, including a full exemption for certain high-risk systems until 2031.

Institutions like Europol and eu-LISA are involved in building technical infrastructure for security-focused AI, often without public knowledge or oversight.

The study also highlights how secretive working groups, such as the European Clearing Board, have influenced legislation to favour police interests.

Critics argue that these moves risk entrenching discrimination and reducing democratic control, especially at a time of rising authoritarian influence within EU institutions.

DeepSeek returns to South Korea after data privacy overhaul

Chinese AI service DeepSeek is once again available for download in South Korea after a two-month suspension.

The app was initially removed from platforms like the App Store and Google Play Store in February, following accusations of breaching South Korea’s data protection regulations.

Authorities discovered that DeepSeek had transferred user data abroad without appropriate consent.

Significant changes to DeepSeek’s privacy practices have now allowed its return. The company updated its policies to comply with South Korea’s Personal Information Protection Act, offering users the choice to refuse the transfer of personal data to companies based in China and the United States.

These adjustments were crucial in meeting the recommendations made by South Korea’s Personal Information Protection Commission (PIPC).

Although users can once again download DeepSeek, South Korean authorities have promised continued monitoring to ensure the app maintains higher standards of data protection.

DeepSeek’s future in the market will depend heavily on its ongoing compliance with the country’s strict privacy requirements.

FBI warns users not to click on suspicious messages

Cybersecurity experts are raising fresh alarms following an FBI warning that clicking on a single link could lead to disaster.

With cyberattacks becoming more sophisticated, hackers now need just 60 seconds to compromise a victim’s device after launching an attack.

Techniques range from impersonating trusted brands like Google to deploying advanced malware and using AI tools to scale attacks even further.

The FBI has revealed that internet crimes caused $16 billion in losses during 2024 alone, with more than 850,000 complaints recorded.

Criminals exploit emotional triggers like fear and urgency in phishing emails, often sent from what appear to be genuine business accounts. A single click could expose sensitive data, install malware automatically, or hand attackers access to personal accounts by stealing browser session cookies.

To make matters worse, many attacks now originate from smartphone farms targeting both Android and iPhone users. Given the evolving threat landscape, the FBI has urged everyone to be extremely cautious.

Their key advice is clear: do not click on anything received via unsolicited emails or text messages, no matter how legitimate it might appear.

Remaining vigilant, avoiding interaction with suspicious messages, and reporting any potential threats are critical steps in combating the growing tide of cybercrime.

Deepfake victims gain new rights with House-approved bill

The US House of Representatives has passed the ‘Take It Down’ Act with overwhelming bipartisan support, aiming to protect Americans from the spread of deepfake and revenge pornography.

The bill, approved by a 409-2 vote, criminalises the distribution of non-consensual intimate imagery—including AI-generated content—and now heads to President Donald Trump for his signature.

First Lady Melania Trump, who returned to public advocacy earlier this year, played a key role in supporting the legislation. She lobbied lawmakers last month and celebrated the bill’s passage, saying she was honoured to help guide it through Congress.

The White House confirmed she will attend the signing ceremony.

The law requires social media platforms and similar websites to remove such harmful content upon request from victims, instead of allowing it to remain unchecked.

Victims of deepfake pornography have included both public figures such as Taylor Swift and Alexandria Ocasio-Cortez, and private individuals like high school students.

Introduced by Republican Senator Ted Cruz and backed by Democratic lawmakers including Amy Klobuchar and Madeleine Dean, the bill reflects growing concern across party lines about online abuse.

Melania Trump, echoing her earlier ‘Be Best’ initiative, stressed the need to ensure young people—especially girls—can navigate the internet safely instead of being left vulnerable to digital exploitation.

AI research project aims to improve drug-resistant epilepsy outcomes

A research collaboration between Swansea University and King’s College London has secured a prestigious Medical Research Council project grant to tackle drug-resistant epilepsy.

The project brings together clinicians, data scientists, AI specialists, and individuals with lived experience from the Epilepsy Research Institute’s Shape Network to advance understanding and treatment of the condition.

Drug-resistant epilepsy affects around 30% of the 600,000 people living with epilepsy in the UK, leading to ongoing seizures, memory issues, and mood disorders.

Researchers will use advanced natural language processing, AI, and anonymised healthcare data to better predict who will develop resistance to medications and how treatments can be prioritised.

Project lead Dr Owen Pickrell from Swansea University highlighted the unique opportunity to combine real-world clinical data with cutting-edge AI to benefit people living with the condition.

Annee Amjad from the Epilepsy Research Institute also welcomed the project, noting that it addresses several of the UK’s top research priorities for epilepsy.

UK government urged to outlaw apps creating deepfake abuse images

The Children’s Commissioner has urged the UK Government to ban AI apps that create sexually explicit images through ‘nudification’ technology. AI tools capable of manipulating real photos to make people appear naked are being used to target children.

Concerns in the UK are growing as these apps are now widely accessible online, often through social media and search platforms. In a newly published report, Dame Rachel de Souza warned that children, particularly girls, are altering their online behaviour out of fear of becoming victims of such technologies.

She stressed that while AI holds great potential, it also poses serious risks to children’s safety. The report also recommends stronger legal duties for AI developers and improved systems to remove explicit deepfake content from the internet.

New AI guidelines aim to cut NHS waiting times

The UK government has announced new guidelines to encourage the use of AI tools in the NHS, aiming to streamline administrative processes and improve patient care. AI that transcribes spoken conversations into structured medical documents will be used across hospitals and GP surgeries.

Reducing bureaucracy is expected to free clinicians to spend more time with patients. Early trials of ambient voice technologies, such as those at Great Ormond Street Hospital, show improvements in emergency department efficiency and clinician productivity.

AI-generated documentation is reviewed by medical staff before being added to health records, preserving patient safety and ensuring accuracy. Privacy, data compliance, and staff training remain central to the government’s guidelines.

NHS England evaluations indicate AI integration is already contributing to shorter waiting times and an increase in appointment availability. The technology also supports broader NHS goals to digitise care, reduce costs, and enhance diagnostic accuracy.

SK Telecom begins SIM card replacement after data breach

South Korea’s largest carrier, SK Telecom, began replacing SIM cards for its 23 million customers on Monday following a serious data breach.

While declining to reveal the full extent of the damage or identify the perpetrators, the company has apologised and offered free USIM chip replacements at 2,600 stores nationwide, urging users to either change their chips or enrol in an information protection service.

The breach, caused by malicious code, compromised personal information and prompted a government-led review of South Korea’s data protection systems.

However, SK Telecom has secured less than five percent of the USIM chips required and plans to procure an additional five million by the end of May, leaving it without enough stock for immediate replacements.

Frustrated customers, like 30-year-old Jang waiting in line in Seoul, criticised the company for failing to be transparent about the amount of data leaked and the number of users affected.

Instead of providing clear answers, SK Telecom has focused on encouraging users to seek chip replacements or protective measures.

South Korea, often regarded as one of the most connected countries globally, has faced repeated cyberattacks, many attributed to North Korea.

Just last year, police confirmed that North Korean hackers had stolen over a gigabyte of sensitive financial data from a South Korean court system over a two-year span.
