Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened a floodgate of experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capability, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for frontier-scale models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. Wall Street nevertheless reacted negatively: the company’s stock fell nearly 13% following the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

With tariffs increasing and Chinese advertising revenue falling, Meta’s international outlook is dimming. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap over earlier Llama generations and is now comparable to GPT-4 across a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta is charting a different path, one based on openness, infrastructure, and community adoption.

The company is investing tens of billions of dollars without a clear monetisation model, placing a massive bet that open models combined with proprietary infrastructure can become the dominant framework for AI development.

At the same time, Meta faces a major antitrust trial, with the FTC arguing that its Instagram and WhatsApp acquisitions were made to eliminate competition rather than foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.

Big Tech accused of undue influence over EU AI Code

The European Commission is facing growing criticism after a joint investigation revealed that Big Tech companies had disproportionate influence over the drafting of the EU’s Code of Practice on General Purpose AI.

The report, published by Corporate Europe Observatory and LobbyControl, claims firms such as Google, Microsoft, Meta, Amazon, and OpenAI were granted privileged access to shaping the voluntary code, which aims to help companies comply with the upcoming AI Act.

While 13 Commission-appointed experts led the process and over 1,000 participants were involved in feedback workshops, civil society groups and smaller stakeholders were largely side-lined.

Their input was often limited to reacting through emojis on an online platform instead of engaging in meaningful dialogue, the report found.

The US government also waded into the debate, sending a letter to the Commission opposing the Code. The Trump administration argued the EU’s digital regulations would stifle innovation.

Critics meanwhile say the EU’s current approach opens the door to Big Tech lobbying, potentially weakening the Code’s effectiveness just as it nears finalisation.

Although the Code was due in early May, it is now expected by June or July, just before new rules on general-purpose AI tools come into force in August.

The Commission has yet to confirm the revised timeline.

GPT-4o update rolled back over user discomfort

OpenAI has reversed a recent update to its GPT-4o model after users reported that the chatbot had become overly flattering and disingenuous.

The update, which was intended to refine the model’s personality and usefulness, was criticised for creating interactions that felt uncomfortably sycophantic. According to OpenAI, the changes prioritised short-term feedback at the expense of authentic, balanced responses.

The behaviour was exclusive to GPT-4o, the latest flagship model currently used in the free version of ChatGPT. Introduced with capabilities across text, vision, and audio, GPT-4o is now under revised guidelines to ensure more honest and transparent interactions.

OpenAI has admitted that designing a single default personality for a global user base is complex and can lead to unintended effects. To prevent similar issues in future, the company is introducing stronger guardrails and expanding pre-release testing to a wider group of users.

It also plans to give people greater control over the chatbot’s tone and behaviour, including options for real-time feedback and customisable default personalities.

Tech giants circle as Chrome faces possible break-up

Alphabet, Google’s parent company, may soon be forced to split into separate entities, with its Chrome browser emerging as a particularly attractive target.

With Chrome controlling over 65% of the global browser market, interest is mounting from AI-driven firms and legacy tech companies alike, all eager to take control of a platform that reaches billions of users.

OpenAI, known for ChatGPT, sees Chrome as a natural fit for its expanding AI ecosystem, especially with search features increasingly integrated into its chatbot.

Rival AI search firm Perplexity is also eyeing Chrome instead of building from scratch, viewing it as a shortcut to mainstream adoption and a rich source of user data and engagement.

Yahoo, backed by Apollo Global Management, is reportedly considering a $50 billion bid, even while developing its own browser internally.

Despite legal uncertainties and the threat of drawn-out regulatory battles, the opportunity to own Chrome could radically shift influence in the tech sector, especially while Google faces mounting antitrust scrutiny.

IBM commits billions to future US computing

IBM has unveiled a bold plan to invest $150 billion in the United States over the next five years. The move is designed to accelerate technological development while reinforcing IBM’s leading role in computing and AI.

A significant portion, over $30 billion, will support research and development, with a strong emphasis on manufacturing mainframes and quantum computers on American soil.

These efforts build on IBM’s legacy in the US, where it has long played a key role in advancing national infrastructure and innovation.

IBM highlighted the importance of its Poughkeepsie facility, which produces systems powering over 70% of global transaction value.

It also views quantum computing as a leap that could unlock solutions beyond today’s digital capabilities, bolstering economic growth, job creation, and national security.

Gemini AI coming soon to smartwatches and cars

Google has revealed plans to expand its Gemini AI assistant to a wider range of Android-connected devices later in 2025.

CEO Sundar Pichai confirmed the development during the company’s Q1 earnings call, naming tablets, smartwatches, headphones, and vehicles running Android Auto as upcoming platforms.

Gemini will gradually replace Google Assistant, offering more natural, conversational interactions and potentially new features like real-time responses through ‘Gemini Live’. Though a detailed rollout schedule remains undisclosed, more information is expected at Google I/O 2025 next month.

Evidence of Gemini integration has already surfaced in Wear OS and Android Auto updates, suggesting enhanced voice control and contextual features.

It remains unclear whether the assistant’s processing will be cloud-based or supported locally through connected Android devices.

UAE launches academy to lead in AI innovation

The UAE has announced the launch of its AI Academy, aiming to strengthen the country’s position in AI innovation both regionally and globally.

Developed in partnership with the Polynom Group and the Abu Dhabi School of Management, it is designed to foster a skilled workforce in AI and programming.

It will offer short courses in multiple languages, covering AI fundamentals, national strategies, generative tools, and executive-level applications.

A flagship offering is the specialised Chief AI Officer (CAIO) Programme, tailored for leadership roles across sectors.

NVIDIA’s technologies will be integrated into select courses, enhancing the UAE academy’s technical edge and helping drive the development of AI capabilities throughout the region.

EU criticised for secretive security AI plans

A new report by Statewatch has revealed that the European Union is quietly laying the groundwork for the widespread use of experimental AI technologies in policing, border control, and criminal justice.

The report warns that these developments pose serious threats to transparency, accountability, and fundamental rights.

Despite the adoption of the EU AI Act in 2024, broad exemptions allow law enforcement and migration agencies to bypass safeguards, including a full exemption for certain high-risk systems until 2031.

Institutions like Europol and eu-LISA are involved in building technical infrastructure for security-focused AI, often without public knowledge or oversight.

The study also highlights how secretive working groups, such as the European Clearing Board, have influenced legislation to favour police interests.

Critics argue that these moves risk entrenching discrimination and reducing democratic control, especially at a time of rising authoritarian influence within EU institutions.

Study finds generative AI has not boosted worker earnings

Generative AI tools like ChatGPT, Claude, and Gemini have had little impact on wages or job losses, according to a new study.

Research by economists Anders Humlum and Emilie Vestergaard found no significant changes in earnings or working hours across 11 occupations often considered vulnerable to AI disruption, such as accountants, teachers, and journalists.

Despite rapid adoption of chatbots in workplaces, the promised economic benefits have yet to materialise.

Company investment has boosted chatbot adoption, helping most users save time; however, average time savings remain small, at just 2.8 percent of working hours. New tasks created by AI, such as reviewing chatbot outputs or monitoring student cheating, often cancel out the potential time saved.

Researchers argue that automation tools historically generate new demands for workers, but so far, AI has not significantly altered productivity or earnings.

The tech industry’s enormous spending on AI infrastructure may face greater scrutiny, as companies like Microsoft and Amazon are already scaling back investments due to slower-than-expected business adoption.

While there are modest gains, Humlum concludes that the transformative effects predicted for AI tools have not yet appeared in real-world economic data, and that any future impact will require better integration and a shift in workplace processes.

Demystifying AI: How to prepare international organisations for AI transformation?

AI as a turning point, not a trend

Jovan Kurbalija, Director of Diplo, opened the conversation by framing AI as both a challenge and an opportunity. It’s not just about adopting a new tool but about fundamentally rethinking the structures, workflows, and values underpinning international organisations. This transformation is particularly urgent for Geneva, home to a dense web of multilateral institutions. AI, he argued, needs to be shaped by the values of human rights, public service, and multilateral cooperation. It shouldn’t just be plugged in like a new software package—it has to reflect the ethical and institutional DNA of Geneva and the wider UN system.

Kurbalija emphasised that organisations must stop waiting for mandates or budget allocations to experiment with AI. Change is happening quickly, and the longer institutions wait, the more reactive—and less prepared—they become. It’s not a question of whether AI will become part of international work, but how and on whose terms.

He broke down the process of AI adoption into stages. Setting up a basic AI tool like ChatGPT takes minutes, but truly integrating AI into an institution—so it works in harmony with daily operations, internal processes, and organisational culture—can take a year or more. That transformation isn’t about code but about people, mindsets, and habits.

To help demystify AI, Kurbalija walked through a simple explanation of how large language models work. These systems operate on pattern recognition and probability—they look for recurring structures in massive datasets to predict what comes next. Using the example of national flags, he showed how AI might group them by common features like colours or symbols. But while AI is good at spotting patterns, it’s not always great at understanding exceptions. Human judgement, nuance, and even rebellion against the expected still matter. The example of Greenland rejecting a typical Nordic cross flag in favour of a unique design served as a reminder: humans don’t always follow the algorithm.
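
To make the pattern-and-probability point concrete, here is a minimal Python sketch – a toy bigram model, not anything demonstrated at the session – that predicts the next word purely from observed frequencies:

from collections import Counter, defaultdict

# Toy corpus standing in for the massive datasets an LLM learns from.
corpus = (
    "the flag has a cross . the flag has stars . "
    "the banner has a cross ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    best, freq = counts.most_common(1)[0]
    return best, freq / sum(counts.values())

print(predict_next("flag"))  # ('has', 1.0): 'has' always follows 'flag' here
print(predict_next("has"))   # ('a', 0.66...): 'a' follows 'has' twice out of three times

Real language models make the same kind of prediction over subword tokens, with billions of learned parameters rather than a frequency table, which is why they excel at spotting patterns yet can stumble on exceptions like Greenland’s flag.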

Rethinking knowledge and data

This led to a deeper point about how we think about knowledge. In many digital policy conversations, the term ‘data’ has taken over, while older concepts like ‘knowledge’ or ‘wisdom’ have faded into the background. But AI isn’t just about data—it’s about how we know and interpret the world. When we use tools like ChatGPT, we’re not just feeding in facts but engaging with systems that model human thought, reasoning, and understanding. That’s a big leap from traditional tech tools and requires a different mindset.

One of the most important messages was a caution against ‘plug-and-play’ illusions. Some consultancy firms market AI as a magic solution—something you can install quickly to appear innovative. But that misses the point. Real AI adoption is slow, strategic, and deeply tied to how an organisation functions. The goal isn’t just to install AI—it’s to rethink how decisions are made, how institutional knowledge is captured, and how work gets done.

Diplo’s journey served as an example. With limited funding and a small team, Diplo couldn’t compete with tech giants in terms of scale. However, it focused on enriching its own data, for example, by annotating half a million UN documents to create a highly structured knowledge base. This allowed it to build AI tools that are far more useful and context-aware than generic models. Kurbalija pointed out that while large models keep growing, they hit diminishing returns. The real value now lies in the quality and structure of the underlying data, not just the quantity.
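
The session did not detail Diplo’s annotation pipeline, but the underlying idea – enriching documents with structured labels so that retrieval can be context-aware – can be sketched roughly as follows (the schema, sample texts, and labelling rules are illustrative assumptions, not Diplo’s actual system):

# Illustrative sketch: enriching raw documents with structured annotations
# so retrieval can filter on metadata rather than keywords alone.
documents = [
    {"id": "unga-res-77-1", "text": "The General Assembly reaffirms ..."},
    {"id": "hrc-stmt-2023", "text": "The Human Rights Council notes ..."},
]

def annotate(doc):
    # Attach labels a human annotator or a classifier would assign.
    doc["annotations"] = {
        "issuing_body": "UNGA" if "General Assembly" in doc["text"] else "other",
        "topics": ["human rights"] if "Human Rights" in doc["text"] else [],
    }
    return doc

knowledge_base = [annotate(d) for d in documents]

# A context-aware query combines metadata filters with text search.
hits = [d for d in knowledge_base if "human rights" in d["annotations"]["topics"]]
print([d["id"] for d in hits])  # ['hrc-stmt-2023']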

Making it work: From tools to transformation

The second part of the session highlighted how AI is reshaping three core work areas for international organisations: reporting, translation, and training.

In terms of reporting, diplomats spend vast amounts of time summarising meetings, drafting briefs, and crafting position papers. AI can help—tools like ChatGPT can generate drafts, but they need to be trained to reflect specific organisational or national perspectives. A generic summary isn’t enough when it comes to nuanced diplomatic language. The technology can be a time-saver, but only if adapted to context.

Translation and interpretation came next. Geneva depends heavily on these services, and AI tools like DeepL are already widely used. But the challenge goes beyond just language. AI tools struggle with accents, institutional jargon, and acronyms. To be truly effective in Geneva, translation tools must be trained on international diplomacy’s unique linguistic landscape.
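
One concrete way to adapt a generic tool to that landscape is a custom glossary. DeepL’s API supports glossaries that pin down how house terms and acronyms are rendered; the sketch below assumes the official deepl Python package and an API key, and the two entries are invented examples rather than an official terminology list:

# Steering DeepL towards institutional jargon with a custom glossary.
# Requires: pip install deepl. The key and glossary entries are placeholders.
import deepl

translator = deepl.Translator("YOUR_AUTH_KEY")

glossary = translator.create_glossary(
    "geneva-jargon",
    source_lang="EN",
    target_lang="FR",
    entries={"UPR": "EPU", "treaty body": "organe conventionnel"},
)

result = translator.translate_text(
    "The UPR report was forwarded to the treaty body.",
    source_lang="EN",
    target_lang="FR",
    glossary=glossary,
)
print(result.text)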

Training staff for the AI era was the final major theme. It’s not enough to hold theoretical sessions on AI ethics—what’s needed is hands-on experience. That’s where Diplo’s AI Apprenticeship online course comes in.

AI apprenticeship

Introduced by Anita Lamprecht, the online course helps participants build their own AI agents tailored to their organisation’s needs. The process is surprisingly simple: participants interact with the bot, give it instructions, define tone and values, and teach it to behave like a knowledgeable assistant.
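
The session did not specify which platform the course uses, but the core move – giving a model instructions, a tone, and values – looks roughly like this with any OpenAI-style chat API (the model name and instruction text are illustrative, not course material):

# Minimal sketch of instructing a chat model to act as an institutional
# assistant. Assumes the openai Python package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

instructions = (
    "You are an assistant for a Geneva-based international organisation. "
    "Tone: formal and neutral. Values: human rights and multilateralism. "
    "When unsure, say so rather than guessing."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {"role": "system", "content": instructions},
        {"role": "user", "content": "Summarise today's plenary in three bullet points."},
    ],
)
print(response.choices[0].message.content)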

But the training goes deeper than prompt engineering. The programme is designed around systems thinking—it encourages participants to see AI not as a standalone tool, but as part of an interconnected institutional ecosystem. Over several weeks, participants explore everything from risk and data labelling to cybersecurity and knowledge mapping. They test different AI engines, assess their outputs, and finish with a project tailored to their own institution. Future editions of the programme are already in the works.

Boundary spanners: The people who connect the dots

The idea of the ‘boundary spanner’ recurred throughout the session. These are the people who connect communities—techies, diplomats, policy folks—and help ideas move across domains. Geneva, for all its density of institutions, still operates in silos. A data-driven analysis found that only 3% of hyperlinks on Geneva-based websites connect to other Geneva-based organisations. That’s a stark indicator of how disconnected even closely situated institutions can be.
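
The methodology behind the 3% figure was not spelled out, but a simplified version of such a link analysis might look like this (the site list is a placeholder, and a real study would crawl far more pages):

# Rough sketch: how often do Geneva-based sites link to one another?
# Requires: pip install requests beautifulsoup4. Domains are placeholders.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

geneva_sites = ["https://www.diplomacy.edu", "https://www.itu.int"]
geneva_domains = {urlparse(u).netloc for u in geneva_sites}

total_links = 0
geneva_links = 0
for site in geneva_sites:
    html = requests.get(site, timeout=10).text
    for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        domain = urlparse(anchor["href"]).netloc
        if not domain:  # skip relative links within the same site
            continue
        total_links += 1
        if domain in geneva_domains and domain != urlparse(site).netloc:
            geneva_links += 1  # external link to another Geneva organisation

if total_links:
    print(f"{100 * geneva_links / total_links:.1f}% of absolute links point "
          f"to other Geneva-based organisations")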

The solution isn’t to eliminate silos—they’re human and inevitable—but to build more bridges. Whether it’s casual AI meetups or formal partnerships, organisations need more people who can connect the dots. This is where innovation happens—not in isolation, but at the intersections.

The bureaucracy bottleneck

The session also highlighted how bureaucracy remains one of the most significant barriers to innovation. One participant raised a simple, practical idea: instead of relying on external AI tools that store sensitive data off-site, why not build an in-house model? Technically, it’s easy and cheap. But institutionally, it’s slow—committees, approval chains, and consultant reports can stall even the simplest project.

The key message was that many young professionals already have the skills and ideas. What they lack is the space to act. If international organisations want to thrive in the AI era, they need to empower their internal talent—give them a sandbox, and let them experiment.
