Elon Musk has suggested that AI should replace many federal government workers, criticising the US administration as bloated and inefficient.
Speaking privately at the Milken Institute Global Conference in Beverly Hills, Musk argued AI could perform government tasks faster and with greater accuracy, ultimately saving taxpayers money.
His remarks coincided with the winding down of his controversial volunteer role leading the Department of Government Efficiency (DOGE), an initiative born under Donald Trump’s presidency.
Musk spent over 100 days embedded in the White House, even setting up a small office in the West Wing. Despite joking about its minimal view and sleeping in the Lincoln Bedroom, he claimed his work had major impacts — including rooting out fraud and slashing federal budgets.
Musk said DOGE was responsible for cutting $160 billion in government spending, although no formal evidence has been released to support that figure.
The programme has sparked intense backlash. Thousands of federal employees were reportedly dismissed or resigned during the DOGE audits, prompting lawsuits and allegations of illegal firings.
Critics say the sweeping cuts have left the US less prepared for emergencies and reduced its global influence, allowing China to expand its reach. Protesters have targeted Tesla in response, leading Trump to defend Musk and condemn the attacks.
Although scaling back his involvement in Washington, Musk isn’t leaving entirely. He will now spend only one or two days a week on government affairs, returning more of his focus to Tesla amid flagging sales and investor pressure.
Despite the chaos, DOGE has inspired new political groups in Congress, blurring the line between satire and policy. Musk himself finds it all surreal, asking, ‘Are we in a simulation here?’
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
At this year’s Michelin Guide awards in France, AI sparked nearly as much conversation as the stars themselves.
Paris-based chef Matan Zaken, of the one-star restaurant Nhome, said AI dominated discussions among chefs, even though many are hesitant to admit they already rely on tools like ChatGPT for inspiration and recipe development.
Zaken openly embraces AI in his kitchen, using platforms like ChatGPT Premium to generate ingredient pairings—such as peanuts and wild garlic—that he might not have considered otherwise. Instead of starting with traditional tastings, he now consults vast databases of food imagery and chemical profiles.
In a recent collaboration with the digital collective Obvious Art, AI-generated food photos came first, and Zaken created dishes to match them.
Still, not everyone is sold on AI’s place in haute cuisine. Some top chefs insist that no algorithm can replace the human palate or creativity honed by years of training.
Philippe Etchebest, who just earned a second Michelin star, argued that while AI may be helpful elsewhere, it has no place in the artistry of the kitchen. Others worry it strays too far from the culinary traditions rooted in local produce and craftsmanship.
Many chefs, however, seem more open to using AI behind the scenes. From managing kitchen rotas to predicting ingredient costs or carbon footprints, phone apps like Menu and Fullsoon are gaining popularity.
Experts believe molecular databases and cookbook analysis could revolutionise flavour pairing and food presentation, while robots might one day take over laborious prep work—peeling potatoes included.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The Reserve Bank of New Zealand has warned that the swift uptake of AI in the financial sector could pose a threat to financial stability.
A report released on Monday highlighted how errors in AI systems, data privacy breaches and potential market distortions might magnify existing vulnerabilities instead of simply streamlining operations.
The central bank also expressed concern over the increasing dependence on a handful of third-party AI providers, which could lead to market concentration instead of healthy competition.
Such reliance, it said, could create new avenues for systemic risk and make the financial system more susceptible to cyber-attacks.
Despite the caution, the report acknowledged that AI is bringing tangible advantages, such as greater modelling accuracy, improved risk management and increased productivity. It also noted that AI could help strengthen cyber resilience rather than weaken it.
The analysis was published just ahead of the central bank’s twice-yearly Financial Stability Report, scheduled for release on Wednesday.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has admitted in court that it can use website content to train AI features in its search products, even when publishers have opted out of such training.
Although Google offers a way for sites to block their data from being used by its AI lab, DeepMind, the company confirmed that its broader search division can still use that data for AI-powered tools like AI Overviews.
A practice like this has raised concern among publishers, who face reduced traffic as Google’s AI summarises answers directly at the top of search results, diverting users from clicking through to original sources.
Eli Collins, a vice-president at Google DeepMind, acknowledged during a Washington antitrust trial that Google’s search team could train AI using data from websites that had explicitly opted out.
The only way for publishers to fully prevent their content from being used in this way is by opting out of being indexed by Google Search altogether—something that would effectively make them invisible on the web.
Google’s approach relies on the robots.txt file, a standard that tells search bots whether they are allowed to crawl a site.
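As an illustration of how that split plays out in practice, a hypothetical robots.txt (using the user-agent tokens Google publicly documents) can block the Google-Extended token, which governs AI-training crawls, while still allowing Googlebot to index the site for Search; under the policy described at trial, content indexed this way could still surface in search-side AI features such as AI Overviews:

```
# Allow normal indexing by Google Search
User-agent: Googlebot
Allow: /

# Opt out of AI-training crawls via the Google-Extended token
User-agent: Google-Extended
Disallow: /
```

The only stricter option, as the testimony noted, is disallowing Googlebot itself, which removes the site from Search entirely.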
The trial is part of a broader effort by the US Department of Justice to address Google’s dominance in the search market, which a judge previously ruled had been unlawfully maintained.
The DOJ is now asking the court to impose major changes, including forcing Google to sell its Chrome browser and stop paying to be the default search engine on other devices. These changes would also apply to Google’s AI products, which the DOJ argues benefit from its monopoly.
Testimony also revealed internal discussions at Google about how using extensive search data, such as user session logs and search rankings, could significantly enhance its AI models.
Although no model was confirmed to have been built using that data, court documents showed that top executives like DeepMind CEO Demis Hassabis had expressed interest in doing so.
Google’s lawyers have argued that competitors in AI remain strong, with many relying on direct data partnerships instead of web scraping.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft CEO Satya Nadella revealed that AI now writes between 20% and 30% of the company’s internal code.
He shared this figure during a fireside conversation with Meta CEO Mark Zuckerberg at the recent LlamaCon conference. Nadella added that AI-generated output varies by programming language.
Nadella’s comments came in response to a question from Zuckerberg, who admitted he didn’t know the figure for Meta. Google’s CEO Sundar Pichai recently reported similar figures, saying AI now generates over 30% of Google’s code.
Despite these bold claims, there’s still no industry-wide standard for measuring AI-written code. The ambiguity suggests such figures should be interpreted cautiously. Nevertheless, the trend highlights the growing impact of generative AI on software development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta Platforms has launched a dedicated AI assistant app powered by its open-source Llama 4 language model, stepping up efforts to compete with leading chatbot providers like OpenAI.
Unlike typical AI chat tools, Meta AI integrates personal data from the company’s popular platforms, including Facebook, Instagram, WhatsApp and Messenger, to deliver more tailored responses.
According to Meta, the assistant can remember details users choose to share and adapt its replies based on individual preferences and behaviours across its services.
The personalised functionality is currently limited to users in the United States and Canada. The launch coincides with Meta’s first LlamaCon event, held on 29 April at its California headquarters.
CEO Mark Zuckerberg has committed up to $65 billion in capital expenditure to strengthen the company’s AI infrastructure. He believes Meta AI will become the world’s most widely used assistant by 2025, potentially reaching more than 1 billion users.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Duolingo has come under fire after CEO Luis von Ahn announced the company is transitioning to an ‘AI-first’ model, with plans to replace certain human roles with AI.
In a lengthy email and LinkedIn post, the CEO argued that AI is essential to scale content creation and build new features like video calls. He stated that relying on manual processes is unsustainable and that embracing AI now will help Duolingo stay competitive and better deliver on its educational mission.
The company’s plan includes phasing out contractors whose work can be automated and using AI proficiency as a factor in hiring and performance evaluations. Von Ahn acknowledged the changes would require rethinking workflows and, in some cases, rebuilding systems from scratch.
While he reassured employees that Duolingo still values its workforce and wants them focused on creative and meaningful tasks, the announcement has sparked mixed reactions online.
Some users welcomed the bold move, seeing it as a way to push the boundaries of AI and education. Others, however, expressed concern about job losses and the company’s shifting priorities.
Several users threatened to cancel subscriptions or uninstall the app, arguing that prioritising AI over people contradicts Duolingo’s claims of caring about employees.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft has unveiled a set of five digital commitments aimed at supporting Europe’s technological and economic future.
Central to the announcement is a major expansion of its cloud and AI infrastructure, including plans to grow its datacentre capacity by 40% across 16 European countries.
The company says this will help nations strengthen digital sovereignty, boost economic competitiveness and ensure data remains under European jurisdiction.
The company reaffirmed its commitment to EU data privacy laws, expanding its EU Data Boundary and offering customers advanced encryption and control tools.
As geopolitical tensions persist, Microsoft pledges to uphold Europe’s digital resilience and continuity of service. The pledge includes a legally binding Digital Resilience Commitment, European oversight of datacentre operations, and partnerships to ensure operational continuity in the event of disruption.
Cybersecurity remains a core focus, with a new Deputy Chief Information Security Officer for Europe and increased support for compliance with the EU’s evolving regulations.
Microsoft also recommitted to open access principles for AI development and support for local innovation, including open-source ecosystems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.
Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.
Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.
Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.
With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.
At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.
Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.
The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.
Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.
By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.
The rise of Llama: From open-source curiosity to strategic priority
When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened a floodgate of experimentation and grassroots innovation.
Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not merely an update but a strategic inflexion point.
Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.
Zuckerberg’s bet: AI as the engine of Meta’s next chapter
Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.
He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). Zuckerberg envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.
This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.
A costly future: Meta’s massive AI infrastructure investment
Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.
That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.
However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.
Economic clouds: Revenue growth vs Wall Street’s expectations
Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. Wall Street nonetheless reacted negatively: the stock fell nearly 13% after the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.
Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.
A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.
Global headwinds: China, tariffs, and the shifting tech supply chain
Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.
With tariffs rising and Chinese advertising revenue falling, Meta’s international outlook is dimming. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially of GPUs and custom silicon, could derail its training schedules and deployment timelines.
At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.
Llama 4 in context: How it compares to GPT-4 and Gemini
Llama 4 represents a significant leap from Llama 2 and is now comparable to GPT-4 in a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.
However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.
But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.
Open source vs closed AI: Strategic gamble or masterstroke?
Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.
Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.
Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.
Can Meta’s open strategy deliver long-term returns?
Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta chooses a different path based on openness, infrastructure, and community adoption.
The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.
Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.
Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!