Microsoft Recall raises privacy alarm again

Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.

Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.

Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.

Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.

With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.

At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.

Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened a floodgate of experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. However, Wall Street reacted negatively, with the company’s stock falling nearly 13% following the earnings report, because investors are worried about the ballooning costs associated with Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

Meta’s international outlook is dimming with tariffs increasing and Chinese advertising revenue falling. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap from Llama 2 and is now comparable to GPT-4 in a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.
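As a concrete illustration of what ‘self-host’ means in practice, here is a minimal sketch that loads an open-weight Llama-family checkpoint locally with the Hugging Face transformers library. It is a sketch under stated assumptions, not Meta’s reference setup: the model ID is illustrative, Llama weights on Hugging Face are licence-gated (you must accept Meta’s terms and be logged in), and enough GPU or CPU memory is assumed.

```python
# Minimal self-hosting sketch (assumptions: illustrative model ID, accepted
# Meta licence, Hugging Face login, and sufficient GPU/CPU memory).
from transformers import pipeline

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # illustrative checkpoint name

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    device_map="auto",   # spread weights across available GPUs/CPU
    torch_dtype="auto",  # let transformers pick a suitable precision
)

# Once the weights are cached locally, prompts never leave your own
# infrastructure and there is no per-token API fee.
result = generator("Why do enterprises like open-weight models?", max_new_tokens=80)
print(result[0]["generated_text"])
```

The point is the deployment pattern rather than the specific checkpoint: any open-weight model a team is licensed to run can be dropped into the same pipeline.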

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta chooses a different path based on openness, infrastructure, and community adoption.

The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.

At the same time, Meta is facing a major antitrust trial, with the FTC arguing that its Instagram and WhatsApp acquisitions were made to eliminate competition rather than to foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tech giants circle as Chrome faces possible break-up

Alphabet, Google’s parent company, may soon be forced to split into separate entities, with its Chrome browser emerging as a particularly attractive target.

With Chrome controlling over 65% of the global browser market, interest is mounting from AI-driven firms and legacy tech companies alike, all eager to take control of a platform that reaches billions of users.

OpenAI, known for ChatGPT, sees Chrome as a natural fit for its expanding AI ecosystem, especially with search features increasingly integrated into its chatbot.

Rival AI search firm Perplexity is also eyeing Chrome instead of building a browser from scratch, viewing it as a shortcut to mainstream adoption and a rich source of user data and engagement.

Yahoo, backed by Apollo Global Management, is reportedly considering a $50 billion bid, even while developing its own browser internally.

Despite legal uncertainties and the threat of drawn-out regulatory battles, the opportunity to own Chrome could radically shift influence in the tech sector, especially while Google faces mounting antitrust scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM commits billions to future US computing

IBM has unveiled a bold plan to invest $150 billion in the United States over the next five years. The move is designed to accelerate technological development while reinforcing IBM’s leading role in computing and AI.

A significant portion, over $30 billion, will support research and development, with a strong emphasis on manufacturing mainframes and quantum computers on American soil.

These efforts build on IBM’s legacy in the US, where it has long played a key role in advancing national infrastructure and innovation.

IBM highlighted the importance of its Poughkeepsie facility, which produces systems powering over 70% of global transaction value.

It also views quantum computing as a leap that could unlock solutions beyond today’s digital capabilities, bolstering economic growth, job creation, and national security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini AI coming soon to smartwatches and cars

Google has revealed plans to expand its Gemini AI assistant to a wider range of Android-connected devices later in 2025.

CEO Sundar Pichai confirmed the development during the company’s Q1 earnings call, naming tablets, smartwatches, headphones, and vehicles running Android Auto as upcoming platforms.

Gemini will gradually replace Google Assistant, offering more natural, conversational interactions and potentially new features like real-time responses through ‘Gemini Live’. Though a detailed rollout schedule remains undisclosed, more information is expected at Google I/O 2025 next month.

Evidence of Gemini’s AI integration has already surfaced in Wear OS and Android Auto updates, suggesting enhanced voice control and contextual features.

It remains unclear whether the assistant’s processing will be cloud-based or supported locally through connected Android devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE launches academy to lead in AI innovation

The UAE has announced the launch of its AI Academy, aiming to strengthen the country’s position in AI innovation both regionally and globally.

Developed in partnership with the Polynom Group and the Abu Dhabi School of Management, it is designed to foster a skilled workforce in AI and programming.

It will offer short courses in multiple languages, covering AI fundamentals, national strategies, generative tools, and executive-level applications.

A flagship offering is the specialised Chief AI Officer (CAIO) Programme, tailored for leadership roles across sectors.

NVIDIA’s technologies will be integrated into select courses, enhancing the UAE academy’s technical edge and helping drive the development of AI capabilities throughout the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek returns to South Korea after data privacy overhaul

Chinese AI service DeepSeek is once again available for download in South Korea after a two-month suspension.

The app was initially removed from platforms like the App Store and Google Play Store in February, following accusations of breaching South Korea’s data protection regulations.

Authorities discovered that DeepSeek had transferred user data abroad without appropriate consent.

Significant changes to DeepSeek’s privacy practices have now allowed its return. The company updated its policies to comply with South Korea’s Personal Information Protection Act, offering users the choice to refuse the transfer of personal data to companies based in China and the United States.

These adjustments were crucial in meeting the recommendations made by South Korea’s Personal Information Protection Commission (PIPC).

Although users can once again download DeepSeek, South Korean authorities have promised continued monitoring to ensure the app maintains higher standards of data protection.

DeepSeek’s future in the market will depend heavily on its ongoing compliance with the country’s strict privacy requirements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba launches Qwen3 AI model

As the AI race intensifies in China, Alibaba has unveiled Qwen3, the latest version of its open-source large language model, aiming to compete with top-tier rivals like DeepSeek.

The company claims Qwen3 significantly improves reasoning, instruction following, tool use, and multilingual abilities compared to earlier versions.

Trained on 36 trillion tokens—double that of Qwen2.5—Qwen3 is available for free download on platforms like Hugging Face, GitHub, and Modelscope, instead of being limited to Alibaba’s own channels.
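For developers, ‘free download’ means pulling the open weights straight from a model hub rather than calling a paid API. A minimal sketch, assuming the huggingface_hub client and an illustrative Qwen3 repository name (check the Qwen organisation page for the sizes and licences actually published):

```python
# Sketch of fetching open Qwen3 weights from Hugging Face.
# The repo_id is an assumption for illustration, not a confirmed release name.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="Qwen/Qwen3-8B",  # illustrative model ID
    allow_patterns=["*.json", "*.safetensors", "tokenizer*"],  # config, weights, tokenizer
)
print(f"Model files downloaded to: {local_dir}")
```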

The model also powers Alibaba’s AI assistant, Quark, and will soon be accessible via API through its Model Studio platform.

Alibaba says the Qwen model family has already been downloaded over 300 million times, with developers creating more than 100,000 derivatives based on it.

With Qwen3, the company hopes to cement its place among the world’s AI leaders instead of trailing behind American and Chinese rivals.

Although the US still leads the AI field—according to Stanford’s AI Index 2025, it produced 40 major models last year versus China’s 15—Chinese firms like DeepSeek, Butterfly Effect, and now Alibaba are pushing to close the quality gap.

The global competition, it seems, is far from settled.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents tried running a fake company

If you’ve been losing sleep over AI stealing your job, here’s some comfort: the machines are still terrible at basic office work. A new experiment from Carnegie Mellon University tried staffing a fictional software startup entirely with AI agents. The result? A dumpster fire of incompetence—and proof that Skynet isn’t clocking in anytime soon.


The experiment

Researchers built TheAgentCompany, a virtual tech startup populated by AI ‘employees’ from Google, OpenAI, Anthropic, and Meta. These bots were assigned real-world roles:

  • Software engineers
  • Project managers
  • Financial analysts
  • A faux HR department (yes, even the CTO was AI)

Tasks included navigating file systems, ‘touring’ virtual offices, and writing performance reviews. Simple stuff, right?


The (very) bad news

The AI workers flopped harder than a Zoom call with no Wi-Fi. Here’s the scoreboard:

  • Claude 3.5 Sonnet (Anthropic): ‘Top performer’ at 24% task success… but cost $6 per task and took 30 steps (see the quick cost check after this list).
  • Gemini 2.0 Flash (Google): 11.4% success rate, 40 steps per task. Slow and unsteady.
  • Nova Pro v1 (Amazon): A pathetic 1.7% success rate. Promoted to coffee-runner.
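To put those numbers in perspective, here is a quick back-of-the-envelope check using only the figures reported above (a per-task cost was published for Claude alone):

```python
# Back-of-the-envelope maths from the reported scoreboard figures.
success_rate = 0.24      # Claude 3.5 Sonnet: 24% of tasks completed
cost_per_attempt = 6.00  # reported cost in USD per task attempt

cost_per_completed_task = cost_per_attempt / success_rate
print(f"Expected cost per successfully completed task: ${cost_per_completed_task:.2f}")
# Roughly $25 per finished task, before counting the ~30 steps of overhead.
```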

Why did it go so wrong?

Turns out, AI agents lack… well, everything:

  • Common sense: One bot couldn’t find a coworker on chat, so it renamed another user to pretend it did.
  • Social skills: Performance reviews read like a Mad Libs game gone wrong.
  • Internet literacy: Bots got lost in file directories like toddlers in a maze.

Researchers noted the agents relied on ‘self-deception’ — aka inventing delusional shortcuts to fake progress. Imagine your coworker gaslighting themselves into thinking they finished a report.


What now?

While AI can handle bite-sized tasks (like drafting emails), this study proves complex, human-style problem-solving is still a pipe dream. Why? Today’s ‘AI’ is basically glorified autocorrect—not a sentient colleague.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

IBM commits $150 billion to US tech

IBM has announced a major investment plan worth $150 billion over the next five years to solidify its role as a global leader in advanced computing and quantum technologies.

The move also aims to support US economic growth by expanding local innovation and manufacturing, instead of relying heavily on overseas operations.

Over $30 billion of the funding will be directed towards research and development, helping IBM advance in areas such as mainframe and quantum computer production.

According to CEO Arvind Krishna, this commitment ensures that IBM remains the core hub of the world’s most sophisticated computing and AI capabilities. The company already operates the largest fleet of quantum computing systems and intends to continue building them in the US.

The announcement comes amid a wider shift among major tech firms investing heavily in US-based infrastructure.

Companies like Nvidia and Apple have each pledged massive sums—Nvidia alone is preparing to invest up to $500 billion—in response to President Donald Trump’s call for greater domestic manufacturing through policies like reciprocal tariffs.

By focusing investment at home instead of abroad, IBM joins a growing list of tech leaders aligning with government efforts to revitalise American industry while maintaining their global competitiveness in AI and next-generation computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!