Marks & Spencer has confirmed that a cyberattack has disrupted food availability in some stores and forced the temporary shutdown of its online services. The company has not disclosed the nature of the breach, but cybersecurity experts suspect a ransomware attack.
The retailer paused clothing and home orders on its website and app after issues arose over the Easter weekend, affecting contactless payments and click-and-collect systems. M&S said it took some systems offline as a precautionary measure.
Reports have linked the incident to the hacking group Scattered Spider, although M&S has declined to comment further or give a timeline for resuming online orders. The disruption has already caused minor product shortages, and analysts anticipate a short-term hit to profits.
Still, M&S’s food division had been performing strongly, with grocery spending rising 14.4% year-on-year, according to Kantar. The retailer, which operates around 1,000 UK stores, generates about one-third of its non-food sales online. Shares dropped earlier in the week but closed slightly up on Tuesday.
Meta is introducing facial recognition tools to help UAE users recover hacked accounts on Facebook and Instagram and stop scams that misuse public figures’ images. The technology compares suspicious ads to verified profile photos and removes them automatically if a match is found.
Well-known individuals in the region are automatically enrolled in the programme but can opt out if they choose. A new video selfie feature has also been rolled out to help users regain access to compromised accounts.
This allows identity verification through a short video matched with existing profile photos, offering a faster and more secure alternative to document-based checks.
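Meta has not published implementation details, but systems like this typically reduce each face to a numeric embedding and compare embeddings by similarity. The short Python sketch below shows that general pattern only; the 512-dimensional vectors, the cosine measure, and the 0.8 threshold are illustrative stand-ins for whatever Meta actually uses.

```python
# Illustrative sketch of embedding-based face matching. The embeddings
# here are random stand-ins; a real pipeline would obtain them from a
# trained face-recognition model applied to the two images.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(profile_emb: np.ndarray, ad_emb: np.ndarray,
                   threshold: float = 0.8) -> bool:
    """Flag a match if the face in a suspicious ad is close enough, in
    embedding space, to a verified profile photo. Threshold is illustrative."""
    return cosine_similarity(profile_emb, ad_emb) >= threshold

rng = np.random.default_rng(0)
profile = rng.normal(size=512)                  # embedding of the verified photo
ad = profile + rng.normal(scale=0.1, size=512)  # near-identical face in an ad
print(is_same_person(profile, ad))              # True -> the ad would be removed
```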
Meta confirmed that all facial data used for verification is encrypted, deleted immediately after use, and never repurposed.
The company says the tools are part of a broader effort to fight impersonation scams and protect public figures and regular users alike, in the UAE and beyond.
Meta’s regional director highlighted the emotional and financial harm such scams can cause, reinforcing the need for proactive defences.
France has publicly accused Russia’s military intelligence agency of launching cyberattacks against key French institutions, including the 2017 presidential campaign of Emmanuel Macron and organisations tied to the Paris 2024 Olympics.
The allegations were presented by Foreign Minister Jean-Noël Barrot at the UN Security Council, where he condemned the attacks as violations of international norms. French authorities linked the operations to APT28, a well-known Russian hacking group connected to the GRU.
The group also allegedly orchestrated the 2015 cyberattack on TV5 Monde and attempted to manipulate voters during the 2017 French election by leaking thousands of campaign documents. Attacks have risen ahead of major events such as the Olympics and upcoming elections.
France’s national cybersecurity agency recorded a 15% increase in Russia-linked attacks in 2024, targeting ministries, defence firms, and cultural venues. French officials warn the hacks aim to destabilise society and erode public trust.
France plans closer cooperation with Poland and pledged to counter Russia’s cyber operations with all available means.
Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.
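Conceptually, Recall is a capture-index-search loop: grab the screen, extract its text, and store it in a full-text index. The sketch below illustrates that general pattern only, not Microsoft’s implementation; it assumes the third-party mss (screen capture) and pytesseract (OCR) packages and uses SQLite’s FTS5 index.

```python
# Toy Recall-style pipeline: screenshot -> OCR -> searchable database.
# Illustrative only; assumes `pip install mss pytesseract pillow` and a
# local Tesseract install. Not Microsoft's implementation.
import sqlite3
import time

import mss
import pytesseract
from PIL import Image

db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS captures USING fts5(ts, text)")

def capture_once() -> None:
    """Grab the primary monitor, OCR it, and index the extracted text."""
    with mss.mss() as sct:
        shot = sct.grab(sct.monitors[1])  # primary display
        img = Image.frombytes("RGB", shot.size, shot.rgb)
    db.execute("INSERT INTO captures VALUES (?, ?)",
               (time.ctime(), pytesseract.image_to_string(img)))
    db.commit()

def search(term: str):
    """Full-text search over everything that has ever been on screen."""
    return db.execute("SELECT ts, text FROM captures WHERE captures MATCH ?",
                      (term,)).fetchall()

capture_once()
print(search("password"))  # ephemeral chats included, once rendered
```

Even this toy version makes the concern concrete: anything rendered on screen, however briefly, ends up in a plaintext-searchable store.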
Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.
Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.
Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.
With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.
At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.
Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.
The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.
Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.
‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’
Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.
However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.
The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.
Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.
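To make the ‘whichever is greater’ rule concrete, a quick illustrative calculation with hypothetical turnover figures:

```python
def max_osa_fine(global_turnover_gbp: float) -> float:
    """Online Safety Act ceiling: the greater of £18m or 10% of turnover."""
    return max(18_000_000, 0.10 * global_turnover_gbp)

print(max_osa_fine(50_000_000))     # smaller firm: the £18m floor applies
print(max_osa_fine(5_000_000_000))  # large firm: 10% of turnover = £500m
```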
Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.
His comments contrasted with those of tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.
OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.
The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.
In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.
Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.
It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.
Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.
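OpenAI has not published the revised prompts, but API users can pull the same lever themselves: a system message that explicitly discourages flattery. A minimal sketch using the official openai Python SDK, with hypothetical prompt wording:

```python
# Steering a model away from sycophancy via a system prompt. The wording
# below is a hypothetical example, not OpenAI's actual revision.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": ("Be direct and honest. Do not flatter the user. "
                     "If an idea is flawed or risky, say so plainly and "
                     "explain why before suggesting alternatives.")},
        {"role": "user",
         "content": "I plan to put all my savings into a coin my friend just launched."},
    ],
)
print(response.choices[0].message.content)
```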
Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.
The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.
Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.
By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.
The rise of Llama: From open-source curiosity to strategic priority
When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened the floodgates for experimentation and grassroots innovation.
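That openness is tangible: anyone who accepts Meta’s licence can download the weights and run them on their own hardware. A minimal sketch using the Hugging Face transformers library; the checkpoint named below is one example of a gated Llama release and requires prior access approval on huggingface.co:

```python
# Minimal sketch of running an open-weight Llama model locally via
# Hugging Face transformers (requires the accelerate package for
# device_map="auto"). The model ID is an example of a gated checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # example; access is gated
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain open-weight models in one sentence.",
                   return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```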
Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capability, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.
Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.
Zuckerberg’s bet: AI as the engine of Meta’s next chapter
Mark Zuckerberg has rarely shied away from bold, long-term bets, whether the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.
He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on, a dual play that rivals Microsoft’s partnership with OpenAI.
This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.
A costly future: Meta’s massive AI infrastructure investment
Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.
That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.
However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.
Economic clouds: Revenue growth vs Wall Street’s expectations
Meta reported an 11% year-on-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. Wall Street nevertheless reacted negatively: the stock fell nearly 13% after the earnings report, as investors worried about the ballooning costs of Meta’s AI ambitions.
Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.
A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.
Global headwinds: China, tariffs, and the shifting tech supply chain
Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.
Meta’s international outlook is dimming as tariffs increase and Chinese advertising revenue falls. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially of GPUs and custom silicon, could derail its training schedules and deployment timelines.
At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.
Llama 4 in context: How it compares to GPT-4 and Gemini
Llama 4 represents a significant leap over earlier Llama generations and is now comparable to GPT-4 on a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.
However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.
But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.
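In practice, self-hosting often means putting the weights behind a local, OpenAI-compatible endpoint so that requests never leave your own infrastructure. A sketch assuming a server such as vLLM is already running locally; the URL, key, and model name are placeholders:

```python
# Querying a self-hosted, OpenAI-compatible endpoint, e.g. one started
# with `vllm serve meta-llama/Meta-Llama-3-8B-Instruct`. URL, key, and
# model name are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    messages=[{"role": "user", "content": "Why do open weights matter?"}],
)
print(resp.choices[0].message.content)  # no per-token fees, data stays local
```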
Open source vs closed AI: Strategic gamble or masterstroke?
Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.
Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.
Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.
Can Meta’s open strategy deliver long-term returns?
Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era when AI power is increasingly concentrated and monetised, Meta is choosing a different path, one based on openness, infrastructure, and community adoption.
The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.
Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.
Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.
The UAE has announced the launch of its AI Academy, aiming to strengthen the country’s position in AI innovation both regionally and globally.
Developed in partnership with the Polynom Group and the Abu Dhabi School of Management, it is designed to foster a skilled workforce in AI and programming.
It will offer short courses in multiple languages, covering AI fundamentals, national strategies, generative tools, and executive-level applications.
A flagship offering is the specialised Chief AI Officer (CAIO) Programme, tailored for leadership roles across sectors.
NVIDIA’s technologies will be integrated into select courses, sharpening the academy’s technical edge and helping drive the development of AI capabilities throughout the region.
A new report by Statewatch has revealed that the European Union is quietly laying the groundwork for the widespread use of experimental AI technologies in policing, border control, and criminal justice.
The report warns that these developments pose serious threats to transparency, accountability, and fundamental rights.
Despite the adoption of the EU AI Act in 2024, broad exemptions allow law enforcement and migration agencies to bypass safeguards, including a full exemption for certain high-risk systems until 2031.
Institutions like Europol and eu-LISA are involved in building technical infrastructure for security-focused AI, often without public knowledge or oversight.
The study also highlights how secretive working groups, such as the European Clearing Board, have influenced legislation to favour police interests.
Critics argue that these moves risk entrenching discrimination and reducing democratic control, especially at a time of rising authoritarian influence within EU institutions.
DeepSeek’s app was initially removed from platforms such as the App Store and Google Play Store in South Korea in February, following accusations that it breached the country’s data protection regulations.
Authorities discovered that DeepSeek had transferred user data abroad without appropriate consent.
Significant changes to DeepSeek’s privacy practices have now allowed its return. The company updated its policies to comply with South Korea’s Personal Information Protection Act, offering users the choice to refuse the transfer of personal data to companies based in China and the United States.
These adjustments were crucial in meeting the recommendations made by South Korea’s Personal Information Protection Commission (PIPC).
Although users can once again download DeepSeek, South Korean authorities have promised continued monitoring to ensure the app maintains higher standards of data protection.
DeepSeek’s future in the market will depend heavily on its ongoing compliance with the country’s strict privacy requirements.