Hyperscale data centres planned under Meta and NVIDIA deal

Meta announced a multiyear partnership with NVIDIA to build large-scale AI infrastructure across on-premises and cloud systems. Plans include hyperscale data centres designed for both training and inference workloads, forming a core part of the company’s long-term AI roadmap.

Deployment will include millions of Blackwell and Rubin GPUs, plus expanded use of NVIDIA CPUs and Spectrum-X networking. According to Mark Zuckerberg, the collaboration is intended to support advanced AI systems and broaden access to high-performance computing capabilities worldwide.

Jensen Huang highlighted the scale of Meta’s AI operations and the role of deep hardware-software integration in improving performance.

Efficiency gains remain a central objective, with Meta increasing the rollout of Arm-based NVIDIA Grace CPUs to improve performance per watt in data centres. Future Vera CPU deployment is being considered to expand energy-efficient computing later in the decade.

Privacy-focused AI development forms another pillar of the partnership. NVIDIA Confidential Computing will first power secure AI features on WhatsApp, with plans to expand across more services as Meta scales AI to billions of users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gabon imposes indefinite social media shutdown over national security concerns

Gabon’s media regulator, the High Authority for Communication (HAC), has announced a nationwide, open-ended suspension of social media, citing online content that it says is fuelling tensions and undermining social cohesion. In a statement, the HAC framed the move as a response to material it described as defamatory or hateful and, in some cases, a threat to national security, telling telecom operators and internet service providers to block access to major platforms.

The regulator pointed to what it called a rise in coordinated cyberbullying and the unauthorised sharing of personal data, saying existing moderation measures were not working and that the shutdown was necessary to stop violations of Gabon’s 2016 Communications Code.

The announcement arrives amid mounting labour pressure. Teachers began a high-profile strike in December 2025 over pay, status and working conditions, and the dispute has become one of the most visible signs of broader public-sector discontent. At the same time, the economic stakes are significant: Gabon had an estimated 850,000 active social media users in late 2025 (around a third of the population), and platforms are widely used for marketing and small-business sales.

Why does it matter?

Governments increasingly treat social media suspensions as a rapid-response tool for ‘public order’, but they also reshape information access, civic debate and commerce, especially in countries where mobile apps are a primary channel for news and income. The current announcement comes at a politically sensitive moment, since Gabon has a precedent here: during the 2023 election period, authorities shut down internet access, citing the need to counter calls for violence and misinformation. Gabon is still in transition after the August 2023 coup, and President Brice Oligui Nguema, who led the takeover, won the subsequent presidential election by a landslide in 2025, consolidating power while facing rising expectations for reform and stability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPS urges stronger safeguards in EU temporary chat-scanning rules

Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.

The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.

The EU institutions are considering a short-term renewal of the interim regime governing the detection of online material linked to child protection.

Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.

The EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved. These include tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.

According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.

The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.

Privacy authorities maintain that targeted tools can be more practical than blanket scanning, which they consider a disproportionate response.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mistral AI expands European footprint with acquisition of Koyeb

Mistral AI has strengthened its position in Europe’s AI sector through the acquisition of Koyeb. The deal forms part of its strategy to build end-to-end capacity for deploying advanced AI systems across European infrastructure.

The company has been expanding beyond model development into large-scale computing. It is currently building new data centre facilities, including a primary site in France and a €1.2 billion facility in Sweden, both aimed at supporting high-performance AI workloads.

The acquisition follows a period of rapid growth for Mistral AI, which reached a valuation of €11.7 billion after investment from ASML. French public support has also played a role in accelerating its commercial and research progress.

Mistral AI now positions itself as a potential European technology champion, seeking to combine model development, compute infrastructure and deployment tools into a fully integrated AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rising DRAM prices push memory to the centre of AI strategy

The cost of running AI systems is shifting towards memory rather than compute, as the price of DRAM has risen sharply over the past year. Efficient memory orchestration is now becoming a critical factor in keeping inference costs under control, particularly for large-scale deployments.

Analysts such as Doug O’Laughlin and Val Bercovici of Weka note that prompt caching is turning into a complex field.

Anthropic has expanded its caching guidance for Claude, with detailed tiers that determine how long data remains hot and how much can be saved through careful planning. The structure enables significant efficiency gains, though each additional token can displace previously cached content.
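To make the mechanics concrete, here is a minimal sketch of prompt caching with Anthropic’s Python SDK: a long, stable system prompt is marked as cacheable so that later calls can reuse it instead of reprocessing it on every request. The model name and reference text below are illustrative placeholders, not details from the article.

```python
# A minimal sketch of prompt caching, assuming Anthropic's Python SDK.
# The long, stable prefix is flagged with cache_control so the provider
# can keep it "hot" and reuse it across calls.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

LONG_REFERENCE_TEXT = "..."  # e.g. documentation reused across many queries

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_REFERENCE_TEXT,
            # Marks this block as cacheable; subsequent calls that repeat
            # the same prefix can read it from the cache instead of
            # paying full input-token cost to reprocess it.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarise the key points."}],
)

# The usage metadata reports cache writes and reads, which is how teams
# can verify that the cached prefix is actually being reused.
print(response.usage)
```

The design point the analysts raise follows directly from this structure: the cache only pays off while the prefix stays byte-identical, so any token appended ahead of it can invalidate previously cached content.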

The growing complexity reflects a broader shift in AI architecture. Memory is being treated as a valuable and scarce resource, with optimisation required at multiple layers of the stack.

Startups such as Tensormesh are already working on cache optimisation tools, while hyperscalers are examining how best to balance DRAM and high-bandwidth memory across their data centres.

Better orchestration should reduce the number of tokens required for queries, and models are becoming more efficient at processing those tokens. As costs fall, applications that are currently uneconomical may become commercially viable.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

China boosts AI leadership with major model launches ahead of Lunar New Year

Leading Chinese AI developers have unveiled a series of advanced models ahead of the Lunar New Year, strengthening the country’s position in the global AI sector.

Major firms such as Alibaba, ByteDance, and Zhipu AI introduced new systems designed to support more sophisticated agents, faster workflows and broader multimedia understanding.

Industry observers also expect an imminent release from DeepSeek, whose previous model disrupted global markets last year.

Alibaba’s Qwen 3.5 model provides improved multilingual support across text, images and video, and enables rapid AI agent deployment rather than relying on slower generation pipelines.

ByteDance followed up with updates to its Doubao chatbot and the second version of its image-to-video tool, SeeDance, which has drawn copyright concerns from the Motion Picture Association due to the ease with which users can recreate protected material.

Zhipu AI expanded the landscape further with GLM-5, an open-source model built for long-context reasoning, coding tasks, and multi-step planning. The company highlighted the model’s reliance on Huawei hardware as part of China’s efforts to strengthen domestic semiconductor resilience.

Meanwhile, excitement continues to build for DeepSeek’s fourth-generation system, expected to follow the widespread adoption and market turbulence associated with its V3 model.

Authorities across parts of Europe have restricted the use of DeepSeek models in public institutions because of data security and cybersecurity concerns.

Even so, the rapid pace of development in China suggests intensifying competition in the design of agent-focused systems capable of managing complex digital tasks without constant human oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ireland’s DPC opens data privacy probe into X’s Grok

Ireland’s Data Protection Commission (DPC) has opened a formal investigation into X, focusing on whether the platform complied with its EU privacy obligations after users reportedly generated and shared sexualised, AI-altered images using Grok, the chatbot integrated into X. The inquiry will examine how EU users’ personal data was processed in connection with this feature, under Ireland’s Data Protection Act and the GDPR framework.

The controversy centres on prompts that can ‘edit’ real people’s photos, sometimes producing non-consensual sexualised imagery, with allegations that some outputs involve children. The DPC has said it has been engaging with X since the reports first emerged and has now launched what it describes as a large-scale inquiry into the platform’s compliance with core GDPR duties.

Public and political reaction has intensified as examples circulated of users altering images posted by others without consent, including ‘undressing’ edits. Child-safety concerns have widened the issue beyond platform moderation into questions of legality, safeguards, and accountability for generative tools embedded in mass-use social networks.

X has said it has introduced restrictions and safety measures around Grok’s image features, but regulators appear unconvinced that guardrails are sufficient when tools can be repurposed for non-consensual sexual content at scale. The DPC’s inquiry will test, in practical terms, whether a platform can roll out powerful image-generation/editing functions while still meeting the EU privacy requirements for lawful processing, risk management, and protection of individuals.

Why does it matter?

The DPC is Ireland’s national data protection authority, but it also operates within the EU’s GDPR system as part of the network of EU/EEA regulators (the ‘supervisory authorities’). The DPC’s probe lands on top of a separate European Commission investigation launched in January under the EU’s Digital Services Act, after concerns that Grok-fuelled deepfakes on X included manipulated sexually explicit images that ‘may amount to child sexual abuse material’, and questions about whether X properly assessed and mitigated those risks before deployment. Together, the two tracks show how the EU is using both privacy law (GDPR) and platform safety rules (DSA) to pressure large platforms to prove that ‘generative’ features are not being shipped faster than the safeguards needed to prevent serious harm, especially when women and children are the most likely targets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parliament halts built-in AI tools on tablets and other devices over data risks

The European Parliament has disabled built-in AI features on tablets issued to lawmakers, citing cybersecurity and data protection risks. An internal email states that writing assistants, summarisation tools, and enhanced virtual assistants were turned off after security assessments.

Officials said some AI functions on tablets rely on cloud processing for tasks that could be handled locally, potentially transmitting data off the device. A review is underway to clarify how much information may be shared with service providers.

Only pre-installed AI tools were affected, while third-party apps remain available. Lawmakers were advised to review AI settings on personal devices, limit app permissions, and avoid exposing work emails or documents to AI systems.

The step reflects wider European concerns about digital sovereignty and reliance on overseas technology providers. US legislation, such as the CLOUD Act, allows authorities to access data held by American companies, raising cross-border data protection questions.

Debate over AI security is intensifying as institutions weigh innovation against the risks of remote processing and granular data access. Parliament’s move signals growing caution around handling sensitive information in cloud-based AI environments.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform designed to let users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses that could let a malicious actor breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
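The report does not publish technical specifics, but the safeguards it names can be illustrated with a short, hypothetical sketch: a request handler that authenticates the caller, checks project ownership before any write, and validates the request body. All routes, names and data structures below are invented for illustration and do not describe Orchids’ actual code.

```python
# A hypothetical sketch of the baseline controls the experts describe:
# authentication, a permission check, and input validation on every write.
from flask import Flask, abort, request

app = Flask(__name__)

SESSIONS = {"token-abc": "alice"}       # session token -> user id (illustrative)
PROJECT_OWNERS = {"proj-1": "alice"}    # project id -> owning user (illustrative)

def current_user() -> str:
    """Resolve the caller from a bearer token; reject unauthenticated requests."""
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    user = SESSIONS.get(token)
    if user is None:
        abort(401)  # authentication failure: unknown or missing token
    return user

@app.route("/projects/<project_id>/code", methods=["PUT"])
def update_code(project_id: str):
    user = current_user()
    # Permission check: only the project owner may modify its code.
    if PROJECT_OWNERS.get(project_id) != user:
        abort(403)
    body = request.get_json(silent=True)
    # Input validation: insist on the expected shape before storing anything.
    if not isinstance(body, dict) or not isinstance(body.get("source"), str):
        abort(400)
    # ... persist body["source"] for project_id ...
    return {"status": "ok"}
```

The point of the sketch is that each check runs server-side on every request; skipping any one of them reproduces the class of flaw the researcher demonstrated.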

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!