NVIDIA invests $2 billion as CoreWeave expands AI factory network

CoreWeave’s long-running partnership has deepened with NVIDIA to accelerate AI infrastructure deployment, including ambitious plans for multi-gigawatt AI factory capacity by 2030.

As part of the agreement, the US company is investing $2 billion in CoreWeave through the purchase of Class A common stock, signalling strong confidence in the company’s growth strategy and AI-focused cloud platform.

Both companies aim to deepen alignment across infrastructure, software and platform development, with CoreWeave building and operating AI factories using NVIDIA’s accelerated computing technologies and early access to upcoming architectures such as Rubin, Vera CPUs and BlueField systems.

The collaboration will also test and integrate CoreWeave’s AI-native software and reference designs into NVIDIA’s broader cloud and enterprise ecosystem, while NVIDIA supports faster site development through financial backing for land and power procurement.

Executives from both firms described the expansion as a response to surging global demand for AI computing, positioning large-scale AI factories as the backbone of future industrial AI deployment.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Non-consensual deepfakes, consent, and power in synthetic media

ΑΙ has reshaped almost every domain of digital life, from creativity and productivity to surveillance and governance.

One of the most controversial and ethically fraught areas of AI deployment involves pornography, particularly where generative systems are used to create, manipulate, or simulate sexual content involving real individuals without consent.

What was once a marginal issue confined to niche online forums has evolved into a global policy concern, driven by the rapid spread of AI-powered nudity applications, deepfake pornography, and image-editing tools integrated into mainstream platforms.

Recent controversies surrounding AI-powered nudity apps and the image-generation capabilities of Elon Musk’s Grok have accelerated public debate and regulatory scrutiny.

grok generative ai safety incident

Governments, regulators, and civil society organisations increasingly treat AI-generated sexual content not as a matter of taste or morality, but as an issue of digital harm, gender-based violence, child safety, and fundamental rights.

Legislative initiatives such as the US Take It Down Act illustrate a broader shift toward recognising non-consensual synthetic sexual content as a distinct and urgent category of abuse.

Our analysis examines how AI has transformed pornography, why AI-generated nudity represents a qualitative break from earlier forms of online sexual content, and how governments worldwide are attempting to respond.

It also explores the limits of current legal frameworks and the broader societal implications of delegating sexual representation to machines.

From online pornography to synthetic sexuality

Pornography has long been intertwined with technological change. From photography and film to VHS tapes, DVDs, and streaming platforms, sexual content has often been among the earliest adopters of new media technologies.

The transition from traditional pornography to AI-generated sexual content, however, marks a deeper shift than earlier format changes.

Conventional online pornography relies on human performers, production processes, and contractual relationships, even where exploitation or coercion exists. AI-generated pornography, instead of depicting real sexual acts, simulates them using algorithmic inference.

Faces, bodies, voices, and identities can be reconstructed or fabricated at scale, often without the knowledge or consent of the individuals whose likenesses are used.

AI nudity apps exemplify such a transformation. These tools allow users to upload images of real people and generate artificial nude versions, frequently marketed as entertainment or novelty applications.

DIPLO AI tools featured image Reporting AIassistant

The underlying technology relies on diffusion models trained on vast datasets of human bodies and sexual imagery, enabling increasingly realistic outputs. Unlike traditional pornography, the subject of the image may never have participated in any sexual act, yet the resulting content can be indistinguishable from authentic photography.

Such a transformation carries profound ethical implications. Instead of consuming representations of consensual adult sexuality, users often engage in simulations of sexual advances on real individuals who have not consented to being sexualised.

Such a distinction between fantasy and violation becomes blurred, particularly when such content is shared publicly or used for harassment.

AI nudity apps and the normalisation of non-consensual sexual content

The recent proliferation of AI nudity applications has intensified concerns around consent and harm. These apps are frequently marketed through euphemistic language, emphasising humour, experimentation, or artistic exploration instead of sexual exploitation.

Their core functionality, however, centres on digitally removing clothing from images of real people.

Regulators and advocacy groups increasingly argue that such tools normalise a culture in which consent is irrelevant. The ability to undress someone digitally, without personal involvement, reflects a broader pattern of technological power asymmetry, where the subject of the image lacks meaningful control over how personal likeness is used.

The ongoing Grok controversy illustrates how quickly the associated harms can scale when AI tools are embedded within major platforms. Reports that Grok can generate or modify images of women and children in sexualised ways have triggered backlash from governments, regulators, and victims’ rights organisations.

65cb63d3d76285ff8805193c blog gen ai deepfake

Even where companies claim that safeguards are in place, the repeated emergence of abusive outputs suggests systemic design failures rather than isolated misuse.

What distinguishes AI-generated sexual content from earlier forms of online abuse lies not only in realism but also in replicability. Once an image or model exists, reproduction can occur endlessly, with the content shared across jurisdictions and recontextualised in new forms. Victims often face a permanent loss of control over digital identity, with limited avenues for redress.

Gendered harm and child protection

The impact of AI-generated pornography remains unevenly distributed. Research and reporting consistently show that women and girls are disproportionately targeted by non-consensual synthetic sexual content.

Public figures, journalists, politicians, and private individuals alike have found themselves subjected to sexualised deepfakes designed to humiliate, intimidate, or silence them.

computer keyboard with red deepfake button key deepfake dangers online

Children face even greater risk. AI tools capable of generating nudified or sexualised images of minors raise alarm across legal and ethical frameworks. Even where no real child experiences physical abuse during content creation, the resulting imagery may still constitute child sexual abuse material under many legal definitions.

The existence of such content contributes to harmful sexualisation and may fuel exploitative behaviour. AI complicates traditional child protection frameworks because the abuse occurs at the level of representation, not physical contact.

Legal systems built around evidentiary standards tied to real-world acts struggle to categorise synthetic material, particularly where perpetrators argue that no real person suffered harm during production.

Regulators increasingly reject such reasoning, recognising that harm arises through exposure, distribution, and psychological impact rather than physical contact alone.

Platform responsibility and the limits of self-regulation

Technology companies have historically relied on self-regulation to address harmful content. In the context of AI-generated pornography, such an approach has demonstrated clear limitations.

Platform policies banning non-consensual sexual content often lag behind technological capabilities, while enforcement remains inconsistent and opaque.

The Grok case highlights these challenges. Even where companies announce restrictions or safeguards, questions remain regarding enforcement, detection accuracy, and accountability.

AI systems struggle to reliably determine whether an image depicts a real person, whether consent exists, or whether local laws apply. Technical uncertainty frequently serves as justification for delayed action.

Commercial incentives further complicate moderation efforts. AI image tools drive user engagement, subscriptions, and publicity. Restricting capabilities may conflict with business objectives, particularly in competitive markets.

As a result, companies tend to act only after public backlash or regulatory intervention, instead of proactively addressing foreseeable harm.

Such patterns have contributed to growing calls for legally enforceable obligations rather than voluntary guidelines. Regulators increasingly argue that platforms deploying generative AI systems should bear responsibility for foreseeable misuse, particularly where sexual harm is involved.

Legal responses and the emergence of targeted legislation

Governments worldwide are beginning to address AI-generated pornography through a combination of existing laws and new legislative initiatives. The Take It Down Act represents one of the most prominent attempts to directly confront non-consensual intimate imagery, including AI-generated content.

The Act strengthens platforms’ obligations to remove intimate images shared without consent, regardless of whether the content is authentic or synthetic. Victims’ rights to request takedowns are expanded, while procedural barriers that previously left individuals navigating complex reporting systems are reduced.

Crucially, the law recognises that harm does not depend on image authenticity, but on the impact experienced by the individual depicted.

Within the EU, debates around AI nudity apps intersect with the AI Act and the Digital Services Act (DSA). While the AI Act categorises certain uses of AI as prohibited or high-risk, lawmakers continue to question whether nudity applications fall clearly within existing bans.

European Commission EU AI Act amendments Digital Omnibus European AI Office

Calls to explicitly prohibit AI-powered nudity tools reflect concern that legal ambiguity creates enforcement gaps.

Other jurisdictions, including Australia, the UK, and parts of Southeast Asia, are exploring regulatory approaches combining platform obligations, criminal penalties, and child protection frameworks.

Such efforts signal a growing international consensus that AI-generated sexual abuse requires specific legal recognition rather than fragmented treatment.

Enforcement challenges and jurisdictional fragmentation

Despite legislative progress, enforcement remains a significant challenge. AI-generated pornography operates inherently across borders. Applications may be developed in one country, hosted in another, and used globally. Content can be shared instantly across platforms, subject to different legal regimes.

Jurisdictional fragmentation complicates takedown requests and criminal investigations. Victims often face complex reporting systems, language barriers, and inconsistent legal standards. Even where a platform complies with local law in one jurisdiction, identical material may remain accessible elsewhere.

Technical enforcement presents additional difficulties. Automated detection systems struggle to distinguish consensual adult content from non-consensual synthetic imagery. Over-reliance on automation risks false positives and censorship, while under-enforcement leaves victims unprotected.

Balancing accuracy, privacy, and freedom of expression remains unresolved.

Broader societal implications

Beyond legal and technical concerns, AI-generated pornography raises deeper questions about sexuality, power, and digital identity.

The ability to fabricate sexual representations of others undermines traditional understandings of bodily autonomy and consent. Sexual imagery becomes detached from lived experience, transformed into manipulable data.

Such shifts risk normalising the perception of individuals as visual assets rather than autonomous subjects. When sexual access can be simulated without consent, the social meaning of consent itself may weaken.

Critics argue that such technologies reinforce misogynistic and exploitative norms, particularly where women’s bodies are treated as endlessly modifiable digital material.

Deepfakes and the AI scam header

At the same time, defenders of generative AI warn of moral panic and excessive regulation. Arguments persist that not all AI-generated sexual content is harmful, particularly where fictional or consenting adult representations are involved.

The central challenge lies in distinguishing legitimate creative expression from abuse without enabling exploitative practices.

In conclusion, we must admit that AI has fundamentally altered the landscape of pornography, transforming sexual representation into a synthetic, scalable, and increasingly detached process.

AI nudity apps and controversies surrounding AI tools demonstrate how existing social norms and legal frameworks remain poorly equipped to address non-consensual synthetic sexual content.

Global responses indicate a growing recognition that AI-generated pornography constitutes a distinct category of digital harm. Regulation alone, however, will not resolve the issue.

Effective responses require legal clarity, platform accountability, technical safeguards, and cultural change, especially with the help of the educational system.

As AI systems become more powerful and accessible, societies must confront difficult questions about consent, identity, and responsibility in the digital age.

The challenge lies not merely in restricting technology, but in defining ethical boundaries that protect our human dignity while preserving legitimate innovation.

In the days, weeks or months ahead, decisions taken by governments, platforms, and communities will shape the future relationship between AI and our precious human autonomy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The EU Commission opens DMA proceedings on Google interoperability and search data

The European Commission has opened two specification proceedings to spell out how Google should meet key obligations under the EU’s Digital Markets Act (DMA), focusing on Android’s AI-related features and access to Google Search data for competitors.

The first proceeding targets the DMA’s interoperability requirement for Android. In practical terms, Brussels wants to clarify how third-party AI services can get access, free and effectively, to the same Android hardware/software functionalities that power Google’s own AI offerings, including Gemini, so that rivals can compete on a more equal footing on mobile devices.

The second proceeding addresses Google’s obligation to provide rival search engines access to anonymised search data (such as ranking, query, click, and view data) on fair, reasonable, and non-discriminatory terms. The Commission is also considering whether AI chatbot providers should qualify for that access, an essential question as ‘search’ increasingly blurs with conversational AI.

These proceedings are designed to define how compliance should work rather than immediately sanction Google. The Commission is expected to wrap them up within six months, with draft measures and preliminary findings shared earlier in the process, and with scope for third-party feedback. A separate non-compliance track could still follow later, and DMA penalties for breaches can reach up to 10% of global turnover.

Google, for its part, says Android is ‘open by design’ and argues it is already licensing Search data, while warning that additional requirements, especially those it views as competitor-driven, could undermine user privacy, security, and innovation.

Why does it matter?

The EU is trying to prevent dominant platforms from turning control over operating systems and data into an ‘unfair advantage’ in the next wave of consumer tech, particularly as AI assistants become built into phones and as search data becomes fuel for competing discovery tools. The move also sits within a broader DMA enforcement push: the Commission has already opened DMA-related proceedings into Alphabet in other areas, signalling that Brussels sees gatekeeper compliance as an ongoing, hands-on exercise rather than a one-off checkbox.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Autonomous AI fails most tasks in virtual company experiment

Researchers at Carnegie Mellon University created a virtual company staffed solely by AI ’employees’ trained on large language models from vendors including Anthropic, OpenAI, and Google, assigning them roles such as financial analyst and software engineer.

In this simulated work environment, the AI agents struggled to complete most tasks, with even the best-performing model only completing about a quarter of its assignments.

The experiment highlighted key weaknesses in current AI systems, including difficulty interpreting nuanced instructions, managing web navigation with pop-ups, and coordinating multi-step workflows without human intervention.

These gaps suggest that human judgement, adaptability and collaboration remain essential in real workplaces for the foreseeable future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK firms prioritise cyber resilience and AI growth

Cybersecurity is set to receive the largest budget increases over the next 12 months, as organisations respond to rising geopolitical tensions and a surge in high-profile cyber-attacks, according to the KPMG Global Tech Report 2026.

More than half of UK firms plan to lift cybersecurity spending by over 10 percent, outpacing global averages and reflecting heightened concern over digital resilience.

AI and data analytics are also attracting substantial investment, with most organisations increasing budgets as they anticipate stronger returns by the end of 2026. Executives expect AI to shift from an efficiency tool to a core revenue driver, signalling a move toward large-scale deployment.

Despite strong investment momentum, scaling remains a major challenge. Fewer than one in 10 organisations report fully deployed AI or cybersecurity systems today, although around half expect to reach that stage within a year.

Structural barriers, fragmented ownership, and unclear accountability continue to slow execution, highlighting the complexity of translating strategy into operational impact.

Agentic AI is emerging as a central focus, with most organisations already embedding autonomous systems into workflows. Demand for specialist AI roles is rising, alongside closer collaboration to ensure secure deployment, governance, and continuous monitoring.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Survey finds Gen Z turns to AI for sexual health questions despite misdiagnoses

According to a January 2026 survey of 2,520 US adults aged 18 to 29, roughly 20 percent of Gen Z have queried AI chatbots about STIs/STDs, and 1 in 10 specifically sought help with diagnosis or suspicion of infection.

Among those who later sought formal medical testing, about 31 percent said the chatbot’s assessment was incorrect, highlighting risks of relying on AI for health diagnostics.

Respondents often shared symptom details and even photos with the bots, and many said they were more comfortable discussing sensitive topics with an AI than with a clinician, despite potential privacy and accuracy limitations.

Medical experts emphasise that while AI can support general health education, these tools are not replacements for clinical diagnosis or professional medical testing, which remain necessary for accurate STI/STD identification and treatment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Aquila transforms warehouse operations using AI automation

Aquila has completed a €5 million investment in AI-driven warehouse automation at its Ilfov, Dragomiresti logistics centre. The project is a strategic response to increasing portfolio complexity and growing distribution volumes in the FMCG sector.

The automation solution is built around AI-based vision systems that identify products directly from images using shape, colour and visual characteristics. The technology removes the need for labels or manual scanning, even when packaging orientation or appearance shows minor variations.

According to the company, the system improves the speed and accuracy of warehouse operations while reducing manual work and optimising storage space. These efficiency gains allow better use of operational resources.

The investment enables Aquila to scale logistics operations without proportional increases in resources. The company reports improved internal efficiency, stronger service quality for customers and the creation of medium-term competitive advantages.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

The UK labour market feels a sharper impact from AI use

Companies are reporting net job losses linked to AI adoption, with research showing a sharper impact than in other major economies. A Morgan Stanley survey found that firms using the technology for at least a year cut more roles than they created, particularly across the UK labour market.

The study covered sectors including retail, real estate, transport, healthcare equipment and automotive manufacturing, showing an average productivity increase of 11.5% among UK businesses. Comparable firms in the United States reported similar efficiency gains but continued to expand employment overall.

Researchers pointed to higher operating costs and tax pressures as factors amplifying the employment impact in Britain. Unemployment has reached a four-year high, while increases in the minimum wage and employer national insurance contributions have tightened hiring across industries.

Public concern over AI-driven displacement is also rising, with more than a quarter of UK workers fearing their roles could disappear within five years, according to recruitment firm Randstad. Younger workers expressed the highest anxiety, while older generations showed greater confidence in adapting.

Political leaders warn that unmanaged AI-driven change could disrupt labour markets. London mayor Sadiq Khan said the technology may cut many white-collar jobs, calling for action to create replacement roles.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ten cybersecurity predictions for 2026 from experts: How AI will reshape cyber risks

Evidence from threat intelligence reporting and incident analysis in 2025 suggests that AI will move from experimental use to routine deployment in malicious cyber operations in 2026. Rather than introducing entirely new threats, AI is expected to accelerate existing attack techniques, reduce operational costs for attackers, and increase the scale and persistence of campaigns.

Security researchers and industry analysts point to ten areas where AI is most likely to reshape the cyber threat landscape over the coming year:

  1. AI-enabled malware is expected to adapt during execution. Threat intelligence reporting indicates that malware using AI models is already capable of modifying behaviour in real time. In 2026, such capabilities are expected to become more common, allowing malicious code to adjust tactics in response to defensive measures.
  2. AI agents are likely to automate key stages of cyberattacks. Researchers expect wider use of agentic AI systems that can independently conduct reconnaissance, exploit vulnerabilities, and maintain persistence, reducing the need for continuous human control.
  3. Prompt injection will be treated as a practical attack technique against AI deployments. As organisations embed AI assistants and agents into workflows, attackers are expected to target the AI layer itself (e.g. through prompt injection, unsafe tool use, and weak guardrails) to trigger unintended actions or expose data.
  4. Threat actors will use AI to target humans at scale. The text emphasises AI-enhanced social engineering: conversational bots, real-time manipulation, and automated account takeover, shifting attacks from isolated human-led attempts to continuous, scalable interaction.
  5. AI will expose APIs as a too-easily-exploited attack surface. The experts argue that AI agents capable of discovering and interacting with software interfaces will lower the barrier to abusing APIs, including undocumented or unintended ones. As agents gain broader permissions and access to cloud services, APIs are expected to become a more frequent point of exploitation and concealment.
  6. Extortion will evolve beyond ransomware encryption. Extortion campaigns are expected to rely less on encryption alone and more on a combination of tactics, including data theft, threats to leak or alter information, and disruption of cloud services, backups, and supply chains.
  7. Cyber incidents will increasingly spread from IT into industrial operations. Ransomware and related intrusions are expected to move beyond enterprise IT systems and disrupt operational technology and industrial control environments, amplifying downtime, supply-chain disruption, and operational impact.
  8. The insider threat will increasingly include imposter employees. Analysts anticipate insider risks will extend beyond malicious or negligent staff to include external actors who gain physical or remote access by posing as legitimate employees, including through hardware implants or direct device access that bypasses end point security.
  9. Nation-state cyber activity will continue to target Western governments and industries. Experts point to continued cyber operations by state-linked actors, including financially motivated campaigns and influence operations, with increased use of social engineering, deception techniques, and AI-enabled tools to scale and refine targeting.
  10. Identity management is expected to remain a primary failure point. The rapid growth of human and machine identities, including AI agents, across SaaS, cloud platforms and third-party environments is likely to reinforce credential misuse as a leading cause of major breaches.

Taken together, these trends suggest that in 2026, cyber risk will increasingly reflect systemic exposure created by the combination of AI adoption, identity sprawl, and interconnected digital infrastructure, rather than isolated technical failures.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

France proposes EU tools to map foreign tech dependence

France has unveiled a new push to reduce Europe’s dependence on US and Chinese technology suppliers, placing digital sovereignty back at the centre of the EU policy debates.

Speaking in Paris, France’s minister for AI and digital affairs, Anne Le Hénanff, presented initiatives to expose and address the structural reliance on non-EU technologies across public administrations and private companies.

Central to the strategy is the creation of a Digital Sovereignty Observatory, which will map foreign technology dependencies and assess organisational exposure to geopolitical and supply-chain risks.

The body, led by former Europe minister Clément Beaune, is intended to provide the evidence base needed for coordinated action rather than symbolic declarations of autonomy.

France is also advancing a Digital Resilience Index, expected to publish its first findings in early 2026. The index will measure reliance on foreign digital services and products, identifying vulnerabilities linked to cloud infrastructure, AI, cybersecurity and emerging technologies.

Industry data suggests Europe’s dependence on external tech providers costs the continent hundreds of billions of euros annually.

Paris is using the initiative to renew calls for a European preference in public-sector digital procurement and for a standard EU definition of European digital services.

Such proposals remain contentious among member states, yet France argues they are essential for restoring strategic control over critical digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!