Microsoft expands Sovereign Cloud with secure offline support for large AI models

Digital sovereignty is gaining urgency as organisations seek infrastructure that remains secure and reliable under strict regulatory conditions.

Microsoft is expanding its Sovereign Cloud to help public bodies, regulated industries and enterprises maintain control of data and operations even when environments must operate without external connectivity.

The updated portfolio allows customers to choose how each workload is governed, rather than relying on a single deployment model.

Azure Local now supports disconnected operations, keeping mission-critical systems running with full Azure governance within sovereign boundaries. Management, policies and workloads stay entirely on site, so services continue during periods of isolation.

Microsoft 365 Local extends that resilience to the productivity layer, enabling Exchange Server, SharePoint Server and Skype for Business Server to run locally so teams can collaborate securely within the same protected boundary as their infrastructure.

Support for large multimodal AI models is delivered through Foundry Local, which enables advanced inference on customer-controlled hardware using technology from partners such as NVIDIA.

Such an approach helps organisations bring modern AI capabilities into highly restricted environments while preserving control over data, identities and operational procedures.

Microsoft positions the portfolio as a unified stack that works across connected, hybrid and fully disconnected modes without increasing operational complexity.

These additions create a framework designed for governments and regulated industries that regard sovereignty as a strategic priority.

With global availability for qualified customers, the Sovereign Cloud aims to preserve continuity, reinforce governance and expand AI capability while keeping every layer of the environment within local control.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Enterprises rethink cloud amid digital sovereignty push

Digital sovereignty has moved to the boardroom as geopolitical tensions rise and cloud adoption accelerates. Organisations are reassessing infrastructure to protect autonomy, ensure compliance, and manage jurisdictional risk. Cloud strategy is increasingly shaped by data location, control, and resilience.

Regulations such as NIS2, DORA, and national data laws have intensified scrutiny of cross-border dependencies. Sovereignty concerns now extend beyond governments to sectors such as healthcare and finance. Vendor selection increasingly prioritises sovereign regions and stricter data controls.

Hybrid cloud remains dominant. Organisations place sensitive workloads on private platforms to strengthen oversight while retaining public cloud innovation. Large-scale repatriation is rare due to cost and complexity, though compliance pressures are driving broader multicloud diversification.

Government investment and oversight are reinforcing the shift. Sovereignty is becoming part of national resilience policy, prompting stricter audits and governance expectations. Enterprises face growing pressure to demonstrate control over critical systems, supply chains, and data flows.

A pragmatic approach, often described as minimum viable sovereignty, helps reduce exposure without unnecessary complexity. Organisations can identify critical workloads, secure enforceable vendor commitments, and plan for disruption. Early adaptation supports resilience and long-term flexibility.


Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions of real people and ensure that individuals’ privacy is protected as required under frameworks such as the General Data Protection Regulation.


OCC approval moves Crypto.com closer to US trust bank

Crypto.com has secured conditional approval from the Office of the Comptroller of the Currency to move ahead with plans to launch a federally regulated national trust bank in the United States.

Approval marks a notable step in the firm’s regulatory roadmap. It also signals continued alignment with US supervisory expectations as the digital asset sector seeks deeper integration with traditional financial infrastructure.

Plans focus on establishing Foris Dax National Trust Bank. The entity is designed to provide a consolidated suite of services, including digital asset custody, staking across multiple blockchain ecosystems such as Cronos, and trade settlement.

Full approval would place the entity under direct federal oversight, positioning it to serve institutional clients that require qualified custodians operating within a clear regulatory perimeter.

Leadership described the decision as recognition of its compliance and risk management framework. Executives said the structure would offer institutions a single regulated gateway to digital asset infrastructure and strengthen market confidence.

Existing operations at Crypto.com Custody Trust Company in New Hampshire will continue without interruption. Final authorisation will determine the timeline for launching the national trust bank and expanding federally supervised US services.


OpenAI faces legal action in South Korea from top networks

South Korea’s leading terrestrial broadcasters have filed a lawsuit against OpenAI, claiming that the company trained its ChatGPT model using their news content without permission. KBS, MBC, and SBS are seeking an injunction to halt the alleged infringement and to recover damages.

The Korea Broadcasters Association said OpenAI generates significant revenue from its GPT services and has licensing agreements with media organisations worldwide.

Despite this, the company has refused to negotiate with the South Korean networks, leaving them without recourse to ensure proper use of their content.

The lawsuit emphasises the protection of intellectual property and creators’ rights, arguing that domestic copyright holders face high legal costs and barriers when confronting global technology companies. It also raises broader questions about South Korea’s data sovereignty in the age of AI.

Earlier action against Naver set a precedent for copyright enforcement in AI applications.

Although KBS subsequently partnered with Naver for AI-driven media solutions, the current case underscores continuing disputes over lawful access to broadcast content for generative AI training.


EU DSA fine against X heads to court in key test case

X Corp., owned by Elon Musk, has filed an appeal with the General Court of the European Union against a €120 million fine imposed by the European Commission for breaching the Digital Services Act. The penalty, issued in December, marks the first enforcement action under the 2022 law.

The Commission concluded that X violated transparency obligations and misled users through its verification design, arguing that paid blue checkmarks made it harder to assess account authenticity. Officials also cited concerns about advertising transparency and researchers’ access to platform data.

Henna Virkkunen, the EU’s executive vice-president for tech sovereignty, security, and democracy, said deceptive verification and opaque advertising had no place online. The Commission opened its probe in December 2023, examining risk management, moderation practices, and alleged dark patterns.

X Corp. argued that the decision followed an incomplete investigation and a flawed reading of the DSA, citing procedural errors and due-process concerns. It said the appeal could shape future enforcement standards and penalty calculations under the regulation.

The EU is also assessing whether X mitigated systemic risks, including deepfake content and child sexual abuse material linked to its Grok chatbot. US critics describe DSA enforcement as a threat to free speech, while EU officials say it strengthens accountability across the digital single market.


EU–US draft data pact allows automated decisions on travellers

A draft data-sharing agreement between the EU and the US Department of Homeland Security would allow automated decisions about European travellers to continue under certain conditions, despite attempts to tighten protections.

The text permits such decisions when authorised under domestic law and relies on safeguards that let individuals request human intervention instead of leaving outcomes entirely to algorithms.

A deal designed to preserve visa-free travel would require national authorities to grant access to biometric databases containing fingerprints and facial scans.

Negotiators are attempting to reconcile the framework with the General Data Protection Regulation, even though the draft states that the new rules would supplement and supersede earlier bilateral arrangements.

Sensitive information, including political views, trade union membership and biometric identifiers, could be transferred as long as protective conditions are applied.

EU countries face a deadline at the end of 2026 to conclude individual agreements, and failure to do so could result in suspension from the US Visa Waiver Program.

A separate clause keeps disputes firmly outside judicial scrutiny by requiring disagreements to be resolved through a Joint Committee instead of national or international courts.

The draft also restricts onward sharing, obliging US authorities to seek explicit consent before passing European-supplied data to third parties.

Further negotiations are expected, with the European Parliament’s Committee on Civil Liberties, Justice and Home Affairs preparing to hold a closed-door review of the talks.


EU drops revised GDPR personal data definition amid regulatory pressure

Governments across the EU have withdrawn the revised definition of personal data from the GDPR omnibus package, softening earlier proposals that had prompted strong resistance from regulators and civil society.

The decision signals a preference for maintaining the original scope of the General Data Protection Regulation rather than reopening sensitive debates that risked weakening long-standing protections.

Greater attention is now placed on the forthcoming pseudonymisation guidelines prepared by the European Data Protection Board. These guidelines are expected to shape how organisations interpret key safeguards, offering practical direction instead of altering the legal definition of personal data.

The greater prominence given to the guidance reflects a broader trend within the Council towards regulatory clarity rather than legislative redesign.

The compromise text also maintains links with the wider review of the ePrivacy Directive, keeping future updates aligned with existing digital-rights rules.

Member states appear increasingly cautious about reopening foundational privacy concepts, opting to strengthen enforcement through guidance and implementation rather than altering core definitions in law.


Turkey reviews children’s data handling as identity checks planned for social platforms

The data protection authority of Turkey has opened a new review into how major social media platforms manage children’s personal data.

The decision places scrutiny on TikTok, Instagram, Facebook, YouTube, X and Discord as Ankara prepares legislation that would expand state authority over digital activity rather than relying on existing rules alone.

Regulators aim to assess safeguards for children and ensure stronger compliance with local standards.

The ruling party is expected to introduce a family package that would require identity verification for every account through phone numbers or the e-Devlet system. Children under 15 would not be allowed to create profiles, and further limits could apply to users under 18.

The proposal would also allow authorities to order the rapid removal of content deemed unlawful without waiting for court approval, while platforms that fail to comply could face penalties such as phased bandwidth reductions.

Rights advocates warn that mandatory verification and broader enforcement powers could reshape online speech across the country. Some argue that linking accounts to verified identities threatens anonymity and could restrict legitimate expression instead of fostering safety.

Turkey has already expanded online oversight since 2016 through laws that increased the government’s ability to block websites, require content removal and oblige major platforms to maintain a legal presence in the country.


Chinese AI video tool unsettles Hollywood

A new AI video model developed by ByteDance has unsettled Hollywood after generating cinema-quality clips from brief text prompts. Seedance 2.0, launched in 2025, went viral for producing realistic action scenes featuring western cinematic characters such as Spider-Man and Deadpool.

In response, major studios, including Disney and Paramount, issued cease and desist letters over alleged copyright infringement. Japan has also begun investigating ByteDance after AI-generated anime videos spread widely online.

Industry experts say Seedance 2.0 stands out for combining text, visuals and audio within a single system. Analysts in Singapore and Melbourne argue that Chinese AI models are now matching US competitors at the technological frontier.

As Seedance 2.0 gains traction, Beijing continues to prioritise AI and robotics in its economic strategy. The rise of tools from China has intensified debate in the US and beyond over copyright, regulation and the future of creative work.
