Microsoft expands Sovereign Cloud with secure offline support for large AI models

Digital sovereignty is gaining urgency as organisations seek infrastructure that remains secure and reliable under strict regulatory conditions.

Microsoft is expanding its Sovereign Cloud to help public bodies, regulated industries and enterprises maintain control of data and operations even when environments must operate without external connectivity.

The updated portfolio allows customers to choose how each workload is governed, rather than relying on a single deployment model.

Azure Local now supports disconnected operations, keeping mission-critical systems running with full Azure governance within sovereign boundaries. Management, policies and workloads stay entirely on site, so services continue during periods of isolation.

Microsoft 365 Local extends this resilience to the productivity layer by enabling Exchange Server, SharePoint Server and Skype for Business Server to run locally, giving teams secure collaboration within the same protected boundary as their infrastructure.

Support for large multimodal AI models is delivered through Foundry Local, which enables advanced inference on customer-controlled hardware using technology from partners such as NVIDIA.

Such an approach helps organisations bring modern AI capabilities into highly restricted environments while preserving control over data, identities and operational procedures.

Microsoft positions the portfolio as a unified stack that works across connected, hybrid and fully disconnected modes without increasing operational complexity.

These additions create a framework designed for governments and regulated industries that regard sovereignty as a strategic priority.

With global availability for qualified customers, the Sovereign Cloud aims to preserve continuity, reinforce governance and expand AI capability while keeping every layer of the environment within local control.

Amazon Ads launches AI tool for professional ad creation

Amazon Ads has unveiled Creative Agent, a new AI-powered tool that enables advertisers in Europe to create professional-quality ads rapidly. The tool handles the entire creative process, from brainstorming and scripts to video, animation, voiceovers, music, and final delivery.

Creative Agent uses Amazon retail insights and customer data to develop ad concepts that align with the brand and engage audiences. Its conversational interface guides users, explains creative choices, and lets them refine visuals, scripts, and audio in real time.

Advertisers can produce multi-format campaigns suitable for Sponsored Brands, Sponsored Display, Amazon DSP, Streaming TV, and Brand Stores.

The tool also manages localisation, cultural nuances, and multi-market campaigns efficiently, allowing mid-market and smaller brands to access capabilities previously reserved for large companies.

Built on AWS and drawing on models including Amazon Nova and Anthropic's Claude, Creative Agent expands Amazon’s AI ad tools, lowering creative barriers and enabling fast experimentation. Early adopters say the platform boosts creative innovation while reducing time and cost across campaigns.

OURA launches AI model tailored to women’s physiology with privacy-first design

Guidance for women’s health is entering a new phase as ŌURA introduces a proprietary large language model designed specifically for reproductive and hormonal wellbeing.

The model sits within Oura Advisor and is available for testing through Oura Labs, drawing on clinical standards, peer-reviewed evidence and biometric signals collected through the Oura Ring to create personalised and context-aware responses.

The system interprets questions through the lens of women’s physiology rather than relying on general-purpose models that miss critical hormonal and life-stage variables.

It supports the full spectrum of reproductive health, from the earliest menstrual patterns to menopause, and is intentionally tuned to be non-dismissive and emotionally supportive.

By combining longitudinal sleep, activity, stress, cycle and pregnancy data with clinician-reviewed research, the model aims to strengthen understanding and preparation ahead of medical appointments.

Privacy sits at the centre of the architecture, with all processing hosted on infrastructure controlled entirely by the company. Conversations are neither shared nor sold, reflecting ŌURA’s broader push for private AI.

Oura Labs operates as an opt-in experimental environment where new features are tested in collaboration with members who can leave at any time.

Women who take part influence the model’s evolution by contributing feedback that informs future development.

These interactions help refine personalised insights across fertility, cycle irregularities, pregnancy changes and other hormonal shifts, marking a significant step in how the Finland-founded company advances preventive, data-guided care for its global community.

Sony targets AI music copyright use

Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.

According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.
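
The report gives no technical detail about how the comparison works. As a minimal, generic illustration (not Sony’s method), a source-attribution check could embed the generated track and the candidate originals, then turn their similarities into normalised contribution scores. All names and data below are hypothetical placeholders:

```python
# Generic source-attribution sketch, NOT Sony's actual system.
# Assumes each track has already been reduced to a fixed-length embedding;
# random vectors stand in for real audio features here.
import numpy as np

def contribution_scores(generated: np.ndarray, originals: dict) -> dict:
    """Score how similar a generated track is to each original, normalised to sum to 1."""
    sims = {}
    for title, emb in originals.items():
        cos = float(np.dot(generated, emb) /
                    (np.linalg.norm(generated) * np.linalg.norm(emb)))
        sims[title] = max(cos, 0.0)  # ignore negative similarity
    total = sum(sims.values()) or 1.0
    return {title: score / total for title, score in sims.items()}

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    catalogue = {f"song_{i}": rng.normal(size=128) for i in range(3)}  # placeholder catalogue
    generated_track = rng.normal(size=128)                             # placeholder output
    print(contribution_scores(generated_track, catalogue))
```

In practice the embeddings would come from an audio model rather than random vectors, and raw similarity scores would need calibration before supporting any claim about how a work contributed to a model’s output.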

Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.

Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US previously filed a copyright infringement lawsuit in 2024 over AI-generated music, underscoring wider tensions around AI and creative rights.

AI-generated film removed from cinemas after public backlash

A prize-winning AI-generated short film has been pulled from cinemas following criticism from audiences. Thanksgiving Day, created by filmmaker Igor Alferov, was due to screen in selected theatres before feature presentations.

Concerns emerged after news of the screening spread online, prompting complaints directed at AMC Theatres. The chain stated it had not programmed the film and that pre-show advertising partner Screenvision Media had arranged the placement.

AMC confirmed it would not participate in the initiative, meaning the AI film will no longer appear in its locations. The animated short, produced using Google’s Gemini 3.1 and Nano Banana Pro tools, had recently won an AI film festival award.

The episode comes amid broader debate about artificial intelligence in Hollywood. Industry insiders suggest studios are quietly increasing AI use in production, even as concerns grow over job losses and economic uncertainty within Los Angeles’ entertainment sector.

Study warns AI chatbots can reinforce delusions and mania

AI chatbots may pose serious risks for people with severe mental illnesses, according to a new study published in Acta Psychiatrica Scandinavica. Researchers found that tools such as ChatGPT can worsen psychiatric conditions by reinforcing users’ delusions, paranoia, mania, suicidal thoughts, and eating disorders.

The team examined health records from more than 54,000 patients and identified dozens of cases where AI interactions appeared to exacerbate symptoms. Experts warn that the actual number of affected individuals is likely far higher.

AI’s design to follow and validate a user’s input can unintentionally strengthen delusional thinking, turning digital assistants into echo chambers for psychosis.

Despite potential benefits for psychoeducation or alleviating loneliness, experts caution against using AI as a substitute for trained therapists. Chatbots should be tested in rigorous clinical trials before any therapeutic use, says Professor Søren Dinesen Østergaard.

The researchers urge healthcare providers to discuss AI chatbot use with patients, particularly those with severe mental illnesses, and call for central regulation of the technology. They argue that lessons from social media show that early oversight is essential to protect vulnerable populations.

AI drives faster modernisation of legacy COBOL systems

Critical to finance, airlines, and government, COBOL handles about 95% of US ATM transactions. Despite its ubiquity, the pool of developers able to read and maintain COBOL is shrinking as seasoned engineers retire and universities offer limited instruction.

Institutional knowledge is now embedded in decades-old code, and documentation often lags.

Modernising COBOL differs from typical software updates. It requires untangling intricate dependencies and reverse-engineering business logic that has evolved over decades.

Traditional modernisation efforts relied on large teams of consultants working for years, resulting in high costs and lengthy timelines. AI tools are changing that paradigm by automating the most labour-intensive tasks.

AI-driven solutions like Claude Code map code dependencies, trace execution paths, document workflows, and identify risks. They provide teams with actionable insights for prioritisation, risk management, and refactoring, dramatically shortening modernisation timelines from years to months.

Human experts remain essential for reviewing AI recommendations, ensuring regulatory compliance, and making strategic decisions about which components to modernise first.

Implementation follows an incremental approach. AI translates COBOL logic into modern languages, creates integration scaffolding, and supports side-by-side operation with legacy components.

Continuous validation at each step reduces risk, allowing teams to build confidence as complex parts of the system are modernised. AI automation combined with expert oversight makes large-scale COBOL modernisation feasible.
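
As a minimal sketch of the side-by-side validation described above, assuming a hypothetical COBOL interest-calculation paragraph translated into Python, a harness might run the legacy output (stubbed here) and the modernised routine against the same records and flag any mismatches:

```python
# Hypothetical sketch of side-by-side validation during incremental COBOL migration.
# Names and business logic are illustrative only; they do not come from the article.
from decimal import Decimal, ROUND_HALF_UP

def modernised_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
    """Python translation of a (hypothetical) COBOL COMPUTE paragraph."""
    monthly = balance * annual_rate / Decimal(12)
    # COBOL PIC 9(7)V99 fields hold two decimals; mirror that rounding here.
    return monthly.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def legacy_interest(balance: Decimal, annual_rate: Decimal) -> Decimal:
    """Stand-in for captured output of the legacy COBOL job (stubbed for this sketch)."""
    return (balance * annual_rate / Decimal(12)).quantize(Decimal("0.01"),
                                                          rounding=ROUND_HALF_UP)

def validate(records):
    """Run both implementations on the same records and report any mismatches."""
    mismatches = []
    for balance, rate in records:
        old, new = legacy_interest(balance, rate), modernised_interest(balance, rate)
        if old != new:
            mismatches.append((balance, rate, old, new))
    return mismatches

if __name__ == "__main__":
    sample = [(Decimal("1000.00"), Decimal("0.05")),
              (Decimal("250000.00"), Decimal("0.0375"))]
    print("mismatches:", validate(sample))
```

In a real migration, the legacy figures would come from captured output of the COBOL job rather than a stub, and any mismatches would be triaged by the human reviewers mentioned above before the legacy component is retired.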

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions of people instead of protecting their privacy as required under frameworks such as the General Data Protection Regulation.

OpenAI faces legal action in South Korea from top networks

South Korea’s leading terrestrial broadcasters have filed a lawsuit against OpenAI, claiming that the company trained its ChatGPT model using their news content without permission. KBS, MBC, and SBS are seeking an injunction to halt the alleged infringement and to recover damages.

The Korea Broadcasters Association said OpenAI generates significant revenue from its GPT services and has licensing agreements with media organisations worldwide.

Despite this, the company has refused to negotiate with the South Korean networks, leaving them without recourse to ensure proper use of their content.

The lawsuit emphasises the protection of intellectual property and creators’ rights, arguing that domestic copyright holders face high legal costs and barriers when confronting global technology companies. It also raises broader questions about South Korea’s data sovereignty in the age of AI.

Earlier action against Naver set a precedent for copyright enforcement in AI applications.

Although KBS subsequently partnered with Naver for AI-driven media solutions, the current case underscores continuing disputes over lawful access to broadcast content for generative AI training.

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate Professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight, rather than deployed rapidly without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. When certain age groups, regions or social classes are underrepresented, outcomes can be distorted and systematic errors introduced.

Turkey’s national health database, e-Nabız, provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second pair of eyes in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas rather than leaving clinicians to assess every scan alone.

Korucu said physicians must remain the final decision makers because automation bias could otherwise expose patients to unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.
