OURA launches AI model tailored to women’s physiology with privacy-first design

Guidance for women’s health is entering a new phase as ŌURA introduces a proprietary large language model designed specifically for reproductive and hormonal wellbeing.

The model sits within Oura Advisor and is available for testing through Oura Labs, drawing on clinical standards, peer-reviewed evidence and biometric signals collected through the Oura Ring to create personalised and context-aware responses.

The system interprets questions through women’s physiology instead of depending on general-purpose models that miss critical hormonal and life-stage variables.

It supports the full spectrum of reproductive health, from the earliest menstrual patterns to menopause, and is intentionally tuned to be non-dismissive and emotionally supportive.

By combining longitudinal sleep, activity, stress, cycle and pregnancy data with clinician-reviewed research, the model aims to strengthen understanding and preparation ahead of medical appointments.

Privacy forms the centre of the architecture, with all processing hosted on infrastructure controlled entirely by the company. Conversations are neither shared nor sold, reflecting ŌURA’s broader push for private AI.

Oura Labs operates as an opt-in experimental environment where new features are tested in collaboration with members who can leave at any time.

Women who take part influence the model’s evolution by contributing feedback that informs future development.

These interactions help refine personalised insights across fertility, cycle irregularities, pregnancy changes and other hormonal shifts, marking a significant step in how the Finland-founded company advances preventive, data-guided care for its global community.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

CrowdStrike warns of faster AI driven threats

Cyber adversaries increasingly used AI to accelerate attacks and evade detection in 2025, according to CrowdStrike’s 2026 Global Threat Report. The company described the period as the year of the evasive adversary, marked by subtle and rapid intrusions.

The average time to a financially motivated online crime breakout fell to 29 minutes, with the fastest recorded at 27 seconds. CrowdStrike observed an 89 percent rise in attacks by AI-enabled threat actors compared with 2024.

Attackers also targeted AI systems themselves, exploiting GenAI tools at more than 90 organisations through malicious prompt injection. Supply chain compromises and the abuse of valid credentials enabled intrusions to blend into legitimate activity, with most detections classified as malware-free.

China linked activity rose by 38 percent across sectors, while North Korea linked incidents increased by 130 percent. CrowdStrike tracked more than 281 adversaries in total, warning that speed, credential abuse, and AI fluency now define the modern threat landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Sony targets AI music copyright use

Sony Group has developed technology designed to identify the original sources of music generated by AI. The move comes amid growing concern over the unauthorised use of copyrighted works in AI training.

According to Sony Group, the system can extract data from an underlying AI model and compare generated tracks with original compositions. The process aims to quantify how much specific works contributed to the output.

Composers, songwriters and publishers could use the technology to seek compensation from AI developers if their material was used without permission. Sony said the goal is to help ensure creators are properly rewarded.

Efforts to safeguard intellectual property have intensified across the music industry. Sony Music Entertainment in the US previously filed a copyright infringement lawsuit in 2024 over AI-generated music, underscoring wider tensions around AI and creative rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Study warns AI chatbots can reinforce delusions and mania

AI chatbots may pose serious risks for people with severe mental illnesses, according to a new study from Acta Psychiatrica Scandinavica. Researchers found that tools such as ChatGPT can worsen psychiatric conditions by reinforcing users’ delusions, paranoia, mania, suicidal thoughts, and eating disorders.

The team examined health records from more than 54,000 patients and identified dozens of cases where AI interactions appeared to exacerbate symptoms. Experts warn that the actual number of affected individuals is likely far higher.

AI’s design to follow and validate a user’s input can unintentionally strengthen delusional thinking, turning digital assistants into echo chambers for psychosis.

Despite potential benefits for psychoeducation or alleviating loneliness, experts caution against using AI as a substitute for trained therapists. Chatbots should be tested in rigorous clinical trials before any therapeutic use, says Professor Søren Dinesen Østergaard.

The researchers urge healthcare providers to discuss AI chatbot use with patients, particularly those with severe mental illnesses, and call for central regulation of the technology. They argue that lessons from social media show that early oversight is essential to protect vulnerable populations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenClaw users face account suspensions under Google AI rules

Google has suspended access to its Antigravity AI platform for numerous OpenClaw users, citing violations of its terms of service. Developers had used OpenClaw’s OAuth plugin to access subsidised Gemini model tokens, triggering backend strain and service degradation.

OpenClaw, launched in November 2025, gained more than 219,000 GitHub stars by enabling local AI agents for tasks such as email management and web browsing. Users authenticated through Antigravity to access advanced Gemini models at reduced cost, bypassing official distribution channels.

Google said the third-party integration powered non-authorised products on Antigravity infrastructure, triggering usage flagged as malicious. In February 2026, AI Ultra subscribers reported 403 errors and account restrictions, with some citing temporary disruptions to Gmail and Workspace.

Varun Mohan of Google DeepMind said the surge had degraded service quality and that enforcement prioritised legitimate users. Limited reinstatement options were offered to those unaware of violations, while capacity constraints were cited as the reason.

The move follows similar restrictions by Anthropic on third-party OAuth usage. Developers are shifting to alternative forks, as debate intensifies over open tooling, platform control, and the risks of agentic AI ecosystems.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Global privacy regulators warn of rising AI deepfake harms

Privacy regulators from around the world have issued a joint warning about the rise of AI-generated deepfakes, arguing that the spread of non-consensual images poses a global risk instead of remaining a problem confined to individual countries.

Sixty-one authorities endorsed a declaration that draws attention to AI images and videos depicting real people without their knowledge or consent.

The signatories highlight the rapid growth of intimate deepfakes, particularly those targeting children and individuals from vulnerable communities. They note that such material often circulates widely on social platforms and may fuel exploitation or cyberbullying.

The declaration argues that the scale of the threat requires coordinated action rather than isolated national responses.

European authorities, including the European Data Protection Board and the European Data Protection Supervisor, support the effort to build global cooperation.

Regulators say that only joint oversight can limit the harms caused by AI systems that generate false depictions, rather than protecting individuals’ privacy as required under frameworks such as the General Data Protection Regulation.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

Anthropic uncovers large-scale AI model theft operations

Three AI laboratories have been found conducting large-scale illicit campaigns to extract capabilities from Anthropic’s Claude AI, the company revealed.

DeepSeek, Moonshot, and MiniMax used around 24,000 fraudulent accounts to generate more than 16 million interactions, violating terms of service and regional access restrictions. The technique, called distillation, trains a weaker model on outputs from a stronger one, speeding AI development.

Distilled models obtained in this manner often lack critical safeguards, creating serious national security concerns. Without protections, these capabilities could be integrated into military, intelligence, surveillance, or cyber operations, potentially by authoritarian governments.

The attacks also undermine export controls designed to preserve the competitive edge of US AI technology and could give a misleading impression of foreign labs’ independent AI progress.

Each lab followed coordinated playbooks using proxy networks and large-scale automated prompts to target specific capabilities such as agentic reasoning, coding, and tool use.

Anthropic attributed the campaigns using request metadata, infrastructure indicators, and corroborating observations from industry partners. The investigation detailed how distillation attacks operate from data generation to model launch.

In response, Anthropic has strengthened detection systems, implemented stricter access controls, shared intelligence with other labs and authorities, and introduced countermeasures to reduce the effectiveness of illicit distillation.

The company emphasises that addressing these attacks will require coordinated action across the AI industry, cloud providers, and policymakers to protect frontier AI capabilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Medical AI risks in Turkey highlight data bias and privacy challenges

Ankara is seeing growing debate over the risks and benefits of medical AI as experts warn that poorly governed systems could threaten patient safety.

Associate professor Agah Tugrul Korucu said AI offers meaningful potential for healthcare only when supported by rigorous ethical rules and strong oversight instead of rapid deployment without proper safeguards.

Korucu explained that data bias remains one of the most significant dangers because AI models learn directly from the information they receive. Underrepresented age groups, regions or social classes can distort outcomes and create systematic errors.

Turkey’s national health database e-Nabiz provides a strategic advantage, yet raw information cannot generate value unless it is processed correctly and supported by clear standards, quality controls and reliable terminology.

He added that inconsistent hospital records, labelling errors and privacy vulnerabilities can mislead AI systems and pose legal challenges. Strict anonymisation and secure analysis environments are needed to prevent harmful breaches.

Medical AI works best as a second eye in fields such as radiology and pathology, where systems can reduce workloads by flagging suspicious areas instead of leaving clinicians to assess every scan alone.

Korucu said physicians must remain final decision makers because automation bias could push patients towards unnecessary risks.

He expects genomic data combined with AI to transform personalised medicine over the coming decade, allowing faster diagnoses and accurate medication choices for rare conditions.

Priority development areas for Turkey include triage tools, intensive care early warning systems and chronic disease management. He noted that the long-term model will be the AI-assisted physician rather than a fully automated clinician.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!  

AWS warns of AI powered cybercrime

Amazon Web Services has revealed that a Russian-speaking threat actor used commercial AI tools to compromise more than 600 FortiGate firewalls across 55 countries. AWS described the campaign as an AI-powered assembly line for cybercrime.

According to AWS, the attacker relied on exposed management ports and weak single-factor credentials rather than exploiting software vulnerabilities. The campaign targeted FortiGate devices globally and focused on harvesting credentials and configuration data.

AWS said the potentially Russian group appeared unsophisticated but achieved scale through AI-assisted mass scanning and automation. When encountering stronger defences, the attackers reportedly shifted to easier targets rather than persist.

The company advised organisations using FortiGate appliances to secure management interfaces, change default credentials and enforce complex passwords. Amazon said it was not compromised during the campaign.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Pension savers increasingly rely on AI for retirement planning

AI is becoming a preferred tool for those beginning their retirement planning. Data on searches and website traffic suggests AI is meeting early-stage needs for pension guidance.

Platforms offering general financial information, such as MoneyHelper, have seen traffic fall by 10% over the past six months. At the same time, AI-generated overviews of pension content are on the rise.

AI tools are mainly used to sense-check retirement decisions, model ‘what-if’ scenarios, simplify pension jargon, and assist with tax planning. Users view AI as a thinking partner rather than a replacement for regulated advice.

Despite the rise of AI, bespoke advisory services, such as Pension Wise, have remained relevant, providing personalised guidance that AI cannot fully replace. PensionBee highlights that AI is helpful for basic guidance, but services remain essential for more complex planning.

Experts warn that the retirement sector faces a challenge in maintaining trust and relevance as AI continues to improve. Savers increasingly rely on technology for guidance, signalling a shift in how pensions are researched and managed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!