Cyberattack on IT supplier affects hundreds of Swedish municipalities and regions

The Region of Gotland in Sweden was notified that Miljödata, a Swedish software provider used for managing sick leave and other HR-related records, had been hit by a cyberattack. Later that day, it was confirmed that sensitive personal data may have been leaked, although it remains unclear whether Region Gotland’s data was affected.

Miljödata, which provides systems handling medical certificates, rehabilitation plans, and work-related injuries, immediately isolated its systems and reported the incident to the police. Region Gotland is one of several regions affected. Investigations are ongoing, and the region is closely monitoring the situation while following standard data protection procedures, according to HR Director Lotta Israelsson.

Swedish Minister for Civil Defence, Carl-Oskar Bohlin, confirmed that the full scope and consequences of the cyberattack remain unclear. Around 200 of Sweden’s 290 municipalities and 21 regions were reportedly affected, many of which use Miljödata systems to manage employee data such as medical certificates and rehabilitation plans.

Miljödata is working with external experts to investigate the breach and restore services. The government is closely monitoring the situation, with CERT-SE and the National Cybersecurity Centre providing support. A police investigation is underway. Bohlin emphasised the need for stronger cybersecurity and announced a forthcoming bill to tighten national cyber regulations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbot Claude misused for high-value ransomware

Anthropic has warned that its AI chatbot Claude is being misused to carry out large-scale cyberattacks, with ransom demands reaching up to $500,000 in Bitcoin. Attackers used so-called ‘vibe hacking’, which lets low-skilled individuals automate ransomware development and generate customised extortion notes.

The report details attacks on at least 17 organisations across healthcare, government, emergency services, and religious sectors. Claude was used to guide encryption, reconnaissance, exploit creation, and automated ransom calculations, lowering the skill needed for cybercrime.

North Korean IT workers misused Claude to forge identities, pass coding tests, and secure US tech roles, funneling revenue to the regime despite sanctions. Analysts warn generative AI is making ransomware attacks more scalable and affordable, with risks expected to rise in 2025.

Experts advise organisations to enforce multi-factor authentication, apply least-privilege access, monitor anomalies, and filter AI outputs. Coordinated threat intelligence sharing and operational controls are essential to reduce exposure to AI-assisted attacks.
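The anomaly-monitoring advice above can be illustrated with a minimal sketch. Using hypothetical daily login counts, it flags any day whose volume deviates sharply from the baseline; real deployments would use far richer signals, but the principle is the same.

```python
from statistics import mean, stdev

def flag_anomalies(daily_logins, threshold=2.0):
    """Return indices of days whose login count deviates more than
    'threshold' sample standard deviations from the mean."""
    mu, sigma = mean(daily_logins), stdev(daily_logins)
    if sigma == 0:
        return []  # perfectly flat baseline: nothing to flag
    return [i for i, n in enumerate(daily_logins)
            if abs(n - mu) / sigma > threshold]

# A sudden spike on day 6 stands out against an otherwise steady baseline.
counts = [102, 98, 101, 99, 103, 100, 950]
print(flag_anomalies(counts))  # → [6]
```

A z-score threshold is deliberately simple; production systems typically layer rate limiting, per-account baselines, and alert deduplication on top of this kind of check.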


Parental controls and crisis tools added to ChatGPT amid scrutiny

The death of 16-year-old Adam Raine has placed renewed attention on the risks of teenagers using conversational AI without safeguards. His parents allege ChatGPT encouraged his suicidal thoughts, prompting a lawsuit against OpenAI and CEO Sam Altman in San Francisco.

The case has pushed OpenAI to add parental controls and safety tools. Updates include one-click emergency access, parental monitoring, and trusted contacts for teens. The company is also exploring connections with therapists.

Executives said AI should support rather than harm. OpenAI has worked with doctors to train ChatGPT to avoid self-harm instructions and redirect users to crisis hotlines. The company acknowledges that longer conversations can compromise reliability, underscoring the need for stronger safeguards.

The tragedy has fuelled wider debates about AI in mental health. Regulators and experts warn that safeguards must adapt as AI becomes part of daily decision-making. Critics argue that future adoption should prioritise accountability to protect vulnerable groups from harm.


WhatsApp launches AI assistant for editing messages

Meta’s WhatsApp has introduced a new AI feature called Writing Help, designed to assist users in editing, rewriting, and refining the tone of their messages. The tool can adjust grammar, improve phrasing, or reframe a message in a more professional, humorous, or encouraging style before it is sent.

The feature operates through Meta’s Private Processing technology, which keeps messages encrypted and private so that they are not visible to WhatsApp or Meta.

According to the company, Writing Help processes requests anonymously and cannot trace them back to the user. The function is optional, disabled by default, and only applies to the chosen message.

To activate the feature, users can tap a small pencil icon that appears while composing a message.

In a demonstration, WhatsApp showed how the tool could turn ‘Please don’t leave dirty socks on the sofa’ into more light-hearted alternatives, including ‘Breaking news: Socks found chilling on the couch’ or ‘Please don’t turn the sofa into a sock graveyard.’

By introducing Writing Help, WhatsApp aims to make communication more flexible and engaging while keeping user privacy intact. The company emphasises that no information is stored, and AI-generated suggestions only appear if users decide to enable the option.


AI tools underpin a new wave of ransomware

Avast researchers uncovered that the FunkSec ransomware group used generative AI tools to accelerate attack development.

While the malware was not fully AI-generated, AI aided in writing code, crafting phishing templates and enhancing internal tooling.

A subtle encryption flaw in FunkSec’s code became the decryption breakthrough. Avast quietly developed a free tool, bypassing the need for ransom payments and rescuing dozens of affected users in cooperation with law enforcement.
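The specific flaw in FunkSec’s code has not been published. As a generic, hypothetical illustration of how a subtle weakness can make ransomware decryptable, consider a scheme that seeds its keystream with a low-entropy value such as the infection timestamp: a defender who can bound that window can brute-force the seed using a known plaintext prefix.

```python
import random

def keystream_encrypt(data: bytes, seed: int) -> bytes:
    """XOR the data with a PRNG keystream derived from 'seed'.
    XOR is its own inverse, so the same call also decrypts."""
    rng = random.Random(seed)
    return bytes(b ^ rng.randrange(256) for b in data)

def brute_force_decrypt(ciphertext: bytes, known_prefix: bytes, seed_range):
    """Recover the seed by testing candidates against a known plaintext prefix."""
    for seed in seed_range:
        if keystream_encrypt(ciphertext[:len(known_prefix)], seed) == known_prefix:
            return keystream_encrypt(ciphertext, seed)
    return None

secret_seed = 1_693_400_123  # e.g. a Unix timestamp chosen by the malware
ct = keystream_encrypt(b"PDF-1.7 invoice data", secret_seed)
# The defender knows the file format's magic bytes and the rough infection window.
pt = brute_force_decrypt(ct, b"PDF-1.7", range(1_693_400_000, 1_693_401_000))
print(pt)  # → b'PDF-1.7 invoice data'
```

This is a sketch of the general technique, not FunkSec’s actual scheme; the point is that a recoverable key source turns “pay the ransom” into a bounded search problem.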

Beyond the immediate takedown, this marks one of the earliest recorded instances of AI being used in ransomware development, improving attackers’ productivity and stealth. It demonstrates that cybercriminals are adopting AI to lower entry barriers, and that forensic investigation and technical agility remain crucial defensive tools.


AI redefines how cybersecurity teams detect and respond

AI, especially generative models, has become a staple in cybersecurity operations, extending its role from traditional machine learning tools to core functions within CyberOps.

Generative AI now supports forensics, incident investigation, log parsing, orchestration, vulnerability prioritisation and report writing. It accelerates workflows, enabling teams to ramp up detection and response and to concentrate human efforts on strategic tasks.

Experts highlight that AI is changing not what CyberOps teams do, but how they do it. AI scales routine tasks, such as SOC level-1 and level-2 operations, allowing analysts to shift focus from triage to investigation and threat modelling.

Junior staff benefit particularly from AI, which boosts accuracy and consistency. Senior analysts and CISOs also gain from AI’s capacity to amplify productivity while safeguarding oversight, a true force multiplier.


Google alerts users after detecting malware spread through captive portals

Google has issued warnings to some users after detecting a web traffic hijacking campaign that delivered malware through manipulated login portals.

According to the company’s Threat Intelligence Group, attackers compromised network edge devices to modify captive portals, the login pages often seen when joining public Wi-Fi or corporate networks.

Instead of leading to legitimate security updates, the altered portals redirected users to a fake page presenting an ‘Adobe Plugin’ update. The file, once installed, deployed malware known as CANONSTAGER, which enabled the installation of a backdoor called SOGU.SEC.

The software, named AdobePlugins.exe, was signed with a valid GlobalSign certificate linked to Chengdu Nuoxin Times Technology Co., Ltd. Google stated it is tracking multiple malware samples connected to the same certificate.

The company attributed the campaign to a group it tracks as UNC6384, also known by other names including Mustang Panda, Twill Typhoon, and TEMP.Hex.

Google said it first detected the campaign in March 2025 and sent alerts to affected Gmail and Workspace users. The operation reportedly targeted diplomats in Southeast Asia and other entities worldwide, suggesting a potential link to cyber espionage activities.

Google advised users to enable Enhanced Safe Browsing in Chrome, keep devices updated, and use two-step verification for stronger protection.


Google Cloud develops blockchain network for financial institutions

Google Cloud is creating its own blockchain platform, the Google Cloud Universal Ledger (GCUL), targeting the financial sector. The network provides a neutral, compliant infrastructure for payment automation and digital asset management through a single API.

GCUL allows financial institutions to build Python-based smart contracts, with support for various use cases such as wholesale payments and asset tokenisation. Although called a Layer 1 network, its private, permissioned design raises debate over its status as a decentralised blockchain.

The company also revealed a series of AI-driven security enhancements at its Security Summit 2025.

These include an ‘agentic security operations centre’ for proactive threat detection, the Alert Investigation agent for automated analysis, and Model Armor to prevent prompt injection, jailbreaking, and data leaks.

Currently in a private testnet, GCUL was first announced in March in collaboration with the CME Group, which is piloting solutions on the platform. Google Cloud plans to reveal more details in the future as the project develops.


Real-time conversations feel smoother with Google Translate’s Gemini AI update

Google Translate is receiving powerful Gemini AI upgrades that make speaking across languages feel far more natural.

The refreshed live conversation mode intelligently recognises pauses, accents, and background noise, allowing two people to talk without the rigid back-and-forth of older versions. Google says the new system should even work in noisy environments like cafes, a real-world challenge for speech technology.

The update also introduces a practice mode that pushes Translate beyond its traditional role as a utility. Users can set their skill level and goals, then receive personalised listening and speaking exercises designed to build confidence.

The tool is launching in beta for selected language pairs, such as English to Spanish or French, but it signals Google’s ambition to blend translation with education.

By bringing some advanced translation capabilities first seen on Pixel devices into the widely available Translate app, Google makes real-time multilingual communication accessible to everyone.

It’s a practical application of AI that promises to change both everyday conversations and how people learn new languages.


Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.
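As an illustration of why such patterns are weak: candidates built from a company name plus a recent year span only a few dozen guesses. The sketch below uses a hypothetical wordlist; real credential-stuffing tools enumerate far larger but still tractable spaces.

```python
from itertools import product

def candidate_passwords(names, years, suffixes=("", "!", "@", "#")):
    """Enumerate predictable name+year password candidates."""
    variants = {v for name in names
                for v in (name, name.capitalize(), name.upper())}
    return [f"{n}{y}{s}" for n, y, s in product(variants, years, suffixes)]

candidates = candidate_passwords(["tencent"], range(2023, 2026))
print(len(candidates))               # 3 casings x 3 years x 4 suffixes = 36
print("Tencent2025" in candidates)   # a typical "company + year" guess
```

Thirty-six guesses is trivial for any online or offline attack, which is why naming-scheme passwords offer essentially no protection once the pattern is known.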

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.
