Finnish data breach exposed therapy records of thousands of patients

A major data breach at Finnish psychotherapy provider Vastaamo exposed the private therapy records of around 33,000 patients in 2020. Hackers demanded bitcoin payments and threatened to publish deeply personal notes if victims refused to pay.

Among those affected was Meri-Tuuli Auer, who described intense fear after learning her confidential therapy details could be accessed online. Stolen records included discussions of mental health, abuse, and suicidal thoughts, causing nationwide shock.

The breach became the largest criminal investigation in Finland, prompting emergency government talks led by then prime minister Sanna Marin. Despite efforts to stop the leak, the full database had already circulated on the dark web.

Finnish courts later convicted cybercriminal Julius Kivimäki, sentencing him to more than six years in prison. Many victims say the damage remains permanent, with trust in therapy and digital health systems severely weakened.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Researchers report increased ransomware and hacktivist activities targeting industrial systems in 2025

Industrial technology environments experienced a higher volume of cyber incidents in 2025, alongside a reported doubling in the exploitation of industrial control system (ICS) vulnerabilities.

According to the Cyble Research & Intelligence Labs Annual Threat Landscape Report 2025, manufacturing and healthcare (both highly dependent on ICS) were the sectors most affected by ransomware. The report recorded a 37% increase in total ransomware incidents between 2024 and 2025.

The analysis shows that the increase in reported ICS vulnerabilities is partly linked to greater exploitation by threat actors targeting human–machine interfaces (HMIs) and supervisory control and data acquisition (SCADA) systems. Over the reporting period, 600 manufacturing entities and 477 healthcare organizations were affected by ransomware incidents.

In parallel, hacktivist activity targeting ICS- and OT-reliant sectors, including energy, utilities, and transportation, increased in 2025. Several groups focused on ICS environments, primarily by exposing internet-accessible HMIs and other operational interfaces. Cyble further noted that 27 of the disclosed ICS vulnerabilities involved internet-exposed assets across multiple critical infrastructure sectors.

The report assessed hacktivism as increasingly coordinated across borders, with activity patterns aligning with geopolitical developments. Cyber operations linked to tensions between Israel and Iran involved 74 hacktivist groups, while India–Pakistan tensions were associated with approximately 1.5 million intrusion attempts.

Based on these observations, Cyble researchers assess that in 2026, threat actors are likely to continue focusing on exposed HMI and SCADA systems, including through virtual network computing (VNC) access, where such systems remain reachable from the internet.


Energy-efficient AI training with memristors

Scientists in China have developed an error-aware probabilistic update (EaPU) method to improve neural network training on memristor hardware. The method tackles accuracy and stability limits in analog computing.

Training inefficiency caused by noisy weight updates has slowed progress beyond inference tasks. EaPU applies probabilistic, threshold-based updates that preserve learning and sharply reduce write operations.
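The paper's exact algorithm is not reproduced here, but the general idea of threshold-gated probabilistic weight updates can be illustrated in a minimal sketch. All names below (`probabilistic_update`, the `threshold` parameter, the scaling scheme) are hypothetical assumptions for illustration, not the authors' implementation: sub-threshold updates are committed stochastically with probability proportional to their magnitude, so most small, noise-dominated device writes are skipped while the expected update is preserved.

```python
import numpy as np

def probabilistic_update(weights, grads, lr=0.01, threshold=0.001, rng=None):
    """Illustrative threshold-based probabilistic weight update.

    Updates smaller than `threshold` are applied stochastically with
    probability |update| / threshold and written at magnitude `threshold`,
    so the expected update is unchanged while most small writes to the
    analog device are skipped. Larger updates are applied as-is.
    """
    rng = np.random.default_rng() if rng is None else rng
    update = -lr * grads
    small = np.abs(update) < threshold
    # Probability of committing each sub-threshold update.
    p = np.abs(update) / threshold
    commit = rng.random(update.shape) < p
    # Sub-threshold updates that fire are written at +/- threshold,
    # keeping the update unbiased in expectation.
    stochastic = np.sign(update) * threshold
    applied = np.where(small, np.where(commit, stochastic, 0.0), update)
    # Return new weights and the number of physical writes performed.
    return weights + applied, int(np.count_nonzero(applied))
```

In this sketch the write count, rather than accuracy, is the quantity being optimised: a gradient step whose update is 1,000 times smaller than the threshold triggers a device write only about 0.1% of the time.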

Experiments and simulations show major gains in energy efficiency, accuracy and device lifespan across vision models. Results suggest broader potential for sustainable AI training using emerging memory technologies.


French regulator fines Free and Free Mobile €42 million

France’s data protection regulator CNIL has fined telecom operators Free Mobile and Free a combined €42 million over a major customer data breach. The sanctions follow an October 2024 cyberattack that exposed personal data linked to 24 million subscriber contracts.

Investigators found security safeguards were inadequate, allowing attackers to access sensitive personal data, including bank account details. Weak VPN authentication and poor detection of abnormal system activity were highlighted as key failures under the GDPR.

The French regulator also ruled that affected customers were not adequately informed about the risks they faced. Notification emails lacked sufficient detail to explain potential consequences or protective steps, thereby breaching obligations to clearly communicate data breach impacts.

Free Mobile faced an additional penalty for retaining former customer data longer than permitted. Authorities ordered both companies to complete security upgrades and data clean-up measures within strict deadlines.


AI power demand pushes nuclear energy back into focus

Rising AI-driven electricity demand is straining power grids and renewing focus on nuclear energy as a stable, low-carbon solution. Data centres powering AI systems already consume electricity at the scale of small cities, and demand is accelerating rapidly.

Global electricity consumption could rise by more than 10,000 terawatt-hours by 2035, largely driven by AI workloads. In advanced economies, data centres are expected to drive over a fifth of electricity-demand growth by 2030, outpacing many traditional industries.

Nuclear energy is increasingly positioned as a reliable backbone for this expansion, offering continuous power, high energy density, and grid stability.

Governments, technology firms, and nuclear operators are advancing new reactor projects, while long-term power agreements between tech companies and nuclear plants are becoming more common.

Alongside large reactors, interest is growing in small modular reactors designed for faster deployment near data centres. Supporters say these systems could ease grid bottlenecks and deliver dedicated power for AI, strengthening nuclear energy’s role in the digital economy.


xAI faces stricter pollution rules for Memphis data centre

US regulators have closed a loophole that allowed Elon Musk’s AI company, xAI, to operate gas-burning turbines at its Memphis data centre without full air pollution permits. The move follows concerns over emissions and local health impacts.

The US Environmental Protection Agency clarified that mobile gas turbines cannot be classified as ‘non-road engines’ to avoid Clean Air Act requirements. Companies must now obtain permits if their combined emissions exceed regulatory thresholds.

Local authorities had previously allowed the turbines to operate without public consultation or environmental review. The updated federal rule may slow xAI’s expansion plans in the Memphis area.

The Colossus data centre, opened in 2024, supports training and inference for Grok AI models and other services linked to Musk’s X platform. NVIDIA hardware is used extensively at the site.

Residents and environmental groups have raised concerns about air quality, particularly in nearby communities. Legal advocates say xAI’s future operations will be closely monitored for regulatory compliance.


EU revises Cybersecurity Act to streamline certification

The European Commission plans to revise the Cybersecurity Act to expand certification schemes beyond ICT products and services. Future assessments would also cover companies’ overall risk-management posture, including governance and supply-chain practices.

Only one EU-wide scheme, the Common Criteria framework, has been formally adopted since 2019. Cloud, 5G, and digital identity certifications remain stalled due to procedural complexity and limited transparency under the current Cybersecurity Act framework.

The reforms aim to introduce clearer rules and a rolling work programme to support long-term planning. Managed security services, including incident response and penetration testing, would become eligible for EU certification.

ENISA would take on a stronger role as the central technical coordinator across member states. Additional funding and staff would be required to support its expanded mandate under the EU's newer cybersecurity laws.

Stakeholders broadly support harmonisation to reduce administrative burden and regulatory fragmentation. The European Commission says organisational certification would assess cybersecurity maturity alongside technical product compliance.


CIRO discloses scale of August 2025 cyber incident

Canada’s investment regulator has confirmed a major data breach affecting around 750,000 people after a phishing attack in August 2025.

The Canadian Investment Regulatory Organization (CIRO) said threat actors accessed and copied a limited set of investigative, compliance, and market surveillance data. Some internal systems were taken offline as a precaution, but core regulatory operations continued across the country.

CIRO reported that personal and financial information was exposed, including income details, identification records, contact information, account numbers, and financial statements collected during regulatory activities in Canada.

No passwords or PINs were compromised, and the organisation said there is no evidence that the stolen data has been misused or shared on the dark web.

Affected individuals are being offered two years of free credit monitoring and identity theft protection as CIRO continues to monitor for further malicious activity nationwide.


What happens to software careers in the AI era

AI is rapidly reshaping what it means to work as a software developer, and the shift is already visible inside organisations that build and run digital products every day. In the blog ‘Why the software developer career may (not) survive: Diplo’s experience’, Jovan Kurbalija argues that while AI is making large parts of traditional coding less valuable, it is also opening a new professional lane for people who can embed, configure, and improve AI systems in real-world settings.

Kurbalija begins with a personal anecdote: a Sunday brunch conversation with a young CERN programmer who believes AI has already made human coding obsolete. Yet the discussion turns toward a more hopeful conclusion.

The core of software work, in this view, is not disappearing so much as moving away from typing syntax and toward directing AI tools, shaping outcomes, and ensuring what is produced actually fits human needs.

One sign of the transition is the rise of describing apps in everyday language and receiving working code in seconds, often referred to as ‘vibe coding.’ As AI tools take over boilerplate code, basic debugging, and routine code review, the ‘bad news’ is clear: many tasks developers were trained for are fading.

The ‘good news,’ Kurbalija writes, is that teams can spend less time on repetitive work and more time on higher-value decisions that determine whether technology is useful, safe, and trusted. A central theme is that developers may increasingly be judged by their ability to bridge the gap between neat code and messy reality.

That means listening closely, asking better questions, navigating organisational politics, and understanding what users mean rather than only what they say. Kurbalija suggests hiring signals could shift accordingly, with employers valuing empathy and imagination, sometimes even seeing artistic or humanistic interests as evidence of stronger judgment in complex human environments.

Another pressure point is what he calls AI’s ‘paradox of plenty.’ If AI makes building easier, the harder question becomes what to build, what to prioritise, and what not to automate.

In that landscape, the scarce skill is not writing code quickly but framing the right problem, defining success, balancing trade-offs, and spotting where technology introduces new risks, especially in large organisations where ‘requirements’ can hide unresolved conflicts.

Kurbalija also argues that AI-era systems will be more interconnected and fragile, turning developers into orchestrators of complexity across services, APIs, agents, and vendors. When failures cascade or accountability becomes blurred, teams still need people who can design for resilience, privacy, and observability and who can keep systems understandable as tools and models change.

Some tasks, like debugging and security audits, may remain more human-led in the near term, even if that window narrows as AI improves.

Diplo’s own transformation is presented as a practical case study of the broader shift. Kurbalija describes a move from a technology-led phase toward a more content- and human-led approach, where the decisive factor is not which model is used but how well knowledge is prepared, labelled, evaluated, and embedded into workflows, and how effectively people adapt to constant change.

His bottom line is stark. Many developers will struggle, but those who build strong non-coding skills (communication, systems thinking, product judgment, and comfort with uncertainty) may do exceptionally well in the new era.


OpenAI outlines advertising plans for ChatGPT access

The US AI firm, OpenAI, has announced plans to test advertising within ChatGPT as part of a broader effort to widen access to advanced AI tools.

The initiative focuses on supporting the free version and the low-cost ChatGPT Go subscription, while paid tiers such as Plus, Pro, Business, and Enterprise will continue without advertisements.

According to the company, advertisements will remain clearly separated from ChatGPT responses and will never influence the answers users receive.

Responses will continue to be optimised for usefulness instead of commercial outcomes, with OpenAI emphasising that trust and perceived neutrality remain central to the product’s value.

User privacy forms a core pillar of the approach. Conversations will stay private, data will not be sold to advertisers, and users will retain the ability to disable ad personalisation or remove advertising-related data at any time.

During early trials, ads will not appear for accounts linked to users under 18, nor within sensitive or regulated areas such as health, mental wellbeing, or politics.

OpenAI describes advertising as a complementary revenue stream rather than a replacement for subscriptions.

The company argues that a diversified model can help keep advanced intelligence accessible to a wider population, while maintaining long-term incentives aligned with user trust and product quality.
