Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

The ICO initially proposed a fine of £45 million, but reduced it after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the roughly 600 pension schemes Capita administers, highlighting the risks for organisations handling large-scale sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, a trend also seen at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, referred to as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyberattacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ethernet wins in raw security, but Wi-Fi can compete with the right setup

The way you connect to the internet matters, not just for speed but also for your privacy and security. That’s the main takeaway from a recent Fox News report comparing Ethernet and Wi-Fi security.

At its core, Ethernet is more secure in many scenarios because attacking it requires physical access. Data travels along a cable directly to your router, so there are no over-the-air signals to intercept or eavesdrop on.

Wi-Fi, by contrast, sends data through the air. That makes it more vulnerable, especially if a network uses weak passwords or outdated encryption standards. Attackers within signal range might exploit poorly secured networks.

But Ethernet isn’t a guaranteed fortress. The Fox article emphasises that security depends largely on your entire setup. A Wi-Fi network with strong encryption (ideally WPA3), robust passwords, regular firmware updates, and a well-configured router can approach the security of a wired connection.
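
As a rough illustration of that checklist, the sketch below scores a hypothetical router configuration against those hardening points. The configuration keys are invented for the example; real routers expose these settings through their admin interface, not a Python dictionary.

```python
# Toy audit of a home Wi-Fi setup against the hardening checklist above.
# The config keys are hypothetical stand-ins for settings a router's
# admin page would expose; this is a sketch, not a real router API.
def audit_wifi(config: dict) -> list[str]:
    findings = []
    if config.get("encryption") not in ("WPA3", "WPA2/WPA3"):
        findings.append("upgrade to WPA3 (or mixed WPA2/WPA3) encryption")
    if len(config.get("password", "")) < 16:
        findings.append("choose a longer, unique passphrase")
    if not config.get("auto_firmware_updates", False):
        findings.append("enable automatic firmware updates")
    return findings or ["no obvious weaknesses in the checked settings"]

# Example: an outdated setup triggers all three recommendations.
print(audit_wifi({"encryption": "WPA2", "password": "letmein"}))
```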

Each device you connect (smartphones, smart home gadgets, IoT sensors) increases your network’s exposure. Wi-Fi amplifies that risk since more devices can join wirelessly. Ethernet limits the number of direct connection points, which reduces the attack surface.

In short, Ethernet gives you a baseline security advantage, but a well-secured Wi-Fi network can be quite robust. The critical factor is how carefully you manage your network settings and devices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google cautions Australia on youth social media ban proposal

US tech giant Google, which also owns YouTube, has reiterated its commitment to children’s online safety while cautioning against Australia’s proposed ban on social media use for those under 16.

Speaking before the Senate Environment and Communications References Committee, Google’s Public Policy Senior Manager Rachel Lord said the legislation, though well-intentioned, may be difficult to enforce and could have unintended effects.

Lord highlighted Google’s 23-year presence in Australia, noting that it contributed over $53 billion to the economy in 2024, while YouTube’s creative ecosystem added $970 million to GDP and supported more than 16,000 jobs.

She said the company’s investments, including the $1 billion Digital Future Initiative, reflect its long-term commitment to Australia’s digital development and infrastructure.

According to Lord, YouTube already provides age-appropriate products and parental controls designed to help families manage their children’s experiences online.

Requiring children to access YouTube without accounts, she argued, would remove these protections and risk undermining safe access to educational and creative content used widely in classrooms, music, and sport.

She emphasised that YouTube functions primarily as a video streaming platform rather than a social media network, serving as a learning resource for millions of Australian children.

Lord called for legislation that strengthens safety mechanisms instead of restricting access, saying the focus should be on effective safeguards and parental empowerment rather than outright bans.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

No breakthrough in EU debate over chat scanning

EU negotiations over the controversial ‘chat control’ proposal have once again failed to reach a breakthrough, leaving the future of the plan uncertain. The European Commission’s three-year-old proposal aims to curb the spread of child sexual abuse material by allowing authorities to require chat services to screen messages before they are encrypted.

Critics, however, warn that such measures would undermine privacy and amount to state surveillance of private communications.

Under the plan, chat services could only be ordered to scan messages after approval from a judicial authority, and the system would target known child abuse images stored in databases. Text-based messages would not be monitored, according to the Danish EU presidency, which insists that sufficient safeguards are in place.
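
To make that mechanism concrete, here is a heavily simplified sketch of matching an image against a database of known hashes. Production systems use perceptual hashes that survive resizing and re-encoding rather than the exact digest shown here, and the database contents are a placeholder.

```python
import hashlib

# Simplified sketch of the database matching described above. Real
# deployments use perceptual hashing (robust to re-encoding), not the
# exact SHA-256 digest shown here; the hash set is a placeholder.
KNOWN_IMAGE_HASHES: set[str] = set()  # populated from an official database

def matches_known_image(image_bytes: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_IMAGE_HASHES
```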

Despite those assurances, several member states remain unconvinced. Germany has yet to reach a unified position, with Justice Minister Stefanie Hubig stressing that ‘chat control without cause must be taboo in a rule of law.’

Belgium, too, continues to deliberate, with Interior Minister Bernard Quintin calling for a ‘balanced and proportional’ approach between privacy protection and child safety.

The debate remains deeply divisive across Europe, as lawmakers and citizens grapple with a difficult question: how to combat online child abuse effectively without sacrificing the right to private communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Denmark moves to ban social media for under-15s amid child safety concerns

Joining a broader trend, Denmark plans to ban children under 15 from using social media, Prime Minister Mette Frederiksen announced during her address to parliament on Tuesday.

Describing platforms as having ‘stolen our children’s childhood’, she said the government must act to protect young people from the growing harms of digital dependency.

Frederiksen urged lawmakers to ‘tighten the law’ to ensure greater child safety online, adding that parents could still grant consent for children aged 13 and above to have social media accounts.

Although the proposal is not yet part of the government’s legislative agenda, it builds on a 2024 citizen initiative that called for banning platforms such as TikTok, Snapchat and Instagram.

The prime minister’s comments reflect Denmark’s broader push within the EU to require age verification systems for online platforms.

Her statement comes amid a wider debate across Europe over children’s digital well-being and the responsibilities of tech companies in verifying user age and safeguarding minors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
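
One of those techniques, differential testing, can be illustrated with a short sketch: run the original and the patched code on many random inputs and flag any divergence in behaviour. The two functions below are invented stand-ins, not CodeMender internals.

```python
import random

# Differential testing sketch: the patched function should behave like the
# original on all valid inputs while fixing the crash on the edge case.
def original(xs: list[int]) -> int:
    return sorted(xs)[0]              # raises IndexError on an empty list

def patched(xs: list[int]):
    return min(xs) if xs else None    # candidate fix guards the empty case

def differential_test(f, g, trials: int = 1000) -> bool:
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(1, 20))]
        if f(xs) != g(xs):            # divergence on a valid input: reject patch
            return False
    return True

print(differential_test(original, patched))  # True: no regression detected
```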

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dutch government criticised over reliance on Microsoft cloud

Despite privacy concerns and parliamentary criticism, the Dutch Tax Administration will move much of its digital workplace to Microsoft’s cloud. State Secretary Eugène Heijnen told lawmakers that no suitable European alternatives met the technical, legal, and functional requirements.

Privacy advocates warn that using a US-based provider could put compliance with GDPR at risk, especially when data may leave the EU. Concerns about long-term dependency on a single cloud vendor have also been raised, making future transitions costly and complex.

Heijnen said sensitive documents would remain on internal servers, while cloud services would handle workplace functions. Employees had complained that the current system was inefficient and difficult to use.

The Court of Audit reported earlier this year that nearly two-thirds of the Dutch government’s public cloud services had not been properly risk-assessed. Despite this, Heijnen insisted that Microsoft offered the most viable option.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NIST pushes longer passphrases and MFA over strict rules

The US National Institute of Standards and Technology (NIST) has updated its password guidelines, urging organisations to drop strict complexity rules. NIST states that requirements such as mandatory symbols and frequent resets often harm usability without significantly improving security.

Instead, the agency recommends using blocklists for breached or commonly used passwords, implementing hashed storage, and rate limiting to resist brute-force attacks. Multi-factor authentication and password managers are encouraged as additional safeguards.
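
As a rough sketch of two of those recommendations, the snippet below pairs salted, memory-hard password hashing with a naive in-memory rate limiter. The scrypt parameters and the per-user attempt window are illustrative assumptions, not NIST-mandated values.

```python
import hashlib
import os
import time

# Salted scrypt hashing: memory-hard, so offline brute force is costly.
def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

# Naive sliding-window rate limiter to slow online guessing attacks.
_attempts: dict[str, list[float]] = {}

def allow_attempt(user: str, limit: int = 5, window: float = 60.0) -> bool:
    now = time.time()
    recent = [t for t in _attempts.get(user, []) if now - t < window]
    recent.append(now)
    _attempts[user] = recent
    return len(recent) <= limit
```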

Password length remains essential. Short strings are easily cracked, so users should be allowed to create longer passphrases; NIST recommends restricting length only where extremely long passwords would slow down hashing.
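
A minimal sketch of such a policy check follows, assuming a plain-text blocklist file. The upper length cap and the file name are illustrative choices, though the eight-character floor matches NIST’s guidance.

```python
MIN_LEN, MAX_LEN = 8, 256   # floor per NIST; cap chosen to keep hashing fast

def load_blocklist(path: str = "breached_passwords.txt") -> set[str]:
    # One compromised or commonly used password per line (hypothetical file).
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f}

def check_password(password: str, blocklist: set[str]) -> tuple[bool, str]:
    if not MIN_LEN <= len(password) <= MAX_LEN:
        return False, f"use {MIN_LEN}-{MAX_LEN} characters"
    if password.lower() in blocklist:
        return False, "password appears in a breach blocklist"
    return True, "ok"   # note: no symbol or case requirements, per NIST
```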

The new approach replaces mandatory resets with changes triggered only after suspected compromise, such as a data breach. NIST argues this method reduces fatigue while improving overall account protection.

Businesses adopting these guidelines must audit their existing policies, reconfigure authentication systems, deploy blocklists, and train employees to adapt accordingly. Clear communication of the changes will be key to ensuring compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!