Microsoft warns of a surge in ransomware and extortion incidents

Financially motivated cybercrime now accounts for the majority of global digital threats, according to Microsoft’s latest Digital Defense Report.

The company’s analysts found that over half of all cyber incidents with known motives in the past year were driven by extortion or ransomware, while espionage represented only a small fraction.

Microsoft warns that automation and accessible off-the-shelf tools have allowed criminals with limited technical skills to launch widespread attacks, making cybercrime a constant global threat.

The report reveals that attackers increasingly target critical services such as hospitals and local governments, where weak security and urgent operational demands make them easy targets.

Cyberattacks on these sectors have already led to real-world harm, from disrupted emergency care to halted transport systems. Microsoft highlights that collaboration between governments and private industry is essential to protect vulnerable sectors and maintain vital services.

While profit-seeking criminals dominate by volume, nation-state actors are also expanding their reach. State-sponsored operations are growing more sophisticated and unpredictable, with espionage often intertwined with financial motives.

Some state actors even exploit the same cybercriminal networks, complicating attribution and increasing risks for global organisations.

Microsoft notes that AI is being used by both attackers and defenders. Criminals are employing AI to refine phishing campaigns, generate synthetic media and develop adaptive malware, while defenders rely on AI to detect threats faster and close security gaps.

The report urges leaders to prioritise cybersecurity as a strategic responsibility, adopt phishing-resistant multifactor authentication, and build strong defences across industries.

Security, Microsoft concludes, must now be treated as a shared societal duty rather than an isolated technical task.

Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyberattack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

Initially, the fine was £45 million, but it was reduced after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations that handle large volumes of sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, including incidents at the Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

An awards win for McAfee’s consumer-first AI defence

McAfee won ‘Best Use of AI in Cybersecurity’ at the 2025 A.I. Awards for its Scam Detector. The tool, which McAfee says is the first to automate the detection of deepfakes, email scams, and text scams, reflects the company’s consumer-first approach to defence, and the award recognises its bid to counter fast-evolving online fraud.

Scams are at record levels: one in three US residents reports having fallen victim, with average losses of $1,500. Threats now range from fake job offers and text messages to AI-generated deepfakes, increasing the pressure on tools that can act in real time across channels.

McAfee’s Scam Detector uses advanced AI to analyse text, email, and video, blocking dangerous links and flagging deepfakes before they cause harm. It is included with core McAfee plans and available on PC, mobile, and web, positioning it as a default layer for everyday protection.

Adoption has been rapid, with the product crossing one million users in its first months, according to the company. Judges praised its proactive protection and emphasis on accuracy and trust, citing its potential to restore user confidence as AI-enabled deception becomes more sophisticated.

McAfee frames the award as validation of its responsible, consumer-first AI strategy. The company says it will expand Scam Detector’s capabilities while partnering with the wider ecosystem to keep users a step ahead of emerging threats, both online and offline.

Microsoft finds 71% of UK workers use unapproved AI tools on the job

A new Microsoft survey has revealed that nearly three in four employees in the UK use AI tools at work without company approval.

The practice, known as ‘shadow AI’, involves workers relying on unapproved systems such as ChatGPT to complete routine tasks. Microsoft warned that unauthorised AI use could expose businesses to data leaks, non-compliance risks, and cyberattacks.

The survey, carried out by Censuswide, questioned over 2,000 employees across different sectors. Seventy-one per cent admitted to using AI tools outside official policies, often because they were already familiar with them in their personal lives.

Many reported using such tools to respond to emails, prepare presentations, and perform financial or administrative tasks, saving almost eight hours of work each week.

Microsoft said only enterprise-grade AI systems can provide the privacy and security organisations require. Darren Hardman, Microsoft’s UK and Ireland chief executive, urged companies to ensure workplace AI tools are designed for professional use rather than consumer convenience.

He emphasised that secure integration can allow firms to benefit from AI’s productivity gains while protecting sensitive data.

The study estimated that AI technology saves 12.1 billion working hours annually across the UK, equivalent to about £208 billion in employee time. Workers reported using the time gained through AI to improve work-life balance, learn new skills, and focus on higher-value projects.

Teen content on Instagram now guided by PG-13 standards

Instagram is aligning its Teen Accounts with PG-13 movie standards, aiming to ensure that users under 18 only see age-appropriate material. Teens will automatically be placed in a 13+ setting and will need parental permission to change it.

Parents who want tighter supervision can activate a new ‘Limited Content’ mode that filters out even more material and restricts comments and AI interactions.

The company reviewed its policies to match familiar parental guidelines, further limiting exposure to content with strong language, risky stunts, or references to substances. Teens will also be blocked from following accounts that share inappropriate content or contain suggestive names and bios.

Searches for sensitive terms such as ‘gore’ or ‘alcohol’ will no longer return results, and the same restrictions will extend to Explore, Reels, and AI chat experiences.

Instagram worked with thousands of parents worldwide to shape these policies, collecting more than three million content ratings to refine its protections. Surveys show strong parental support, with most saying the PG-13 system makes it easier to understand what their teens are likely to see online.

The updates begin rolling out in the US, UK, Australia, and Canada and will expand globally by the end of the year.

Researchers expose weak satellite security with cheap equipment

Scientists in the US have shown how easy it is to intercept private messages and military information from satellites using equipment costing less than €500.

Researchers from the University of California, San Diego and the University of Maryland scanned internet traffic from 39 geostationary satellites and 411 transponders over seven months.

They discovered unencrypted data, including phone numbers, text messages, and browsing history from networks such as T-Mobile, TelMex, and AT&T, as well as sensitive military communications from the US and Mexico.

The researchers used everyday tools such as TV satellite dishes to collect and decode the signals, proving that anyone with a basic setup and a clear view of the sky could potentially access unprotected data.

They said there is a ‘clear mismatch’ between how satellite users assume their data is secured and how it is handled in reality. Despite the industry’s standard practice of encrypting communications, many transmissions were left exposed.

Companies often avoid stronger encryption because it increases costs and reduces bandwidth efficiency. The researchers noted that firms such as Panasonic could lose up to 30 per cent in revenue if all data were encrypted.

While intercepting satellite data still requires technical skill and precise equipment alignment, the study highlights how affordable tools can reveal serious weaknesses in global satellite security.

New YouTube tools provide trusted health advice for teens

YouTube is introducing a new shelf of mental health and wellbeing content designed specifically for teenagers. The feature will provide age-appropriate, evidence-based videos covering topics such as depression, anxiety, ADHD, and eating disorders.

Content is created in collaboration with trusted organisations and creators, including Black Dog Institute, ReachOut Australia, and Dr Syl, to ensure it is both reliable and engaging.

The feature will launch first in Australia, with plans to expand to the US, the UK, and Canada. Videos are tailored to teens’ developmental stage, offering practical advice, coping strategies, and medically informed guidance.

By providing credible information on a familiar platform, YouTube hopes to improve mental health literacy and reduce stigma among young users.

YouTube has implemented teen-specific safeguards for recommendations, content visibility, and advertising eligibility, making it easier for adolescents to explore their interests safely.

The company emphasises that the platform is committed to helping teens access trustworthy resources, while supporting their wellbeing in a digital environment increasingly filled with misinformation.

UK government urges firms to keep paper backups for cyberattack recovery

The UK government has issued a strong warning to company leaders to prepare for cyber incidents by maintaining paper-based contingency plans. The National Cyber Security Centre (NCSC) emphasised that firms must plan how to continue operations and rebuild IT systems if networks are compromised.

The advice follows a series of high-profile cyberattacks this year targeting major UK firms, including Marks & Spencer, The Co-op, and Jaguar Land Rover, which experienced production halts and supply disruptions after their systems were breached.

According to NCSC chief executive Richard Horne, organisations need to adopt ‘resilience engineering’: building systems designed to anticipate, absorb, recover from, and adapt to cyberattacks.

The agency recommends storing response plans offline and outlining alternative communication methods, such as phone trees and manual record-keeping, should email systems fail.

While the total number of cyber incidents investigated by the NCSC, 429 in the first nine months of 2025, remained stable, the number of ‘nationally significant’ attacks nearly doubled from 89 to 204. These include Category 1–3 incidents, ranging from ‘significant’ to ‘national cyber emergency.’

Recent cases highlight the human and operational toll of such events, including a ransomware attack on a London blood testing provider last year that caused severe clinical disruption and contributed to at least one patient death.

Experts say the call for offline backups may sound old-fashioned but is pragmatic. ‘You wouldn’t walk onto a building site without a helmet, yet companies still go online without basic protection,’ said Graeme Stewart, head of public sector at Check Point. ‘Cybersecurity must be treated like health and safety: not optional, but essential.’

The government is also encouraging companies, particularly SMEs, to use the NCSC’s free support tools, including cyber insurance linked to its Cyber Essentials programme.

EU nations back Danish plan to strengthen child protection online

EU countries have agreed to step up efforts to improve child protection online by supporting Denmark’s Jutland Declaration. The initiative, signed by 25 member states, focuses on strengthening existing EU rules that safeguard minors from harmful and illegal online content.

However, Denmark’s proposal to ban social media for children under 15 did not gain full backing, with several governments preferring other approaches.

The declaration highlights growing concern about young people’s exposure to inappropriate material and the addictive nature of online platforms.

It stresses the need for more reliable age verification tools and refers to the upcoming Digital Fairness Act as an opportunity to introduce such safeguards. Ministers argued that the same protections applied offline should exist online, where risks for minors remain significant.

Danish officials believe stronger measures are essential to address declining well-being among young users. Some EU countries, including Germany, Spain and Greece, expressed support for tighter protections but rejected outright bans, calling instead for balanced regulation.

Meanwhile, the European Commission has asked major platforms such as Snapchat, YouTube, Apple and Google to provide details about their age verification systems under the Digital Services Act.

These efforts form part of a broader EU drive to ensure a safer digital environment for children, as investigations into online platforms continue across Europe.

Microsoft strengthens UAE AI infrastructure

Microsoft has announced a strategic investment to enable in-country data processing for Microsoft 365 Copilot in the UAE. The service will be available to qualified UAE organisations in early 2026, hosted in Microsoft’s Dubai and Abu Dhabi cloud centres for secure, local AI processing.

The move aligns with the UAE’s ambition to become a global AI hub, supported by initiatives such as the National Artificial Intelligence Strategy 2031 and the Dubai Universal Blueprint for AI.

Government leaders emphasise that in-country AI infrastructure strengthens trust, cyber resilience, and innovation across ministries and public entities.

Collaboration with the UAE Cybersecurity Council (CSC) and the Dubai Electronic Security Center (DESC) ensures Microsoft 365 Copilot complies with national AI policies and data governance standards.

Local processing cuts latency, protects data, and supports regulated environments, allowing government stakeholders to adopt AI securely.

Microsoft and its strategic partner G42 International highlight the initiative’s broader impact on the UAE’s digital economy. The project could create 152,000 jobs and train one million UAE learners in AI by 2027, supporting a secure and innovative digital future.
