Stryker cyberattack wipes devices via Microsoft environment without malware

A major cyber incident has impacted Stryker Corporation, where attackers targeted its internal Microsoft environment and remotely wiped tens of thousands of employee devices without deploying traditional malware.

Access to systems was reportedly achieved through a compromised administrator account, allowing attackers to issue remote wipe commands via Microsoft Intune.
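The mechanics require no malware at all: Intune exposes device actions through the Microsoft Graph API, so anyone holding a sufficiently privileged token can trigger a wipe with an ordinary HTTPS request. The sketch below uses Graph's documented wipe action for managed devices; the device ID and token are placeholders, and the request is only constructed, never sent.

```python
# Sketch of how an Intune remote wipe is issued via Microsoft Graph.
# A single authenticated POST suffices, which is why one compromised
# administrator account can be enough to wipe a device fleet.

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_wipe_request(device_id: str, access_token: str):
    """Return (method, url, headers) for Graph's managedDevices wipe action."""
    url = f"{GRAPH_BASE}/deviceManagement/managedDevices/{device_id}/wipe"
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    return "POST", url, headers

# Deliberately not executed: against a real tenant this request would
# factory-reset the device. Defences therefore centre on the token itself,
# e.g. phishing-resistant MFA and tightly scoped Intune admin roles.
```

The point of the sketch is the asymmetry: the "attack" is indistinguishable from legitimate administration, so detection has to focus on identity and privilege, not on payloads.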

As a result, large parts of the company’s internal infrastructure were disrupted, with some services remaining offline and business operations affected.

Responsibility has been claimed by Handala, a group often associated with broader geopolitical cyber activity. The incident reflects a growing trend of cyber operations blending disruption, data theft and strategic messaging.

Despite the scale of the attack, the company confirmed that its medical devices and patient-facing technologies were not impacted.

The case highlights increasing risks linked to identity compromise and cloud-based management tools, where attackers can cause significant damage without relying on conventional malware techniques.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Global leaders gather to tackle fraud

A major international effort to tackle fraud is set to take place in Vienna, as global leaders gather for the Global Fraud Summit 2026 on 16–17 March. The event will highlight emerging challenges in cross-border and digital fraud, bringing global attention to the need for stronger cooperation.

The meeting is organised by the UNODC in partnership with INTERPOL, bringing together government officials, law enforcement authorities, private sector representatives, civil society and academics to discuss emerging fraud trends.

Fraud is increasingly seen as a cross-border and digitally driven threat, making coordination between countries more important than ever. Discussions among leaders and other representatives are expected to examine how fraud operates across jurisdictions, current and emerging fraud trends, why detection remains difficult, and what practical steps can improve both prevention and enforcement.

Particular attention will be given to how institutions and their leaders can enhance information sharing and cooperation. Stronger partnerships between public and private actors are seen as key to responding more effectively, especially as fraud schemes grow more sophisticated.

Beyond immediate enforcement, the summit aims to strengthen long-term capacity and build more resilient systems. Greater alignment between states and organisations could play a decisive role in addressing fraud globally.

NSA warns of AI supply chain risks in new cybersecurity guidance

The National Security Agency has released new guidance on managing risks across the AI supply chain, highlighting growing cybersecurity concerns tied to AI and machine learning systems. The joint information sheet outlines how organisations can better assess vulnerabilities when deploying or sourcing AI technologies.

The document defines the AI and machine learning supply chain as a combination of key components, including training data, models, software, infrastructure, hardware, and third-party services. Each element can introduce risks affecting confidentiality, integrity, or availability, particularly as advanced tools such as large language models and AI agents become more widely adopted.

Security risks associated with data include bias, poisoning attacks, and exposure via techniques such as model inversion and data extraction. For models, the guidance warns of hidden backdoors, malware, evasion attacks, and model manipulation. Organisations are advised to use trusted sources, perform integrity checks, and maintain verified model registries to mitigate such threats.
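One of the recommended mitigations, integrity checking, is straightforward in practice: compare the hash of a downloaded model artefact against a digest published out of band, for instance in a verified model registry. A minimal sketch in Python (the file path and expected digest are placeholders, not part of the NSA guidance):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights never sit fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_sha256: str) -> bool:
    """Accept the artefact only if its digest matches the registry entry."""
    return sha256_of(path) == expected_sha256.lower()
```

Rejecting a mismatch before the checkpoint is ever loaded matters because a tampered model can carry backdoors that only activate on specific inputs, which no amount of post-load testing reliably catches.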

The paper also highlights software and infrastructure vulnerabilities, noting that AI systems often rely on complex dependencies that expand the attack surface. Recommended measures include malware scanning, testing, patching, and maintaining software bills of materials. Additional risks arise from third-party services, which may introduce weaknesses through their own supply chains or shared environments.

To manage these risks, organisations are urged to improve visibility across their AI ecosystems, identify suppliers and subcontractors, and require documentation such as AI and software bills of materials. The guidance aligns with frameworks from the National Institute of Standards and Technology and MITRE, reinforcing the need for coordinated approaches to AI supply chain security.

New Microsoft Purview tools target data oversharing and AI governance

Microsoft has announced new integrations between Microsoft Purview and Microsoft Fabric, aimed at helping organisations identify AI-driven data risks, prevent sensitive data from being overshared, and strengthen governance across their data estates.

The updates come as enterprises accelerate AI adoption and face growing pressure to ensure that the data powering those systems is both protected and trustworthy.

Key new capabilities include Data Loss Prevention policies for Fabric workloads such as Warehouse and databases, Insider Risk Management tools that can detect risky actions such as unauthorised data exports from Fabric lakehouses, and new preview features for managing AI data exposure, including the ability to identify sensitive data appearing in Copilot prompts and responses.

Data Security Posture Management tools provide risk assessments to surface unprotected assets and recommend corrective action.

On the governance side, updates to Microsoft Purview Unified Catalogue introduce centralised workflows for data owners to control the publication of data products and run quality checks on unmanaged assets, enabling faster validation at scale.

Microsoft describes the combined offering as an ‘integrated and unified foundation’ that allows organisations to innovate with AI whilst keeping their data protected, governed, and trusted.

UN calls for global action against online scam networks

Online scam networks operating across Southeast Asia are defrauding victims worldwide, using AI, impersonation techniques, and complex cyber tools to steal billions of dollars.

At the Global Fraud Summit in Vienna, the UN Office on Drugs and Crime (UNODC) and INTERPOL brought together governments, law enforcement, and private-sector actors to strengthen international cooperation against these crimes.

Victims include individuals from diverse backgrounds, often highly educated and financially experienced. One Australian couple, Kim and Allan Sawyer, lost more than $2.5 million after engaging with what appeared to be a legitimate investment opportunity. ‘The scammer was extraordinarily believable,’ Kim Sawyer said. ‘He had a British accent, used all the right financial market terms and knew how to induce us by appearing credible every time.’

UNODC officials warn that these operations extend beyond fraud, forming part of a broader criminal ecosystem driven by organised scam networks, involving human trafficking, corruption, and money laundering.

‘We need to be looking into prosecuting high-level criminals, following the money through financial investigations and identifying the giant networks that operate behind these operations,’ said Delphine Schantz, UNODC’s regional representative for Southeast Asia and the Pacific.

Authorities say the scale and complexity of these crimes require a coordinated global response to dismantle scam networks effectively. ‘The complexity of these crimes requires an equally complex, whole-of-government approach and enhanced coordination among governments, financial intelligence units and digital banks,’ Schantz added.

Investigations in countries such as the Philippines and Cambodia have revealed how scam networks operate on the ground. In Manila, a raid on a former scam compound uncovered facilities used to control trafficked workers and evidence of corruption linked to local officials. ‘How do you prove a cybercrime in 36 hours? It is not possible,’ said the Philippines’ Presidential Anti-Organised Crime Commission (PAOCC) operations director, recalling the challenges investigators faced during early raids.

In Cambodia, international prosecutors and investigators have focused on improving cooperation mechanisms, including extradition, asset recovery, and the handling of digital evidence. These efforts are seen as critical in addressing the cross-border nature of scam networks.

Despite increased enforcement efforts, these networks continue to adapt and relocate, maintaining a global reach. At recent international meetings, including a summit in Bangkok involving nearly 60 countries and major technology firms, officials agreed on the need for shared intelligence, joint investigations and coordinated prosecutions.

Victims continue to call for stronger responses. ‘The scammer works twice: they take your money, and they take your soul. They really do. They take your self-worth. And then, you feel like you’re being scammed again, by the authorities’ lack of response,’ Sawyer said.

Cyber operation led by INTERPOL dismantles 45,000+ malicious IP addresses

An INTERPOL-coordinated operation targeting phishing, malware, and ransomware infrastructure has resulted in the takedown of more than 45,000 malicious IP addresses and servers.

Law enforcement agencies from 72 countries and territories participated in Operation Synergia III, which ran from 18 July 2025 to 31 January 2026. The operation resulted in 94 arrests, with 110 additional individuals under investigation. A total of 212 electronic devices and servers were seized.

During the operation, INTERPOL processed threat data into actionable intelligence, facilitated cross-border coordination, and provided tactical operational support to participating countries. Preliminary investigations informed a series of coordinated national actions, including searches of identified locations and the disruption of malicious cyber infrastructure.

Several investigations remain ongoing. Preliminary case reports illustrate the range of criminal methods. For instance, in Macau, China, law enforcement identified more than 33,000 phishing and fraudulent websites impersonating casinos, banks, government portals, and payment services.

The sites were used to collect payments via fraudulent top-up mechanisms or to harvest users’ personal and financial data.

In Togo, police arrested 10 suspects operating from a residential location. The group’s activities included unauthorised access to social media accounts and social engineering schemes such as romance fraud and sextortion.

After compromising accounts, suspects contacted the account holder’s connections, impersonating the original user to initiate fraudulent relationships or solicit money transfers from secondary victims.

In Bangladesh, police arrested 40 suspects and seized 134 electronic devices linked to a range of schemes, including fraudulent loan and employment offers, identity theft, and credit card fraud.

INTERPOL collaborated with private sector partners Group-IB, Trend Micro, and S2W to monitor illicit cyber activity and identify malicious servers during the operation.

Major tech firms pledge to fight online fraud

Major technology and consumer-facing companies, including Google, Amazon, and OpenAI, have signed the ‘Industry Accord Against Online Scams and Fraud’ to share threat intelligence and strengthen defences against online fraud.

The voluntary pact brings together 11 signatories: Amazon, Adobe, Google, Levi Strauss & Co., LinkedIn, Match Group, Microsoft, Meta, OpenAI, Pinterest, and Target. It aims to improve coordination among companies and strengthen cooperation with governments, law enforcement, and NGOs.

The accord commits to sharing intelligence on criminal networks, using AI to detect fraud, and strengthening verification for financial transactions. Participating companies will also provide clearer reporting channels for users and encourage governments to prioritise scam prevention.

Executives emphasised that tackling scams requires collective effort. Meta’s Nathaniel Gleicher said the accord enables companies to share insights beyond individual cases, while Microsoft’s Steven Masada highlighted the need for faster collaboration to disrupt scams and track perpetrators globally.

The move comes as online scams grow in scale and sophistication, aided by AI-generated content and cross-platform operations. Consumers lost over $16 billion to online scams in 2024, prompting firms to boost safety features and push for stronger regulations and law enforcement.

Europe aims to tighten AI rules and personal data standards

The European Council has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.

Young investors warned on crypto and AI advice

Australia’s financial regulator has warned young investors to be cautious with social media influencers and AI chatbots. A survey by the Australian Securities and Investments Commission found one in four Gen Z Australians invest in crypto, often guided by online content.

The survey of 1,127 participants aged 18 to 28 showed 63% use social media for financial information, 18% rely on AI platforms, and 30% consult YouTube. AI was the most trusted source at 64%, but over half still trust influencers and social media despite possible misinformation.

ASIC previously issued warnings to 18 influencers suspected of promoting high-risk products without a licence. Commissioner Alan Kirkland said some social media marketing promotes crypto scams or risky super switches that threaten young people’s key assets.

The regulator is also watching AI financial guidance. Personalised advice from unlicensed sources is illegal, and young investors should carefully check sources, especially as crypto exchanges increasingly use AI bots for trading guidance.

Phishing attack on Starbucks employee portal exposes nearly 900 workers

Starbucks has disclosed a data breach affecting 889 employees after attackers gained unauthorised access to Starbucks Partner Central accounts, the internal platform workers use to manage their employment details, payroll, and benefits information.

The company discovered suspicious activity on 6 February 2026, with investigators finding that accounts had been compromised between 19 January and 11 February.

Attackers obtained valid login credentials by directing employees to fraudulent websites designed to impersonate the legitimate Partner Central login page, a phishing tactic that allowed them to authenticate into real accounts without ever directly breaching Starbucks’ core infrastructure.
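A common first line of defence against this tactic is strict host checking before credentials are entered, which browsers, password managers, and proxy filters all implement in some form. A toy sketch of the idea, using a hypothetical allowlisted hostname rather than Starbucks' real portal:

```python
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; a real deployment would ship
# the genuine portal hostname(s).
LEGIT_LOGIN_HOSTS = {"partnercentral.example.com"}

def is_legitimate_login(url: str) -> bool:
    """True only if the URL's host is an allowed host or a subdomain of one.

    Matching on the full hostname defeats the classic lookalike trick of
    prepending the real domain, e.g. partnercentral.example.com.evil.net.
    """
    host = (urlparse(url).hostname or "").lower()
    return any(host == h or host.endswith("." + h) for h in LEGIT_LOGIN_HOSTS)
```

The same exact-origin logic is why password managers that refuse to autofill on an unrecognised host blunt credential phishing: the lookalike page simply never receives the saved password.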

The exposed data included full names, Social Security numbers, dates of birth, and financial account and banking routing numbers linked to direct deposit records.

Starbucks notified law enforcement, strengthened security controls on Partner Central, and confirmed the breach does not affect customers. The company is offering affected employees two years of free credit monitoring and identity protection through Experian IdentityWorks.

Cybersecurity experts have warned that the exposed data, including Social Security numbers and financial identifiers, retains value to criminal groups for years and cannot simply be reset like a password.
