Ransomware gangs feud after M&S cyberattack

A turf war has erupted between two major ransomware gangs, DragonForce and RansomHub, following cyberattacks on UK retailers including Marks & Spencer and Harrods.

Security experts warn that the feud could result in companies being extorted multiple times as criminal groups compete to control the lucrative ransomware-as-a-service (RaaS) market.

DragonForce, a predominantly Russian-speaking group, reportedly triggered the conflict by rebranding as a cartel and expanding its affiliate base.

Tensions escalated after RansomHub’s dark-web site was taken offline in what is believed to be a hostile move by DragonForce, prompting retaliation through digital vandalism.

Cybersecurity analysts say the breakdown in relationships between hacking groups has created instability, increasing the likelihood of future attacks. Experts also point to a growing risk of follow-up extortion attempts by affiliates when criminal partnerships collapse.

The rivalry reflects the ruthless dynamics of the ransomware economy, which is forecast to cost businesses $10 trillion globally by the end of 2025. Victims now face not only technical challenges but also the legal and financial fallout of navigating increasingly unpredictable criminal networks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Cyber Command proposes $5M AI initiative for FY2026 budget

US Cyber Command is seeking $5 million in its fiscal year 2026 budget to launch a new AI project to advance data integration and operational capabilities.

While the amount represents a small fraction of the command’s $1.3 billion research and development (R&D) portfolio, the effort reflects growing emphasis on incorporating AI into cyber operations.

The initiative follows congressional direction set in the fiscal year (FY) 2023 National Defense Authorization Act, which tasked Cyber Command and the Department of Defense’s Chief Information Officer—working with the Chief Digital and Artificial Intelligence Officer, DARPA, the NSA, and the Undersecretary of Defense for Research and Engineering—to produce a five-year guide and implementation plan for rapid AI adoption.

That roadmap, developed shortly afterwards, identified priorities for deploying AI systems, applications, and supporting data processes across cyber forces.

Cyber Command formed an AI task force within its Cyber National Mission Force (CNMF) to operationalise these priorities. The newly proposed funding would support the task force’s efforts to establish core data standards, curate and tag operational data, and accelerate the integration of AI and machine learning solutions.

Known as Artificial Intelligence for Cyberspace Operations, the project will focus on piloting AI technologies using an agile 90-day cycle. This approach is designed to rapidly assess potential solutions against real-world use cases, enabling quick iteration in response to evolving cyber threats.

Budget documents indicate the CNMF plans to explore how AI can enhance threat detection, automate data analysis, and support decision-making processes. The command’s Cyber Immersion Laboratory will be essential in testing and evaluating these cyber capabilities, with external organisations conducting independent operational assessments.

The AI roadmap identifies five categories for applying AI across Cyber Command’s enterprise: vulnerabilities and exploits; network security, monitoring, and visualisation; modelling and predictive analytics; persona and identity management; and infrastructure and transport systems.

To fund this effort, Cyber Command plans to shift resources from its operations and maintenance account into its R&D budget as part of the transition from FY2025 to FY2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humanitarian, peace, and media sectors join forces to tackle harmful information

At the WSIS+20 High-Level Event in Geneva, a powerful session brought together humanitarian, peacebuilding, and media development actors to confront the growing threat of disinformation, more broadly reframed as ‘harmful information.’ Panellists emphasised that false or misleading content, whether deliberately spread or unintentionally harmful, can have dire consequences for already vulnerable populations, fuelling violence, eroding trust, and distorting social narratives.

The session moderator, Caroline Vuillemin of Fondation Hirondelle, underscored the urgency of uniting these sectors to protect those most at risk.

Hans-Peter Wyss of the Swiss Agency for Development and Cooperation presented the ‘triple nexus’ approach, advocating for coordinated interventions across humanitarian, development, and peacebuilding efforts. He stressed the vital role of trust, institutional flexibility, and the full inclusion of independent media as strategic actors.

Philippe Stoll of the ICRC detailed an initiative that focuses on the tangible harms of information—physical, economic, psychological, and societal—rather than debating truth. That initiative, grounded in a ‘detect, assess, respond’ framework, works from local volunteer training up to global advocacy and research on emerging challenges like deepfakes.

Donatella Rostagno of Interpeace shared field experiences from the Great Lakes region, where youth-led efforts to counter misinformation have created new channels for dialogue in highly polarised societies. She highlighted the importance of inclusive platforms where communities can express their own visions of peace and hear others’.

Meanwhile, Tammam Aloudat of The New Humanitarian critiqued the often selective framing of disinformation, urging support for local journalism and transparency about political biases, including the harm caused by omission and silence.

The session concluded with calls for sustainable funding and multi-level coordination, recognising that responses must be tailored locally while engaging globally. Despite differing views, all panellists agreed on the need to shift from a narrow focus on disinformation to a broader and more nuanced understanding of information harm, grounded in cooperation, local agency, and collective responsibility.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Rights before risks: Rethinking quantum innovation at WSIS+20

At the WSIS+20 High-Level Event in Geneva, a powerful call was made to ensure the development of quantum technologies remains rooted in human rights and inclusive governance. A UNESCO-led session titled ‘Human Rights-Centred Global Governance of Quantum Technologies’ presented key findings from a new issue brief co-authored with Sciences Po and the European University Institute.

It outlined major risks—such as quantum’s dual-use nature threatening encryption, a widening technological divide, and severe gender imbalances in the field—and urged immediate global action to build safeguards before quantum capabilities mature.

UNESCO’s Guilherme Canela emphasised that innovation and human rights are not mutually exclusive but fundamentally interlinked, warning against a ‘false dichotomy’ between the two. Lead author Shamira Ahmed highlighted the need for proactive frameworks to ensure quantum benefits are equitably distributed and not used to deepen global inequalities or erode rights.

With 79% of quantum firms lacking female leadership and a mere 1 in 54 job applicants being women, the gender gap was called ‘staggering.’ Ahmed proposed infrastructure investment, policy reforms, capacity development, and leveraging the UN’s International Year of Quantum to accelerate global discussions.

Panellists echoed the urgency. Constance Bommelaer de Leusse from Sciences Po advocated for embedding multistakeholder participation into governance processes and warned of a looming ‘quantum arms race.’ Professor Pieter Vermaas of Delft University urged moving from talk to international collaboration, suggesting the creation of global quantum research centres.

Journalist Elodie Vialle raised alarms about quantum’s potential to supercharge surveillance, endangering press freedom and digital privacy, and underscored the need to close the cultural gap between technologists and civil society.

Overall, the session championed a future where quantum technology is developed transparently, governed globally, and serves as a digital public good, bridging divides rather than deepening them. Speakers agreed that the time to act is now, before today’s opportunities become tomorrow’s crises.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

How agentic AI is transforming cybersecurity

Cybersecurity is gaining a new teammate, one that never sleeps and acts independently. Agentic AI doesn’t wait for instructions. It detects threats, investigates, and responds in real time. This new class of AI is beginning to change the way we approach cyber defence.

Unlike traditional AI systems, Agentic AI operates with autonomy. It sets objectives, adapts to environments, and self-corrects without waiting for human input. In cybersecurity, this means instant detection and response, beyond simple automation.

With networks more complex than ever, security teams are stretched thin. Agentic AI offers relief by executing actions like isolating compromised systems or rewriting firewall rules. This technology promises to ease alert fatigue and keep up with evasive threats.
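To make the idea concrete, here is a minimal Python sketch of the kind of autonomous detect-and-respond loop described above: an agent scores an alert and, above a confidence threshold, contains the host without waiting for a human. All names and thresholds (Alert, isolate_host, block_ip, the 0.9 cut-off) are illustrative assumptions, not any vendor’s actual API.

```python
# Minimal sketch of an autonomous detect -> decide -> respond loop.
# The classes, functions and thresholds below are illustrative stubs;
# a production agent would call real EDR and firewall APIs instead.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    source_ip: str
    score: float  # 0.0 (benign) to 1.0 (malicious), e.g. from an ML classifier

ISOLATE_THRESHOLD = 0.9   # act autonomously above this confidence
REVIEW_THRESHOLD = 0.6    # otherwise queue for a human analyst

def isolate_host(host: str) -> None:
    print(f"[action] isolating {host} from the network")

def block_ip(ip: str) -> None:
    print(f"[action] pushing firewall rule to block {ip}")

def handle(alert: Alert) -> str:
    """Decide and act without waiting for human input on high-confidence alerts."""
    if alert.score >= ISOLATE_THRESHOLD:
        isolate_host(alert.host)
        block_ip(alert.source_ip)
        return "contained"
    if alert.score >= REVIEW_THRESHOLD:
        return "escalated to analyst"
    return "logged"

if __name__ == "__main__":
    for a in [Alert("srv-web-01", "203.0.113.7", 0.95),
              Alert("wks-fin-12", "198.51.100.4", 0.70)]:
        print(a.host, "->", handle(a))
```

In practice the same decision logic would sit in front of real endpoint and firewall integrations, with lower-confidence alerts routed to analysts so that autonomy does not replace human checks entirely.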

A 2025 Deloitte report says 25% of GenAI-using firms will pilot Agentic AI this year. SailPoint found that 98% of organisations plan to expand AI agent use in the next 12 months. But rapid adoption also raises concern: 96% of tech workers see AI agents as security risks.

The integration of AI agents is expanding to cloud, endpoints, and even physical security. Yet with new power comes new vulnerabilities—from adversaries mimicking AI behaviour to the risk of excessive automation without human checks.

Key challenges include ethical bias, unpredictable errors, and uncertain regulation. In sectors like healthcare and finance, oversight and governance must keep pace. The solution lies in balanced control and continuous human-AI collaboration.

Cybersecurity careers are shifting in response. Hybrid roles such as AI Security Analysts and Threat Intelligence Automation Architects are emerging. To stay relevant, professionals must bridge AI knowledge with security architecture.

Agentic AI is redefining cybersecurity. It boosts speed and intelligence but demands new skills and strong leadership. Adaptation is essential for those who wish to thrive in tomorrow’s AI-driven security landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US targets Southeast Asia to stop AI chip leaks to China

The US is preparing stricter export controls on high-end Nvidia AI chips destined for Malaysia and Thailand, in a move to block China’s indirect access to advanced GPU hardware.

According to sources cited by Bloomberg, the new restrictions would require exporters to obtain licences before sending AI processors to either country.

The change follows reports that Chinese engineers have hand-carried data to Malaysia for AI training after Singapore began restricting chip re-exports.

Washington suspects Chinese firms are using Southeast Asian intermediaries, including shell companies, to bypass existing export bans on AI chips like Nvidia’s H100.

Although some easing has occurred between the US and China in areas such as ethane and engine components, Washington remains committed to its broader decoupling strategy. The proposed measures will reportedly include safeguards to prevent regional supply chain disruption.

Malaysia’s Trade Minister confirmed earlier this year that the US had requested detailed monitoring of all Nvidia chip shipments into the country.

As the global race for AI dominance intensifies, Washington appears determined to tighten enforcement and limit Beijing’s access to advanced computing power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware disrupts Ingram Micro’s systems and operations

Ingram Micro has confirmed a ransomware attack that affected internal systems and forced some services offline. The global IT distributor says it acted quickly to contain the incident, implemented mitigation steps, and involved cybersecurity experts.

The company is working with a third-party firm to investigate the breach and has informed law enforcement. Order processing and shipping operations have been disrupted while systems are being restored.

While details remain limited, the attack is reportedly linked to the SafePay ransomware group.

According to BleepingComputer, the gang exploited Ingram’s GlobalProtect VPN to gain access last Thursday.

In response, Ingram Micro shut down multiple platforms, including GlobalProtect VPN and its Xvantage AI platform. Employees were instructed to work remotely as a precaution during the response effort.

SafePay first appeared in late 2024 and has targeted over 220 companies. It often breaches networks using password spraying and compromised credentials, primarily through VPNs.
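The password-spraying pattern mentioned above is easy to illustrate: one source tries a few common passwords against many different accounts, staying under per-account lockout limits. The Python sketch below, using made-up log entries and an assumed threshold, shows how defenders can surface that signature from failed VPN logins; it is a generic illustration, not a SafePay-specific detection.

```python
# Toy illustration of spotting a password-spraying pattern in failed logins:
# one source IP failing against unusually many distinct accounts.
# The sample events and the min_accounts threshold are assumptions for the example.
from collections import defaultdict

failed_logins = [
    # (source_ip, username) pairs from VPN authentication logs
    ("203.0.113.9", "alice"), ("203.0.113.9", "bob"),
    ("203.0.113.9", "carol"), ("203.0.113.9", "dave"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),
]

def spraying_suspects(events, min_accounts=4):
    """Flag source IPs whose failures span many distinct accounts."""
    accounts_per_ip = defaultdict(set)
    for ip, user in events:
        accounts_per_ip[ip].add(user)
    return [ip for ip, users in accounts_per_ip.items() if len(users) >= min_accounts]

print(spraying_suspects(failed_logins))  # ['203.0.113.9']
```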

Ingram Micro has not disclosed what data was accessed or the size of the ransom demand.

The company apologised for the disruption and said it is working to restore systems as quickly as possible.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Pakistan launches AI customs system to tackle tax evasion

Pakistan has launched its first AI-powered Customs Clearance and Risk Management System (RMS) to cut tax evasion, reduce corruption, and modernise port operations by automating inspections and declarations.

The initiative, part of broader digital reforms, is led by the Federal Board of Revenue (FBR) with support from the Intelligence Bureau.

By minimising human involvement in customs procedures, the system enables faster, fairer, and more transparent processing. It uses AI and automated bots to assess goods’ value and classification, improve risk profiling, and streamline green channel clearances.
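As a rough illustration of the risk-profiling and green-channel idea, the Python sketch below scores a declaration against a few red flags and routes it to inspection above a threshold. The features, weights, HS codes, and threshold are invented for the example; the FBR system’s actual model and data fields are not public.

```python
# Hypothetical sketch of risk-based channel selection for customs declarations.
# All fields, weights and codes are invented for illustration only.
def risk_score(declaration: dict) -> float:
    """Combine simple red flags into a 0..1 risk score."""
    score = 0.0
    if declaration["declared_value"] < 0.6 * declaration["reference_value"]:
        score += 0.5   # possible under-invoicing
    if declaration["hs_code"] in {"8471", "8517"}:  # example 'watch list' of HS codes
        score += 0.2
    if declaration["importer_history"] == "new":
        score += 0.3
    return min(score, 1.0)

def assign_channel(declaration: dict, threshold: float = 0.5) -> str:
    return "red (inspect)" if risk_score(declaration) >= threshold else "green (clear)"

example = {"declared_value": 4_000, "reference_value": 10_000,
           "hs_code": "8471", "importer_history": "new"}
print(assign_channel(example))  # red (inspect)
```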

Early trials showed a 92% boost in system performance and more than doubled efficiency in identifying compliant cargo.

Prime Minister Shehbaz Sharif praised the collaboration between the FBR and IB, calling the initiative a key pillar of national economic reform. He urged full integration of the system into the country’s digital infrastructure and reaffirmed tax reform as a government priority.

The AI system is also expected to close loopholes in under-invoicing and misdeclaration, which have long been used to avoid duties.

Meanwhile, video analytics technology is being trialled to detect factory tax fraud, with early tests showing 98% accuracy. In recent enforcement efforts, authorities recovered Rs178 billion, highlighting the potential of data-driven approaches in tackling fiscal losses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Samsung profits slump as US chip ban hits AI exports

Samsung Electronics expects its second-quarter operating profit to fall by more than half, citing Washington’s export controls on advanced AI chips to China.

The company announced a projected 56% year-on-year drop in operating profit, falling to 4.6 trillion won ($3.3 billion), with revenue down 6.5% from the previous quarter.

The semiconductor division, a core part of Samsung’s business, suffered due to reduced utilisation and inventory value adjustments.

US restrictions have made it difficult for South Korea’s largest conglomerate to ship high-end chips to China, forcing some of its production lines to run below capacity.

Despite weak performance in the foundry sector, the memory business remained relatively stable. Analysts pointed to weaker-than-expected sales of HBM chips used for AI and a drop in NAND storage prices, while a declining won-dollar exchange rate further pressured earnings.

Looking ahead, Samsung expects a modest recovery as demand for memory chips, mainly from AI-driven data centres, improves in the year’s second half.

The company is also facing political pressure from Washington, with threats of new tariffs prompting talks between Seoul and the US administration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SatanLock ends operation amid ransomware ecosystem turmoil

SatanLock, a ransomware group active since April 2025, has announced it is shutting down. The group quickly gained notoriety, claiming 67 victims on its now-defunct dark web leak site.

Cybersecurity firm Check Point says more than 65% of these victims had already appeared on other ransomware leak pages, which suggests the group may have used shared infrastructure or tried to hijack previously compromised networks.

Such tactics reflect growing disorder within the ransomware ecosystem, where victim double-posting is on the rise. SatanLock may have been part of a broader criminal network, as it has ties to families such as Babuk-Bjorka and GD Lockersec.

A shutdown message was posted on the gang’s Telegram channel and leak page, announcing plans to leak all stolen data. The reason for the sudden closure has not been disclosed.

Another group, Hunters International, announced its disbandment just days earlier.

Unlike SatanLock, Hunters offered free decryption keys to its victims in a parting gesture.

These back-to-back exits signal possible pressure from law enforcement, rivals, or internal collapse in the ransomware world. Analysts are watching closely to see whether this trend continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!