Swisscom says AI and geopolitics are reshaping the cyber threat landscape

Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations.

The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies.

On AI, Swisscom describes insecure AI use as a risk multiplier. While AI can improve productivity, the report warns that poor governance, weak visibility into models, and uncontrolled use of AI tools in operational environments can expand attack surfaces, affect data quality, and create new compliance challenges.

Software supply chains are also identified as a persistent vulnerability. Swisscom says a single compromised component or manipulated update process can have far-reaching consequences across interconnected systems, making software integrity, origin verification, and traceability increasingly important as mitigation measures.

The convergence of information technology and operational technology is presented as another growing area of concern. In sectors such as energy, healthcare, manufacturing, and building automation, incidents can have consequences that go well beyond financial loss, affecting critical infrastructure, production, and even human safety.

The report also places greater emphasis on digital sovereignty, arguing that organisations need clearer visibility over where data is processed, which legal regimes apply, and how dependent they are on cloud and technology providers. In that sense, Swisscom frames cybersecurity less as a narrow IT function and more as a strategic governance issue tied to resilience, control, and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Ransomware accounts for 90% of cyber losses in manufacturing, claims data shows

Ransomware is responsible for 90% of total cyber-related financial losses in the manufacturing sector, despite accounting for only 12% of claim volume by number, according to an analysis of insurance claims data published by Resilience.

The findings indicate that while ransomware incidents are not the most frequently filed claim type, they produce disproportionately large financial losses when they occur. The manufacturing sector’s low tolerance for operational downtime is identified as a contributing factor to loss severity.

Additional findings from the claims dataset include:

  • 30% of manufacturing claims are linked to phishing and transfer fraud
  • 26% of total losses are associated with multi-factor authentication (MFA) misconfiguration
  • 12% of claims involve wrongful data collection

The report identifies MFA misconfiguration as a notable area of exposure, alongside procedural gaps in financial transfer controls. Recommended mitigation measures include auditing MFA deployment, implementing transfer verification procedures, and investing in ransomware containment capabilities.

EU and Republic of Korea launch aviation partnership on technical cooperation and cyber resilience

European and South Korean aviation authorities are conducting a three-week series of technical exchanges in Seoul, covering safety oversight, airspace management, and cybersecurity.

The European Union Aviation Safety Agency (EASA) and South Korea’s Ministry of Land, Infrastructure and Transport are participating under the EU–Republic of Korea Aviation Partnership Project, an EU-funded initiative announced by the European External Action Service (EEAS).

The programme began with a three-day session on the International Civil Aviation Organisation’s Universal Safety Oversight Audit Programme (USOAP), which assesses national aviation safety oversight systems. EASA presented findings from its most recent ICAO audit, with discussions covering oversight frameworks, organisational structures, and lessons identified.

A workshop on performance-based navigation and airspace management followed, addressing procedures to improve the predictability and efficiency of aircraft arrivals, including at airports with parallel runways.

A third workshop on aviation cybersecurity is scheduled for the coming week. It will cover security considerations across aviation systems, including aircraft certification processes and air traffic management infrastructure.

The activities are designed to facilitate technical exchange between Korean and European stakeholders across the aviation sector, according to EASA.

Victorian officials outline approach to managing AI risks in public sector

Ian Pham at the Victorian Managed Insurance Authority (VMIA) outlined approaches to managing AI adoption during the PSN Victorian Government Cyber Security Showcase. Organisations face the challenge of adopting AI while maintaining effective risk management as these systems become more embedded in government operations.

Cybersecurity teams have traditionally operated with a risk-averse approach focused on minimising threats. Such an approach can slow innovation when applied to AI systems used in public sector environments.

A shift towards managing risk in line with organisational objectives is presented as necessary. This includes prioritising relevant risks and moving from reactive responses towards supporting decision-making processes.

The recommended approach to AI adoption involves secure environments for experimentation with defined guardrails, including synthetic or non-sensitive data, monitoring mechanisms, usage conditions, and identity and access controls. Exposure can then be increased gradually, supported by governance and continuous reassessment.

Risks linked to AI systems include data leakage, privacy concerns, unauthorised use, and data quality issues. These risks are described as requiring visibility and management, alongside organisational awareness and engagement to support confidence in AI use.

AI model raises security risks, prompting release concerns, reports say

Anthropic is reported to have declined to release its latest AI model, Mythos, citing potential risks to global cybersecurity. The system is reported to be capable of identifying vulnerabilities across major operating systems and web browsers, raising concerns about possible misuse.

Reports indicate that the company is investigating claims that unauthorised actors may have accessed the model. A reported breach has intensified debate about whether technology firms can maintain control over increasingly powerful AI systems as development accelerates.

The Mythos model is described as part of a new class of AI tools capable of analysing complex digital environments and identifying weaknesses at scale. Such capabilities could support cybersecurity efforts, but may also present risks if exploited by malicious actors.

The case has contributed to discussions within the technology sector about balancing innovation with efforts to manage potential risks to digital infrastructure.

Singapore’s HTX signs agreements to advance public safety technologies

The Home Team Science and Technology Agency has signed 10 agreements with partners across government, industry and academia to advance public safety technologies. The announcement was made at MTX 2026.

The partnerships focus on areas including AI, space technology and cybersecurity, aiming to accelerate development of next-generation capabilities for public safety operations.

Several agreements involve industry collaboration to apply commercial innovations, while others expand research links with academic institutions to deepen expertise in areas such as forensics and autonomous systems.

HTX said the partnerships will strengthen collaboration, innovation and knowledge sharing across the public safety ecosystem.

CISA releases guidance on Zero Trust adoption in critical infrastructure systems

The Cybersecurity and Infrastructure Security Agency, alongside several US government partners, has released guidance to support the adoption of Zero Trust principles in operational technology systems. The document aims to strengthen cybersecurity across critical infrastructure.

The guide outlines practical steps to address risks linked to increasingly interconnected and remotely operated systems. It highlights vulnerabilities created by expanded attack surfaces and evolving cybersecurity threats.

Key recommendations include improving asset visibility, securing supply chains and implementing stronger identity and access controls. The guidance also addresses challenges such as legacy systems and operational constraints.

Officials say the approach will help organisations reduce risks and improve resilience without disrupting essential operations.

UK NCSC publishes framework on adversarial attacks against AI systems

The UK’s National Cyber Security Centre has published a paper on adversarial attacks against machine learning and AI, setting out a framework for understanding attacks that target the operation of ML models. The paper introduces a common language intended to support awareness, threat modelling, and collaboration on AI security.

The NCSC says ML systems present a larger attack surface than traditional software because of rapid development cycles, unique architectures, large model sizes, and the widespread use of open-source components. It distinguishes adversarial machine learning attacks from broader cyberattacks by focusing on those that exploit vulnerabilities specific to the architecture, training, or operation of ML models.

The paper defines seven attack classes:

  • model characterisation
  • model inversion
  • training data poisoning
  • malicious model training
  • model input manipulation
  • model artifact manipulation
  • model hardware attacks

It says these attacks can occur across development, training, and deployment, and may target both hardware and software components.
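Model input manipulation, for instance, typically means nudging an input just enough to flip a model's decision while the change remains small. A toy sketch of the idea against a hand-made linear scorer (the weights, threshold, and inputs are all illustrative, not drawn from the NCSC paper; real attacks target far larger models):

```python
# Toy illustration of model input manipulation (adversarial perturbation).
WEIGHTS = [0.9, -0.4, 0.2]   # hypothetical model weights
THRESHOLD = 0.0              # score >= 0 -> class "benign"

def score(x):
    return sum(w * xi for w, xi in zip(WEIGHTS, x))

def classify(x):
    return "benign" if score(x) >= THRESHOLD else "malicious"

def perturb(x, eps=0.3):
    """FGSM-style step: shift each feature against the sign of its weight,
    pushing the overall score toward the opposite class."""
    return [xi - eps * (1 if w > 0 else -1) for w, xi in zip(WEIGHTS, x)]

x = [0.2, 0.1, 0.3]          # original input
x_adv = perturb(x)           # small per-feature shift
print(classify(x), "->", classify(x_adv))   # benign -> malicious
```

The other attack classes in the paper work on different stages of the same pipeline: poisoning corrupts what the model learns, while inversion and characterisation extract information back out of it.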

The NCSC also maps those attack classes against eight potential goals of a malicious actor, including reconnaissance, degrading performance, wasting resources, embedding hidden behaviours, evading detection, extracting data, and gaining wider system access. The table on pages 11-12 links each class to one or more of those goals.

The paper argues that standard cybersecurity controls remain foundational, but says ML-specific weaknesses often require dedicated mitigations that are not yet mature or widely deployed.

It calls for more research into underdeveloped areas, such as model-hardware attacks and malicious model training, and recommends greater use of frameworks and guidance from the NCSC, ETSI, and the UK government’s AI cybersecurity code of practice.

MEPs consider stronger EU measures on cyberbullying and online harassment

The European Parliament has voted on a resolution addressing cyberbullying and online harassment through targeted criminal provisions and platform responsibility, following a debate with the Commission.

The debate focused on whether EU law should go further in addressing harmful online behaviour, including through targeted criminal provisions and stronger obligations for platforms. Parliament’s plenary briefing said MEPs were expected to press the Commission on what more can be done beyond existing Digital Services Act protections.

Draft resolution texts tabled in Parliament say MEPs want the Commission to consider making cyberbullying a criminal offence under EU law and to address legal gaps in the current framework.

The vote followed the Commission’s recent action plan against cyberbullying, which Parliament said is built around a support app, coordination of national approaches, and the promotion of safer digital practices.

The debate also comes after MEPs heard testimony earlier this year from Jackie Fox, whose daughter Coco’s case led to Ireland’s Harassment, Harmful Communications and Related Offences Act 2020, known as Coco’s Law. Parliament’s briefing notes that while EU initiatives address parts of the issue, there is still no EU-wide anti-online bullying law or commonly agreed definition at the European or international level.

Crypto crackdown intensifies in Kazakhstan over illegal exchanges

Kazakhstan’s financial regulator has warned that several major cryptocurrency exchanges are operating without the licences required under the country’s current digital asset framework, reinforcing its strict authorisation regime.

The Astana Financial Services Authority (AFSA) identified prominent platforms, including HTX, Bitget, OKX, and MEXC, as operating without the necessary permits. Under existing rules, only entities licensed within the Astana International Financial Centre are allowed to provide regulated digital asset services.

Authorities stressed that international popularity does not exempt platforms from complying with local law. They also warned that unauthorised exchanges can expose users to financial losses, data breaches, and fraudulent schemes, and urged the public to verify platforms through the official register of licensed firms. AFSA’s website currently shows a regulated ecosystem with dozens of authorised entities across the AIFC framework.

The warning comes amid broader enforcement efforts as Kazakhstan tries to formalise its crypto sector while positioning itself as a regulated regional hub for digital assets. In parallel, law enforcement agencies have reported wider crackdowns on illegal crypto activity, including shadow exchanges and money-laundering networks.

Why does it matter?

Kazakhstan’s tightening enforcement shows a broader push to bring crypto activity into a more formal and supervised market structure. By restricting unlicensed platforms and steering users towards authorised entities, the authorities are trying to reduce exposure to financial crime, improve market transparency, and build credibility for Kazakhstan’s ambition to become a regulated regional digital asset hub.
