UK’s NCSC warns AI could expose software vulnerabilities at scale

The UK's National Cyber Security Centre (NCSC) says that AI is reshaping cybersecurity by exposing vulnerabilities across software ecosystems, and warns that organisations must prepare for a large-scale patch wave: AI enables faster identification and exploitation of weaknesses than traditional defences can handle.

Technical debt, built through years of prioritising short-term efficiency instead of long-term resilience, is now being exposed at scale.

The NCSC notes that these capabilities let attackers identify weaknesses not only faster but more comprehensively, creating pressure on organisations to respond with rapid, coordinated patching strategies across entire technology environments.

The NCSC's recommended approach prioritises internet-facing systems and external attack surfaces, followed by internal infrastructure and critical security assets.

Automated updates and hot patching are encouraged where available, while organisations lacking such capabilities must adopt scalable and risk-based update processes. Legacy systems without support present a particular risk, requiring replacement instead of reliance on patching alone.
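As a rough illustration of what a scalable, risk-based update process could look like, the Python sketch below ranks a hypothetical asset inventory for patching, putting internet-facing systems first and flagging unsupported legacy systems for replacement. The fields, weights, and scores are illustrative assumptions, not values drawn from the NCSC guidance.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    internet_facing: bool    # part of the external attack surface
    security_critical: bool  # e.g. identity providers, firewalls
    vendor_supported: bool   # unsupported legacy systems need replacement
    cvss: float              # severity of the worst unpatched flaw (0-10)

def patch_priority(asset: Asset) -> float:
    """Higher score = patch sooner. Weights are illustrative only."""
    score = asset.cvss
    if asset.internet_facing:
        score += 5.0  # external attack surfaces first, per the guidance
    if asset.security_critical:
        score += 3.0  # then internal infrastructure and security assets
    return score

inventory = [
    Asset("vpn-gateway", True, True, True, 9.8),
    Asset("hr-database", False, False, True, 7.5),
    Asset("legacy-erp", False, False, False, 6.0),
]

# Unsupported systems are flagged for replacement rather than ranked for patching.
to_patch = sorted((a for a in inventory if a.vendor_supported),
                  key=patch_priority, reverse=True)
for a in to_patch:
    print(f"{a.name}: patch priority {patch_priority(a):.1f}")
for a in inventory:
    if not a.vendor_supported:
        print(f"{a.name}: unsupported legacy system, plan replacement")
```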

The NCSC adds that, beyond software updates, the challenge reflects a deeper structural issue within digital ecosystems. Stronger cyber resilience depends on reducing systemic vulnerabilities through secure design practices, improved monitoring, and supply chain readiness.

The agency also warns that organisations that fail to prepare for continuous, large-scale patching cycles risk increased exposure as AI continues to reshape the cybersecurity landscape.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Data access emerges as cornerstone of EU AI plan

The European Commission has unveiled its AI Continent Action Plan, setting out a strategy to strengthen Europe’s position in the global AI landscape. The plan responds to rapid international advances and seeks to accelerate AI adoption across European industry and public services, where progress remains uneven.

Rather than introducing a new regulatory framework, the plan brings together targeted investments and policy measures around five priorities: expanding AI infrastructure, improving access to data, accelerating adoption in strategic sectors, strengthening skills, and supporting the implementation of existing rules.

Access to high-quality and interoperable data is presented as one of the key conditions for scaling AI in Europe. The plan links this objective to the EU’s wider data strategy and to efforts to make cross-border data use more practical, enabling organisations to train and deploy AI systems more effectively while operating within Europe’s transparency and accountability standards.

The broader ambition is to move Europe from fragmented experimentation towards more scalable and trustworthy AI deployment. In that sense, the Action Plan treats data, infrastructure, skills, and implementation capacity as parts of the same competitiveness agenda rather than separate policy tracks.

Why does it matter?

Europe’s AI challenge is no longer only about regulation, but about whether companies and public institutions can actually build and use AI at scale. If access to data remains fragmented across borders, sectors, and technical systems, the EU risks falling further behind competitors that already combine compute, capital, and data more effectively. By putting data access alongside infrastructure and skills, the Commission is signalling that AI competitiveness will depend as much on operational capacity as on rules or research strength.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!  

Agentic AI risks outlined in joint cyber agency guidance

Six cybersecurity agencies have jointly published guidance urging organisations to adopt agentic AI services cautiously. The document warns that greater autonomy can increase cyber risk, particularly as agentic AI is introduced into critical infrastructure, defence, and other mission-critical environments.

The authors say organisations should use agentic AI primarily for low-risk and non-sensitive tasks and should not grant it broad or unrestricted access to sensitive data or critical systems. The guidance also recommends incremental deployment rather than large-scale implementation from the outset.

The document was co-authored by agencies from Australia, the United States, Canada, New Zealand, and the United Kingdom: the Australian Signals Directorate’s Australian Cyber Security Centre, the US Cybersecurity and Infrastructure Security Agency and National Security Agency, the Canadian Centre for Cyber Security, New Zealand’s National Cyber Security Centre, and the UK’s National Cyber Security Centre.

It defines agentic AI as systems composed of one or more agents that rely on AI models, such as large language models, to interpret context, make decisions, and take actions, often without continuous human intervention. The guidance says these systems often combine an LLM-based agent with tools, external data, memory, and planning functions, which expands both capability and attack surface.

The agencies say agentic AI inherits many of the vulnerabilities already associated with large language models while introducing greater complexity and new systemic risks. The document identifies five broad categories of concern: privilege risks, design and configuration risks, behaviour risks, structural risks, and accountability risks.

It warns that over-privileged agents, insecure third-party tools, goal misalignment, emergent or deceptive behaviour, and opaque decision-making chains can all increase the likelihood and impact of compromise. To reduce those risks, the guidance recommends secure design, strong identity management, defence-in-depth, comprehensive testing, threat modelling, progressive deployment, isolation, continuous monitoring, and strict privilege controls.

The agencies also stress that human approval should remain in place for high-impact actions and that agentic AI security should be treated as part of broader cybersecurity governance rather than as a separate discipline. The document concludes by calling for stronger research, collaboration, and agent-specific evaluations as the technology matures.
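As one rough illustration of strict privilege controls combined with human approval for high-impact actions, the Python sketch below gates an agent's tool calls behind an allowlist and an approval step. The tool names, risk tiers, and approval flow are hypothetical; the guidance does not prescribe a particular implementation.

```python
from typing import Any, Callable

# Hypothetical risk tiers: anything not listed here is treated as low risk.
HIGH_RISK = {"modify_config", "transfer_funds"}

def require_human_approval(tool: str, args: dict) -> bool:
    """Stand-in for a real review step (ticket queue, console prompt, etc.)."""
    answer = input(f"Approve {tool} with {args}? [y/N] ")
    return answer.strip().lower() == "y"

def gated_call(tool: str, args: dict, registry: dict[str, Callable]) -> Any:
    """Run a tool only if it is allowlisted, and only after human
    approval when it is classed as high impact."""
    if tool not in registry:
        raise PermissionError(f"{tool} is not on the agent's allowlist")
    if tool in HIGH_RISK and not require_human_approval(tool, args):
        raise PermissionError(f"{tool} denied: human approval withheld")
    return registry[tool](**args)

# The registry grants only the least privilege the agent actually needs.
registry = {
    "search_docs": lambda query: f"results for {query!r}",
    "modify_config": lambda key, value: f"set {key}={value}",
}

print(gated_call("search_docs", {"query": "patch policy"}, registry))
# gated_call("modify_config", {"key": "fw", "value": "off"}, registry)
# would block until a human explicitly approves the high-impact action
```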

Why does it matter?

The guidance matters because it draws a clear line between ordinary AI adoption and agentic systems that can act with far more autonomy inside real operational environments. Once AI tools move from assisting users to making decisions, calling tools, and interacting with sensitive systems, the security challenge shifts from model safety alone to full organisational risk management. That is why the document treats agentic AI not as a niche technical issue, but as a governance and cyber resilience problem that organisations need to control before deploying at scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US military expands AI deployment across classified networks

The US Department of Defense has announced agreements with leading technology firms to deploy advanced AI capabilities across classified military networks. The initiative forms part of a broader effort to position the United States as a more AI-enabled military power.

Companies including OpenAI, Google, Microsoft, Amazon Web Services, NVIDIA, and SpaceX are reported to be involved in supporting deployment within high-security Impact Level 6 and 7 environments. The integration is intended to improve data synthesis, situational awareness, and operational decision-making across defence systems.

The department’s internal platform, GenAI.mil, is also being presented as a central part of this push, with senior officials describing it as a way to put advanced AI tools into the hands of personnel across the department and across different classification levels.

Officials have emphasised that maintaining access to a range of AI providers is important to avoid vendor lock-in and preserve long-term flexibility. Read that way, the move reflects a wider attempt to strengthen national security through advanced technology while keeping the military AI stack diversified rather than dependent on a single company or model family, although that interpretation rests on the Pentagon's reported framing of the agreements rather than on an explicit policy statement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Swisscom says AI and geopolitics are reshaping the cyber threat landscape

Swisscom has published its 2026 Cybersecurity Threat Radar, warning that cyber threats have grown more complex over the past year as geopolitical tensions and disruptive technologies put added pressure on digital systems. The report presents AI, supply chain exposure, digital sovereignty, and operational technology security as four strategic risk areas for organisations.

The report highlights state-linked cyber activity, hybrid influence operations such as disinformation, and supply chain attacks as key drivers of the current threat environment. It argues that digital transformation has increased dependence on cloud services, third-party software, AI systems, and networked industrial infrastructure, making organisations more exposed to cascading failures and external dependencies.

On AI, Swisscom describes insecure AI use as a risk multiplier. While AI can improve productivity, the report warns that poor governance, weak visibility into models, and uncontrolled use of AI tools in operational environments can expand attack surfaces, affect data quality, and create new compliance challenges.

Software supply chains are also identified as a persistent vulnerability. Swisscom says a single compromised component or manipulated update process can have far-reaching consequences across interconnected systems, making software integrity, origin verification, and traceability increasingly important as mitigation measures.
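To make integrity and origin verification concrete, here is a minimal Python sketch that refuses to install an update whose checksum does not match a digest published through a trusted channel. The file name and digest are placeholders; the report does not prescribe specific tooling.

```python
import hashlib

def verify_update(path: str, expected_sha256: str) -> bool:
    """Refuse an update whose SHA-256 digest does not match the one
    published through a trusted, out-of-band channel."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() == expected_sha256.lower():
        print("integrity check passed")
        return True
    print("digest mismatch: do not install")
    return False

# Hypothetical usage: both the file name and the digest are placeholders.
# verify_update("vendor-update.bin", "<digest from vendor advisory>")
```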

The convergence of information technology and operational technology is presented as another growing area of concern. In sectors such as energy, healthcare, manufacturing, and building automation, incidents can have consequences that go well beyond financial loss, affecting critical infrastructure, production, and even human safety.

The report also places greater emphasis on digital sovereignty, arguing that organisations need clearer visibility over where data is processed, which legal regimes apply, and how dependent they are on cloud and technology providers. In that sense, Swisscom frames cybersecurity less as a narrow IT function and more as a strategic governance issue tied to resilience, control, and trust.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Victorian officials outline approach to managing AI risks in public sector

Ian Pham at the Victorian Managed Insurance Authority (VMIA) outlined approaches to managing AI adoption during the PSN Victorian Government Cyber Security Showcase. Organisations face the challenge of adopting AI while maintaining effective risk management as these systems become more embedded in government operations.

Cybersecurity teams have traditionally operated with a risk-averse approach focused on minimising threats. Such an approach can slow innovation when applied to AI systems used in public sector environments.

A shift towards managing risk in line with organisational objectives is presented as necessary. This includes prioritising relevant risks and moving from reactive responses towards supporting decision-making processes.

The recommended path to AI adoption involves secure environments for experimentation with defined guardrails, including synthetic or non-sensitive data, monitoring mechanisms, usage conditions, and identity and access controls. Exposure can then be increased gradually, supported by governance and continuous reassessment.
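One way to picture such guardrails is a sandbox that admits only synthetic or non-sensitive data and logs every admission for monitoring. The classification labels and checks in this Python sketch are hypothetical; no implementation details were presented at the showcase.

```python
# Hypothetical classification labels; a real scheme would follow
# the organisation's data-handling policy.
ALLOWED_CLASSIFICATIONS = {"synthetic", "public"}

def admit_to_sandbox(dataset: dict) -> bool:
    """Admit a dataset to the AI experimentation sandbox only if it is
    labelled non-sensitive, and log the decision for monitoring."""
    label = dataset.get("classification")
    if label not in ALLOWED_CLASSIFICATIONS:
        print(f"rejected: {dataset['name']} is classified {label!r}")
        return False
    print(f"admitted: {dataset['name']} (decision logged for monitoring)")
    return True

admit_to_sandbox({"name": "claims-sample", "classification": "synthetic"})
admit_to_sandbox({"name": "client-records", "classification": "restricted"})
```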

Risks linked to AI systems include data leakage, privacy concerns, unauthorised use, and data quality issues. These risks are described as requiring visibility and management, alongside organisational awareness and engagement to support confidence in AI use.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reports say Anthropic withheld AI model over cybersecurity risks

Anthropic is reported to have declined to release its latest AI model, Mythos, citing potential risks to global cybersecurity. The system is reported to be capable of identifying vulnerabilities across major operating systems and web browsers, raising concerns about possible misuse.

Reports indicate that the company is investigating claims that unauthorised actors may have accessed the model. A reported breach has intensified debate about whether technology firms can maintain control over increasingly powerful AI systems as development accelerates.

The Mythos model is described as part of a new class of AI tools capable of analysing complex digital environments and identifying weaknesses at scale. Such capabilities could support cybersecurity efforts, but may also present risks if exploited by malicious actors.

The case has contributed to discussions within the technology sector about balancing innovation with efforts to manage potential risks to digital infrastructure.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore’s HTX signs agreements to advance public safety technologies

The Home Team Science and Technology Agency (HTX) has signed 10 agreements with partners across government, industry and academia to advance public safety technologies. The announcement was made at MTX 2026 in Singapore.

The partnerships focus on areas including AI, space technology and cybersecurity, aiming to accelerate development of next-generation capabilities for public safety operations.

Several agreements involve industry collaboration to apply commercial innovations, while others expand research links with academic institutions to deepen expertise in areas such as forensics and autonomous systems.

HTX said the partnerships will strengthen collaboration, innovation and knowledge sharing across the public safety ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study examines trust and fraud prevention in AI-enabled banking in Bangladesh

A new non-peer-reviewed preprint examines how AI is shaping e-banking in Bangladesh, focusing on consumer decision-making, ethical trust, and fraud prevention.

The paper links AI adoption in digital banking to customer experience, risk management, process automation, financial inclusion and regulatory compliance, arguing that these factors are increasingly important as Bangladesh’s financial sector becomes more digital.

The study uses a narrative literature review of recent research from 2024 and 2025 and builds its conceptual model on the UTAUT2 framework, which is commonly used to explain technology adoption.

The authors extend the model by adding ethical trust and fraud prevention as mediating mechanisms, arguing that consumers are more likely to use AI-enabled banking services when they see them as useful, secure, transparent and fair.
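In generic mediation notation, that hypothesised structure could be written roughly as follows, with X standing for the UTAUT2 predictors, M1 and M2 for the ethical-trust and fraud-prevention mediators, and Y for the intention to adopt. The equations are a schematic reading of the model, not estimates reported in the paper.

```latex
% Schematic mediation structure (illustrative, not from the paper)
\begin{aligned}
  M_1 &= a_1 X + e_1 && \text{(ethical trust)} \\
  M_2 &= a_2 X + e_2 && \text{(fraud prevention)} \\
  Y   &= c' X + b_1 M_1 + b_2 M_2 + e_3 && \text{(intention to adopt)}
\end{aligned}
```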

Ethical trust is treated as a central part of adoption. The paper identifies transparency, algorithmic fairness, data privacy, reliability, accountability and digital inclusion as key factors shaping how users respond to AI in banking.

It also notes that explainable AI tools and localised interfaces, including Bengali-language systems, could help reduce uncertainty for users with lower digital literacy.

Fraud prevention is presented as a critical enabler of consumer confidence. The authors point to real-time monitoring, anomaly detection, secure authentication, biometric e-KYC and explainable fraud alerts as tools that can reduce perceived risk.

Additionally, they argue that AI systems should not only detect fraud effectively, but also explain decisions clearly enough for users to trust them.
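As a toy example of what an explainable, real-time fraud alert might look like, the Python sketch below flags transactions that deviate sharply from a user's history and attaches a plain-language reason. The z-score rule and threshold are illustrative assumptions, not methods from the paper.

```python
import statistics

def explainable_fraud_alert(history: list[float], amount: float,
                            threshold: float = 3.0) -> str | None:
    """Flag a transaction whose amount deviates sharply from the user's
    history, and explain why (hypothetical z-score rule)."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return None  # no variation in history to compare against
    z = (amount - mean) / stdev
    if abs(z) > threshold:
        return (f"Flagged: amount {amount:.2f} is {z:.1f} standard "
                f"deviations from your typical {mean:.2f}")
    return None

history = [1200.0, 950.0, 1100.0, 1050.0, 990.0]
print(explainable_fraud_alert(history, 9800.0))  # flagged, with a reason
print(explainable_fraud_alert(history, 1010.0))  # None: within normal range
```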

The paper also highlights Bangladesh-specific issues, including Islamic banking, Shariah-compliant AI models, rural and urban digital access gaps, and the need for inclusive design. However, the study remains conceptual and has not yet been peer reviewed.

The authors recommend future empirical research with Bangladeshi banking users to test the model across income levels, regions, generations and gender groups.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

European Parliament set to push for faster Digital Markets Act compliance proceedings

Ahead of the review of the Digital Markets Act, the European Parliament is set to call for faster compliance proceedings and closer scrutiny of AI-driven search tools and cloud services.

In a draft resolution, MEPs are expected to urge the Commission to enforce the Digital Markets Act quickly and consistently, while adapting to technological change without reopening the law’s core objectives.

The text highlights the growing strategic importance of cloud computing services and the rising use of AI-driven search tools, arguing that both require closer scrutiny under the Digital Markets Act framework.

MEPs also warn against external political pressure aimed at weakening the law. They are expected to call on the Commission to make full use of its enforcement tools, including periodic penalty payments, to stop companies from bypassing it, regardless of where they are based.

The Digital Markets Act sets obligations for the largest digital companies providing key platform services in the EU, with the aim of supporting fair competition in digital markets. The draft resolution comes after the Commission’s first non-compliance decisions and fines under the law, including action against Meta over its ‘pay or consent’ advertising model and against Apple over anti-steering obligations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!