ICO warns organisations about growing AI cyber threats

The UK Information Commissioner’s Office has warned that AI is enabling faster, more advanced and harder-to-detect cyberattacks, urging organisations to strengthen their defences against emerging threats.

In a blog post, the regulator highlighted risks such as AI-generated phishing emails, deepfake social engineering, automated vulnerability scanning, AI-powered malware, credential attacks, data poisoning and indirect prompt injection. The ICO said cybersecurity must be treated as a shared responsibility, with organisations expected to take proactive steps to protect the personal data they hold.

The ICO said strong foundational security measures remain essential, but should be reinforced with layered defences to counter AI-powered threats. It pointed to practical steps such as patching systems, restricting access through multi-factor authentication, applying least-privilege principles and managing supplier risks.

The recommendations also include monitoring systems for unusual activity, carrying out vulnerability scanning and penetration testing, and maintaining regularly tested incident response plans. The ICO said AI can also support cyber defence, but should operate within a clear framework of human oversight and accountability.
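As a purely illustrative sketch of the ‘unusual activity’ monitoring the ICO mentions (not code from the regulator, and with hypothetical names, signals and thresholds), the snippet below flags sign-ins that fall outside a user’s usual hours or come from a previously unseen location.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical illustration only: flag sign-ins outside a user's usual hours
# or from locations not seen before. Real monitoring would rely on SIEM
# tooling and far richer signals; names and thresholds here are assumptions.

USUAL_HOURS = range(7, 20)            # assumed working hours, 07:00-19:59
seen_locations = defaultdict(set)     # user -> locations observed so far

def review_sign_in(user: str, when: datetime, location: str) -> list[str]:
    """Return alert messages for a single sign-in event."""
    alerts = []
    if when.hour not in USUAL_HOURS:
        alerts.append(f"{user}: sign-in at {when:%H:%M} outside usual hours")
    if location not in seen_locations[user]:
        alerts.append(f"{user}: first sign-in observed from {location}")
    seen_locations[user].add(location)
    return alerts

# A late-night sign-in from a previously unseen location raises two alerts.
print(review_sign_in("a.smith", datetime(2025, 6, 3, 2, 15), "Lagos"))
```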

Organisations are further advised to minimise data collection, conduct regular data audits and train staff to recognise AI-powered social engineering attacks. The ICO said AI tools processing high-risk personal data should be supported by data protection impact assessments and appropriate safeguards.

Why does it matter?

The ICO’s warning links AI-powered cyber threats directly to data protection obligations. As attackers use AI to scale phishing, exploit vulnerabilities and impersonate trusted contacts, organisations are expected not only to improve technical security, but also to limit the personal data they hold, strengthen governance and prepare for faster-moving incidents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

CMA opens Strategic Market Status investigation into Microsoft business software

The UK Competition and Markets Authority has opened a Strategic Market Status investigation into Microsoft’s business software ecosystem, marking another major step in the country’s digital competition regime.

The investigation will examine Microsoft’s position across workplace software products widely used throughout the UK economy, including productivity software, personal computer and server operating systems, database management systems, security software and its growing AI assistant ecosystem built around Copilot. The CMA said more than 15 million commercial users across the UK rely on Microsoft’s software ecosystem.

Regulators will assess whether Microsoft has Strategic Market Status in business software and whether its position may limit customer choice. The CMA said it will examine concerns linked to product bundling, interoperability limits and default settings that could make it harder for businesses and public-sector organisations to switch providers or combine Microsoft tools with competing products.

The authority will also examine how competing AI services can integrate with Microsoft’s business software as workplace tools increasingly incorporate AI and agentic AI functions. The CMA said customers should be able to access software and AI services from a range of suppliers rather than being locked into a single ecosystem.

Cloud competition concerns are also linked to the probe. An SMS designation would allow the CMA to consider targeted interventions related to Microsoft’s software licensing practices, which were previously identified as reducing competition in cloud services.

The CMA will gather evidence from Microsoft, customers, rivals, challenger technology firms and other stakeholders before deciding whether to designate Microsoft with Strategic Market Status. The regulator said the investigation does not assume wrongdoing and that any future interventions would depend on the evidence and relevant legal tests.

Why does it matter?

The investigation shows how digital competition oversight is moving deeper into enterprise software, cloud infrastructure and AI-enabled workplace tools. As products such as Copilot become embedded in systems used by businesses and public services, regulators are increasingly treating interoperability, bundling and switching costs as strategic competition issues rather than narrow technical questions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI integrates Codex into ChatGPT mobile app

OpenAI has integrated Codex into the ChatGPT mobile app, allowing users to monitor and manage agentic coding workflows from iOS and Android devices.

The feature, currently in preview and available across all plans, lets users view live Codex environments, review outputs, approve commands, change models, and start new tasks from their phones. OpenAI said the update is intended to support work across multiple threads and workflows, rather than to control a single task remotely.

Codex is OpenAI’s coding agent for software development, designed to help with tasks such as building features, refactoring code, generating pull requests, testing and documentation. OpenAI describes the Codex app as a command centre for agentic coding, with agents able to work in parallel across projects through worktrees and cloud environments.

The mobile integration aligns with other recent Codex updates, including background operations in desktop environments and a browser extension for live sessions. Together, the updates point to OpenAI’s effort to turn Codex into a persistent development assistant that can continue working across devices and environments.

The move also comes amid growing competition with Anthropic’s Claude Code, which has introduced similar remote-monitoring features. Both companies are competing to make agentic coding tools central to developer workflows, particularly for businesses and technical teams seeking more autonomous software development support.

Why does it matter?

Mobile access makes agentic coding less tied to a single workstation. If developers can review outputs, approve commands and manage parallel coding tasks from a phone, AI coding agents become more like always-on collaborators than occasional coding assistants. The shift could accelerate competition between OpenAI, Anthropic and other AI firms over who controls the next layer of software development workflows.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK NAO guide sets AI oversight questions for public bodies

The UK National Audit Office has published a good practice guide for public sector organisations using AI, setting out questions for audit and risk assurance committees overseeing the planning, deployment and scaling of the technology.

The guide draws on NAO findings, the UK government’s AI Playbook and lessons from digital transformation programmes. It advises committees to assess whether organisations are clear on why they are using AI, what risks they need to manage and how responsible adoption will be assured. The NAO says the guide will evolve as AI continues to develop.

AI is already being used across government for fraud and error detection, imaging, document processing, operational management, research and monitoring, text generation, virtual assistants and coding support. The NAO notes that several of these uses may involve personal data, making governance, assurance and data protection especially important.

The guide warns that productivity gains from AI should not be assumed. AI may speed up individual tasks, but those gains do not automatically translate into organisation-wide savings, particularly where work still depends on approvals, governance processes or human judgement.

The NAO also highlights external risks from AI use, including increased demand on public services, more low-quality or repeated submissions, higher fraud risks, cyberattacks and attempts to extract sensitive data. Audit committees are advised to ensure organisations can anticipate, monitor and mitigate such risks.

Key areas for oversight include innovation, AI strategy, leadership and skills, data, security, pilots, scaling, guardrails and workforce culture. The guide says strong digital and AI strategies should be business-led, aligned with organisational priorities, backed by leadership support and supported by clear governance, funding and measurable objectives.

Data quality, accessibility and governance are presented as foundational risks, with weak data affecting model performance, bias, explainability and reliability. The NAO also warns that AI can increase exposure to operational and security risks, including data breaches, model manipulation, supply-chain risk and resilience problems.

Recommended guardrails include acceptable use policies, data protection controls, bias testing, human oversight of automated decisions and clear accountability for AI outcomes. The guide also urges organisations to plan for workforce changes, including new skills needs, role redesign, AI literacy, risks to entry-level learning, overreliance on automation and loss of institutional knowledge.

Why does it matter?

The guide shows that public-sector AI adoption is becoming an audit, governance and accountability issue, not only a technology project. By focusing on oversight questions, the NAO is pushing public bodies to test whether AI projects have clear objectives, reliable data, measurable benefits, security controls and safeguards for staff and citizens before they are scaled.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Practice Note on AI issued by Australia’s Supreme Court of Victoria

Australia’s Supreme Court of Victoria has issued a Practice Note for court users and Judicial Guidelines for judicial officers on the use of AI, setting out how the technology may be used in court processes while preserving accuracy, privacy, accountability and fairness.

The Practice Note recognises that AI may enhance access to justice, but warns court users to understand the risks when using AI to prepare court documents. It states that users remain responsible for the content of documents they file, whether or not AI has been used.

Court users are also warned that filing documents containing inaccuracies could lead to costs orders. The Practice Note outlines privacy issues linked to different types of AI tools and notes possible sanctions for legal practitioners who rely on unverified AI outputs.

The Judicial Guidelines state that generative AI must not be used for judicial decision-making. Court-approved AI tools may, however, assist judicial officers and court staff with supportive tasks such as organising and locating case materials, producing summaries and chronologies, aiding legal research and proofreading.

The guidelines stress that such uses are not a substitute for reading or listening to evidence and submissions, or for fact-finding where required in judicial decision-making. Judicial officers must consider each matter before them and exercise their own judgement in reaching decisions and giving reasons where appropriate.

The Court said the new documents build on earlier AI guidelines developed in 2024 and respond to a review by the Victorian Law Reform Commission. Chief Justice Richard Niall said the Practice Note and Judicial Guidelines would help mitigate actual and perceived risks of AI use.

Niall said AI should be ‘an aid to, not a replacement of, judicial decision-making’, adding that the Court would continue adapting its practice without sacrificing impartiality, privacy, accountability and fairness.

Why does it matter?

The guidance shows how courts are beginning to define practical limits for AI use without banning it entirely. By allowing supportive uses while excluding generative AI from judicial decision-making, Victoria’s Supreme Court is drawing a line between administrative assistance and the exercise of judicial judgement, a distinction likely to become increasingly important as AI tools enter legal practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Norway and Romania expand EEA cooperation with anti-disinformation funding

Romania and Norway have signed a new EEA and Norway Grants agreement that introduces dedicated cooperation measures against disinformation, reflecting growing European concerns over information manipulation, democratic resilience and geopolitical instability.

Norwegian Foreign Minister Espen Barth Eide signed the agreement in Bucharest alongside Romania’s Minister for European Investments and Projects, Dragoș Pîslaru. The agreement forms part of the wider 2021-2028 EEA and Norway Grants framework, which supports social, economic and institutional development across Europe.

The new cooperation programme will fund initiatives aimed at strengthening resilience against disinformation through partnerships involving public institutions, specialist communities and civil society organisations in both countries.

The agreement also supports broader programmes covering justice and police cooperation, green transition projects, energy efficiency, and measures designed to strengthen the rights and living conditions of Roma communities.

Romania will receive €596.3 million under the current funding cycle, making it the second-largest beneficiary after Poland. Norway, Iceland and Liechtenstein together provide €3.268 billion through the EEA and Norway Grants programme, with Norway contributing approximately 97% of the overall funding.

Why does it matter?

The agreement shows how disinformation is becoming part of broader European cooperation on democratic resilience and institutional capacity, not only a media or platform issue. By funding partnerships between public institutions, expert communities and civil society, the programme links information integrity with governance, security and social cohesion at a time of heightened geopolitical pressure in Europe.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poland launches campaign to boost business cybersecurity awareness

Poland’s Ministry of Digital Affairs has launched a campaign to encourage entrepreneurs and management teams to take a more active role in protecting their companies from cyber threats.

The campaign, titled ‘Build your company’s digital security click by click’, is aimed at businesses and senior decision-makers. The ministry says its main goal is to encourage firms to address cybersecurity at both organisational and operational levels.

The campaign stresses that cybersecurity is no longer solely the responsibility of IT departments but is a key part of responsible business management. The ministry points to growing risks such as phishing and ransomware as digital technology becomes central to company operations.

According to the ministry, effective cybersecurity depends on three pillars: knowledge, processes and people. The campaign encourages firms to analyse risks, develop incident response procedures, train employees regularly and use official guidance available through cyber.gov.pl.

A separate focus is placed on medium-sized and large companies subject to requirements under Poland’s national cybersecurity system. The ministry says firms in key sectors should understand obligations related to risk management, incident reporting and the protection of information systems.

The campaign also calls on company leaders to integrate cybersecurity into business strategy, including through security policies, investment in skills and the development of a culture of responsibility across organisations.

Why does it matter?

The campaign reflects a broader shift in cybersecurity policy from technical protection towards organisational responsibility. By targeting business leaders, Poland is emphasising that cyber resilience depends not only on tools, but also on governance, staff training, incident response and compliance with national cybersecurity obligations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

OpenAI sued over alleged ChatGPT role in Florida State University shooting

The family of a victim killed in the April 2025 Florida State University shooting has filed a federal lawsuit in Florida against OpenAI, alleging that ChatGPT enabled the attack. The lawsuit was filed on Sunday by Vandana Joshi, the widow of Tiru Chabba, who was killed alongside university dining director Robert Morales.

The complaint states that the accused shooter, Phoenix Ikner, engaged in extensive conversations with ChatGPT in the months leading up to the incident. According to the suit, those exchanges included images of and discussions about firearms he had acquired, ideological material reflecting far-right beliefs, and possible outcomes of violent attacks.

The chatbot is further accused of providing contextual information about campus activity and commenting on factors that could increase public attention to violent incidents. At one point, according to the complaint, ChatGPT said, ‘if children are involved, even 2-3 victims can draw more attention’. The filing also claims Ikner asked about legal consequences and planning considerations shortly before the attack.

The lawsuit contends that OpenAI failed to identify escalating risk indicators within the conversations and did not adequately prevent harmful guidance. It argues the system ‘failed to connect the dots’ despite Ikner’s repeated questions about suicide, terrorism and mass shootings.

OpenAI has denied that its platform is to blame for the attack. Company spokesperson Drew Pusateri said ChatGPT generated factual responses that could be found broadly across publicly available information and did not encourage or promote illegal activity. He also stated that OpenAI continues to strengthen safeguards to identify harmful intent, reduce misuse and respond appropriately when safety risks arise.

Joshi’s complaint argues that the system reinforced the shooter’s beliefs and failed to interrupt conversations involving violent ideation. The filing alleges that ChatGPT inflamed, validated and endorsed delusional thinking and contributed to planning discussions while ‘convincing him that violent acts can be required to bring about change’.

The lawsuit forms part of a broader wave of litigation involving AI systems and alleged harm. OpenAI is already facing separate lawsuits linked to incidents involving violence and suicide, raising wider questions about safeguards and user protection.

Florida’s Attorney General James Uthmeier announced a criminal investigation into OpenAI and ChatGPT following a review of chat logs connected to the case. Uthmeier said in a statement that ‘If ChatGPT is a person it would be facing charges for murder’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

G7 working group advances cybersecurity approach for AI systems

The German Federal Office for Information Security has published guidance developed by the G7 Cybersecurity Working Group outlining elements for a Software Bill of Materials for AI. The document aims to support both public and private sector stakeholders in improving transparency in AI systems.

The guidance builds on a shared G7 vision introduced in 2025 and focuses on strengthening cybersecurity throughout the AI supply chain. It sets out baseline components that should be included in an AI SBOM to better track and understand system dependencies.

The document outlines seven baseline building blocks that should form part of an AI Software Bill of Materials (SBOM for AI), designed to improve visibility into how AI systems are built and how their components interact across the supply chain.

At the foundation is a Metadata cluster, which records information about the SBOM itself, including who created it, which tools and formats were used, when it was generated, and how software dependencies relate to one another.

The framework then moves to System Level Properties, covering the AI system as a whole. This includes the system’s components, producers, data flows, intended application areas, and the processing of information between internal and external services.

A dedicated Models cluster focuses on the AI models embedded within the system, documenting details such as model identifiers, versions, architectures, training methods, limitations, licenses, and dependencies. The goal is to make the origins and characteristics of models easier to trace and assess.

The document also introduces a Dataset Properties cluster to improve transparency into the data used throughout the AI lifecycle. It captures dataset provenance, content, statistical properties, sensitivity levels, licensing, and the tools used to create or modify datasets.

Beyond software and data, the framework includes an Infrastructure cluster that maps the software and hardware dependencies required to run AI systems, including links to hardware bills of materials where relevant.

Cybersecurity considerations are grouped under Security Properties, which document implemented safeguards such as encryption, access controls, adversarial robustness measures, compliance frameworks, and vulnerability references.

Finally, the framework proposes a Key Performance Indicators cluster that includes metrics related to both security and operational performance, including robustness, uptime, latency, and incident response indicators.

According to the paper, the objective is to provide practical direction that organisations can adopt to enhance visibility and manage risks linked to AI technologies. The framework is intended to support more secure development and deployment practices.
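As an illustration of how these seven clusters might be represented, the sketch below models them as a minimal Python data structure. All field names are assumptions made for readability; they are not the baseline elements defined in the G7 paper, and a real SBOM for AI would be exchanged in an established machine-readable format rather than application code.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Illustrative sketch of the seven SBOM-for-AI clusters described above.
# All field names are hypothetical; the G7 paper defines the actual
# baseline elements and terminology.

@dataclass
class Metadata:
    author: str                                 # who created the SBOM
    generation_tool: str                        # tool used to produce it
    sbom_format: str                            # format and version used
    created_at: str                             # generation timestamp
    dependency_relationships: List[str] = field(default_factory=list)

@dataclass
class ModelEntry:
    identifier: str
    version: str
    architecture: str
    training_method: str
    license: str
    limitations: List[str] = field(default_factory=list)
    dependencies: List[str] = field(default_factory=list)

@dataclass
class DatasetEntry:
    provenance: str
    content_description: str
    sensitivity: str                            # e.g. public, personal, confidential
    license: str
    creation_tools: List[str] = field(default_factory=list)

@dataclass
class AISbom:
    metadata: Metadata
    system_properties: Dict[str, Any]           # components, producers, data flows, intended use
    models: List[ModelEntry]
    datasets: List[DatasetEntry]
    infrastructure: List[str]                   # software/hardware dependencies, links to HBOMs
    security_properties: Dict[str, Any]         # encryption, access controls, robustness measures
    key_performance_indicators: Dict[str, Any]  # robustness, uptime, latency, incident response
```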

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

IPC New South Wales’ Generative AI guidance targets privacy risks in Australia

The Information and Privacy Commission New South Wales has issued guidance for public sector agencies in Australia on managing privacy risks associated with the use of generative AI tools.

The guide states that the Privacy and Personal Information Protection Act 1998 applies to the handling of personal information through generative AI tools. It is intended to help agencies understand and comply with privacy obligations when adopting tools such as ChatGPT, Gemini, Claude, Perplexity, and Copilot.

Generative AI can support workplace tasks such as drafting, editing, document analysis, research, translation, transcription, and process automation. However, the IPC warns that these tools can create privacy risks when prompts, uploaded files, or outputs include personal or health information.

The guide highlights risks including unexpected use or disclosure of personal information, cross-border data transfers, unauthorised disclosure, data breaches, extended retention of personal information, generation of new personal information, inaccurate or discriminatory outputs, and loss of transparency or data subject control.

Some generative AI providers may collect customer data, including prompts, uploaded files, and outputs, to train or improve their models, according to the IPC. Agencies should assess whether personal or health information uploaded to a generative AI service may be processed offshore or used for purposes beyond the original collection purpose.

Recommended measures include privacy impact assessments, updates to privacy management plans and data breach response policies, clear public notices, consent where required, acceptable use policies for staff, training, pre-deployment testing, third-party vendor assessments, and data residency in Australia where possible.

Human review is also presented as an important safeguard, especially where generative AI outputs inform decisions affecting individuals’ access to services, opportunities, or benefits. The IPC urges agencies to avoid a ‘set and forget’ approach and continuously monitor generative AI use, governance, culture, and emerging privacy risks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!