G7 working group advances cybersecurity approach for AI systems
The guidance aims to strengthen cybersecurity across AI supply chains.
The German Federal Office for Information Security (BSI) published guidance developed by the G7 Cybersecurity Working Group outlining elements for a Software Bill of Materials for AI. The document aims to support both public and private sector stakeholders in improving transparency in AI systems.
The guidance builds on a shared G7 vision introduced in 2025 and focuses on strengthening cybersecurity throughout the AI supply chain. It sets out baseline components that should be included in an AI SBOM to better track and understand system dependencies.
These components are organised into seven baseline building blocks of an AI Software Bill of Materials (SBOM for AI), designed to improve visibility into how AI systems are built and how their components interact across the supply chain.
At the foundation is a Metadata cluster, which records information about the SBOM itself, including who created it, which tools and formats were used, when it was generated, and how software dependencies relate to one another.
The framework then moves to System Level Properties, covering the AI system as a whole. This includes the system’s components, producers, data flows, intended application areas, and how information is processed and exchanged between internal and external services.
A dedicated Models cluster focuses on the AI models embedded within the system, documenting details such as model identifiers, versions, architectures, training methods, limitations, licenses, and dependencies. The goal is to make the origins and characteristics of models easier to trace and assess.
The document also introduces a Dataset Properties cluster to improve transparency into the data used throughout the AI lifecycle. It captures dataset provenance, content, statistical properties, sensitivity levels, licensing, and the tools used to create or modify datasets.
Beyond software and data, the framework includes an Infrastructure cluster that maps the software and hardware dependencies required to run AI systems, including links to hardware bills of materials where relevant.
Cybersecurity considerations are grouped under Security Properties, which document implemented safeguards such as encryption, access controls, adversarial robustness measures, compliance frameworks, and vulnerability references.
Finally, the framework proposes a Key Performance Indicators cluster covering metrics related to both security and operational performance, such as robustness, uptime, latency, and incident response indicators.
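To make the structure concrete, the seven clusters could be captured in a single machine-readable record. The sketch below is purely illustrative: every field name and value is a hypothetical example chosen for this article, not the official element list or schema from the G7 guidance.

```python
# Illustrative sketch of an AI SBOM record organised around the seven
# clusters described in the guidance. All field names and values below
# are hypothetical examples, not an official schema.
ai_sbom = {
    "metadata": {                       # information about the SBOM itself
        "author": "example-org",
        "tool": "example-sbom-generator 1.0",
        "generated": "2025-01-01",
    },
    "system_level_properties": {        # the AI system as a whole
        "producer": "example-org",
        "intended_use": "document classification",
        "external_services": ["example-inference-api"],
    },
    "models": [{                        # AI models embedded in the system
        "identifier": "example-model",
        "version": "1.2.0",
        "architecture": "transformer",
        "license": "Apache-2.0",
    }],
    "dataset_properties": [{            # data used across the AI lifecycle
        "provenance": "public corpus (example)",
        "sensitivity": "low",
        "license": "CC-BY-4.0",
    }],
    "infrastructure": {                 # software/hardware dependencies
        "runtime": "python 3.12",
        "hardware_bom_ref": None,       # link to a hardware BOM where relevant
    },
    "security_properties": {            # implemented safeguards
        "encryption": "TLS 1.3 in transit",
        "access_control": "role-based",
    },
    "key_performance_indicators": {     # security and operational metrics
        "uptime_target": "99.9%",
        "p95_latency_ms": 120,
    },
}

clusters = list(ai_sbom)
```

A record like this mirrors how the guidance layers its clusters, from metadata about the SBOM itself down to performance indicators; existing SBOM formats could carry similar information under their own field names.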
According to the paper, the objective is to provide practical direction that organisations can adopt to enhance visibility and manage risks linked to AI technologies. The framework is intended to support more secure development and deployment practices.
