Shadow AI is pushing enterprise inventory beyond the traditional software bill of materials, as companies try to track not only software components but also models, datasets, prompts, agents, identities and cloud infrastructure.

Shadow AI refers to employees or teams using AI tools without formal IT approval or security governance.

In software security, a software bill of materials, or SBOM, is a machine-readable inventory used to track components, libraries and modules in a software product. AI bills of materials extend that logic to AI systems, where the assets can include models, datasets, services, infrastructure, third-party dependencies and the relationships between them.

Cisco has open-sourced an AI-BOM scanner, Wiz is tying AI inventories to identities and cloud access, and Palo Alto Networks is positioning AI visibility as a security requirement for autonomous agents.

Beyond the traditional software bill of materials

Cisco’s AI-BOM scanner analyzes codebases, container images and cloud environments to produce a structured inventory of models, agents, tools, MCP servers and clients, datasets, prompts, guardrails, secrets and other AI assets.

The GitHub project supports Python, JavaScript/TypeScript, Java, Go, Rust, Ruby and C#, and can output reports in formats including JSON, CycloneDX, SARIF, SPDX, HTML, Markdown and CSV.

That scope is wider than a conventional SBOM. IBM describes an SBOM as a machine-readable list of components, libraries and modules in a software product. AI-BOMs build on that idea, but the AI system being tracked is not just code.

Wiz defines an AI-BOM as an inventory of models, training datasets, software dependencies, infrastructure and related components that shape how the system behaves.

Capturing the dynamic nature of AI behavior

Unlike conventional software components, AI systems can change through prompts, datasets, fine-tuning, agent permissions and external model dependencies, widening what enterprises need to track.

Palo Alto Networks describes an AI-BOM as a machine-readable inventory covering every dataset, model and software component used to build and operate an AI system, with records for versions, sources and relationships. It also says AI-BOMs support model lineage, data provenance and compliance evidence.

Tracking provenance and regulatory risk

Cisco is also targeting the provenance side of that problem. Its Model Provenance Kit compares two models through metadata, tokenizer structure and weight-level signals, or scans one model against a fingerprint database.

Cisco said the initial database covers about 150 base models across more than 45 families and more than 20 publishers.

Cisco linked the tool to risks around repackaged models, licensing, regulatory exposure and supply chain integrity. In its release, the company cited cases where organizations may not know whether a model was trained from scratch, copied, modified or derived from another model.

It also pointed to EU AI Act documentation requirements for high-risk systems and NIST’s AI Risk Management work as reasons enterprises need clearer model records.

Linking identity and cloud infrastructure

Wiz’s contribution is the cloud context around that record. Its AI inventory guidance says a useful inventory maps AI systems to data, identities, infrastructure and owners, rather than stopping at model names.

Wiz also says a modern AI inventory must update continuously because AI systems appear, change and retire quickly.

That identity layer is becoming central to AI governance. Wiz’s AI application security guidance says deployed AI workloads require discovery of shadow AI, configuration checks and mapping of identity permissions attached to each workload. It describes the AI-BOM as the inventory that connects models, SDKs, agent configurations and data dependencies across the environment.

Google Cloud has also folded that idea into its Wiz partnership. In an April post, Google said Wiz’s dynamic AI-BOM inventories AI models, AI development tools and IDE extensions, helping organizations track approved tools such as Gemini Code Assist and GitHub Copilot while uncovering unapproved shadow AI plugins.

Palo Alto Networks is approaching the same issue from the runtime security side. In March, it launched Prisma AIRS 3.0, saying the product discovers AI agents, models and connections across cloud environments, SaaS platforms and local endpoints.

The company also said its Agent Artifact Security maps agent architecture and scans for vulnerabilities, while its AI Agent Gateway is in limited preview for runtime and identity controls.

Palo Alto Networks is also using incident-response examples to argue that AI-BOMs have moved beyond a theoretical control.

Ian Swanson, vice president of AI security products at Palo Alto Networks, told The Register that one incident involved attackers using AI for reconnaissance, gaining access to an internal AI workload’s system prompts and modifying them to steal data and send it to an external email account.

Swanson argued that an AI-BOM could help defenders compare the system prompt used at one point in time with a later changed state.

Moving toward industry-wide standards

The standards layer is still unsettled. OWASP’s AIBOM Generator produces AI Bills of Materials for Hugging Face models in CycloneDX format aligned with SPDX, and the OWASP GenAI Security Project says its AIBOM Initiative is trying to turn the concept into open tooling, completeness checks and practitioner guidance.

A separate SBOM for AI Tiger Team, facilitated by CISA but not adopted as official U.S. government policy, has published use cases for compliance, incident management, open-source model risk, third-party AI risk, intellectual property and model lifecycle management.

Personalized Feed
Personalized Feed