The National Security Agency (NSA) has posted a new joint cybersecurity information sheet, “CSI: AI ML Supply Chain Risks and Mitigations.”
The document says it is intended for organizations that develop, deploy or procure AI and machine learning systems and components, and that its risks and mitigations should inform the questions and requirements organizations put to vendors when those systems are sourced from third parties.
Six supply chain components, six risk surfaces
The paper defines the AI and ML supply chain as the set of components that vendors and service providers must source or manage to deliver an AI system. It names six: training data, models, software, infrastructure, hardware and third-party services.
The document says each of those components can introduce vulnerabilities that affect confidentiality, integrity or availability, and adds that technologies such as large language models and AI agents may have additional security concerns beyond the paper’s broad treatment.
Data, models and software: what the paper says to do
For data, the paper highlights low-quality or biased inputs, data poisoning and training-data exposure through model inversion, membership inference and training-data extraction.
It says organizations should quarantine and test externally sourced data before moving it into internal systems, review and preprocess it, and use integrity and provenance methods such as checksums, hashes, digital signatures and lineage tracking.
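The checksum and hash mitigations described above can be sketched with Python's standard hashlib; the function names and the idea of comparing against a published digest are illustrative, not something the paper prescribes in code.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so large
    datasets do not need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: Path, expected_digest: str) -> bool:
    """Return True only if the file matches the digest published by the
    data provider; a mismatch means the file should stay quarantined."""
    return sha256_digest(path) == expected_digest
```

A mismatch here would keep externally sourced data in quarantine rather than letting it move into internal systems.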
For models, the paper highlights serialization attacks, model poisoning, hidden backdoors, malware embedded in weights or metadata and evasion attacks. It says organizations should prefer secure file formats, use trusted and transparent model sources, perform initial and periodic performance testing, maintain a registry of verified model versions and use integrity checks such as hashes, checksums and digital signatures.
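One way to operationalize a registry of verified model versions with integrity checks is sketched below; the ModelRegistry class, version labels and file contents are illustrative assumptions, not something the paper specifies.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of a model artifact, computed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

class ModelRegistry:
    """Tracks verified model versions by content digest. In practice the
    registry itself would be signed and stored apart from the artifacts."""

    def __init__(self) -> None:
        self._verified: dict[str, str] = {}

    def register(self, version: str, path: Path) -> None:
        """Record a digest after the artifact has been verified out of band."""
        self._verified[version] = sha256_of(path)

    def check(self, version: str, path: Path) -> bool:
        """True only if the artifact matches the digest recorded for version."""
        expected = self._verified.get(version)
        return expected is not None and sha256_of(path) == expected
```

A deployment pipeline following the paper's advice would refuse to deserialize weights unless `check` passes, closing off silently swapped or tampered model files.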
For software, the paper says AI systems often rely on many libraries and tools, which can increase attack surface and introduce conventional software-supply-chain risks such as name-confusion and typosquatting attacks.
It calls for integrity validation, malware scanning, static and dynamic testing, SBOM maintenance, least-privilege deployment and ongoing patching and monitoring.

For infrastructure and hardware, it says most risks mirror those of conventional cyber systems, though AI-specific accelerator devices add attack surface through drivers, firmware and related components.
Third-party services as a distinct risk vector
The guidance highlights third-party services as a unique risk vector, noting they often support some or all AI components and can introduce vulnerabilities through their own supply chains, cyber incidents, or shared resources.
The document says organizations should assess vendors’ security practices and vulnerability-management processes, monitor them over time and include cybersecurity requirements in contracts.
It gives examples of those terms, including restricting use of customer data for training, defining separate cloud residencies, granting audit rights, setting continuity requirements and establishing shared-responsibility models.
Visibility requirements and the AI Bill of Materials
The paper says effective risk management hinges on full visibility into AI and ML systems and their supply chains.
It tells organizations to identify suppliers and subcontractors, seek information on security controls and policies for AI integrations, require an AI Bill of Materials and a Software Bill of Materials or equivalent documentation, revise risk management, perform threat modeling and vulnerability mapping, and maintain an incident response plan for the AI supply chain.
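As a rough illustration of what an AI Bill of Materials entry might contain, here is a minimal fragment in the spirit of CycloneDX's ML-BOM, which the paper's resources section cites; the component names, versions and supplier are hypothetical, and the fields follow CycloneDX loosely rather than reproducing its exact schema.

```python
import json

# Illustrative AIBOM fragment. Component details are hypothetical and the
# digest placeholder would hold the real SHA-256 of the weights file.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "sentiment-classifier",   # hypothetical model
            "version": "1.2.0",
            "hashes": [{"alg": "SHA-256", "content": "<digest of weights>"}],
            "supplier": {"name": "Example Model Vendor"},
        },
        {
            "type": "data",
            "name": "labeled-reviews-train",  # hypothetical training set
            "version": "2024-06",
        },
    ],
}

print(json.dumps(aibom, indent=2))
```

Listing models and training data alongside software components is what distinguishes an AIBOM from a conventional SBOM.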
Where the guidance fits the wider framework landscape
The document also says it is not meant to stand alone: general cyber supply-chain risk management still applies. It maps relevant risks to NIST’s Adversarial Machine Learning taxonomy and says the guidance broadly maps to MITRE ATLAS under AI Supply Chain Compromise.
Its “More information” section points readers to related resources including “Deploying AI systems securely,” “Managing cyber supply chains,” “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” the NIST AI Risk Management Framework, CycloneDX’s Machine Learning Bill of Materials and OWASP’s Machine Learning Security Top Ten.
The release follows a series of related publications, including December 2025 guidance on the secure integration of AI in operational technology and September 2025 joint guidance on a shared vision for software bills of materials.
NIST, for its part, says its AI Risk Management Framework is intended for voluntary use, is meant to improve incorporation of trustworthiness considerations into the design, development, use and evaluation of AI systems, and now includes a Generative AI Profile released in July 2024.