U.S. cybersecurity agencies say in new joint guidance that integrating artificial intelligence into operational technology (OT) introduces additional security risks and requires specific safeguards.
The National Security Agency (NSA) said it joined the Cybersecurity and Infrastructure Security Agency (CISA) and other partners to release a joint cybersecurity information sheet, “Principles for the Secure Integration of Artificial Intelligence in Operational Technology,” aimed at organizations integrating AI into systems that run physical processes.
The guidance focuses on industrial control and other OT systems that support physical processes and have strict safety and reliability requirements, such as supervisory control and data acquisition (SCADA) systems, distributed control systems and programmable logic controllers. NIST describes these systems and their constraints in Special Publication 800‑82.
The joint paper says AI can both enhance capabilities and introduce new risks in OT environments, particularly when models are connected to sensors, control logic or safety functions.
In the document, the authoring agencies identify AI-specific threat patterns, including prompt injection, data poisoning and model drift, in which a model’s accuracy degrades over time.
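The guidance does not prescribe how to detect drift, but the general idea is to compare a model’s recent performance against a baseline established during commissioning. The sketch below is a minimal illustration of that pattern; the class name, thresholds and window size are hypothetical, not values from the document.

```python
# Minimal sketch: flag possible model drift by comparing recent prediction
# errors against a baseline window. Thresholds and window sizes here are
# illustrative, not taken from the guidance.
from collections import deque
from statistics import mean, stdev

class DriftMonitor:
    def __init__(self, baseline_errors, z_threshold=3.0, window=100):
        self.baseline_mean = mean(baseline_errors)
        self.baseline_std = stdev(baseline_errors)
        self.recent = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, predicted, actual):
        """Record one prediction/measurement pair; return True if the recent
        error level is significantly above the baseline (possible drift)."""
        self.recent.append(abs(predicted - actual))
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough recent data yet
        recent_mean = mean(self.recent)
        z = (recent_mean - self.baseline_mean) / max(self.baseline_std, 1e-9)
        return z > self.z_threshold  # True -> alert operators for review

# Example: baseline errors collected during commissioning, then live checks.
monitor = DriftMonitor(baseline_errors=[0.10, 0.12, 0.09, 0.11, 0.10])
drifting = monitor.observe(predicted=75.2, actual=74.9)
```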
The paper also highlights “data sovereignty” concerns: AI systems depend on large volumes of operational data and may involve foreign-based vendors or infrastructure subject to foreign government control or legal compulsion.
To structure controls, the guidance lays out four organizing principles: understanding AI risks, assessing appropriateness for OT use cases, establishing governance, and building safety mechanisms. The NSA’s release emphasizes that operators need human oversight and resilient fallback pathways when AI is used in environments where errors can affect physical operations.
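One common way to realize that kind of fallback is a supervisory wrapper that only applies an AI-suggested setpoint when it stays inside an operator-approved envelope, and otherwise reverts to a conventional setpoint and logs the event for human review. The following is a minimal sketch of that pattern under assumed bounds and function names; it is not code from the guidance.

```python
# Minimal sketch of a resilient fallback pathway: an AI-suggested setpoint is
# applied only if it lies inside a pre-approved safe envelope; otherwise the
# conventional (non-AI) setpoint is used and the event is queued for human
# review. Bounds and names are illustrative assumptions.
def supervised_setpoint(ai_setpoint, conventional_setpoint,
                        low=40.0, high=90.0, review_queue=None):
    """Return the setpoint to apply, preferring the AI suggestion only when
    it is inside the operator-approved envelope [low, high]."""
    if low <= ai_setpoint <= high:
        return ai_setpoint
    if review_queue is not None:
        review_queue.append({
            "rejected_ai_setpoint": ai_setpoint,
            "applied_setpoint": conventional_setpoint,
            "reason": "outside approved envelope",
        })
    return conventional_setpoint  # non-AI fallback keeps the process running

# Example: an out-of-envelope AI suggestion is rejected and flagged for review.
events = []
applied = supervised_setpoint(ai_setpoint=120.0, conventional_setpoint=72.0,
                              review_queue=events)
```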
The guidance also links to external risk-management references, including NIST’s AI Risk Management Framework, as a way to standardize how organizations identify and manage AI risks across the lifecycle.
The paper also makes procurement recommendations. It says operators should request visibility into AI components and dependencies, including software supply chain information, clarity on where models are hosted and connected, and defined conditions for disabling AI features or reverting to non‑AI modes.
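In practice, an operator might capture those procurement asks as a structured record per AI component. The sketch below shows one possible shape for such a record; the field names are hypothetical and are not drawn from the guidance or from any SBOM standard.

```python
# Illustrative sketch: one way an operator might record the visibility items
# the paper recommends requesting from AI vendors. Field names are
# hypothetical assumptions, not taken from the guidance.
from dataclasses import dataclass, field

@dataclass
class AIComponentRecord:
    component_name: str
    supplier: str
    sbom_reference: str              # link or ID for the supplier's SBOM
    model_hosting: str               # e.g. "on-premises" or "vendor cloud"
    network_connections: list = field(default_factory=list)
    disable_procedure: str = ""      # documented steps to turn the AI feature off
    non_ai_fallback_mode: str = ""   # how the system operates with AI disabled

record = AIComponentRecord(
    component_name="anomaly-detection-module",
    supplier="Example Vendor",
    sbom_reference="sbom-2025-001",
    model_hosting="on-premises",
    network_connections=["historian", "engineering workstation"],
    disable_procedure="set AI feature flag to off in controller configuration",
    non_ai_fallback_mode="threshold-based alarms only",
)
```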
Those recommendations are similar to ongoing U.S. federal work on software transparency. CISA’s “2025 Minimum Elements for a Software Bill of Materials (SBOM)” and its associated Federal Register request for comment set expectations for how organizations document software components and supplier relationships.
The release comes as courts and regulators consider how responsibility is assigned when AI tools influence outcomes. In Mobley v. Workday, a federal court order discussed legal theories under which an AI software provider could be treated as an employer’s agent in discrimination claims tied to automated screening. While that case concerns employment tools rather than OT, it illustrates how contract language, operating responsibility and audit rights can shape who bears responsibility when AI is embedded into business processes.