The CSAI Foundation, part of the nonprofit industry organization Cloud Security Alliance (CSA), has announced three pieces of security infrastructure for agentic AI: authorization as a CVE Numbering Authority, a catastrophic-risk annex for STAR for AI and stewardship of two open specifications for governing autonomous AI actions.
CSAI is CSA’s AI security and safety arm. It says its 2026 mission focuses on the “agentic control plane,” where risk moves beyond model behavior into identity, authorization, orchestration, runtime behavior and trust assurance across agent ecosystems.
Narrow initial scope for vulnerability tracking
CSA explained that it has been authorized by the CVE Program as a CVE Numbering Authority (CNA), an organization approved to assign CVE IDs within a defined scope. CSA said its initial operating scope is “addressing vulnerabilities in our software tools.”
The CVE role links CSAI’s work to existing vulnerability-management workflows. NIST describes CVE as a system for uniquely identifying vulnerabilities and tying them to specific code-base versions, with the National Vulnerability Database later analyzing published records.
CSA says CSAI is beginning with its own software tools while organizing research and operational projects around agent-specific coordination.
Auditable controls for worst-case scenarios
The second move focuses on assurance. CSAI launched the STAR for AI Catastrophic Risk Annex with support from Coefficient Giving, extending CSA’s AI Controls Matrix and STAR for AI program to scenarios involving loss of human oversight, uncontrolled system behavior and large-scale irreversible consequences.
CSA said the four-phase rollout begins in June 2026 and runs through December 2027, ending with a State of Catastrophic AI Risk Controls Report.
The annex is designed to turn high-impact AI safety concerns into auditable controls by identifying relevant AI Controls Matrix controls, adding new ones where gaps exist and defining evidence requirements and testing criteria.
CSAI cited checks on whether human-in-the-loop controls can be bypassed, whether action gating prevents unsafe escalation and whether rollback mechanisms work under pressure.
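To make the idea of an auditable control concrete, the sketch below shows what a testable action-gating check might look like. Everything here is illustrative: the `Action` shape, the `gate` function and the approval flag are assumptions, not drawn from the annex or the AI Controls Matrix.

```python
from dataclasses import dataclass

# Hypothetical sketch: an action gate that refuses privilege escalation
# unless a human approval token is present. Names are illustrative.

@dataclass
class Action:
    name: str
    escalates_privileges: bool
    human_approved: bool = False

def gate(action: Action) -> str:
    """Return 'allow' or 'deny' for a proposed agent action."""
    if action.escalates_privileges and not action.human_approved:
        return "deny"
    return "allow"

# Auditable check: an unapproved escalation must be denied, and the
# human-in-the-loop requirement must not be bypassable by renaming.
assert gate(Action("grant_admin", escalates_privileges=True)) == "deny"
assert gate(Action("read_report", escalates_privileges=False)) == "allow"
assert gate(Action("grant_admin", True, human_approved=True)) == "allow"
```

A real audit would run checks like these against the deployed system rather than a toy gate, but the shape is the same: a defined control, a bypass attempt and evidence that the attempt fails.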
Alignment with federal risk frameworks
The annex’s focus on controls, evidence and testing aims to align with NIST’s AI RMF Core, which is organized around govern, map, measure and manage functions. NIST also calls for production monitoring, emergent-risk detection and procedures to disengage or deactivate systems that depart from intended use.
Open specifications for runtime governance
The third piece is runtime governance. CSAI received the Autonomous Action Runtime Management (AARM) specification from Vanta and took over stewardship of the Agentic Trust Framework (ATF) from Josh Woodruff of MassiveScale.AI, with AARM founder Herman Errico and Woodruff continuing to lead their respective working groups.
CSA described AARM as an open specification for securing AI-driven actions at runtime across context, policy, intent and behavior, while ATF applies Zero Trust principles to agentic AI.
Intercepting actions and applying zero trust
AARM’s own documentation defines the specification as a way to intercept, evaluate, authorize and record autonomous actions before they execute.
An AARM system accumulates session context, evaluates actions against policy, enforces decisions (allow, deny, modify, defer or require approval) and records tamper-evident receipts.
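That flow can be sketched in a few lines of Python. This is a minimal illustration of the intercept-evaluate-enforce-record pattern the documentation describes, not the specification itself: the policy rules, receipt format and hash-chaining scheme are all assumptions made for the example.

```python
import hashlib
import json
import time

# Illustrative AARM-style flow: intercept an action, evaluate it against
# policy, enforce a decision and append a tamper-evident receipt.
# The policy and receipt format are assumptions, not from the spec.

DECISIONS = {"allow", "deny", "modify", "defer", "require_approval"}

receipts = []  # hash-chained log: each receipt commits to the previous one

def evaluate(action: dict, context: dict) -> str:
    """Toy policy: deletes are denied, untrusted writes need approval."""
    if action["verb"] == "delete":
        return "deny"
    if action["verb"] == "write" and not context.get("trusted_target"):
        return "require_approval"
    return "allow"

def record(action: dict, decision: str) -> dict:
    prev = receipts[-1]["digest"] if receipts else "0" * 64
    body = json.dumps({"action": action, "decision": decision,
                       "prev": prev, "ts": time.time()}, sort_keys=True)
    receipt = {"body": body,
               "digest": hashlib.sha256(body.encode()).hexdigest()}
    receipts.append(receipt)
    return receipt

def intercept(action: dict, context: dict) -> str:
    decision = evaluate(action, context)
    assert decision in DECISIONS
    record(action, decision)
    return decision  # the runtime enforces this before execution

print(intercept({"verb": "write", "target": "api.example.com"}, {}))
# → require_approval
```

The hash chain is what makes the receipts tamper-evident in this sketch: altering any recorded decision would break the digest that the next receipt commits to.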
ATF focuses on the access-governance layer. It aims to apply Zero Trust to AI agents through identity, behavior, data governance, segmentation and incident response, covering agent credentials, authorization chains, monitoring, sensitive-data controls, resource boundaries, kill switches and containment procedures.
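As a rough illustration of that Zero Trust posture, the sketch below checks every agent request against identity, least-privilege scope and segment boundaries, with default deny and no standing trust between calls. The agent registry, resource table and field names are invented for the example and do not come from ATF.

```python
# Hypothetical Zero Trust check in the spirit ATF describes: verify
# identity, scope and resource boundary on every request. All names
# here are illustrative assumptions.

AGENTS = {"agent-7": {"scopes": {"read:tickets"}, "segment": "support"}}
RESOURCES = {"tickets-db": {"segment": "support", "scope": "read:tickets"},
             "payroll-db": {"segment": "finance", "scope": "read:payroll"}}

def authorize(agent_id: str, resource: str) -> bool:
    agent = AGENTS.get(agent_id)
    target = RESOURCES.get(resource)
    if agent is None or target is None:
        return False  # unknown identity or resource: default deny
    return (target["segment"] == agent["segment"]      # segmentation
            and target["scope"] in agent["scopes"])    # least privilege

assert authorize("agent-7", "tickets-db") is True
assert authorize("agent-7", "payroll-db") is False   # cross-segment blocked
assert authorize("agent-9", "tickets-db") is False   # unknown agent denied
```

The other ATF concerns the framework lists, such as monitoring, kill switches and containment, sit around a per-request check like this rather than replacing it.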