I spend a lot of time talking with CISOs about AI agents, and the conversation almost always starts in the same place: monitoring. They have human-in-the-loop controls, continuous monitoring dashboards, and data minimization policies, and they believe they have visibility into what their AI systems are doing. But then I ask a different question: can you stop one? Not flag it. Not log it for review on Monday. Stop it, right now, from accessing data it should not be touching, before the damage is done.
Most cannot. And the data confirms it.
The gap security teams are not talking about
The recent Kiteworks 2026 Data Security, Compliance & Risk Forecast Report reveals a startling 15-to-20-point gap between what organizations can observe and what they can actually control when it comes to AI agents.
On the governance side (monitoring, human-in-the-loop, data minimization), adoption runs between 56% and 59%. Organizations have invested in watching. Yet on the containment side (purpose binding, kill switches, network isolation), adoption drops to between 37% and 40%.
63% of organizations cannot enforce purpose limitations on their AI agents. Six in ten cannot terminate a misbehaving agent quickly. Over half (55%) cannot isolate an AI system from broader network access. These are the controls that stop an AI agent when something goes wrong, and they represent the largest gaps in the entire survey.
This is the number that keeps me up at night as a CISO. Businesses are watching the incident happen in real time and reaching for a kill switch that does not exist.
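To make the distinction concrete, here is a minimal sketch, in Python, of a monitoring-only control next to a containment control that enforces purpose binding and a kill switch. Everything in it is hypothetical: the agent identifier, the purpose name, and the in-memory revocation set; a real kill switch would revoke tokens at the identity provider, not flip a flag in process memory.

import logging

log = logging.getLogger("agent-audit")

# Purpose binding: each agent is provisioned with an explicit, narrow purpose.
# The agent ID and purpose below are hypothetical examples.
AGENT_PURPOSES = {"invoice-bot-07": {"invoice-processing"}}
KILLED: set[str] = set()  # agents whose credentials have been revoked

def monitor_only(agent_id: str, purpose: str, resource: str) -> None:
    # Monitoring side: observe and record for Monday's review. Nothing is stopped.
    log.info("agent=%s purpose=%s resource=%s", agent_id, purpose, resource)

def contain(agent_id: str, purpose: str, resource: str) -> None:
    # Containment side: deny out-of-purpose access and terminate the agent.
    if agent_id in KILLED:
        raise PermissionError(f"{agent_id} has been terminated")
    if purpose not in AGENT_PURPOSES.get(agent_id, set()):
        KILLED.add(agent_id)  # kill switch: revoke before any data moves
        raise PermissionError(f"{agent_id} exceeded its authorized purpose")
    monitor_only(agent_id, purpose, resource)  # containment still logs

Both functions see the same event. Only the second one can end it.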
Why this is different
The CrowdStrike 2026 Global Threat Report documents an 89% increase in AI-enabled adversary attacks year-over-year, with average eCrime breakout time at 29 minutes and the fastest at 27 seconds. 82% of detections are now malware-free, meaning adversaries are using valid credentials, native tools, and identity abuse to move through environments without triggering traditional defenses.
AI agents amplify this threat in a way that is qualitatively different from anything we have managed before. A compromised human account can access data within the scope of that person’s role, at human speed, with human judgment about what seems appropriate. A compromised or misbehaving AI agent can access thousands of records across dozens of systems in minutes, with no internal sense of proportionality and no hesitation before taking an irreversible action.
The recent “Agents of Chaos” study by researchers from Harvard, MIT, Stanford, and Carnegie Mellon documented exactly this. AI agents were manipulated into exfiltrating sensitive data, deleting infrastructure, and propagating vulnerabilities to other agents. All through conversational manipulation, not technical exploits. One agent destroyed its owner’s email infrastructure on a non-owner’s instruction. Another disclosed an entire email containing a Social Security number, bank account details, and medical records because it could not distinguish between a request for the data and a request for the container holding it.
And this is not simply hypothetical. In late 2025, Anthropic reported disrupting a Chinese state-sponsored operation, detected that September, that used AI agent swarms to execute 80–90% of the tactical work in a cyber-espionage campaign targeting approximately 30 entities. Humans intervened at only four to six decision points. The rest was automated. That is the threat model. And monitoring alone will not contain it.
The board conversation to have
Here is the difficult truth for CISOs: 54% of boards do not have AI governance among their top five topics. The conversation needs to shift from “how are we monitoring AI risk?” to “can we demonstrate that every AI agent interaction with regulated data is authenticated, policy-governed, encrypted, and logged, and can we stop an agent in real time if it exceeds its authorized scope?” If the answer is no, you have a containment gap that represents unquantified operational and regulatory risk.
The World Economic Forum’s Global Cybersecurity Outlook 2026 reports that 87% of organizations now rank AI-related vulnerabilities as their fastest-growing cyber risk. Over 50% of large enterprises expect mandatory AI compliance audits by the end of 2026. The audit question is not coming. It is here.
Data-layer containment
From an operational security standpoint, the only governance layer that AI agents cannot circumvent is one that sits at the point of data access.
That means, first, every agent must be authenticated before accessing any data, with its identity linked to the human authorizer who delegated the workflow. Second, every data request must be evaluated in real time against attribute-based access policies. Third, all agent-accessed data must be encrypted using FIPS 140-3 validated cryptographic modules. Fourth, every interaction must be captured in a tamper-evident audit trail that feeds directly into the SIEM, giving the SOC team real-time visibility into AI agent behavior alongside every other threat signal they monitor.
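A minimal sketch of what such a data-access gate could look like, again in Python. Everything here is illustrative: the identity and request shapes, the policy rule, and the helper names are my assumptions, and Fernet from the cryptography package stands in for what would, in production, be a FIPS 140-3 validated module with keys in an HSM or KMS.

from dataclasses import dataclass

from cryptography.fernet import Fernet  # stand-in for a FIPS 140-3 validated module

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    delegated_by: str  # the human authorizer who delegated the workflow

@dataclass(frozen=True)
class DataRequest:
    identity: AgentIdentity
    purpose: str
    classification: str  # attribute of the requested record, e.g. "regulated"

fernet = Fernet(Fernet.generate_key())  # demo key; real keys live in an HSM/KMS

def abac_allows(req: DataRequest) -> bool:
    # Attribute-based policy, evaluated per request, not per session.
    # Illustrative rule: regulated data only for the invoice-processing purpose.
    return req.classification != "regulated" or req.purpose == "invoice-processing"

def data_gate(req: DataRequest, record: bytes) -> bytes:
    # 1) authenticated, delegated identity; 2) real-time ABAC check;
    # 3) encryption of what the agent receives; 4) audit every decision.
    if not req.identity.delegated_by:
        raise PermissionError("no human authorizer bound to this agent")
    if not abac_allows(req):
        audit_log(req, "deny")
        raise PermissionError(f"policy denied {req.identity.agent_id}")
    audit_log(req, "allow")
    return fernet.encrypt(record)  # the gate never releases plaintext

def audit_log(req: DataRequest, decision: str) -> None:
    # Placeholder; the tamper-evident version appears in the next sketch.
    print(decision, req.identity.agent_id, req.purpose, req.classification)

The point is the placement: the check runs at the moment of data access, so an agent that has been manipulated upstream still hits the same wall.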
That fourth capability matters more than most CISOs realize. If AI agent audit data sits in a compliance silo separate from the SIEM, the security team cannot correlate agent behavior with broader threat indicators. The organizations with evidence-quality audit trails are 20 to 32 points ahead on every AI governance metric. Not because logging is a checkbox, but because it is the foundation that makes detection, response, and containment possible.
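One common way to make a trail tamper-evident is hash chaining, where each entry commits to the hash of the one before it, so any retroactive edit breaks every hash that follows. Here is a sketch under that assumption, with the SIEM forwarder reduced to a print where a real deployment would ship events over syslog or the SIEM’s ingestion API.

import hashlib
import json
import time

class HashChainedAudit:
    # Each entry records the hash of the previous entry; altering or dropping
    # any past entry invalidates the chain, which verify() will catch.

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self.last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> dict:
        entry = {"ts": time.time(), "prev": self.last_hash, **event}
        self.last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = self.last_hash
        self.entries.append(entry)
        self.forward_to_siem(entry)
        return entry

    def forward_to_siem(self, entry: dict) -> None:
        # Stand-in for a real forwarder: agent events land in the same
        # pipeline as every other threat signal the SOC watches.
        print(json.dumps(entry))

    def verify(self) -> bool:
        # Recompute the chain from the genesis value.
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if prev != e["hash"]:
                return False
        return True

# Usage: denials land in the same trail as every other decision.
trail = HashChainedAudit()
trail.append({"agent": "invoice-bot-07", "decision": "deny", "purpose": "payroll-export"})
assert trail.verify()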
Close the containment gap
For CISOs, the priorities are concrete. Get AI governance on the board agenda. Demand containment controls, not just monitoring. Fund the keystone capabilities: purpose binding, kill switches, and network isolation. Then consolidate fragmented data exchange infrastructure so logs are unified and actionable rather than scattered across platforms.
You can monitor your AI agents all day long. But until you can authenticate every one of them, enforce what they are authorized to do, encrypt what they touch, log every interaction, and shut one down the instant it exceeds its scope, you are watching the threat, not managing it.