Seven in 10 organizations report confirmed or suspected security vulnerabilities introduced by AI-generated code in production systems, even as 83% of security leaders say their existing tools effectively detect AI-generated vulnerabilities, according to The Purple Book Community’s State of AI Risk Management 2026.

The report is based on a survey of more than 650 senior cybersecurity decision-makers across seven industries and two continents. It notes that the findings reflect self-reported beliefs and practices and should be read as directional indicators, not precise measurements of any single organization's capabilities.

The confidence gap

The report frames that mismatch as the "Confidence Gap," the distance between what security leaders believe about their programs and what their own answers suggest is happening in practice.

It says 90% believe they have visibility into data shared with AI, yet 59% confirm or suspect shadow AI is present and ungoverned. It also says 86% claim a complete AI inventory, but 57% of that same group still admit shadow AI is present in their organization.

Ungoverned perimeter, governed core

The gap is also about pace and perimeter. Purple Book says organizations are governing the AI they have approved while the ungoverned perimeter expands faster than policy can follow.

Across the surveyed organizations, 59% of security leaders confirmed or suspected employees were using AI tools that IT and security had not approved or reviewed. In North America, that figure was 61.5%, compared with 48.8% in Europe, a difference the report says likely reflects faster bottom-up adoption in North America and stronger regulatory discipline in Europe.

Detection is happening too late

Additionally, Purple Book says 73.1% of security leaders agree that AI coding tools have accelerated development enough to make it harder for security to keep up, while 66% report extensive or pervasive AI use in software development.

The report argues the problem is not simply whether tools detect flaws, but when they do: 92% of organizations with confirmed AI vulnerabilities in production still say their tools work, leading the report to say detection is often happening after deployment rather than in the pipeline.

That timing problem is landing as AI use broadens beyond coding assistants. The report says 78% of enterprises are piloting or deploying agentic AI, 72.7% claim they are actively tracking and governing MCP servers or similar frameworks, and only 1.9% say security teams are rarely or never notified before new AI models or AI-powered features reach production.

Even so, the report says those formal processes may not capture the full scope of AI entering through unofficial channels.

A tool stack that fragments rather than consolidates

The tool stack is adding another layer of strain. Purple Book says 51.5% of organizations use 11 or more distinct security scanning and vulnerability-management tools.

It also says 81.6% believe managing findings across disconnected tools significantly hurts their ability to prioritize and remediate risk, while 46.3% admit they spend significant time triaging vulnerabilities that ultimately do not matter.

What GitHub and NIST say should be happening

That finding sits alongside guidance from GitHub and NIST that AI-generated code still requires human scrutiny.

GitHub's documentation says Copilot-generated code may carry security vulnerabilities, bugs and other issues, and that users should always review, validate and test generated code. GitHub also says Copilot code review should supplement, not replace, human review, and warns that the system may miss problems in large or complex changes.

NIST, in its DevSecOps guidance, says AI-generated content should be monitored and validated by humans, that AI-based suggestions should face rigorous scrutiny and that AI-assisted changes should receive review comparable to human modifications.

Taken together, the findings describe a security posture in which awareness is rising faster than control.

Its authors write: “Security leaders aren’t lacking awareness. They’re lacking the ability to convert that awareness into governed action at the pace AI demands.”
