Half of organizations with AI security controls in place still reported a suspicious or confirmed AI-related incident, according to cybersecurity firm Proofpoint’s 2026 AI and Human Risk Landscape report. The survey covered more than 1,400 full-time security professionals across 20 industries and 12 countries in January 2026.
The firm stressed that the issue is prominent now as AI projects move on from pilots. Proofpoint found that 87% of organizations have deployed AI assistants beyond pilot stage, while 76% are piloting or rolling out autonomous agents.
The systems now sit inside customer support, Slack and Teams chat summaries, email drafting and third-party collaboration, placing AI inside channels where enterprise work and sensitive data already move.
Disconnect between deployment and detection
Controls have followed, yet confidence has not. The report found that 63% of organizations have AI security controls in place, yet 52% are not fully confident those controls would detect a compromised AI. Proofpoint also reported that only one-third of organizations are fully prepared to investigate an AI- or agent-related incident.
The new attack surface
AI-related threats are not concentrated in one system. Among organizations that reported an AI-related incident, 67% saw threat activity in email, 57% in SaaS or cloud apps and 53% in AI assistants or agents.
Proofpoint also found exposure across collaboration tools, social and messaging platforms and file-sharing systems, tying the risk to the wider collaboration workspace rather than email alone.
That channel spread creates an investigation problem. Proofpoint reported that 41% of organizations have difficulty correlating threats across channels, while 94% said managing multiple security tools is at least moderately challenging. More than half described that tool complexity as very or extremely difficult.
Prompt injection reaches the defense layer
The report’s threat research shows how those gaps can appear in practice. Proofpoint researchers observed a phishing email that looked like a Gmail password notice but contained hidden prompt-injection instructions in the email source.
The instructions were designed to make an AI-based detection system consume resources through repeated reasoning before reaching a verdict, which Proofpoint said could allow the message to pass through undetected.
OWASP’s LLM risk taxonomy describes prompt injection as crafted input that can lead to unauthorized access, data breaches and compromised decision-making. It also lists model denial of service as a risk when resource-heavy operations overload LLM systems, aligning with Proofpoint’s example of hidden instructions targeting the AI defense layer itself.
Consent becomes a security surface
Proofpoint also tracked a sharp rise in user-consented third-party applications with AI functionality, from 11,290 in December 2024 to 258,033 by November 2025. The report presents that increase as a sign of how quickly users are granting permissions to AI-enabled tools inside normal workflows.
Microsoft has separately warned that consent phishing tricks users into granting malicious cloud applications access to legitimate cloud services and data.
In Microsoft 365 environments, illicit consent grants can give an external application account-level access to contacts, email or documents without needing an organizational account, and password resets or MFA are not effective remediation for that attack type.
AI-agent identity moves into standards work
The identity and authorization issue is also moving into standards work. NIST’s National Cybersecurity Center of Excellence is reviewing comments on a project exploring standards-based ways to identify, manage and authorize access and actions taken by software agents, including AI agents.
NCCoE said enterprises are seeking to move AI capabilities from text and graphics into actions such as deploying code to production, with limited human supervision.
The operational message is narrower than a broad warning about AI risk. Proofpoint’s data points to a control gap across collaboration channels: AI tools are already embedded in daily workflows, but many organizations still lack confidence in detection, cross-channel correlation and incident reconstruction.
Over the next 12 months, 61% of respondents plan to expand AI protections, 56% plan to extend collaboration coverage and 53% expect to move toward a unified platform approach.
Proofpoint sells collaboration and data security products, and the report’s findings support that market position. The report also notes that its sample skewed toward senior leaders at larger enterprises, which may reflect more advanced AI adoption than the broader market.
But the survey’s core data is still useful for enterprise security leaders because it identifies where AI risk is appearing: not only in models or prompts, but across email, SaaS, messaging, file-sharing, consented apps and AI-agent workflows.