The ‘BodySnatcher’ exploit begins with a scenario that could keep CISOs awake: an unauthenticated attacker using nothing but an employee’s email to hijack privileged workflows within an enterprise platform.
The vulnerability, discovered by California-based cybersecurity firm AppOmni, demonstrates how an attacker with no credentials can impersonate a user and drive privileged actions through an AI workflow.
The US National Vulnerability Database states that this Virtual Agent flaw (tracked as CVE-2025-12420) in the ServiceNow AI Platform “could enable an unauthenticated user to impersonate another user and perform the operations that the impersonated user is entitled to perform.”
The CVE record also states that ServiceNow “addressed this vulnerability by deploying a relevant security update to hosted instances in October 2025.” TechInformed has reached out to ServiceNow for comment on this issue.
The technical bug was closed months before public disclosure. However, the architectural lesson remains.
The trust shortcut: a shared secret used for bot-to-platform authentication
AppOmni’s discovery begins in a part of ServiceNow many enterprises rely on: the Virtual Agent API, which lets organizations expose ServiceNow chatbot “topics” to external platforms such as Slack or Microsoft Teams. AppOmni describes the API as a bridge that allows conversations to occur outside the ServiceNow web interface, while still invoking ServiceNow workflows.
Aaron Costello, chief of security research at AppOmni and the researcher credited with discovering the flaw, told TechInformed that the entry point was tied to a static credential used for those external integrations:
“The shared secret is a password used by external integrations such as Slack and Microsoft Teams, etc., to communicate (authenticate) with the Virtual Agent API on behalf of a user.”
Costello said the design problem was not simply that a secret existed, but that it was not unique per customer instance: “Since this secret was ‘shared’ across all ServiceNow instances, it meant that an attacker could communicate with the Virtual Agent API through the same mechanism that is intended to be used by external integrations / bots. It was the initial ‘entry point’ for an attacker.”
AppOmni’s published post describes the exploit chain in similar terms. It says an attacker could chain “a hardcoded, platform-wide secret” with identity-linking logic that “trusts a simple email address,” and in doing so, bypass Multi-Factor Authentication (MFA), Single Sign-On (SSO) and other access controls in the demonstrated flow.
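To make the trust shortcut concrete, the sketch below shows, in generic terms, how an integration pattern like the one AppOmni describes can go wrong. The endpoint path, parameter names and secret are hypothetical illustrations rather than ServiceNow’s actual API; the point is that a platform-wide static token plus a caller-supplied email address becomes the entire identity proof.

```python
# Hypothetical illustration of the flawed trust pattern AppOmni describes:
# a static, platform-wide secret authenticates the *integration*, and a
# caller-supplied email is trusted to identify the *user*. The endpoint and
# field names here are invented for illustration only.
import requests

SHARED_SECRET = "same-value-on-every-instance"   # platform-wide, not per customer

def send_as_user(instance_url: str, victim_email: str, message: str) -> dict:
    """Drive a chatbot 'topic' while impersonating whoever owns victim_email.

    Nothing here proves the caller controls that mailbox, and no MFA or SSO
    challenge is involved, because the bot channel sits outside those flows.
    """
    resp = requests.post(
        f"{instance_url}/api/hypothetical/va/message",   # illustrative path
        headers={"Authorization": f"Bearer {SHARED_SECRET}"},
        json={"user_email": victim_email, "text": message},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```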
The CVE record’s framing is more conservative, focusing on impersonation and performing operations the impersonated user is entitled to perform. That conservative phrasing is the safest baseline for enterprises assessing exposure.
Why “agentic” changes the stakes: capability scales with what your agents can do
A typical identity flaw raises one core question: what can an attacker access as a compromised user? In agentic systems, there is a second question that can matter more: what can the platform do for that user once the identity is accepted?
Costello warns that “the more AI agents a company has built to do powerful or sensitive actions, the more an attacker can do.”
In his proof-of-concept, he focused on an outcome that turns a workflow compromise into a control-plane concern: “In my PoC, I chose to show how I could create a new user with admin permissions, which an attacker could use in order to gain internal access into the platform.”
AppOmni’s write-up argues that Virtual Agent APIs can become unintended execution paths for privileged AI workflows. While ServiceNow has issued fixes, Costello notes the exposure was universal because of the platform’s default configuration: “Since this AI agent is given to all ServiceNow customers by default, it meant that all customers were at risk.”
AppOmni argues this elevates the flaw from a simple configuration error to a systemic concern for any organization enabling AI on the platform.
AppOmni’s post also lists the affected components and fixed versions: it says the issue affected certain versions of Now Assist AI Agents (sn_aia) and the Virtual Agent API (sn_va_as_service), and it published the earliest fixed versions for each.
The enterprise pattern: AI is deployed fast, then authorization discipline catches up
Identity and authorization design is where these stories often become enterprise problems. In a statement to TechInformed, Eric Woodruff, chief identity architect at Semperis, captures the reckless ‘shadow AI’ mentality often seen in the field.
“I’ve seen many firsthand instances where business teams acquire an AI system and just point it at everything, and the typical fallout is the post realization that there are a lot of overly permissive authorization models on the data itself,” Woodruff says.
This ‘point-it-at-everything’ approach creates a governance vacuum where security guardrails must then be retrofitted.
Woodruff warned against governance so restrictive that it pushes usage into unmanaged channels, but he also said organizations need rules they can enforce: “CIOs/CISOs need to balance not creating ‘shadow AI’ scenarios by stifling AI, but do need to develop an AI policy and hold the business to it.”
He added that security involvement has to begin earlier than incident response: “It needs to be prioritized that security works along with the business unit during procurement and deployment of the AI system. This will help ensure that security can assist in implementing guardrails proactively.”
Auditability becomes fragile when “the agent acts as the user”
For many boards and CIOs, the most consequential question is not “could the workflow be abused,” but “could we prove what happened.”
NIST SP 800-53 is the US National Institute of Standards and Technology’s catalog of security and privacy controls for federal information systems and organizations.
It includes a non-repudiation audit control (AU-10), which requires systems to protect against an individual, or a “process acting on behalf of an individual,” falsely denying having performed covered actions.
Costello’s concern is that the most visible record in an agent interaction may not show the truth an investigator needs:
“Conversation logs do not surface if the user who initiated the conversation was ‘spoofed’.”
Costello warns that basic log auditing is no longer sufficient. Beyond simply “correlating data,” he advises defenders to watch the source IP address:
“Seeing that the IP address in the transaction logs is different from the admin’s usual IP address and that the user account created by the AI agent has an external email, that’s going to ring alarm bells.”
This specific mismatch serves as a critical indicator that an AI agent may have been manipulated into creating an unauthorized administrative path.
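A defender could operationalise that check with a simple log review. The sketch below assumes a generic export of transaction-log events and a map of each admin’s usual source addresses; the field names and the internal-domain list are placeholders of our own, not ServiceNow’s actual log schema.

```python
# Hedged sketch: flag admin-account creations whose source IP does not match
# the acting admin's usual addresses and whose new account uses an external
# email domain. Field names are illustrative placeholders, not a real schema.
from typing import Iterable

INTERNAL_DOMAINS = {"example.com"}          # assumption: your corporate domains

def suspicious_admin_creations(events: Iterable[dict],
                               usual_ips: dict[str, set[str]]) -> list[dict]:
    flagged = []
    for ev in events:
        # Only look at user-creation events that granted an admin role.
        if ev.get("action") != "user.create" or "admin" not in ev.get("roles_granted", []):
            continue
        actor = ev.get("acting_user")
        ip_unusual = ev.get("source_ip") not in usual_ips.get(actor, set())
        domain = ev.get("new_user_email", "").rsplit("@", 1)[-1].lower()
        email_external = domain not in INTERNAL_DOMAINS
        if ip_unusual and email_external:
            flagged.append(ev)              # both signals Costello calls out
    return flagged
```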
“Least privilege” still applies, but agents make violations more expensive
It is easy to talk about “agentic security” as if it is separate from classic security. BodySnatcher is a reminder that the old concepts still describe the failure modes. They simply appear through new interfaces.
The CVE record lists CWE-250 (Execution with Unnecessary Privileges) as the weakness category. MITRE’s definition warns that running with extra privileges can amplify the impact of other weaknesses by giving an attacker access to resources allowed by those privileges.
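In code terms, least privilege for an agent means the platform consults an explicit allow-list of actions before executing anything on a user’s behalf, rather than letting the agent inherit everything the linked identity can do. The sketch below is a generic illustration of that gate; the agent names, action names and dispatch hook are assumptions for the example, not any vendor’s API.

```python
# Illustrative least-privilege gate for agent actions (CWE-250 mitigation in
# spirit): the agent may only perform actions explicitly granted to it, even
# if the impersonated user is entitled to far more. Names are placeholders.
ALLOWED_AGENT_ACTIONS = {
    "hr_bot": {"open_ticket", "check_ticket_status"},   # no user administration
}

def dispatch(action: str, payload: dict) -> None:
    """Hand the approved action to the workflow engine (out of scope here)."""
    ...

def execute_agent_action(agent_id: str, action: str, payload: dict) -> None:
    allowed = ALLOWED_AGENT_ACTIONS.get(agent_id, set())
    if action not in allowed:
        raise PermissionError(f"{agent_id} is not permitted to perform {action}")
    dispatch(action, payload)
```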
AppOmni’s conclusion is similar in practical terms. It argues that preventing abuse of agentic AI in conversational channels requires stronger provider configuration controls (including enforced MFA for account linking), an agent approval process, and lifecycle management policies to de-provision unused agents.
Costello framed the lifecycle point in language that mirrors how enterprises already think about user accounts:
“When security vulnerabilities such as the one I found surface, every single AI agent (active, dormant, and/or unused) that exists becomes a potential weapon that can be used against the organization.”
He added the operational implication:
“This is why it’s crucial to maintain good hygiene and deactivate or delete AI agents that no longer serve a purpose and are not in use.”
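That hygiene reduces to a periodic sweep: inventory every agent and deactivate anything dormant past a threshold. The sketch below is a generic illustration under our own assumptions; the agent record shape and the 90-day window are placeholders, not a ServiceNow API or recommendation.

```python
# Illustrative lifecycle sweep: find AI agents that have not been used within
# a retention window so they can be deactivated or deleted. The AgentRecord
# shape is an assumption for the sketch, not a real platform object.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class AgentRecord:
    name: str
    active: bool
    last_used: datetime | None    # timezone-aware timestamp; None = never invoked

def agents_to_deactivate(agents: list[AgentRecord],
                         max_idle_days: int = 90) -> list[AgentRecord]:
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_idle_days)
    return [a for a in agents
            if a.active and (a.last_used is None or a.last_used < cutoff)]
```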
What changes in practice
AppOmni’s post calls for tightening provider configuration and account linking, formalizing agent approval, and managing the agent lifecycle.
Woodruff’s statement lands in the same place from the enterprise side: policy, security involvement at procurement and deployment, and guardrails that prevent “point AI at everything” from becoming “grant AI access to everything.”