The Center for Internet Security (CIS) has published three AI security companion guides that adapt CIS Controls v8.1 to large language model (LLM), AI agent and Model Context Protocol (MCP) environments, extending a familiar security control set into systems that can access enterprise data and execute actions through connected tools.

Published with non-human identity security firm Astrix Security and API security company Cequence Security, the guides divide the AI stack into three layers.

The LLM guide addresses prompt, context and sensitive-data risks. The agent guide covers safe tool execution, governed autonomy and access to enterprise systems. The MCP guide focuses on secure tool access, non-human identities and auditable interactions across the protocol layer.

The release's core operational framework is this three-layer architectural split. CIS's technical post says LLMs process and generate information, agents add reasoning, planning, memory and action, and MCP determines how AI systems interact with tools, services and data.

CIS also says no single layer can secure the full system, because controls need to span input sanitization, context protection, tool validation, logging and output review.
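The point that controls must span layers can be made concrete with a small sketch. The function names, the allowlist and the blocked-term rule below are invented for illustration and do not come from the CIS guides; the structure simply shows input sanitization, tool validation, logging and output review composing around one request.

```python
# Hypothetical sketch: checks at the LLM, agent/MCP and output layers
# composed around a single request. All names and rules are illustrative.

def sanitize_input(prompt: str) -> str:
    """LLM layer: strip a known injection marker before the model sees it."""
    return prompt.replace("IGNORE PREVIOUS INSTRUCTIONS", "")

def validate_tool(tool_name: str, allowed: set[str]) -> bool:
    """Agent/MCP layer: only allowlisted tools may be invoked."""
    return tool_name in allowed

def review_output(text: str, blocked_terms: list[str]) -> bool:
    """Output layer: flag responses that leak sensitive markers."""
    return not any(term in text for term in blocked_terms)

audit_log: list[str] = []  # logging spans every layer

def handle_request(prompt: str, tool: str) -> str:
    clean = sanitize_input(prompt)
    if not validate_tool(tool, {"search_docs", "summarize"}):
        audit_log.append(f"blocked tool: {tool}")
        return "tool rejected"
    audit_log.append(f"invoked tool: {tool}")
    response = f"result for: {clean}"
    return response if review_output(response, ["API_KEY"]) else "output withheld"
```

No single function here secures the system; removing any one check leaves a gap the others cannot see, which is the guides' argument in miniature.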

Using existing security frameworks

The guides also lean on CIS Controls that many security teams already use, rather than asking them to adopt a new framework. CIS describes its 18 Critical Security Controls as prioritized and simplified cybersecurity best practices.

It says v8.1 is mapped to multiple legal, regulatory and policy structures, with updates for NIST CSF 2.0, revised asset classes and a new Govern function.

The risk of unsanctioned agents

Recent AI-agent risk data shows the context in which CIS is publishing the guides. A Cloud Security Alliance study found that 43% of organizations said more than half of employees regularly use AI agents, while 54% reported between one and 100 unsanctioned agents.

Only 31% had formally adopted an AI-agent governance policy and 44% reported low or no confidence in detecting AI-agent-specific threats.

Those findings also support the broader argument that AI security is not only a model problem. OWASP lists prompt injection, sensitive-information disclosure and excessive agency among major LLM application risks.

Its excessive-agency guidance ties the risk to excessive functionality, permissions and autonomy, then recommends limiting extensions, permissions and human approval gaps around high-impact actions.
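A minimal sketch of those two mitigations, limited permissions and a human approval gate for high-impact actions, might look like the following. The action names and the `HIGH_IMPACT` set are hypothetical, not taken from OWASP's guidance.

```python
# Hypothetical sketch of excessive-agency mitigations: per-agent
# permission grants plus a human approval gate on high-impact actions.

HIGH_IMPACT = {"delete_record", "send_payment"}  # illustrative labels

def execute_action(action: str, granted: set[str],
                   approved_by_human: bool = False) -> str:
    if action not in granted:
        return "denied: not permitted"             # limit permissions
    if action in HIGH_IMPACT and not approved_by_human:
        return "pending: human approval required"  # close the approval gap
    return f"executed: {action}"
```

The design choice is that autonomy is the default only for low-impact actions; anything destructive or financial requires an explicit human decision even when the agent already holds the permission.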

Moving from prompts to protocol security

MCP makes the access problem more concrete. Anthropic introduced MCP in 2024 as an open standard for secure two-way connections between data sources and AI tools, while the MCP specification describes it as a way to connect LLM applications with external data sources and tools.

CIS describes MCP security around secure tool access, non-human identity management and auditable protocol-layer interactions, while the MCP specification says the protocol lets applications expose tools and capabilities to AI systems.
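The pattern CIS describes, tools exposed to AI clients by non-human identities with every interaction auditable, can be sketched without the real MCP SDK. The class and method names below mimic the shape of an MCP-style server and are assumptions for illustration, not the actual protocol API.

```python
# Hypothetical MCP-style server: advertises tools to AI clients and
# records each invocation, attributed to a non-human identity.

from datetime import datetime, timezone

class ToolServer:
    def __init__(self) -> None:
        self._tools: dict[str, object] = {}
        self.audit: list[dict] = []

    def register(self, name: str, fn) -> None:
        self._tools[name] = fn

    def list_tools(self) -> list[str]:
        """What the server advertises to connected AI clients."""
        return sorted(self._tools)

    def call(self, identity: str, name: str, *args):
        """Invoke a tool as a named non-human identity, with an audit entry."""
        self.audit.append({"who": identity, "tool": name,
                           "at": datetime.now(timezone.utc).isoformat()})
        return self._tools[name](*args)

server = ToolServer()
server.register("lookup_order", lambda oid: {"order": oid, "status": "shipped"})
```

Because every call passes through one choke point, the audit trail captures which identity used which tool and when, which is the property the protocol-layer guidance is after.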

Enterprise access, not just prompts

Microsoft’s MCP documentation shows why the protocol layer matters to enterprise security teams. Copilot Studio supports MCP tools and resources. Microsoft says tools published by a connected server are made available to an agent, with server-side updates or removals reflected dynamically.

In Dynamics 365 finance and operations, Microsoft says its ERP MCP server can let agents create, read, update and delete application data, invoke business logic and access actions filtered by the security role of the user authenticated behind the agent.
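That filtering pattern, where an agent only sees the intersection of what a server offers and what the authenticated user's role permits, reduces to a small function. The roles and action names below are invented for illustration and are not Dynamics 365 identifiers.

```python
# Hypothetical sketch: an agent's available actions filtered by the
# authenticated user's security role, mirroring the pattern described
# above. Role and action names are illustrative only.

ROLE_ACTIONS: dict[str, set[str]] = {
    "accounts_payable_clerk": {"read_invoice", "create_invoice"},
    "auditor": {"read_invoice"},
}

def actions_for(role: str, server_actions: set[str]) -> set[str]:
    """Expose only the intersection of server actions and role grants."""
    return server_actions & ROLE_ACTIONS.get(role, set())
```

An unknown role resolves to the empty set, so an agent acting for an unrecognized identity sees no actions at all rather than a default grant.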

That operating pattern turns agent security into identity, authorization and audit work. NIST’s National Cybersecurity Center of Excellence is separately reviewing comments on a software and AI agent identity and authorization project that explores standards-based ways to identify, manage and authorize actions taken by software and AI agents.

For CISOs, the immediate change is the availability of AI-specific guidance tied to controls many security teams already use.
