The Center for Internet Security (CIS), Astrix Security, and Cequence Security have released three new CIS Critical Security Controls Companion Guides covering large language models (LLMs), AI agents, and Model Context Protocol (MCP) environments. Announced in London on 21 April, the guides are intended to help enterprises apply the CIS Controls to AI systems already being deployed across their operations.
The AI LLM Companion Guide covers prompts, context handling, and the exposure of sensitive information. The AI Agent Companion Guide focuses on tool execution, governed autonomy, and access to enterprise systems. The MCP Companion Guide addresses secure tool access, non-human identities, and auditable interactions across the protocol layer.
Curtis Dukes, Executive Vice President and General Manager of Security Best Practices at CIS, said: “These guides reflect a shared effort to bring clarity to an area where organisations are seeking direction. By combining our collective expertise, we translated the CIS Controls into concrete steps that help teams secure AI systems across the model, agent, and protocol layers.”
The organisations said the guides target risks including data leakage, unbounded agent autonomy, credential misuse, and unsafe tool execution. Astrix contributed work on AI agents, MCP servers, and non-human identities, while Cequence focused on application, data, and API security. Jonathan Sander, Field CTO of Astrix Security, said: “AI agents introduce a new operational surface that organisations must understand before they scale.” Shreyans Mehta, CTO and Co-Founder of Cequence Security, said the partnership had created guidance that helps organisations enable “agentic AI safely”.