LangWatch has launched Scenario, an open-source framework designed to help organisations test AI agents for security weaknesses that standard prompt-based checks can miss.
The Amsterdam-based company said the release is aimed at businesses using or scaling AI-driven applications such as customer service bots and data analytics agents. It is designed for automated red teaming and AI penetration testing as more enterprise systems gain access to sensitive data and business-critical workflows.
LangWatch Scenario simulates realistic, multi-turn attacks against AI applications. Rather than relying on a single adversarial prompt, the framework builds context and trust over the course of a conversation, reflecting the way an attacker might probe an agent gradually before attempting to extract information or override expected behaviour.
The company said a second model evaluates the progress of each attack and adjusts the strategy as the exchange develops. LangWatch said this allows teams to identify weaknesses that may not appear in conventional testing, but can emerge after repeated interactions.
LangWatch said single-shot penetration tests are no longer enough for production AI systems, particularly those built on large language models that may disclose sensitive information after successive turns even when an initial direct request is rejected.
Scenario applies what LangWatch calls the Crescendo strategy, a four-phase structure that begins with low-friction exploration, moves through hypothetical prompts and authority-based claims such as compliance audit requests, and then applies greater pressure as the conversation develops. The framework is intended to show development teams where an AI agent becomes vulnerable during extended exchanges.
Rogerio Chaves, co-founder and CTO of LangWatch, commented: “An AI agent that rejects every single prompt gives you a false sense of security. In practice, cybercriminals do not work with a single direct question. They have dozens of relaxed conversations, build trust, and when the agent is in a cooperative mode after twenty turns, a request that would have been rejected in turn one suddenly becomes no problem at all.”
LangWatch said the framework is intended for organisations running AI applications in production, including banks, insurers, and AI-first software companies. It can be incorporated into existing development and continuous integration workflows, giving teams a structured way to test agent safety as products evolve.
Companies already using the wider LangWatch platform include Backbase, Buy It Direct, Ask Vinny, Visma, Skai, and PagBank. The new release extends that platform with automated red-team testing as more businesses formalise how they evaluate AI systems connected to internal data, customer records, and operational processes.
Manouk Draisma, co-founder and CEO of LangWatch, said: “It is rarely about a single spectacular hack. It is about patience and context. A cybercriminal who interacts calmly and systematically with an AI agent for twenty minutes can extract sensitive information that a direct attack would never reveal. LangWatch Red-Teaming makes these hidden risks visible before damage occurs.”
LangWatch said the framework is available immediately as open-source software and will form the basis of a broader set of red-team tools. More information is available on its red-teaming page and the Scenario GitHub repository.




You must be logged in to post a comment.