RAIDS AI has launched the beta version of its monitoring platform, expanding access to a tool designed to detect rogue artificial intelligence behaviour in real time. The move follows an extended pilot phase in which early adopters tested the system's ability to flag and log deviations in model behaviour before they lead to biased outcomes, system failures, or legal exposure.
The platform provides dashboards for behavioural tracking, incident reporting, and tailored safety analysis, giving developers and compliance teams visibility over how their systems behave once deployed. During the pilot, participants were supported directly by the company’s in-house team to refine detection thresholds and alert protocols.
The launch arrives as businesses across Europe face a tightening regulatory environment for AI. The EU AI Act — the bloc's first comprehensive legal framework for artificial intelligence — entered into force in August 2024, with most of its obligations applying from August 2026. The legislation classifies AI systems by risk level and places obligations on providers and deployers to monitor and mitigate harmful outcomes.
RAIDS AI positions its platform as part of that compliance infrastructure. Research by the team identified more than 40 documented AI failures in recent years, ranging from fabricated legal citations to unsafe autonomous driving responses. Each, it notes, led to reputational and financial consequences that might have been prevented through earlier behavioural detection.
Nik Kairinos, Chief Executive and Co-founder of RAIDS AI, said the company’s goal is to make AI safety measurable and manageable. He described the launch as “the start of the next exciting phase” for the business, adding that while AI innovation is “incredibly exciting,” it must be “balanced with regulation and safety.” Kairinos noted that decades of progress in deep learning have reached a point where “perpetual self-improvement changed the rules of the game,” and urged CIOs and CTOs to recognise the severity of AI risk. “AI safety is attainable,” he said. “Failure is not random or unpredictable — and by understanding how AI fails, we can give organisations the tools to capitalise on AI’s capabilities in a safe and managed way.”
The beta phase will allow more organisations to trial the platform free of charge for a limited period, ahead of a full commercial rollout planned for later next year.
You can register for beta access here.