RAIDS AI enters beta with real-time safety monitoring

AI safety platform RAIDS AI has opened its beta phase to new users. The launch follows a pilot focused on real-time detection of abnormal AI behaviour and comes as companies prepare for EU AI Act compliance.


RAIDS AI has launched the beta version of its monitoring platform, expanding access to a tool designed to detect rogue artificial intelligence behaviour in real time. The move follows an extended pilot phase in which early adopters tested the system’s ability to flag and log deviations in model performance before they result in bias, system failure, or legal exposure.

The platform provides dashboards for behavioural tracking, incident reporting, and tailored safety analysis, giving developers and compliance teams visibility over how their systems behave once deployed. During the pilot, participants were supported directly by the company’s in-house team to refine detection thresholds and alert protocols.
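The company has not published its detection logic, but the approach it describes, comparing live model behaviour against a baseline and alerting when it drifts past a threshold, can be sketched generically. The snippet below is an illustrative Python example using assumed names (BehaviourMonitor, z_threshold); it is not RAIDS AI's implementation.

```python
# Minimal sketch of threshold-based behavioural monitoring (illustrative only;
# not RAIDS AI's actual API). A baseline distribution for a model metric is
# compared against a rolling window of live values; crossing the threshold
# raises an alert that a compliance dashboard could log as an incident.
from collections import deque
from statistics import mean, stdev

class BehaviourMonitor:
    def __init__(self, baseline, window=50, z_threshold=3.0):
        self.base_mean = mean(baseline)
        self.base_std = stdev(baseline)
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold  # hypothetical detection threshold

    def observe(self, value):
        """Record a live metric (e.g. error rate, refusal rate) and flag drift."""
        self.window.append(value)
        z = abs(mean(self.window) - self.base_mean) / (self.base_std or 1e-9)
        return {"alert": z > self.z_threshold, "z_score": round(z, 2)}

# Example: a model's hourly error rate drifts away from its pilot-phase baseline.
monitor = BehaviourMonitor(baseline=[0.02, 0.03, 0.025, 0.021, 0.027])
for rate in [0.03, 0.09, 0.12, 0.15]:
    print(monitor.observe(rate))
```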

The launch arrives as businesses across Europe face a tightening regulatory environment for AI. The EU AI Act, the bloc's first comprehensive legal framework for artificial intelligence, entered into force in August 2024; most of its obligations apply from August 2026. The legislation classifies AI systems by risk level and places obligations on providers and deployers to monitor and mitigate harmful outcomes.

RAIDS AI positions its platform as part of that compliance infrastructure. Research by the team identified more than 40 documented AI failures in recent years, ranging from fabricated legal citations to unsafe autonomous driving responses. Each, the company notes, led to reputational and financial consequences that might have been prevented through earlier behavioural detection.

Nik Kairinos, Chief Executive and Co-founder of RAIDS AI, said the company’s goal is to make AI safety measurable and manageable. He described the launch as “the start of the next exciting phase” for the business, adding that while AI innovation is “incredibly exciting,” it must be “balanced with regulation and safety.” Kairinos noted that decades of progress in deep learning have reached a point where “perpetual self-improvement changed the rules of the game,” and urged CIOs and CTOs to recognise the severity of AI risk. “AI safety is attainable,” he said. “Failure is not random or unpredictable — and by understanding how AI fails, we can give organisations the tools to capitalise on AI’s capabilities in a safe and managed way.”

The beta phase will allow more organisations to trial the platform free of charge for a limited period, ahead of a full commercial rollout next year.

You can register for beta access here.

