RAIDS AI enters beta with real-time safety monitoring

AI safety platform RAIDS AI has opened its beta phase to new users. The launch follows a pilot focused on real-time detection of abnormal AI behaviour and comes as companies prepare for EU AI Act compliance.

RAIDS AI has launched the beta version of its monitoring platform, expanding access to a tool designed to detect rogue artificial intelligence behaviour in real time. The move follows an extended pilot phase in which early adopters tested the system’s ability to flag and log deviations in model performance before they result in bias, system failure, or legal exposure.

The platform provides dashboards for behavioural tracking, incident reporting, and tailored safety analysis, giving developers and compliance teams visibility over how their systems behave once deployed. During the pilot, participants were supported directly by the company’s in-house team to refine detection thresholds and alert protocols.

The launch arrives as businesses across Europe face a tightening regulatory environment for AI. The EU AI Act — the bloc’s first comprehensive legal framework for artificial intelligence — took effect in August 2024, with enforcement beginning in August 2026. The legislation classifies AI systems by risk level and places obligations on providers and deployers to monitor and mitigate harmful outcomes.

RAIDS AI positions its platform as part of that compliance infrastructure. Research by the team identified more than 40 documented AI failures in recent years, ranging from fabricated legal citations to unsafe autonomous driving responses. Each, it notes, led to reputational and financial consequences that might have been prevented through earlier behavioural detection.

Nik Kairinos, Chief Executive and Co-founder of RAIDS AI, said the company’s goal is to make AI safety measurable and manageable. He described the launch as “the start of the next exciting phase” for the business, adding that while AI innovation is “incredibly exciting,” it must be “balanced with regulation and safety.” Kairinos noted that decades of progress in deep learning have reached a point where “perpetual self-improvement changed the rules of the game,” and urged CIOs and CTOs to recognise the severity of AI risk. “AI safety is attainable,” he said. “Failure is not random or unpredictable — and by understanding how AI fails, we can give organisations the tools to capitalise on AI’s capabilities in a safe and managed way.”

The beta phase will allow more organisations to trial the platform free of charge for a limited period, ahead of a full commercial rollout next year.

