RAIDS AI enters beta with real-time safety monitoring

AI safety platform RAIDS AI has opened its beta phase to new users. The launch follows a pilot focused on real-time detection of abnormal AI behaviour and comes as companies prepare for EU AI Act compliance.

RAIDS AI has launched the beta version of its monitoring platform, expanding access to a tool designed to detect rogue artificial intelligence behaviour in real time. The move follows an extended pilot phase in which early adopters tested the system’s ability to flag and log deviations in model performance before they result in bias, system failure, or legal exposure.

The platform provides dashboards for behavioural tracking, incident reporting, and tailored safety analysis, giving developers and compliance teams visibility over how their systems behave once deployed. During the pilot, participants were supported directly by the company’s in-house team to refine detection thresholds and alert protocols.

The launch arrives as businesses across Europe face a tightening regulatory environment for AI. The EU AI Act — the bloc’s first comprehensive legal framework for artificial intelligence — took effect in August 2024, with enforcement beginning in August 2026. The legislation classifies AI systems by risk level and places obligations on providers and deployers to monitor and mitigate harmful outcomes.

RAIDS AI positions its platform as part of that compliance infrastructure. Research by the team identified more than 40 documented AI failures in recent years, ranging from fabricated legal citations to unsafe autonomous driving responses. Each, it notes, led to reputational and financial consequences that might have been prevented through earlier behavioural detection.

Nik Kairinos, Chief Executive and Co-founder of RAIDS AI, said the company’s goal is to make AI safety measurable and manageable. He described the launch as “the start of the next exciting phase” for the business, adding that while AI innovation is “incredibly exciting,” it must be “balanced with regulation and safety.” Kairinos noted that decades of progress in deep learning have reached a point where “perpetual self-improvement changed the rules of the game,” and urged CIOs and CTOs to recognise the severity of AI risk. “AI safety is attainable,” he said. “Failure is not random or unpredictable — and by understanding how AI fails, we can give organisations the tools to capitalise on AI’s capabilities in a safe and managed way.”

The beta phase will allow more organisations to trial the platform free of charge for a limited period, ahead of full commercial rollout later next year.


