AI agents, identities, and legacy tech: The new security frontier

Automation is racing ahead of security controls. As AI agents join corporate networks, experts warn that 2026 will test enterprise resilience. ExtraHop’s Jamie Moles says governance, visibility, and culture will define whether businesses stay ahead or get blindsided.

The coming year could mark a turning point in enterprise cybersecurity. With artificial intelligence now embedded across everything from service desks to DevOps pipelines, many organisations are rushing to deploy agentic tools faster than they can secure them.

“If an attacker compromises one, they’ll get all the access without any of the scrutiny,” says Jamie Moles, Senior Technical Manager at ExtraHop. “AI agents need to be treated like high-risk identities and managed like a normal employee: monitored, restricted, and continuously verified. Blind trust in opaque systems is how breaches happen, and AI governance will be even more key come next year.”
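
In practice, that means scoped, expiring credentials and per-action checks rather than a standing login. Below is a minimal Python sketch of the “monitored, restricted, and continuously verified” pattern; the AgentPolicy wrapper, scope names, and TTL are illustrative assumptions, not any specific product’s API.

```python
import time
import secrets

# Illustrative sketch: treat an AI agent as a high-risk identity with a
# short-lived credential, an explicit scope allowlist, and an audit trail.
class AgentPolicy:
    def __init__(self, agent_id, allowed_scopes, ttl_seconds=900):
        self.agent_id = agent_id
        self.allowed_scopes = set(allowed_scopes)  # least privilege: explicit allowlist
        self.token = secrets.token_urlsafe(32)     # credential expires, so access is re-earned
        self.expires_at = time.time() + ttl_seconds
        self.audit_log = []                        # every decision is recorded

    def authorize(self, token, scope):
        """Verify every action the agent takes, not just its first login."""
        ok = (
            token == self.token
            and time.time() < self.expires_at      # credential still valid?
            and scope in self.allowed_scopes       # action explicitly allowed?
        )
        self.audit_log.append((time.time(), self.agent_id, scope, ok))
        return ok

# A service-desk agent may read and comment on tickets, nothing else.
policy = AgentPolicy("svc-desk-agent-01", {"tickets:read", "tickets:comment"})
assert policy.authorize(policy.token, "tickets:read")       # permitted
assert not policy.authorize(policy.token, "billing:write")  # denied and logged
```

The point of the expiry and the per-call check is that a stolen token buys an attacker only a narrow window and a narrow scope, while the audit log makes any misuse visible.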

That warning reflects a broader shift in risk perception. According to IBM’s 2025 Cost of a Data Breach report, stolen or compromised credentials were implicated in more than 80% of breaches last year. As multi-cloud environments expand and third-party integrations multiply, the likelihood of attackers exploiting identity systems continues to grow. The priority, Moles argues, will be “watching what happens after someone gets in and identifying suspicious behaviours early.”
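
The idea can be illustrated with a toy behavioural baseline: learn which resources each identity normally touches, then flag sessions that use valid credentials but behave unusually. This is a simplified sketch; the identity and resource names are invented, and real detection weighs far more signals than first-seen access.

```python
from collections import defaultdict

# Toy post-compromise detection: baseline the resources each identity
# normally accesses, then flag anything outside that baseline.
baseline = defaultdict(set)

def observe(identity, resource, learning=False):
    """Return True if this access looks anomalous for this identity."""
    if learning or resource in baseline[identity]:
        baseline[identity].add(resource)
        return False
    return True  # credentials were valid, but the behaviour is new

# Learn a period of normal activity, then watch live events.
for resource in ["crm", "mail", "wiki"]:
    observe("j.smith", resource, learning=True)

print(observe("j.smith", "mail"))           # False: normal for this identity
print(observe("j.smith", "hr-payroll-db"))  # True: flagged despite a valid login
```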

Yet much of the danger still lies in legacy technology. Research from the Ponemon Institute suggests that more than 60% of businesses continue to operate unsupported systems, many of which are connected to modern cloud applications. “Everyone knows they’re fragile,” Moles adds. “Modernisation is now imperative to risk reduction; it’s not just a nice-to-have.”

Even so, the most persistent risk remains human. Proofpoint’s Human Factor 2025 report found that over 90% of successful breaches begin with social engineering. As automation reduces direct oversight, the psychological leverage on employees increases. “People still read MFA codes aloud and approve unfamiliar access requests,” Moles says. “Regular reinforcement and removing decision pressure will be the key factors in managing this risk.”

If 2025 was the year of generative AI adoption, 2026 will be the year of AI accountability, where automation and human vigilance converge. For now, one principle holds: you can’t defend what you don’t understand, and you can’t trust what you can’t see.

