AI agents, identities, and legacy tech: The new security frontier

Automation is racing ahead of security controls. As AI agents join corporate networks, experts warn that 2026 will test enterprise resilience. ExtraHop’s Jamie Moles says governance, visibility, and culture will define whether businesses stay ahead or get blindsided.


The coming year could mark a turning point in enterprise cybersecurity. With artificial intelligence now embedded across everything from service desks to DevOps pipelines, many organisations are rushing to deploy agentic tools faster than they can secure them.

“If an attacker compromises one, they’ll get all the access without any of the scrutiny,” says Jamie Moles, Senior Technical Manager at ExtraHop. “AI agents need to be treated like high-risk identities and managed like a normal employee: monitored, restricted and continuously verified. Blind trust in opaque systems is how breaches happen and AI governance will be even more key come next year.”

That warning reflects a broader shift in risk perception. According to IBM’s 2025 Cost of a Data Breach report, stolen or compromised credentials were implicated in more than 80% of breaches last year. As multi-cloud environments expand and third-party integrations multiply, the likelihood of attackers exploiting identity systems continues to grow. The priority, Moles argues, will be “watching what happens after someone gets in and identifying suspicious behaviours early.”

Yet much of the danger still lies in legacy technology. Research from the Ponemon Institute suggests that more than 60% of businesses continue to operate unsupported systems, many of which are connected to modern cloud applications. “Everyone knows they’re fragile,” Moles adds. “Modernisation is now imperative for risk reduction; it’s not just a nice-to-have.”

Even so, the most persistent risk remains human. Proofpoint’s Human Factor 2025 report found that over 90% of successful breaches begin with social engineering. As automation reduces direct oversight, the psychological leverage on employees increases. “People still read MFA codes aloud and approve unfamiliar access requests,” Moles says. “Regular reinforcement and removing decision pressure will be the key factors in managing this risk.”

If 2025 was the year of generative AI adoption, 2026 will be the year of AI accountability, where automation and human vigilance converge. For now, one principle holds: you can’t defend what you don’t understand, and you can’t trust what you can’t see.
