AI adoption is surging, but trust isn’t keeping up

UK businesses are scaling AI faster than their safeguards mature. New research points to widening gaps in governance, data readiness, and workforce skills.


In its latest UK research, EY said businesses are still concentrated in basic applications such as document summarisation, chatbots, and task automation, with only 24% having embedded AI into core decision-making or wider operations.

The research also points to a consumer base that is using AI more often but remains cautious about how far it should act independently. EY's AI Sentiment Index surveyed 15,000 people across 15 countries, including 1,000 consumers in the UK, and found that 74% of UK respondents had used AI in the past six months. Yet only 14% said they would be comfortable relying on fully autonomous, agent-led systems. Trust in those handling AI-related data also remains limited: 43% said they trust companies, and 41% said they trust governments, to manage that data effectively.

Inside organisations, the same tensions are showing up in policy, oversight, and skills. EY said 40% of companies still lack formal AI policies or governance frameworks, while 45% said they do not have visibility into how company data is being shared through AI systems. Nearly half of organisations (49%) cited a lack of AI and digital expertise as a barrier to progress.

Matthew Ringelheim, EY UK and Ireland AI Leader, said: “AI adoption in the UK is rapidly advancing, but trust is not keeping pace with technological capability. Whilst consumers are engaging with AI every day, many still want greater clarity about who is accountable when decisions are made on their behalf.

“This is a critical moment for organisations. As AI systems become more autonomous, trust must be embedded through strong data foundations, clear accountability and visible human oversight. Our research shows UK users want greater control and transparency, reinforcing the need to move beyond AI adoption for its own sake. Organisations that can clearly demonstrate how autonomy is governed, and how people retain meaningful control, will be best positioned to scale AI responsibly and unlock long-term value.”

“As AI deployment cycles accelerate and everyday usage continues to spike, AI systems require strong transparency and governance practices to demonstrate consistent oversight and responsible data handling. Clear documentation, monitoring, and standardisation are important for building trust in these AI systems, which ultimately filters down into end-user confidence.”
