Studio Graphene says unapproved AI use is becoming a visible governance problem inside UK organisations, with new research suggesting policy and oversight are lagging behind how quickly employees are adopting generative tools in their daily work.
The company’s survey of 500 UK managers, directors, and C-suite executives found that 48% know or suspect employees are using AI tools that have not been officially approved. That figure rises to 54% in organisations with more than 250 employees. Just under two-thirds (64%) said they are worried that unregulated AI use could create security or compliance risks, while 34% said their organisation has no formal policy or guidelines covering AI use at all. A further 37% said expectations around AI use have not been clearly communicated to staff.
The figures underline how quickly AI adoption is moving from experimentation to an operational control problem. The National Cyber Security Centre has long warned about the dangers of shadow IT, describing unmanaged tools and assets as a route to sensitive data loss and wider organisational risk. In the AI era, that warning is taking a more specific form. Employees do not need to install a full software stack to create exposure. Often, a single upload, pasted document, or unsanctioned workflow is enough.
Ritam Gandhi, director and founder of Studio Graphene, said: “Shadow AI isn’t the result of malice or even carelessness. It’s often the result of a disconnect between senior leadership and their teams – if the organisation is sanctioning or investing in AI tools that are not working well or delivering value, employees will turn to unsanctioned alternatives that will enable them to do their jobs better.”
Recent reporting points to a widening gap between perceived AI maturity and the controls needed to scale it safely, while other market data has suggested that employees often move faster than leadership on practical AI use. Studio Graphene’s own findings reinforce that tension: 61% of respondents said frontline staff are more comfortable using AI day to day than senior leaders, even as 59% worry that over-reliance on the technology could increase mistakes.
That combination creates a difficult management problem. Businesses need enough governance to manage privacy, compliance, and auditability, but they also need approved tools that are useful enough to stop employees going elsewhere. The Information Commissioner’s Office has continued to emphasise that AI use still sits within the principles of UK GDPR, including accountability and risk assessment. In other words, convenience does not remove legal responsibility.
Gandhi added: “It all comes down to precise strategy and effective integration. Businesses need a clear picture of where AI can make a meaningful impact and then, crucially, they have to embed it effectively into workflows so the AI can inform decisions or improve processes. Without that, AI projects are doomed to fail, meaning employees will continue to source their own AI tools – and that undoubtedly creates risks where data privacy, security and regulatory compliance are concerned.”
The issue for leaders is therefore no longer whether shadow AI exists. It is whether official strategy can catch up before unofficial usage becomes the de facto operating model.