A new study by the British Standards Institution (BSI) has found that many businesses are investing heavily in AI without adequate safeguards for its deployment. While leaders tout AI’s potential to boost productivity, the findings suggest most companies are operating largely on unfounded confidence in a technology they do not yet fully understand.
The research found that 62% of business leaders intend to increase AI investment over the next twelve months, citing greater efficiency and cost savings. Yet just 24% of companies have an AI governance programme in place, a figure that rises to one-third among larger enterprises.
BSI chief Susan Taylor Martin told City AM: “The business sector is gradually enhancing its understanding of the vast possibilities of AI, but the governance disparity is alarming.” She added: “AI will not serve as a remedy for slow growth or low productivity without strategic oversight and well-defined guardrails. Excessive confidence, along with disjointed and erratic governance, risks making many organisations susceptible to preventable failures and reputational harm.”
The report lands amid ongoing debate over how governments and regulators should balance fostering innovation with ensuring accountability. The UK government recently set out plans for its ‘AI growth lab’ — a regulatory sandbox intended to let organisations test AI in more tightly regulated environments. While industry leaders welcomed the initiative, some cautioned that safe experimentation must be grounded in strong ethical oversight.
The BSI study found that only 28% of executives know what data sources their businesses use to train or deploy AI tools, down from 35% earlier this year. Meanwhile, just two-fifths said their business has clear protocols for handling confidential data used in AI training.
Risk management appears patchy across sectors: nearly a third of executives said AI has introduced risks to their organisations, yet only 30% reported having a formal risk assessment process, and just 22% said their firms restrict unauthorised employee use of AI tools.
Despite the heavy investment, the BSI’s analysis found that the word ‘automation’ appeared seven times more often than ‘training’ or ‘education’ in executives’ annual reports, pointing to a wider neglect of workforce readiness. More than half of executives say they are confident in their staff’s ability to use AI effectively, yet only a third have put a corresponding training programme in place. Taylor Martin warned that “businesses may be underestimating the necessity of human oversight and capabilities in conjunction with technological progression.”
