Nearly every major enterprise deploying artificial intelligence has incurred financial losses linked to its use, according to new global research from EY.
The consultancy’s latest Responsible AI Pulse 2025 survey found that 99 percent of large companies — those with annual revenues exceeding US $1 billion — had suffered some form of risk-related financial loss since adopting AI tools. Across 975 executives surveyed in 21 countries, the combined cost was estimated at around US $4.4 billion.
The most common causes of those losses were compliance failures (57 percent), negative impacts on sustainability goals (55 percent), and bias in AI outputs (53 percent). Other contributing factors included weak explainability, legal exposure, and reputational damage. Nearly two-thirds of respondents said losses had exceeded US $1 million.
Yet EY’s findings suggest a widening divide between companies that have established clear governance processes and those still experimenting with early deployments. The study found that organisations with advanced “Responsible AI” frameworks — including risk-control mapping, real-time oversight, and board-level accountability — were significantly more likely to report stronger revenue growth, cost savings, and employee satisfaction.
On average, respondents said they had implemented seven of ten recommended governance measures, but when tested on how to match controls to risks in a hypothetical use case, only 12 percent scored a perfect result. Performance varied sharply by role: chief information and technology officers fared best, while chief executives, marketing, and operations heads lagged behind.
“AI is absolutely improving efficiency and productivity — people are doing more, faster,” said Joe Depa, EY’s Global Chief Innovation Officer. “But the value capture lags because those gains are being reinvested into doing more work, not necessarily into cutting costs or driving immediate revenue.”
The survey points to a maturing but uneven landscape. While AI systems are already embedded in most large businesses, the financial benefit often depends on how responsibly they are managed. As regulation tightens — from the EU AI Act to sector-specific guidance in the US and UK — the cost of non-compliance is expected to rise.
EY argues that responsible AI governance should be seen less as a compliance exercise and more as a performance tool. The correlation between mature controls and better outcomes, it says, underscores that risk management is becoming a strategic differentiator rather than a defensive necessity.
The consultancy’s data also highlight a lingering skills gap at board level. Many executives remain uncertain about which safeguards apply to which risks — a mismatch that could widen as generative and agentic AI models move deeper into operations.
EY’s full report, Responsible AI Pulse 2025, is available on the company’s website.