The recent launch of a non-profit AI watchdog by Yoshua Bengio, one of the pioneers of modern deep learning, signals a pivotal moment in the AI debate. His new organisation aims to hold AI systems to account — probing their reasoning, flagging dishonesty, and ultimately answering a growing concern in boardrooms: if an AI sounds confident, can it still be wrong?
The answer, increasingly, is yes. Generative AI systems are extraordinarily fluent, but not reliably truthful — a mismatch that many business leaders are still underestimating.
“Businesses need to approach generative AI on the basis that it is always ‘hallucinating’,” says Will Richmond-Coggan, partner specialising in Responsible AI and Data at Freeths LLP. “Often, the hallucinations will be benign, or even helpful. But they should never be trusted without verification or validation. What is perceived as dishonesty or fabrication is not maliciously motivated – the AI doesn’t ‘know’ that it is being dishonest. Traditional governance is poorly equipped to deal with a technology which, by design, is incapable of being trusted or reliably accurate.”
This concept of ‘polished dishonesty’ — outputs that appear highly plausible, but are fundamentally false — is at the heart of the new transparency challenge. “We’re dazzled by outputs that look right, even when they’re miles off,” says Jo Sutherland, Managing Director at Magenta Associates. “These systems don’t just get things wrong; they make being wrong look right. And that’s dangerous, especially in sectors where trust, accuracy, and nuance matter.”
The result, Sutherland warns, is that AI systems can embed error and amplify bias — all while sounding helpful and persuasive. “The problem is, traditional governance tools aren’t built for this. Most risk frameworks assume you’ll spot the warning signs when something goes awry, but AI doesn’t wave red flags. It’s a people-pleaser. That’s why governance needs to evolve, fast.”
Even among companies already investing in AI, capability gaps persist. “Nearly two-thirds (64%) globally reported that their organisation’s AI readiness is not as effective as it should be,” says Narasimha Goli, Chief Technology Officer at Iron Mountain. “Only 35% said their information management strategies in AI-readiness are consistently generating value.”

The result is a growing tension between the speed of AI adoption and the maturity of organisational safeguards. For Greg Nieuwenhuys, AI expert and senior partner at Generative AI Strategy, transparency begins with demystifying how systems reach their conclusions. “Honest AI means showing how outputs were generated – sources, confidence, assumptions. It also means clearly signalling when something is uncertain or extrapolated.”
But transparency doesn’t just mean adding context — it also means acknowledging trade-offs. “You lose a bit of speed or ‘wow factor’ — but gain trust, control and safety. In the long run, that’s a better business outcome,” Nieuwenhuys adds.
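What that looks like in practice can be sketched in a few lines of code. The structure below is purely illustrative: the field names, the 0.7 confidence threshold and the caveat wording are hypothetical rather than drawn from any particular product, but they capture the idea of an answer that travels with its sources, confidence and assumptions instead of arriving as bare text.

```python
from dataclasses import dataclass, field


@dataclass
class HonestAnswer:
    """An AI response packaged with the context needed to judge it."""
    text: str                    # the generated answer itself
    sources: list[str]           # documents or URLs the answer drew on
    confidence: float            # model- or retrieval-derived score, 0.0 to 1.0
    assumptions: list[str] = field(default_factory=list)  # gaps the system filled in itself

    def is_verifiable(self) -> bool:
        """An answer with no cited sources should be treated as unverified."""
        return bool(self.sources)


def present(answer: HonestAnswer, min_confidence: float = 0.7) -> str:
    """Render an answer with explicit caveats instead of hiding uncertainty."""
    caveats = []
    if answer.confidence < min_confidence:
        caveats.append(f"Low confidence ({answer.confidence:.0%}); verify before use.")
    if not answer.is_verifiable():
        caveats.append("No sources cited; treat as unverified.")
    if answer.assumptions:
        caveats.append("Assumptions made: " + "; ".join(answer.assumptions))
    return answer.text + ("\n\n" + "\n".join(caveats) if caveats else "")
```

The trade-off Nieuwenhuys describes is visible even here: every caveat added slows the interaction and dulls the "wow factor", but it tells the reader exactly how much weight the answer can bear.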
For many firms, bridging the trust gap will require new tools and external oversight. “This type of tool is likely to become indispensable,” says Richmond-Coggan, referring to independent auditing models. “We already recommend to clients that they appoint a ‘red team’ to destruction test any model that they are looking to deploy, to check how it might be misused or how it might react to unexpected or impermissible user behaviours. But in reality, human testers will only be able to check a limited array of responses, whereas a suitably instructed AI testing model would be able to undertake such testing at a dramatically larger scale.”
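A scaled-up version of that approach can be sketched as a simple loop in which one model probes another and a third judges the result. The outline below is hypothetical rather than any firm's actual tooling: the probe themes are placeholders, and `target`, `attacker` and `judge` are assumed to be thin wrappers around whatever models an organisation already runs.

```python
import random

# Hypothetical probe themes a testing model might be instructed to explore;
# in practice the attacker model would generate these dynamically.
PROBE_THEMES = [
    "prompt injection",
    "data exfiltration",
    "policy evasion",
    "fabricated citations",
]


def red_team(target, attacker, judge, rounds: int = 1000) -> list[dict]:
    """Run an automated red-team campaign at a scale human testers cannot match.

    target(prompt) returns the deployed system's response,
    attacker(theme) returns an adversarial prompt on that theme, and
    judge(prompt, response) returns True when the response breaches policy.
    """
    findings = []
    for _ in range(rounds):
        theme = random.choice(PROBE_THEMES)
        prompt = attacker(theme)
        response = target(prompt)
        if judge(prompt, response):
            findings.append({"theme": theme, "prompt": prompt, "response": response})
    return findings
```

The point is less the code than the economics: a human red team might exhaust itself on a few hundred prompts, while an instructed testing model can run the same loop millions of times and surface only the failures for human review.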
That shift is already visible in large organisations. “Smart businesses are implementing ‘AI FinOps’ – understanding exactly how AI investments drive business value, crucially keeping tight control on development and inference costs,” says Simon James, Group Vice President of Data Science & AI at Publicis Sapient. “When you can track how AI agents reduce manual work, accelerate decision-making, or improve customer experiences, governance becomes a value driver rather than a compliance burden.”
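In practice, an "AI FinOps" view often starts with something as simple as attributing every inference call to the business outcome it served. The ledger below is a minimal, hypothetical sketch: the model names, per-token prices and value-driver labels are placeholders rather than figures from Publicis Sapient, but the pattern of recording cost against value drivers is the point.

```python
from collections import defaultdict

# Illustrative per-1K-token prices; real figures depend on the provider and model.
PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.01}


class AIFinOpsLedger:
    """Attribute inference spend to the business outcome each call served."""

    def __init__(self):
        self.spend_by_driver = defaultdict(float)
        self.calls_by_driver = defaultdict(int)

    def record(self, model: str, tokens: int, value_driver: str) -> None:
        """Log one inference call against a value driver such as
        'reduced manual work' or 'faster decisions'."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self.spend_by_driver[value_driver] += cost
        self.calls_by_driver[value_driver] += 1

    def report(self) -> dict[str, float]:
        """Total spend per value driver, so reviews can weigh cost against benefit."""
        return dict(self.spend_by_driver)


ledger = AIFinOpsLedger()
ledger.record("large-model", tokens=12_000, value_driver="reduced manual work")
ledger.record("small-model", tokens=3_000, value_driver="faster decisions")
print(ledger.report())
```

Once spend is broken down this way, the governance conversation James describes becomes straightforward: each value driver either justifies its inference bill or it doesn't.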
Transparency, in short, is becoming a strategic asset — not just a legal obligation. “In a business context, a truly honest AI system doesn’t just give an answer – it shows its work,” says Ian Quackenbos, Lead of AI Innovation & Incubation at SUSE. “That could mean surfacing uncertainty, pointing to the sources it drew from, or making it easier to understand why it responded the way it did.”
For now, many AI systems can’t deliver that. But as scrutiny grows and pressure mounts — from regulators, customers, and employees alike — the companies that can demonstrate honesty in their AI systems are likely to be the ones that win trust and grow with confidence.




