UK leaders establish worldwide benchmark for AI evaluations

The UK introduces a new AI audit standard. The British Standards Institution has launched the world’s first international certification standard for AI assurance, creating new benchmarks for independence, rigour, and global governance.


The British Standards Institution (BSI) has introduced the world’s first international standard aimed at assuring the quality of AI audits and safeguarding against unregulated providers in the burgeoning AI sector. The standard stands apart from broader AI governance frameworks by certifying the bodies that audit AI systems, rather than the AI systems themselves.

The new standard sets out robust governance mechanisms for auditors assessing compliance, adding a verification layer that bolsters trust in AI governance certifications and ensures consistency among certifying entities. The BSI noted that “hundreds of firms in the UK” are offering AI assurance services, many of them run by AI developers themselves, raising concerns about the independence and rigour of these offerings.

This development coincides with most of the Big Four accounting firms establishing new AI assurance services, aiming to tap into an emerging revenue stream. The standard is expected to be used by accreditation bodies and conformity assessment organisations to promote trustworthy AI assurance practices, supporting alignment with developing global regulatory frameworks.

Unlike comparable frameworks in the United States, European Union, and Asia-Pacific, which primarily define what must be audited — such as high-risk AI systems or providers — the UK’s BS ISO/IEC 42006 takes a novel meta-assurance approach by regulating who is allowed to audit. By focusing on the competence, independence, and governance of the auditors themselves, the standard establishes an oversight layer not yet mandated by other regimes. While the EU AI Act hinges on notified bodies for conformity assessments and the US NIST framework encourages voluntary system-level audits, BS ISO/IEC 42006 formalises auditor accreditation and multi-role review processes, positioning the UK as an early mover in auditor accountability.

Commenting on the guidance, Mark Thirlwell, Global Digital Director at BSI, warned of the dangers of a “wild west” of unregulated providers and inconsistent assessments as firms race to enter the AI audit market. He stressed that strong, coherent, and consistent evaluations are essential to underpin trust in a secure AI ecosystem.

Thirlwell added that the new guidance, BS ISO/IEC 42006, marks a significant step forward in global AI accountability by establishing clear certification criteria. He said the standard would enable regulators, customers, and investors to distinguish credible AI governance practices from unsubstantiated claims, supporting responsible AI innovation and paving the way for AI to be a force for good.

