EU to delay high-risk AI rules after industry pressure

The EU’s flagship AI regulation faces a significant postponement. Brussels is expected to delay enforcement of high-risk AI system rules until 2027 following sustained pressure from major technology providers. The decision gives companies longer to adapt but raises concerns about governance complacency and shifting legal accountability.

The European Union is set to postpone enforcement of its flagship Artificial Intelligence Act’s most stringent provisions — those covering ‘high-risk’ systems — until late 2027, following sustained lobbying from major technology companies. The decision, expected to be confirmed in a formal Commission statement, marks the first significant adjustment to the legislative timetable since the Act was adopted in 2024.

The delay affects the implementation of compliance requirements for AI systems classified as ‘high risk,’ including those used in areas such as healthcare, finance, transport, and critical infrastructure. Under the revised plan, companies will have an additional two years to align internal systems and certification frameworks before enforcement begins.

Industry groups including DigitalEurope and leading technology providers are understood to have urged the Commission to extend the timeline, citing the complexity of compliance architecture and the absence of fully operational national AI supervisory authorities. Critics, however, warn that the delay risks creating a false sense of regulatory relief.

“More concerning than the timeline extension is the Commission’s shift from classification by national authorities to self-assessment for high-risk AI systems. This transfers legal accountability to organizations without reducing compliance requirements, leaving them exposed to significant fines. Self-assessment can look like permission to skip governance; organizations must not make that mistake.

“Waiting for a catastrophe to happen is the wrong approach and only results in poorly designed rules. Delaying implementation creates a further problem: complacency can set in, leading to a crisis-driven scramble when deadlines finally arrive. The same thing happened with GDPR. Businesses must understand that the delay isn’t a reason to postpone; it’s a reason to start preparing well ahead of the December 2027 deadline.

“Delay aside, the fundamental issue hasn’t changed: organizations need to build AI governance capabilities that enable innovation and competitive advantage. Failure to do so will only result in damaged reputations, lost contracts, and a diminished competitive edge.”

The postponement follows growing concern from regulators about the readiness of national compliance frameworks. Several member states have yet to establish the supervisory authorities required to enforce the AI Act’s obligations. Others have warned that overlapping certification systems could lead to fragmentation across the bloc.

The delay will not affect transparency rules for general-purpose AI models, which remain scheduled for mid-2026 implementation. However, the new timeline raises questions over how effectively the EU can coordinate AI governance while maintaining its position as a global standard-setter.

Under the revised schedule, high-risk AI obligations are now expected to take effect from December 2027, with preparatory audits beginning the previous year. The Commission is expected to publish an updated roadmap in the coming weeks.
