The European Union is set to postpone enforcement of its flagship Artificial Intelligence Act’s most stringent provisions — those covering ‘high-risk’ systems — until late 2027, following sustained lobbying from major technology companies. The decision, expected to be confirmed in a formal Commission statement, marks the first significant adjustment to the legislative timetable since the Act was adopted in 2024.
The delay affects the implementation of compliance requirements for AI systems classified as ‘high risk,’ including those used in areas such as healthcare, finance, transport, and critical infrastructure. Under the revised plan, companies will have an additional two years to align internal systems and certification frameworks before enforcement begins.
Industry groups including DigitalEurope and leading technology providers are understood to have urged the Commission to extend the timeline, citing the complexity of compliance architecture and the absence of fully operational national AI supervisory authorities. Critics, however, warn that the delay risks creating a false sense of regulatory relief.
Nikolas Kairinos, CEO of RAIDS AI, said: “The decision to delay may reduce immediate regulatory pressures on the market, but it obscures the fact that organisations can’t afford to wait to safeguard their systems. 78% of enterprise AI procurement already requires third-party safety certification today – so a delay simply means that businesses that hesitate to bring in safeguarding measures will be unable to win contracts, obtain insurance coverage or satisfy due diligence in the short term, let alone by 2027.
“More concerning than the timeline extension is the Commission’s shift from national authority classification to self-assessment for high-risk AI systems. This transfers legal accountability to organisations without reducing compliance requirements, leaving them open to significant fines. Self-assessment can appear as permission to skip governance – organisations mustn’t make this mistake.
“Waiting for a catastrophe to happen is the wrong approach and only results in poorly designed rules. Delaying implementation creates a further problem still: complacency can set in, which then leads to a crisis-driven scramble when deadlines finally arrive. The same thing happened with GDPR, so businesses must understand that the delay isn’t a reason to postpone; it’s a reason to start preparing well ahead of the December 2027 deadline.
“Delay aside, the fundamental issue hasn’t changed: organisations need to build AI governance capabilities that enable innovation and competitive advantage. Failure to do so will only result in damaged reputations, contract losses and a lack of competitive edge.”
The postponement follows growing concern from regulators about the readiness of national compliance frameworks. Several member states have yet to establish the supervisory authorities required to enforce the AI Act’s obligations. Others have warned that overlapping certification systems could lead to fragmentation across the bloc.
The delay will not affect transparency rules for general-purpose AI models, which remain scheduled for mid-2026 implementation. However, the new timeline raises questions over how effectively the EU can coordinate AI governance while maintaining its position as a global standard-setter.
Under the revised schedule, high-risk AI obligations are now expected to take effect from December 2027, with preparatory audits beginning the previous year. The Commission is expected to publish an updated roadmap in the coming weeks.