Europe’s first-of-its-kind Artificial Intelligence Act is entering a critical new stage, as providers of general-purpose AI models, the technology behind chatbots, image generators and a growing range of enterprise software, become subject to sweeping new compliance obligations. From 2 August 2025, all such providers serving the European market must adhere to strict documentation, transparency and risk-management standards, with oversight shared between national authorities and the newly formed EU AI Office.
The move represents the most ambitious regulatory step yet in the global race to govern artificial intelligence. Under the Act’s provisions, firms must now maintain technical documentation detailing model architecture, training data provenance and safety evaluations. Providers are also required to publish a summary of the content used to train their models and put in place a policy to comply with EU copyright law, as well as supply a dedicated “transparency package” of documentation to downstream partners and users. Non-EU firms must appoint an EU-based authorised representative, ensuring the law’s reach extends worldwide.
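In practice, these duties reduce to a handful of artefacts that a provider can track like any other compliance deliverable. The sketch below, in Python, is illustrative only: the field names simply mirror the obligations listed above and are not an official schema from the AI Act or the AI Office.

    # Hypothetical checklist of the provider duties described above;
    # field names are illustrative, not an official AI Act schema.
    from dataclasses import dataclass

    @dataclass
    class GpaiComplianceFile:
        """One provider's compliance artefacts, tracked as a simple record."""
        model_architecture: str           # technical documentation: system design
        training_data_provenance: str     # where the training data came from
        safety_evaluations: list[str]     # evaluation reports carried out
        training_data_summary: str        # public summary of training content
        copyright_policy: str             # EU copyright compliance measures
        eu_representative: str | None = None  # mandatory for non-EU providers

        def missing_items(self) -> list[str]:
            """Name any empty fields, as a quick gap check before filing."""
            return [name for name, value in vars(self).items() if not value]

A provider could fill in the record as documents are finalised and call missing_items() to see what remains outstanding.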
Failure to comply brings significant risk. For the most serious violations, the AI Act provides for fines of up to €35 million or 7% of global annual turnover, whichever is higher, while breaches of the general-purpose AI obligations themselves carry a ceiling of €15 million or 3% of worldwide turnover. The regime is modelled on, and at the top end exceeds, the penalties of the EU’s General Data Protection Regulation (GDPR), which caps fines at €20 million or 4% of turnover.
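The “whichever is higher” rule is simply a maximum of two quantities, which means the percentage, not the fixed cap, binds for any sizeable firm. A back-of-the-envelope sketch in Python, using a hypothetical turnover figure:

    # Fine ceilings under the two tiers described above; the turnover
    # figure is hypothetical, the caps and percentages are from the text.
    def fine_ceiling_eur(turnover_eur: float, cap_eur: float, share: float) -> float:
        """Return the applicable upper bound: the larger of a fixed cap
        and a percentage share of worldwide annual turnover."""
        return max(cap_eur, share * turnover_eur)

    turnover = 2_000_000_000  # hypothetical firm with EUR 2bn turnover

    # Top tier: EUR 35m or 7% of turnover, whichever is higher.
    print(fine_ceiling_eur(turnover, 35_000_000, 0.07))  # 140000000.0

    # General-purpose AI tier: EUR 15m or 3%, whichever is higher.
    print(fine_ceiling_eur(turnover, 15_000_000, 0.03))  # 60000000.0

For the fixed cap to bind at all in the top tier, turnover would have to fall below €500 million (35m divided by 0.07), which is why the percentage is the figure boards watch.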
Only a handful of EU member states have met the deadline to appoint national enforcement authorities, raising questions over the consistency and speed of regulatory oversight in the early months. Even so, the European Commission has made clear that delays will not alter the legal timetable. “Europe is setting a global standard for trustworthy AI,” said Margrethe Vestager, the Commission’s Executive Vice-President for a Europe Fit for the Digital Age, in a statement. “These rules offer clarity for developers and protect citizens’ rights, while supporting innovation.”
Some of Europe’s largest corporates have expressed concern over the timeline and complexity of implementation. Executives from companies including Airbus, Philips, and BNP Paribas have called for a delay, citing legal uncertainty and the risk of stifling innovation. In response, the Commission last month published a voluntary Code of Practice for general-purpose AI, with Google among the first to sign — though Meta declined, highlighting continued industry division.
Analysts see parallels with the introduction of GDPR in 2018: a chaotic early phase of compliance, followed by gradual market adaptation. Smaller providers, in particular, face steep documentation and resourcing costs. Others argue, however, that the Act’s transparency requirements will ultimately build consumer trust, support responsible corporate behaviour and boost Europe’s influence in global tech regulation.
The new compliance phase also coincides with a strategic push by Brussels to accelerate domestic AI capability. Through its InvestAI initiative, the EU aims to mobilise up to €200 billion for AI infrastructure, including plans for “gigafactories”, large-scale computing facilities for training the most advanced models. This twin-track approach of regulation plus investment is widely seen as central to Europe’s long-term competitiveness.
With the new obligations now in force, attention turns to the operational reality. Many providers are scrambling to finalise technical documentation, review training data policies and align risk controls. Multinationals must quickly adapt compliance across multiple jurisdictions, while smaller players are urged to engage early with their national regulators.
For the AI sector and its business users, the next phase will test both the robustness of the new rules and the capacity of member states to enforce them fairly. But there is little doubt: the era of regulated AI has begun, with Europe setting the pace.