The AI Act became law on 1 August 2024, with a deliberately long, phased implementation timeline. This decision by the Commission is in marked contrast to other technology-focussed EU regulations and directives.
The EU General Data Protection Regulation (GDPR), for example, came into force practically in its entirety on 25 May 2018. Consultants, lawyers and technology advisers had been establishing systems and services to meet the requirements of the GDPR for over a year beforehand. While there was considerable concern over how the new regime would be regulated and enforced, for many organisations there was a "toolkit" of policies and procedures which could be applied to their existing business practices so that they could continue to provide the goods and services they had provided prior to May 2018. Likewise, regulations such as the Digital Services Act (DSA) (which effectively targets every organisation with an online marketplace) and the Digital Markets Act (DMA) have been rolled out over a shorter time horizon.
Why is there a different approach with the AI Act?
The AI Act regulates artificial intelligence, which is defined broadly. It is also risk-based rather than targeting a specific technology type: the question is what impact an AI system has on the fundamental rights of the parties interacting with it. The variety of products and services caught is broad and the implications are far-reaching. This is what makes the EU AI Act different. The GDPR is aimed at the processing of personal data, while the DSA and DMA are aimed at specific types of online commercial activity.
It is primarily this risk-based approach, and the breadth of its implications, that explain the incremental roll-out, with obligations coming into effect over a two-year period from August 2024 to August 2026, and some implementation extending into 2027.
This offers stakeholders time to adapt their operations and technical infrastructure and to strengthen their internal governance frameworks. It also, whether by accident or design, provides an opportunity for large technology companies, particularly those across the Atlantic, to engage on the nature of the regulation. There is some truth to the adage that the US innovates and the EU regulates. In this case, the best approach is likely to combine innovation and regulation; finding the balance may be the difficulty.
Recent Developments
As the EU AI Act moves toward full implementation, the first half of 2025 has seen significant developments.
On 2 February 2025 (the first significant enforcement milestone), prohibitions on certain AI systems and requirements on AI literacy for staff entered into force. The Act bans AI systems that engage in social scoring, manipulative or harmful practices, or unauthorised surveillance.
In May 2025, the Commission launched a public consultation to gather input on implementing the AI Act's provisions for high-risk AI systems, particularly in sectors such as law enforcement and healthcare. Stakeholders, including providers and developers of high-risk AI systems, businesses, governments and citizens, were invited to share their views before 18 July 2025.
On 9 July 2025, the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs published a study examining the challenges to the core pillars of EU copyright law that AI has created. The study finds that:
- Current EU Text and Data Mining (TDM) exceptions are incompatible with the scope and nature of AI training.
- Fully machine generated content should not be copyright protected.
- A statutory remuneration scheme is suggested to ensure creators are compensated when their works are used to train AI models.
- A dedicated AI working group should be established to address cross-committee coordination gaps and ensure political follow-up.
On 10 July 2025, the European Commission published the final version of its General-Purpose AI (GPAI) Code of Practice, a voluntary framework designed to help GPAI providers comply with the AI Act, particularly the obligations relating to transparency, copyright, and safety and security. Guidelines to implement the Code of Practice were published on 18 July. The delay of some months in finalising the GPAI Code of Practice likely stemmed from disagreement among stakeholders over the Code's scope and enforceability: some advocated greater flexibility in the name of encouraging innovation, while others argued for stronger guardrails and accountability.
It is also important to note that the AI Act is intended to sit within a wider architecture of AI regulation and law, and there have been modifications to the proposed legislation in that regard, most notably the withdrawal of the AI Liability Directive (AILD). The change is contentious, with experts assessing whether it reflects a deliberate shift towards a simplified approach to AI regulation. The AILD was designed to address liability for damage caused by AI systems, regardless of whether the fault lay with the manufacturer, provider or user of the system. Technology sector lobbyists see the withdrawal as desirable, while some consumer rights groups take the opposite view.
What next?
It is not easy to predict exactly what will happen next; however, there is very much a plan.
The next significant milestone, in August 2025, introduced a series of measures requiring AI providers to publicly disclose details about risks, model-training techniques and datasets. Organisations will also be required to ensure that AI outputs are understandable, predictable and governed by clearly defined policies. Further developments expected in the second half of the year include the outcome of the high-risk AI consultation, the designation of national authorities and the empowerment of the Commission to impose significant fines on providers of AI that poses a systemic risk.
Given the delayed publication of technical standards and the GPAI Code, and pressure from lobbyists and transnational trading partners, it is possible that the Commission may postpone or amend further regulatory activity. While the path to full implementation remains unclear, no official decision has been made to delay the AI Act's overall implementation, and organisations should continue with their compliance preparations.
For more information, please contact David McMunn or your usual contact in Beauchamps LLP.