AI oversight and penalties set to commence in Europe
The European Union's Artificial Intelligence (AI) Act, set to take full effect on 2 August 2027, will bring significant changes to the regulation of AI systems across member states. However, the interim deadline for member states to designate the authorities that will enforce the new legislation, 2 August 2025, has already passed, and many countries are still grappling with the task.
The AI Act mandates the designation of three key types of authorities: Market Surveillance Authorities, which ensure AI products comply with EU rules; Notifying Authorities, which oversee conformity assessment bodies; and National Public Authorities, which enforce fundamental rights obligations related to high-risk AI systems.
One of the countries facing criticism for missing the deadline is Germany. Consumer organizations and regulators have voiced concern about the absence of a national AI oversight authority, warning that the gap hampers both effective regulation and AI innovation in the country. The Hamburg Data Protection Commissioner, Thomas Fuchs, emphasized the need for immediate action to address the issue.
As of early August 2025, most EU member states have missed the AI Act’s deadline to appoint or notify their national competent and public authorities. Only a minority of countries have completed the designation process and submitted notifications to the European Commission, which are currently under evaluation.
The delays and lack of clarity in numerous countries are raising concerns about readiness for effective enforcement as successive provisions of the AI Act become applicable. This includes the rules for providers of general-purpose AI models, such as those underpinning ChatGPT, which become applicable this month.
The challenges faced in this process include:
- delays by many member states in appointing and notifying the required national competent and public authorities;
- a lack of clarity and of formal regulatory structures in numerous countries to oversee compliance and protect fundamental rights tied to AI systems; and
- potential regulatory gaps that risk insufficient enforcement and oversight during the critical early phase of the AI Act's implementation.
Preparatory work is underway in some places, but overall, the EU is currently facing significant challenges in establishing the governance framework needed to enforce the AI Act uniformly and effectively across all member states as mandated.
Other complexities arise from how the AI Act must interact with existing regulations such as the General Data Protection Regulation (GDPR), the Digital Services Act, and the Digital Markets Act. Additionally, the AI Act sets a ceiling, not a floor, for fines: for the most serious breaches, companies may be fined up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
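To make the "whichever is higher" mechanics concrete, the following is a minimal illustrative sketch (the function name is hypothetical, and this is not legal advice) of how the penalty ceiling scales with a company's turnover:

```python
# Illustrative sketch only: the AI Act's top penalty tier caps fines at
# EUR 35 million or 7% of total worldwide annual turnover, whichever is HIGHER.

def max_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the top penalty tier for a given turnover."""
    FIXED_CAP_EUR = 35_000_000   # EUR 35 million fixed cap
    TURNOVER_RATE = 0.07         # 7% of worldwide annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# Example: for a company with EUR 1 billion in turnover, 7% is EUR 70 million,
# which exceeds the EUR 35 million fixed cap, so the higher figure applies.
print(f"EUR {max_fine_ceiling(1_000_000_000):,.0f}")  # EUR 70,000,000
```

The fixed €35 million cap therefore binds only for smaller companies; for any firm with worldwide turnover above €500 million, the 7% figure is the larger of the two.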
Notably, US tech giant Meta has announced it will not sign the Code of Practice on general-purpose AI (GPAI), a set of voluntary commitments for general-purpose AI systems released by the Commission. Companies that sign the Code of Practice gain a straightforward way to demonstrate compliance with the AI Act's GPAI obligations, but signing does not exempt them from the rulebook itself.
With the designation deadline now passed, it is crucial that national authorities are appointed as soon as possible and are competent and properly resourced to oversee the risks posed by AI systems. It also remains to be seen how the multiple bodies at EU and national level will coordinate to ensure a smooth and effective implementation of the AI Act.
- The delays in appointing and notifying the necessary authorities in various EU member states, including Germany, are raising concerns about readiness to enforce the AI Act's provisions, particularly those covering providers of general-purpose AI models such as the ones behind ChatGPT.
- The AI Act stipulates that companies may face significant fines for breaches, with penalties reaching up to €35 million or 7% of their total worldwide annual turnover, whichever is higher, underscoring the importance of robust technology regulation and competent authorities to oversee the risks posed by AI systems.