Blogpost

EU AI Regulation: GPAI rules since 2 August 2025

New EU AI rules apply from August, especially for ‘general-purpose’ AI models (GPAI). Greater transparency, stricter documentation, clear risk assessments: the EU AI Act is gaining momentum. Read on to find out what companies need to know about GPAI obligations, systemic risk, the voluntary Code of Practice, and the next steps.

2 minutes reading time


Since August 2, 2025, specific obligations for “general-purpose” AI models (GPAI) have applied in the EU; in particular, more transparency, more security, and more risk management are required.

The Commission has published guidelines and a voluntary Code of Practice (CoP). Although this CoP is not legally binding, it is recognized by the Commission and the AI Board as a suitable means of verification and offers pragmatic implementation aids.

The guidelines specify what counts as a GPAI, how models are distinguished from applications, and when a “systemic risk” exists. Models whose training compute exceeds 10^25 FLOPs are rebuttably presumed to pose systemic risk; models below this threshold can also be designated. If a model reaches, or is expected to reach, the threshold, the provider must inform the AI Office without delay, and within two weeks at the latest. However, not every GPAI is subject to notification: the two-week notification duty applies only to GPAIs with systemic risk. Mandatory documentation, a copyright policy, and a public summary of the training data in accordance with the EU template apply to all GPAIs.
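To make the threshold concrete, the following minimal sketch estimates training compute with the widely used 6 × parameters × tokens heuristic and checks it against the 10^25 FLOPs presumption. The heuristic, the model size, and the token count are illustrative assumptions, not figures from the Act or from any real model.

```python
# Rough training-compute estimate using the common heuristic of
# ~6 FLOPs per parameter per training token (an assumption, not
# a method prescribed by the AI Act).

SYSTEMIC_RISK_THRESHOLD = 10**25  # FLOPs; rebuttable presumption of systemic risk


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens


def is_presumed_systemic(params: float, tokens: float) -> bool:
    """True if estimated compute reaches the 10^25 FLOPs presumption threshold."""
    return training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD


# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = training_flops(70e9, 15e12)  # 6.3e24 FLOPs -> below the threshold
print(f"{flops:.2e}", is_presumed_systemic(70e9, 15e12))
```

Providers near the threshold would of course need a far more careful accounting of actual compute; this sketch only illustrates the order-of-magnitude check.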

The CoP, finalized on July 10, 2025, bundles the “state of the art” into checklists and processes; signatories are closely monitored by the AI Office.


EU AI Act: What applies when?

Core obligations include risk-based assessment, traceability, training, and user information; high-risk systems additionally require a quality management system (QMS, Art. 17), conformity assessment, an EU declaration of conformity, and CE marking.

Risk classes

Depending on its use, a GPAI-based system can fall into any class; model-level GPAI obligations apply in addition.

For implementation, the Commission recommends clear system definitions (model vs. application), risk-based assessments, compliance with technical requirements including CE marking and registration in the EU database for high-risk systems, and operation with ongoing monitoring. In 2025, the focus is on AI competence (Art. 4), data protection, and copyright.

In Germany, the Federal Network Agency acts as a single point of contact with an AI service desk.

Central contact points, SME assistance, and guidelines on classification, training, and documentation are planned. The relevant articles are Art. 4 (AI competence), Art. 17 (QMS for high-risk systems), and Art. 50 (transparency obligations, including labeling of synthetic content).

Takeaways

For companies, we recommend:

- a consolidated inventory of all AI systems,
- classification of each system, including checks for GPAI status and systemic risk,
- an Art. 17 QMS for high-risk systems,
- an internal AI compliance manual with roles, approvals, and incident processes, and
- consistent technical documentation, including a public training-data summary for GPAI.

CE conformity and EU database registration must be planned early for high-risk systems; training in accordance with Art. 4 must be rolled out on a role-specific basis and updated regularly.
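The inventory-and-classification step above can be sketched as a simple data model that maps each system's classification to the obligations named in this post. The field names, enum values, and obligation strings are illustrative assumptions for an in-house register, not terms defined by the Act.

```python
# Minimal sketch of an AI-system inventory record; the classification
# scheme and obligation labels are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations, Art. 50
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    is_gpai_model: bool           # model-level GPAI obligations apply
    systemic_risk: bool = False   # GPAI at or above the 10^25 FLOPs presumption
    risk_class: RiskClass = RiskClass.MINIMAL
    obligations: list = field(default_factory=list)


def derive_obligations(rec: AISystemRecord) -> list:
    """Map a record's classification to the obligations discussed above."""
    obs = []
    if rec.is_gpai_model:
        obs += ["technical documentation", "copyright policy",
                "public training-data summary"]
        if rec.systemic_risk:
            obs += ["notify AI Office", "systemic-risk assessment"]
    if rec.risk_class is RiskClass.HIGH:
        obs += ["QMS (Art. 17)", "conformity assessment",
                "EU declaration of conformity", "CE marking"]
    if rec.risk_class is RiskClass.LIMITED:
        obs += ["transparency / labeling (Art. 50)"]
    return obs


# Example: a high-risk application built on a non-systemic GPAI model
rec = AISystemRecord("resume-screener", is_gpai_model=False,
                     risk_class=RiskClass.HIGH)
print(derive_obligations(rec))
```

A real compliance register would carry far more detail (owners, data sources, deployment status), but even this skeleton makes the inventory-to-obligation mapping auditable.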


Hasan Neu

combines many years of experience in technology consulting with certified process expertise. He is accustomed to navigating complex structures, quickly analysing and structuring extensive requirements, and working with his project team to develop results-oriented solutions. He combines in-depth technical knowledge with agile management methods and has broad technological expertise as well as strong methodological knowledge in the areas of IT structure and process organisation. His focus is on all topics related to cloud transition, from strategy development to successful implementation.
