GPAI Code of Practice: Clarifying Legal Uncertainties in the EU AI Act

Date: 23 September 2025, 11:00–12:30 CEST
Host: European AI Office
Format: Live-streamed public webinar


Background

As the EU Artificial Intelligence Act (AI Act) approaches full enforcement, General-Purpose AI (GPAI) models have emerged as a central focus of regulatory implementation.
The AI Act introduces specific obligations for GPAI providers — covering transparency, copyright compliance, and systemic risk mitigation.

To support this framework, the European AI Office has developed and approved the Code of Practice for GPAI (GPAI CoP), now signed by more than 25 companies. The Code was created through an open process involving nearly 1,000 stakeholders — including providers, academics, civil society, rights holders, and international observers — and represents one of the first global legal instruments for Responsible AI.

This webinar, part of the AI Pact Series, provided a timely opportunity for stakeholders to better understand the new compliance landscape now that the GPAI rules have entered into force as of August 2025.


Speakers

Moderator: Marion Ho-Dac, Professor of Private Law, Artois University (France)
Speakers:

  • Yordanka Ivanova, Head of Sector, Legal Oversight of AI Act Implementation, European AI Office
  • Sabrina Küspert, Policy Officer, General-Purpose AI Coordination, European AI Office
  • Lauro Langosco, Technology Expert, European AI Office

Key Themes and Takeaways

1. Regulatory Clarity: Definitions and Obligations

Sabrina Küspert introduced the structure of the GPAI Code of Practice, highlighting its three main pillars: Transparency, Copyright, and Safety & Security.

Yordanka Ivanova outlined transparency requirements under the AI Act and CoP, emphasizing that GPAI providers must make standardized model documentation available to the AI Office and relevant national authorities, as well as to downstream providers so that the latter can meet their own obligations.

She also detailed copyright-related provisions, which require providers to respect “opt-out” signals (e.g. robots.txt), avoid circumvention of paywalls, and maintain transparency around data mining practices.
For frontier models with systemic risk, Chapter 3 of the CoP specifies additional safety and security obligations, including measures to prevent misuse, unauthorized access to model weights, or autonomous replication beyond human control.
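
As an illustration of the machine-readable "opt-out" signals mentioned above, the short Python sketch below checks a site's robots.txt before fetching a page for text-and-data mining. It uses only the standard library; the crawler name "ExampleGPAIBot" and the example URL are hypothetical, and neither the AI Act nor the CoP prescribes any particular implementation.

```python
# Minimal sketch: honouring a robots.txt opt-out before text-and-data mining.
# "ExampleGPAIBot" and the example URL are hypothetical placeholders.
from urllib.parse import urlsplit
from urllib.robotparser import RobotFileParser

CRAWLER_USER_AGENT = "ExampleGPAIBot"  # hypothetical crawler identifier


def may_mine(page_url: str) -> bool:
    """Return True only if the site's robots.txt allows this crawler to fetch the page."""
    parts = urlsplit(page_url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(CRAWLER_USER_AGENT, page_url)


if __name__ == "__main__":
    url = "https://example.com/articles/sample-page"
    print(f"Allowed to mine {url}: {may_mine(url)}")
```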

2. GPAI Guidelines: Making Sense of the New Framework

The AI Office’s guidelines define a GPAI model as one “trained with large amounts of data, demonstrating significant generality and capable of performing a wide range of tasks” (AI Act, Article 3(63)).
However, several areas of legal uncertainty remain:

  • The definition of a GPAI model is not fixed — the AI Office acts as an arbiter, and providers may submit cases to be considered.
  • The definition of a “systemic” GPAI model (high-risk or frontier models) is likewise open to interpretation.
  • The FLOP threshold (used as a computational proxy for model size and capability) serves as guidance rather than as an absolute criterion; a rough worked example follows this list.
  • The AI Office retains discretion to classify models as systemic based on other factors, such as the number of users.
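
To make the FLOP criterion more concrete, the sketch below compares a rough training-compute estimate against the 10^25 FLOP figure that the AI Act uses as the presumption threshold for systemic risk (Article 51). The "6 × parameters × training tokens" approximation is a common community heuristic rather than anything the Act or the guidelines prescribe, and both model configurations are hypothetical.

```python
# Back-of-the-envelope check against the AI Act's training-compute presumption
# threshold for systemic risk (10**25 FLOP). The 6 * N * D estimate is a common
# heuristic, not an official method, and both model configurations are hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25


def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per training token."""
    return 6 * n_parameters * n_training_tokens


for name, params, tokens in [
    ("mid-size model (hypothetical)", 70e9, 2e12),     # 70B parameters, 2T tokens
    ("frontier model (hypothetical)", 1.5e12, 15e12),  # 1.5T parameters, 15T tokens
]:
    flop = estimate_training_flop(params, tokens)
    print(f"{name}: ~{flop:.1e} FLOP -> presumed systemic risk: "
          f"{flop > SYSTEMIC_RISK_THRESHOLD_FLOP}")
```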

Open-source exemptions were also clarified: providers may be exempt if their models and weights are freely available and non-commercial, but monetized open-source models lose that status. Transparency obligations — including publication of parameters and architecture — remain mandatory.

3. The GPAI Code of Practice: Compliance and Benefits

According to Sabrina Küspert, the Code of Practice offers a voluntary yet transparent path to demonstrate compliance with the AI Act.

Signatories benefit from:

  • Reduced administrative burdens;
  • Enhanced trust and collaboration with the AI Office;
  • Potential mitigating factors in enforcement decisions or fines.

However, the CoP does not establish a legal presumption of conformity. Non-signatories must independently demonstrate how they meet AI Act obligations and may face more intensive supervision.

4. Training Data Transparency Template

In July 2025, the AI Office published a template for public summaries of training data, aimed at balancing transparency with trade secret protection.

The template requires disclosure of:

  • General information about data sources and processing;
  • Full documentation and linking of public datasets;
  • High-level summaries for private or copyrighted data;
  • For web-scraped data, information on crawler names, collection periods, and the top 10% of domains scraped (5% for SMEs); see the sketch after this list.
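
As a purely illustrative sketch of that domain-reporting step, the snippet below groups scraped URLs by domain and keeps the top share required for the summary. Ranking domains by the number of scraped pages is an assumption made here for simplicity; the official template defines the exact measure, and the example URLs are placeholders.

```python
# Illustrative sketch: selecting the most-scraped domains for a training-data summary.
# Ranking domains by page count is an assumption for this example; the official
# template defines the actual measure. The example URLs are placeholders.
from collections import Counter
from urllib.parse import urlsplit


def top_domains(scraped_urls: list[str], share: float = 0.10) -> list[tuple[str, int]]:
    """Return roughly the top `share` fraction of domains, ranked by pages scraped."""
    counts = Counter(urlsplit(url).netloc for url in scraped_urls)
    n_top = max(1, round(len(counts) * share))
    return counts.most_common(n_top)


if __name__ == "__main__":
    crawl_log = [
        "https://example.org/a", "https://example.org/b",
        "https://news.example.com/1", "https://blog.example.net/x",
    ]
    print(top_domains(crawl_log, share=0.10))  # general rule: top 10% of domains
    print(top_domains(crawl_log, share=0.05))  # SMEs: top 5% of domains
```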

Uncertainty nevertheless remains about the extent to which trade-secret-sensitive information must be disclosed and how precise the web-crawling disclosures need to be.

5. Timeline and Next Steps

  • August 2025: GPAI obligations enter into force; CoP signatories begin implementing transparency and compliance measures.
  • August 2026: AI Office’s formal enforcement powers take effect.
  • In the interim, the Office will take a collaborative approach with providers, focusing on guidance and voluntary alignment.
  • The next initiative will focus on a Code of Practice for downstream AI systems, particularly on transparency obligations.

Legal Uncertainties Highlighted During the Session

While the webinar provided significant clarification, several areas of legal ambiguity remain:

  1. Definitions of GPAI and Systemic GPAI models remain open to interpretation by the AI Office.
  2. The FLOP threshold is not an exclusive determinant; user base and functionality may also be considered.
  3. Transparency vs. Trade Secrets — unclear boundaries in public disclosure obligations.
  4. Depth of Documentation — the AI Office may request more detailed reports for advanced or systemic models.
  5. The growing need for external transparency and AI governance mechanisms to facilitate constructive engagement with regulators.

Conclusion

The GPAI Code of Practice marks a milestone in Europe’s journey toward trustworthy and accountable AI governance.
While certain legal definitions and procedural details remain unsettled, the European AI Office has demonstrated a strong commitment to collaboration, transparency, and legal clarity.

As the AI Act’s GPAI provisions come into effect, 2025–2026 will be a critical period for shaping how AI regulation is implemented in practice — and how Europe’s approach to Responsible AI sets a global precedent.
