Understanding General Purpose AI


and Leila Debiasi

With AI literacy obligations under the Artificial Intelligence (AI) Act (hereinafter the EU AI Act) entering into application at the start of February, now is the perfect time to take a broader look at other key elements of the Act’s enforcement. One of the ongoing developments is the drafting of the Codes of Practice (CoP) for General-Purpose AI (GPAI). Designed to bridge the gap between the EU AI Act and practical implementation, these codes will serve as a key compliance tool, particularly for general-purpose AI models.

But what exactly do they entail, and how will they shape AI governance in Europe? In this blog, we explore the purpose and scope of the GPAI CoP, the process behind their development, and what public administrations need to do to align with them.

Article 3(63) of the EU AI Act defines GPAI as a model that ‘… displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications …’. These can potentially be designated as ‘general purpose AI models with systemic risk’ under Article 51 EU AI Act if they present high-impact capabilities, posing risks such as ‘negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content’.

By contrast, standard AI models are often developed for narrowly defined applications, such as applicant filtering in recruitment, information retrieval for finding public services, or fraud detection in banking. The EU AI Act identifies some AI systems as high risk based on their potential impact on health, safety, and fundamental rights, or on their intended use, as outlined in Article 6 and further specified in Annex III. These high-risk systems require rigorous oversight to prevent bias, discrimination, or otherwise undesirable outcomes. To further aid in the classification of AI systems, the European Commission recently released its guidelines on the AI system definition to facilitate the application of the provisions of the EU AI Act. However, GPAI falls outside the scope of those guidelines and remains a less discussed AI application, leaving many with questions about its distinction, both legally and functionally, from AI systems, despite high demand from organisations eager to integrate it into their processes.


Codes of Practice – a tool for compliance?

GPAI models, unlike AI systems, are designed to be flexible and can perform a wide range of tasks, remaining adaptable for various applications and users. AI models are components of AI systems: the model performs the sophisticated and complex processing, while the system encompasses the broader hardware and software that enable users to access the model. As such, AI models, including GPAI models, can be integrated into AI systems, including high-risk AI systems.

Consequently, the EU AI Act regulates GPAI models and AI systems differently, requiring additional transparency and monitoring measures where applicable. For example, providers of GPAI models must prepare more detailed documentation on the testing process and on training data sources, disclose information to those who intend to embed the GPAI model into their AI systems, and clearly label AI-generated content. Further distinctions relating specifically to GPAI models have also been made under the EU AI Act – GPAI with and without systemic risk – where GPAI models posing systemic risk trigger even stricter obligations under Article 55, such as the assessment and mitigation of possible systemic risks and reporting obligations in case of serious incidents.

Distinctions between AI systems, GPAI models, and their associated risks – whether high risk, systemic risk, or no risk at all – have raised many questions about which EU regulations apply to different AI applications. The simple answer? GPAI models are regulated based on the extent of systemic risk they pose, as defined in Article 51 of the EU AI Act. Additionally, any GPAI model embedded within an AI system must comply with the corresponding legal obligations for that system. This means that if the AI system falls under the high-risk category, the embedded GPAI model must meet the specific requirements set out for high-risk AI systems, including transparency, robustness, and human oversight provisions.

How can providers of GPAI models demonstrate their compliance with EU obligations?

There are several pathways for providers and deployers of AI systems to demonstrate compliance with the EU AI Act. These include self-certification assessments, where providers of AI systems certify that they meet the Act’s requirements; adherence to EU harmonised standards (anticipated for late 2025); or the use of an external notified body to audit their model. When it comes to GPAI models, however, the EU AI Act offers an additional avenue for demonstrating compliance.

In particular, Article 56 of the EU AI Act mandates that the EU AI Office facilitate the development of the CoP to provide guidelines for AI providers to comply with the obligations set out in Articles 53 and 55. The Article also emphasises the need for a multistakeholder approach to their development, inviting providers and national competent authorities to participate in this co-regulation process, with the support of civil society, industry, and academia.

In recent years, co-regulation initiatives have become a feature of regulation in the areas of technical standardisation, professional rules, and social dialogue. Increasingly, co-regulation has also been employed in other policy areas, including environmental regulation, and in the tech sector. For example, the Digital Services Act sets out a co-regulatory framework for the drafting of codes of conduct by and for service providers. Similarly, Article 40 of the GDPR allows for the drafting of codes of conduct by a diverse array of actors, although these have seen very limited uptake. Multistakeholder regulatory mechanisms in the tech sector are, in short, on the rise. In this regard, the EU AI Act’s emphasis on the CoP and the development of harmonised technical standards reflects the EU’s commitment to balancing innovation with ethical and legal safeguards.

Co-regulation – a good idea?

With the CoP playing a key role in demonstrating compliance, understanding how they are developed is essential. The drafting process began in September 2024, when the AI Office hosted the opening event for the CoP plenary. The plenary comprises four working groups, each tackling a specific topic under the guidance of chairs and vice-chairs appointed by the AI Office. The chairs and vice-chairs gather feedback and contributions from various stakeholders and make sure these are reflected in the subsequent draft of the CoP. The process will produce three drafts (two of which have already been published), with the final version expected in April 2025. Upon the successful completion of the CoP and in accordance with Article 56(6) of the EU AI Act, the European Commission may choose to grant the CoP validity by means of an implementing act. This would mark a significant shift in the nature of the CoP: born as a voluntary guideline, developed into a binding standard enforceable across the Union.

Both the drafting process and the possible shift of the CoP from a voluntary guideline to a binding standard have raised important questions about the involvement of providers, typically those who develop the AI system or GPAI model, in the drafting process. By actively contributing, providers are not merely shaping industry best practices but potentially influencing the very regulatory obligations that will govern them. On the one hand, their expertise and insights bring a practical perspective that can facilitate the implementation of the CoP, ensuring that guidelines are technically sound and feasible. On the other hand, it prompts concerns about the extent to which industry has influence over regulatory standards, providing yet another example of the delicate balance the EU is striving to strike between innovation and regulation.

As hinted above, the involvement of AI providers in the drafting process offers several advantages. For instance, their participation can help curb information asymmetries, ensuring that measures are informed by the latest advancements and practical considerations. This collaborative approach may also lead to higher compliance rates, as there is a greater likelihood of mutual understanding and compromise between regulators, AI providers, and deployers. Industry-inclusive regulation can also foster innovation by allowing companies the flexibility to develop and implement new elements without the constraints of rigid governmental oversight. One effective approach is the use of regulatory sandboxes, which provide a controlled environment for companies to test new AI technologies while ensuring compliance and mitigating potential risks. Some have even argued for less involvement from the European Commission, suggesting that industry is not the problem, but rather the participation of non-technical representatives who significantly slow down the process, thus hampering efforts to provide a timely regulatory response to innovation.

However, critics point out that self-regulation is not immune to conflicts of interest, where economic motivations prevail over ethical considerations. These concerns point to a resulting lack of accountability and transparency that might further undermine public trust in AI systems, which is particularly problematic for the implementation of AI models in the public sector. Highlighting the risk of leniency when adopting co-regulatory mechanisms, Max Schrems remarked during the latest Data Protection Day that ‘having industry self-regulate is like putting kids in a candy shop’, aptly summarising a persisting criticism of an industry-led approach to AI regulation.

The state of play

The first draft, published on 14 November 2024, gave a highly anticipated look into the CoP’s scope and priorities. The second draft has since been released, and the working groups are developing a third, expected to be published towards the end of February. Despite some variations, both published drafts are built around a set of core principles reflecting EU principles and values, which prioritise, above all else, fundamental rights, as also enshrined in the EU AI Act. The CoP also addresses additional aspects, such as international approaches to AI regulation, proportionality to risk and to the size of the GPAI provider, future-proof measures, and support and growth of the AI safety ecosystem. This continuity in the base layer of the CoP provides a degree of certainty despite further elaborations. While the core principles and the structure of the CoP remain largely untouched in the second draft, observers have commented that it leans more towards industry interests, for example by replacing ‘signatories will’ with ‘signatories commit to’ and by emphasising a more flexible approach to implementation. On the other hand, the new draft introduces stricter timeframes, balancing out the flexibility brought by the amendments discussed above.

Looking ahead

While the CoP is primarily designed to guide AI providers, its impact extends well beyond the private sector. Public administrations increasingly rely on AI-powered tools for policymaking, service delivery, and administrative processes. Whether using GPAI models directly or procuring AI-enabled systems from external providers, public authorities will need to understand and navigate the evolving regulatory landscape shaped by the CoP. For instance, procurement officers responsible for acquiring AI-driven solutions must ensure that vendors adhere to applicable regulations, particularly if GPAI models are embedded in high-risk AI systems used in critical public services. Equally, national competent authorities overseeing AI compliance will likely reference the CoP as a useful benchmark for evaluating adherence to the EU AI Act, especially in the absence of harmonised standards.

If the European Commission does grant general validity to the CoP under Article 56(6), they could become de facto benchmarks for AI governance, making prompt adherence to them crucial. As AI continues to shape the future of public administration, understanding and engaging with the CoP will be essential for ensuring that AI technologies are deployed responsibly, ethically, and in the public interest. Public bodies must therefore be proactive in understanding how the CoP affects transparency requirements, risk assessment procedures, and procurement processes. As one key obstacle to compliance is the lack of up-to-date technical knowledge and AI literacy among public officials, the focus should be on providing relevant training. A good starting point is our upcoming event on 12 March, which aims to bring more clarity on the GPAI CoP, their drafting process, and their impact on the public sector, through a conversation with three experts involved in the drafting process.

Want to know more? 

If you are interested in learning more about this topic, keep an eye out for our upcoming EIPA in Conversation With event “From Draft to Implementation: Preparing for the EU GPAI Codes of Practice”, where you will gain an overview of the role of the EU AI Codes of Practice in ensuring compliance with the AI Act.


The views expressed in this blog are those of the authors and not necessarily those of EIPA.
