Beyond the Buzzwords: A Practical Guide to AI Procurement with Model Clauses and GDPR

Blog

Introduction: Responsible AI Procurement, by Emma O’Connell

Artificial intelligence (AI) procurement presents distinct challenges, as highlighted in our previous blog, largely due to the lack of well-defined regulations, limited experience, and evolving technical standards. The recent emergence of the EU AI Act and the related model clauses for AI procurement offer a glimmer of hope. These resources provide a much-needed framework to structure decision-making, highlighting key considerations for procurement officers. However, they are still in their nascent stages, and significant responsibility remains with the procurement officers to interpret and operationalise these requirements into concrete technical specifications, award criteria, and contractual clauses. Such clauses, covering various aspects of AI procurement from data governance and human oversight to accuracy, robustness, cybersecurity, and transparency, aim to set the standards for trust, accountability, and respect for individual rights – all essential qualities as we adopt AI for the public good.

Albert Sanchez-Graells’ thoughtful guidance was instrumental in shaping this perspective, inspiring me to explore further how these theoretical frameworks can be practically applied, equipping procurement professionals to acquire AI systems responsibly while navigating both legal and ethical obligations.

While these model clauses offer valuable guidance, operationalising them raises several questions. For instance, how do we translate these clauses into tangible, measurable requirements that can be effectively evaluated during the procurement process? How do we ensure that the selected AI system demonstrably complies with these requirements throughout the AI life cycle? The clauses mention the importance of technical documentation, instructions for use, and logging capabilities to ensure traceability and transparency, but it would be interesting to explore how these mechanisms function in practice. These practical considerations are crucial to bridge the gap between theoretical frameworks and real-world implementation – how can public buyers ensure that the AI systems they procure not only effectively meet their intended purpose, but are legally compliant and respectful of fundamental rights, particularly regarding data protection?
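To give a flavour of how the logging capabilities mentioned above might work in practice, the sketch below shows one way a deployer could keep an auditable trail of AI-assisted decisions. It is a minimal Python sketch, not a prescribed mechanism; the system name, field names, and log format are all hypothetical.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionLogEntry:
    """One traceability record for a single AI-assisted decision."""
    timestamp: float       # when the decision was made
    model_version: str     # which model/version produced the output
    input_hash: str        # fingerprint of the input, not the raw data
    output_summary: str    # short, human-readable outcome
    human_reviewer: str    # who exercised oversight, if anyone

def log_decision(entry: DecisionLogEntry, path: str) -> None:
    """Append the record as one JSON line, building an auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(entry)) + "\n")

# Hash the input rather than storing it, to avoid retaining personal data in the log
payload = {"application_id": "A-001"}  # hypothetical input
digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

entry = DecisionLogEntry(
    timestamp=time.time(),
    model_version="benefit-triage-v1.2",  # hypothetical system name
    input_hash=f"sha256:{digest}",
    output_summary="application routed to manual review",
    human_reviewer="case_officer_17",
)
log_decision(entry, "ai_decisions.log")
```

Each line is self-describing JSON, so an auditor can later reconstruct which model version produced a decision and whether a human was in the loop, without the log itself becoming a store of personal data.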

Model Clauses: Building Blocks for Responsible Adoption of AI for the Public Sector

First, it is important to understand why model clauses are beneficial to procurement professionals. Think of these clauses as building blocks – they give public buyers a practical starting point for integrating trustworthy AI principles into our procurement processes. These clauses, developed by experts and regulatory bodies, help us translate those big, sometimes abstract, ethical ideas into concrete, actionable requirements that we can include in our contracts.

They are helpful in guiding us to think about key factors that we might miss, especially given how complex AI can be. Here are some of the core areas that these model clauses tend to address:

  • Risk management: It is all about taking a proactive approach from the start. Model clauses emphasise the importance of a risk management system that spans the entire life of the AI system. This means we need to identify potential risks to people’s health, safety, and fundamental rights, considering both how the AI is supposed to be used and how it could be misused, even accidentally. Then, we need to figure out how to evaluate and minimise those risks.
  • Data and data governance: We all know that data is the foundation of any AI system. That is why model clauses put a lot of emphasis on data quality, governance, and making sure the data is not biased. Public buyers must use this to their advantage and really scrutinise the datasets used to train, test, and validate the AI systems they are considering. We have to ask questions such as: Where did these data come from? Do they represent the real world in a fair and balanced way? Are they complete? This careful attention to data is essential to make sure our AI systems do not lead to unfair or discriminatory results.
  • Technical documentation and instructions for use: To make sure we understand how the AI system works, model clauses call for clear and comprehensive documentation and instructions. This is vital because it helps procurement professionals assess whether the AI system meets the necessary requirements, and it allows us to use it properly and responsibly. Basically, these documents help demystify the 'black box' of AI.
  • Transparency and human oversight: We cannot simply let AI systems run on autopilot, especially in the public sector. Model clauses push for transparency by requiring us to understand how the AI system makes its decisions, and that we can explain those decisions to others. They also stress the need for human oversight. This means we need to have people monitoring the AI system’s performance, spotting any problems, and having the ability to step in when necessary. This is all about making sure we stay in control and that the AI system serves the public good.
  • Conformity assessments and CE markings: These are essential tools for ensuring that AI systems comply with established safety and performance standards. By incorporating these into procurement processes, public buyers can have greater confidence in the reliability and legality of the AI systems they adopt.
  • Harmonised technical standards: Providing a presumption of conformity, these standards ensure consistency and quality across AI systems procured in different contexts. They are particularly helpful in reducing complexity and uncertainty, allowing public buyers to focus on innovation while adhering to regulatory frameworks.
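The data-governance questions above – where did the data come from, and do they represent the real world in a balanced way – can be partly operationalised as automated checks on a candidate dataset. Below is a minimal Python sketch of one such check; the attribute (`region`) and the reference shares (say, census figures) are hypothetical.

```python
from collections import Counter

def representation_gaps(records: list[dict], attribute: str,
                        reference: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Return groups whose share in the dataset deviates from their
    reference-population share by more than `tolerance`."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical training sample vs. census shares
sample = [{"region": "urban"}] * 80 + [{"region": "rural"}] * 20
census = {"urban": 0.6, "rural": 0.4}
print(representation_gaps(sample, "region", census))
# urban is over-represented by 0.2, rural under-represented by 0.2
```

A report of this kind does not settle whether a dataset is fit for purpose, but it gives the buyer a concrete question to put to the provider for each flagged group.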

These requirements, when we weave them into our own contracts, give us a solid foundation for procuring AI in a responsible way. But it is important to remember that these model clauses are not a one-size-fits-all solution. We need to carefully interpret and adapt them to fit our specific needs and contexts. Additionally, we cannot just check the box and forget about it. We need to stay vigilant and ensure that the AI systems we procure are truly being used ethically and responsibly throughout their life cycle.

A crucial aspect of this practical implementation is data protection. By now, it is clear that the EU AI Act and related procurement clauses are essential, but they are only part of the equation. Public buyers need to be keenly aware that these resources do not cover all legal obligations, particularly those under the General Data Protection Regulation (GDPR).

This distinction highlights the dual responsibility public buyers have: while these clauses offer valuable 'building blocks' for addressing AI-specific considerations, fulfilling GDPR requirements remains a separate duty when procuring AI systems.

  • The GDPR mandates principles such as data minimisation, purpose limitation, and transparency, all of which must be integrated into AI procurement processes.
  • Public buyers must ensure that AI systems are designed and implemented in a way that respects these principles in every phase of their deployment.

The essence of the message is clear: GDPR compliance in AI procurement is not an afterthought. This requires translating principles into actionable practices – but how do we achieve this in real terms? For example, how can we ensure that an AI system’s data processing activities are limited to what is strictly necessary for its intended purpose (data minimisation)? How can public buyers assess whether an AI system provides sufficient transparency to data subjects regarding its processing activities? Addressing these challenges calls for careful consideration and collaboration between procurement professionals, data protection officers, and technical experts. This is where Chrysi Chrysochou’s insights, from the vantage of a legal officer and deputy data protection coordinator in the European Commission, are invaluable. Her perspective deepens our understanding of these complex concerns, shedding light on innovative solutions at the intersection of AI, data protection, and procurement compliance.
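To make the data-minimisation question concrete: one practical control is to strip, at the point where data enters the system, every field that the data protection impact assessment has not declared necessary for the stated purpose. A minimal Python sketch, with hypothetical field names for an eligibility-screening system:

```python
# Fields a (hypothetical) DPIA identified as strictly necessary for the purpose
ALLOWED_FIELDS = {"application_id", "household_income", "household_size"}

def minimise(record: dict) -> dict:
    """Drop every field not declared necessary, enforcing data
    minimisation before the record reaches the AI system."""
    dropped = sorted(set(record) - ALLOWED_FIELDS)
    if dropped:
        print(f"discarded fields: {dropped}")  # field names surfaced for audit; values never kept
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {"application_id": "A-001", "household_income": 21500,
       "household_size": 3, "full_name": "Jane Doe", "religion": "n/a"}
clean = minimise(raw)  # only the three allowed fields remain
```

The transparency question then also becomes easier to answer, because the contract can require the provider to document exactly which fields are processed and the purpose each one serves.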

Future-Proofing Procurement: AI’s Place in the Equation, by Chrysi Chrysochou

Building on Emma’s reflections, it is important to explore the practicalities of integrating Artificial Intelligence (AI) into public procurement processes.

AI encompasses a diverse array of technologies and capabilities with significant potential to transform public administration organisations. By automating repetitive and manual tasks, for example, AI can help reduce administrative costs and enhance the quality of service delivery. However, the inherent complexities of AI often require reliance on external service providers. As a result, public procurement processes must be adapted to the AI landscape to address the unique challenges and specific requirements associated with procuring AI tools, ensuring effective and ethical integration into public services. This adaptation should take the form of ex ante or ex post checks of the conformity of AI systems or AI-based services with the regulatory and/or contractual requirements.

It is apparent that when an AI system or an AI-based service processes personal data, Regulation (EU) 2018/1725 (EUDPR) and Regulation (EU) 2016/679 (GDPR) apply. However, we need to keep in mind that for non-personal data, which fall outside the scope of the EUDPR/GDPR, intellectual property laws (mainly copyright) come into play and need to be respected.

To address the challenges of AI technology in procurement procedures, public administrations need to take several factors into account. In particular, these include the nature, scope, and content of the procurement (including pre-contractual obligations such as setting appropriate specifications for calls for tenders), the risk categorisation of the AI systems or AI-based services to be procured, and a thorough risk assessment of the entire AI life cycle before and after deployment.

When it comes to the risk categorisation of an AI system or AI-based service, the AI Act sets obligations for providers and deployers that differ across three categories:

  1. high-risk AI systems;
  2. non-high-risk AI systems;
  3. general-purpose AI models (GPAI).

When public administrations procure AI systems or AI-based services, the following scenarios need to be taken into account, in particular for high-risk AI systems:

a) Procurement of AI development services by a public administration organisation as a provider – the AI Act defines a ‘provider’ of AI systems as any natural or legal person, public authority, agency, or other body that develops an AI system, or has one developed, with the intention of placing it on the market or putting it into service under its own name or trademark, regardless of whether this is done for payment or free of charge. Additionally, the AI Act describes ‘putting into service’ as the initial supply of an AI system by the provider for direct use by the deployer, or for the provider’s own use, within the EU, in accordance with the system’s intended purpose (Scenario 1).
b) Procurement of AI systems by a public administration organisation as a ‘deployer’ – under the AI Act, a deployer of AI systems is defined as any natural or legal person, public authority, agency, or other entity that uses an AI system under its authority. An exception is when the system is used for personal, non-professional activities.
Based on this definition, when the staff of a public administration organisation or its external service providers (using IT equipment provided by public administration organisations) deploy off-the-shelf AI systems provided by a third-party contractor – whether as a contractor-provider or contractor-distributor – the public administration organisation is classified as a deployer (Scenario 2).
c) Use of AI systems by contractors delivering AI-based services procured by public administration organisations – AI systems or AI-based services are used by contractors in the delivery of any kind of procured services to public administration organisations (Scenario 3).

Scenario 1: Public procurement procedures for AI systems and AI-based services need to take into account legal, ethical, and technical requirements throughout the entire AI life cycle.

During the collection and use of personal data for training, testing, validation, and input, public administration organisations need to ensure compliance with the requirements of the EUDPR/GDPR on the processing of personal data. This includes, in particular, a valid legal basis, retention policies, and transparency obligations.

Ensuring the accuracy and quality of datasets not only respects the accuracy principle of the EUDPR/GDPR, but also helps – to the extent possible – to limit or avoid biases and discrimination when using AI systems. Biases are closely linked to risks to fundamental rights, such as non-discrimination.

From a technical standpoint, it is important to ensure that the developed AI system, along with its algorithmic parts, maintains the required quality to achieve its intended purpose consistently throughout its life cycle. This includes requiring the AI developer to establish a robust testing and data validation plan, as well as implementing an effective quality management system to uphold transparency and reliability. This may include algorithmic transparency, log (auditable) records and explainability. Human oversight is another important element to ensure that the AI system is equipped with the necessary features and instructions of use that give control to the deployer.

Scenario 2: Public administration organisations acting as deployers must create the necessary conditions to comply with their obligations under the AI Act:

a) Procure only high-risk AI systems registered in the EU database of the Commission and request evidence of this registration.
b) Monitor the deployment and performance of AI systems or AI-based services; inform the provider or distributor and the relevant market surveillance authority when the deployment/use and maintenance of such systems may affect fundamental rights or public interest in an unacceptable manner.
c) Request from the provider of the AI system all relevant information on the training, operation, and maintenance of the AI system, including information on the training dataset, human oversight, transparency requirements, instructions of use, and technical and organisational measures.
d) Monitor legal, ethical, and technical compliance of the AI system prior to and during contracting, and during and after deployment (such as bias detection in outputs, adoption of adequate cybersecurity measures by the contractor, and transparency).
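On the bias detection mentioned in point d), one widely used screening heuristic is the 'four-fifths rule', which flags a group whose positive-outcome rate falls below 80% of the best-served group's rate. The minimal Python sketch below illustrates the idea; the group labels and decision data are hypothetical, and this is a monitoring heuristic rather than a legal test.

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per group from (group, positive?) pairs."""
    totals: dict[str, int] = {}
    positives: dict[str, int] = {}
    for group, positive in outcomes:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def four_fifths_flag(outcomes: list[tuple[str, bool]],
                     threshold: float = 0.8) -> bool:
    """True if the lowest group's rate is below `threshold` times the
    highest group's rate, signalling possible disparate impact."""
    rates = selection_rates(outcomes)
    return min(rates.values()) < threshold * max(rates.values())

# Hypothetical monitoring sample of deployed-system decisions: (group, approved?)
decisions = ([("A", True)] * 60 + [("A", False)] * 40
             + [("B", True)] * 40 + [("B", False)] * 60)
print(four_fifths_flag(decisions))  # group B's 0.40 rate is below 0.8 * 0.60
```

A flagged result would not prove discrimination, but it would be a natural trigger for the information duties towards the provider and the market surveillance authority described in point b).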

Scenario 3: In case the contractor uses AI systems to deliver a project, transparency and information obligations should be included as clauses in the contract between the contractor and the public administration organisation. Quality requirements need to be specified in advance and be included in the contract (or as specifications in the call for tenders).

Pre-procurement and post-procurement considerations

In summary, public administration organisations should ensure compliance with the AI Act and the relevant data protection framework both in the pre-procurement and post-procurement phases.

Pre-procurement:

  • Conduct risk assessments to align procurement goals with AI life cycle requirements.
  • Include detailed evaluation criteria in tender documents to ensure compatibility with legal, ethical, and technical standards.

Post-procurement monitoring:

  • Regularly audit AI systems to ensure compliance with agreed standards and assess their performance.
  • Adapt contracts to account for updates in AI capabilities or regulations.
  • Train staff for effective oversight and ethical deployment of AI tools.

Procuring General-purpose AI Models (GPAI)

There are also cases where public procurement organisations wish to utilise general-purpose AI models (GPAI) for certain activities. GPAI models are distinct from AI systems. While GPAI models can function as integral components and are typically embedded within AI systems, they do not independently constitute a complete AI system. Instead, GPAI models serve as the mathematical backbone that powers AI systems. By contrast, an AI system is a fully integrated solution combining various elements, including one or more GPAI models. For example, OpenAI’s ChatGPT application is an AI system, with GPT-4 acting as its core GPAI model.

Key characteristics of GPAI models

GPAI models exhibit specific characteristics that influence their procurement and use:

  • Acquisition methods: GPAI models can be acquired through:
    • proprietary developers or downstream providers of AI systems (closed-source models);
    • free open-source licences (open-source models).
  • Accessibility: They can be made available in different formats, including:
    • libraries or application programming interfaces (APIs);
    • direct downloads or physical copies.
  • Risk classification: GPAI models may be categorised as follows:
    • Conventional GPAI models, with standard operational applications;
    • GPAI models with systemic risks, which may pose broader challenges to public interest or safety.

Procurement scenarios for public administration organisations

When public administration organisations procure GPAI models, they may do so under one of the following scenarios:

  1. Procuring GPAI model development services as a provider: developing GPAI models tailored to specific public sector needs.
  2. Procuring GPAI models as a downstream provider: acquiring GPAI models to integrate them into larger AI systems for operational use.
  3. Procuring the use of GPAI models by contractors: ensuring contractors utilise GPAI models in delivering services to the public administration organisation, such as AI-based solutions in service delivery.

The most common approach for public administration organisations is to procure GPAI models from third-party contractors. Public administration organisations act as providers of GPAI models when they commission their development through procurement procedures. In such cases, they procure development services from a contractor-developer (hereafter, ‘Contractor’) to obtain a GPAI model tailored for their own use.

However, it is relatively uncommon for public administration organisations to procure the creation of entirely new GPAI models, due to the high costs and complexity of pre-training such models. Instead, they are more likely to procure services to significantly modify an existing GPAI model as part of an AI system. By doing so, public administration organisations may become providers of the modified model, and such activities should also be taken into account when procuring or deploying GPAI.

The AI Act imposes specific transparency and compliance obligations on providers of GPAI models to enhance accountability and reliability. Providers must supply downstream system developers with detailed information about their models, ensuring the models can be used and integrated into downstream AI systems efficiently and effectively. Additionally, model providers are required to implement robust policies to ensure compliance with copyright law during the training of their models.

For models classified as having systemic risks, providers face stricter requirements. They must assess and mitigate potential risks, report significant incidents, and perform rigorous, state-of-the-art testing and evaluations of their models. Furthermore, they are obligated to maintain high standards of cybersecurity and disclose the energy consumption associated with their systems, to promote sustainability and transparency.

To support ethical and lawful practices, providers are encouraged to collaborate with the AI Office to develop comprehensive codes of conduct. These codes, formulated with input from experts and a scientific panel, will serve as critical tools for establishing clear guidelines and oversight mechanisms for general-purpose AI models, ensuring their safe and responsible use. On 14 November 2024 the European Commission published its first draft of the General-Purpose Artificial Intelligence (AI) Code of Practice. The final document is expected to be published and presented at a closing plenary meeting in May 2025.

We look forward to continuing this journey of exploring the intersection between AI and public procurement in the new year. Mark your calendars for the next edition of our course taking place on 8 and 9 May 2025 in Maastricht. The seminar will delve even deeper into the challenges and opportunities of leveraging AI responsibly in public procurement.

To stay updated on this and other upcoming activities in the field, make sure you register for our newsletter.

 

The views expressed in this blog are those of the authors and not necessarily those of EIPA.

Tags
Artificial Intelligence, Public procurement