Are Data Protection Authorities the new AI Market Surveillance Authorities?


By Michaela Sullivan-Paul and Elpida Longinidou

As the EU landscape of artificial intelligence (AI) governance continues to evolve, this spring brought a wealth of conferences, events and seminars dedicated to enriching our understanding of the dynamics between data protection and AI. In light of the EU AI Act's recent entry into force, critical questions about who will oversee AI compliance have emerged, only adding to speculation about the roles and responsibilities of data protection professionals in the era of AI.

In our previous blog, EIPA emphasised the importance of strengthening AI literacy among public officials: not just to fulfil their obligations under the EU AI Act, but to ensure the responsible implementation and monitoring of AI systems in both the public and private sectors. And while much effort is being made to understand how AI literacy is and should be defined, it undoubtedly cannot be achieved without due consideration of the rights and freedoms guaranteed to European citizens through the EU Charter of Fundamental Rights and the EU General Data Protection Regulation (GDPR).

EIPA has been busy exploring the intersection of data protection, privacy and AI, which often revolves around questions of data – an essential ingredient of AI development and deployment, and a commodity that often requires legal protections and safeguards. As such, it is not entirely surprising that EU AI Act compliance is deeply connected to the principles and protections embedded within the GDPR: responsible processing and management of data can enable and foster responsible development and deployment of AI.

Put simply, there is no responsible AI without data protection

Both touted for their human-centricity, the GDPR and the EU AI Act employ principles designed to protect individuals against violations of their fundamental rights through risk-based approaches. Consider, for instance, the principle of transparency. Under the GDPR:

  • Personal data must be processed in a transparent manner (Article 5(1)(a)).
  • Information regarding data processing must be provided in a concise and easily accessible form (Article 12), and data subjects may request access to it (Article 15).
  • Data subjects have the right to be informed about how their data is collected, stored, shared and used, whether the data was collected from the data subjects themselves or from other sources (Articles 13 and 14).

Similarly, the EU AI Act promotes transparency by providing that users should be

  • aware of when they are interacting with an AI system (Article 50);
  • notified when they are subject to high-risk AI systems (Article 13);
  • aware that such systems must be developed in a way that allows for traceability and explainability (Article 12 and recital 71).

Another example is the principle of fairness (Article 5(1)(a) GDPR), which governs the processing of personal data by, for instance, prohibiting processing that may result in discrimination. The EU AI Act likewise contains protections against bias and discrimination (Articles 10, 14 and 15), ensuring that AI systems avoid perpetuating biases that unfairly impact individuals or groups based on protected characteristics such as race, gender or religion.

With these principles underpinning both regulations, it is clear that those overseeing compliance require similar knowledge of the key principles designed to safeguard fundamental rights, and of the practices that ensure such high-level principles are reflected in day-to-day processes. These perspectives are familiar to data protection authorities (DPAs), which – prior to the EU AI Act – already oversaw AI systems' compliance with data protection law. Since the GDPR applies to all processing of personal data, DPAs play an important role in monitoring AI systems' compliance with its provisions. In light of the new regulation for AI systems, however, the question is whether this role should continue under the EU AI Act, or whether new authorities should be designated to monitor all AI system compliance.


A recent statement by the European Data Protection Board suggests the former, declaring DPAs the natural choice for taking up the role of Market Surveillance Authorities (MSAs) of high-risk AI systems under the EU AI Act. The statement says, ‘DPAs are proven – and are proving to be – indispensable actors in the chain leading to the safe, rights-oriented and secure deployment of AI systems across several sectors’.

The complex and demanding responsibilities of DPAs, which often require a combination of legal, technical and operational knowledge, provide a strong basis upon which AI literacy can be developed and applied to carry out MSAs’ responsibilities. Beyond the expertise required to fulfil their obligations, DPAs and MSAs also enjoy comparable autonomy within their respective domains, whether structural autonomy designated through Member States or functional autonomy in acting independently, free of external influence.

On this basis, the similarities between the obligations of a DPA and an MSA become clearer, in both the scope and the function of their designated authority. Nevertheless, there are risks in entrusting DPAs with the task of market surveillance: doing so adds to an already long list of responsibilities amid limited resources and, moreover, blurs the distinct lines between the GDPR and the EU AI Act.

So, who should monitor AI compliance?

The EU AI Act defines key actors, governance structures, regulatory bodies and enforcement mechanisms for ensuring compliance. For example, the newly established EU AI Office has designated powers under the EU AI Act for governing General Purpose AI (GPAI). Some have criticised the role of the AI Office as too ambiguous, arguing that the EU AI Act’s references to both the EU Commission and the EU AI Office, without clear indication of their distinct roles (if any), result in unnecessary ‘legal confusion’. Regardless, the AI Office will play a significant role in implementing and enforcing the EU AI Act through the powers granted to it by the EU Commission, and will do so through its teams dedicated to ‘Excellence in AI and Robotics’, ‘Regulation and Compliance’, ‘AI Safety’, ‘AI Innovation and Policy Coordination’ and ‘AI for Societal Good’, in consultation with its scientific and international affairs advisors.

In addition to establishing new bodies, existing institutions are being assigned new AI supervision responsibilities. The European Data Protection Supervisor (EDPS), for example, is ordinarily responsible for monitoring the processing of personal data by EU institutions and bodies, and will now also take an active part in enforcing the EU AI Act. Under the Act, Member States are required to appoint their MSAs and notifying authorities, allowing national governments to decide who is best suited to manage and monitor AI-related risks and incidents within their borders.

Some Member States are already taking action. The Dutch Data Protection Authority, alongside other national supervisory authorities, has issued joint advice for effective AI supervision in the Netherlands. In it, the authorities advocate for the Dutch DPA to serve as the primary MSA for AI applications, ensuring these systems meet strict standards on training, transparency and human oversight, in combination with sector-specific supervision and cross-border collaboration.


Alternatively, the Spanish Government has pursued a dualistic model: the Spanish Council of Ministers approved the creation of the Spanish Agency for the Supervision of Artificial Intelligence (Agencia Española de Supervisión de la Inteligencia Artificial – AESIA) to fulfil the country’s legal and regulatory obligations, with AESIA expected to work in collaboration with the established Spanish DPA. In such an arrangement, the role of the DPA remains largely unchanged, since AESIA acts as an entity distinct from it.

As other Member States move to identify their respective AI bodies, speculation grows over whether data protection officers (DPOs) should take up AI monitoring roles within their organisations, in addition to their data protection monitoring tasks. Compelling arguments have been made for assigning such responsibilities to existing DPOs, citing their capacity to assess and monitor fundamental rights risks and to provide their organisations with legal and operational advice accordingly.

For example, the overlap between assessments such as the Data Protection Impact Assessment under the GDPR and the Fundamental Rights Impact Assessment under the EU AI Act raises the question of the extent to which such assessments can and should be conducted by the same authority, team or individual.

Beyond legal expertise and knowledge, many of the DPOs who witnessed and contributed to the development and implementation of internal data protection frameworks are well equipped with the institutional knowledge to contribute to an internal AI governance framework. Like data protection frameworks, internal AI governance frameworks require knowledge of regulatory and legal obligations, regional standards and best practices, as well as organisational knowledge of internal processes, systems and practices.

Valuable and necessary as it is to consider whether DPOs should absorb AI-related responsibilities, we must not lose sight of which skills, competencies and qualifications are required to establish a strong AI risk management system.

Fortunately, these discussions are underway. A prominent takeaway from the 2024 CPDP Conference – as well as from the EDPS Summit and EIPA’s own DPO certification programme – was that an effective, proactive and competent data, AI and digital risk management strategy requires a diverse and collaborative team combining technical, legal, regulatory and operational competencies.

When IT professionals, engineers, legal advisors, digital ethics officers, AI officers, DPOs and security teams meet to discuss hazards, threats and vulnerabilities, risk mitigation and management protocols become more resilient to the growing number of threats – known and unknown – that endanger privacy, data protection and fundamental rights. As a recent OECD report highlights, when the privacy and AI communities meet, everyone benefits from enhanced awareness and collective action towards digital transformation.


The effectiveness of such a team, however, relies heavily on each member having a clear and distinct role in contributing to good governance within an institution. Here, DPOs have an advantage: their roles and responsibilities are clearly delineated under the GDPR. The EU AI Act, by contrast, leaves far more room for interpretation regarding the necessity, background and responsibilities of an AI equivalent of the DPO, leaving some organisations to initiate AI projects without first designating anyone to oversee and manage their governance.

Where do we converge?

For now, stakeholders agree that there is an urgent need for clearer guidance and instruction from the relevant authorities. The role of the EU AI Office will be integral to this – not only in defining the roles of AI officers and supervisors, but also in establishing what organisational procedures will be required for EU AI Act compliance. Time, however, is of the essence. There is a growing need for AI research to be conducted in alignment with data protection and privacy, leveraging the lessons learned from nearly a decade of GDPR implementation. Equally, DPOs and legal professionals must recognise and prepare for the implications of AI-driven technologies in order to manage their impact on data protection and privacy.

Now that the EU AI Act has entered into force, both private and public sector actors must prepare for AI compliance, monitoring and risk mitigation. More specifically, Member States have only until November 2024 to identify and disclose the authorities responsible for protecting fundamental rights, and until August 2025 to designate their national MSAs. This leaves relevant authorities minimal time to prepare for their new responsibilities by becoming experts in the intricate AI regulatory landscape, ready to prevent violations of fundamental rights by AI systems.

Currently, there are limited resources available to guide individuals through the EU’s complex regulatory framework on AI. To address this, EIPA has created an intensive two-day course designed to help participants navigate the EU AI Act’s risk-based approach and explore its interplay with the wider digital regulatory framework, including the GDPR, the Digital Services Act, the Digital Markets Act and the NIS2 Directive.

Interested in learning more about AI at EIPA? Contact our AI Senior Research Officer, Michaela Sullivan-Paul at m.sullivanpaul@eipa.eu.

More on this topic

Ready to become an expert in Data Protection? Check out the programme below, or register now while spots are available. You can stay in touch with EIPA about additional training and resources on the topic of Data Protection and AI.

View full programme here

 

The views expressed in this blog are those of the authors and not necessarily those of EIPA.

Tags
Artificial Intelligence; Digital policy, cyber security and data protection