Keeping up to date on advancements in Artificial Intelligence (AI) is no easy task. In the span of six months, AI can evolve more rapidly than many other sectors do in decades. It is therefore crucial that key stakeholders keep a close eye on developments in this field to ensure AI remains aligned with the interests and safety of humans.
Understanding AI Literacy
AI literacy is yet another term added to an already extensive and constantly growing list of AI expressions and concepts. The word “literacy” once denoted the ability to read and write. But in today’s digital world, literacy means much more – it demands a multifaceted ability to interpret, understand, respond, and anticipate.
AI literacy, specifically, implies an essential level of AI understanding that enables people to live, learn and work in a digital world through AI-supported technologies (Ng et al. 2021). It is often preceded by AI awareness: a basic understanding that AI exists and has the capacity to influence life and work, but without a clear comprehension of how or why.
For example, AI awareness could be knowing that your social media feed is tailored to your personal preferences and interests, whereas AI literacy implies a deeper understanding that personal data and online behaviour are used to train the algorithms doing the tailoring. Similarly, AI awareness could be knowing that generative AI systems like ChatGPT exist, but literacy suggests the ability to use such a system to achieve specific tasks while understanding the risks of plagiarism and bias, the importance of well-crafted prompts, and the need to critically revise outputs.
In other words, AI awareness is knowing AI is out there, whereas AI literacy is knowing how to use it and its implications.
AI awareness can be achieved passively, simply by understanding that AI exists and can therefore influence activities and decision making. AI literacy, on the other hand, requires continuous learning through timely information and updated assumptions. Consider, for instance, how the EU AI Act addresses AI literacy:
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”[i]
According to the EU AI Act, AI literacy assumes a rich and diverse background of legal, regulatory, technical, operational, and contextual knowledge about AI models, the systems they power, and the environment in which they are applied. By this definition, it is achieved through the acquisition of skills and competencies that enable a holistic understanding of AI, its risks and its applications.
The need for AI literacy in public administration
Public officials play a role in policy making, public service design and delivery, compliance with law and regulation, and in enabling public sector transformation. Given this breadth of roles, it is all the harder to define the specific AI skills and knowledge needed to ensure ethical AI is fully realised for the public good. Nevertheless, it is clear that AI awareness simply isn’t enough to ensure public safety: we need AI literacy. Specifically, we need AI literacy in public administration to ensure that government officials are equipped to make informed decisions regarding the future impact of AI on society, politics, and economics.
Here, we can’t afford to wait. AI is already transforming various areas of public administration. In healthcare, AI is used in predictive analytics to improve patient outcomes and manage healthcare resources more effectively. In law enforcement, AI-driven tools are used for predictive policing, analysing crime data to identify potential hotspots and allocate resources accordingly. In public procurement, officers are already preparing for changes in procurement processes driven by AI tools and systems. Public services, too, benefit from AI through the deployment of chatbots and virtual assistants, which streamline citizen interactions with government services, providing instant information and support.
The public sector’s transformation to integrate AI is already under way. However, without the support of public officials with a rich understanding of the implications of AI, we risk ignoring critical considerations. As such, the need to invest in and develop AI literacy within the public sector is particularly critical for the following reasons:
- Ethical AI Implementation: AI literacy helps public officials understand the ethical implications of AI, ensuring that AI systems are developed and used in ways that are transparent, accountable, and aligned with public values.
- Regulatory Compliance: With frameworks like the EU AI Act defining AI literacy as the skills and knowledge needed to make informed AI-related decisions, it is clear that regulatory compliance relies on a deep understanding of AI technologies.
- Public Trust and Safety: AI-literate officials are better positioned to safeguard public interests, minimising the risks of AI misuse, and maximising its benefits. This fosters public trust in AI-driven public services.
The need for specialised AI skills among public officials and regulators has not gone unnoticed. Recently, the U.S. Government’s Office of Management and Budget (OMB) announced that all federal agencies must designate a Chief AI Officer to coordinate and oversee the use of AI across their agency. Similarly, the European Commission has begun asking national governments to appoint their AI regulators as part of the implementation of the recently adopted EU AI Act.
Unfortunately, and perhaps unsurprisingly, despite being explicitly mentioned in the first comprehensive regulatory framework for AI, there is no common understanding of what skills or competencies are needed to achieve AI literacy. In fact, very little is known about how AI literacy should translate into real-world settings, despite it being identified as crucial for a future supported by AI. What we do know is that different actors will require different levels of AI skills, making it more urgent for some to reach AI literacy than for others.
The race to develop and maintain AI skills and competencies in government is well under way. As part of its ongoing commitment to strengthening AI capacity in government, the US National AI Talent Surge has already increased AI capacity by 172% between October 2023 and March 2024. It also plans a 500% increase in recruitment for AI-enabling skills between September 2024 and September 2025. Such investment in talent growth and acquisition signals a growing recognition of the transformative power of AI in the public sector. It also confirms that such a transformation depends on AI-specific skills among public officials.
While the EU has announced its own AI Skill Strategy for Europe, with emphasis on enhancing both basic digital skills and specialised AI skills under its Digital Decade targets for 2030, a similar initiative to develop and attract AI skills within the public sector has yet to be pursued as aggressively.
How do we increase AI literacy in the public sector?
Following the US model, increasing AI literacy in the public sector means investing in measures to develop AI skills and attract AI talent to the public sector. However, the public sector faces persistent challenges from “brain drain”, losing talent to the private sector. Despite efforts to address this by increasing workplace incentives, governments still face an uphill battle in generating enthusiasm for public sector employment against the attractive incentives of the private sector.
The second option is to upskill existing public officials to fill AI knowledge gaps. While we might be able to assume that most public officials have AI awareness, we cannot assume they have AI literacy. Ensuring trustworthy AI cannot be achieved through one-off learning interventions. It requires continuous learning, built on a strong foundation of AI awareness and progressing towards the specialised knowledge that constitutes AI literacy.
This will be a challenging task, as technological adoption and AI literacy vary greatly among individuals, making it difficult to design and deliver the necessary skills for all. Efforts should be made to avoid further broadening the AI skills gap and creating inequalities between those who benefit from the advantages of AI and those who do not. As such, knowledge sharing, lifelong learning, and on-the-job training will be critical steps in establishing AI literacy.
In particular, tailored learning designed to explore the specific legal, regulatory, ethical and organisational considerations of AI in the public sector is vital for ensuring policy and decision makers are educated about the potential risks and opportunities of an AI-supported administration. Understanding the unique role public officials play in advancing ethical AI is crucial, not just for ensuring the safety of the public, but for fostering transparency, accountability, and trust between the public and their government. These skills are also necessary for identifying innovative approaches to enhancing public sector activities via AI-supported tools and for anticipating future applications of AI.
Both options are possible. Governments should explore opportunities to attract and retain highly skilled workers while investing in opportunities for public officials’ skill development. In fact, eventually, it will be impossible not to: as AI continues to reshape how the public sector functions and operates, public administrations that fail to equip their officials with the knowledge and competencies to use AI confidently and effectively will fall behind in the digital transformation.
Conclusions
In the quest to navigate a dense ecosystem of AI news and headlines, it is impossible to avoid ideas of “ethical AI”, “trustworthy AI”, or “human-centered AI”. These terms, often reflecting human-centered values like those of the OECD’s AI Principles, are designed to ensure that AI is developed in a safe and transparent manner – under the supervision of educated, aware and critical decision makers. How do we ensure human-centered approaches are at the forefront of AI regulation and safety? It starts with AI literacy. This means ensuring that those tasked with the immense responsibility of monitoring and regulating AI systems, governing their use in the public and private sectors, and implementing them in public services and policies have sufficient knowledge and understanding to fully realise their impact and to make decisions according to the needs and interests of the public they serve.
Ready to start your journey towards AI literacy? Join EIPA for a 2-day intensive AI course, AI Risk Management in the EU.
[i] Note that this definition refers to the one included in the Final Draft of the Artificial Intelligence Act as of 19th April 2024.
The views expressed in this blog are those of the authors and not necessarily those of EIPA.