The AI Act – AI Literacy obligations
The European Union's Artificial Intelligence Act (the AI Act) came into force on 1 August 2024, with a phased enforcement timeline over the following two years. As well as setting out requirements for providing and using AI based on risk categories, the AI Act imposes separate AI literacy requirements which aim to ensure the responsible and safe use of AI.
The rules on AI literacy came into effect on 2 February 2025.
What is AI literacy?
Article 3 of the AI Act defines AI literacy as the “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.”
This is a very broad definition, and it becomes even wider when read together with Recital 20 of the AI Act. AI literacy is intended to allow not only providers and deployers but also affected persons (a term not specifically defined) to make an informed deployment of AI systems, and to provide relevant actors in the AI value chain with the insights required to ensure appropriate compliance and correct enforcement of the AI Act. The AI literacy requirement applies to all AI systems regardless of the level of risk they present. The level of AI literacy to be promoted is context-dependent, and a "one size fits all" approach is not appropriate.
Who must receive AI literacy training?
Article 4 of the AI Act states "providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used."
Most businesses today deploy some form of AI system, ranging from the most basic to the highly sophisticated. Organisations will have to take measures to ensure that staff can make an informed deployment of AI systems and are aware of the opportunities and risks associated with them. Article 4 is broadly worded, and the level of detail and depth of training will vary depending on the specific needs of each organisation, and even between different groups within the same organisation. For example, training may cover how AI systems operate, the opportunities and risks they present, the limits on how AI systems may be used within the organisation, and how confidential information and personal data should be handled. It may also cover third party rights when using AI, protection of AI-generated output and potential copyright issues, and system-related weaknesses stemming from training data (such as bias) or from the way the system functions (such as hallucinations).
Steps to AI literacy compliance
The following steps should be taken by an organisation to achieve compliance with its AI literacy obligations:
- AI Systems Audit: Carry out a strategic-level audit to assess what type of AI systems are being provided/deployed and/or being introduced.
- Assess Existing Knowledge: Assess who uses the AI systems and what their current knowledge is to determine what knowledge and tools are needed to achieve an appropriate level of AI literacy.
- Training Programmes: Develop and implement training materials and procedures to ensure that the target level of AI literacy can be achieved across the organisation, including among any persons using AI systems on its behalf.
- Documenting and Monitoring: Keep records of training materials and attendance, and update training programmes as AI technologies and regulations evolve.
Non-compliance
While the AI literacy obligations came into effect on 2 February 2025, the provisions dealing with non-compliance will only apply from 2 August 2025. It is open to each member state to set out penalties and other enforcement measures for non-compliance, which must be effective, proportionate and dissuasive. However, organisations should keep in mind that significant fines can be imposed for supplying incorrect, incomplete or misleading information to notified bodies, which would include any information supplied in relation to AI literacy.
Conclusion
By fostering a culture of AI literacy, an organisation not only complies with its legal obligations but also creates a responsible, informed environment that promotes the safe and ethical use of AI.
For more information, please contact Damian Maloney, Franklin O'Sullivan or your usual contact in Beauchamps.