Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.
UPDATE: On July 12, 2024, the EU AI Act was published in the Official Journal of the European Union. The Act will enter into force on August 1, 2024, with the majority of its provisions applying from August 2, 2026. The ban on AI systems that pose an unacceptable risk will apply from February 2, 2025, and the obligations for high-risk systems that are safety components of products covered by existing EU product legislation will apply from August 2, 2027.
* * *
On March 13, 2024, the European Parliament approved the EU Artificial Intelligence Act (the “AI Act”) by a sweeping majority. The AI Act will be the world’s first comprehensive set of rules for artificial intelligence.
What does the AI Act regulate?
The definition of AI has developed throughout the legislative process to include both predictive AI (i.e., decision-based AI) and generative AI (i.e., AI used to generate new outputs based on the patterns in the data on which it has been trained, such as ChatGPT).
The AI Act sets out a broad definition of “AI systems” to mean a:
machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The challenge the legislators faced in drafting a definition of AI that stands the test of time is demonstrated by the fact that the first draft of the AI Act, published in April 2021, did not anticipate the use of generative AI and pre-dated the release of ChatGPT by around 18 months.
The AI Act regulates a number of different roles in the AI lifecycle. For the purposes of this article, however, the most significant responsibilities fall on providers (broadly, those that develop AI or place it on the market or put it into service under their own name) and deployers (organizations under whose authority the AI system is used, which would include employers), with obligations scaled according to the potential level of risk posed by the AI.
A “risk-based” approach to AI
The AI Act takes a risk-based approach to the regulation of AI systems; put simply, the greater the potential risk that the AI poses to individuals, the greater the compliance obligations. The AI Act defines “risk” as the combination of the probability of an occurrence of harm and the severity of that harm.
Unacceptable risk
The AI Act sets out a list of AI practices that pose an “unacceptable risk” and are therefore prohibited. The focus of the prohibition is on AI systems that present an unacceptable level of risk to people’s safety or that are intrusive or discriminatory. Notably for employers, a more recent addition to the list of prohibited practices is the use of AI systems to infer the emotions of individuals in the workplace, except where the use of the AI system is intended for medical or safety reasons. Outside those exceptions, use of AI by employers for this purpose will be banned under the AI Act.
High risk
The AI Act also identifies a category of AI systems that is deemed to be “high risk,” and which will therefore be subject to significant regulatory oversight, including a range of detailed compliance requirements for both providers and deployers (which are set out below), with the majority of the obligations falling on providers.
The default position under the AI Act is that the following uses of AI systems in the workplace will be high risk:
- for recruitment or selection (in particular for placing targeted job advertisements, analysing and filtering job applications, and evaluating candidates); and
- to make decisions affecting the terms of work-related relationships, notably the promotion or termination of work-related contracts; to allocate tasks based on individual behaviour, personal traits or characteristics; or to monitor and evaluate the performance and behaviour of individuals in the workplace.
This is likely to cover most uses employers make of AI systems in respect of their workers, and as a result, employers that use AI in the workplace will be required to take additional compliance steps.
Compliance obligations on “providers” of AI
The main obligations on providers of high-risk AI are as follows, with a focus on ensuring that the systems are developed in such a way as to comply with the AI Act from the outset and throughout their use:
- Establishment, implementation, documentation and maintenance of a “risk management system” which must be a continuous iterative process planned and run throughout the entire lifecycle of the high-risk AI system. A one-off risk assessment would not, therefore, seem to fulfil this obligation.
- Systems that involve the training of AI models must be developed using training, validation and testing datasets that meet the quality criteria set out in the AI Act and that are relevant, sufficiently representative and, to the best extent possible, free of errors.
- The systems should be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret the system’s output and use it appropriately.
- Requirement to establish a documented quality management system to ensure compliance with the AI Act, including written policies, procedures and instructions.
- Requirement to produce and retain technical documentation to demonstrate compliance with the Act.
- The systems should be designed and developed in such a way as to allow deployers to implement human oversight commensurate with the risks, level of autonomy and context of use of the high-risk AI system.
- High-risk AI systems shall technically allow for the automatic recording of logs, and providers shall retain copies of those logs.
- The systems should be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity.
- Requirement to ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to being placed on the market.
- Requirement to draw up an EU declaration of conformity.
- Obligation to comply with the registration requirements under the AI Act.
- Requirement to maintain records in relation to AI compliance.
Compliance obligations on “deployers” (which would include employers)
Most of the compliance burden will rest with providers. However, deployers will still be subject to significant obligations, many of which flow from the obligations imposed on providers (set out above):
- Ensuring that the AI system is being used in accordance with its instructions for use.
- Obligation to assign human oversight to individuals who have the necessary competence, training, authority, and support.
- To the extent that they exercise control over the input data, employers must ensure that the input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.
- Requirement to monitor the AI system on the basis of its instructions for use and, where relevant, inform the provider of any issues.
- Obligation to retain copies of the automatically generated logs for a period appropriate to the intended purpose of the high-risk AI system, and for at least six months.
- Before putting high-risk AI systems into use, requirement to inform workers’ representatives, affected workers and other individuals (where relevant) that they will be subject to the use of the system.
- Where applicable, obligation to use the information supplied by the provider to complete data protection impact assessments.
- Where relevant, requirement to complete a fundamental rights impact assessment.
The AI Act also introduces a new right for individuals who have been subject to a decision made on the basis of a high-risk AI system that produces legal effects, or that significantly impacts their fundamental rights, such as performance management or termination decisions: they may obtain from the deployer a “clear and meaningful” explanation of the role of the AI system in the decision-making process. It remains to be seen how this right will be used in practice, but disgruntled employees could conceivably use it to put their employers under pressure to explain how complex algorithms work.
Lower-risk AI systems (such as chatbots used for customer service) will be subject to less burdensome transparency obligations. These are less likely to be relevant to employers.
Extraterritorial scope of the AI Act
As with the EU General Data Protection Regulation (“GDPR”), the AI Act will have extraterritorial scope, and international companies, even if they are not based in the EU, may still find themselves subject to the AI Act.
As well as companies that are based in the EU, the AI Act is also expressed to apply to:
- providers placing AI systems or general-purpose AI models on the market in the EU, irrespective of where they are based; and
- providers and deployers of AI systems that are based outside the EU, where the output produced by the AI system is used in the EU.
Penalties for non-compliance
The penalties for non-compliance with the AI Act are significant: up to the higher of EUR 35 million (approximately USD 38 million) or 7% of the company’s global annual turnover in the previous financial year. By way of comparison, this is almost double the maximum penalty for GDPR breaches (EUR 20 million or 4% of global annual turnover), which was itself considered very high when the GDPR came into effect six years ago.
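To illustrate the “higher of” mechanism, the cap can be sketched as a simple calculation (an illustrative sketch only; the turnover figure is hypothetical, and any actual fine would be set by the relevant authority within this cap):

```python
# Illustrative sketch of the AI Act's maximum-fine mechanism for the most
# serious breaches: the higher of EUR 35 million or 7% of global annual
# turnover in the previous financial year.
FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_penalty_eur(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine under the AI Act's cap."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# Hypothetical company with EUR 1 billion in global annual turnover:
# 7% (EUR 70 million) exceeds the EUR 35 million floor.
print(f"EUR {max_penalty_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```

For smaller companies, the fixed amount acts as a floor on the cap: at any turnover below EUR 500 million, the EUR 35 million figure is the higher of the two.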
Next steps
The AI Act is in the final stages of the legislative process and is now subject only to a final legal-linguistic check and formal endorsement by the Council of the European Union, which is expected to happen before the EU elections in early June. Following that, the AI Act will be published in the Official Journal and will enter into force 20 days after publication.
The majority of the provisions will then apply two years later, with the ban on AI systems that pose an unacceptable risk applying after six months and the obligations for high-risk systems that are safety components of products covered by existing EU product legislation applying after 36 months.
In terms of next steps, businesses should be conducting a thorough audit of the use of AI throughout their organization and considering how each use might be categorised under the AI Act, in order to understand which compliance obligations will apply to them. The most immediate focus will be ensuring that any AI deemed to pose an “unacceptable” risk is withdrawn ahead of the prohibition. When putting in place any new uses of AI, employers should be thinking ahead to the upcoming compliance requirements under the AI Act, in particular as the new legislation puts the focus squarely on ensuring that AI systems are designed in such a way as to comply with the AI Act and to enable ongoing compliance.
If you would like to read the full text of the Act, as adopted by the European Parliament, see here.