Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.
“Algorithmic discrimination” refers to the use of an artificial intelligence (AI) system that results in differential treatment or impact disfavoring an individual based on protected characteristics (e.g., age, color, disability, ethnicity, national origin, race, religion, sex, or veteran status). It is well settled that AI systems can produce discriminatory results, whether because the system was trained on flawed or unrepresentative data or because it found and replicated patterns of human discrimination within that data. Such discrimination is particularly troublesome when employers use AI systems to make employment decisions.
Although President Biden released an executive order on the development and use of AI, there is no comprehensive federal legislation regulating the use of AI systems, particularly in the context of safeguarding against algorithmic discrimination in employment decisions. In response, several states have passed, or are considering, legislation aimed at mitigating the risk that an employer’s use of an AI system will result in algorithmic discrimination. These enacted laws and pending bills would impose similar obligations on employers that use an AI system or an automated decision-making tool (ADT) when making employment decisions.1
In general, these laws and proposed bills impose a duty of reasonable care on employers to assess and mitigate the risk of algorithmic discrimination caused by their use of AI systems. There are significant affirmative reporting requirements, including direct notifications to individuals who are the subject of a decision made by an AI system. In some cases, the bills give individuals the opportunity to correct data input into the AI system and to appeal adverse consequential decisions, which may require human review. The specifics of these laws and bills are discussed below.
Enacted Laws
Colorado: Senate Bill 24-205
- The Colorado Artificial Intelligence Act will take effect on February 1, 2026, and adopts a risk-based approach to AI regulation similar to the European Union’s AI Act.
- The law will apply to Colorado businesses that use AI systems to make, or as a substantial factor in making, employment decisions.
- The legislation is designed to regulate the private-sector use of AI systems and will impose reasonable care requirements on Colorado employers.
- The legislation also requires parties doing business in Colorado that deploy or make available an AI system intended to interact with consumers to ensure that the system discloses to consumers that they are interacting with an AI system.
- The Colorado attorney general is responsible for enforcing the law and has the authority to promulgate rules to implement and enforce its requirements, including standards for risk-management policies, disclosure notices, and impact assessments. Penalties can include fines or injunctive relief.
- The Colorado AI Act does not include a private right of action.
Illinois: House Bill 3773
- HB 3773 amends the Illinois Human Rights Act to protect employees against discrimination from, and require transparency about, the use of AI in employment-related decisions.
- Under HB 3773, an employer cannot use AI that has the effect of subjecting employees to discrimination based on a protected class with respect to, e.g., recruitment, hiring, promotion, discharge, discipline, or the terms, privileges, or conditions of employment.
- The Act also prohibits employers from using ZIP codes as a proxy for protected classes.
- Illinois employers must notify employees of the use of AI to make or aid in making employment-related decisions.
- HB 3773 applies to any person employing one or more employees within Illinois and takes effect on January 1, 2026.
New York City: Local Law 144 (LL 144)
- Effective July 5, 2023, LL 144 prohibits employers and employment agencies from using an automated employment decision-making tool (AEDT) unless they ensure the tool has undergone a bias audit and provide required notices.
- The law only covers AEDTs that are being used to substantially assist or replace discretionary decision making for employment decisions.
- Notice Requirements: Employers must provide notice that an AEDT will be used. The notice must also include information about how to request a reasonable accommodation. If the applicant lives in NYC, employers must provide the required notice, together with a description of the “job qualifications and characteristics” that the AEDT will be used to assess, at least 10 business days before it is used.
- Bias Audits: Before using an AEDT, employers must conduct an audit of the tool to check for bias against protected groups (race/ethnicity and sex). The audit must be performed by an independent auditor at least annually, and the results must be made publicly available. (An illustrative calculation appears after this list.)
- LL 144 applies to all employers and employment agencies that use AEDTs “in the city,” which means (1) the job location is an office in NYC, at least part time; (2) the job is fully remote but the location associated with it is an office in NYC; or (3) the location of the employment agency using the AEDT is in NYC.
- Penalties for non-compliance are $500 for a first violation and up to $1,500 for each subsequent violation.
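For context on what such an audit measures, the rules implementing LL 144 describe audit results in terms of selection rates and impact ratios, where a category’s impact ratio is its selection rate divided by the selection rate of the most selected category. The short Python sketch below illustrates that arithmetic using hypothetical applicant counts; it is a simplified illustration of the calculation, not an official audit methodology or a compliance tool.

    # Illustrative only: the selection-rate and impact-ratio arithmetic that
    # underlies bias-audit reporting. All candidate counts are hypothetical.

    # Hypothetical AEDT outcomes by demographic category: (selected, total applicants)
    outcomes = {
        "Category A": (60, 100),
        "Category B": (45, 100),
        "Category C": (30, 100),
    }

    # Selection rate = number selected / number of applicants in the category.
    selection_rates = {k: sel / total for k, (sel, total) in outcomes.items()}

    # Impact ratio = a category's selection rate divided by the highest
    # selection rate observed across all categories.
    highest = max(selection_rates.values())
    for category, rate in selection_rates.items():
        print(f"{category}: selection rate {rate:.2f}, impact ratio {rate / highest:.2f}")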
Pending Bills and Regulations
California Privacy Protection Agency (CPPA)
- In November 2024, the California Privacy Protection Agency (CPPA) released draft regulations on the use of AI and automated decision-making technology (ADMT), which were promulgated under the California Consumer Privacy Act (CCPA).
- A California appeals court ruled that the CPPA can enforce its rules as soon as they are finalized.
- The public comment period was recently extended to February 19, 2025, and the CPPA is set to hold a public hearing that day to allow for the submission of in-person comments.
- As with the rest of the CCPA, the draft rules would apply to for-profit organizations that do business in California and meet at least one of the following criteria:
- The business has total annual revenue of more than $25 million;
- The business buys, sells, or shares the personal data of 100,000+ California residents;
- The business makes at least half of its total annual revenue from selling the data of California residents.
- The rules would only apply to the use of AI and ADMT in making “significant decisions.”
- The draft CCPA AI regulations have three major requirements. Organizations that use covered ADMT must:
- Issue pre-use notices to consumers;
- Offer consumers ways to opt out of ADMT; and
- Explain how the business’s use of ADMT affects the consumer.
California Civil Rights Council
- The California Civil Rights Department (CRD) is charged with enforcing the state’s anti-discrimination laws. As part of those efforts, the Civil Rights Council, a branch of the CRD, develops and issues regulations to implement state civil rights laws.
- Under the proposed rules, employers that use AI in their hiring or employment practices would not be able to use a system that screens out, ranks, or prioritizes applicants based on their religious creeds, disabilities, or medical conditions unless the factors are job-related.
- The main thrust of the CRD’s proposed rules is that vendors that supply AI systems to employers would be treated as agents and/or employment agencies of those employers.
- The rules would also prohibit employers from using AI during the interview process.
- The proposed rules require covered employers and entities to maintain employment records—including data created from automated decision-making systems and AI training data—for at least four years.
- Under the proposed rules, employers must also conduct anti-bias testing of their ADTs.
Texas: 89(R) HB 1709
- If passed, the Texas Responsible AI Governance Act would establish obligations for developers, deployers, and distributors of “high-risk AI systems.” The proposal adopts a risk-based approach to AI regulation similar to the European Union’s AI Act. “High-risk” systems include those used to make consequential decisions in areas such as employment, healthcare, financial services, and criminal justice.
- Key provisions of the proposal include mandatory risk assessments, record-keeping requirements, and transparency measures.
- The bill provides for significant penalties for non-compliance (monetary penalties of up to $100,000) and proposes establishing a regulatory “sandbox” to allow for innovation while testing compliance with the law.
- The bill mandates that developers and deployers of high-risk AI systems conduct detailed impact assessments evaluating risks of algorithmic discrimination, cybersecurity vulnerabilities, and the adequacy of transparency measures.
- Distributors are required to ensure that AI systems meet compliance standards before entering the market.
The incoming Trump administration’s impact on AI regulation will likely be minor because most AI regulatory efforts are occurring at the state level, where legislation will continue to proliferate. In Democratic-led states, we may see an uptick in AI regulations that establish a counterpoint to what is happening at the federal level. Employers that use or are considering using AI to make employment decisions would do well to keep abreast of the relevant legislation. Given the patchwork of emerging state and local laws, employers should also prioritize transparency measures and proactive audits as attainable means of managing the risk of bias inherent in AI tools.
See Footnotes
1 An ADT is a system that uses AI and has been specifically developed to make, or contribute to making, consequential decisions, including employment decisions. ADTs are sometimes referred to as automated decision-making technology (ADMT) or automated employment decision-making tools (AEDT). The terms are generally interchangeable.