AI in the Garden State: New Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination

  • The NJ Division on Civil Rights has issued guidance clarifying that state discrimination law prohibits algorithmic discrimination.
  • The Guidance discusses how the design, training, and deployment of AI decision-making tools can lead to discrimination under the NJ Law Against Discrimination.

On January 9, 2025, the New Jersey Attorney General and the Division on Civil Rights (DCR) announced the launch of a new Civil Rights and Technology Initiative to address the risks of discrimination stemming from the use of artificial intelligence (AI) and other advanced technologies. As part of this initiative, the New Jersey Office of the Attorney General and the DCR issued Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination. The Guidance clarifies that the New Jersey Law Against Discrimination (NJLAD) prohibits “algorithmic discrimination,” that is, discrimination resulting from a covered entity’s use of automated decision-making tools. Automated decision-making tools generally refer to any tool “that is used to automate all or part of the human decision-making process,” including, for example, generative AI, machine-learning tools, statistical tools, and decision trees.

How Can Automated Decision-Making Tools Discriminate?

The Guidance identifies three areas where decision-making tools can lead to discriminatory outcomes: designing the tool, training the tool, and deploying the tool.

A developer may intentionally or inadvertently skew a tool’s outcomes through its design, including the choice of model, algorithm, or inputs. The Guidance cites an example from an enforcement action the U.S. Equal Employment Opportunity Commission (EEOC) brought against a tutoring company. The EEOC alleged that the company programmed a screening tool to reject female applicants aged 55 or older and male applicants aged 60 or older. As part of the settlement, the company ultimately agreed to stop collecting problematic inputs such as dates of birth. Design could also lead to algorithmic discrimination if an employer uses predictive analytics, in which generative AI analyzes data from past hiring processes to predict which candidates are likely to succeed in specific roles. If discrimination tainted those earlier hiring processes, the tool’s design could lead it to recommend further discriminatory outcomes.

The Guidance identifies training as a second way decision-making tools can cause discriminatory outcomes. Developers often train decision-making tools before deploying them, and training a tool on a biased data set can produce discriminatory results. The Guidance does not identify an employment-related example, but this issue could arise with video-interview analytics, where employers assess a candidate’s facial expressions, voice tone, or word choice to determine suitability for a role. If the developer limits the training data to individuals of a particular sex or race, for example, the tool may come to associate the “appropriate” voice tone or facial expression with that sex or race and screen out candidates outside those classes.

Finally, the Guidance identifies the deployment stage as fertile ground for algorithmic discrimination. Deployment can result in algorithmic discrimination if an employer applies a tool to one protected class but not another (e.g., using a screening tool for men’s resumes but not women’s). According to the Guidance, deployment may also result in algorithmic discrimination if an employer designs and trains a tool for one purpose but uses it for another (e.g., designing and training the tool for recruiting but using it for onboarding). Deployment can also create a feedback loop, in which a biased tool’s discriminatory decisions are fed back into the tool as training data, leading to still more discrimination, as the illustration below suggests.
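
To make the feedback-loop concept more concrete, the toy simulation below sketches the mechanism the Guidance describes: a tool that modestly favors one group is repeatedly retrained on its own selections, and the skew tends to compound over successive hiring cycles. Everything in the sketch is a hypothetical assumption for illustration, including the group labels, the numbers, and the scoring rule; it does not describe any real tool or any method endorsed by the Guidance.

```python
# Illustrative only: a toy simulation of the feedback loop described in the
# Guidance, in which a tool's biased selections are fed back into its
# training data and skew later selections even further. The groups, numbers,
# and scoring rule are hypothetical assumptions, not a real tool.

import random

random.seed(42)

GROUPS = ("A", "B")
SLOTS_PER_CYCLE = 20          # hires per cycle
APPLICANTS_PER_GROUP = 100    # equally qualified applicant pools
BIAS_WEIGHT = 0.5             # how strongly the tool favors the group it has
                              # "seen" more often in its training data

def training_shares(training):
    """Share of each group in the accumulated training data."""
    return {g: training.count(g) / len(training) for g in GROUPS}

def run_cycle(shares):
    """Score applicants on random 'quality' plus a bonus tied to how common
    their group is in the training data, then take the top N scores."""
    applicants = [(g, random.random() + BIAS_WEIGHT * shares[g])
                  for g in GROUPS for _ in range(APPLICANTS_PER_GROUP)]
    applicants.sort(key=lambda pair: pair[1], reverse=True)
    return [g for g, _ in applicants[:SLOTS_PER_CYCLE]]

# Start with training data that only slightly over-represents group A.
training = ["A"] * 55 + ["B"] * 45

for cycle in range(1, 7):
    hired = run_cycle(training_shares(training))
    training += hired  # biased outcomes flow back into the training data
    share_a = training.count("A") / len(training)
    print(f"cycle {cycle}: group A share of training data = {share_a:.2f}")
```

Because the simulated tool rewards whichever group dominates its training data, each round of hiring typically deepens that dominance, which is the dynamic the Guidance warns deployment can create.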

What Types of Discrimination Claims Apply?

The Guidance specifies that the NJLAD prohibits algorithmic discrimination based on actual or perceived protected characteristics in disparate treatment, disparate impact, and failure to accommodate claims. 

Disparate treatment occurs if an employer designs or uses automated decision-making tools to intentionally treat members of a protected class differently, or uses a tool that is discriminatory on its face, even if the employer has no intent to discriminate. Disparate treatment would include the example above of using a resume-screening tool for one group of applicants, such as men, but not for women. The Guidance also explains that a tool that does not directly consider a protected characteristic can still be the basis of a disparate treatment claim if it considers a close proxy for one. Consistent with the example from the Guidance, if an employer designs a tool to prefer applicants who provide Social Security numbers over those who provide individual taxpayer identification numbers, that preference may be the basis of a national origin discrimination claim if the employer treats use of an individual taxpayer identification number as a proxy for citizenship status.

Algorithmic discrimination constitutes disparate impact discrimination when automated decision-making tools recommend or contribute to decisions that disproportionately affect members of a protected class, regardless of the developer’s or employer’s intent, unless the employer’s use of the tool serves a substantial, legitimate, nondiscriminatory interest and there is no less-discriminatory alternative. The Guidance explains that relevant evidence for the less-discriminatory-alternative analysis includes whether the employer or developer tested the automated decision-making tool for bias and whether the employer evaluated alternative tools or practices.
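
To make the disparate-impact analysis more concrete, the short sketch below shows one way an employer might compare a screening tool’s selection rates across demographic groups when testing for bias. It is purely illustrative: the data and group labels are hypothetical, and the 0.8 (“four-fifths”) benchmark is a long-standing federal EEOC rule of thumb used here only as an example flag, not a threshold drawn from the Guidance or the NJLAD.

```python
# Illustrative only: a minimal sketch of auditing an automated screening
# tool's outcomes by comparing selection rates across groups. The data and
# group labels are hypothetical; the 0.8 "four-fifths" benchmark is a
# federal EEOC rule of thumb, not a standard from the NJ Guidance.

from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs, where selected is a bool."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Each group's selection rate relative to the highest-rate group."""
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical results: (group, did the tool advance the applicant?)
    outcomes = ([("A", True)] * 40 + [("A", False)] * 60
                + [("B", True)] * 20 + [("B", False)] * 80)
    rates = selection_rates(outcomes)
    for group, ratio in impact_ratios(rates).items():
        flag = "flag for review" if ratio < 0.8 else "no flag"
        print(f"group {group}: rate {rates[group]:.2f}, ratio {ratio:.2f} -> {flag}")
```

A real audit would involve counsel and validated statistical methods; the point of the sketch is simply that testing a tool for disproportionate outcomes, and documenting that testing, is the kind of evidence the Guidance says bears on the disparate-impact analysis.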

The Guidance describes several ways automated decision-making tools can give rise to failure-to-accommodate claims if a tool precludes or impedes a reasonable accommodation for a person’s disability, religion, pregnancy, or breastfeeding. An automated decision-making tool may be inaccessible to individuals with disabilities. For example, an employer may use a tool that measures an applicant’s typing speed but cannot accurately measure typing on a non-traditional keyboard used by a person with a disability. In that case, the NJLAD requires the employer to provide an accommodation if the employer knew or should have known about the need for one and the accommodation would not cause an undue hardship on the employer. An employer or developer may also fail to train the tool on data that includes individuals who need accommodations, so the tool may not recognize that an accommodation is possible or may penalize individuals who have or need one. This scenario may arise, for example, with productivity tools that recommend penalties for employees who take excessive breaks but do not account for breaks related to medical, pregnancy, or breastfeeding needs. The employer may violate the NJLAD if it accepts the tool’s recommendation to discipline employees under those circumstances.

Recommended Practices

The Guidance leaves no room for ambiguity: employers are liable for algorithmic discrimination even if a third party developed the tool and the employer did not understand “the inner workings of the tool.” The legal landscape for automated decision-making tools is rapidly shifting, but employers should keep these recommended practices in mind:

  • Understand the Tools You Are Using. Most employers use decision-making tools developed by third parties. Employers should ask the vendor questions to ensure they understand the tool they are using, including questions about the design and development process, the training process, and real-world testimonials and outcomes.
  • Properly Train Employees. Employers should make efforts to ensure employees using these tools understand how to use them and are using them for the correct purposes.
  • Audit the Tools. After implementing the tools, employers should consider auditing the results to ensure the tools are not disproportionately affecting individuals who share a protected characteristic.
  • Provide Notice. Although the Guidance does not require employers to notify applicants or employees of the use of automated decision-making tools, the Guidance recommends doing so as a best practice.
  • Monitor Trends and Consult Legal Counsel. We anticipate that the Guidance and similar declarations from other states will evolve as automated decision-making tools continue to advance. Consulting with legal counsel is the best way to ensure these tools comply with local, state, and federal laws.

Information contained in this publication is intended for informational purposes only and does not constitute legal advice or opinion, nor is it a substitute for the professional judgment of an attorney.