A New NYC Law Puts Pressure on Talent Intelligence: Will Your AI Solution Be Ready?

Isabelle Bichler
Reading Time: 2 minutes

UPDATE: Local Law 144 will now go into effect on April 15, 2023. Learn more about the change here.

On January 1, 2023, a new law will take effect in New York City, aimed at more tightly regulating the use of AI to screen job candidates or employees up for promotion. The news follows an October 2021 statement from the U.S. Equal Employment Opportunity Commission announcing updated guidelines for employers using AI in hiring.

Today, AI tools are in high demand among companies looking to speed up candidate screening and make hiring more efficient. However, automated employment decision tools can still be susceptible to embedded bias that leads to skewed results, or worse: the Federal Trade Commission has warned companies they may face penalties if their AI reflects racial or gender bias.

New York City is hoping to get ahead of possible AI bias infractions before sanctions are needed by enacting new vetting and notice requirements. Namely, companies using what the law calls “automated employment decision tools” must first submit them to a bias audit: an evaluation by an independent auditor that tests for possible unfair impact on job candidates or employees based on their race, ethnicity, or sex. In addition, hiring companies must notify candidates at least ten days before an AI tool is used in their interview process, giving them time to request an alternative selection process or an accommodation. Violations of the new rules carry civil penalties of up to $500 for a first violation and up to $1,500 for each subsequent one.

Here’s the good news. With almost a full year to gear up, now is the time to check whether your HR tech stack’s AI will stand up to a bias audit or whether changes need to be made. Some points to consider:

  • A platform that extracts skills from candidate profiles, not titles or roles, is far less likely to feed skewed algorithms. One program may read “nurse” as a role that tends to be female, for example, while a solution that identifies “patient advocacy” or “medication management” is less likely to bake in assumptions that can hurt male candidates for the position. (A minimal sketch of this idea follows the list.)
  • Semantics make up for the pitfalls of keyword matching. When screening CVs simply for a “data analyst,” a vulnerable platform may flood results with candidates assumed to match the role. By adding context, a more advanced solution can distinguish, say, proficiency in cloud-based pattern analysis from risk analysis within a local environment. (See the second sketch below.)
  • Data analytics are tools, not recommendations. The NYC law holds that bias danger lies in tools that score, classify, or recommend candidates, in effect replacing discretionary decision making in the hiring process. Compliant AI tools don’t recommend; they provide organized data to inform human decisions. (The third sketch below illustrates the distinction.)
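
To make the first point concrete, here is a minimal sketch of skills-based extraction. The skill lexicon, profile format, and function names are invented for illustration; a production platform would use a trained extraction model rather than a hand-written keyword list.

```python
# A minimal sketch of skills-based candidate representation. The lexicon
# below is hypothetical; real platforms rely on trained skill extractors.

SKILL_LEXICON = {
    "patient advocacy",
    "medication management",
    "care planning",
}

def extract_skills(profile_text: str) -> set[str]:
    """Return only the skills a profile mentions, ignoring job titles.

    Representing candidates by skills rather than by titles like "nurse"
    keeps a feature that correlates strongly with sex out of the matcher.
    """
    text = profile_text.lower()
    return {skill for skill in SKILL_LEXICON if skill in text}

profile = "Nurse with six years of patient advocacy and medication management."
print(extract_skills(profile))
# {'patient advocacy', 'medication management'}: the title "nurse" never
# becomes a matching feature downstream.
```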
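The second point can be shown with a toy comparison between plain keyword matching and context-aware matching. The bag-of-words vectors and example snippets are made up for the illustration; real semantic engines use learned embeddings, but the failure mode of keyword matching is the same.

```python
from collections import Counter
import math

def bow(text: str) -> Counter:
    """Bag-of-words vector for a snippet of CV or job-description text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Keyword matching: both CVs contain the phrase, so both look identical.
role_keyword = "data analyst"
cv_cloud = "data analyst experienced in cloud based pattern analysis pipelines"
cv_risk = "data analyst focused on risk analysis in a local on premise environment"
print(role_keyword in cv_cloud, role_keyword in cv_risk)  # True True

# Context-aware matching: describing the role by its surrounding context
# separates the two candidates.
role_context = bow("data analyst cloud based pattern analysis at scale")
print(round(cosine(role_context, bow(cv_cloud)), 2))  # higher (~0.71)
print(round(cosine(role_context, bow(cv_risk)), 2))   # lower (~0.28)
```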
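And to illustrate the third point, the sketch below surfaces organized facts about each candidate while deliberately producing no score, classification, or recommendation, the simplified outputs the law singles out. The data structure and field names are hypothetical, and whether a given design falls under the law is ultimately a question for counsel.

```python
from dataclasses import dataclass

@dataclass
class SkillSummary:
    """Organized data for one candidate: facts only, no score or ranking."""
    candidate_id: str
    matched_skills: list[str]
    missing_skills: list[str]

def summarize(candidate_id: str, candidate_skills: set[str],
              required_skills: set[str]) -> SkillSummary:
    """Lay out which required skills a candidate has and lacks.

    Note what is absent: no numeric score, no classification, and no
    recommendation. The reviewer, not the software, weighs the summary.
    """
    return SkillSummary(
        candidate_id=candidate_id,
        matched_skills=sorted(candidate_skills & required_skills),
        missing_skills=sorted(required_skills - candidate_skills),
    )

print(summarize("c-117",
                {"sql", "cloud-based pattern analysis"},
                {"sql", "cloud-based pattern analysis", "stakeholder reporting"}))
```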

More good news? We’ve partnered with Employment Litigation Attorney Robert Szyba for a webinar about this law and what it means for you. Register here >>