UPDATE: The Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on April 15, has been delayed until May 6, 2023.
On Monday, December 12, 2022, New York City’s Department of Consumer and Worker Protection (“DCWP”) announced that the Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on January 1, would be delayed until April 15, 2023.
Created to ensure that organizations using automated, AI-based hiring tools proactively protect against potential or unintended bias in the processing of candidate information or in hiring decisions, the law requires organizations using such tools to undergo mandatory independent audits of their AI systems and to be transparent with candidates about their use. With only months to go, the time for enterprises to evaluate their systems for ethical, Responsible AI is now.
Despite its designation as a local law, HR leaders everywhere must remain engaged in tracking its evolution. New York City is the epicenter of the business world; if an enterprise operates in NYC and has employees there, or is hiring employees there, this regulation applies to it.
So why the delay?
The DCWP is overseeing the rollout of the law. It attributes the delay to the high volume of public comments generated by a public hearing held in November. A quick review of the department’s website shows well over 100 pages of feedback and inquiries stemming from that hearing, including comments submitted by retrain.ai. The DCWP aims to review all input before planning a second hearing.
What sort of questions came up?
Numerous points were raised, ranging from what specifically defines an AEDT to how regulation can remain effective without stifling innovation. A few specifics included:
- What qualifications and certifications will be required to select and authorize an independent auditor?
- How will data set size factor into the equation, given that some businesses won’t possess a data set robust enough to accurately determine bias?
- What options are available to candidates who opt out of the AI-based systems, as is their choice? How will they be assured equal consideration in the hiring process?
A second public hearing is planned for the first quarter of 2023. In the meantime, we’ll keep you updated in our Responsible AI Hub, where you can also learn what constitutes unbiased, Responsible AI, what to look for in an HR Tech vendor to ensure compliance, and how retrain.ai uses the five pillars of Responsible AI to support the growth of a skilled, diverse workforce.
To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.