Update: NYC AI Audit Law (Local Law #144)

As of July 2023, Local Law #144 in New York City is in full effect for enterprises using Automated Employment Decision Tools (AEDTs) to assist in hiring decisions. The complete rollout of the regulation includes punitive enforcement for those found in violation of the law.

Given the prospect of a civil penalty for non-compliance, HR leaders are well-advised to keep up to speed on specific requirements of the law, starting with AEDT audits. The law mandates that:

  • AEDT system audits be conducted by an independent entity
  • Summarized audit results be posted on the employer’s website
  • Alternative selection processes or accommodations be provided to job candidates who opt out of AEDT-assisted selection

Despite being known as an NYC-based law, Local Law 144 has generated questions about its actual scope. The geographic guidelines now clarify (a simple decision-helper sketch follows the list below):

  • If the role is located in New York City: A bias audit is required and notice must be provided to candidates who are NYC residents
  • If the role is located outside of New York City: A bias audit and notice are not required
  • If the role is fully remote but the company has only a New York City office: A bias audit is required and notice must be provided to candidates who are NYC residents
  • If the hiring company does not have an office in New York City: A bias audit and notice are not required
  • If the hiring company has offices both in and outside of New York City: Specifics of the role will dictate the need for a bias audit and notice
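
To make these scenarios easier to apply, here is a minimal Python sketch that maps them onto a simple decision helper. The function name, inputs, and the simplified handling of the mixed-office case are our own illustrative assumptions, not language from the law; edge cases should always be confirmed with counsel.

```python
def aedt_obligations(role_in_nyc: bool, fully_remote: bool,
                     has_nyc_office: bool, has_non_nyc_office: bool) -> str:
    """Rough mapping of the geographic scenarios above to the bias-audit and
    candidate-notice requirement. Illustrative only, not legal advice."""
    if role_in_nyc:
        return "Bias audit and notice required"
    if fully_remote and has_nyc_office and not has_non_nyc_office:
        return "Bias audit and notice required"
    if not has_nyc_office:
        return "Bias audit and notice not required"
    if has_nyc_office and has_non_nyc_office:
        return "Depends on specifics of the role; consult counsel"
    return "Bias audit and notice not required"


# Example: a fully remote role at a company whose only office is in NYC
print(aedt_obligations(role_in_nyc=False, fully_remote=True,
                       has_nyc_office=True, has_non_nyc_office=False))
```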

To break the law down further, let’s start with a few tangible moves HR leaders and Talent Acquisition specialists can make now to plan for Local Law 144 compliance:

  1. Conduct internal assessments to find out what tools are currently being used that may qualify as an AEDT
  2. Build an inventory and tracking plan for identified AEDTs, including where and how they’re used
  3. Monitor and track AEDT performance, including data inputs and outcomes, to test accuracy (see the selection-rate sketch after this list)
  4. Develop policies and procedures for storing demographic and selection data needed for AEDT audits
  5. Create a compliance team of HR leaders, legal advisors and tech representatives to prepare or adapt compliance disclosures 
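
On point 3, the core calculation behind an AEDT bias audit is comparing selection rates across demographic categories and deriving impact ratios (each category’s selection rate divided by the highest category’s rate). The sketch below shows that arithmetic in minimal form; the category labels and sample records are hypothetical, and an actual audit must follow the DCWP’s published rules and be performed by an independent auditor.

```python
from collections import defaultdict

# Hypothetical applicant records: (demographic category, selected?)
records = [
    ("Category A", True), ("Category A", False), ("Category A", True),
    ("Category B", True), ("Category B", False), ("Category B", False),
]

selected = defaultdict(int)
totals = defaultdict(int)
for category, was_selected in records:
    totals[category] += 1
    selected[category] += int(was_selected)

# Selection rate per category, then impact ratio relative to the
# most-selected category -- the comparison a bias audit reports.
rates = {c: selected[c] / totals[c] for c in totals}
highest = max(rates.values())
impact_ratios = {c: rate / highest for c, rate in rates.items()}

for category, rate in rates.items():
    print(f"{category}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratios[category]:.2f}")
```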

As stewards of Responsible AI, we at retrain.ai are keenly aware that the issues raised by Local Law #144 directly impact the talent intelligence space. Automated, accelerated, bias-free hiring processes powered by innovative technology are key to growing and supporting a diverse and inclusive workforce, and Responsible AI is the key to reaching DEI goals while remaining compliant.

If your organization is considering integrating an AI-driven solution into your HR tech stack, be sure to check out our Buyer’s Guide to Talent Intelligence to learn more about what to look for, what questions to ask, and how to engage your teams in the process. 

 

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce effectively to hire the right people, keep them longer and cultivate a successful skills-based organization. retrain.ai fuels Talent Acquisition, Talent Management and Skills Architecture all in one, data-driven solution. To see it in action, request a demo.

Event Recap: Responsible HR Forum 2023 presented by retrain.ai

There’s something incredible that happens when thought leaders and knowledge seekers gather to explore a critical topic. Such was the vibe at the first-ever Responsible HR Forum presented by retrain.ai. Below, find a brief overview of the day’s sessions, which you can now access as podcast or vidcast recordings.


Keynote: EEOC Commissioner Keith Sonderling

Starting off the day, keynote speaker Commissioner Keith Sonderling of the EEOC shared insights on the expansion of Responsible AI governance across the U.S., emphasizing that current regulations put the onus on businesses using AI systems to ensure they generate fair end results, not on the makers of AI systems.

Watch the vidcast | Listen to the podcast

 

Ready or Not, Regulations Are Coming

Talk of Responsible AI continued into the first panel discussion, where Commissioner Sonderling was joined by Scott Loughlin of Hogan Lovells, Rob Szyba of Seyfarth Shaw and Niloy Ray of Littler to discuss the new AI Audit Law in New York City, the far-reaching implications of seemingly local regulations, and how the European Union’s approach to AI governance differs from the U.S.

Watch the vidcast | Listen to the podcast


The Paradox of the HR Mission: Creating a Multidimensional View of Talent

In conversation with retrain.ai’s Amy DeCicco, Dr. Anna Tavis of the Human Capital Management Department at New York University and Dr. Yustina Saleh from The Burning Glass Institute posed provocative questions, encouraging attendees to consider whether empathy is truly a skill or a trait, and how HR leaders can tell from a skills profile whether a candidate will be able to do the job at hand.

Watch the vidcast | Listen to the podcast


Becoming a Skills-Based Organization: More Than a Trend?

With more enterprises talking about transforming to an SBO model, Dr. Sandra Loughlin of EPAM Systems shared lessons learned from her company’s transformation, while Heidi Ramirez-Perloff discussed The Estee Lauder Company’s exploration into SBO strategy. Urmi Majithia of Atlassian delved into deploying technology to help overcome the challenges of becoming an SBO, and Ben Eubanks of Lighthouse Research & Advisory broke the larger SBO concept down to what it means for individual employees and hiring managers.

Watch the vidcast | Listen to the podcast


The Hidden Architecture of a Skills-Based Organization

Following the panel discussion, Dr. Loughlin sat down for a one-on-one with retrain.ai CEO Dr. Shay David for a deeper look at EPAM’s experience developing a thriving SBO strategy, sharing the benefits, pitfalls and lessons learned along the way.

Watch the vidcast | Listen to the podcast

 

Can Innovation and Regulation Co-Exist? How ChatGPT Sparked the Conversation

No discussion around Responsible HR would be complete without an exploration of the huge impact ChatGPT and other generative AI solutions are having on the tech space.  Leading a fascinating discussion on the topic were Yuying Chen-Wynn of Wittingly Ventures and Art Kleiner of Kleiner Powell International, who examined the potential of generative AI to greatly improve business systems, as well as the ethical AI use questions that remain in the midst of growing regulation.

Watch the vidcast | Listen to the podcast


Continuing the Conversation: The Responsible HR Council

To conclude the Responsible HR Forum, retrain.ai announced the formation of our Responsible HR Council. Like the Forum, our Council will involve experts from academia, law, enterprise, government and nonprofit sectors. We’ll meet quarterly to get up to speed on new AI legislation, new AI technologies, and the melding of the two within Responsible HR practices. Check back for details soon! 

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce effectively to lower attrition and win the war for talent amid the Great Resignation. retrain.ai fuels Talent Acquisition, Talent Management and Skills Architecture, all in one data-driven solution. To see it in action, request a demo.

The 5 Pillars of Responsible AI

Beginning in April 2023, NYC employers—and all organizations hiring and doing business in NYC—will be subject to one of the most stringent regulations governing AI to date. We’ve written extensively about the evolution of Local Law #144, which prohibits employers from using Automated Employment Decision Tools (AEDT) in hiring and promotion decisions unless they’ve taken affirmative measures. Specifically, employers using AEDTs in hiring must have them independently audited and must notify candidates in advance of their use. 

Why is this important?

As AI becomes more embedded in HR systems, enterprise leaders face increased responsibility to ensure their solutions use Responsible AI to mitigate unintended bias risk. 

What exactly makes AI responsible?

Responsible AI uses specific methodologies that continuously test for bias against personal characteristics and eliminate information that can introduce unintended bias. 

In all, there are 5 pillars of Responsible AI:

 

  • Explainability and Interpretability – AI machine learning outcomes, as well as the methodology that produces them, are explainable in plain business terms. Platform users have visibility into the external and internal data being used, how the platform structures that data, and how it delivers outcomes.
  • Fairness algorithms – AI machine learning models mitigate unwanted bias by focusing on role requirements, skills maps and dynamic employee profiles while masking demographic and other information that could introduce bias (a simple masking sketch follows this list).
  • Robustness – Data used to test bias is expansive enough to accurately represent a large data pool while being granular enough to provide accurate, detailed results.
  • Data Quality and Rights – The AI system complies with data privacy regulations, offers users transparency around the proper sourcing and usage of data, and avoids using data beyond its intended and stated use.
  • Accountability – AI systems meet rigorous accountability standards for proper functioning, responsible methodology and outcomes, and regular compliance testing. 
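
To make the fairness pillar concrete, the snippet below sketches one way demographic fields could be stripped from a candidate profile before it reaches a scoring model, so that ranking rests on skills and role requirements alone. The field names and sample data are hypothetical; this is an illustration, not a description of retrain.ai’s actual implementation.

```python
# Fields that can introduce unintended bias; masked before scoring.
# These field names are illustrative, not a real schema.
MASKED_FIELDS = {"name", "gender", "age", "ethnicity", "address", "photo_url"}

def mask_profile(profile: dict) -> dict:
    """Return a copy of a candidate profile with demographic fields removed."""
    return {key: value for key, value in profile.items() if key not in MASKED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "age": 42,
    "skills": ["SQL", "workforce planning", "Python"],
    "years_experience": 12,
}

print(mask_profile(candidate))
# {'skills': ['SQL', 'workforce planning', 'Python'], 'years_experience': 12}
```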

In addition to building our Talent Intelligence Platform on Responsible AI from the ground up, retrain.ai exemplifies a larger overall commitment to innovation built on Responsible AI. As such, we work with the Responsible Artificial Intelligence Institute (RAII), a leading nonprofit organization building tangible governance tools for trustworthy, safe, and fair artificial intelligence. To learn more, visit our Responsible AI Hub.

 

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce effectively to lower attrition and win the war for talent amid the Great Resignation, all in one data-driven solution.

To see it in action, request a demo.

Update: Responsible AI and the NYC Audit Law Pushed to Q2

UPDATE: The Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on April 15, will be delayed until May 6, 2023.

On Monday, December 12, 2022, the New York City Department of Consumer & Worker Protection (“DCWP”) announced that the Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on January 1, would be delayed until April 15, 2023.

Created to ensure organizations using automated / AI-based hiring tools proactively protect against potential or unintended bias in the processing of candidate information or hiring decisions, the law requires organizations using such tools to comply with mandatory independent audits of AI systems and transparency about their use with candidates. With only months to go, this means the time for enterprises to evaluate their systems for ethical, Responsible AI is now. 

Learn how this law impacts HR Leaders everywhere, not just in NYC >>

Despite its designation as a local law, HR leaders everywhere must remain engaged in tracking its evolution. New York City is the epicenter of the business world; if an enterprise operates in NYC and has employees or is hiring employees there, this regulation applies to it.

So why the delay? 

The New York City Department of Consumer and Worker Protection (DCWP) is overseeing the rollout of the law. They say the delay is due to the high volume of public comments generated by a public hearing held in November. A quick review of the department’s website shows well over 100 pages of feedback and inquiries stemming from that hearing, including comments submitted by retrain.ai. The DCWP aims to review all input before planning a second hearing.

What sort of questions came up? 

Numerous points were raised, ranging from what specifically defines an AEDT to how regulation can remain effective without stifling innovation. A few specifics included:

  • What sort of qualifications and certifications will be required to select and authorize an independent auditor? 
  • How will data size be figured into the equation, given that some businesses won’t possess the robust data set necessary to accurately determine bias?
  • What options are available to candidates who opt out of the AI-based systems, as is their choice? How will they be assured equal consideration in the hiring process?

A second public hearing will be planned for the first quarter of 2023. In the meantime, we’ll keep you updated in our Responsible AI Hub, where you can also learn what constitutes unbiased, Responsible AI, what to look for in an HR Tech vendor to ensure compliance, and how retrain.ai uses the five pillars of Responsible AI to support the growth of a skilled, diverse workforce.  

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

Additional resources

  • Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar
  • Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar

Q&A: The NYC AI Audit Law

UPDATE: Local Law 144 will now go into effect on May 6, 2023.

For organizations using AI in their hiring processes, prepping for 2023 means evaluating compliance with a new law that will take effect in New York City in January but will impact millions of HR leaders and job candidates everywhere.

Local Law 144, or the NYC AI Audit Law, issues updated guidelines for employers using AI in hiring. Part of a quickly growing practice, AI tools are in high demand for companies looking to speed up preliminary candidate screening and enable efficiency in the hiring process. To avoid introducing unintended bias into those actions, however, the AI must be responsible, meaning it is built on fully explainable machine learning systems structured to avoid biases that could skew results unfairly.

With only weeks to go until the NYC AI Audit Law kicks in, there are still plenty of unanswered questions. In this vidcast, retrain.ai Co-founder and COO Isabelle Bichler-Eliasaf speaks with Rob Szyba, partner and employment attorney at Seyfarth Shaw, about aspects of the law that aren’t quite clear yet, including:

  • What specifically defines an automated employment decision tool (AEDT)? How much weight is given to the AEDT as one part of a multi-level hiring process?  [Timestamp: 5:08]
  • Who is performing the mandatory AI bias audits required by the law?  [Timestamp: 10:01]
  • What accommodations are given to candidates who opt out of AEDT interview steps?  [Timestamp: 11:28]
  • How are candidates who opt out assured equal consideration?  [Timestamp: 12:38]
  • What happens to organizations that are new to AI use in hiring and don’t necessarily have enough data to test their system by the time the law takes effect? Will they be considered in default?  [Timestamp: 15:32]
  • The law applies in New York City, but what does that mean for businesses based outside of NYC who have offices or even remote workers based in the City?  [Timestamp: 19:02]
  • How can those of us in the AI space convey the importance of ensuring that regulation helps the process without stifling innovation? That it protects AI’s ability to enhance the human workforce experience?  [Timestamp: 25:06]

Additional Resources:

Not Headquartered in NYC? The New AI-based Hiring Regulations Will Likely Still Apply to You. Blog post

NYC AI Law Update – 4 Important Things You Need to Know Blog post

A New NYC Law Puts Pressure on Talent Intelligence: Will Your AI Solution Be Ready? Blog post

Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar

Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

Not Headquartered in NYC? The New AI-based Hiring Regulations Will Likely Still Apply to You

UPDATE: Local Law 144 will now go into effect on May 6, 2023.

Beginning on January 1, 2023, companies using AI in their hiring practices in New York City must comply with Local Law #144, the Automated Employment Decision Tool Law (AEDT), which mandates independent audits of AI systems and transparency about their use with candidates, among other specifics. 

At its core, the NYC Law, and the larger EEOC statement that preceded it, aim to ensure that AI and other emerging tools used in hiring and employment decisions don’t introduce or augment bias that can create discriminatory barriers to jobs. You can read more about the details of the law in our earlier blog post.

While some may believe the new regulation is just a niche city law that only applies to enterprises within the boundaries of New York City, impacting a relatively small pool of employers and job candidates, the reality is that its reach goes well beyond the NYC metro area and even the state as a whole.

Who needs to pay attention to the NYC Law?

Pretty much EVERYONE.

New York City is the epicenter of the business world, with many corporate roads running through it. If an enterprise operates any element of its business through NYC and hires staff for that function, the law applies.

An enterprise’s NYC presence doesn’t even need to be that expansive. Organizations using AI in hiring and promotion practices will need to ensure compliance with the new law if:

  • They have any sort of office or presence in NYC
  • They are based elsewhere but have open positions based in NYC
  • They have open remote positions that may attract candidates residing in NYC

But what if a company has only a single NYC employee, working remotely from their apartment in the City? Or if a global company has just one position to hire in Manhattan–which may be filled by a candidate living in New Jersey or Connecticut? 

It ALL counts. And reaches just about EVERYWHERE.

The geographic reach of the NYC law stretches far beyond the U.S. as well. New York City is a major hub for companies based all over the world, and global companies that operate any part of their business there, from a U.S. headquarters to a sales office to a warehouse team and everything in between, fall under the requirements of the new legislation.

Strategize now for compliance next year.

Add up all the scenarios and you’ve got a massive number of companies that will be under the microscope come January. In today’s competitive landscape, stopping to retrofit HR systems for compliance means a loss of momentum, and so does accommodating multiple solutions across geographies or business functions.

If you’re not sure whether your HR systems are using Responsible, unbiased AI, now is the time to find a partner who can integrate with your HR tech stack, forming a unified system of intelligence that actively targets and eliminates unintended bias.

The retrain.ai Talent Intelligence Platform is built on the five pillars of Responsible AI to provide our customers with a transparent and bias-audited system. Our Talent Acquisition and Talent Management solutions help HR leaders hire faster and retain longer, while actively supporting a skilled and diverse workforce. 

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

 

Additional resources

  • Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar
  • Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar