The 5 Pillars of Responsible AI

Beginning in April 2023, NYC employers—and all organizations hiring and doing business in NYC—will be subject to one of the most stringent regulations governing AI to date. We’ve written extensively about the evolution of Local Law 144, which prohibits employers from using Automated Employment Decision Tools (AEDTs) in hiring and promotion decisions unless they’ve taken affirmative measures. Specifically, employers using AEDTs in hiring must have the tools independently audited and must notify candidates in advance of their use.

Why is this important?

As AI becomes more embedded in HR systems, enterprise leaders face increased responsibility to ensure their solutions use Responsible AI to mitigate unintended bias risk. 

What exactly makes AI responsible?

Responsible AI uses specific methodologies that continuously test for bias against personal characteristics and eliminate information that can introduce unintended bias. 

In all, there are 5 pillars of Responsible AI:

  • Explainability and Interpretability – AI machine learning outcomes, as well as the methodology that produces them, are explainable in easily understandable business-speak. Platform users have visibility into the external and internal data being utilized, how the platform structures that data, and how outcomes are delivered.
  • Fairness algorithms – AI machine learning models mitigate unwanted bias by focusing on role requirements, skills maps and dynamic employee profiles while masking demographic and other information that can potentially introduce bias (a minimal sketch of this masking approach follows this list).
  • Robustness – Data used to test for bias is expansive enough to represent a large population while being granular enough to provide accurate, detailed results.
  • Data Quality and Rights – The AI system complies with data privacy regulations, offers transparency around how data is sourced and used, and avoids using data beyond its intended and stated purpose.
  • Accountability – AI systems meet rigorous accountability standards for proper functioning, responsible methodology and outcomes, and regular compliance testing.
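
As a concrete illustration of the Fairness pillar, here is a minimal Python sketch of how demographic fields might be masked from a candidate profile before a skills-based score is computed. The field names, profile structure, and scoring logic are simplified assumptions for illustration only, not retrain.ai’s implementation.

```python
# Hypothetical illustration of fairness-by-masking: strip potentially biasing
# fields from a candidate profile, then score purely on skills overlap.

PROTECTED_FIELDS = {"name", "age", "gender", "ethnicity", "address", "photo_url"}

def mask_profile(profile: dict) -> dict:
    """Return a copy of the profile with potentially biasing fields removed."""
    return {key: value for key, value in profile.items() if key not in PROTECTED_FIELDS}

def skills_match_score(profile: dict, required_skills: set) -> float:
    """Score a candidate solely on overlap between their skills and the role's requirements."""
    candidate_skills = set(profile.get("skills", []))
    if not required_skills:
        return 0.0
    return len(candidate_skills & required_skills) / len(required_skills)

candidate = {
    "name": "Jane Doe",       # masked before scoring
    "gender": "female",       # masked before scoring
    "skills": ["python", "sql", "data modeling"],
}
print(skills_match_score(mask_profile(candidate), {"python", "sql", "statistics"}))  # ~0.67
```

A production system is of course far more sophisticated, but the principle is the same: the model only ever sees skills-relevant signals.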

Beyond building our Talent Intelligence Platform on Responsible AI from the ground up, retrain.ai is committed to innovation grounded in Responsible AI across the industry. As such, we work with the Responsible Artificial Intelligence Institute (RAII), a leading nonprofit organization building tangible governance tools for trustworthy, safe, and fair artificial intelligence. To learn more, visit our Responsible AI Hub.

retrain.ai is a talent intelligence platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce to lower attrition and win the war for talent amid the Great Resignation, all in one data-driven solution.

To see it in action, request a demo.

ChatGPT Is Changing the AI Game, But Enterprises Need More

Chances are you’re one of the millions of people who have played with ChatGPT, the game-changing generative AI assistive technology released by OpenAI. Designed to interact conversationally, the advanced chatbot can engage in dialogue with a user to provide answers, respond to follow-up questions, correct mistakes, and adjust tone and voice when provided with direction. 

A consumer-focused tool, ChatGPT aptly showcases the groundbreaking ability of generative AI to use machine learning to index retrievable content and mimic writing styles. As such, it has prompted a conversation around its possible business uses, garnering opinions from those who see great potential–and those who fear for their jobs. Some even suggest that we are nearing the singularity, or at least seeing for the first time machines that can pass the (in)famous Turing test.

>> Book a demo to see retrain.ai’s generative AI in action

As leaders in the AI space, we see ChatGPT as an example of a set of tools with the potential to transform business processes. Yet it has notable limitations when viewed through the lens of an enterprise-level solution. There are four main areas in which this differentiation is most apparent:

  1. AI-driven technology designed for business incorporates features optimized for a particular industry. retrain.ai, for example, was built from the ground up as a specialized solution for the HR space. As such, our technology expands beyond a ChatGPT-level machine learning model to one which can organize, analyze and structure data precisely enough to inform critical business decisions. We anticipate that in each industry, vertical-specific leaders will emerge who build AI models that are based on industry know-how and language, and are tailored toward specific tasks.
  2. Explainability is another critical feature of specialized AI technologies that you won’t find in a general-purpose chatbot platform. Explainable solutions are referred to as white-box technology, meaning machine learning outcomes, and the methodology that produces them, can be explained using general business-speak. For enterprises trusting generative AI systems with critical decision assistance, this means they have a clear enough understanding to question or challenge the platform’s output (a minimal sketch of what an explainable score can look like follows this list).
  3. Without white-box explainability, an AI system lacks a key component of Responsible AI, which is a non-negotiable design element when it comes to bias prevention in hiring processes. Only by using Responsible AI can an enterprise ensure candidates are being screened solely on skills, eliminating information that can introduce unintended bias. Increasing regulation will also hold enterprises accountable for making sure they are using Responsible AI in hiring practices.
  4. Enterprise-level solutions are implemented to directly impact business performance. They come with contractual assurances like Service Level Agreements (SLAs) to outline vendor expectations and set metrics by which the technology’s effectiveness will be measured. Open platforms like ChatGPT don’t offer performance metrics or customized services, leaving adopters with no recourse should something go wrong. The same is true of data sovereignty and compliance with privacy standards like GDPR. We anticipate that big vendors like Microsoft and Google will soon offer enterprise-grade service assurances around consumer tools like ChatGPT (or Google’s LaMDA), but until then, enterprises cannot rely on consumer tools for critical functions.
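
To make the white-box idea in point 2 more concrete, here is a minimal, hypothetical sketch of an explainable skills match: the score is a simple weighted overlap, so every point in the output can be traced back to a named skill and its weight. The skill names and weights are invented for illustration and do not represent retrain.ai’s model.

```python
# Hypothetical white-box scoring: the output always carries a human-readable
# breakdown of which skills contributed to the score and by how much.

from dataclasses import dataclass

@dataclass
class Explanation:
    score: float
    contributions: dict  # skill -> share of the score it contributed

def explain_match(candidate_skills: set, role_weights: dict) -> Explanation:
    """Weighted skills match where each matched skill's contribution is reported."""
    total_weight = sum(role_weights.values())
    contributions = {
        skill: weight / total_weight
        for skill, weight in role_weights.items()
        if skill in candidate_skills
    }
    return Explanation(score=sum(contributions.values()), contributions=contributions)

result = explain_match(
    candidate_skills={"python", "sql", "stakeholder management"},
    role_weights={"python": 3.0, "sql": 2.0, "machine learning": 5.0},
)
print(result.score)          # 0.5
print(result.contributions)  # {'python': 0.3, 'sql': 0.2} -- the "why" behind the score
```

A reviewer, or a candidate, can challenge any individual line in that breakdown, which is exactly the kind of scrutiny a black-box chatbot response cannot support.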

The retrain.ai Talent Intelligence Platform uses generative AI with language processing technology similar to ChatGPT’s, but expands on the model to provide a fully explainable, enterprise-level solution designed specifically for talent intelligence, complying with SOC 2 and GDPR and offering enterprise-grade SLAs. We’re excited to see how the market continues to develop and how enterprises transform years-old practices with new tools.

>> Book a demo to see retrain.ai’s generative AI in action

To see how the retrain.ai Talent Intelligence Platform fuels your talent acquisition, talent management, job architecture and DEI goals, contact us today.

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills architecture, enterprises unlock talent insights and optimize their workforce to lower attrition and win the war for talent amid the Great Resignation, all in one data-driven solution.

Learn more: book a demo

VIDCAST: Keep Your Best People Longer with Opportunities to Thrive

Of the millions of workers who quit their jobs over the last two years during the Great Resignation, many cited lack of opportunity for advancement as a major factor. Employees saw investment in their professional development as validation that their contributions were valued and rewarded; its absence sent the opposite message.

Today’s HR leaders must strategize how to hang on to their best-fit hires so they become long-term employees. Much of this comes down to providing a vision for future opportunities in the form of roles, projects and gigs that will utilize, challenge and develop a worker’s skills.

In this session, retrain.ai Co-founder and CEO Dr. Shay David and Chief Research Officer Ben Eubanks of Lighthouse Research discuss how organizations can build a mutually beneficial path forward for valued talent. Their conversation covers:

  • How HR tech can counter today’s quit rates 
  • The connection between internal opportunities and worker retention
  • What we can learn from Great Resignation data
  • The DRIP Problem: Data Rich, Information Poor
  • Implications of the new employer-employee dynamic
  • How AI enhances the human experience at work
  • The importance of Responsible AI and explainability
  • Tips for HR leaders new to using AI-driven tech
  • Talent scarcity as a business problem, not just an HR problem
  • How HR teams and hiring managers can align to optimize Responsible AI solutions


VIDCAST: Sourcing and Screening at a Time of Talent Scarcity

In the wake of the Great Resignation, the war for skilled workers rages on, with more open roles than there are job seekers to fill them. Candidates are willing to wait it out to find best-fit roles, demanding (and receiving) higher compensation, more flexibility, community, and an inclusive culture before accepting a full-time job at a traditional employer.

Meanwhile, an open role represents significant costs for an enterprise, in both lost productivity and direct financial losses, and those costs compound with each passing day. To avoid such pitfalls, a long-term strategy is needed to navigate today’s talent shortage.

In the short term, there are immediate measures HR leaders can put in place to get the right people in the right places quickly. These include sourcing and attracting talent through creative recruitment, broadening the talent pool to include active and passive candidates, looking internally for employee mobility opportunities, and focusing on skills-based hiring within all of these channels.

In this session, retrain.ai Co-founder and COO Isabelle Bichler-Eliasaf and Chief Research Officer Ben Eubanks of Lighthouse Research discuss how AI can invigorate and expedite the sourcing and screening process to help HR leaders home in on best-fit, diverse candidates faster. Their conversation covers:

  • The biggest hiring challenges today and how HR teams are managing them
  • What factors have caused today’s talent shortage
  • The importance of career-pathing opportunities in attracting talent and keeping employees engaged
  • How AI and skills-matching can build a talent marketplace to fuel internal mobility
  • How AI can enhance the human experience at work and strengthen DEI goals
  • What constitutes Responsible AI, and how HR tech balances automation and fairness
  • Ben’s 2023 predictions for HR


Update: Responsible AI and the NYC Audit Law Pushed to Q2

UPDATE: The Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on April 15, has been delayed until May 6, 2023.

On Monday, December 12, 2022, New York City’s Department of Consumer & Worker Protection (“DCWP”) announced that the Automated Employment Decision Tool (AEDT) Law (Local Law 144), slated to take effect in New York City on January 1, would be delayed until April 15, 2023.

Created to ensure that organizations using automated, AI-based hiring tools proactively protect against potential or unintended bias in processing candidate information and making hiring decisions, the law requires those organizations to undergo mandatory independent audits of their AI systems and to be transparent with candidates about their use. With only months to go, the time for enterprises to evaluate their systems for ethical, Responsible AI is now.
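
For a sense of the kind of check a bias audit involves, the minimal sketch below computes selection rates and impact ratios by demographic category from hypothetical hiring data. The category labels, counts, and the 0.8 flag are illustrative assumptions; the DCWP’s rules define the exact metrics and reporting an independent auditor must follow.

```python
# Hypothetical bias-audit-style check: selection rates and impact ratios by category.
# Category labels and counts are made up for illustration.

def selection_rates(outcomes: dict) -> dict:
    """outcomes maps category -> (number selected, number of applicants)."""
    return {cat: selected / applicants for cat, (selected, applicants) in outcomes.items()}

def impact_ratios(outcomes: dict) -> dict:
    """Each category's selection rate divided by the highest category's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {cat: rate / best for cat, rate in rates.items()}

audit_sample = {          # (selected, applicants) -- illustrative numbers only
    "category_a": (40, 100),
    "category_b": (25, 100),
}
for category, ratio in impact_ratios(audit_sample).items():
    # 0.8 is the EEOC "four-fifths" rule of thumb for adverse impact, not a threshold
    # set by Local Law 144 itself, which centers on auditing and disclosure.
    flag = "review" if ratio < 0.8 else "ok"
    print(category, round(ratio, 2), flag)
```

Even this toy example shows why data volume matters: with small applicant counts per category, selection rates, and therefore impact ratios, become noisy.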

Learn how this law impacts HR Leaders everywhere, not just in NYC >>

Despite its designation as a local law, HR leaders everywhere must remain engaged in tracking its evolution. New York City is the epicenter of the business world; if an enterprise operates in NYC, has employees there, or is hiring for roles based there, this regulation applies to it.

So why the delay? 

The New York City Department of Consumer and Worker Protection (DCWP) is overseeing the rollout of the law. They say the delay is due to the high volume of public comments generated by a public hearing held in November. A quick review of the department’s website shows well over 100 pages of feedback and inquiries stemming from that hearing, including comments submitted by retrain.ai. The DCWP aims to review all input before planning a second hearing.

What sort of questions came up? 

Numerous points were raised, ranging from what specifically defines an AEDT to how regulation can remain effective without stifling innovation. A few specifics included:

  • What sort of qualifications and certifications will be required to select and authorize an independent auditor? 
  • How will data size be figured into the equation, given that some businesses won’t possess the robust data set necessary to accurately determine bias?
  • What options are available to candidates who opt out of the AI-based systems, as is their choice? How will they be assured equal consideration in the hiring process?

A second public hearing is planned for the first quarter of 2023. In the meantime, we’ll keep you updated in our Responsible AI Hub, where you can also learn what constitutes unbiased, Responsible AI, what to look for in an HR Tech vendor to ensure compliance, and how retrain.ai uses the five pillars of Responsible AI to support the growth of a skilled, diverse workforce.

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

Additional resources

  • Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar
  • Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar

Q&A: The NYC AI Audit Law

UPDATE: Local Law 144 will now go into effect on May 6, 2023.

For organizations using AI in their hiring processes, prepping for 2023 means evaluating compliance with a new law that will take effect in New York City in January but will impact millions of HR leaders and job candidates everywhere.

Local Law 144, or the NYC AI Audit Law, issues updated guidelines for employers using AI in hiring. Part of a quickly growing practice, AI tools are in high demand among companies looking to speed up preliminary candidate screening and make the hiring process more efficient. To avoid introducing unintended bias into those processes, however, the AI must be responsible, meaning fully explainable machine learning systems structured to avoid biases that could skew results unfairly.

With only weeks to go until the NYC AI Audit Law kicks in, there are still plenty of unanswered questions. In this vidcast, retrain.ai Co-founder and COO Isabelle Bichler-Eliasaf speaks with Rob Szyba, partner and employment attorney at Seyfarth Shaw, about aspects of the law that aren’t quite clear yet, including:

  • What specifically defines an automated employment decision tool (AEDT)? How much weight is given to the AEDT as one part of a multi-level hiring process?  [Timestamp: 5:08]
  • Who is performing the mandatory AI bias audits required by the law?  [Timestamp: 10:01]
  • What accommodations are given to candidates who opt out of AEDT interview steps?  [Timestamp: 11:28]
  • How are candidates who opt out assured equal consideration?  [Timestamp: 12:38]
  • What happens to organizations that are new to using AI in hiring and don’t necessarily have enough data to test their system by the time the law takes effect? Will they be considered in default?  [Timestamp: 15:32]
  • The law applies in New York City, but what does that mean for businesses based outside of NYC who have offices or even remote workers based in the City?  [Timestamp: 19:02]
  • How can those of us in the AI space convey the importance of ensuring that regulation helps the process without stifling innovation? That it protects AI’s ability to enhance the human workforce experience?  [Timestamp: 25:06]

Additional Resources:

Not Headquartered in NYC? The New AI-based Hiring Regulations Will Likely Still Apply to You – Blog post

NYC AI Law Update – 4 Important Things You Need to Know – Blog post

A New NYC Law Puts Pressure on Talent Intelligence: Will Your AI Solution Be Ready? – Blog post

Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar

Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

Not Headquartered in NYC? The New AI-based Hiring Regulations Will Likely Still Apply to You

UPDATE: Local Law 144 will now go into effect on May 6, 2023.

Beginning on January 1, 2023, companies using AI in their hiring practices in New York City must comply with Local Law 144, the Automated Employment Decision Tool (AEDT) Law, which mandates independent audits of AI systems and transparency about their use with candidates, among other specifics.

At its core, the NYC law and the larger EEOC statement that preceded it aim to ensure that AI and other emerging tools used in hiring and employment decisions don’t introduce or amplify bias that can create discriminatory barriers to jobs. You can read more about the details of the law in our earlier blog post.

While some may believe the new regulation is just a niche city law that only applies to enterprises within the boundaries of New York City, impacting a relatively small pool of employers and job candidates, the reality is that its reach goes well beyond the NYC metro area and even the state as a whole.

Who needs to pay attention to the NYC Law?

Pretty much EVERYONE.

New York City is the epicenter of the business world, with many corporate roads running through it. If an enterprise operates any element of its business through NYC and hires staff for that function, the law applies.

An enterprise’s NYC footprint doesn’t need to be that expansive, either. Organizations using AI in hiring and promotion practices will need to ensure compliance with the new law if:

  • They have any sort of office or presence in NYC
  • They are based elsewhere but have open positions based in NYC
  • They have open remote positions that may attract candidates residing in NYC

But what if a company has only a single NYC employee, working remotely from their apartment in the City? Or if a global company has just one position to fill in Manhattan, which may end up going to a candidate living in New Jersey or Connecticut?

It ALL counts. And reaches just about EVERYWHERE.

The geographic reach of the NYC law stretches far beyond the U.S. as well. New York City is a major hub for companies based all over the world, and global companies that operate any part of their business there, from a US headquarters to a sales office to a warehouse team and everything in between, fall under the requirements of the new legislation.

Strategize now for compliance next year.

Add up all the scenarios and you’ve got a massive number of companies that will be under the microscope come January. In today’s competitive landscape, stopping to retrofit HR systems for compliance means a loss of momentum, as does accommodating multiple solutions across geographies or business functions.

If you’re not sure whether your HR systems are using Responsible, unbiased AI, now is the time to find a partner who can integrate with your HR tech stack, forming a unified system of intelligence that actively targets and eliminates unintended bias.

The retrain.ai Talent Intelligence Platform is built on the five pillars of Responsible AI to provide our customers with a transparent and bias-audited system. Our Talent Acquisition and Talent Management solutions help HR leaders hire faster and retain longer, while actively supporting a skilled and diverse workforce. 

To experience a personalized walkthrough of how retrain.ai can help you reach your HR goals, visit us here.

Additional resources

  • Responsible AI and the NYC Audit Law: What You Need to Know Before 2023 – On-demand webinar
  • Responsible AI: Why It Matters and What HR Leaders Need to Know – On-demand webinar