The 5 Pillars of Responsible AI

Isabelle Bichler
Reading Time: 2 minutes
Responsible AI is the key to hiring a diverse workforce

Beginning in April 2023, NYC employers—and all organizations hiring and doing business in NYC—will be subject to one of the most stringent regulations governing AI to date. We’ve written extensively about the evolution of Local Law #144, which prohibits employers from using Automated Employment Decision Tools (AEDT) in hiring and promotion decisions unless they’ve taken affirmative measures. Specifically, employers using AEDTs in hiring must have them independently audited and must notify candidates in advance of their use. 
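To make the audit requirement a little more concrete, below is a minimal sketch in Python, with entirely hypothetical numbers and group names, of one check commonly reported in bias audits of hiring tools: the impact ratio, i.e. each group's selection rate divided by the selection rate of the most-selected group. It is an illustration only, not the methodology prescribed by the law or used by any particular auditor.

```python
# Minimal sketch of an impact-ratio check, the kind of selection-rate
# comparison commonly reported in bias audits of hiring tools.
# All numbers and group names below are hypothetical.

selections = {
    # group: (candidates advanced by the tool, total candidates in group)
    "Group A": (120, 400),
    "Group B": (45, 200),
    "Group C": (30, 150),
}

selection_rates = {
    group: advanced / total for group, (advanced, total) in selections.items()
}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    # An impact ratio of 1.0 means parity with the most-selected group;
    # values well below 1.0 flag a potential adverse impact to investigate.
    impact_ratio = rate / highest_rate
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}")
```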

Why is this important?

As AI becomes more embedded in HR systems, enterprise leaders bear growing responsibility for ensuring that the solutions they deploy use Responsible AI to mitigate the risk of unintended bias.

What exactly makes AI responsible?

Responsible AI uses specific methodologies that continuously test for bias against personal characteristics and eliminate information that can introduce unintended bias. 

In all, there are 5 pillars of Responsible AI:


  • Explainability and Interpretability – AI machine learning outcomes, and the methodology that produces them, are explainable in plain, easily understandable business language. Platform users have visibility into the external and internal data being used, how the platform structures that data, and how it delivers outcomes.
  • Fairness Algorithms – AI machine learning models mitigate unwanted bias by focusing on role requirements, skills maps, and dynamic employee profiles while masking demographic and other information that could introduce bias (see the sketch following this list).
  • Robustness – Data used to test for bias is expansive enough to accurately represent a large data pool, yet granular enough to provide accurate, detailed results.
  • Data Quality and Rights – AI systems comply with data privacy regulations, offer users transparency into how data is sourced and used, and avoid using data beyond its intended and stated purpose.
  • Accountability – AI systems meet rigorous accountability standards for proper functioning, responsible methodology and outcomes, and regular compliance testing. 
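
To make the fairness pillar concrete, here is a minimal sketch, assuming a pandas DataFrame of candidate records with hypothetical column names, of how demographic fields and likely proxies can be masked so a matching model only sees job-relevant signals such as skills and experience. It illustrates the general technique, not retrain.ai's actual implementation.

```python
import pandas as pd

# Hypothetical candidate records; column names are illustrative only.
candidates = pd.DataFrame({
    "years_experience": [3, 7, 5],
    "skills":           ["python;sql", "java;aws", "sql;excel"],
    "gender":           ["F", "M", "F"],               # demographic attribute
    "age":              [29, 41, 35],                  # demographic attribute
    "zip_code":         ["10001", "11201", "10457"],   # potential demographic proxy
})

# Fields the matching model is never allowed to see.
MASKED_COLUMNS = ["gender", "age", "zip_code"]

# Only job-relevant signals (skills, experience) reach the model.
model_features = candidates.drop(columns=MASKED_COLUMNS)
print(model_features)
```

In practice, the masked fields can be retained separately so they remain available for bias testing, as described under the Robustness pillar, while never being used as model inputs.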

In addition to building our Talent Intelligence Platform on Responsible AI from the ground up, retrain.ai is committed to broader innovation built on Responsible AI. As such, we work with the Responsible Artificial Intelligence Institute (RAII), a leading nonprofit organization building tangible governance tools for trustworthy, safe, and fair artificial intelligence. To learn more, visit our Responsible AI Hub.


retrain.ai is a talent intelligence platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry's largest skills taxonomy, it enables enterprises to unlock talent insights, optimize their workforce, lower attrition, win the war for talent, and navigate the Great Resignation with one data-driven solution.

To see it in action, request a demo.