The Responsible HR Forum Presented by retrain.ai

The conversation around responsible HR innovation and best practices has been growing steadily as enterprises look to ethically pursue equitable, diverse workforce growth. Its importance has only increased with New York City's AI audit law (Local Law 144) taking effect in July 2023.

Up to this point, much of the conversation has taken place disparately, with HR leaders, technologists and regulators operating in silos.

We are thrilled to announce that this May, we are hosting a first-of-its-kind event focused entirely on Responsible HR. We’re bringing together key stakeholders to form a community of HR leaders, technologists, educators, advocates and regulators to collaborate on our collective journey toward designing and adopting Responsible HR practices. 

The Responsible HR Forum

presented by retrain.ai

May 17, 2023

New York City

This full day of exploration and discussion will be kicked off by our esteemed keynote speaker, Commissioner Keith Sonderling of the Equal Employment Opportunity Commission (EEOC).

Keith Sonderling, Vice Chair and Commissioner, EEOC

Commissioner Sonderling will discuss increasing Responsible AI regulation as well as what’s on the legislative horizon for enterprises and HR leaders implementing AI-based tech solutions. 

You’re invited to join us as we bring together CHROs, regulators, legal experts, analysts, academics, nonprofits and more for a day of invigorating discussions, shared ideas and key strategies to prepare for this next wave in the future of work.

 

retrain.ai is a talent intelligence platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce to lower attrition and win the war for talent amid the Great Resignation, all in one data-driven solution. To see it in action, request a demo.

White Box vs. Black Box HR Solutions: What’s the Difference?

As AI becomes increasingly embedded in HR systems, enterprise leaders face growing accountability from regulators, the C-suite, applicants, and more to ensure their solutions are ethical, responsible, and built to mitigate unintended bias. As a result, Responsible AI is becoming a business mandate, with growing momentum behind laws requiring audits to verify that the benchmarks of Responsible AI are in place.

One key component of Responsible AI is explainability. Users of an AI-based system should understand how their AI gathers, organizes and interprets data, as well as how the platform produces outcomes.

White Box = Explainability

The level of transparency needed to fully explain an AI solution can only be found in what is referred to as a white box solution. With this approach, a full end-to-end view of an AI system’s functionality enables users to see the what of the system (its data output) while also being able to ask the why (the methodology behind the results).

Such interpretability also allows data scientists and analysts to test the design and internal structure of an AI system in order to validate inputs and outputs, check for errors or inconsistencies, and optimize functionality accordingly.
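To make the contrast concrete, here is a minimal sketch of what white-box explainability can look like in code. This is an invented illustration, not retrain.ai’s actual model; the feature names and weights are assumptions chosen for the example.

```python
# Illustrative white-box scorer. The feature names and weights are
# invented for this sketch; a real system would learn them from data.
WEIGHTS = {
    "years_experience": 0.4,
    "skills_match": 1.2,
    "certifications": 0.6,
}

def score(candidate: dict) -> float:
    """The 'what': a transparent weighted sum over known features."""
    return sum(WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS)

def explain(candidate: dict) -> dict:
    """The 'why': per-feature contributions that add up to the score."""
    return {f: WEIGHTS[f] * candidate.get(f, 0.0) for f in WEIGHTS}

candidate = {"years_experience": 5, "skills_match": 0.8, "certifications": 2}
total = score(candidate)
breakdown = explain(candidate)  # a reviewer can audit each contribution
```

Because every contribution is visible, an analyst can trace any surprising result back to a specific feature and weight, which is exactly the visibility a black box withholds.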

What White Box Means for HR Leaders

A white box AI solution empowers users to question processes and challenge results, which is especially critical when using such technology within HR functions. Armed with a thorough understanding of their AI solution, an HR leader can be sure their system is performing critical functions, such as mitigating bias risk within its machine learning models. Assured of such mitigation, the organization can stand behind hiring practices that fully support their diversity and inclusion goals.
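One concrete capability this transparency enables is running bias checks directly on system outcomes. As a hedged illustration, a selection-rate impact check might look like the following; the group labels, counts, and 0.8 threshold follow the common “four-fifths rule” heuristic used in adverse-impact analysis, not any specific vendor’s audit.

```python
# Sketch of an adverse-impact ("four-fifths rule") check: compare each
# group's selection rate to the highest group's rate. Group names,
# counts, and the 0.8 threshold are illustrative assumptions.
def impact_ratios(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {"group_a": (40, 100), "group_b": (24, 100)}
ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]  # groups to review
```

A ratio below 0.8 does not prove bias, but it flags a disparity that a white-box system lets analysts investigate down to the model’s inputs and weights.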

Black Box = Blind Trust

Conversely, there are AI systems whose explanations are too complex to understand, or aren’t available at all. These are often referred to as black box solutions. In certain settings, black box AI can be useful; the algorithmic complexity necessary in fraud prevention systems, for example, is not explainable in simple terms.

But within HR functions, a black box system doesn’t allow users to understand how the AI arrives at its conclusions around hiring decision support. As a result, there is no way to detect errors within its processes, including possible bias permeating the algorithms.

What Black Box Means for HR Leaders

For these reasons, black box solutions represent a significant risk to HR innovators. Broadly, they demand blind trust. More specifically, by masking information that can derail DEI hiring practices, they render an AI solution non-compliant in the face of increasing Responsible AI regulation.

retrain.ai and Responsible AI

In providing end-to-end transparency for platform users, retrain.ai is a white box solution. In choosing this methodology, retrain.ai supports the rights of enterprises to know and understand how their HR platforms deliver critical information.

As part of our larger commitment to leading Responsible AI innovation in the HR Tech space, retrain.ai works with the Responsible Artificial Intelligence Institute (RAII), a leading nonprofit organization building tangible governance tools for trustworthy, safe, and fair artificial intelligence. To see the retrain.ai difference, book a demo.
