At HR Tech 2023, retrain.ai Co-founder and COO Isabelle Bichler-Eliasaf sat down with Dan Riley, co-founder of RADICL, to talk about the surge in Generative AI solutions in the HR space and the importance of its ethical, responsible use. Below are excerpts from their conversation; the full recording can be viewed here.
DR: You brought up responsible and ethical AI. How are we doing? Not just retrain.ai, but the industry in general. Are we getting there?
IBE: This year, everybody is talking about AI. Specifically, everybody’s talking about generative AI, and ChatGPT was a great demonstration of AI’s amazing abilities. But it also showed the pitfalls: it can be erroneous, biased, very generic and not stable enough. Responsible AI is all about that: putting safeguards on the AI. It’s just a tool, right? So you need to use it wisely, with the right safeguards.
Responsible AI principles span from explainability and bias reduction to consent, embedded privacy rights and so forth, and that’s what we’ve been doing at retrain.ai from the get-go. This is something that’s very important to me; I’ve also explored the risks of AI as part of my research. So now I’m happy to see that a lot of people are thinking about it and starting to do something about it.
DR: Too often, we either blindly trust AI or we blindly distrust it. But it can’t be a binary conversation. So how do we find that middle ground? How do we challenge it and use it for good? What are some of the things retrain.ai is doing to make sure that happens?
IBE: It’s about design, development and deployment. It’s about the safety we build into the technology: first of all, understanding how the data needs to be distributed. You have to have representation for different protected classes, for example, to prevent biases. You also have to constantly measure the output and understand whether it’s having an adverse impact on certain protected classes – gender, age, ethnicity and so forth. Those safeguards must be in place all the time.
There’s also a lot of regulation emerging now to enforce that. Local Law 144 in New York City actually mandates that companies show and prove their output isn’t biased. Beyond bias and discrimination, it’s also about explainability. With our product, we explain why a person is a good fit for a position based on their skills. It’s not a black box; the tool has to be transparent and explainable.
DR: So we’re here at HR Tech, where, for the most part, if you talk to any vendor, they’re going to talk about what they’re doing with AI. What’s your advice for the industry in general?
IBE: I think you first need to really understand the problem you’re solving. AI is a tool, so it’s not just about saying, “Hey, I want to bring in AI, I want to bring efficiency.” What is the pain point? What is the problem you’re solving? What’s the use case? And then, do you have the right tool for it? Once you have that, you’re okay. But adopting AI across the board just because it’s trendy, or because of the promise of efficiency or productivity, isn’t enough. You need to know the problem you’re solving. And depending on the use case, you need technology with deep AI, not just augmentation and chatbots. It really comes down to the data that’s used, the algorithms and how the AI generates its output. You need to use really advanced models based on LLMs, with safeguards in place, to get the results you want.
retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce effectively to hire the right people, keep them longer and cultivate a successful skills-based organization. retrain.ai fuels Talent Acquisition, Talent Management and Skills Architecture all in one data-driven solution. To see it in action, request a demo.
Starting off the day, keynote speaker Commissioner Keith Sonderling of the EEOC shared insights on the expansion of Responsible AI governance across the U.S., emphasizing that current regulations put the onus on businesses using AI systems to ensure they generate fair end results.
Talk of Responsible AI continued into the first panel.
In conversation with retrain.ai’s Amy DeCicco, Dr. Anna Tavis of the Human Capital Management Department at New York University and Dr. Yustina Saleh of The Burning Glass Institute posed provocative questions, encouraging attendees to consider whether empathy is truly a skill or a trait, and how HR leaders can tell from a skills profile whether a candidate will be able to do the job.
With more enterprises talking about transforming to an SBO model, Dr. Sandra Loughlin of EPAM Systems shared lessons learned from her company’s transformation, while Heidi Ramirez-Perloff discussed The Estee Lauder Company’s exploration into SBO strategy. Urmi Majithia of Atlassian delved into executing technology to help overcome the challenges of becoming an SBO, and Ben Eubanks of Lighthouse Research &
Following the panel discussion, Dr. Loughlin sat down for a one-on-one with retrain.ai CEO Dr. Shay David to go more in depth into EPAM’s experience developing a thriving SBO strategy, sharing benefits, pitfalls and lessons learned along the way.
No discussion around Responsible HR would be complete without an exploration of the huge impact of ChatGPT and other generative AI tools.
