Gartner report: Mitigate Bias From AI in HR Technology

The following is a summary of “Mitigate Bias From AI in HR Technology,” a report from Gartner. The full report is available for download here (for Gartner subscribers).


Organizations are rapidly adopting AI in HR, while regulations are struggling to keep up. As part of their HR strategy, HR leaders must promote responsible AI in their applications by mitigating bias that poses risks to talent management, the employee experience, DEI and more.

Key Findings from the Gartner report

  • Fifty-three percent of HR leaders are concerned about potential bias and discrimination from AI. Bias in AI is unavoidable; however, HR leaders can establish best practices that mitigate this bias.
  • Thoroughly vetting HR technologies for bias requires an understanding of business processes. Broad overarching assessments can lead to missed sources of bias, or to inaction due to fear of getting it wrong. The organization should assess each use case individually.
  • AI regulations, including HR-specific measures relevant to bias, are gradually taking effect. Since many HR functions plan to buy AI capabilities built by vendors, HR leaders must monitor technology providers’ regulatory compliance and ethical practices.
  • HR leaders are positioned to take a leading role in advancing practices that bolster openness about the potential impacts of bias from AI applications; 35% of HR leaders recently reported that they expect to lead their organization’s enterprise-wide AI ethics approach.

Gartner Recommendations

HR leaders responsible for technology strategy must:

  • Map possible sources and outputs of bias for each AI use case in HR to assist in flagging areas of risk and monitoring vendors for their commitment to responsible AI practices.
  • Require and evaluate bias mitigation from HR technology providers offering AI functionality by assessing criteria related to their data, algorithms, organizational context, regulation compliance and ethical considerations.
  • Promote transparency into the potential impacts of AI’s bias by collaborating with external and internal stakeholders to take decisive steps in protecting the organization, the future of work and society at large.

Gartner, Mitigate Bias From AI in HR Technology, By Helen Poitevin, 16 October 2023

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

HR Tech 2023 Highlight: Generative AI Isn’t a Trend, It’s a Problem-Solving Tool

At HR Tech 2023, Co-founder and COO Isabelle Bichler-Eliasaf sat down with Dan Riley, co-founder of RADICL, to talk about the surge in Generative AI solutions in the HR space and the importance of its ethical, responsible use. Below are excerpts from their conversation; the full recording can be viewed here.

DR: You brought up responsible and ethical AI. How are we doing? Not just your company, but the industry in general. Are we getting there?

IBE: This year, everybody is talking about AI. Specifically, everybody’s talking about generative AI, and ChatGPT was a great demonstration of the amazing abilities of AI. But it also showed the pitfalls: it can be erroneous, biased, very generic and not stable enough. Responsible AI is all about that: putting safeguards on the AI. It’s just a tool, right? So you need to use it wisely, with the right safeguards.

Responsible AI principles span from explainability and bias reduction to consent and embedded privacy rights, and that’s what we’ve been doing from the get-go. This is something that’s very important to me; it’s something I’ve explored in my own research on the risks of AI. So now I’m happy to see that a lot of people are thinking about it and starting to do something about it.

DR: Too often, we either blindly trust AI or we blindly distrust it. But it can’t be a binary conversation. So how do we find that middle ground? How do we challenge it and use it for good? What are some of the things your company is doing to make sure that happens?

IBE: It’s about design, development and deployment. It’s about the safety that we build into the technology: first of all, understanding the data and how it’s distributed. You have to have representation for different protected classes, for example, to prevent biases. You also have to constantly measure the output and understand whether it’s having an adverse impact on certain protected classes – gender, age, ethnicity and so forth. Those safeguards must be in place all the time.

There’s also a lot of regulation emerging now to enforce that. Local Law 144 in New York City actually mandates that companies show and prove that their output isn’t biased. Beyond bias and discrimination, it’s also about explainability. With our product, we explain why a person is a good fit for a position based on their skills. It’s not a black box; the tool has to be transparent and explainable.
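The output monitoring described above is typically quantified as a selection-rate impact ratio: each group’s selection rate divided by the highest group’s rate, the metric at the center of Local Law 144 bias audits. Here is a minimal sketch in Python; the group names and counts are purely illustrative, not drawn from any real audit.

```python
def impact_ratios(selected, total):
    """Selection rate of each group divided by the highest group's rate.

    selected / total: dicts mapping group name -> candidate counts.
    A ratio below the common four-fifths (0.8) threshold is a signal
    of possible adverse impact that warrants closer review.
    """
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical screening results: 80 of 200 group_a candidates advanced,
# and 45 of 150 group_b candidates advanced.
ratios = impact_ratios({"group_a": 80, "group_b": 45},
                       {"group_a": 200, "group_b": 150})
# group_a rate = 0.40, group_b rate = 0.30, so group_b's impact ratio is
# 0.30 / 0.40 = 0.75, below the four-fifths threshold.
```

Note that audits under Local Law 144 must be conducted by an independent auditor and published, so a calculation like this serves internal monitoring rather than substituting for compliance.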

DR: So we’re here at HR Tech, where, for the most part, any vendor you talk to is going to tell you about what they’re doing with AI. What’s your advice for the industry in general?

IBE: I think you first need to really understand the problem that you’re solving. AI is a tool; it’s not just about saying, “Hey, I want to bring in AI, I want to bring efficiency.” What is the pain point? What is the problem you’re solving? What’s the use case? And then, do you have the right tool for it? Once you have that, you’re okay. But adopting AI across the board just because it’s trendy, or for a vague notion of efficiency or productivity, isn’t enough. You need to know the problem you’re solving. And depending on the use case, you need technology with deep AI, not just augmentation and chatbots. It really comes down to the data that’s used, the algorithms and the generation of AI. You need to use really advanced models based on LLMs, with safeguards in place, to give you the results you want.

The company’s Talent Intelligence Platform is designed to help enterprises hire, retain and develop their workforce, intelligently. Leveraging Responsible AI and the industry’s largest skills taxonomy, enterprises unlock talent insights and optimize their workforce to hire the right people, keep them longer and cultivate a successful skills-based organization. The platform fuels Talent Acquisition, Talent Management and Skills Architecture in one data-driven solution. To see it in action, request a demo.

Announcing the Partnership Program

NYC / August 9, 2023 – The company, a leading AI-driven Talent Intelligence Platform, has announced the launch of an exclusive Partner Program for consulting and recruiting firms. It describes the new offering as an exclusive partnership with premier firms to bring their prospects and clients into the future of work with an AI-fueled, data-driven understanding of what it takes to become a Skills-Based Organization.

The company uses a Responsible AI-driven operating model, built on billions of data points, to help enterprises become skills-based.

“Becoming a Skills-Based Organization is the key to success for today’s enterprise HR leaders, but many don’t know where to start. When they ask a consultant for guidance on transforming to an SBO model they may get information, but not the tools they need to get started,” says Co-founder and COO Isabelle Bichler-Eliasaf. “We provide those tools, along with the expertise to optimize them for success.”

To accomplish this, the platform centralizes data to create a coherent skills strategy, unifying and standardizing data sets to remove silos across HR functions. This unified data set, paired with the company’s Responsible AI, equips HR teams to move faster and with greater agility and efficiency. The platform continuously updates to close emerging skills gaps within the organizations where it is deployed.

Consultants in the Partner Program will have access to the talent intelligence platform’s Skills Architecture module to generate a skills map of an enterprise client’s workforce, including a unified skills language and an agreed-upon job architecture. With better visibility into their employees’ strengths and skill gaps, HR leaders can spot hidden talent, reveal internal mobility opportunities and deploy talent efficiently during times of rapid change.

“Our platform provides HR leaders with a comprehensive understanding of their workforce and the right data to align with organizational goals,” says Bichler-Eliasaf. “Once our partner consultants provide their enterprise clients with a comprehensive skills catalog of the core competencies, technical proficiencies and soft skills needed for each role in their organization, they can begin to strategize an SBO operating model.”

PODCAST: Can Innovation and Regulation Co-Exist? How ChatGPT Sparked the Conversation

Nothing has blown open the generative AI conversation quite like the arrival of ChatGPT. Our final panel of the forum delved into this timely topic, covering everything from the promise of generative AI for greatly improving business processes to the peril it represents in an increasingly regulated innovation space that demands explainability.

Leading this fascinating discussion were Yuying Chen-Wynn of Wittingly Ventures and author Art Kleiner of Kleiner Powell International. To hear their conversation with CEO Dr. Shay David, listen to the recording below or watch the session here.

PODCAST: Ready or Not, Regulations are Coming

The New York City AI Audit Law has garnered a lot of attention over the last year as it was drafted, discussed, opened for comments and eventually put into effect. Also called Local Law 144, the new regulation reaches far beyond NYC, affecting any company anywhere that may want to hire employees from within the city.

Globally, Responsible AI governance is a hot topic as well, as different regions implement specific guidelines that may or may not align with one another. If the European Union’s guidelines don’t align with those of the US, for example, things can get tricky in a global economy.

In this panel discussion, Commissioner Sonderling, Seyfarth Shaw partner and employment attorney Rob Szyba, Hogan Lovells partner Scott Loughlin, and Littler shareholder Niloy Ray explore the implications of differing approaches to Responsible AI governance.

PODCAST: The Paradox of the HR Mission

Stepping away from the more concrete topics of legal regulations and responsible technology, we dove into the complexities of human capital management from a people-centric perspective with Dr. Anna Tavis of NYU and Dr. Yustina Saleh of The Burning Glass Institute. 

The two women spoke with VP of Marketing Amy DeCicco about the subtle differences between skills, traits and characteristics, and where skills, while valuable, can’t always tell the whole story of a person. Check out a recording of their insightful conversation below or watch the session here.


Responsible HR Keynote with Commissioner Keith Sonderling, EEOC

“That’s really the goal of artificial intelligence, and other technologies being used in HR, is to eliminate bias, to eliminate the human element that has plagued the workforce, which is why our [EEOC] laws exist.”

EEOC Commissioner Keith Sonderling kicked off the inaugural Responsible HR Forum by taking a positive view of technology as a chance for employers to proactively advance the EEOC’s mission rather than merely defend against infractions.

To hear Commissioner Sonderling’s full conversation with Co-founder and COO Isabelle Bichler-Eliasaf, access the recordings below or watch the full session here.

PODCAST: Becoming a Skills-Based Organization – More Than a Trend?

COO Isabelle Bichler-Eliasaf sits down with panelists Heidi Ramirez Perloff of The Estée Lauder Companies, Dr. Sandra Loughlin of EPAM Systems, Urmi Majithia of Atlassian and Ben Eubanks of Lighthouse Research & Advisory to discuss the nature of skills as a central workforce strategy, and what it means to be an SBO.

Listen below, or watch the session recording here.