Ready Or Not: 3 Points To Consider As Generative AI Tools Rush To Market

This article first appeared in Forbes.

About halfway between the day you first heard about ChatGPT and the day you started wishing you never had, the news became all about a new era of thinking machines. Faster than you can say “Generative AI,” new models are moving into the spotlight, each claiming to be better than the last.

ChatGPT is drawing big names into the generative AI race.

ChatGPT, the groundbreaking chatbot developed by OpenAI, became the talk of the tech world almost overnight and is arguably the most advanced chatbot released to date. Predominantly a consumer-focused tool, it was designed to interact conversationally with a user, providing answers and responding to follow-up questions. Demonstrating the extraordinary ability of artificial intelligence to use machine learning to index retrievable content and mimic writing styles, ChatGPT can even adjust tone and voice when given direction.

OpenAI technology also powers Bing, Microsoft's less popular search engine launched in 2009 that's now making a phoenix-rising-from-the-ashes comeback. Microsoft claims capabilities more powerful and accurate than ChatGPT's, saying it has applied the AI model to the Bing search ranking engine to increase the relevance of even basic search queries. While this might be true, I think they still have a long way to go. The technology has more than a few kinks; for one thing, it recently told one researcher it was in love with him.

And Microsoft is not alone in its conundrum of determining when these technologies might be ready for market. Despite having arguably the strongest alignment with AI-charged search capabilities, Google fast-tracked its own chatbot, Bard, in order to compete directly with ChatGPT. However, a factual error churned out during a marketing demo derailed its momentum and even caused the stock of its parent company, Alphabet, to drop 9% within a day. Regardless, it’s possible that Bard may ultimately gain an edge over ChatGPT given its access to a wealth of data when integrated into Google’s search engine.

As a specialist in the AI space, my company sees the rapid uptick in generative AI products as a positive. But the promise comes with peril. As of now, these technologies lack the hallmarks of fully enterprise-level solutions. As we observe a burgeoning new tech space, here are a few points to consider:

1. AI is a tool, not a threat, but we must assign it to the right tasks.
Consumer-level chatbot technology showcases what we in the AI space already know: that machine learning and intelligent technology can greatly enhance the human experience. One could argue that when AI takes on more repetitive, mundane business tasks—and does so with a near-zero error rate—people will be freed up to generate more creative contributions. In the HR arena, AI-driven tools can map the skill sets of entire organizations, revealing hidden talent and new opportunities that may have otherwise been missed.

2. Responsible AI means more than content filtering.
The companies producing these new publicly available chatbots talk about responsibility as the importance of mitigating harmful content. Microsoft, for example, says the new Bing implements safeguards to defend against issues such as misinformation and disinformation. But for an AI product to be truly responsible, the design itself must be responsible. We are seeing this in the HR tech world, as increasing regulations are being introduced to stave off unintended bias in hiring processes. Chatbots and similar technologies must include responsible AI components even before the first piece of content is generated.

3. Better is subjective.
In the scramble to eclipse ChatGPT's entry into the market, its competitors launched amid bold superlatives. Microsoft introduced Bing as the tool that would "reinvent search," billing it as a faster, more powerful, more accurate and more capable option than ChatGPT. Meanwhile, Google Bard's access to more recent data seemed beneficial in the race with ChatGPT, as the OpenAI chatbot was initially restricted to data collected only through 2021.

When AI is tailored to enterprise-level functionality, however, what's considered superior in one scenario may not translate to an advantage in another. Industry-specific AI tools must organize, analyze and structure data precisely enough to inform critical business decisions, which means vertical-specific leaders must build AI models grounded in industry know-how and language and tailored to specific tasks. Businesses utilizing such technologies also depend on contractual assurances like Service Level Agreements (SLAs) to outline vendor expectations and set performance metrics, something open chatbots can't provide.

Conclusion
No doubt the consumer-facing generative AI race is just beginning. Advances and missteps are an inevitable part of growth, but I look forward to seeing how it all plays out, with the hope that it helps people view AI anew, through the lens of curiosity and potential.

Event Recap: Responsible HR Forum 2023 presented by retrain.ai

There’s something incredible that happens when thought leaders and knowledge seekers gather to explore a critical topic. Such was the vibe at the first-ever Responsible HR Forum presented by retrain.ai. Below, find a brief overview of the day’s sessions, which you can now access as podcast or vidcast recordings.


Keynote: EEOC Commissioner Keith Sonderling

Starting off the day, keynote speaker Commissioner Keith Sonderling of the EEOC shared insights on the expansion of Responsible AI governance across the U.S., emphasizing that current regulations put the onus on businesses using AI systems to ensure they generate fair end results, not on the makers of those systems.

Watch the vidcast | Listen to the podcast

 

Ready or Not, Regulations Are Coming

Talk of Responsible AI continued into the first panel discussion, where Commissioner Sonderling was joined by Scott Loughlin of Hogan Lovells, Rob Szyba of Seyfarth Shaw and Niloy Ray of Littler to discuss the new AI Audit Law in New York City, the far-reaching implications of seemingly local regulations, and how the European Union's approach to AI governance differs from that of the U.S.

Watch the vidcast | Listen to the podcast


The Paradox of the HR Mission: Creating a Multidimensional View of Talent

In conversation with retrain.ai's Amy DeCicco, Dr. Anna Tavis of the Human Capital Management Department at New York University and Dr. Yustina Saleh of The Burning Glass Institute posed provocative questions, encouraging attendees to consider whether empathy is truly a skill or a trait, and how HR leaders can tell from a skills profile whether a candidate will be able to do the job at hand.

Watch the vidcast | Listen to the podcast


Becoming a Skills-Based Organization: More Than a Trend?

With more enterprises talking about transforming to an SBO model, Dr. Sandra Loughlin of EPAM Systems shared lessons learned from her company’s transformation, while Heidi Ramirez-Perloff discussed The Estee Lauder Company’s exploration into SBO strategy. Urmi Majithia of Atlassian delved into executing technology to help overcome the challenges of becoming an SBO, and Ben Eubanks of Lighthouse Research & Advisory broke down the larger SBO concept to a tangible level regarding individual employees and hiring managers.

Watch the vidcast | Listen to the podcast


The Hidden Architecture of a Skills-Based Organization

Following the panel discussion, Dr. Loughlin sat down for a one-on-one with retrain.ai CEO Dr. Shay David to go more in depth into EPAM’s experience developing a thriving SBO strategy, sharing benefits, pitfalls and lessons learned along the way.

Watch the vidcast | Listen to the podcast

 

Can Innovation and Regulation Co-Exist? How ChatGPT Sparked the Conversation

No discussion around Responsible HR would be complete without an exploration of the huge impact ChatGPT and other generative AI solutions are having on the tech space. Leading a fascinating discussion on the topic were Yuying Chen-Wynn of Wittingly Ventures and Art Kleiner of Kleiner Powell International, who examined the potential of generative AI to greatly improve business systems, as well as the questions around ethical AI use that remain amid growing regulation.

Watch the vidcast | Listen to the podcast


Continuing the Conversation: The Responsible HR Council

To conclude the Responsible HR Forum, retrain.ai announced the formation of our Responsible HR Council. Like the Forum, our Council will involve experts from academia, law, enterprise, government and nonprofit sectors. We’ll meet quarterly to get up to speed on new AI legislation, new AI technologies, and the melding of the two within Responsible HR practices. Check back for details soon! 

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry's largest skills taxonomy, enterprises unlock talent insights and optimize their workforce to lower attrition and win the war for talent amid the Great Resignation. retrain.ai fuels Talent Acquisition, Talent Management and Skills Architecture in one data-driven solution. To see it in action, request a demo.

ChatGPT Is Changing the AI Game, But Enterprises Need More

Chances are you’re one of the millions of people who have played with ChatGPT, the game-changing generative AI assistive technology released by OpenAI. Designed to interact conversationally, the advanced chatbot can engage in dialogue with a user to provide answers, respond to follow-up questions, correct mistakes, and adjust tone and voice when provided with direction. 

A consumer-focused tool, ChatGPT aptly showcases the groundbreaking ability of generative AI to use machine learning to index retrievable content and mimic writing styles. As such, it has prompted a conversation around its possible business uses, garnering opinions from those who see great potential and from those who fear for their jobs. Some even suggest that we are nearing the singularity, or at least seeing for the first time machines that can pass the (in)famous Turing test.

>> Book a demo to see retrain.ai’s generative AI in action

As leaders in the AI space, we see ChatGPT as an example of a set of tools with the potential to transform business processes. Yet it has notable limitations when viewed through the lens of an enterprise-level solution. There are four main areas in which this differentiation is most apparent:

  1. AI-driven technology designed for business incorporates features optimized for a particular industry. retrain.ai, for example, was built from the ground up as a specialized solution for the HR space. As such, our technology expands beyond a ChatGPT-level machine learning model to one that can organize, analyze and structure data precisely enough to inform critical business decisions. We anticipate that in each industry, vertical-specific leaders will emerge who build AI models based on industry know-how and language, tailored toward specific tasks.
  2. Explainability is another critical feature of specialized AI technologies that you won't find in a general-purpose chatbot platform. Explainable solutions are referred to as white-box technology, meaning that machine learning outcomes, and the methodology that produces them, can be explained in general business terms. For enterprises trusting generative AI systems with critical decision assistance, this means they have a clear enough understanding to question or challenge the platform's output.
  3. Without white-box explainability, an AI system lacks a key component of Responsible AI, a non-negotiable design element when it comes to bias prevention in hiring processes. Only by using Responsible AI can an enterprise ensure candidates are screened solely on skills, eliminating information that can introduce unintended bias. Increasing regulations will also hold enterprises accountable for using Responsible AI in hiring practices.
  4. Enterprise-level solutions are implemented to directly impact business performance. They come with contractual assurances like Service Level Agreements (SLAs) to outline vendor expectations and set metrics by which the technology's effectiveness will be measured. Open platforms like ChatGPT don't offer performance metrics or customized services, leaving adopters with no recourse should something go wrong. The same is true of data sovereignty and compliance with privacy standards like GDPR. We anticipate that big vendors like Microsoft and Google will soon offer enterprise-grade service assurances around consumer tools like ChatGPT (or Google's LaMDA), but until then, consumer tools cannot be relied upon for enterprise use.

The retrain.ai Talent Intelligence Platform uses generative AI with language processing technology similar to ChatGPT's, but expands on the model to provide a fully explainable enterprise-level solution designed specifically for talent intelligence, while complying with SOC 2 and GDPR and offering an enterprise-grade SLA. We're excited to see how the market continues to develop and how enterprises transform years-old practices with new tools.

>> Book a demo to see retrain.ai’s generative AI in action

To see how the retrain.ai Talent Intelligence Platform fuels your talent acquisition, talent management, job architecture and DEI goals, contact us today.

retrain.ai is a Talent Intelligence Platform designed to help enterprises hire, retain, and develop their workforce, intelligently. Leveraging Responsible AI and the industry's largest skills architecture, enterprises unlock talent insights and optimize their workforce to lower attrition and win the war for talent amid the Great Resignation, all in one data-driven solution.

Learn more: book a demo