LLMs are NOT the future of recruiting

By Ben Ihle | Technical Co-founder of attract.ai

Artificial Intelligence (AI) has been revolutionising industries worldwide, including recruitment. With Large Language Models (LLMs) such as ChatGPT gaining traction, it’s easy to forget that they are just one small part of AI.

LLMs are a type of AI trained on massive amounts of text with the goal of producing human-like responses. OpenAI’s ChatGPT is one of the world’s largest and most powerful LLMs; other popular examples include Google’s Bard and Anthropic’s Claude.

With the explosion of ChatGPT, there has been an uptick in recruiters using LLMs in their everyday work. While these tools can be helpful for some tasks, we want to discuss some limitations that LLMs face in recruiting, and how other AI techniques can address them.

LLMs excel at processing vast amounts of text and generating coherent responses. However, they’re not so good at comprehending the intricate nuances of human language and context. This leads to the uncanny valley effect, which arises when an AI system produces responses that appear almost human-like but lack authenticity.

In the context of AI Sourcing, this can lead to misinterpretations of candidate qualifications, resulting in poor candidate-job fits (particularly for soft-skill roles). Similarly, this phenomenon occurs when an AI-generated outreach message is delivered that is almost personalised, but not quite. It’s the “not quite” part that creates a feeling of unease and mistrust in the recipient.

When used with expertise, these tools can certainly help recruiters; however, they are not a set-and-forget replacement for recruitment activities.

Bias exists in all data, and LLMs and related systems are particularly susceptible to it because of the large volume of training data required to achieve a reasonable result.

Where an AI model has been trained on biased data, it will naturally propagate and amplify those biases, causing qualified candidates from non-standard backgrounds to be excluded from sourcing and screening. Amazon’s attempt at building an AI hiring solution is an excellent case study in this, and it demonstrates how stubborn AI can be when it comes to shaking off biases.

Specific AI techniques (such as the genetic algorithms used at attract.ai) can overcome the bias concerns present in LLMs, in particular because they do not require nearly as much data.
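To make that concrete, here’s a minimal sketch of how a genetic algorithm can evolve the weights of a candidate-matching rubric from just a handful of labelled examples. This is illustrative only, not attract.ai’s actual implementation, and the skill names and example data are made up for the demo.

```python
import random

# Hypothetical skills we want to weight when matching candidates to a role.
SKILLS = ["python", "sql", "communication", "mentoring"]

# A handful of labelled examples: (candidate skill levels 0-1, was a good fit?).
# Illustrative data only -- far less than the text an LLM needs for training.
EXAMPLES = [
    ({"python": 0.9, "sql": 0.7, "communication": 0.4, "mentoring": 0.2}, True),
    ({"python": 0.2, "sql": 0.3, "communication": 0.9, "mentoring": 0.8}, False),
    ({"python": 0.8, "sql": 0.9, "communication": 0.6, "mentoring": 0.5}, True),
    ({"python": 0.3, "sql": 0.4, "communication": 0.5, "mentoring": 0.4}, False),
]

def score(weights, candidate):
    """Weighted sum of a candidate's skills under one set of criteria weights."""
    return sum(weights[s] * candidate[s] for s in SKILLS)

def fitness(weights):
    """How cleanly these weights separate good fits from poor fits."""
    good = [score(weights, c) for c, fit in EXAMPLES if fit]
    poor = [score(weights, c) for c, fit in EXAMPLES if not fit]
    return min(good) - max(poor)  # bigger gap = better criteria

def mutate(weights):
    """Randomly nudge one weight to explore nearby criteria."""
    child = dict(weights)
    skill = random.choice(SKILLS)
    child[skill] = min(1.0, max(0.0, child[skill] + random.uniform(-0.2, 0.2)))
    return child

def evolve(generations=200, population_size=20):
    """Keep the best half of each generation, refill with mutated copies."""
    population = [{s: random.random() for s in SKILLS} for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: population_size // 2]                      # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]    # variation
        population = survivors + children
    return max(population, key=fitness)

print(evolve())
```

Because the algorithm only needs a scoring function and a few examples, the resulting criteria stay small, inspectable and comparatively easy to audit for bias.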

Because recruitment must adhere to your country’s Equal Employment Opportunity laws, it’s more important than ever to choose a tool that is able to identify, disclose and rectify bias.

To summarise, LLMs are probably not the AI solution you’re looking for if you’re in recruitment.

Although they can do some sophisticated tasks, there are plenty of limitations in real-world applications. Companies that employ advanced AI techniques like fuzzy logic and genetic algorithms are much better positioned to assist in discovering and selecting talent.

Machine learning in rec-tech is great for making data-driven predictions, and fuzzy logic facilitates contextual understanding and human-like decision-making, ensuring accurate candidate assessments. At attract.ai, we love genetic algorithms for optimising candidate selection based on evolving criteria, maximising the chances of finding the best fit for each role.
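As a rough illustration (again, a sketch rather than attract.ai’s production logic), fuzzy logic replaces hard pass/fail filters with degrees of membership, so a candidate with 4.5 years of experience isn’t automatically discarded by a “5+ years” requirement. The role and candidate below are hypothetical.

```python
def experience_match(years_required, years_actual, tolerance=2.0):
    """Fuzzy membership: 1.0 at or above the requirement, tapering off
    gradually below it instead of a hard pass/fail cut-off."""
    if years_actual >= years_required:
        return 1.0
    shortfall = years_required - years_actual
    return max(0.0, 1.0 - shortfall / tolerance)

def skill_match(required_skills, candidate_skills):
    """Fraction of required skills the candidate lists (0.0 to 1.0)."""
    required = {s.lower() for s in required_skills}
    held = {s.lower() for s in candidate_skills}
    return len(required & held) / len(required) if required else 1.0

def candidate_fit(candidate, role):
    """Combine the fuzzy scores; a classic fuzzy AND is the minimum."""
    return min(
        experience_match(role["min_years"], candidate["years"]),
        skill_match(role["skills"], candidate["skills"]),
    )

# A candidate with 4.5 years still scores well against a "5+ years" role.
role = {"min_years": 5, "skills": ["python", "sql"]}
candidate = {"years": 4.5, "skills": ["Python", "SQL", "Docker"]}
print(candidate_fit(candidate, role))  # 0.75
```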

While AI is on the brink of revolutionising the recruitment process, we’re concerned with how issues such as the uncanny valley effect and bias in LLMs will impact candidates’ lives, in particular those who do not necessarily fit the mould. Not to worry though, there are plenty of AI platforms out there (using techniques like fuzzy logic, machine learning and genetic algorithms) that can help recruiters unlock diverse, high-performing teams without these issues.

