Three big benefits of AI
The benefits of using AI effectively can be significant. First, it can deliver speed and efficiency gains, reducing costs. Unilever, for instance, has reported saving about 70,000 person-hours by using AI and technology. That can free up hiring managers to focus on the best-qualified candidates.
Second, it can help to find better hires. Faced with a large stack of CVs, a recruiting manager might start to skim-read, missing key details. AI does not skim-read, and used properly it can identify the candidates who best fit a given role. With feedback on the retention of those people and their advancement through the company, machine learning can refine its model over time. Many businesses are beginning to report improvements in metrics such as two-year retention for AI-enabled hires.
The way to maximize results is often to combine AI with human input. Research comparing AI and human performance at finding errors in a piece of text has shown that AI now catches more errors than humans do, spotting things that humans miss – but it has also found that humans catch errors that AI misses. The lesson is clear: it is a combination of artificial and human intelligence that will have the most powerful results.
The third advantage of AI in recruitment is its potential to remove bias by sweeping away the unconscious prejudices that can affect even the best-intentioned human recruiters.
AI and diversity: Risks and opportunities
However, exploiting this third advantage of AI is not straightforward. Concerns that AI could perpetuate or increase bias are common – held by 23% of HR professionals, according to a 2019 study by IBM. But CHROs need to grasp the distinction between two key concepts here: bias and fairness.
Bias is typically the result of AI being trained on skewed data, such as existing employees – the successful candidates of years past. That builds in a survivor bias, teaching the AI to favor candidates who resemble the existing employee base. This was the problem at Amazon, which had to scrap a machine learning recruitment tool that was penalizing women. Part of the problem was that the model was trained on old résumés submitted to the company – résumés that were dominated by men.
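The mechanism is simple enough to sketch. Below is a deliberately tiny, hypothetical illustration (the data, term list, and scoring rule are all invented for this example, not taken from Amazon's actual system): a résumé-scoring model learns term weights from past hiring decisions, so any term that correlates with the historically underrepresented group – here "women's", as in "women's chess club" – ends up with a negative weight, and otherwise identical candidates score differently.

```python
# Toy sketch of survivor bias in a resume-scoring model.
# Weights are learned from historical hiring outcomes, so terms common
# among past (mostly male) hires score well, and terms correlated with
# underrepresented candidates are penalized. All data is invented.
from collections import Counter

# Hypothetical historical outcomes: past hires skew male.
hired = [
    "java leadership men's rugby club",
    "python java leadership",
    "java cloud leadership",
]
rejected = [
    "python cloud women's chess club",
    "java women's coding society",
]

def term_weights(hired_docs, rejected_docs):
    """Weight = how much more often a term appears on hired resumes
    than on rejected ones (per-document frequency difference)."""
    h = Counter(t for d in hired_docs for t in d.split())
    r = Counter(t for d in rejected_docs for t in d.split())
    return {
        t: h[t] / len(hired_docs) - r[t] / len(rejected_docs)
        for t in set(h) | set(r)
    }

weights = term_weights(hired, rejected)

def score(resume):
    """Sum the learned weights of a resume's terms."""
    return sum(weights.get(t, 0.0) for t in resume.split())

# Two equally qualified candidates; the second mentions a women's
# organization and is penalized purely because of the skewed history.
print(score("java cloud leadership"))
print(score("java cloud leadership women's chess club"))  # lower score
```

The point of the sketch is that no one programmed the model to penalize women; the bias is entirely inherited from the training data, which is why auditing inputs matters as much as auditing the algorithm.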
Then there is the question of whether a tool is giving fair and ethical outcomes. Has the tool operated correctly in evaluating candidates? And are the resulting outcomes, at a systemic level, fair? Evaluating these may require leadership judgments – for instance about the organization’s desired representation of minorities.