Image source: Claudenakagawa | Istock | Getty Images
Recent years have seen a rise in the use of artificial intelligence (AI) in job candidate recruitment and screening. As of 2023, an estimated 35-45% of companies use AI in the hiring process, including 99% of Fortune 500 companies. These systems, which can be classified as Algorithmic Decision Support Systems (ADSSs for short), are intended to drastically reduce candidate screening time and ostensibly eradicate the potential for human recruiter bias through programmatic “objectivity.” However, AI applications are only as objective as the data sets they learn from.
In 2015, Amazon discovered that its proprietary AI-powered Applicant Tracking System (AI-ATS) had a habit of deprioritizing women’s resumes for tech positions, and the company ultimately ditched the tool. This is a potential problem with any closed-loop ADSS: the system is trained on bad data, then makes decisions that reinforce its training, perpetuating a cycle founded on historical, human-made norms that are often exclusionary and, at their worst, outright discriminatory. Historically, and for reasons I won’t get into here, tech positions have been held predominantly by men; Amazon’s ATS therefore erroneously equated “maleness” with “success.”
The feedback loop model of algorithmic decision support systems
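To make that feedback loop concrete, below is a deliberately minimal Python sketch of the dynamic. It is a toy simulation, not Amazon’s system or any real ATS: the group labels, initial hire rates, and scoring noise are invented for illustration. A screening “model” learns group-level hire rates from historical data, its selections are fed back into that history, and the initial imbalance compounds round over round.

```python
# Toy simulation of the ADSS feedback loop described above.
# Deliberately simplified: the "model" just learns historical hire rates per
# group and screens accordingly, and each round's selections are appended to
# the training data it learns from next time.
import random

random.seed(42)

# Historical training data: past tech hires skew toward group "A" (60/40 split).
history = ["A"] * 60 + ["B"] * 40

def learned_hire_rates(data):
    """'Train' the model: estimate the share of past hires from each group."""
    total = len(data)
    return {g: data.count(g) / total for g in ("A", "B")}

def screen(applicants, rates, n_select):
    """Score applicants by their group's learned rate plus noise (standing in
    for individual resume quality), then keep the top n."""
    ranked = sorted(applicants,
                    key=lambda g: rates[g] + random.uniform(0.0, 1.0),
                    reverse=True)
    return ranked[:n_select]

# Each round, a perfectly balanced applicant pool is screened by the model,
# and the selections are fed back into the training history.
for round_num in range(1, 6):
    rates = learned_hire_rates(history)
    applicants = ["A"] * 50 + ["B"] * 50
    selected = screen(applicants, rates, n_select=20)
    history.extend(selected)
    share_b = selected.count("B") / len(selected)
    print(f"Round {round_num}: learned rates A={rates['A']:.2f}, "
          f"B={rates['B']:.2f}; group B share of selections={share_b:.0%}")
```

Even in this toy setup, group B’s share of selections shrinks across rounds, not because the applicant pool changes (it stays an even 50/50) but because the model keeps retraining on its own skewed output.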
At the core of this issue is trust. If we assume that the computer cannot err, we risk taking the human entirely “out of the loop,” never thinking to validate its decisions and recommendations. This is sometimes referred to as automation bias and can be seen as a form of complacency. On the other hand, algorithmic aversion describes a distrust of automation that leads to ADSS disuse, which risks operational inconsistency and nullifies the ADSS’s core benefits: faster turnaround times and mitigation of human bias in hiring decisions. The sweet spot lies somewhere in the middle: a cautious optimism, a cooperative alliance between human and machine in which each agent holds the other accountable. In practice, however, that sweet spot still largely eludes academia and organizations alike.
Overall, research suggests that recruiters tend to trust recommendations from human experts over those provided by an AI-powered ATS. (All sources can be reviewed in the full report, available here.) Exactly why is less clear, though academics posit that trust in AI-powered recruitment platforms may be influenced by:
More generally, a major cause of ADSS distrust is the “black box” problem of AI. People are often unclear on how the AI reached its decision; the underlying mechanisms are frequently so complicated that the system cannot reasonably “explain” itself in a way humans can understand. This lack of understanding pushes users of ADSSs toward algorithmic aversion: if they cannot justify a decision, they are less likely to vouch for it. AI-ATS design should therefore include robust dashboards and reporting with a customizable, comprehensible user experience to help bridge this gap.
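As one illustration of what comprehensible reporting could look like, here is a minimal sketch of a per-candidate explanation view. The features, weights, and linear scoring model are invented for this example and are far simpler than anything a production AI-ATS would use; the point is only that surfacing per-feature contributions gives a recruiter something concrete to inspect, question, and vouch for.

```python
# Minimal sketch of a per-candidate "explanation report" of the kind a more
# transparent AI-ATS dashboard might surface. Features, weights, and the
# linear scoring model are hypothetical and purely illustrative.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    features: dict  # feature name -> numeric value extracted from the resume

# Hypothetical model weights: how much each resume feature moves the screening score.
WEIGHTS = {
    "years_experience": 0.30,
    "required_skill_matches": 0.50,
    "relevant_certifications": 0.15,
    "referral": 0.05,
}

def explain_score(candidate: Candidate) -> None:
    """Print each feature's contribution so a recruiter can see why a score was given."""
    contributions = {
        feature: WEIGHTS[feature] * candidate.features.get(feature, 0)
        for feature in WEIGHTS
    }
    total = sum(contributions.values())
    print(f"Candidate: {candidate.name} | screening score: {total:.2f}")
    for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
        print(f"  {feature:<25} contributed {value:+.2f}")

explain_score(Candidate(
    name="Applicant 1427",
    features={"years_experience": 4, "required_skill_matches": 6,
              "relevant_certifications": 1, "referral": 0},
))
```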
On the other side of the interview table, applicants generally aren’t happy about AI use in the recruitment process. A primary goal of academic research into these systems is to understand applicants’ perceptions of them in order to provide formative advice to the tech companies that design and use AI-ATSs. Some themes in applicants’ perceptions of AI-ATSs include:
The full version of this report is available here; it includes all citations as well as suggestions for future work on understanding the degree of human-machine cooperation between recruiters and AI-ATSs.