The Debate: Should AI be used to make hiring decisions?
Would you trust AI to pick out your CV, or should hiring be left to humans? We hear the case for and against in this week’s Debate
YES: AI is more objective and reduces the reliance on ‘gut feeling’
AI should absolutely be used in recruitment, but it must be built and governed the right way.
Let’s be honest: too many companies have rushed to deploy AI without thinking it through. Last month’s lawsuit against Eightfold AI highlighted an understandable concern: candidates should never be assessed on data they didn’t knowingly provide. The problem is not AI, but opaque, non-consensual data practices and a rush for shortcuts at the expense of trust. And humans aren’t exactly without bias when it comes to hiring.
When AI is used responsibly, the positives are transformational.
AI can bring extraordinary consistency to hiring by asking every candidate the same structured questions and assessing them against the same objective criteria. That reduces reliance on “gut feel” and creates a more level playing field. For candidates without elite networks or polished CVs, that shift is game-changing.
Responsible AI also improves access and inclusion. Chat-based, mobile-first AI interviews allow people to apply anytime, anywhere, opening doors for working parents, shift workers and those outside major cities, widening the funnel in ways traditional processes cannot.
It also transforms the candidate experience. At scale, humans alone cannot provide timely, personalised engagement to thousands of applicants – but AI can. Used correctly, it responds instantly, gathers first-party data transparently and frees recruiters to focus on meaningful human conversations.
Recruitment is the start of the employer-candidate relationship. If AI is transparent, consent-based and designed with human oversight, it strengthens that relationship rather than undermines it. I want companies to be proud of using AI responsibly.
Perhaps the debate shouldn’t be whether we use AI in hiring, but whether it’s applied ethically, transparently and in service of fairness.
Barb Hyman is founder and CEO at Sapia.ai
NO: Recognising human potential requires conversation and empathy
Hiring isn’t a data problem. It’s a human one.
AI relies heavily on CVs for data, but research shows that CV screening correlates with future job performance at only around 0.06. In other words, CVs tell us very little about who will succeed in a role.
Good human recruiters understand this. People vary. Some are confident self-promoters; others are quieter or more introverted. Some candidates have all the skills (and more), but humility or self-doubt stops them applying or presenting themselves. Often it takes a conversation, and someone saying, “You’d be great at this,” to unlock potential. Recruitment, at its best, expands possibility. AI narrows it based on past patterns.
As AI has been used more widely in recruitment, have things improved? Hardly: candidates increasingly describe AI-driven hiring as exhausting and demoralising.
Plus, let’s not forget: most hiring happens in small organisations. Nearly half of UK private-sector employees work in businesses with fewer than 50 staff, many in teams of fewer than ten. Employers hire for team balance and context – neither of which is assessed by AI.
Technology can support administration, but recognising human potential requires judgement, conversation and empathy: skills I’d like to think remain in the human (not data) domain.
Lucy Standing is a chartered psychologist. Her new book Age Against The Machine is out on 16 April 2026
THE VERDICT
Last month, AI hiring platform Eightfold was sued for allegedly compiling reports used to screen job applicants without their knowledge; the service is used by the likes of Microsoft and PayPal, among many other high-profile companies. The lawsuit focused on the non-consensual nature of the data collection, but it raises a larger question: does AI have any place in hiring at all?
The appeal of the tech is clear: hiring managers face swathes of applications, many from clearly unsuitable candidates. Using AI to filter these out seems a no-brainer, but the risks are just as clear. Entrenched biases in AI systems are well documented, and relying on the tech risks baking those biases into hiring decisions.
What’s more, using AI to screen applicants (by detecting keywords, for example) will likely just incentivise candidates to use AI to optimise their own applications (by stuffing in those same keywords), creating a fight-fire-with-fire dynamic that can only lead to worse hiring decisions. The verdict: use humans to hire humans.