The last 10 years have been described by many as the ‘decade of disruption’. In this brief period, advances in technology have brought us Alexa, driverless cars, 3D printing, and AI that has crept silently into our lives in one form or another.
Of course, whether these developments are to be welcomed – perhaps as the harbinger of a Fifth Industrial Revolution that will free up productivity and wellbeing – or are an existential threat to humanity, depends on who you ask. The answer is certainly above the pay grade of this author.
What's certain, however, is that AI will be a game changer for recruitment. And that particular genie is unlikely to be put back in the bottle.
61% of companies still use CVs as an initial means of assessing candidates
So, let’s start at the beginning. Research suggests that around 61% of companies still use CVs as an initial means of assessing candidates. While some candidates have always had a tendency towards “embellishment”, the difference now is that they can simply log on to ChatGPT (other AI tools are available), provide details of the job and its requirements, and let the software do the work.
And work it does. Earlier this year it was reported that Neil Taylor, founder of a communications company called Schwa (which, incidentally, means a short neutral vowel sound – one for the pub quizzers among you), tested his recruitment team to find out whether they could spot a job application written by ChatGPT. After sifting, only 20% of candidates were invited to interview, of which ChatGPT’s application was one. Taylor reportedly told Sky News: “It was more competent than many of the bad people who apply to us”. Hardly a ringing endorsement, but it is, after all, early days for generative AI.
The use of ChatGPT also seems widespread. A recent survey by US firm ResumeBuilder.com found that 46% of job seekers were using ChatGPT to write their CVs or cover letters and 7 in 10 reported a higher response rate as a result.
Universities investigating students for using ChatGPT to cheat
But let’s roll back a bit. According to student website The Tab, students and academics at eight Russell Group universities visited the ChatGPT website more than a million times during December 2022 and January 2023, and by June 2023 over 40% of universities were investigating students for using ChatGPT to cheat. Not bad going for a tool that only launched in November 2022.
Of course, universities are taking the matter seriously. But it does raise the question: can employers trust the qualifications their candidates profess to hold and include on their (potentially AI-generated) CVs? It may be too early to say.
So what does all this mean for you as an employer? Your initial response may well be to fight fire with fire and harness your own AI in recruitment. Tools now abound to help you screen applications, from sifting written applications (which does give rise to the dystopian possibility of robots marking their own homework) through to video interviews that assess candidates by analysing speech patterns and physical “micro-expressions”.
But hold fire. The Equality Act 2010 is very clear that its provisions apply not only to employment itself, but also to the “arrangements A makes for deciding to whom to offer employment”. This means that any selection process must not be discriminatory.
Rubbish in, rubbish out
A detailed discussion around bias in AI is beyond the scope of this article. However, it's key to remember the adage that AI isn't created in a bubble; it's created by humans, and humans have biases. Rubbish in, rubbish out, as my old computer science teacher used to say.
And there are some well-known AI fails. For example, in 2018 Amazon announced that it had abandoned a candidate ranking tool that was found to systematically downgrade women’s CVs.
Further (and setting aside the validity of the underpinning science), any tool that seeks to “rank” language or micro-expressions risks discriminating against those with disabilities or those for whom English is not their first language.
So let’s take a step back. As a lawyer I'm trained to ask the question “what is the mischief this solution is intended to avoid?” And then the follow-up question: “does it?”
In recruitment the mischief is usually twofold: first, the need to save time and expense, and second, the laudable desire to avoid subconscious bias and inject “objectivity” into the process. However, while it’s true that automating recruitment will likely achieve the former, that holds only if the recruitment process is viewed in isolation from the rest of the employment lifecycle.
Attempts to outsource diversity may entrench cultures of inequality and discrimination
What do I mean by this? Well, a report from Cambridge University researchers published in October last year found that attempts to outsource “diversity work” to AI hiring tools may unintentionally entrench cultures of inequality and discrimination by failing to address systemic problems within organisations. In short, the AI is being trained to recruit in the current image of the organisation. This failure to address the second mischief then feeds into the widely accepted productivity losses of a non-diverse workforce, which may well offset any savings made at the recruitment stage.
But there's another question. While we're busy ranking and measuring and plotting on graphs, are we still recruiting the right people? Are we listening to the experiences of our candidates, and using skills gained through a lifetime of engaging with people and hard-earned HR experience, to assess fit within the organisation? Are we truly creating a workforce fit for the Fifth Industrial Revolution – one that brings skills, experience and diversity – when we reduce the sum of a human to a score on an email sent to HR? On this I remain (for now) unconvinced.
*With apologies to REM