AI Has Changed the World. What’s Next for the Future of Work?

Published by: Evan Forman
October 30, 2023
11 min read

We've always been vocal about the ethical use of AI in the recruitment process: Applied is built from the ground up to eliminate bias in hiring, and AI has too often been the perfect tool for perpetuating it. But with one-third of companies already using AI for at least one function, it's clear that this technology isn't going away. The way teams hire has to change, and our platform has had to change with them.

In light of the rapid developments in generative AI and its impact on the hiring landscape, we recently held an in-person customer event with our CEO, Khyati Sundaram, and guest speaker Ed Bradon of the Behavioural Insights Team. The session brought together a diverse group of organisations, all dealing with concerns and opportunities related to AI in recruitment.

Amid concerns about the fairness and accuracy of AI, the session emphasised the importance of adapting as both recruitment practice and AI technology change, and served as a collaborative forum for working through shared challenges.

Let's talk about our longstanding position on AI in recruitment, and how we're adapting to a post-GPT hiring landscape.

The evolving landscape of generative AI in recruitment

For years, employers have been using AI recruitment tools to sift through hundreds or thousands of applications for any given role. They've been using machine learning systems to scan CVs and shortlist candidates based on relevant keywords. They've used game-like online tests and video interviewing software that would automatically rank candidates based on factors like their perceived English proficiency or personality traits.

But as everyone has got to grips with ChatGPT since late 2022, candidates have increasingly been using AI in applications and interviews. Recruiters are tired of wading through AI-generated cover letters, which arrive at a time when the number of applications per job is already soaring.

Even highly skilled vacancies are susceptible to AI: with GPT-4 scoring in the top 10% of test-takers in a simulation of the US bar exam, it could bluff its way through most interview processes if they were conducted entirely online. With these systems poised to get dramatically more powerful as time goes on, the traditional hiring process might not be able to cope.

Applied's unique approach to hiring has been challenged, too. We found that the three most common challenges our attendees have encountered since the release of ChatGPT are:

  • Higher volumes of candidate responses to review
  • Greater difficulty in distinguishing a high-quality answer
  • Needing to change tack as some questions no longer ‘work’

So how has Applied adapted to these challenges, and should we be using AI in our response?

What are the problems with using AI in recruitment?

Despite its advances, AI is still biased and inaccurate in a lot of ways. We've been vocal about not using it in the past, and controversies around AI in recruitment have only validated that position.

AI can only produce outputs that look like the data it was trained on. We see this illustrated by diffusion models like Midjourney and DALL-E, whose "original" imagery is always based on existing artists or styles in the training data.

If you're relying on AI tools like automated CV screening and AI-scored video interviews in your recruitment process, you could end up reproducing historical, biased norms.

That's assuming the tool even works as advertised: in one investigation, a video interviewing tool scored a candidate 6 out of 9 for English competency despite her speaking only German.

When Amazon tried to build an AI tool to automate its recruitment operation, the result was so biased against women that the company scrapped it in 2018. The tool was extrapolating from the high percentage of men applying for Amazon's technical roles: just looking at the historical data, it inferred that a successful applicant was probably going to be a male one.

These biased, unaccountable AI recruitment tools are endemic to the way hiring is done today: a 2022 survey found that 79% of employers were using them. Only now are regulators starting to take action around these systems.

This year, New York City started requiring that employers using AI to recruit, hire, or promote employees submit those algorithms for an independent audit and make the results of that audit public. Employers will also have to disclose to all candidates that they're using AI to make the decisions.

As well as the algorithms they're using, employers will also have to list the "average score" those algorithms output for candidates of different races, ethnicities, and genders. They'll also need to publicise an "impact ratio" for each demographic: the average score of everyone in that demographic divided by the average score for the highest-rated demographic.
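To make that calculation concrete, here's a minimal sketch in Python. The group names and scores are hypothetical, purely for illustration; the only thing taken from the rule is the average-divided-by-top-average formula.

```python
def impact_ratios(scores_by_group):
    """Compute an impact ratio per demographic group.

    `scores_by_group` maps a demographic group to the list of scores
    the algorithm gave candidates in that group (hypothetical data).
    """
    averages = {g: sum(s) / len(s) for g, s in scores_by_group.items()}
    best = max(averages.values())  # average score of the highest-rated group
    return {g: avg / best for g, avg in averages.items()}

# Hypothetical example scores, purely for illustration:
scores = {
    "group_a": [82, 75, 90, 88],
    "group_b": [70, 68, 74, 72],
    "group_c": [85, 80, 87, 83],
}
for group, ratio in impact_ratios(scores).items():
    print(f"{group}: impact ratio {ratio:.2f}")
```

A ratio well below 1.0 for a group signals that the algorithm rates that group lower, on average, than the highest-rated one.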

Speaking to TechCrunch, Khyati Sundaram said "We're not yet at a place where algorithms can or should be trusted to make these decisions on their own without mirroring and perpetuating biases that already exist in the world of work."

AI is a support, not a substitute

We think AI can make recruitment more efficient, but it should support, not replace, human judgement. Our approach is to emphasise how AI can assist human decision-making.

We're using AI as a support, not a substitute. 

At the event, Khyati unveiled two new product features, developed in response to candidates' use of AI in sift answers.

  1. Flagging Feature - Hiring teams can now flag answers they suspect have been generated by AI. OpenAI has confirmed that there's currently no reliable automated way to determine whether text is AI-generated, human-written, or something in between. But if you've spent any time reviewing sift answers, you'll probably be able to notice the patterns in AI-generated responses. The flag feature allows hiring teams to mark such responses for further review and investigation when needed.
  2. Comparative AI-Generated Reference Answers - We're now including AI-generated reference answers to the questions employers are asking. This means anyone can review sift questions with confidence: they have an example on hand of how ChatGPT would answer that exact question (a sketch of how such an answer can be generated follows below).
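For illustration, here's a minimal sketch of how a reference answer might be generated with OpenAI's chat API. This is our sketch, not Applied's actual implementation; the model choice, system prompt, and example question are all assumptions.

```python
from openai import OpenAI

client = OpenAI()  # expects the OPENAI_API_KEY environment variable

def reference_answer(sift_question: str) -> str:
    """Ask a chat model to answer a sift question as a candidate might."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model choice, purely for illustration
        messages=[
            {"role": "system",
             "content": "Answer this job application question as a candidate would."},
            {"role": "user", "content": sift_question},
        ],
    )
    return response.choices[0].message.content

print(reference_answer(
    "Tell us about a time you had to balance competing priorities."
))
```

Reviewers can then read a candidate's answer side by side with the model's answer to the same question, which makes generic, machine-written responses easier to spot.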

Both features have been carefully considered and thoughtfully integrated to provide ethical, future-proofed answers to the challenges AI presents in recruitment. By adapting deliberately, we can keep our users' hiring processes as fast and efficient as possible, without trading away the platform's fairness or predictive capability.

Test, Learn, Adapt

At the event, Ed Bradon, Director of the Behavioural Insights Team, provided useful tools and advice to assist operations and talent teams in addressing AI-related recruitment challenges. He stressed the importance of adaptability and a learning mindset.

Ed spoke about the Behavioural Insights Team's tests on the Applied platform, where GPT was used to respond to work sample questions alongside human candidates.
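Running that kind of blind test takes very little machinery. Here's a minimal sketch (ours, not the Behavioural Insights Team's actual setup) that shuffles model-written answers in among human ones under anonymous IDs, so reviewers score them without knowing the source:

```python
import random

def build_blind_review_sheet(human_answers, ai_answers, seed=42):
    """Mix AI-written answers in with human ones under anonymous IDs.

    Returns (sheet, key): `sheet` is what reviewers score blind; `key`
    records which anonymous ID maps to a human or an AI answer.
    """
    pool = [(text, "human") for text in human_answers]
    pool += [(text, "ai") for text in ai_answers]
    random.Random(seed).shuffle(pool)
    sheet = {f"answer_{i}": text for i, (text, _) in enumerate(pool)}
    key = {f"answer_{i}": source for i, (_, source) in enumerate(pool)}
    return sheet, key

# Hypothetical answers, purely for illustration:
sheet, key = build_blind_review_sheet(
    ["I'd start by mapping the stakeholders involved...",
     "My first step would be a quick triage of the workload..."],
    ["There are several key considerations to balance here..."],
)
for answer_id, text in sheet.items():
    print(answer_id, "->", text[:45])
```

After scoring, comparing reviewers' marks against the key shows whether the model's answers out-score real candidates on a given question, and which questions therefore need rethinking.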

Watch Ed's full talk below:

As a result of these blind tests, they've made the following adjustments to their hiring funnel, and recommend Applied users do the same:

Do more:
  • Interviews ++
  • In-person interviews +++
  • In-person tasks ++
  • Phone screen interviews ++
  • ‘Creative’ sift questions +
  • Applied tools, e.g. tagging ++

Do less:
  • ‘Naive’ sift questions —
  • Offline tasks (unless AI allowed) — —

…and keep doing blind testing!

What's Next at Applied?

As AI continues to change the world of work over the coming years, we're going to keep testing Applied with new AI tools as soon as they come online, and we're going to keep listening to users about how AI is changing things for them.

And we're going to go further with our belief that AI should enhance human decision-making rather than replace it.

We have plenty of ideas about how AI can help us double down on what makes Applied unique: it's purpose-built to make the hiring process fair, auditable, and enjoyable for hiring teams and candidates.

We envision a future where AI plays a crucial role in supporting Applied users. We believe AI could help employers identify the top skills required for specific roles, formulate the most effective interview questions for assessing those skills, and even suggest the fairest, least biased way to evaluate candidates. Rather than fearing AI, we embrace it as a gateway to new opportunities in the world of work. As with all our product features, though, we need to ensure that any AI-driven features remain free from bias.

Many HR professionals understandably feel that AI-assisted applications have broken the traditional hiring process.

We think it's been broken for a long time: candidates hired with Applied have a 93% retention rate after one year, but the traditional hiring process would have missed 60% of them.

Used carefully, we think AI can help our users achieve a hiring process that's easy, fast, and made fairer by using technology to remove human cognitive biases, instead of amplifying them.