When a rejection became a suggestion. How to successfully refer candidates to other jobs

September 4, 2017 · 4 min read

Originally published on Medium, September 4, 2017

A few weeks ago we wrote about the value of giving unsuccessful candidates feedback on their application. Yeah, you saw it: mind blown, we broke the internet. Read that article here.

But while feedback is clearly a really nice thing to offer, it’s still not the end goal. People apply for jobs because, you guessed it, they want a job. So recently we’ve been testing the power of referrals. Not your “ohh, you want a job? Here’s a long list of jobs”, but timely, relevant and attractive recommendations to jobs you might want to apply to.

Some recent research by Bersin by Deloitte suggests that candidates feel they are spending too long, on average 47 days, searching for a job. That gives us a window in which we might be able to help with referrals.

We decided to build our referral engine on some basic assumptions about which jobs are referral-worthy: the location, role type and timeliness. And about which candidates are referral-worthy: those who’ve just missed out on a role with a similar skill profile. Our most recent batch of referrals found two research-based internships in the UK public sector, both in London and closing within a week of each other. On paper, a great match.
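The matching assumptions above (same location, same role type, still open, and a near-miss on a similar role) can be sketched as a simple filter. To be clear, this is a hypothetical illustration rather than Applied’s actual engine, and every name and value below is made up:

```python
# Hypothetical sketch of the referral-matching rules described above.
from dataclasses import dataclass
from datetime import date

@dataclass
class Job:
    title: str
    location: str
    role_type: str
    closes: date

@dataclass
class Candidate:
    name: str
    near_miss_role_type: str  # role type of the job they just missed out on
    location: str

def referral_matches(candidate, jobs, today):
    """Return jobs that are timely, local, and similar to the missed role."""
    return [
        job for job in jobs
        if job.location == candidate.location
        and job.role_type == candidate.near_miss_role_type
        and job.closes >= today
    ]

jobs = [
    Job("Research internship B", "London", "research", date(2017, 9, 15)),
    Job("Sales associate", "Manchester", "sales", date(2017, 9, 20)),
]
candidate = Candidate("A. Candidate", "research", "London")
print(referral_matches(candidate, jobs, today=date(2017, 9, 4)))
```

In practice the hard part is the “similar skill profile” test, which this sketch reduces to a single role-type label.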

But did it work?

For ease, we will call the two hiring rounds internship A and internship B.

Internship A attracted over 300 candidates. The top 80 of those who just missed out on being invited to the next stage were recommended to apply to internship B (at the Centre for Public Impact, a BCG foundation), which attracted just over 100 candidates in total.

Impact on diversity

We found that, relative to the pool as a whole, Applied-referred candidates were:

  • almost twice as likely to be ethnically diverse (44% were from a minority heritage, relative to 26% across the board)
  • slightly more gender balanced (59% female vs 64% female)
  • younger (median age bracket of 18–24 vs 25–30); and
  • ever so slightly less likely to come from households where one or both parents had been to university (20% vs 22%).*

*These data exclude the small number who opted for ‘prefer not to say’.

Figure 1: Differences in ethnicity by type of candidate
Figure 2: Differences in gender by type of candidate

Performance correlation

Applied referrals made up 15% of all candidates (17 out of just over 100), but a whopping 43% of those waitlisted, and 13% of those interviewed.

Figure 3: Comparison of scores

Applied referrals scored higher on average than the group as a whole (16.13 vs 14.05, or about 15% higher) (fig 3).
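As a quick sanity check on that figure, using only the two averages quoted above:

```python
# Relative uplift of the referred group's average score over the whole pool.
referral_avg = 16.13
overall_avg = 14.05

uplift = (referral_avg - overall_avg) / overall_avg
print(f"Referrals scored {uplift:.0%} higher on average")  # ~15%
```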

(Because applications on Applied are always reviewed and scored anonymously and without hiring teams knowing who a referral is, we can be sure there was no bias in how they were scored.)

Candidate skill profiles

The internship A application was made up of four equally weighted sections: numerical reasoning, data interpretation, proof-reading, and verbal reasoning. Candidates who did best in proof-reading tended to score better on internship B, but the other sections weren’t positively correlated.
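One simple way to check this kind of relationship is a Pearson correlation between each section score and the internship B score. Here’s a minimal sketch with made-up numbers (not the real candidate data):

```python
# Hypothetical illustration: which internship A section scores move
# together with internship B performance? All figures are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-candidate section scores from internship A (hypothetical)
sections = {
    "numerical_reasoning": [3, 4, 2, 5, 3],
    "data_interpretation": [2, 5, 3, 4, 4],
    "proof_reading":       [1, 4, 2, 5, 3],
    "verbal_reasoning":    [4, 2, 5, 1, 3],
}
internship_b_score = [10, 16, 11, 18, 13]

for name, scores in sections.items():
    print(f"{name}: r = {pearson(scores, internship_b_score):.2f}")
```

With a sample this small the coefficients are only indicative; the real test would need the full applicant data.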

Summary

We started this because we wanted to design a good way of doing job referrals, and our assumptions about candidate skill profile, location, role type and timeliness seem to have held: this round of referrals was a big success.

Not only did Applied referrals do well on internship B, we also received lots of warm feedback from candidates (initially we weren’t sure they would appreciate the referral):

This is exactly the type of position I’m interested in. Thanks for letting me know.

Thank you for this link. I will be following it up.

Yes this is great — very well tailored rather than generic, thank you! Will be applying as soon as I have the time.

But we can still do more. Since there was some overlap in what internships A and B tested candidates on, there is room to reduce the burden and duplication for candidates applying to multiple jobs. Better matching on that skill profile is what we will start testing next.

Likewise, to properly test our matching assumptions we should check that those who didn’t do well in internship A also go on to not do well in internship B. The tricky thing is testing this in a way that isn’t a waste of time for the candidates or hiring teams.

Any suggestions on how we go about this? Please pop your ideas in the comments below.

If you want to try Applied yourself drop us a line at hello@beapplied.com or request a demo of our blind recruitment software.