Reflections on a Year of Intense AI Debate: Why Hiring Bias Has to be Part of the Conversation in 2024

Published by:
Khyati Sundaram
December 18, 2023

As the excitement around the first AI Safety Summit dies down and the dust begins to settle, I've been reflecting on some of the key topics left out of the conversation in 2023, namely the use of AI in hiring.

This absence from the agenda, and a reluctance to recognise the scale of the problem, are fuelling discriminatory hiring practices.

In 2024, we must expedite regulation to prevent biased AI from excluding millions of marginalised and under-represented workers from jobs. White papers and discussion papers have repeatedly acknowledged that discrimination resulting from AI may contravene the protections set out in the Equality Act 2010, but the pernicious effect that AI bias will have on recruitment practices has been chronically overlooked. Unregulated AI could bake human bias into code and amplify existing inequalities. 

Conclusions drawn by AI are entirely determined by the data it is trained on. Models trained on historical data therefore reinforce gender, age, and racial stereotypes. Legislation and social change have helped to dismantle some of the biases that hold back marginalised groups, but there is still so much work to do – and AI hiring tools that are poorly designed risk setting us back in a fight for diversity we are yet to win.

AI has many positive applications in recruitment. AI models trained on de-biased data can prevent human bias from influencing hiring decisions. De-biased, or "ethical", AI can help identify the role-specific skills human hiring teams should be testing candidates for. And it can help us remove gendered language from job adverts, which research at Applied shows can actively deter female candidates from applying to roles.

The solution is simple: employers should use AI only if their models are trained on data that has been cleaned of historical bias. This is achievable: a new law in New York City requires employers who use AI hiring tools to submit their algorithms for independent audit.

While AI may expedite recruitment processes, speed must not be favoured over fairness. It speaks volumes that rich data on where, how, and to what effect companies in the UK are using AI to hire does not exist. It's a gaping data void we must urgently fill.

Until we introduce regulation that ensures AI can be trusted to score candidates fairly, hiring decisions must rest with humans and algorithms must be audited. Ultimately, AI extrapolates and perpetuates biased patterns that we must not allow our society or organisations to reflect.

I look forward to the next AI Safety Summit in 2024, where we’ll be pushing for AI in hiring to be on the agenda and treated with the seriousness and proactivity it deserves.