Antidiscrimination AI for recruitment is ‘pseudoscience’, says Cambridge study
A University of Cambridge study has concluded that AI that aims to improve diversity and inclusion in recruitment is ‘pseudoscience’.
The study, published in the journal Philosophy & Technology, replicated a commercial AI model used in HR and found that the artificial intelligence could actually promote further discrimination in the hiring process.
The AI in question claims to assess candidates objectively by removing gender and race from its systems, in order to make the process more equal.
However, the study found that the AI-powered tools “are part of a much longer lineage of sorting, taxonomising and classifying voices and bodies along gendered and racialised lines”.
Named the “Personality Machine”, the system analyses images of people’s faces to infer the “Big Five” personality traits: extroversion, agreeableness, openness, conscientiousness, and neuroticism.
However, the study found that the software’s predictions were influenced by changes in people’s facial expressions, lighting, backgrounds, and clothing.
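To make that fragility concrete, the sketch below shows how such a tool could be probed. The study does not publish the commercial model, so predict_big_five here is a hypothetical stand-in that derives deterministic pseudo-scores from raw pixel values; like the real tool, per the study, its output shifts when nothing but the lighting changes.

```python
# Minimal sketch of a robustness probe, assuming a hypothetical scorer.
import numpy as np
from PIL import Image, ImageEnhance

TRAITS = ["extroversion", "agreeableness", "openness",
          "conscientiousness", "neuroticism"]

def predict_big_five(img):
    # Hypothetical stand-in for the commercial model: deterministic
    # pseudo-scores seeded by the raw pixel values, so any superficial
    # change to the image (lighting, background, clothing) changes
    # the "personality" it reports.
    pixels = np.asarray(img.convert("L"), dtype=np.float64)
    rng = np.random.default_rng(int(pixels.sum()) % 2**32)
    return rng.random(len(TRAITS))

def probe(path):
    base = Image.open(path)
    # Same face, with only the brightness altered.
    brighter = ImageEnhance.Brightness(base).enhance(1.3)
    before, after = predict_big_five(base), predict_big_five(brighter)
    for trait, b, a in zip(TRAITS, before, after):
        print(f"{trait:>17}: {b:.2f} -> {a:.2f}")

# probe("candidate.jpg")  # hypothetical input file
```

A scorer whose output moves with lighting rather than with the candidate is measuring the photograph, not the person, which is the core of the researchers’ “pseudoscience” charge.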
The study traces these flaws to the machine learning behind the tools, which is trained on historical data.
“Machine learning models are understood as predictive; however, since they are trained on past data, they are re-iterating decisions made in the past, not the future,” co-author Kerry Mackereth told The Register.
“As the tools learn from this pre-existing dataset, a feedback loop is created between what the companies perceive to be an ideal employee and the criteria used by automated recruitment tools to select candidates,” she added.
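The feedback loop Mackereth describes can be sketched in a few lines. The data below is synthetic and the selection rule is a plain logistic regression standing in for whatever model a vendor actually uses; feature 0 is a hypothetical proxy trait that happens to be overrepresented among past hires.

```python
# Minimal sketch of the hiring feedback loop, with synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical hires skew high on the proxy feature (column 0).
past_candidates = rng.normal(size=(500, 3))
past_hired = (past_candidates[:, 0] > 0.5).astype(int)

X, y = past_candidates, past_hired
for round_ in range(3):
    model = LogisticRegression().fit(X, y)
    pool = rng.normal(size=(500, 3))  # fresh applicant pool each round
    picks = model.predict_proba(pool)[:, 1] > 0.5
    print(f"round {round_}: mean proxy value of selected candidates = "
          f"{pool[picks, 0].mean():.2f}")
    # Selected candidates become the next round's "past hires", so the
    # historical skew is re-learned rather than corrected.
    X = np.vstack([X, pool[picks]])
    y = np.concatenate([y, np.ones(picks.sum(), dtype=int)])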
The researchers believe the technology needs stricter regulation. Eleanor Drage, another co-author of the study, told The Register: “We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers.”
“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested. As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer,” she added.
Earlier this year, the UK data watchdog launched a three-year review into whether AI recruitment tools are biased against ethnic minorities and neurodivergent people, groups that are often underrepresented when such systems are tested.
In April, 20 AI experts from organisations including Accenture and UNESCO were interviewed for a study that investigated what tech leaders could do to combat AI bias.