What does it mean to solve the problem of discrimination in hiring? Social, technical and legal perspectives from the UK on automated hiring systems

09/28/2019
by Javier Sanchez-Monedero, et al.

The ability to get and keep a job is a key aspect of participating in society and sustaining livelihoods. Yet the ways decisions are made about who is eligible for jobs, and why, are rapidly changing with the advent and growing uptake of automated hiring systems (AHSs) powered by data-driven tools. Key concerns about such AHSs include their lack of transparency and their potential to limit access to jobs for specific profiles. In relation to the latter, however, several of these AHSs claim to detect and mitigate discriminatory practices against protected groups and to promote diversity and inclusion at work. Yet whilst these tools have a growing user base around the world, such claims of bias mitigation are rarely scrutinised and evaluated, and when they are, it has been almost exclusively from a US socio-legal perspective. In this paper, we introduce a perspective outside the US by critically examining how three prominent AHSs in regular use in the UK, HireVue, Pymetrics and Applied, understand and attempt to mitigate bias and discrimination. Using publicly available documents, we describe how their tools are designed, validated and audited for bias, highlighting assumptions and limitations, before situating these in the socio-legal context of the UK. The UK has a very different legal background to the US, not only in terms of hiring and equality law, but also in terms of data protection (DP) law. We argue that this might be important for addressing concerns about transparency, and could pose a challenge to building bias mitigation into AHSs in a way that definitively meets EU legal standards. This is significant because these AHSs, especially those developed in the US, may obscure rather than improve systemic discrimination in the workplace.

Related research

- 03/16/2018: Some HCI Priorities for GDPR-Compliant Machine Learning. "In this short paper, we consider the roles of HCI in enabling the better..."
- 08/02/2020: Análisis jurídico de la discriminación algorítmica en los procesos de selección laboral (Legal analysis of algorithmic discrimination in hiring processes). "The use of machine learning systems in processing job applications has m..."
- 04/12/2021: Towards Algorithmic Transparency: A Diversity Perspective. "As the role of algorithmic systems and processes increases in society, s..."
- 05/02/2022: The Theory of Artificial Immutability: Protecting Algorithmic Groups Under Anti-Discrimination Law. "Artificial Intelligence (AI) is increasingly used to make important deci..."
- 07/13/2023: National Origin Discrimination in Deep-learning-powered Automated Resume Screening. "Many companies and organizations have started to use some form of AI-enab..."
- 01/27/2023: "Finding the Magic Sauce": Exploring Perspectives of Recruiters and Job Seekers on Recruitment Bias and Automated Tools. "Automated recruitment tools are proliferating. While having the promise ..."
- 12/11/2014: Certifying and removing disparate impact. "What does it mean for an algorithm to be biased? In U.S. law, unintentio..."
