Local Law 144: A Critical Analysis of Regression Metrics

by Giulio Filippi, et al.

The use of automated decision tools in recruitment has received increasing attention. In November 2021, the New York City Council passed legislation (Local Law 144) that mandates bias audits of Automated Employment Decision Tools. From 15th April 2023, companies that use automated tools for hiring or promoting employees are required to have these systems audited by an independent entity. Auditors are asked to compute bias metrics that compare outcomes for different groups, based on sex/gender and race/ethnicity categories at a minimum. Local Law 144 proposes novel bias metrics for regression tasks (scenarios where the automated system scores candidates on a continuous range of values). A previous version of the legislation proposed a bias metric that compared the mean scores of different groups. The revised bias metric instead compares the proportion of candidates in each group that falls above the median. In this paper, we argue that both metrics fail to capture distributional differences over the whole domain and therefore cannot reliably detect bias. We first introduce two metrics as possible alternatives to those in the legislation. We then compare these metrics over a range of theoretical examples, for which the metrics proposed in the legislation appear to underestimate bias. Finally, we study real data and show that the legislation's metrics can similarly fail in a real-world recruitment application.
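The two metrics discussed in the abstract can be sketched in a few lines. This is a minimal illustration, not the auditors' implementation: the group names and scores are invented, and the normalization (dividing each group's statistic by the highest group's, as in an impact ratio) is one common convention. The example also shows the paper's central concern: two groups with visibly different score distributions can receive identical proportion-above-median ratios.

```python
import statistics

def mean_score_metric(scores_by_group):
    """Earlier draft metric: ratio of each group's mean score
    to the highest group mean."""
    means = {g: statistics.mean(s) for g, s in scores_by_group.items()}
    best = max(means.values())
    return {g: m / best for g, m in means.items()}

def scoring_rate_metric(scores_by_group):
    """Revised metric: proportion of each group scoring above the
    pooled median, normalized by the highest group's proportion."""
    pooled = [x for s in scores_by_group.values() for x in s]
    med = statistics.median(pooled)
    rates = {g: sum(x > med for x in s) / len(s)
             for g, s in scores_by_group.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical groups: group_a's scores are far more spread out than
# group_b's, yet both metrics can report little or no disparity.
scores = {"group_a": [0.2, 0.4, 0.6, 0.8],
          "group_b": [0.45, 0.5, 0.55, 0.6]}
print(scoring_rate_metric(scores))  # both groups get ratio 1.0
```

Here the pooled median is 0.525, and exactly half of each group lies above it, so the revised metric reports perfect parity despite the clearly different distributions.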


