Residual Unfairness in Fair Machine Learning from Prejudiced Data
Recent work in fairness in machine learning has proposed adjusting for fairness by equalizing accuracy metrics across groups and has also studied how datasets affected by historical prejudices may lead to unfair decision policies. We connect these lines of work and study the residual unfairness that arises when a fairness-adjusted predictor is not actually fair on the target population due to systematic censoring of training data by existing biased policies. This scenario is particularly common in the same applications where fairness is a concern. We characterize theoretically the impact of such censoring on standard fairness metrics for binary classifiers and provide criteria for when residual unfairness may or may not appear. We prove that, under certain conditions, fairness-adjusted classifiers will in fact induce residual unfairness that perpetuates the same injustices, against the same groups, that biased the data to begin with, thus showing that even state-of-the-art fair machine learning can have a "bias in, bias out" property. When certain benchmark data is available, we show how sample reweighting can estimate and adjust fairness metrics while accounting for censoring. We use this to study the case of Stop, Question, and Frisk (SQF) and demonstrate that attempting to adjust for fairness perpetuates the same injustices that the policy is infamous for.
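The reweighting idea described above can be sketched roughly as follows. This is a minimal illustration, not the paper's estimator: the group indicators, the synthetic data, and the observation probabilities `p_observed` are all hypothetical, and the metric shown (a weighted true-positive-rate gap) is just one of the standard fairness metrics for binary classifiers that the abstract refers to. In practice, the probability that each individual escapes censoring would have to be estimated from the kind of benchmark data the authors mention.

```python
import numpy as np

def group_tpr(y_true, y_pred, group, weights=None):
    """Weighted true positive rate for one group (equal-opportunity-style metric)."""
    if weights is None:
        weights = np.ones_like(y_true, dtype=float)
    mask = group & (y_true == 1)
    denom = weights[mask].sum()
    return np.nan if denom == 0 else (weights[mask] * y_pred[mask]).sum() / denom

def tpr_disparity(y_true, y_pred, group_a, group_b, weights=None):
    """TPR gap between two groups; zero means parity on this particular metric."""
    return (group_tpr(y_true, y_pred, group_a, weights)
            - group_tpr(y_true, y_pred, group_b, weights))

# --- Illustration: naive vs. censoring-aware (reweighted) evaluation ---
rng = np.random.default_rng(0)
n = 100_000
group_a = rng.random(n) < 0.5                     # hypothetical group indicator
group_b = ~group_a
y_true = (rng.random(n) < 0.3).astype(int)        # synthetic outcomes
y_pred = (rng.random(n) < 0.3).astype(int)        # synthetic classifier decisions

# Hypothetical observation probabilities: individuals flagged by the historical
# policy (proxied here by y_pred) are recorded more often, and more so in one
# group, so the observed sample systematically misrepresents the population.
p_observed = 0.2 + 0.6 * (y_pred * group_a) + 0.3 * (y_pred * group_b)
observed = rng.random(n) < p_observed

# Naive estimate: computed only on the censored (observed) sample.
naive = tpr_disparity(y_true[observed], y_pred[observed],
                      group_a[observed], group_b[observed])

# Reweighted estimate: inverse-probability weights undo the censoring so the
# metric targets the full population rather than the observed subsample.
ipw = 1.0 / p_observed[observed]
adjusted = tpr_disparity(y_true[observed], y_pred[observed],
                         group_a[observed], group_b[observed], weights=ipw)

print(f"naive TPR disparity:      {naive:+.3f}")
print(f"reweighted TPR disparity: {adjusted:+.3f}")
```

In this synthetic setup the true disparity is roughly zero, yet the naive estimate on the censored sample is not; the inverse-probability weights make the observed records stand in for the population they were drawn from, which is what lets the reweighted metric estimate (and be adjusted for) fairness on the target population rather than on the biased sample.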