Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information
Training and evaluating fair classifiers is a challenging problem. This is partly because most fairness metrics of interest depend on both the sensitive attribute information and the label information of the data points, and in many scenarios it is not possible to collect large datasets with such information. A common alternative is to separately train an attribute classifier on data with sensitive attribute information, and then use it later in the ML pipeline to evaluate the bias of a given classifier. While such decoupling helps alleviate the problem of demographic scarcity, it raises several natural questions: how should the attribute classifier be trained, and how should one use a given attribute classifier for accurate bias estimation? In this work we study these questions from both theoretical and empirical perspectives. We first demonstrate experimentally that the test accuracy of the attribute classifier is not always correlated with its effectiveness in bias estimation for a downstream model. To investigate this phenomenon further, we analyze an idealized theoretical model and characterize the structure of the optimal attribute classifier. Our analysis has surprising and counter-intuitive implications: in certain regimes, one might want to distribute the error of the attribute classifier as unevenly as possible among the different subgroups. Based on our analysis, we develop heuristics for both training and using attribute classifiers for bias estimation in the data-scarce regime. We empirically demonstrate the effectiveness of our approach on real and simulated data.
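For concreteness, here is a minimal sketch of the decoupled pipeline the abstract describes, not the paper's actual method: a proxy attribute classifier is fit on a small auxiliary set where the sensitive attribute is observed, and its predictions stand in for the true groups when estimating a downstream model's demographic parity gap. The synthetic data-generating process and all names (task_model, attr_model, dp_gap) are illustrative assumptions.

    # Sketch of decoupled bias estimation with a proxy attribute classifier.
    # Synthetic data and all names are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Synthetic population: features X, sensitive attribute A, task label Y.
    n = 20_000
    A = rng.integers(0, 2, size=n)                        # true group membership
    X = rng.normal(loc=A[:, None] * 0.8, size=(n, 5))     # features correlate with A
    Y = (X[:, 0] + rng.normal(size=n) > 0.5).astype(int)

    # Downstream task model, trained without access to A.
    task_model = LogisticRegression().fit(X[:10_000], Y[:10_000])
    yhat = task_model.predict(X[10_000:])

    # Attribute classifier, trained on a small auxiliary set where A is known.
    attr_model = LogisticRegression().fit(X[:2_000], A[:2_000])
    ahat = attr_model.predict(X[10_000:])

    def dp_gap(preds, groups):
        # Demographic parity gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|.
        return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

    print("oracle DP gap (true A):    ", dp_gap(yhat, A[10_000:]))
    print("proxy  DP gap (predicted A):", dp_gap(yhat, ahat))

The difference between the two printed numbers is the estimation error the paper studies: because each predicted group mixes members of both true groups, the proxy estimate is generally distorted, and how the attribute classifier's errors are split across subgroups, rather than its overall accuracy alone, governs the size of that distortion. The paper's heuristics for training and deploying the attribute classifier are not reproduced in this sketch.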