Assessing the Fairness of Classifiers with Collider Bias

10/08/2020, by Zhenlong Xu, et al.

The increasing maturity of machine learning technologies and their application to everyday decision making have raised concerns about the fairness of the resulting decisions. However, current fairness assessment methods often suffer from collider bias, which induces a spurious association between the protected attribute and the outcome. To evaluate the fairness of a prediction model at the individual level, in this paper we develop causality-based theorems that support the use of direct causal effect estimation for assessing the fairness of a given classifier without access to its original training data. Based on these theorems, an unbiased situation test method is presented to assess the individual fairness of a classifier's predictions by eliminating the impact of the classifier's collider bias on the assessment. Extensive experiments have been performed on synthetic and real-world data to evaluate the performance of the proposed method. The results show that the proposed method reduces bias significantly.
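To illustrate the idea of a situation test at the individual level, the minimal sketch below flips only the protected attribute of an individual and checks whether a classifier's decision changes. The toy classifier, its coefficients, and the function names are illustrative assumptions, not the paper's actual method; the paper's contribution is removing the collider bias that a naive test like this one can suffer from.

```python
# Hypothetical sketch of a basic (naive) situation test.
# The classifier below is an assumed toy model, not a model
# trained with the paper's procedure.

def toy_classifier(a, x):
    """Assumed toy model: a is the protected attribute (0/1),
    x is a legitimate feature; returns a binary decision."""
    score = 0.8 * x + 0.5 * a
    return 1 if score > 1.0 else 0

def situation_test(clf, x):
    """Return True if flipping the protected attribute alone
    changes the prediction for an individual with feature x."""
    return clf(0, x) != clf(1, x)

# Individuals near the decision boundary can be flagged:
print(situation_test(toy_classifier, 0.8))  # decision flips -> True
print(situation_test(toy_classifier, 2.0))  # decision stable -> False
```

Because the protected attribute may be spuriously associated with the outcome through a collider, a test like this can flag (or miss) individuals incorrectly; the proposed method corrects the assessment by estimating the direct causal effect instead.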


