You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features
Though machine learning models are achieving great success, extensive studies have exposed their drawback of inheriting latent discrimination and societal bias from the training data, which hinders their adoption in high-stakes applications. Thus, many efforts have been made to develop fair machine learning models, most of which require that sensitive attributes be available during training. However, in many real-world applications, it is usually infeasible to obtain sensitive attributes due to privacy or legal restrictions, which challenges existing fair classifiers. Though the sensitive attribute of each data sample is unknown, we observe that the training data usually contain some non-sensitive features that are highly correlated with sensitive attributes, and these can be used to alleviate the bias. Therefore, in this paper, we study a novel problem of exploiting features that are highly correlated with sensitive attributes to learn a fair and accurate classifier without sensitive attributes. We theoretically show that by minimizing the correlation between these related features and the model's predictions, we can learn a fair classifier. Based on this observation, we propose a novel framework that simultaneously uses the related features for accurate prediction and regularizes the model to be fair. In addition, the framework dynamically adjusts the importance weight of each related feature to balance its contribution to classification accuracy and fairness. Experimental results on real-world datasets demonstrate that the proposed framework learns fair models while maintaining high classification accuracy.
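To make the regularization idea concrete, below is a minimal PyTorch sketch of a loss that penalizes the correlation between each related non-sensitive feature and the model's prediction, with a learnable per-feature weight balancing accuracy and fairness. This is an illustrative sketch under assumed names, not the authors' exact formulation: `pearson_corr`, `fairness_regularized_loss`, `feat_weights`, and `lam` are all hypothetical, and the abstract does not specify the precise correlation measure or weighting scheme used in the paper.

```python
import torch
import torch.nn.functional as F

def pearson_corr(a, b, eps=1e-8):
    # Pearson correlation between two 1-D tensors.
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (a.norm() * b.norm() + eps)

def fairness_regularized_loss(logits, labels, related_feats, feat_weights, lam=1.0):
    """Cross-entropy plus a correlation penalty on related features.

    logits:        (N, C) model outputs
    labels:        (N,)   ground-truth class indices
    related_feats: (N, K) non-sensitive features assumed to be highly
                   correlated with the (unobserved) sensitive attribute
    feat_weights:  (K,)   learnable importance weights (hypothetical stand-in
                   for the paper's dynamic per-feature weighting)
    lam:           trade-off between classification accuracy and fairness
    """
    ce = F.cross_entropy(logits, labels)
    probs = torch.softmax(logits, dim=1)[:, 1]   # positive-class score
    w = torch.softmax(feat_weights, dim=0)       # normalize the weights
    penalty = sum(
        w[k] * pearson_corr(related_feats[:, k], probs).abs()
        for k in range(related_feats.shape[1])
    )
    return ce + lam * penalty
```

In this sketch, `lam` plays the usual role of a fairness-accuracy knob: a larger value pushes predictions toward independence from the related features (and hence, by assumption, from the sensitive attribute), at some cost in accuracy.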