Adversarial Removal of Gender from Deep Image Representations
In this work, we analyze visual recognition tasks such as object and action recognition, and demonstrate the extent to which these tasks are correlated with features corresponding to a protected variable such as gender. We introduce the concept of natural leakage to measure the intrinsic reliance of a task on a protected variable. We further show that machine learning models of visual recognition trained for these tasks tend to exacerbate the reliance on gender features. To address this, we use adversarial training to remove unwanted features corresponding to protected variables from intermediate representations in a deep neural network. Experiments on two datasets, COCO (objects) and imSitu (actions), show reductions in the extent to which models rely on gender features while maintaining most of the accuracy of the original models. These results even surpass a strong baseline that blurs or removes people from images using ground-truth annotations. Moreover, we provide convincing, interpretable visual evidence through an autoencoder-augmented model that this approach performs semantically meaningful removal of gender features, and thus can also be used to remove gender attributes directly from images.
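The adversarial removal described in the abstract is commonly realized by attaching an adversarial classifier to an intermediate representation and training the encoder to fool it. The following is a minimal PyTorch sketch of that general idea using a gradient-reversal layer; the module names, feature dimensions, class counts, and the loss weight `lambda_adv` are illustrative assumptions, not the paper's exact architecture or training setup.

```python
# Hypothetical sketch of adversarial feature removal via gradient reversal.
# All sizes, heads, and the weight `lambda_adv` are illustrative assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


class AdversarialRemovalModel(nn.Module):
    def __init__(self, feat_dim=2048, num_classes=80, lambda_adv=1.0):
        super().__init__()
        self.lambda_adv = lambda_adv
        # Encoder producing the intermediate representation (stand-in for a CNN backbone).
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU())
        # Main task head (e.g., object or action recognition).
        self.task_head = nn.Linear(512, num_classes)
        # Adversarial head that tries to predict the protected variable from the representation.
        self.adv_head = nn.Linear(512, 2)

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        # Gradient reversal: the adversary learns to predict the protected attribute
        # from z, while the encoder receives reversed gradients and learns to hide it.
        adv_logits = self.adv_head(grad_reverse(z, self.lambda_adv))
        return task_logits, adv_logits


# Usage sketch: both losses are minimized jointly; the reversal makes the encoder
# maximize the adversary's loss while the adversary minimizes its own.
model = AdversarialRemovalModel()
task_loss_fn = nn.CrossEntropyLoss()
adv_loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 2048)            # e.g., pooled CNN features
task_labels = torch.randint(0, 80, (8,))   # recognition labels
prot_labels = torch.randint(0, 2, (8,))    # protected-attribute labels

task_logits, adv_logits = model(features)
loss = task_loss_fn(task_logits, task_labels) + adv_loss_fn(adv_logits, prot_labels)
loss.backward()
```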