Grouping-By-ID: Guarding Against Adversarial Domain Shifts
When training a deep network for image classification, one can broadly distinguish between two types of latent features that drive the classification. Following Gong et al. (2016), we divide features into (i) "core" features X^ci whose conditional distribution P(X^ci | Y) does not change substantially across domains and (ii) "style" or "orthogonal" features X^⊥ whose conditional distribution P(X^⊥ | Y) can change substantially across domains. The orthogonal features generally include simple attributes such as position or brightness, but also more complex ones such as hair color or posture for images of persons. We aim to guard against future adversarial domain shifts by ideally using only the "core" features for classification. In contrast to previous work, we assume that the domain itself is not observed and is hence a latent variable; that is, we cannot directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called ID variable. For example, we might know that two images show the same person, with ID referring to the identity of the person. The method requires only a small fraction of images to carry an ID variable. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). If two or more samples share the same class and identifier, we treat those samples as counterfactuals under different interventions on the orthogonal features. Using this grouping-by-ID approach, we regularize the network to produce near-constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This substantially improves performance in settings where domains change in terms of image quality, brightness, color or posture and movement. We show links to questions of interpretability, fairness, transfer learning and adversarial examples.
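The sketch below illustrates one way the grouping-by-ID penalty described above could be implemented; it is not the paper's exact formulation. It assumes a PyTorch classifier and an integer group label per sample (the names `grouping_by_id_penalty`, `ids` and `lambda_reg` are illustrative). Summing squared deviations of each sample's output from its group mean is the quadratic form of a block-diagonal graph Laplacian that connects samples sharing the same ID, which is the regularization idea stated in the abstract.

```python
# Hedged sketch of a grouping-by-ID regularizer (illustrative names, not the paper's code).
import torch

def grouping_by_id_penalty(outputs, ids):
    """Penalize deviation of each sample's output from its group mean.

    `outputs`: tensor of shape (batch, num_classes or feature_dim).
    `ids`: integer tensor of shape (batch,) marking samples that share class and ID.
    The sum of squared within-group deviations equals a graph-Laplacian
    quadratic form over the graph that links samples with the same ID.
    """
    penalty = outputs.new_zeros(())
    for g in torch.unique(ids):
        group = outputs[ids == g]
        if group.shape[0] > 1:  # singleton groups contribute nothing
            penalty = penalty + ((group - group.mean(dim=0)) ** 2).sum()
    return penalty

# Possible use inside a training step (lambda_reg is an assumed tuning parameter):
# logits = model(images)
# loss = torch.nn.functional.cross_entropy(logits, labels) \
#        + lambda_reg * grouping_by_id_penalty(logits, ids)
```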