Grouping-By-ID: Guarding Against Adversarial Domain Shifts

10/31/2017
by Christina Heinze-Deml, et al.

When training a deep network for image classification, one can broadly distinguish between two types of latent features that drive the classification. Following Gong et al. (2016), we divide features into (i) "core" features X^ci, whose distribution P(X^ci | Y) does not change substantially across domains, and (ii) "style" or "orthogonal" features X^⊥, whose distribution P(X^⊥ | Y) can change substantially across domains. The latter orthogonal features generally include attributes such as position or brightness, but also more complex ones like hair color or posture in images of persons. We try to guard against future adversarial domain shifts by ideally using only the core features for classification. In contrast to previous work, we assume that the domain itself is not observed and is hence a latent variable; that is, we cannot directly see the distributional change of features across different domains. We do assume, however, that we can sometimes observe a so-called ID variable. For example, we might know that two images show the same person, with ID referring to the identity of the person. The method requires only a small fraction of images to carry such an ID. We provide a causal framework for the problem by adding the ID variable to the model of Gong et al. (2016). If two or more samples share the same class and identifier, we treat those samples as counterfactuals under different interventions on the orthogonal features. Using this grouping-by-ID approach, we regularize the network to produce near-constant output across samples that share the same ID by penalizing with an appropriate graph Laplacian. This substantially improves performance in settings where domains change in terms of image quality, brightness, color, or posture and movement. We show links to questions of interpretability, fairness, transfer learning and adversarial examples.
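To make the grouping-by-ID penalty concrete, below is a minimal PyTorch-style sketch (our own illustration, not the authors' released code). The function name grouping_penalty and the weight lam are hypothetical. Within each group of samples sharing the same (class, ID) pair, the penalty sums the squared deviations of the network outputs from the group mean; for a complete graph on each group, this agrees with the Laplacian quadratic form f^T L f up to the group-size factor.

    import torch
    import torch.nn.functional as F

    def grouping_penalty(outputs, group_ids):
        # Penalize within-group variation of the network outputs: for each
        # group of samples sharing the same (class, ID) pair, sum the squared
        # deviations from the group mean. For a complete graph on each group,
        # this equals f^T L f up to a constant depending on the group size.
        penalty = outputs.new_zeros(())
        for g in torch.unique(group_ids):
            mask = group_ids == g
            if mask.sum() < 2:
                continue  # singleton groups contribute nothing
            group_out = outputs[mask]
            penalty = penalty + ((group_out - group_out.mean(dim=0)) ** 2).sum()
        return penalty

    # Usage sketch: standard cross-entropy plus the weighted grouping penalty,
    # where `lam` is a tuning parameter (assumed, not from the paper's text):
    # logits = model(images)
    # loss = F.cross_entropy(logits, labels) + lam * grouping_penalty(logits, ids)

Driving this penalty toward zero pushes the network to ignore the orthogonal features that vary within a group, since samples sharing a (class, ID) pair differ only in those features.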


