Related research:

- Combating Domain Shift with Self-Taught Labeling: We present a novel method to combat domain shift when adapting classific...
- Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift: Adversarial learning has demonstrated good performance in the unsupervis...
- SENTRY: Selective Entropy Optimization via Committee Consistency for Unsupervised Domain Adaptation: Many existing approaches for unsupervised domain adaptation (UDA) focus ...
- Clarinet: A One-step Approach Towards Budget-friendly Unsupervised Domain Adaptation: In unsupervised domain adaptation (UDA), classifiers for the target doma...
- Maximum Classifier Discrepancy for Unsupervised Domain Adaptation: In this work, we present a method for unsupervised domain adaptation (UD...
- Self-Taught Support Vector Machine: In this paper, a new approach for classification of target task using li...
- Understanding Self-Training for Gradual Domain Adaptation: Machine learning systems must adapt to data distributions that evolve ov...
Self-training Avoids Using Spurious Features Under Domain Shift
In unsupervised domain adaptation, existing theory focuses on settings where the source and target domains are close. In practice, conditional entropy minimization and pseudo-labeling work even when the domain shifts are much larger than those analyzed by existing theory. We identify and analyze one particular setting where the domain shift can be large but these algorithms provably work: certain spurious features correlate with the label in the source domain but are independent of the label in the target. Our analysis considers linear classification where the spurious features are Gaussian and the non-spurious features are a mixture of log-concave distributions. For this setting, we prove that entropy minimization on unlabeled target data avoids using the spurious feature if it is initialized with a reasonably accurate source classifier, even though the objective is non-convex and contains multiple bad local minima that do use the spurious features. We verify our theory on spurious domain-shift tasks built from semi-synthetic Celeb-A and MNIST datasets. Our results suggest that practitioners can reduce classifier bias by collecting and self-training on large, diverse datasets, even when labeling them is impractical.
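To make the setting concrete, below is a minimal sketch (not the authors' code) of entropy minimization under this kind of spurious shift: a linear classifier is fit on a source domain where a Gaussian "spurious" coordinate correlates with the label, then fine-tuned on unlabeled target data, where that coordinate is independent of the label, by minimizing prediction entropy. All dimensions, noise scales, correlation strengths, and step sizes are illustrative assumptions.

```python
# Sketch of the abstract's setting: one "core" feature (log-concave noise)
# predicts the label in both domains; one spurious Gaussian feature
# correlates with the label only in the source domain.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def make_domain(n, spurious_corr):
    """Binary labels; the spurious feature tracks y with prob. spurious_corr."""
    y = rng.integers(0, 2, size=n)
    core = (2 * y - 1) + rng.laplace(scale=0.5, size=n)       # log-concave noise
    tracks = rng.random(n) < spurious_corr
    spur_sign = np.where(tracks, 2 * y - 1, rng.choice([-1, 1], size=n))
    spurious = spur_sign + rng.normal(scale=0.5, size=n)      # Gaussian spurious feature
    X = np.stack([core, spurious], axis=1).astype(np.float32)
    return torch.from_numpy(X), torch.from_numpy(y.astype(np.int64))

# Source: spurious feature correlates with the label; target: it is independent.
Xs, ys = make_domain(2000, spurious_corr=0.95)
Xt, _ = make_domain(2000, spurious_corr=0.0)   # target labels are never used

model = nn.Linear(2, 2)
xent = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# 1) Fit a reasonably accurate source classifier on labeled source data.
for _ in range(200):
    opt.zero_grad()
    xent(model(Xs), ys).backward()
    opt.step()
print("source weights (core, spurious):", model.weight.data.numpy())

# 2) Adapt by minimizing conditional entropy on unlabeled target data.
opt = torch.optim.SGD(model.parameters(), lr=0.05)
for _ in range(500):
    opt.zero_grad()
    probs = torch.softmax(model(Xt), dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    entropy.backward()
    opt.step()
print("adapted weights (core, spurious):", model.weight.data.numpy())
```

Running this, the weight on the spurious coordinate should shrink during the entropy-minimization phase, which is the qualitative behavior the theorem predicts for a good source initialization. Replacing the entropy loss with cross-entropy against argmax pseudo-labels gives the pseudo-labeling variant the abstract also mentions.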