Iteratively Learning from the Best
We study a simple, generic framework for addressing bad training data: both bad labels in supervised problems and bad samples in unsupervised ones. Our approach starts by fitting a model to the whole training dataset, and then iteratively improves it by alternating between (a) revisiting the training data to select the samples with the lowest current loss, and (b) re-training the model on only these selected samples. It can be applied to any existing model-training setting that provides a loss measure for samples and a way to refit on new ones. We show the merit of this approach in both theory and practice. We first prove statistical consistency, and linear convergence to the ground truth and the global optimum, for two simpler model settings: mixed linear regression and Gaussian mixture models. We then demonstrate its success empirically in (a) preserving the accuracy of existing deep image classifiers when there are errors in the labels of training images, and (b) improving the quality of samples generated by existing DC-GAN models when the training data contains a fraction of images from a different, unintended dataset. The experimental results show significant improvement over baseline methods that ignore the existence of bad labels/samples.
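The alternating select-and-refit loop described above is easy to state concretely. Below is a minimal sketch for the mixed-linear-regression case the theory covers; the function name iterative_trimmed_fit, the keep_frac and n_iters parameters, and the toy corrupted dataset are illustrative assumptions, not the authors' code or hyperparameters.

```python
import numpy as np

def iterative_trimmed_fit(X, y, keep_frac=0.8, n_iters=20):
    """Alternate between (a) selecting the keep_frac fraction of samples
    with the lowest squared loss under the current model and (b) refitting
    least squares on only those samples."""
    n = len(y)
    k = int(keep_frac * n)
    # Initial fit on the whole (possibly corrupted) training set.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(n_iters):
        # (a) Score every training sample under the current model.
        losses = (X @ w - y) ** 2
        keep = np.argsort(losses)[:k]  # indices of the lowest-loss samples
        # (b) Refit the model on only the selected subset.
        w, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    return w

# Toy demo: linear data where 20% of the labels are corrupted.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.01 * rng.normal(size=500)
bad = rng.choice(500, size=100, replace=False)
y[bad] = rng.normal(scale=5.0, size=100)  # bad labels

w_hat = iterative_trimmed_fit(X, y, keep_frac=0.8)
print(np.linalg.norm(w_hat - w_true))  # small if the trimming recovers w_true
```

The same loop applies to any model with a per-sample loss and a refit routine, e.g. retraining a deep classifier on the lowest-loss images each round; the least-squares refit here is just the simplest instance.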