Universum Prescription: Regularization using Unlabeled Data

11/11/2015
by Xiang Zhang, et al.

This paper shows that simply prescribing a "none of the above" label to unlabeled data has a beneficial regularization effect on supervised learning. We call this universum prescription, since the prescribed label cannot be any of the supervised labels. Despite its simplicity, universum prescription obtains competitive results when training deep convolutional networks on the CIFAR-10, CIFAR-100, STL-10 and ImageNet datasets. A qualitative justification of these approaches using Rademacher complexity is presented. The effect of a regularization parameter -- the probability of sampling from unlabeled data -- is also studied empirically.
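The core mechanism described in the abstract can be sketched as a batch sampler: each training example is drawn from the unlabeled set with some probability p (the regularization parameter) and prescribed the extra "none of the above" class, and otherwise drawn from the labeled set. The following is a minimal illustrative sketch, not the paper's implementation; the function `universum_batch` and its parameters are hypothetical names introduced here. Note that the classifier would need num_classes + 1 output units to accommodate the extra class.

```python
import random

def universum_batch(labeled, unlabeled, num_classes, p, batch_size, rng=random):
    """Sample a training batch with universum prescription (illustrative sketch).

    For each slot in the batch: with probability p, draw an example from the
    unlabeled set and prescribe it the extra 'none of the above' class
    (index num_classes); otherwise draw an (x, y) pair from the labeled set.
    """
    batch = []
    for _ in range(batch_size):
        if unlabeled and rng.random() < p:
            x = rng.choice(unlabeled)
            batch.append((x, num_classes))  # prescribed extra-class label
        else:
            batch.append(rng.choice(labeled))  # ordinary supervised pair
    return batch
```

Setting p = 0 recovers plain supervised training, while larger p mixes in more universum examples; the abstract notes this parameter's effect is studied empirically.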


Related research

05/15/2022 · FreeMatch: Self-adaptive Thresholding for Semi-supervised Learning
Pseudo labeling and consistency regularization approaches with confidenc...

10/19/2012 · On Information Regularization
We formulate a principle for classification with the knowledge of the ma...

04/06/2019 · Split Batch Normalization: Improving Semi-Supervised Learning under Domain Shift
Recent work has shown that using unlabeled data in semi-supervised learn...

10/07/2020 · Theoretical Analysis of Self-Training with Deep Networks on Unlabeled Data
Self-training algorithms, which train a model to fit pseudolabels predic...

05/15/2019 · ROI Regularization for Semi-supervised and Supervised Learning
We propose ROI regularization (ROIreg) as a semi-supervised learning met...

11/30/2017 · Hybrid VAE: Improving Deep Generative Models using Partial Observations
Deep neural network models trained on large labeled datasets are the sta...

04/26/2019 · Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via Posterior Regularization
Existing methods for CWS usually rely on a large number of labeled sente...
