ROI Regularization for Semi-supervised and Supervised Learning

05/15/2019
by   Hiroshi Kaizuka, et al.

We propose ROI regularization (ROIreg) as a semi-supervised learning method for image classification. ROIreg focuses on the maximum probability of the posterior distribution g(x) obtained when an unlabeled data sample x is input to a convolutional neural network (CNN). ROIreg divides the pixel set of x into multiple blocks and evaluates each block's contribution to that maximum probability. A masked data sample x_ROI is generated by replacing the blocks with relatively small contributions with random images, and ROIreg then trains the CNN so that g(x_ROI) changes as little as possible from g(x). In this sense, ROIreg further refines what the CNN already classifies well. In contrast, Virtual Adversarial Training (VAT), a strong semi-supervised learning method, generates a data sample x_VAT by perturbing x in the direction in which g(x) changes most, and trains the CNN so that g(x_VAT) changes as little as possible from g(x). VAT can therefore be seen as a method that repairs the CNN's weaknesses. ROIreg and VAT thus have complementary training effects, and in fact their combination improves on the results obtained with either method alone. The combination also improves the state of the art on SVHN (with and without data augmentation) and on CIFAR-10 without data augmentation. We further propose ROI augmentation (ROIaug), which applies ROIreg as a data augmentation method in supervised learning; the evaluation function used there differs from the standard cross-entropy. ROIaug improves supervised-learning performance on both SVHN and CIFAR-10. Finally, we investigate the performance degradation of VAT and VAT+ROIreg when the unlabeled data contain samples that do not belong to any of the classification classes.
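The masking-and-consistency idea described above can be sketched in a few lines of numpy. This is only an illustrative reconstruction, not the paper's implementation: the toy classifier g is a fixed linear-softmax stand-in, the contribution of each block is estimated by simple occlusion (zeroing the block and measuring the drop in maximum probability; the paper's actual contribution measure may differ), and all function names (`block_contributions`, `make_x_roi`) are our own.

```python
# Illustrative sketch of the ROIreg masking step, assuming an occlusion-based
# contribution estimate. Toy model only; not the paper's implementation.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stand-in classifier g(x): flatten an 8x8 image, fixed random linear map, softmax.
W = rng.normal(size=(10, 8 * 8))
def g(x):
    return softmax(W @ x.reshape(-1))

def block_contributions(x, block=4):
    """Occlusion estimate: drop in max class probability when a block is zeroed."""
    p_max = g(x).max()
    h, w = x.shape
    contrib = {}
    for i in range(0, h, block):
        for j in range(0, w, block):
            x_occ = x.copy()
            x_occ[i:i + block, j:j + block] = 0.0
            contrib[(i, j)] = p_max - g(x_occ).max()
    return contrib

def make_x_roi(x, keep_frac=0.5, block=4):
    """Replace the lowest-contribution blocks with random images (noise)."""
    contrib = block_contributions(x, block)
    order = sorted(contrib, key=contrib.get)  # ascending contribution
    n_replace = int(len(order) * (1 - keep_frac))
    x_roi = x.copy()
    for (i, j) in order[:n_replace]:
        x_roi[i:i + block, j:j + block] = rng.normal(size=(block, block))
    return x_roi

def kl(p, q, eps=1e-12):
    """KL divergence, used here as the consistency penalty between g(x) and g(x_ROI)."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

x = rng.normal(size=(8, 8))
x_roi = make_x_roi(x)
loss = kl(g(x), g(x_roi))  # ROIreg would minimize this term during training
```

In an actual training loop this consistency term would be added to the supervised loss and backpropagated through the network; here the frozen linear map only serves to make the masking step concrete.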

