Boost Picking: A Universal Method on Converting Supervised Classification to Semi-supervised Classification

02/18/2016
by Fuqiang Liu, et al.

This paper proposes a universal method, Boost Picking, for training supervised classification models mainly on unlabeled data. Boost Picking adopts only two weak classifiers to estimate and correct the error. It is theoretically proved that Boost Picking can train a supervised model mainly on unlabeled data as effectively as the same model trained on 100% labeled data, provided that the recalls of the two weak classifiers are both greater than zero and the sum of their precisions is greater than one. Based on Boost Picking, we present "Test along with Training (TawT)" to improve the generalization of supervised models. Both Boost Picking and TawT are successfully tested on a variety of small data sets.
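The abstract does not spell out the picking rule, so the sketch below is only one plausible reading of the idea, written in Python with scikit-learn: two weak classifiers trained on a small labeled pool pseudo-label the unlabeled data wherever they agree, and the target model is then trained mainly on those picked samples. The agreement rule, the choice of classifiers, and every name in the snippet are illustrative assumptions, not the authors' algorithm.

# A minimal sketch of the Boost Picking idea as described in the abstract,
# NOT the authors' reference implementation. The agreement-based picking
# rule and the specific classifiers below are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Small labeled pool; the rest is treated as unlabeled.
n_labeled = 50
X_lab, y_lab = X[:n_labeled], y[:n_labeled]
X_unlab = X[n_labeled:]

# Two weak classifiers trained on the small labeled pool. Per the paper's
# stated condition, each should have recall greater than zero and their
# precisions should sum to more than one for the guarantee to hold.
weak_a = LogisticRegression().fit(X_lab, y_lab)
weak_b = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_lab, y_lab)

# "Pick" unlabeled samples on which the two weak classifiers agree, and
# use the agreed prediction as a pseudo-label (one plausible reading of
# using two weak classifiers to estimate and correct the error).
pred_a = weak_a.predict(X_unlab)
pred_b = weak_b.predict(X_unlab)
agree = pred_a == pred_b
X_picked, y_picked = X_unlab[agree], pred_a[agree]

# Train the target supervised model mainly on the picked, pseudo-labeled
# data, plus the small labeled pool.
X_train = np.vstack([X_lab, X_picked])
y_train = np.concatenate([y_lab, y_picked])
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(f"picked {agree.sum()} of {len(X_unlab)} unlabeled samples")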

