Revisiting Self-Training with Regularized Pseudo-Labeling for Tabular Data

02/27/2023
by   Minwook Kim, et al.

Recent progress in semi- and self-supervised learning has challenged the long-held belief that machine learning requires enormous amounts of labeled data and that unlabeled data is irrelevant. Although these methods have succeeded on various data types, no dominant semi- or self-supervised learning method generalizes to tabular data: most existing methods require particular tabular datasets and architectures. In this paper, we revisit self-training, which can be applied to any algorithm, including the most widely used architecture, the gradient boosting decision tree, and we adapt curriculum pseudo-labeling (a state-of-the-art pseudo-labeling technique in the image domain) to the tabular domain. Furthermore, existing pseudo-labeling techniques do not enforce the cluster assumption when computing confidence scores for pseudo-labels generated from unlabeled data. To overcome this issue, we propose a novel pseudo-labeling approach that regularizes the confidence scores based on the likelihoods of the pseudo-labels, so that more reliable pseudo-labels lying in high-density regions can be obtained. We exhaustively validate the superiority of our approaches using various models and tabular datasets.
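To make the self-training setup concrete, the following is a minimal sketch of generic self-training with confidence-thresholded pseudo-labeling on a gradient boosting model, not the authors' regularized method: the model is trained on the labeled set, unlabeled points whose predicted-class confidence exceeds a threshold are pseudo-labeled and added to the training set, and the model is retrained. The threshold, round count, and use of scikit-learn's `GradientBoostingClassifier` are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, rounds=3):
    """Generic self-training loop (illustrative, not the paper's
    regularized variant): iteratively pseudo-label high-confidence
    unlabeled points and retrain."""
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    model = GradientBoostingClassifier(random_state=0).fit(X_lab, y_lab)
    for _ in range(rounds):
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        conf = proba.max(axis=1)          # confidence of predicted class
        mask = conf >= threshold          # keep only confident pseudo-labels
        if not mask.any():
            break
        pseudo = model.classes_[proba[mask].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[mask]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~mask]
        model = GradientBoostingClassifier(random_state=0).fit(X_lab, y_lab)
    return model

# Toy setup: 50 labeled points, 450 unlabeled.
X, y = make_classification(n_samples=500, random_state=0)
model = self_train(X[:50], y[:50], X[50:])
acc = model.score(X, y)
```

Curriculum pseudo-labeling, which the paper adapts to tabular data, can be seen as a refinement of this loop in which the selection threshold is adjusted per class and per round rather than held fixed.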

Related research

- DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision (05/11/2022)
- Debiased Pseudo Labeling in Self-Training (02/15/2022)
- Boosting Semi-Supervised Learning with Contrastive Complementary Labeling (12/13/2022)
- LST: Lexicon-Guided Self-Training for Few-Shot Text Classification (02/05/2022)
- Conformal Credal Self-Supervised Learning (05/30/2022)
- Good Data from Bad Models: Foundations of Threshold-based Auto-labeling (11/22/2022)
- Confident Sinkhorn Allocation for Pseudo-Labeling (06/13/2022)
