RankingMatch: Delving into Semi-Supervised Learning with Consistency Regularization and Ranking Loss

10/09/2021
by Trung Q. Tran, et al.

Semi-supervised learning (SSL) has played an important role in leveraging unlabeled data when labeled data is limited. One of the most successful SSL approaches is based on consistency regularization, which encourages the model to produce unchanged outputs for perturbed versions of the same input. However, less attention has been paid to inputs that share the same label. Motivated by the observation that inputs with the same label should yield similar model outputs, we propose a novel method, RankingMatch, that considers not only perturbed inputs but also the similarity among inputs having the same label. In particular, we introduce a new objective function, dubbed BatchMean Triplet loss, which has the advantage of computational efficiency while taking all input samples into account. RankingMatch achieves state-of-the-art performance across many standard SSL benchmarks with a variety of labeled data amounts, including 95.13% accuracy on CIFAR-100 with 10000 labels, as well as 97.76% and 97.77% on further benchmarks. Ablation studies also demonstrate the efficacy of the proposed BatchMean Triplet loss against existing versions of Triplet loss.
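
As a rough illustration of the BatchMean idea described above, the sketch below computes, for each anchor in a batch, the mean distance to all same-label samples and the mean distance to all different-label samples, and penalizes the gap between them so that every sample in the batch contributes to the loss. This is a minimal PyTorch sketch under assumptions not taken from the paper: the Euclidean distance, the hinge margin formulation, and the function name batch_mean_triplet_loss are illustrative only, and the paper's exact formulation may differ.

```python
# Hypothetical sketch of a "BatchMean"-style triplet loss in PyTorch.
# Assumptions (not taken from the paper): Euclidean distances and a hinge
# margin; RankingMatch's exact objective may differ.
import torch
import torch.nn.functional as F


def batch_mean_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """For each anchor, compare the MEAN distance to all positives
    (same label) with the MEAN distance to all negatives (different
    label), so every sample in the batch contributes to the loss."""
    # Pairwise Euclidean distance matrix, shape (B, B).
    dist = torch.cdist(embeddings, embeddings, p=2)

    same_label = labels.unsqueeze(0) == labels.unsqueeze(1)          # (B, B)
    pos_mask = same_label & ~torch.eye(len(labels), dtype=torch.bool,
                                       device=labels.device)         # no self
    neg_mask = ~same_label

    eps = 1e-8
    # Mean positive / negative distance per anchor (row-wise means).
    mean_pos = (dist * pos_mask).sum(dim=1) / (pos_mask.sum(dim=1) + eps)
    mean_neg = (dist * neg_mask).sum(dim=1) / (neg_mask.sum(dim=1) + eps)

    # Hinge on the gap between mean positive and mean negative distances.
    loss = F.relu(mean_pos - mean_neg + margin)

    # Keep only anchors that have at least one positive and one negative.
    valid = (pos_mask.sum(dim=1) > 0) & (neg_mask.sum(dim=1) > 0)
    return loss[valid].mean() if valid.any() else loss.sum() * 0.0
```

In an SSL setting like the one described in the abstract, the labels for unlabeled samples would come from pseudo-labeling and the loss would be combined with a consistency-regularization term; that wiring is not shown here.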


Related research

- ADT-SSL: Adaptive Dual-Threshold for Semi-Supervised Learning (05/21/2022)
- Learning from Label Proportions with Consistency Regularization (10/29/2019)
- Unsupervised Semantic Aggregation and Deformable Template Matching for Semi-Supervised Learning (10/12/2020)
- Adversarial Constraint Learning for Structured Prediction (05/27/2018)
- Interpolation-based Contrastive Learning for Few-Label Semi-Supervised Learning (02/24/2022)
- Enhancing Adversarial Robustness in Low-Label Regime via Adaptively Weighted Regularization and Knowledge Distillation (08/08/2023)
- Semixup: In- and Out-of-Manifold Regularization for Deep Semi-Supervised Knee Osteoarthritis Severity Grading from Plain Radiographs (03/04/2020)
