Learning Soft Labels via Meta Learning

09/20/2020
by Nidhi Vyas, et al.

One-hot labels do not represent soft decision boundaries among concepts, and hence models trained on them are prone to overfitting. Using soft labels as targets provides regularization, but different soft labels might be optimal at different stages of optimization. Also, training with fixed labels in the presence of noisy annotations leads to worse generalization. To address these limitations, we propose a framework where we treat the labels as learnable parameters and optimize them along with the model parameters. The learned labels continuously adapt themselves to the model's state, thereby providing dynamic regularization. When applied to the task of supervised image classification, our method leads to consistent gains across different datasets and architectures. For instance, dynamically learned labels improve ResNet18 by 2.1%. In the presence of noisy annotations, the learned labels correct the annotation mistakes and improve over the state-of-the-art by a significant margin. Finally, we show that learned labels capture semantic relationships between classes, and thereby improve teacher models for the downstream task of distillation.
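The core idea, treating each training label as a learnable parameter that is updated through a meta (bilevel) objective while the model trains on the resulting soft labels, can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the names (SoftLabelStore, meta_step), the one-step SGD lookahead, the use of a small trusted meta batch, the initialisation scale, and the learning rates are all choices made for the example.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call  # assumes PyTorch 2.x

class SoftLabelStore(torch.nn.Module):
    """One learnable logit vector per training example; softmax gives its soft label."""
    def __init__(self, num_classes, init_targets):
        super().__init__()
        # Initialise close to the original one-hot annotations (scale is an assumption).
        init = F.one_hot(init_targets, num_classes).float() * 4.0
        self.logits = torch.nn.Parameter(init)

    def forward(self, idx):
        return F.softmax(self.logits[idx], dim=-1)

def soft_cross_entropy(pred_logits, soft_targets):
    # Cross-entropy against a full probability distribution rather than a class index.
    return -(soft_targets * F.log_softmax(pred_logits, dim=-1)).sum(dim=-1).mean()

def meta_step(model, labels, opt_model, opt_labels,
              x_tr, idx_tr, x_val, y_val, inner_lr=0.1):
    # Inner step: a virtual one-step SGD update of the model on the current soft labels.
    train_loss = soft_cross_entropy(model(x_tr), labels(idx_tr))
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    lookahead = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Outer step: update the label logits so the looked-ahead model
    # does well on a small batch with trusted labels.
    val_loss = F.cross_entropy(functional_call(model, lookahead, (x_val,)), y_val)
    opt_labels.zero_grad()
    val_loss.backward()
    opt_labels.step()

    # Finally, update the model itself on the refreshed soft labels.
    opt_model.zero_grad()
    loss = soft_cross_entropy(model(x_tr), labels(idx_tr).detach())
    loss.backward()
    opt_model.step()
    return loss.item(), val_loss.item()
```

In this sketch, (x_val, y_val) is a held-out batch that drives the label update, idx_tr indexes the training examples in the current batch, and the labels would be created as SoftLabelStore(num_classes, train_targets) with their own optimizer. How the meta objective is formed and scheduled in the paper may differ; this only shows the general shape of optimizing labels alongside model parameters.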


Related research:

- Meta Soft Label Generation for Noisy Labels (07/11/2020): The existence of noisy labels in the dataset causes significant performa...
- Continuous Soft Pseudo-Labeling in ASR (11/11/2022): Continuous pseudo-labeling (PL) algorithms such as slimIPL have recently...
- Learning From Biased Soft Labels (02/16/2023): Knowledge distillation has been widely adopted in a variety of tasks and...
- MetaLabelNet: Learning to Generate Soft-Labels from Noisy-Labels (03/19/2021): Real-world datasets commonly have noisy labels, which negatively affects...
- Leaf Cultivar Identification via Prototype-enhanced Learning (05/05/2023): Plant leaf identification is crucial for biodiversity protection and con...
- Improving Regression Performance with Distributional Losses (06/12/2018): There is growing evidence that converting targets to soft targets in sup...
- Co-training 2^L Submodels for Visual Recognition (12/09/2022): We introduce submodel co-training, a regularization method related to co...
