
Self-Tuning for Data-Efficient Deep Learning

02/25/2021
by Ximei Wang, et al.

Deep learning has made revolutionary advances in diverse applications given large-scale labeled datasets. However, collecting sufficient labeled data is prohibitively time-consuming and labor-intensive in most realistic scenarios. To mitigate the requirement for labeled data, semi-supervised learning (SSL) focuses on simultaneously exploring both labeled and unlabeled data, while transfer learning (TL) popularizes the favorable practice of fine-tuning a pre-trained model on the target data. A dilemma is thus encountered: without a decent pre-trained model to provide implicit regularization, SSL through self-training from scratch is easily misled by inaccurate pseudo-labels, especially in a large label space; without exploring the intrinsic structure of unlabeled data, TL through fine-tuning from limited labeled data risks under-transfer caused by model shift. To escape this dilemma, we present Self-Tuning, a novel approach that enables data-efficient deep learning by unifying the exploration of labeled and unlabeled data with the transfer of a pre-trained model. Further, to address the challenge of confirmation bias in self-training, a Pseudo Group Contrast (PGC) mechanism is devised to mitigate the reliance on pseudo-labels and boost the tolerance to false labels. Self-Tuning outperforms its SSL and TL counterparts on five tasks by sharp margins, e.g., it doubles the accuracy of fine-tuning on Cars with 15% labels.
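
The abstract only names the PGC mechanism, but its core idea, contrasting each unlabeled example against a whole group of keys that share its pseudo-class rather than a single pseudo-labeled target, can be sketched in a few lines. The snippet below is a minimal illustration assuming PyTorch; the function name pgc_loss, the per-class key queues, and the exact averaging scheme are our assumptions for exposition, not the authors' reference implementation.

```python
# Minimal sketch of a pseudo-group-contrast-style loss (assumes PyTorch).
# `pgc_loss` and the per-class key queue are illustrative assumptions,
# not the paper's reference implementation.
import torch
import torch.nn.functional as F

def pgc_loss(query: torch.Tensor,
             key_queues: torch.Tensor,
             pseudo_label: int,
             temperature: float = 0.07) -> torch.Tensor:
    """Contrast one unlabeled query against per-class groups of keys.

    query:        (D,) L2-normalized embedding of an unlabeled sample.
    key_queues:   (C, K, D) L2-normalized keys, K per class, C classes.
    pseudo_label: class predicted for the query.
    """
    C, K, D = key_queues.shape
    # Similarity of the query to every key in every class group.
    logits = key_queues.reshape(C * K, D) @ query / temperature  # (C*K,)
    log_prob = F.log_softmax(logits, dim=0).reshape(C, K)
    # Average over the whole pseudo-class group: a few false positives
    # among the K keys dilute, rather than dominate, the gradient,
    # which is the intuition behind tolerance to false labels.
    return -log_prob[pseudo_label].mean()

# Toy usage: 10 classes, 32 keys per class, 128-dim embeddings.
q = F.normalize(torch.randn(128), dim=0)
queues = F.normalize(torch.randn(10, 32, 128), dim=-1)
loss = pgc_loss(q, queues, pseudo_label=3)
```

In the full method one would expect the queues to be refreshed per mini-batch and this term to be combined with a standard cross-entropy loss on the labeled data; the sketch fixes only the group-contrast shape of the objective.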

