Improved Regularization and Robustness for Fine-tuning in Neural Networks

11/08/2021
by   Dongyue Li, et al.

A widely used algorithm for transfer learning is fine-tuning, where a pre-trained model is fine-tuned on a target task with a small amount of labeled data. When the capacity of the pre-trained model is much larger than the size of the target data set, fine-tuning is prone to overfitting and "memorizing" the training labels. Hence, an important question is how to regularize fine-tuning and ensure its robustness to noise. To address this question, we begin by analyzing the generalization properties of fine-tuning. We present a PAC-Bayes generalization bound that depends on the distance traveled in each layer during fine-tuning and the noise stability of the fine-tuned model, and we empirically measure these quantities. Based on the analysis, we propose regularized self-labeling, an interpolation between regularization and self-labeling methods, comprising (i) layer-wise regularization to constrain the distance traveled in each layer; (ii) self label-correction and label-reweighting to correct mislabeled data points (on which the model is confident) and reweight less confident data points. We validate our approach on an extensive collection of image and text data sets using multiple pre-trained model architectures. Our approach improves baseline methods by 1.76% on average across image classification tasks and by 0.75% on a few-shot classification task. When the target data set includes noisy labels, our approach outperforms baseline methods by 3.56% on average.
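The two components of the abstract's proposal can be sketched concretely. The following is a minimal NumPy illustration, not the authors' implementation: a per-layer penalty on the squared distance from the pre-trained weights, and a confidence-based label-correction and reweighting step. All function names and thresholds here are hypothetical choices for illustration.

```python
import numpy as np

def layerwise_distance_penalty(weights, pretrained, lambdas):
    """Sum of per-layer squared L2 distances from the pre-trained weights,
    each scaled by its own coefficient (hypothetical helper; the per-layer
    coefficients lambdas correspond to constraining the distance traveled
    in each layer)."""
    return sum(lam * np.sum((w - w0) ** 2)
               for w, w0, lam in zip(weights, pretrained, lambdas))

def correct_and_reweight(probs, labels, confident=0.9, uncertain=0.6):
    """Illustrative self label-correction and label-reweighting.

    - If the model is very confident in a class that disagrees with the
      given label, replace the label (label correction).
    - Down-weight examples where the model's top probability is low
      (label reweighting). Thresholds are arbitrary for this sketch.
    """
    preds = probs.argmax(axis=1)          # model's predicted class
    top = probs.max(axis=1)               # model's confidence
    new_labels = np.where((top >= confident) & (preds != labels),
                          preds, labels)  # correct confident disagreements
    weights = np.where(top < uncertain, top, 1.0)  # shrink unsure examples
    return new_labels, weights
```

For example, a point labeled 1 that the model assigns class 0 with probability 0.95 would have its label corrected to 0, while a point with top probability 0.55 would keep its label but receive training weight 0.55. In practice the penalty would be added to the task loss and the weights would multiply the per-example losses.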


Related research

- 06/06/2022: Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees. We consider transfer learning approaches that fine-tune a pretrained dee...
- 10/15/2020: Fine-Tuning Pre-trained Language Model with Weak Supervision: A Contrastive-Regularized Self-Training Approach. Fine-tuned pre-trained language models (LMs) achieve enormous success in...
- 10/14/2022: Self-Repetition in Abstractive Neural Summarizers. We provide a quantitative and qualitative analysis of self-repetition in...
- 03/19/2023: Trainable Projected Gradient Method for Robust Fine-tuning. Recent studies on transfer learning have shown that selectively fine-tun...
- 08/09/2023: SLPT: Selective Labeling Meets Prompt Tuning on Label-Limited Lesion Segmentation. Medical image analysis using deep learning is often challenged by limite...
- 08/28/2020: Background Splitting: Finding Rare Classes in a Sea of Background. We focus on the real-world problem of training accurate deep models for ...
- 02/27/2017: CIFT: Crowd-Informed Fine-Tuning to Improve Machine Learning Ability. Item Response Theory (IRT) allows for measuring ability of Machine Learn...
