Improving Reliability of Fine-tuning with Block-wise Optimisation

01/15/2023
by Basel Barakat, et al.

Fine-tuning can be used to tackle domain-specific tasks by transferring knowledge from a pre-trained model. Previous studies on fine-tuning either adapt only the weights of a task-specific classifier or re-optimize all layers of the pre-trained model with the new task data. Methods of the first type cannot mitigate the mismatch between the pre-trained model and the new task data, while methods of the second type easily cause over-fitting on tasks with limited data. To explore the effectiveness of fine-tuning, we propose a novel block-wise optimization mechanism that adapts the weights of a group of layers of a pre-trained model. In our work, the layer selection can be done in four different ways. The first is layer-wise adaptation, which searches for the most salient single layer according to classification performance. The second builds on the first, jointly adapting a small number of top-ranked layers instead of a single layer. The third is block-based segmentation, in which the layers of a deep network are segmented into blocks by non-weight layers, such as MaxPooling and Activation layers. The last uses a fixed-length sliding window to group layers into blocks. To identify which group of layers is the most suitable for fine-tuning, the search starts from the target end and is conducted by freezing all layers except the selected layers and the classification layers. The most salient group of layers is determined in terms of classification performance. In our experiments, the proposed approaches are tested on an often-used dataset, Tf_flower, by fine-tuning five typical pre-trained models: VGG16, MobileNet-v1, MobileNet-v2, MobileNet-v3, and ResNet50v2. The results show that our block-wise approaches achieve better performance than the two baseline methods and the layer-wise method.
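The abstract describes the block-wise selection procedure only in prose. The following is a minimal sketch, assuming a TensorFlow/Keras setup, of how a backbone could be segmented into blocks at non-weight layers and how a single candidate block could be unfrozen while every other pre-trained layer stays frozen. Helper names such as `segment_by_nonweight_layers`, `unfreeze_block`, and `build_candidate` are illustrative and not taken from the paper; ResNet50v2 and the 5-class Tf_flower setting stand in for the models and dataset mentioned above.

```python
# Minimal sketch of block-wise fine-tuning (assumption: TensorFlow/Keras;
# all helper names here are illustrative, not from the paper).
import tensorflow as tf


def segment_by_nonweight_layers(base_model):
    """Split the backbone into blocks at layers that carry no weights
    (e.g. MaxPooling, Activation), as in the block-based segmentation idea."""
    blocks, current = [], []
    for layer in base_model.layers:
        if layer.weights:            # layer carries weights -> extend current block
            current.append(layer)
        elif current:                # non-weight layer -> close the current block
            blocks.append(current)
            current = []
    if current:
        blocks.append(current)
    return blocks


def unfreeze_block(base_model, block_layers):
    """Freeze every pre-trained layer except those in the selected block."""
    selected = {id(layer) for layer in block_layers}
    for layer in base_model.layers:
        layer.trainable = id(layer) in selected


def build_candidate(block_index, num_classes=5):
    """Build one candidate model: only the chosen block and the fresh
    classification head are trainable during fine-tuning."""
    base = tf.keras.applications.ResNet50V2(include_top=False,
                                            weights="imagenet", pooling="avg")
    blocks = segment_by_nonweight_layers(base)
    unfreeze_block(base, blocks[block_index])

    inputs = tf.keras.Input(shape=(224, 224, 3))
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(base(inputs))
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# The search would start from the target (output) end and move towards the input,
# keeping the block whose candidate model gives the best validation accuracy
# (sketch only; `train_ds`/`val_ds` would be the Tf_flower splits):
#
# for i in reversed(range(num_blocks)):
#     model = build_candidate(i)
#     history = model.fit(train_ds, validation_data=val_ds, epochs=3)
#     ... record history.history["val_accuracy"][-1] and keep the best block ...
```

The fixed-length sliding-window variant described in the abstract would differ only in how the candidate groups are formed: instead of cutting at non-weight layers, a window of a fixed number of consecutive layers slides across the backbone, and each window position is evaluated the same way.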
