A Flexible Selection Scheme for Minimum-Effort Transfer Learning

08/27/2020
by Amélie Royer et al.

Fine-tuning is a popular way of exploiting the knowledge contained in a pre-trained convolutional network for a new visual recognition task. However, the orthogonal setting of transferring knowledge from a pre-trained network to a visually different yet semantically close data source is rarely considered: this commonly happens with real-life data, which is not necessarily as clean as the training source (noise, geometric transformations, different modalities, etc.). To tackle such scenarios, we introduce a new, generalized form of fine-tuning, called flex-tuning, in which any individual unit (e.g., layer) of a network can be tuned, and the most promising one is chosen automatically. To make the method appealing for practical use, we propose two lightweight and faster selection procedures that prove to be good approximations in practice. We study these selection criteria empirically across a variety of domain shifts and data-scarcity scenarios, and show that fine-tuning individual units, despite its simplicity, yields very good results as an adaptation technique. It turns out that, contrary to common practice, in many domain-shift scenarios it is best to tune an intermediate or early unit rather than the last fully-connected one, and flex-tuning detects this accurately.
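
To make the selection scheme concrete, below is a minimal PyTorch sketch of the exhaustive flex-tuning variant: each top-level child module of the network is treated as one tunable unit, fine-tuned in isolation while all other parameters stay frozen, and the unit whose fine-tuned copy scores highest on held-out data is kept. The helper names (flex_tune, evaluate), the use of named children as units, and the plain validation-accuracy criterion are illustrative assumptions, not the paper's exact procedure; the two lightweight approximate selection procedures mentioned in the abstract are not shown.

import copy
import torch

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    # Hypothetical helper: top-1 accuracy on a labelled data loader.
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def flex_tune(model, train_loader, val_loader, device="cpu", epochs=1, lr=1e-3):
    # Exhaustive variant (a sketch): fine-tune each unit in isolation,
    # then keep the unit whose fine-tuned copy validates best.
    loss_fn = torch.nn.CrossEntropyLoss()
    best_acc, best_model, best_unit = -1.0, None, None

    for unit_name, _ in model.named_children():
        candidate = copy.deepcopy(model).to(device)
        # Freeze every parameter, then unfreeze only the chosen unit.
        for p in candidate.parameters():
            p.requires_grad = False
        unit = dict(candidate.named_children())[unit_name]
        params = list(unit.parameters())
        if not params:
            continue  # skip parameter-free units such as activations
        for p in params:
            p.requires_grad = True

        opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
        candidate.train()
        for _ in range(epochs):
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss_fn(candidate(x), y).backward()
                opt.step()

        acc = evaluate(candidate, val_loader, device)
        if acc > best_acc:
            best_acc, best_model, best_unit = acc, candidate, unit_name

    return best_model, best_unit, best_acc

Selecting on validation accuracy makes the exhaustive variant as expensive as fine-tuning the network once per unit, which is exactly the cost the paper's lightweight selection procedures are designed to avoid.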



Related research

10/20/2022
Surgical Fine-Tuning Improves Adaptation to Distribution Shifts
A common approach to transfer learning under distribution shift is to fi...

02/21/2022
Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution
When transferring a pretrained model to a downstream task, two popular m...

01/29/2021
A linearized framework and a new benchmark for model selection for fine-tuning
Fine-tuning from a collection of models pre-trained on different domains...

12/31/2019
Side-Tuning: Network Adaptation via Additive Side Networks
When training a neural network for a desired task, one may prefer to ada...

07/02/2020
Learn Faster and Forget Slower via Fast and Stable Task Adaptation
Training Deep Neural Networks (DNNs) is still highly time-consuming and ...

05/25/2019
Efficient Neural Task Adaptation by Maximum Entropy Initialization
Transferring knowledge from one neural network to another has been shown...

07/03/2023
Surgical fine-tuning for Grape Bunch Segmentation under Visual Domain Shifts
Mobile robots will play a crucial role in the transition towards sustain...
