Borrowing Treasures from the Wealthy: Deep Transfer Learning through Selective Joint Fine-tuning

02/28/2017
by Weifeng Ge, et al.

Deep neural networks require a large amount of labeled training data during supervised learning. However, collecting and labeling so much data might be infeasible in many cases. In this paper, we introduce a source-target selective joint fine-tuning scheme for improving the performance of deep learning tasks with insufficient training data. In this scheme, a target learning task with insufficient training data is carried out simultaneously with another source learning task with abundant training data. However, the source learning task does not use all existing training data. Our core idea is to identify and use a subset of training images from the original source learning task whose low-level characteristics are similar to those from the target learning task, and jointly fine-tune shared convolutional layers for both tasks. Specifically, we compute descriptors from linear or nonlinear filter bank responses on training images from both tasks, and use such descriptors to search for a desired subset of training samples for the source learning task. Experiments demonstrate that our selective joint fine-tuning scheme achieves state-of-the-art performance on multiple visual classification tasks with insufficient training data for deep learning. Such tasks include Caltech 256, MIT Indoor 67, Oxford Flowers 102 and Stanford Dogs 120. In comparison to fine-tuning without a source domain, the proposed method can improve the classification accuracy by 2% to 10% using a single model.
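The two-step recipe in the abstract (low-level descriptors to select similar source images, then joint fine-tuning of shared convolutional layers with one head per task) can be sketched in PyTorch. The sketch below is illustrative only: the descriptor (channel-wise statistics of early ResNet responses), the cosine-similarity top-k selection standing in for the paper's nearest-neighbor search, and all tensors, names, and hyperparameters are assumptions, not the authors' released code.

```python
"""A minimal sketch of selective joint fine-tuning, assuming PyTorch/torchvision."""
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

# Stand-in data: a small target set and a large source pool (random tensors
# in place of, e.g., Oxford Flowers 102 and ImageNet).
target_images = torch.randn(32, 3, 224, 224)
target_labels = torch.randint(0, 102, (32,))
source_images = torch.randn(256, 3, 224, 224)
source_labels = torch.randint(0, 1000, (256,))

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
# The earliest conv layers act as a (nonlinear) filter bank over low-level structure.
filter_bank = nn.Sequential(backbone.conv1, backbone.bn1, backbone.relu)

def lowlevel_descriptor(images):
    """Summarize filter-bank responses with per-channel mean/std statistics."""
    with torch.no_grad():  # descriptors are computed once, before fine-tuning
        r = filter_bank(images).flatten(start_dim=2)        # (N, C, H*W)
    return F.normalize(torch.cat([r.mean(2), r.std(2)], dim=1), dim=1)

# Step 1: keep only source images whose low-level statistics resemble the
# target set (cosine similarity + top-k, a simple stand-in for the paper's
# descriptor-based nearest-neighbor search).
sims = lowlevel_descriptor(source_images) @ lowlevel_descriptor(target_images).t()
subset = sims.max(dim=1).values.topk(k=64).indices          # selected source subset

# Step 2: jointly fine-tune a shared convolutional trunk with one head per task.
trunk = nn.Sequential(*list(backbone.children())[:-1], nn.Flatten())
source_head, target_head = nn.Linear(512, 1000), nn.Linear(512, 102)
params = (list(trunk.parameters()) + list(source_head.parameters())
          + list(target_head.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)

for step in range(10):                                      # illustrative steps only
    pick = subset[torch.randperm(len(subset))[:16]]         # source mini-batch
    loss = (F.cross_entropy(target_head(trunk(target_images[:16])),
                            target_labels[:16])
            + F.cross_entropy(source_head(trunk(source_images[pick])),
                              source_labels[pick]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Using the pretrained network's own early layers as the filter bank is one natural choice here, since it keeps the selection criterion aligned with the low-level features the shared trunk will actually fine-tune; the paper also allows hand-crafted linear filter banks for the same role.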
