Factors of Transferability for a Generic ConvNet Representation

06/22/2014
by Hossein Azizpour, et al.

Evidence is mounting that Convolutional Networks (ConvNets) are the most effective representation learning method for visual recognition tasks. In the common scenario, a ConvNet is trained on a large labeled dataset (the source) and the feed-forward activations of its units at a certain layer are then used as a generic representation of an input image for a task with a relatively small training set (the target). Recent studies have shown this form of representation transfer to be suitable for a wide range of target visual recognition tasks. This paper introduces and investigates several factors affecting the transferability of such representations. These include parameters of training the source ConvNet, such as its architecture and the distribution of the training data, as well as parameters of feature extraction, such as which layer of the trained ConvNet is used and whether dimensionality reduction is applied. By optimizing these factors, we then show that significant improvements can be achieved on 17 visual recognition tasks. We further show that these tasks can be categorically ordered by their distance from the source task, and that a correlation is observed between task performance and this distance with respect to the proposed factors.
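The representation-transfer scenario described above can be summarized in code. The sketch below is only an illustration of the general pipeline, not the authors' exact setup: it assumes torchvision's ImageNet-pretrained AlexNet as the source ConvNet, an intermediate fully connected layer (fc7) as the generic representation, and scikit-learn's LinearSVC as the target-task classifier; train_images and train_labels are hypothetical placeholders for a small target dataset.

```python
# Minimal sketch of ConvNet representation transfer (assumptions noted above).
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from sklearn.svm import LinearSVC

# Source ConvNet trained on a large labeled dataset (ImageNet here).
convnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
convnet.eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_image, n_fc_modules=5):
    """Return activations of an intermediate fully connected layer as a
    generic image representation (n_fc_modules=5 stops at AlexNet's fc7)."""
    x = preprocess(pil_image).unsqueeze(0)      # shape [1, 3, 224, 224]
    with torch.no_grad():
        x = convnet.features(x)                 # convolutional layers
        x = convnet.avgpool(x)
        x = torch.flatten(x, 1)                 # shape [1, 9216]
        for module in list(convnet.classifier)[:n_fc_modules]:
            x = module(x)                       # dropout / linear / relu modules
    return x.squeeze(0).numpy()

# Target task with a relatively small training set: fit a linear classifier
# on the transferred features (train_images / train_labels are placeholders).
# X_train = [extract_features(img) for img in train_images]
# clf = LinearSVC(C=1.0).fit(X_train, train_labels)
```

Varying which layer the activations are taken from (the n_fc_modules argument in this sketch) corresponds to one of the feature-extraction factors investigated in the paper.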


Related research

- 02/01/2016: Transfer Learning Based on AdaBoost for Feature Selection from Multiple ConvNet Layer Features
- 07/11/2019: Multifaceted Analysis of Fine-Tuning in Deep Model for Visual Recognition
- 12/08/2022: Task Bias in Vision-Language Models
- 10/06/2013: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
- 08/13/2020: Adversarial Knowledge Transfer from Unlabeled Data
- 11/24/2014: Persistent Evidence of Local Image Properties in Generic ConvNets
- 05/06/2021: Multi-Perspective LSTM for Joint Visual Representation Learning
