Transfer Learning Based on AdaBoost for Feature Selection from Multiple ConvNet Layer Features

02/01/2016
by Jumabek Alikhanov, et al.

Convolutional Networks (ConvNets) are powerful models that learn hierarchies of visual features and can also provide image representations for transfer learning. The basic transfer-learning pipeline is to first train a ConvNet on a large dataset (the source task) and then use the feed-forward activations of the trained ConvNet as image representations for smaller datasets (the target task). Our key contribution is to demonstrate the superior performance of multiple ConvNet layer features over single ConvNet layer features. Combining features from multiple ConvNet layers yields a higher-dimensional feature space in which some features are redundant, so some form of feature selection is required. We use AdaBoost with single decision stumps to implicitly select, from the concatenated ConvNet features, only the distinct features that are useful for classification. Experimental results show that using activation features from multiple ConvNet layers instead of a single layer consistently produces superior performance, and the improvement becomes more significant as the distance between the source task and the target task increases.

