
CactusNets: Layer Applicability as a Metric for Transfer Learning

by Edward Collier, et al.
Louisiana State University

Deep neural networks trained on large datasets learn features that are both generic to the whole dataset and specific to individual classes within it. Learned features tend toward the generic in a network's lower layers and toward the specific in its higher layers. Methods such as fine-tuning are possible because a single learned filter can apply to multiple target classes. Much like in the human brain, this behavior can also be used to cluster and separate classes. However, to the best of our knowledge, there is no metric for how applicable learned features are to specific classes. In this paper we propose a definition and metric for measuring the applicability of learned features to individual classes, use this applicability metric to estimate input applicability, and build on it a new method of unsupervised learning we call the CactusNet.
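The abstract's central idea is scoring how applicable a learned feature is to each individual class. Only the abstract is available here, so the sketch below is a hypothetical illustration rather than the paper's actual definition: one simple proxy for applicability compares a feature's mean activation on a target class against its mean activation over all inputs, so class-specific features score above 1 and generic features score near 1.

```python
import numpy as np

def feature_applicability(activations, labels, target_class):
    """Toy per-class applicability score for one learned feature.

    activations: 1-D array of the feature's activation for each input.
    labels:      class label for each input.
    Returns the mean activation on the target class divided by the
    mean activation over all inputs. (Hypothetical proxy only; not
    the metric defined in the paper.)
    """
    activations = np.asarray(activations, dtype=float)
    labels = np.asarray(labels)
    class_mean = activations[labels == target_class].mean()
    overall_mean = activations.mean()
    return class_mean / overall_mean

# A feature that fires strongly on class 0 and weakly on class 1:
acts = np.array([0.9, 0.8, 0.1, 0.2])
labs = np.array([0, 0, 1, 1])
print(feature_applicability(acts, labs, 0))  # > 1: specific to class 0
print(feature_applicability(acts, labs, 1))  # < 1: weakly applicable
```

Under this toy definition, a lower-layer "generic" filter would score close to 1 for every class, while a higher-layer filter tuned to one class would score well above 1 for that class and below 1 for the rest, mirroring the generic-to-specific trend the abstract describes.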


