Efficient Neural Task Adaptation by Maximum Entropy Initialization

05/25/2019
by Farshid Varno, et al.

Transferring knowledge from one neural network to another has been shown to help when learning tasks with few training examples. Prevailing fine-tuning methods, however, can contaminate the pre-trained features with comparably high-energy random noise, introduced mainly by the careless replacement of task-specific parameters. We theoretically analyze this knowledge contamination for classification tasks and propose a practical, easy-to-apply method to trap and minimize the contaminant. In our approach, the entropy of the output estimates is maximized at initialization, and the first back-propagated error is stalled at the output of the last layer. The proposed method not only outperforms traditional fine-tuning but also significantly speeds up the learner's convergence. It is robust to randomness and independent of the choice of architecture. Overall, our experiments show that the power of transfer learning has been substantially underestimated so far.
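A minimal sketch of what such a maximum-entropy initialization could look like in practice, assuming (the abstract does not spell this out) that it amounts to zero-initializing the newly added task-specific classification head: with zero weights and biases the logits are all zero, so the softmax output is uniform and the output entropy is maximal; and since the gradient with respect to the backbone features is the head's weight matrix transposed times the output error, the first back-propagated error vanishes before reaching the pre-trained layers. The backbone choice and layer names below are illustrative, not the authors' exact setup.

```python
import torch
import torch.nn as nn
from torchvision import models  # requires torchvision >= 0.13 for the weights API

num_classes = 10  # hypothetical target-task size

# Pre-trained backbone; replace its classifier with a new task-specific head.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
head = nn.Linear(backbone.fc.in_features, num_classes)

# Zero weights and biases => all logits are zero => softmax is uniform,
# i.e. the entropy of the output estimates is maximized at initialization.
nn.init.zeros_(head.weight)
nn.init.zeros_(head.bias)
backbone.fc = head

x = torch.randn(4, 3, 224, 224)
logits = backbone(x)
probs = torch.softmax(logits, dim=1)  # each row is ~ 1/num_classes everywhere

# Gradient wrt the penultimate features is head.weight.T @ grad_logits == 0,
# so the first backward pass leaves every pre-trained parameter's gradient
# at zero: the error is "stalled" at the output of the last layer, while
# the head itself still receives a non-zero gradient and starts learning.
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()
```

Under this reading, the pre-trained features are protected from the high-energy random error that a randomly initialized head would otherwise inject on the first update steps.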


