How to fine-tune deep neural networks in few-shot learning?

12/01/2020
by   Peng Peng, et al.
Deep learning has been widely used in data-intensive applications. However, training a deep neural network often requires a large data set. When there is not enough data available for training, the performance of deep learning models can be even worse than that of shallow networks. It has been shown that few-shot learning can generalize to new tasks with few training samples. Fine-tuning a deep model is a simple and effective few-shot learning method. However, how to fine-tune deep learning models (fine-tune the convolutional layers or the BN layers?) still lacks thorough investigation. Hence, we study how to fine-tune deep models through experimental comparison in this paper. Furthermore, the weights of the models are analyzed to verify the feasibility of the fine-tuning method.
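The core question above, which parameter groups to update during fine-tuning, can be sketched without any deep-learning framework. The following is a minimal illustration, not the authors' implementation: parameter-group names (`conv1.weight`, `bn1.gamma`), the helper functions, and the scalar "model" are all hypothetical, chosen only to show how a fine-tuning mode selects which parameters receive gradient updates while the rest stay frozen.

```python
# Sketch: selective fine-tuning for few-shot adaptation.
# A "model" is represented as named parameter groups; freezing a group
# means excluding it from the gradient update. All names are illustrative.

def select_trainable(params, mode):
    """Return the parameter names to update under a fine-tuning mode.

    mode = "bn"   -> update only BatchNorm affine parameters
    mode = "conv" -> update only convolution weights
    mode = "all"  -> update everything
    """
    if mode == "all":
        return set(params)
    prefix = {"bn": "bn", "conv": "conv"}[mode]
    return {name for name in params if name.startswith(prefix)}


def sgd_step(params, grads, trainable, lr=0.1):
    """One SGD step that touches only the trainable subset."""
    return {
        name: value - lr * grads[name] if name in trainable else value
        for name, value in params.items()
    }


# Toy model: one conv weight and one BN scale, both scalars for brevity.
params = {"conv1.weight": 1.0, "bn1.gamma": 1.0}
grads = {"conv1.weight": 0.5, "bn1.gamma": 0.5}

trainable = select_trainable(params, "bn")
new_params = sgd_step(params, grads, trainable)
print(new_params)  # conv weight unchanged, BN gamma updated
```

In a real framework the same idea is expressed by toggling gradient tracking per layer (e.g. setting `requires_grad = False` on frozen parameters in PyTorch) before building the optimizer, so the few available support samples adjust only the chosen layers.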


