
Efficient K-Shot Learning with Regularized Deep Networks

by Donghyun Yoo, et al.
Carnegie Mellon University
Michigan State University

Feature representations from pre-trained deep neural networks are known to exhibit excellent generalization and utility across a variety of related tasks. Fine-tuning is by far the simplest and most widely used approach for exploiting and adapting these feature representations to novel tasks with limited data. Despite its effectiveness, fine-tuning is often sub-optimal and requires very careful optimization to prevent severe over-fitting to small datasets. This sub-optimality and over-fitting is due in part to the large number of parameters in a typical deep convolutional neural network. To address these problems, we propose a simple yet effective regularization method for fine-tuning pre-trained deep networks for the task of k-shot learning. To prevent over-fitting, our key strategy is to cluster the model parameters while ensuring intra-cluster similarity and inter-cluster diversity, effectively regularizing the dimensionality of the parameter search space. In particular, we identify groups of neurons within each layer of a deep network that share similar activation patterns. When the network is fine-tuned for a classification task using only k examples, we propagate a single gradient to all of the neuron parameters that belong to the same group. The grouping of neurons is non-trivial, since neuron activations depend on the distribution of the input data. To efficiently search for groupings conditioned on the input data, we propose a reinforcement-learning search strategy that uses recurrent networks to learn the optimal group assignment for each network layer. Experimental results show that our method can be easily applied to several popular convolutional neural networks and improves upon other state-of-the-art fine-tuning-based k-shot learning strategies by more than 10%.
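The core mechanics described above, grouping neurons by the similarity of their activation patterns and then applying one shared gradient to every neuron in a group, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy example, not the paper's implementation: it uses a random weight matrix and random gradients, and substitutes plain k-means for the paper's reinforcement-learning group search. All names (`W`, `A`, `kmeans`, `tied_grad`) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer: 8 neurons, each with a 4-dimensional weight vector.
W = rng.normal(size=(8, 4))

# Activations of each neuron over a small batch of hypothetical inputs.
X = rng.normal(size=(16, 4))
A = np.maximum(X @ W.T, 0.0)  # ReLU activations, shape (16, 8)

def kmeans(points, k, iters=20):
    """Plain k-means; stands in for the paper's RL-based group search."""
    centers = points[:k].copy()
    for _ in range(iters):
        labels = np.argmin(
            ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels

# Group neurons whose activation patterns (columns of A) are similar.
groups = kmeans(A.T, k=3)

# Per-neuron gradients (random here; from backpropagation in practice).
grad = rng.normal(size=W.shape)

# Tie the update: every neuron in a group receives the group's mean gradient,
# so only one effective gradient per group reaches the parameters.
tied_grad = np.empty_like(grad)
for j in np.unique(groups):
    tied_grad[groups == j] = grad[groups == j].mean(axis=0)

W -= 0.1 * tied_grad  # one shared-gradient descent step
```

Because all neurons in a group move together, the effective number of free parameters during fine-tuning drops from the number of neurons to the number of groups, which is the regularization effect the abstract describes.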



