Efficient K-Shot Learning with Regularized Deep Networks

10/06/2017
by   Donghyun Yoo, et al.

Feature representations from pre-trained deep neural networks are known to exhibit excellent generalization and utility across a variety of related tasks. Fine-tuning is by far the simplest and most widely used approach for exploiting and adapting these representations to novel tasks with limited data. Despite its effectiveness, fine-tuning is often sub-optimal and requires very careful optimization to prevent severe over-fitting to small datasets. This sub-optimality and over-fitting is due in part to the large number of parameters in a typical deep convolutional neural network. To address these problems, we propose a simple yet effective regularization method for fine-tuning pre-trained deep networks for the task of k-shot learning. To prevent over-fitting, our key strategy is to cluster the model parameters while ensuring intra-cluster similarity and inter-cluster diversity, effectively regularizing the dimensionality of the parameter search space. In particular, we identify groups of neurons within each layer of a deep network that share similar activation patterns. When the network is fine-tuned for a classification task using only k examples, we propagate a single gradient to all of the neuron parameters that belong to the same group. The grouping of neurons is non-trivial because neuron activations depend on the distribution of the input data. To efficiently search for optimal groupings conditioned on the input data, we propose a reinforcement learning search strategy that uses recurrent networks to learn the optimal group assignments for each network layer. Experimental results show that our method can be easily applied to several popular convolutional neural networks and improves upon other state-of-the-art fine-tuning based k-shot learning strategies by more than 10%.
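
To make the grouping-and-tied-gradient idea concrete, here is a minimal PyTorch sketch (not the authors' implementation). It clusters one layer's neurons by their activation patterns on a probe batch, then averages the per-neuron weight gradients within each cluster before the optimizer step, so a single gradient update is shared by every member of a group. Plain k-means stands in for the paper's reinforcement-learning search over group assignments, and the layer sizes, probe batch, and group count are illustrative assumptions.

```python
import torch
from sklearn.cluster import KMeans

def cluster_neurons(activations, n_groups):
    # One row per neuron: its activation pattern across the probe batch.
    patterns = activations.detach().T.cpu().numpy()
    return KMeans(n_clusters=n_groups, n_init=10).fit_predict(patterns)

def tie_gradients(layer, groups):
    # Replace each neuron's weight gradient with its group's mean gradient,
    # so one update is propagated to all parameters in the same group.
    grad = layer.weight.grad  # shape: (n_neurons, n_inputs)
    for g in set(groups):
        idx = [i for i, gid in enumerate(groups) if gid == g]
        grad[idx] = grad[idx].mean(dim=0, keepdim=True)

# Hypothetical usage on a single pre-trained layer with a 5-shot batch.
layer = torch.nn.Linear(512, 256)        # stand-in for a pre-trained layer
with torch.no_grad():
    probe = torch.randn(32, 512)         # probe inputs to estimate activations
    groups = cluster_neurons(layer(probe), n_groups=16)

x = torch.randn(5, 512)                  # k = 5 examples
y = torch.randint(0, 256, (5,))
loss = torch.nn.functional.cross_entropy(layer(x), y)
loss.backward()
tie_gradients(layer, groups)             # shared gradient per neuron group
torch.optim.SGD(layer.parameters(), lr=1e-3).step()
```

Because all neurons in a group receive the same update, the effective number of free parameters during fine-tuning shrinks from the neuron count to the group count, which is the regularization effect the abstract describes.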


Related research

12/01/2020
How to fine-tune deep neural networks in few-shot learning?
Deep learning has been widely used in data-intensive applications. Howev...

05/26/2023
Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation
Few-shot fine-tuning and in-context learning are two alternative strateg...

02/19/2020
Distance-Based Regularisation of Deep Networks for Fine-Tuning
We investigate approaches to regularisation during fine-tuning of deep n...

01/07/2022
Repurposing Existing Deep Networks for Caption and Aesthetic-Guided Image Cropping
We propose a novel optimization framework that crops a given image based...

10/12/2021
LiST: Lite Self-training Makes Efficient Few-shot Learners
We present a new method LiST for efficient fine-tuning of large pre-trai...

10/01/2021
UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis
Global models are trained to be as generalizable as possible, with user ...

05/11/2017
Incremental Learning Through Deep Adaptation
Given an existing trained neural network, it is often desirable to be ab...
