One-Shot Learning in Discriminative Neural Networks

07/18/2017
by Jordan Burgess, et al.

We consider the task of one-shot learning of visual categories. In this paper we explore a Bayesian procedure for updating a pretrained convnet to classify a novel image category for which data is limited. We decompose the convnet into a fixed feature extractor and a softmax classifier. We assume that the target weights for the new task come from the same distribution as the pretrained softmax weights, which we model as a multivariate Gaussian. Using this distribution as a prior for the new weights, we demonstrate performance competitive with state-of-the-art methods, whilst also remaining consistent with 'normal' methods for training deep networks on large data.
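
Since only the high-level procedure is described here, the following is a minimal numpy sketch of the idea: fit a multivariate Gaussian to the rows of the pretrained softmax weight matrix, then use it as a prior when inferring a weight vector for the novel class from a single labelled feature vector. The function names, the ridge term, and the simple gradient-ascent MAP inference are illustrative assumptions, not the authors' exact algorithm or released code.

```python
# Sketch: Gaussian prior over pretrained softmax weights for one-shot learning.
# Assumptions (not from the paper): function names, ridge regularisation of the
# covariance, and plain gradient-ascent MAP inference for the new class weight.
import numpy as np

def fit_weight_prior(W_pretrained, ridge=1e-3):
    """Fit a multivariate Gaussian to the pretrained softmax weight rows.

    W_pretrained: array of shape (num_classes, feat_dim).
    """
    mu = W_pretrained.mean(axis=0)
    cov = np.cov(W_pretrained, rowvar=False) + ridge * np.eye(W_pretrained.shape[1])
    return mu, cov

def map_new_class_weight(mu, cov, W_old, phi, steps=200, lr=0.05):
    """MAP estimate of the novel class's weight under the Gaussian prior.

    phi:   feature vector (from the fixed feature extractor) of the single
           labelled example of the novel class.
    W_old: frozen pretrained softmax rows for the existing classes.
    """
    precision = np.linalg.inv(cov)
    w = mu.copy()
    for _ in range(steps):
        logits = np.append(W_old @ phi, w @ phi)
        logits -= logits.max()                      # numerical stability
        p_new = np.exp(logits[-1]) / np.exp(logits).sum()
        # Gradient of softmax log-likelihood plus log Gaussian prior.
        grad = (1.0 - p_new) * phi - precision @ (w - mu)
        w = w + lr * grad
    return w

# Usage (illustrative): W = pretrained softmax weights, phi = one-shot feature.
# w_new = map_new_class_weight(*fit_weight_prior(W), W_old=W, phi=phi)
```

The prior keeps the new weight vector in the region occupied by the pretrained class weights, so a single example is enough to place the novel class without retraining the feature extractor.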

Related research

Few-Shot Learning with Geometric Constraints (03/20/2020)
Metalearning with Hebbian Fast Weights (07/12/2018)
XtarNet: Learning to Extract Task-Adaptive Representation for Incremental Few-Shot Learning (03/19/2020)
Improved Few-Shot Visual Classification (12/07/2019)
HyperMAML: Few-Shot Adaptation of Deep Models with Hypernetworks (05/31/2022)
Classifier Crafting: Turn Your ConvNet into a Zero-Shot Learner! (03/20/2021)
It's DONE: Direct ONE-shot learning without training optimization (04/28/2022)
