Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes

06/18/2019
by James Requeima, et al.

The goal of this paper is to design image classification systems that, after an initial multi-task training phase, can automatically adapt to new tasks encountered at test time. For this purpose, we introduce an approach based on conditional neural processes for the multi-task classification setting, and establish connections to the meta-learning and few-shot learning literature. The resulting approach, called CNAPs, comprises a classifier whose parameters are modulated by an adaptation network that takes the current task's dataset as input. We demonstrate that CNAPs achieves state-of-the-art results on the challenging Meta-Dataset benchmark, indicating high-quality transfer learning. We show that the approach is robust, avoiding both over-fitting in low-shot regimes and under-fitting in high-shot regimes. Timing experiments reveal that CNAPs is computationally efficient at test time because it does not involve gradient-based adaptation. Finally, we show that trained models are immediately deployable to continual learning and active learning, where they can outperform existing approaches that do not leverage transfer learning.
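The core idea, a classifier whose parameters are modulated by an adaptation network conditioned on the task's support set, with no gradient steps at test time, can be illustrated with a toy sketch. This is an assumption-laden illustration in NumPy, not the paper's architecture: a shared linear feature extractor, an adaptation network that maps a pooled support-set representation to FiLM-style (scale, shift) parameters, and a linear classifier whose weights are generated from per-class support means. All names and dimensions here are hypothetical.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


class CNAPsSketch:
    """Toy feed-forward sketch of task-conditioned adaptation.

    The adaptation network sees the support set and emits modulation
    parameters; the classifier weights come from class means, so a new
    task is handled in a single forward pass (no fine-tuning).
    """

    def __init__(self, in_dim, feat_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Shared feature extractor (stands in for a trained backbone).
        self.W = rng.standard_normal((in_dim, feat_dim)) * 0.1
        # Adaptation network: pooled support features -> (gamma, beta).
        self.A = rng.standard_normal((feat_dim, 2 * feat_dim)) * 0.1

    def _features(self, x, gamma, beta):
        # FiLM-style modulation of the extractor's activations.
        return gamma * relu(x @ self.W) + beta

    def predict(self, support_x, support_y, query_x, n_classes):
        # Permutation-invariant pooling of the support set.
        pooled = relu(support_x @ self.W).mean(axis=0)
        gamma, beta = np.split(pooled @ self.A, 2)
        gamma = 1.0 + gamma  # initialise near identity modulation
        f_s = self._features(support_x, gamma, beta)
        f_q = self._features(query_x, gamma, beta)
        # Classifier weights generated from per-class support means.
        weights = np.stack([f_s[support_y == c].mean(axis=0)
                            for c in range(n_classes)])
        return f_q @ weights.T  # query logits: (n_query, n_classes)
```

Because adaptation is a single forward pass through the adaptation network, adding a new task costs no optimisation at test time, which is the source of the computational efficiency the abstract highlights.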


Related research

- MetAdapt: Meta-Learned Task-Adaptive Architecture for Few-Shot Classification (12/01/2019). "Few-Shot Learning (FSL) is a topic of rapidly growing interest. Typicall..."
- Few-Shot Learning for Image Classification of Common Flora (05/07/2021). "The use of meta-learning and transfer learning in the task of few-shot i..."
- Embedding Adaptation is Still Needed for Few-Shot Learning (04/15/2021). "Constructing new and more challenging tasksets is a fruitful methodology..."
- Beyond Simple Meta-Learning: Multi-Purpose Models for Multi-Domain, Active and Continual Few-Shot Learning (01/13/2022). "Modern deep learning requires large-scale extensively labelled datasets ..."
- The Dual Form of Neural Networks Revisited: Connecting Test Time Predictions to Training Patterns via Spotlights of Attention (02/11/2022). "Linear layers in neural networks (NNs) trained by gradient descent can b..."
- Bridging Multi-Task Learning and Meta-Learning: Towards Efficient Training and Effective Adaptation (06/16/2021). "Multi-task learning (MTL) aims to improve the generalization of several ..."
- A Closer Look at Few-shot Classification Again (01/28/2023). "Few-shot classification consists of a training phase where a model is le..."
