L2AE-D: Learning to Aggregate Embeddings for Few-shot Learning with Meta-level Dropout

04/08/2019
by Heda Song, et al.

Few-shot learning focuses on learning a new visual concept with very limited labelled examples. A successful approach to tackle this problem is to compare the similarity between examples in a learned metric space based on convolutional neural networks. However, existing methods typically suffer from meta-level overfitting due to the limited number of training tasks, and they do not normally consider the importance of the convolutional features of different examples within the same channel. To address these limitations, we make the following two contributions: (a) We propose a novel meta-learning approach for aggregating useful convolutional features and suppressing noisy ones based on a channel-wise attention mechanism to improve class representations. The proposed model does not require fine-tuning and can be trained in an end-to-end manner. The main novelty lies in incorporating a shared weight generation module that learns to assign different weights to the feature maps of different examples within the same channel. (b) We also introduce a simple meta-level dropout technique that reduces meta-level overfitting in several few-shot learning approaches. In our experiments, we find that this simple technique significantly improves the performance of the proposed method as well as various state-of-the-art meta-learning algorithms. Applying our method to few-shot image recognition on the Omniglot and miniImageNet datasets shows that it is capable of delivering state-of-the-art classification performance.
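To make the channel-wise aggregation and meta-level dropout ideas concrete, below is a minimal PyTorch sketch. It is an illustration under assumptions rather than the paper's exact architecture: the module name `ChannelWiseAggregator`, the per-channel statistics fed to the shared weight-generation network, its layer sizes, and the dropout placement are all hypothetical choices made for this example.

```python
# Hypothetical sketch: channel-wise attention aggregation of K support
# embeddings into one class representation, plus dropout applied to
# embeddings during meta-training. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelWiseAggregator(nn.Module):
    """Aggregate K support embeddings (K, C, H, W) into one class
    representation (C, H, W) via per-channel attention weights."""

    def __init__(self, num_channels: int):
        super().__init__()
        # Shared weight-generation module: scores each example's feature
        # map in each channel from simple channel statistics (an assumption).
        self.score = nn.Sequential(
            nn.Linear(2, 16),          # input: per-channel mean and std
            nn.ReLU(inplace=True),
            nn.Linear(16, 1),
        )
        self.num_channels = num_channels

    def forward(self, support: torch.Tensor) -> torch.Tensor:
        # support: (K, C, H, W) embeddings of one class's support set
        k, c, h, w = support.shape
        stats = torch.stack(
            [support.mean(dim=(2, 3)), support.std(dim=(2, 3))], dim=-1
        )                                            # (K, C, 2)
        scores = self.score(stats).squeeze(-1)       # (K, C)
        # Normalise across the K examples within each channel, so useful
        # feature maps are up-weighted and noisy ones suppressed.
        weights = F.softmax(scores, dim=0)           # (K, C)
        weighted = support * weights.unsqueeze(-1).unsqueeze(-1)
        return weighted.sum(dim=0)                   # (C, H, W)


# Meta-level dropout, as sketched here: dropout applied to support
# embeddings during meta-training only, to reduce meta-level overfitting.
meta_dropout = nn.Dropout2d(p=0.1)

embed_dim, k_shot = 64, 5
aggregator = ChannelWiseAggregator(embed_dim)
support_embeddings = torch.randn(k_shot, embed_dim, 5, 5)   # toy embeddings
class_proto = aggregator(meta_dropout(support_embeddings))
print(class_proto.shape)    # torch.Size([64, 5, 5])
```

The softmax over the K support examples within each channel is what allows the aggregator to emphasise informative feature maps and suppress noisy ones; the dropout layer is only active while the modules are in training mode and is disabled at meta-test time via `eval()`.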


Related research

05/21/2020 · Cross-Domain Few-Shot Learning with Meta Fine-Tuning
In this paper, we tackle the new Cross-Domain Few-Shot Learning benchmar...

04/19/2019 · Hierarchical Meta Learning
Meta learning is a promising solution to few-shot learning problems. How...

05/31/2022 · Meta-ticket: Finding optimal subnetworks for few-shot learning within randomly initialized neural networks
Few-shot learning for neural networks (NNs) is an important problem that...

10/12/2022 · A Unified Framework with Meta-dropout for Few-shot Learning
Conventional training of deep neural networks usually requires a substan...

02/27/2020 · Transductive Few-shot Learning with Meta-Learned Confidence
We propose a novel transductive inference framework for metric-based met...

02/14/2021 · Model-Agnostic Graph Regularization for Few-Shot Learning
In many domains, relationships between categories are encoded in the kno...

05/28/2019 · Image Deformation Meta-Networks for One-Shot Learning
Humans can robustly learn novel visual concepts even when images undergo...
