Few-shot Learning with Global Relatedness Decoupled-Distillation

07/12/2021
by Yuan Zhou, et al.

Despite the success of metric-learning-based approaches in few-shot learning, recent works have revealed the ineffectiveness of their episodic training mode. In this paper, we point out two potential reasons for this problem: 1) the random episodic labels provide only limited supervision, while the relatedness information between the query and support samples is not fully exploited; 2) the meta-learner is usually constrained by the limited contextual information of the local episode. To overcome these problems, we propose a new Global Relatedness Decoupled-Distillation (GRDD) method that uses global category knowledge together with a Relatedness Decoupled-Distillation (RDD) strategy. GRDD learns new visual concepts quickly by imitating the habit of humans, i.e., learning from deep knowledge distilled by a teacher. More specifically, we first train a global learner on the entire base subset, using category labels as supervision, to leverage the global contextual information of the categories. The well-trained global learner is then used to estimate the query-support relatedness with global dependencies. Finally, the distilled global query-support relatedness is explicitly used to train the meta-learner with the RDD strategy, with the goal of making the meta-learner more discriminative. The RDD strategy decouples the dense query-support relatedness into groups of sparse decoupled relatedness, where each group considers only the relatedness of a single support sample with all query samples. By distilling the sparse decoupled relatedness group by group, sharper relatedness can be transferred to the meta-learner, which facilitates the learning of a discriminative meta-learner. Extensive experiments on the miniImageNet and CIFAR-FS datasets demonstrate the state-of-the-art performance of our GRDD method.
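To make the decoupling idea concrete, below is a minimal, hypothetical PyTorch-style sketch of a decoupled relatedness-distillation loss. The function name rdd_loss, the use of cosine similarity as the relatedness measure, the temperature tau, and the KL-divergence matching are all assumptions made for illustration; they are not taken from the paper and may differ from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def rdd_loss(student_q, student_s, teacher_q, teacher_s, tau=4.0):
    """Sketch of Relatedness Decoupled-Distillation (assumed formulation).

    Args:
        student_q: meta-learner query features, shape [Q, D]
        student_s: meta-learner support features, shape [S, D]
        teacher_q: global-learner query features, shape [Q, D]
        teacher_s: global-learner support features, shape [S, D]
        tau: softening temperature (hypothetical hyperparameter)
    """
    # L2-normalise so dot products become cosine similarities
    sq = F.normalize(student_q, dim=-1)
    ss = F.normalize(student_s, dim=-1)
    tq = F.normalize(teacher_q, dim=-1)
    ts = F.normalize(teacher_s, dim=-1)

    loss = 0.0
    num_support = ss.shape[0]
    for i in range(num_support):
        # One sparse decoupled group: a single support sample vs. all queries
        s_logits = sq @ ss[i] / tau   # student relatedness logits, shape [Q]
        t_logits = tq @ ts[i] / tau   # teacher relatedness logits, shape [Q]
        # Distil the teacher's relatedness distribution into the student
        loss = loss + F.kl_div(
            F.log_softmax(s_logits, dim=0),
            F.softmax(t_logits, dim=0),
            reduction="sum",
        )
    return loss / num_support
```

In this sketch each group matches only one support sample's relatedness distribution over the query samples, rather than the full dense query-support matrix at once, which reflects the stated intuition that distilling sparse groups one by one yields sharper relatedness targets for the meta-learner.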


