Canonical Mean Filter for Almost Zero-Shot Multi-Task Classification

04/08/2022
by   Yong Li, et al.

The support set is key to providing a conditional prior for fast adaptation of the model in few-shot tasks. However, the strict form of the support set makes its construction difficult in practical applications. Motivated by ANIL, we rethink the role of adaptation in the feature extractor of CNAPs, a state-of-the-art representative few-shot method. To investigate this role, we design the Almost Zero-Shot (AZS) task, which fixes the support set, replacing the common scheme of providing a separate support set as the conditional prior for each task. The AZS experimental results suggest that adaptation contributes little in the feature extractor. However, CNAPs is not robust to randomly selected support sets and performs poorly on some datasets of Meta-Dataset, because the simple mean operator yields scattered mean embeddings. To enhance the robustness of CNAPs, we propose the Canonical Mean Filter (CMF) module, which makes the mean embeddings compact and stable in feature space by mapping the support sets into a canonical form. CMF makes CNAPs robust to any fixed support set, even a random matrix. This allows CNAPs to remove the mean encoder and the parameter adaptation network at the test stage, while CNAP-CMF on AZS tasks matches its one-shot performance. The result is a large parameter reduction: 40.48% of the parameters are dropped at the test stage. CNAP-CMF also outperforms CNAPs on one-shot tasks because it addresses the problem of unstable inner-task performance. Classification performance, visualization, and clustering results verify that CMF makes CNAPs better and simpler.
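The contrast between a task-specific support set and the fixed support set of an AZS task can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the paper's implementation: `mean_embedding` is a hypothetical helper standing in for CNAPs' mean encoder, and the features are random stand-ins for extractor outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_embedding(support_features: np.ndarray) -> np.ndarray:
    """Average the support features of one class (the 'simple mean
    operator' that, per the abstract, yields scattered embeddings)."""
    return support_features.mean(axis=0)

# Common few-shot scheme: each task supplies its own support set
# (here, 5 examples with 16-dim features) as the conditional prior.
task_support = rng.normal(size=(5, 16))
task_prior = mean_embedding(task_support)

# Almost Zero-Shot scheme: one fixed support set (even a random
# matrix) is shared across all tasks, so no per-task support set
# needs to be constructed at test time.
fixed_support = rng.normal(size=(5, 16))
azs_prior = mean_embedding(fixed_support)

# Both priors have the same shape, so the downstream conditional
# model consumes them identically.
assert task_prior.shape == azs_prior.shape == (16,)
```

Under the AZS setup, the conditional prior no longer depends on the task, which is why the mean encoder and parameter adaptation network become removable at test time.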


