f-VAEGAN-D2: A Feature Generating Framework for Any-Shot Learning

03/25/2019
by Yongqin Xian, et al.

When labeled training data is scarce, a promising data augmentation approach is to generate visual features of unknown classes from their attributes. To learn the class-conditional distribution of CNN features, such models rely on pairs of image features and class attributes; hence, they cannot make use of the abundance of unlabeled data samples. In this paper, we tackle any-shot learning problems, i.e. zero-shot and few-shot learning, in a unified feature generating framework that operates in both inductive and transductive learning settings. We develop a conditional generative model that combines the strengths of VAEs and GANs and, in addition, learns the marginal feature distribution of unlabeled images via an unconditional discriminator. We empirically show that our model learns highly discriminative CNN features on several benchmark datasets, i.e. CUB, SUN, AWA and ImageNet, and establishes a new state of the art in any-shot learning, i.e. in inductive and transductive (generalized) zero- and few-shot learning settings. We also demonstrate that our learned features are interpretable: we visualize them by inverting them back to pixel space, and we explain them by generating textual arguments for why they are associated with a certain label.

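To make the framework concrete, below is a minimal PyTorch sketch of the three components the abstract describes: a VAE encoder and a conditional feature generator (the VAE/GAN part), a conditional critic on labeled feature-attribute pairs, and an unconditional critic that matches generated features to the marginal distribution of unlabeled features. All dimensions (2048-d ResNet features, 312-d CUB attributes, 64-d latent), layer sizes, and the combined loss shown in `generator_step` are illustrative assumptions, not the authors' exact implementation or training schedule.

```python
# Illustrative sketch of an f-VAEGAN-D2-style model (assumed hyperparameters).
import torch
import torch.nn as nn

FEAT_DIM, ATT_DIM, Z_DIM, HID = 2048, 312, 64, 4096  # assumed dimensions

class Encoder(nn.Module):                     # q(z | x, a): VAE encoder
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + ATT_DIM, HID), nn.LeakyReLU(0.2))
        self.mu = nn.Linear(HID, Z_DIM)
        self.logvar = nn.Linear(HID, Z_DIM)
    def forward(self, x, a):
        h = self.net(torch.cat([x, a], dim=1))
        return self.mu(h), self.logvar(h)

class Generator(nn.Module):                   # G(z, a): decoder / feature generator
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM + ATT_DIM, HID), nn.LeakyReLU(0.2),
                                 nn.Linear(HID, FEAT_DIM), nn.Sigmoid())
    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))

class CondDiscriminator(nn.Module):           # D1(x, a): critic on labeled pairs
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM + ATT_DIM, HID), nn.LeakyReLU(0.2),
                                 nn.Linear(HID, 1))
    def forward(self, x, a):
        return self.net(torch.cat([x, a], dim=1))

class UncondDiscriminator(nn.Module):         # D2(x): critic on unlabeled features
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, HID), nn.LeakyReLU(0.2),
                                 nn.Linear(HID, 1))
    def forward(self, x):
        return self.net(x)

def generator_step(E, G, D1, D2, x_lab, a_lab, a_novel):
    """One illustrative encoder/generator update: VAE loss on labeled features
    plus adversarial terms from the conditional and unconditional critics."""
    mu, logvar = E(x_lab, a_lab)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()      # reparameterization trick
    x_rec = G(z, a_lab)
    vae_loss = ((x_rec - x_lab) ** 2).sum(1).mean() \
             - 0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
    # Fool D1 with features conditioned on labeled-class attributes,
    # and fool D2 with features conditioned on novel-class attributes,
    # so generated features also match the marginal distribution of unlabeled data.
    x_fake_lab = G(torch.randn_like(mu), a_lab)
    x_fake_nov = G(torch.randn(a_novel.size(0), Z_DIM), a_novel)
    gan_loss = -D1(x_fake_lab, a_lab).mean() - D2(x_fake_nov).mean()
    return vae_loss + gan_loss
```

Once trained, the generator can synthesize arbitrarily many features for classes with few or no labeled images, and a standard softmax classifier is then trained on the mixture of real and generated features.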
