Bi-Adversarial Auto-Encoder for Zero-Shot Learning

11/20/2018
by Yunlong Yu, et al.

Existing generative Zero-Shot Learning (ZSL) methods consider only the unidirectional alignment from class semantics to visual features and ignore the alignment from visual features back to class semantics, so they fail to model visual-semantic interactions well. In this paper, we propose to synthesize visual features with an auto-encoder framework paired with two adversarial networks, one for the visual modality and one for the semantic modality, which reinforces visual-semantic interactions through a bi-directional alignment and ensures that the synthesized visual features both fit the real visual distribution and remain highly related to the class semantics. The encoder synthesizes real-like visual features, while the decoder forces both the real and the synthesized visual features to be more closely related to the class semantics. To further capture the discriminative information of the synthesized visual features, both the real and the synthesized visual features are required to be classified into the correct classes by a classification network. Experimental results on four benchmark datasets show that the proposed approach is highly competitive on both the traditional ZSL and the generalized ZSL tasks.
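
To make the described architecture concrete, below is a minimal sketch of its components in PyTorch: an encoder that synthesizes visual features from class semantics plus noise, a decoder that maps visual features back to class semantics, one adversarial discriminator per modality, and a classification network. The module names, layer sizes, activations, and dimensions are illustrative assumptions, not the authors' exact implementation.

```python
# A minimal sketch (PyTorch assumed) of the bi-adversarial auto-encoder described
# above. Layer sizes, activations, and names are illustrative assumptions, not the
# authors' exact configuration.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Synthesizes visual features from class semantics plus noise."""
    def __init__(self, sem_dim, noise_dim, vis_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, vis_dim), nn.ReLU(),
        )

    def forward(self, sem, noise):
        return self.net(torch.cat([sem, noise], dim=1))


class Decoder(nn.Module):
    """Maps real or synthesized visual features back to class semantics."""
    def __init__(self, vis_dim, sem_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, sem_dim),
        )

    def forward(self, vis):
        return self.net(vis)


class Discriminator(nn.Module):
    """Adversarial critic; one instance per modality (visual and semantic)."""
    def __init__(self, in_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)


# Hypothetical dimensions: 312-d attribute vectors, 2048-d CNN features, 200 classes.
encoder = Encoder(sem_dim=312, noise_dim=312, vis_dim=2048)
decoder = Decoder(vis_dim=2048, sem_dim=312)
d_visual, d_semantic = Discriminator(2048), Discriminator(312)
classifier = nn.Linear(2048, 200)  # pushes real/synthesized features to correct classes
```

In such a setup, the visual discriminator would judge real versus synthesized visual features, the semantic discriminator would judge real versus decoded semantics, and the classifier would supply a supervised loss on both real and synthesized features.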

Related research

02/26/2021 · Class Knowledge Overlay to Visual Feature Learning for Zero-Shot Image Classification
New categories can be discovered by transforming semantic features into ...

12/03/2018 · Towards Visual Feature Translation
Most existing visual search systems are deployed based upon fixed kinds ...

04/15/2018 · Semantic Feature Augmentation in Few-shot Learning
A fundamental problem with few-shot learning is the scarcity of data in ...

05/22/2017 · Semantic Softmax Loss for Zero-Shot Learning
A typical pipeline for Zero-Shot Learning (ZSL) is to integrate the visu...

08/29/2020 · Zero-Shot Learning from Adversarial Feature Residual to Compact Visual Feature
Recently, many zero-shot learning (ZSL) methods focused on learning disc...

08/02/2023 · What Is the Difference Between a Mountain and a Molehill? Quantifying Semantic Labeling of Visual Features in Line Charts
Relevant language describing visual features in charts can be useful for...

03/15/2023 · Bi-directional Distribution Alignment for Transductive Zero-Shot Learning
It is well-known that zero-shot learning (ZSL) can suffer severely from ...
