CANZSL: Cycle-Consistent Adversarial Networks for Zero-Shot Learning from Natural Language

09/21/2019
by   Zhi Chen, et al.

Existing methods using generative adversarial approaches for Zero-Shot Learning (ZSL) aim to generate realistic visual features from class semantics with a single generative network, which is highly under-constrained. As a result, previous methods cannot guarantee that the generated visual features truthfully reflect the corresponding semantics. To address this issue, we propose a novel method named Cycle-consistent Adversarial Networks for Zero-Shot Learning (CANZSL). It encourages a visual feature generator to synthesize realistic visual features from semantics, and then inversely translates the synthesized visual features back to the corresponding semantic space with a semantic feature generator. Furthermore, this paper considers a more challenging and practical ZSL setting in which the original semantics come from natural language containing irrelevant words, instead of the clean semantics widely used in previous work. Specifically, a multi-modal consistent bidirectional generative adversarial network is trained to handle unseen instances by leveraging the noise in natural language. A forward one-to-many mapping from one text description to multiple visual features is coupled with an inverse many-to-one mapping from the visual space to the semantic space. A multi-modal cycle-consistency loss between the synthesized semantic representations and the ground truth can thus be learned and leveraged to enforce that the generated semantic features approximate the real distribution in semantic space. Extensive experiments demonstrate that our method consistently outperforms state-of-the-art approaches on natural language-based zero-shot learning tasks.
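The abstract describes a forward generator that maps noisy text semantics (plus sampled noise) to visual features and an inverse generator that maps visual features back to the semantic space, tied together by a cycle-consistency loss. The following is a minimal sketch of that idea, assuming a PyTorch setup; the class names (VisualGenerator, SemanticGenerator), layer sizes, and the use of an L1 cycle loss are illustrative assumptions rather than the paper's exact implementation.

```python
# Hedged sketch of the cycle-consistency idea, not the authors' actual code.
import torch
import torch.nn as nn

class VisualGenerator(nn.Module):
    """Forward mapping: noisy text semantics + noise -> synthetic visual features."""
    def __init__(self, sem_dim, noise_dim, vis_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim + noise_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, vis_dim),
            nn.ReLU(),
        )

    def forward(self, sem, z):
        return self.net(torch.cat([sem, z], dim=1))

class SemanticGenerator(nn.Module):
    """Inverse mapping: visual features -> semantic representation."""
    def __init__(self, vis_dim, sem_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vis_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, sem_dim),
        )

    def forward(self, vis):
        return self.net(vis)

# Toy dimensions and batch size, assumed for illustration only.
sem_dim, noise_dim, vis_dim, batch = 7551, 100, 2048, 16
G_vis = VisualGenerator(sem_dim, noise_dim, vis_dim)
G_sem = SemanticGenerator(vis_dim, sem_dim)

sem = torch.randn(batch, sem_dim)   # noisy text-based class semantics
z = torch.randn(batch, noise_dim)   # sampled noise (enables one-to-many mapping)
fake_vis = G_vis(sem, z)            # synthesized visual features
recon_sem = G_sem(fake_vis)         # translated back to semantic space

# Multi-modal cycle-consistency term: reconstructed semantics should match the
# originals; in the full method this is combined with adversarial losses in
# both the visual and semantic spaces.
cycle_loss = nn.functional.l1_loss(recon_sem, sem)
cycle_loss.backward()
```

In this sketch the cycle loss alone would only train the two generators to invert each other; the adversarial discriminators described in the abstract are what push the synthesized visual features toward the real feature distribution.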
