What Remains of Visual Semantic Embeddings

07/26/2021
by   Yue Jiao, et al.

Zero-shot learning (ZSL) has seen a surge in interest over the past decade for its tight links with the mechanism by which young children recognize novel objects. Although different paradigms of visual semantic embedding models are designed to align visual features with distributed word representations, it is unclear to what extent current ZSL models encode semantic information from those word representations. In this work, we introduce a split of tiered-ImageNet for the ZSL task in order to avoid the structural flaws in the standard ImageNet benchmark. We build a unified framework for ZSL with contrastive learning as pre-training, which guarantees no semantic information leakage and encourages linearly separable visual features. Our work enables fair evaluation of visual semantic embedding models in a ZSL setting where semantic inference is decisive. With this framework, we show that current ZSL models struggle to encode the semantic relationships captured by word analogy and word hierarchy. Our analyses provide motivation for exploring the role of contextual language representations in ZSL tasks.
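To make the visual semantic embedding idea concrete, here is a minimal sketch of zero-shot classification by projecting visual features into a word-embedding space and picking the nearest class word vector. All names, dimensions, and the random projection are illustrative stand-ins, not the paper's actual model or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
d_vis, d_word = 8, 4

# Word embeddings for unseen class names; random stand-ins for
# real distributed word representations (e.g. word2vec/GloVe).
class_names = ["zebra", "okapi", "tapir"]
word_emb = {c: rng.normal(size=d_word) for c in class_names}

# A learned linear projection W would map visual features into
# word space; here a random matrix serves as a placeholder.
W = rng.normal(size=(d_word, d_vis))

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def zero_shot_classify(visual_feat, word_emb, W):
    """Project the visual feature and return the nearest class name."""
    z = W @ visual_feat
    return max(word_emb, key=lambda c: cosine(z, word_emb[c]))

# Example: construct a feature whose projection lands near 'zebra'.
x = np.linalg.pinv(W) @ word_emb["zebra"]
print(zero_shot_classify(x, word_emb, W))  # zebra
```

The paper's framework additionally uses contrastive pre-training of the visual encoder; this sketch only illustrates the nearest-word-vector inference step common to visual semantic embedding models.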


