
Large-Scale Zero-Shot Image Classification from Rich and Diverse Textual Descriptions

by   Sebastian Bujwid, et al.

We study the impact of using rich and diverse textual descriptions of classes for zero-shot learning (ZSL) on ImageNet. We create a new dataset, ImageNet-Wiki, that matches each ImageNet class to its corresponding Wikipedia article. We show that merely employing these Wikipedia articles as class descriptions yields much higher ZSL performance than prior works. Even a simple model using this type of auxiliary data outperforms state-of-the-art models that rely on standard word-embedding encodings of class names. These results highlight the usefulness and importance of textual descriptions for ZSL, as well as the relative importance of the type of auxiliary data compared to algorithmic progress. Our experimental results also show that standard zero-shot learning approaches generalize poorly across categories of classes.




Zero-shot Learning with Class Description Regularization

The purpose of generative Zero-shot learning (ZSL) is to learn from s...

SemSup-XC: Semantic Supervision for Zero and Few-shot Extreme Classification

Extreme classification (XC) involves predicting over large numbers of cl...

Text2Model: Model Induction for Zero-shot Generalization Using Task Descriptions

We study the problem of generating a training-free task-dependent visual...

Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions

One of the main challenges in Zero-Shot Learning of visual categories is...

Exploring Meta Information for Audio-based Zero-shot Bird Classification

Advances in passive acoustic monitoring and machine learning have led to...

Core Risk Minimization using Salient ImageNet

Deep neural networks can be unreliable in the real world especially when...

Learning to Name Classes for Vision and Language Models

Large scale vision and language models can achieve impressive zero-shot ...