Predicting Deep Zero-Shot Convolutional Neural Networks using Textual Descriptions

06/01/2015
by Jimmy Ba, et al.

One of the main challenges in Zero-Shot Learning of visual categories is gathering semantic attributes to accompany images. Recent work has shown that learning from textual descriptions, such as Wikipedia articles, avoids the problem of having to explicitly define these attributes. We present a new model that can classify unseen categories from their textual description. Specifically, we use text features to predict the output weights of both the convolutional and the fully connected layers in a deep convolutional neural network (CNN). We take advantage of the architecture of CNNs and learn features at different layers, rather than just learning an embedding space for both modalities, as is common with existing approaches. The proposed model also allows us to automatically generate a list of pseudo-attributes for each visual category consisting of words from Wikipedia articles. We train our models end-to-end using the Caltech-UCSD bird and flower datasets and evaluate both ROC and Precision-Recall curves. Our empirical results show that the proposed model significantly outperforms previous methods.
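The abstract's central mechanism is generating classifier weights from text features rather than only learning a shared embedding space. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: the TextToClassifier module, the two-layer MLP mapper, and all dimensions are assumptions made for the example. It predicts the weights of a fully connected output layer from a text vector (e.g. a TF-IDF representation of a Wikipedia article) and uses those predicted weights to score CNN image features, which is what allows unseen categories to be classified from text alone.

import torch
import torch.nn as nn

class TextToClassifier(nn.Module):
    """Sketch: map a class's text features to classifier weights for image features."""
    def __init__(self, text_dim=8000, image_dim=4096, hidden_dim=512):
        super().__init__()
        # MLP that predicts one weight per image-feature dimension, plus a bias term.
        self.weight_predictor = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, image_dim + 1),
        )

    def forward(self, text_features, image_features):
        # text_features:  (num_classes, text_dim)   e.g. article representations
        # image_features: (batch, image_dim)        e.g. CNN fc-layer activations
        predicted = self.weight_predictor(text_features)        # (num_classes, image_dim + 1)
        weights, bias = predicted[:, :-1], predicted[:, -1]     # split into weights and bias
        # Score every image against every class, including classes unseen in training,
        # because their classifier weights are generated from text at test time.
        scores = image_features @ weights.t() + bias            # (batch, num_classes)
        return scores

# Toy usage with stand-in features:
model = TextToClassifier()
text = torch.randn(5, 8000)       # 5 class descriptions
images = torch.randn(2, 4096)     # 2 images
print(model(text, images).shape)  # torch.Size([2, 5])

Note that this sketch only covers the fully connected case; the paper additionally predicts convolutional-layer weights from the same text features.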


Related research

research · 11/15/2016 · Learning a Deep Embedding Model for Zero-Shot Learning
Zero-shot learning (ZSL) models rely on learning a joint embedding space...

research · 05/26/2019 · Integration of Text-maps in Convolutional Neural Networks for Region Detection among Different Textual Categories
In this work, we propose a new technique that combines appearance and te...

research · 03/17/2021 · Large-Scale Zero-Shot Image Classification from Rich and Diverse Textual Descriptions
We study the impact of using rich and diverse textual descriptions of cl...

research · 06/29/2015 · Tell and Predict: Kernel Classifier Prediction for Unseen Visual Classes from Unstructured Text Descriptions
In this paper we propose a framework for predicting kernelized classifie...

research · 08/30/2018 · Towards Effective Deep Embedding for Zero-Shot Learning
Zero-shot learning (ZSL) attempts to recognize visual samples of unseen ...

research · 06/30/2019 · Visual Space Optimization for Zero-shot Learning
Zero-shot learning, which aims to recognize new categories that are not ...

research · 11/07/2016 · Latent Attention For If-Then Program Synthesis
Automatic translation from natural language descriptions into programs i...
