Prototypical Priors: From Improving Classification to Zero-Shot Learning

12/03/2015
by Saumya Jetley, et al.

Recent works on zero-shot learning use side information, such as visual attributes or natural-language semantics, to define relations between output visual classes, and then exploit these relations to draw inferences about unseen classes at test time. In a novel extension of this idea, we propose using visual prototypical concepts as side information. For most real-world visual object categories it may be difficult to establish a unique prototype; in cases such as traffic signs, brand logos, flags, and even natural-language characters, however, prototypical templates are available and can be leveraged to improve recognition performance. The present work proposes a way to incorporate this prototypical information into a deep learning framework. Using prototypes as prior information, the deep network learns to project input images into the prototypical embedding space, subject to minimization of the final classification loss. In our experiments on two datasets, of traffic signs and brand logos, prototypical embeddings incorporated into a conventional convolutional neural network improve recognition performance; accuracy on the Belga logo dataset is especially noteworthy and establishes a new state of the art. In zero-shot scenarios, the same system can be deployed directly to draw inferences about unseen classes, simply by adding the prototypical information for the new classes at test time. Thus, unlike earlier approaches, seen and unseen classes are tested with the same pipeline, and the system can be tuned to trade off seen- and unseen-class performance as the task requires. A comparison with one of the latest works in the zero-shot learning domain yields top results on the two datasets mentioned above.
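The core mechanism described above, projecting an image into the prototype embedding space and classifying by similarity to class prototypes, can be illustrated with a minimal sketch. This is not the authors' implementation: the linear projection stands in for the learned CNN mapping, and the cosine-similarity softmax is one plausible choice of scoring function. The key point it demonstrates is the zero-shot behavior, where adding a new class only requires adding its prototype embedding at test time, with no retraining.

```python
import math

def project(image_features, W):
    # Stand-in for the learned mapping from CNN features into the
    # prototype embedding space (here a fixed linear projection).
    return [sum(w * x for w, x in zip(row, image_features)) for row in W]

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def classify(image_emb, prototypes):
    # Softmax over similarities to each class prototype; classes are
    # defined entirely by their prototype embeddings, so seen and
    # unseen classes go through the same code path.
    sims = {c: cosine(image_emb, p) for c, p in prototypes.items()}
    m = max(sims.values())
    exps = {c: math.exp(s - m) for c, s in sims.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

# Prototypes for two seen classes (toy 2-D embeddings).
prototypes = {"stop": [1.0, 0.0], "yield": [0.0, 1.0]}
emb = project([0.9, 0.1], [[1.0, 0.0], [0.0, 1.0]])
seen_pred = max(classify(emb, prototypes), key=lambda c: classify(emb, prototypes)[c])

# Zero-shot: register a new class by its prototype alone.
prototypes["speed_limit"] = [0.7, 0.7]
unseen_pred = max(classify([0.7, 0.7], prototypes),
                  key=lambda c: classify([0.7, 0.7], prototypes)[c])
```

In this toy setup `seen_pred` is `"stop"` and `unseen_pred` is `"speed_limit"`, showing how inference on an unseen class needs nothing beyond its prototype.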

