DesCo: Learning Object Recognition with Rich Language Descriptions

06/24/2023
by Liunian Harold Li, et al.

Recent developments in vision-language approaches have instigated a paradigm shift in learning visual recognition models from language supervision. These approaches align objects with language queries (e.g. "a photo of a cat") and improve the models' adaptability to identify novel objects and domains. Recently, several studies have attempted to query these models with complex language expressions that include specifications of fine-grained semantic details, such as attributes, shapes, textures, and relations. However, simply incorporating language descriptions as queries does not guarantee accurate interpretation by the models. In fact, our experiments show that GLIP, the state-of-the-art vision-language model for object detection, often disregards contextual information in the language descriptions and instead relies heavily on detecting objects by their names alone. To tackle this challenge, we propose a new description-conditioned (DesCo) paradigm of learning object recognition models with rich language descriptions, consisting of two major innovations: 1) we employ a large language model as a commonsense knowledge engine to generate rich language descriptions of objects from object names and the raw image-text caption; 2) we design context-sensitive queries to improve the model's ability to decipher the intricate nuances embedded within descriptions and to force the model to focus on context rather than object names alone. On two novel-object detection benchmarks, LVIS and OmniLabel, under the zero-shot detection setting, our approach achieves 34.8 APr on LVIS minival (+9.1) and 29.3 AP on OmniLabel (+3.6), respectively, surpassing the prior state-of-the-art models, GLIP and FIBER, by a large margin.
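The two query styles contrasted in the abstract can be sketched as follows. This is a minimal, hypothetical illustration of the prompt-construction idea, not the authors' actual code or API: the function names and the exact query format are assumptions. A name-only query (GLIP-style) asks the model to find an object by its name, while a description-conditioned query pairs the name with an LLM-generated description and appends a confusable negative so the model cannot succeed by name matching alone.

```python
# Hypothetical sketch of the two query styles discussed in the abstract.
# Function names and query formats are illustrative assumptions, not the
# DesCo authors' implementation.

def make_name_query(name: str) -> str:
    """Baseline query style: detect an object by its name alone."""
    return f"a photo of a {name}"

def make_desco_query(name: str, description: str, negatives: list[str]) -> str:
    """Description-conditioned query: the target object is paired with a
    rich description of its attributes/relations, and confusable negative
    descriptions are appended so that the detector must attend to the
    context rather than just the object name."""
    positive = f"{name}, {description}"
    return ". ".join([positive] + negatives)

# Name-only query, as a GLIP-style model would receive it.
baseline = make_name_query("mallet")

# Description-conditioned query, with a hard negative that shares context
# words ("a tool with a ... head") and differs only in the attribute.
query = make_desco_query(
    "mallet",
    "a tool with a large wooden head and a slim handle",
    ["hammer, a tool with a metal head"],
)
```

In this sketch, the negative ("hammer") is deliberately similar to the positive, so a detector that ignores the description and keys only on names would score both candidates alike; the description-conditioned training described in the abstract is meant to penalize exactly that shortcut.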

research
06/20/2022

DALL-E for Detection: Language-driven Context Image Synthesis for Object Detection

Object cut-and-paste has become a promising approach to efficiently gene...
research
04/05/2023

What's in a Name? Beyond Class Indices for Image Recognition

Existing machine learning models demonstrate excellent performance in im...
research
09/21/2023

LLM-Grounder: Open-Vocabulary 3D Visual Grounding with Large Language Model as an Agent

3D visual grounding is a critical skill for household robots, enabling t...
research
06/03/2022

Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning

People say, "A picture is worth a thousand words". Then how can we get t...
research
03/29/2022

Image Retrieval from Contextual Descriptions

The ability to integrate context, including perceptual and temporal cues...
research
04/04/2023

Learning to Name Classes for Vision and Language Models

Large scale vision and language models can achieve impressive zero-shot ...
research
02/13/2013

Object Recognition with Imperfect Perception and Redundant Description

This paper deals with a scene recognition system in a robotics context. T...
