COBE: Contextualized Object Embeddings from Narrated Instructional Video

07/14/2020
by Gedas Bertasius, et al.

Many objects in the real world undergo dramatic variations in visual appearance. For example, a tomato may be red or green, sliced or chopped, fresh or fried, liquid or solid. Training a single detector to accurately recognize tomatoes in all these different states is challenging. On the other hand, contextual cues (e.g., the presence of a knife, a cutting board, a strainer or a pan) are often strongly indicative of how the object appears in the scene. Recognizing such contextual cues is useful not only to improve the accuracy of object detection or to determine the state of the object, but also to understand its functional properties and to infer ongoing or upcoming human-object interactions. A fully-supervised approach to recognizing object states and their contexts in the real-world is unfortunately marred by the long-tailed, open-ended distribution of the data, which would effectively require massive amounts of annotations to capture the appearance of objects in all their different forms. Instead of relying on manually-labeled data for this task, we propose a new framework for learning Contextualized OBject Embeddings (COBE) from automatically-transcribed narrations of instructional videos. We leverage the semantic and compositional structure of language by training a visual detector to predict a contextualized word embedding of the object and its associated narration. This enables the learning of an object representation where concepts relate according to a semantic language metric. Our experiments show that our detector learns to predict a rich variety of contextual object information, and that it is highly effective in the settings of few-shot and zero-shot learning.
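The core idea — mapping a detected object region into a contextualized *language* embedding space, so that recognition reduces to nearest-neighbor search over phrase embeddings — can be illustrated with a minimal sketch. All names, dimensions, and vectors below are toy stand-ins for illustration, not the paper's actual model or vocabulary:

```python
import numpy as np

# Hypothetical sketch of COBE-style inference: a detector emits, for each
# object region, a vector in a language embedding space; recognition is
# nearest-neighbor search against contextualized phrase embeddings.
# The embeddings here are random placeholders, not real BERT outputs.

rng = np.random.default_rng(0)
DIM = 8  # toy embedding dimension (a real system would use a larger one)

# Contextualized phrase vocabulary: the same noun in different states maps
# to *different* vectors, unlike a static per-word embedding.
vocab = {
    "tomato sliced": rng.normal(size=DIM),
    "tomato fried":  rng.normal(size=DIM),
    "knife":         rng.normal(size=DIM),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(region_embedding, vocab):
    """Return the vocabulary phrase whose embedding is most similar."""
    return max(vocab, key=lambda p: cosine(region_embedding, vocab[p]))

# Simulate a detector output that lands near the "tomato sliced" embedding.
pred = vocab["tomato sliced"] + 0.05 * rng.normal(size=DIM)
print(recognize(pred, vocab))
```

Because the target space is defined by language rather than a fixed label set, new object-state phrases can be added at inference time simply by embedding them, which is what makes the few-shot and zero-shot settings in the abstract natural for this representation.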


