Object Priors for Classifying and Localizing Unseen Actions

04/10/2021
by Pascal Mettes et al.

This work strives for the classification and localization of human actions in videos, without the need for any labeled video training examples. Where existing work relies on transferring global attribute or object information from seen to unseen action videos, we seek to classify and spatio-temporally localize unseen actions in videos from image-based object information only. We propose three spatial object priors, which encode local person and object detectors along with their spatial relations. On top of these, we introduce three semantic object priors, which extend semantic matching through word embeddings with three simple functions that tackle semantic ambiguity, object discrimination, and object naming. A video embedding combines the spatial and semantic object priors. It enables a new video retrieval task that retrieves action tubes in video collections based on user-specified objects, spatial relations, and object size. Experimental evaluation on five action datasets shows the importance of spatial and semantic object priors for unseen actions. We find that persons and objects have preferred spatial relations that benefit unseen action localization, while using multiple languages and simple object filtering directly improves semantic matching, leading to state-of-the-art results for both unseen action classification and localization.
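The two families of priors are easy to picture with a toy example. Below is a minimal sketch in Python with NumPy of how an unseen action could be scored by combining a semantic prior (word-embedding similarity between the action name and detected object names) with a spatial prior (proximity of a detected object to a detected person). The word vectors, box coordinates, and function names are illustrative assumptions, not the authors' implementation; the paper's actual priors additionally handle semantic ambiguity, object discrimination, object naming, and learned preferred spatial relations.

```python
import numpy as np

# Toy pretrained word vectors (in practice, load word2vec/fastText embeddings).
# All names and values here are illustrative assumptions, not the authors' code.
WORD_VECS = {
    "skateboarding": np.array([0.90, 0.10, 0.30]),
    "skateboard":    np.array([0.80, 0.20, 0.40]),
    "riding":        np.array([0.15, 0.85, 0.25]),
    "horse":         np.array([0.10, 0.90, 0.20]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def semantic_prior(action, obj):
    """Word-embedding match between an action name and a detected object name."""
    return cosine(WORD_VECS[action], WORD_VECS[obj])

def spatial_prior(person_box, object_box):
    """Score how close a detected object is to a detected person.
    Boxes are (x1, y1, x2, y2); the score decays with the distance between
    box centers, normalized by the person box diagonal."""
    pc = np.array([(person_box[0] + person_box[2]) / 2,
                   (person_box[1] + person_box[3]) / 2])
    oc = np.array([(object_box[0] + object_box[2]) / 2,
                   (object_box[1] + object_box[3]) / 2])
    diag = np.hypot(person_box[2] - person_box[0],
                    person_box[3] - person_box[1])
    return float(np.exp(-np.linalg.norm(pc - oc) / diag))

# One detected person and one detected object in a frame.
person = (100, 50, 180, 250)
objects = {"skateboard": (120, 240, 200, 270)}

def action_score(action):
    """Score an unseen action by its best-matching nearby object."""
    return max(semantic_prior(action, o) * spatial_prior(person, b)
               for o, b in objects.items())

print(max(["skateboarding", "riding"], key=action_score))  # -> skateboarding
```

In this sketch the unseen action is classified without any video training examples: only an object detector, a person detector, and off-the-shelf word embeddings are assumed.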


Related research

07/28/2017
Spatial-Aware Object Embeddings for Zero-Shot Localization and Classification of Actions
We aim for zero-shot localization and classification of human actions in...

03/08/2022
Universal Prototype Transport for Zero-Shot Action Recognition and Localization
This work addresses the problem of recognizing action categories in vide...

02/01/2021
Forecasting Action through Contact Representations from First Person Video
Human actions involving hand manipulations are structured according to t...

07/28/2016
SEMBED: Semantic Embedding of Egocentric Action Videos
We present SEMBED, an approach for embedding an egocentric object intera...

06/05/2015
Sentence Directed Video Object Codetection
We tackle the problem of video object codetection by leveraging the weak...

12/13/2018
Detecting rare visual relations using analogies
We seek to detect visual relations in images of the form of triplets t =...

07/31/2021
Learning Embeddings that Capture Spatial Semantics for Indoor Navigation
Incorporating domain-specific priors in search and navigation tasks has ...
