Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens

06/13/2022
by   Elad Ben-Avraham, et al.

Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. This raises a question: if a small set of annotated images is available, either within or outside the domain of interest, how can we leverage it for a video downstream task? We propose a learning framework, StructureViT (SViT for short), which demonstrates how the structure of a small number of images, available only during training, can improve a video model. SViT relies on two key insights. First, since both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual video frames should "align" with those of still images. This is enforced via a Frame-Clip Consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a Hand-Object Graph, whose nodes are hands and objects with their locations, and whose edges are physical contact/no-contact relations. SViT shows strong performance improvements on multiple video understanding tasks and datasets, and it won the Ego4D CVPR'22 Object State Localization challenge. For code and pretrained models, visit the project page at <https://eladb3.github.io/SViT/>
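To make the two ideas in the abstract concrete, here is a minimal sketch of (a) a Hand-Object Graph with hand/object nodes carrying box locations and contact edges, and (b) a frame-clip consistency penalty that measures the squared distance between per-frame object tokens from the video pathway and the tokens obtained when the same frame is processed as a still image. All names, fields, and the plain mean-squared-error form of the loss are illustrative assumptions, not the authors' actual implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical node: a hand or object with a bounding-box location.
@dataclass
class Node:
    kind: str                                # "hand" or "object"
    box: Tuple[float, float, float, float]   # (x1, y1, x2, y2)

# Hypothetical Hand-Object Graph: nodes plus index pairs marking
# physical contact (absence of an edge means no contact).
@dataclass
class HandObjectGraph:
    nodes: List[Node] = field(default_factory=list)
    contact_edges: List[Tuple[int, int]] = field(default_factory=list)

def frame_clip_consistency(frame_tokens, image_tokens):
    """Mean squared distance between object tokens a frame receives inside
    the video clip and the tokens it receives as a standalone image.
    Minimizing this aligns the two scene representations.
    Tokens are nested lists shaped [num_tokens][dim]."""
    assert len(frame_tokens) == len(image_tokens)
    total, count = 0.0, 0
    for f_tok, i_tok in zip(frame_tokens, image_tokens):
        for a, b in zip(f_tok, i_tok):
            total += (a - b) ** 2
            count += 1
    return total / count
```

In a real training loop the two token sets would come from a shared transformer run on the clip and on the individual frame, and the penalty would be added to the task loss; the sketch only fixes the shape of that computation.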


