Action Concept Grounding Network for Semantically-Consistent Video Generation

11/23/2020
by Wei Yu, et al.

Recent works in self-supervised video prediction have mainly focused on passive forecasting and low-level action-conditional prediction, both of which sidestep the problem of semantic learning. We introduce the task of semantic action-conditional video prediction, which can be regarded as an inverse problem of action recognition. The main challenge of this new task lies in how to effectively inform the model of semantic action information. To bridge vision and language, we utilize the idea of capsules and propose a novel video prediction model, the Action Concept Grounding Network (ACGN). Our method is evaluated on two newly designed synthetic datasets, CLEVR-Building-Blocks and Sapien-Kitchen, and experiments show that, given different action labels, our ACGN can correctly condition on the instructions and generate the corresponding future frames without the need for bounding boxes. We further demonstrate that our trained model can make out-of-distribution predictions for concurrent actions, be quickly adapted to new object categories, and exploit its learned features for object detection. Additional visualizations can be found at https://iclr-acgn.github.io/ACGN/.
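The abstract does not describe the architecture in detail, so the sketch below is only a rough illustration and not the authors' ACGN. Under assumed design choices, it shows how a discrete semantic action label could be embedded into a small set of capsule-like concept vectors and then used to modulate the features of a convolutional next-frame predictor. All names (ActionConceptGrounding, NextFramePredictor), layer sizes, and the FiLM-style modulation are hypothetical.

# Minimal sketch (not the authors' code): an action-conditional next-frame
# predictor in PyTorch. The capsule-style grounding, layer sizes, and all
# names are assumptions made for illustration only.
import torch
import torch.nn as nn


class ActionConceptGrounding(nn.Module):
    """Maps a discrete action label to 'concept capsule' vectors that
    later modulate visual features (an assumed mechanism)."""

    def __init__(self, num_actions: int, num_capsules: int = 8, capsule_dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(num_actions, num_capsules * capsule_dim)
        self.num_capsules = num_capsules
        self.capsule_dim = capsule_dim

    def forward(self, action: torch.Tensor) -> torch.Tensor:
        # action: (B,) integer labels -> (B, num_capsules, capsule_dim)
        caps = self.embed(action).view(-1, self.num_capsules, self.capsule_dim)
        return torch.tanh(caps)


class NextFramePredictor(nn.Module):
    """Encodes context frames, conditions on action capsules via
    feature-wise modulation (an assumption), and decodes the next frame."""

    def __init__(self, num_actions: int, context_frames: int = 2):
        super().__init__()
        in_ch = 3 * context_frames
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.grounding = ActionConceptGrounding(num_actions)
        cond_dim = self.grounding.num_capsules * self.grounding.capsule_dim
        self.to_scale = nn.Linear(cond_dim, 128)
        self.to_shift = nn.Linear(cond_dim, 128)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) context clip; action: (B,) commanded action label
        b, t, c, h, w = frames.shape
        feat = self.encoder(frames.view(b, t * c, h, w))
        caps = self.grounding(action).flatten(1)        # (B, cond_dim)
        scale = self.to_scale(caps)[:, :, None, None]   # broadcast over H, W
        shift = self.to_shift(caps)[:, :, None, None]
        feat = feat * (1 + scale) + shift               # action-conditioned features
        return self.decoder(feat)                       # predicted next frame


if __name__ == "__main__":
    model = NextFramePredictor(num_actions=10)
    frames = torch.rand(4, 2, 3, 64, 64)       # 4 clips, 2 context frames each
    action = torch.randint(0, 10, (4,))        # one semantic action label per clip
    print(model(frames, action).shape)         # torch.Size([4, 3, 64, 64])

Changing only the action label while keeping the same context frames would, in this toy setup, yield different predicted frames, which mirrors the paper's stated goal of conditioning generation on semantic action information.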

Related research

10/07/2022 · See, Plan, Predict: Language-guided Cognitive Planning with Video Prediction
Cognitive planning is the structural decomposition of complex tasks into...

01/28/2021 · Playable Video Generation
This paper introduces the unsupervised learning problem of playable vide...

04/09/2019 · Back to the Future: Knowledge Distillation for Human Action Anticipation
We consider the task of training a neural network to anticipate human ac...

02/24/2023 · A Joint Modeling of Vision-Language-Action for Target-oriented Grasping in Clutter
We focus on the task of language-conditioned grasping in clutter, in whi...

05/08/2018 · Weakly-Supervised Video Object Grounding from Text by Loss Weighting and Object Interaction
We study weakly-supervised video object grounding: given a video segment...

08/23/2018 · Predicting Action Tubes
In this work, we present a method to predict an entire `action tube' (a ...

09/08/2016 · Learning Action Concept Trees and Semantic Alignment Networks from Image-Description Data
Action classification in still images has been a popular research topic ...
