
Compositional Video Synthesis with Action Graphs

by Amir Bar, et al.

Videos of actions are complex spatio-temporal signals with rich compositional structure. Current generative models are limited in their ability to generate examples of object configurations outside the range they were trained on. To address this, we introduce a generative model (AG2Vid) based on Action Graphs, a natural and convenient structure that represents the dynamics of actions between objects over time. Our AG2Vid model disentangles appearance and position features, allowing for more accurate generation. AG2Vid is evaluated on the CATER and Something-Something datasets and outperforms other baselines. Finally, we show how Action Graphs can be used to generate novel compositions of unseen actions.
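The abstract describes an Action Graph as a structure whose nodes are objects and whose edges are actions between objects, annotated with timing. A minimal illustrative sketch of such a structure is below; the class and method names are our own invention for exposition and do not reflect the paper's actual implementation or API:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ActionEdge:
    """A timed action between two object nodes, e.g. 'hand picks up cube'."""
    subject: str
    action: str
    obj: str
    start: int  # first frame the action is active
    end: int    # frame at which the action finishes (exclusive)

@dataclass
class ActionGraph:
    objects: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_object(self, name: str) -> None:
        if name not in self.objects:
            self.objects.append(name)

    def add_action(self, subject: str, action: str, obj: str,
                   start: int, end: int) -> None:
        # Adding an action edge implicitly registers both endpoint objects.
        self.add_object(subject)
        self.add_object(obj)
        self.edges.append(ActionEdge(subject, action, obj, start, end))

    def active_at(self, t: int) -> list:
        """Actions in progress at frame t; a video generator could condition
        each frame on this time-sliced view of the graph."""
        return [e for e in self.edges if e.start <= t < e.end]

g = ActionGraph()
g.add_action("hand", "pick up", "cube", start=0, end=8)
g.add_action("hand", "place", "cube", start=8, end=16)
print([e.action for e in g.active_at(4)])   # ['pick up']
print([e.action for e in g.active_at(10)])  # ['place']
```

Because the timing lives on the edges, composing unseen action sequences amounts to adding new edges over the same object nodes, which mirrors the compositionality argument made in the abstract.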


Related research

Revisiting spatio-temporal layouts for compositional action recognition

Recognizing human actions is fundamentally a spatio-temporal reasoning p...

LARNet: Latent Action Representation for Human Action Synthesis

We present LARNet, a novel end-to-end approach for generating human acti...

Generative Models for Pose Transfer

We investigate nearest neighbor and generative models for transferring p...

EQUI-VOCAL: Synthesizing Queries for Compositional Video Events from Limited User Interactions [Technical Report]

We introduce EQUI-VOCAL: a new system that automatically synthesizes que...

The Case for Durative Actions: A Commentary on PDDL2.1

The addition of durative actions to PDDL2.1 sparked some controversy. Fo...

Representing Videos as Discriminative Sub-graphs for Action Recognition

Human actions are typically of combinatorial structures or patterns, i.e...

GenéLive! Generating Rhythm Actions in Love Live!

A rhythm action game is a music-based video game in which the player is ...