
Compositional Video Synthesis with Action Graphs

06/27/2020
by Amir Bar, et al.

Videos of actions are complex spatio-temporal signals containing rich compositional structure. Current generative models are limited in their ability to generate examples of object configurations outside the range they were trained on. To address this limitation, we introduce a generative model (AG2Vid) based on Action Graphs, a natural and convenient structure that represents the dynamics of actions between objects over time. Our AG2Vid model disentangles appearance and position features, allowing for more accurate generation. AG2Vid is evaluated on the CATER and Something-Something datasets and outperforms baseline methods. Finally, we show how Action Graphs can be used to generate novel compositions of unseen actions.
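
As a concrete illustration of the structure the abstract describes, here is a minimal Python sketch of an Action Graph: objects as nodes, and actions as timed edges between pairs of objects. All names here (ObjectNode, ActionEdge, ActionGraph, active_at) and the exact fields are illustrative assumptions, not the authors' implementation; the point is only to show how actions between objects can be scheduled over time.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an Action Graph (not the paper's actual API):
# objects are nodes, each action is a timed edge between two objects.

@dataclass(frozen=True)
class ObjectNode:
    obj_id: int
    category: str            # e.g. "cube", "cone"
    attributes: tuple = ()   # e.g. ("small", "red")

@dataclass(frozen=True)
class ActionEdge:
    action: str   # e.g. "slide", "rotate", "contain"
    source: int   # obj_id of the acting object
    target: int   # obj_id of the object acted upon
    start: int    # first frame of the action
    end: int      # last frame of the action

@dataclass
class ActionGraph:
    objects: list = field(default_factory=list)
    actions: list = field(default_factory=list)

    def active_at(self, frame: int):
        """Return the actions taking place at a given frame."""
        return [a for a in self.actions if a.start <= frame <= a.end]

# Example: a small cube slides toward a cone, then the cone covers it.
g = ActionGraph(
    objects=[ObjectNode(0, "cube", ("small", "red")),
             ObjectNode(1, "cone", ("large", "blue"))],
    actions=[ActionEdge("slide", 0, 1, start=0, end=15),
             ActionEdge("contain", 1, 0, start=16, end=30)],
)
print([a.action for a in g.active_at(10)])  # ['slide']
```

Encoding actions as (start, end) intervals on edges is what makes composition natural under this view: a novel combination of actions is just a new set of timed edges over the same objects.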

Related research

Revisiting spatio-temporal layouts for compositional action recognition (11/02/2021)
LARNet: Latent Action Representation for Human Action Synthesis (10/21/2021)
Generative Models for Pose Transfer (06/24/2018)
EQUI-VOCAL: Synthesizing Queries for Compositional Video Events from Limited User Interactions [Technical Report] (01/03/2023)
The Case for Durative Actions: A Commentary on PDDL2.1 (09/26/2011)
Representing Videos as Discriminative Sub-graphs for Action Recognition (01/11/2022)
GenéLive! Generating Rhythm Actions in Love Live! (02/25/2022)