Motion-Based Generator Model: Unsupervised Disentanglement of Appearance, Trackable and Intrackable Motions in Dynamic Patterns

11/26/2019
by   Jianwen Xie, et al.

Dynamic patterns are characterized by complex spatial and motion patterns. Understanding dynamic patterns requires a disentangled representational model that separates the factorial components. A commonly used model for dynamic patterns is the state space model, where the state evolves over time according to a transition model and generates the observed image frames according to an emission model. To model the motions explicitly, it is natural to base the model on the motions, or the displacement fields, of the pixels. Thus, in the emission model, we let the hidden state generate a displacement field, which warps the trackable component of the previous image frame to generate the next frame, while a simultaneously emitted residual image accounts for the change that cannot be explained by the deformation. The warping of the previous frame captures the trackable part of the change between frames, while the residual image captures the intrackable part. We use a maximum likelihood algorithm to learn the model that iterates between inferring the latent noise vectors that drive the transition model and updating the parameters given the inferred latent vectors. Meanwhile, we adopt a regularization term that penalizes the norms of the residual images, encouraging the model to explain the change between image frames by trackable motion. Unlike existing methods for dynamic patterns, we learn our model in an unsupervised setting, without ground truth displacement fields. In addition, our model defines a notion of intrackability through the separation of the warped and residual components in each image frame. We show that our method can synthesize realistic dynamic patterns and disentangle appearance, trackable motion, and intrackable motion. The learned models are useful for motion transfer, and they provide a natural way to define and measure the intrackability of a dynamic pattern.
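The emission step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the warping here is nearest-neighbor backward warping (the paper does not specify its interpolation scheme in this abstract), and all function and variable names are our own.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp an image by a displacement field (nearest-neighbor).

    image: (H, W) array, the previous frame.
    flow:  (H, W, 2) array of per-pixel (dy, dx) displacements.
    Each output pixel (y, x) samples image[y - dy, x - dx], clipped to bounds.
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
    return image[src_y, src_x]

def emit_frame(prev_frame, flow, residual):
    """Emission model: next frame = warped trackable part + residual.

    During learning, the norm of `residual` would be penalized so that
    the model prefers to explain frame changes by trackable motion.
    """
    return warp(prev_frame, flow) + residual

# Toy example: a uniform one-pixel shift to the right, zero residual.
prev = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0                      # dx = 1 everywhere
residual = np.zeros((4, 4))
nxt = emit_frame(prev, flow, residual)
```

In the full model, `flow` and `residual` would be emitted by a network from the hidden state rather than given; the sketch only shows how the two components combine into the next frame.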

