Generative Models for Pose Transfer

06/24/2018
by Patrick Chao, et al.

We investigate nearest neighbor and generative models for transferring pose between persons. Given a video of one person performing a sequence of actions, we attempt to generate a video of another person performing the same actions. Our generative model (pix2pix) outperforms k-NN both at generating corresponding frames and at generalizing beyond the demonstrated action set. Our most salient contribution is a pipeline (pose detection, face detection, k-NN based pairing) that is effective at performing the desired task. We also detail several iterative improvements and failure modes.
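The k-NN based pairing step mentioned in the pipeline can be illustrated with a small sketch. This is not the authors' implementation; it assumes pose keypoints have already been extracted per frame (e.g. by a pose detector) and pairs each source frame with the target frame whose keypoints are nearest in Euclidean distance:

```python
import numpy as np

def pair_frames_by_pose(source_poses, target_poses):
    """1-NN pairing sketch: for each source frame's pose, return the index
    of the target frame with the closest pose (Euclidean distance over
    flattened keypoint coordinates)."""
    src = np.asarray(source_poses, dtype=float).reshape(len(source_poses), -1)
    tgt = np.asarray(target_poses, dtype=float).reshape(len(target_poses), -1)
    # Pairwise squared distances via |s|^2 + |t|^2 - 2 s.t (broadcasting)
    d2 = ((src ** 2).sum(axis=1)[:, None]
          + (tgt ** 2).sum(axis=1)[None, :]
          - 2.0 * src @ tgt.T)
    # Nearest target frame index for each source frame
    return d2.argmin(axis=1)

# Toy usage: two frames, each with two (x, y) keypoints.
# Source frame 0 matches target frame 1, and vice versa.
idx = pair_frames_by_pose(
    [[[0, 0], [1, 1]], [[5, 5], [6, 6]]],
    [[[5, 5], [6, 6]], [[0, 0], [1, 1]]],
)
print(idx)  # [1 0]
```

In a full pipeline, the paired target frames (or their pose skeletons) would then serve as conditioning input to the generative model.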


