Flow Guided Transformable Bottleneck Networks for Motion Retargeting

06/14/2021
by   Jian Ren, et al.

Human motion retargeting aims to transfer the motion of one person in a "driving" video or set of images to another person. Existing efforts leverage a long training video from each target person to train a subject-specific motion transfer model. However, the scalability of such methods is limited, as each model can only generate videos for the given target subject, and such training videos are labor-intensive to acquire and process. Few-shot motion transfer techniques, which only require one or a few images from a target, have recently drawn considerable attention. Methods addressing this task generally use either 2D or explicit 3D representations to transfer motion, and in doing so, sacrifice either accurate geometric modeling or the flexibility of an end-to-end learned representation. Inspired by the Transformable Bottleneck Network, which renders novel views and manipulations of rigid objects, we propose an approach based on an implicit volumetric representation of the image content, which can then be spatially manipulated using volumetric flow fields. We address the challenging question of how to aggregate information across different body poses, learning flow fields that allow for combining content from the appropriate regions of input images of highly non-rigid human subjects performing complex motions into a single implicit volumetric representation. This allows us to learn our 3D representation solely from videos of moving people. Armed with both 3D object understanding and end-to-end learned rendering, this categorically novel representation delivers state-of-the-art image generation quality, as shown by our quantitative and qualitative evaluations.
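The core mechanism the abstract describes, spatially manipulating an implicit volumetric representation with a volumetric flow field, amounts to resampling a 3D feature grid along per-voxel offsets. The snippet below is an illustrative NumPy/SciPy sketch of that warping step, not the paper's implementation; the function name `warp_volume`, the `(C, D, H, W)` layout, and the backward-warp convention are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(volume, flow):
    """Resample a feature volume along a dense 3D flow field.

    volume: (C, D, H, W) feature grid (the "bottleneck" volume).
    flow:   (3, D, H, W) per-voxel offsets (dz, dy, dx) in voxel units,
            pointing from each output voxel back into the source volume.
    """
    C, D, H, W = volume.shape
    # Identity sampling grid: one (z, y, x) coordinate per output voxel.
    grid = np.stack(
        np.meshgrid(np.arange(D), np.arange(H), np.arange(W), indexing="ij"),
        axis=0,
    ).astype(np.float32)
    coords = grid + flow  # backward warp: sample the source at offset positions
    # Trilinear interpolation (order=1) applied per feature channel.
    return np.stack(
        [map_coordinates(volume[c], coords, order=1, mode="nearest")
         for c in range(C)]
    )

# Sanity check: a zero flow field reproduces the input volume exactly.
vol = np.random.rand(2, 4, 4, 4).astype(np.float32)
assert np.allclose(warp_volume(vol, np.zeros((3, 4, 4, 4), np.float32)), vol)
```

In the paper's setting the flow field would be predicted by a network rather than given, and warped volumes from several input poses would be aggregated before decoding; the resampling primitive above is the differentiable building block that makes that possible.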


Related research

- Do as we do: Multiple Person Video-To-Video Transfer (04/10/2021)
  Our goal is to transfer the motion of real people from a source video to...
- Transformable Bottleneck Networks (04/13/2019)
  We propose a novel approach to performing fine-grained 3D manipulation o...
- Dual-MTGAN: Stochastic and Deterministic Motion Transfer for Image-to-Video Synthesis (02/26/2021)
  Generating videos with content and motion variations is a challenging ta...
- REMOT: A Region-to-Whole Framework for Realistic Human Motion Transfer (09/01/2022)
  Human Video Motion Transfer (HVMT) aims to, given an image of a source p...
- AutoDecoding Latent 3D Diffusion Models (07/07/2023)
  We present a novel approach to the generation of static and articulated ...
- Structured Local Radiance Fields for Human Avatar Modeling (03/28/2022)
  It is extremely challenging to create an animatable clothed human avatar...
- DwNet: Dense warp-based network for pose-guided human video generation (10/21/2019)
  Generation of realistic high-resolution videos of human subjects is a ch...
