Pose-guided Generative Adversarial Net for Novel View Action Synthesis

10/15/2021
by   Xianhang Li, et al.

We focus on the problem of novel-view human action synthesis. Given an action video, the goal is to generate the same action from an unseen viewpoint. Novel-view video synthesis is naturally more challenging than image synthesis: it requires generating a sequence of realistic frames with temporal coherency. In addition, transferring different actions to a novel target view requires simultaneous awareness of the action category and the viewpoint change. To address these challenges, we propose a novel framework named Pose-guided Action Separable Generative Adversarial Net (PAS-GAN), which utilizes pose to alleviate the difficulty of this task. First, we propose a recurrent pose-transformation module that transforms actions from the source view to the target view and generates the novel-view pose sequence in 2D coordinate space. Second, a well-transformed pose sequence enables us to separate the action and background in the target view. We employ a novel local-global spatial transformation module to effectively generate sequential video features in the target view from these action and background features. Finally, the generated video features are used to synthesize the human action with the help of a 3D decoder. Moreover, to focus on the dynamic action in the video, we propose a novel multi-scale action-separable loss that further improves video quality. We conduct extensive experiments on two large-scale multi-view human action datasets, NTU-RGBD and PKU-MMD, demonstrating the effectiveness of PAS-GAN, which outperforms existing approaches.
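The recurrent pose-transformation idea described above can be illustrated with a toy sketch. The function below is hypothetical and not the paper's actual module: it approximates the view change with a fixed 2x2 linear map and uses a simple exponential-moving-average hidden state as a stand-in for the recurrent unit that enforces temporal coherency across the generated pose sequence.

```python
import numpy as np

def transform_pose_sequence(poses, view_matrix, alpha=0.5):
    """Toy recurrent pose transformation (illustrative sketch only).

    poses:       (T, J, 2) array of 2D keypoints in the source view
                 (T frames, J joints).
    view_matrix: (2, 2) linear approximation of the source-to-target
                 view change (a learned transformation in practice).
    alpha:       smoothing factor of the recurrent hidden state; a
                 stand-in for a learned recurrent cell.
    Returns a (T, J, 2) pose sequence in the target view.
    """
    T, J, _ = poses.shape
    hidden = np.zeros((J, 2))
    out = np.empty_like(poses)
    for t in range(T):
        # Per-frame viewpoint change applied to every joint.
        projected = poses[t] @ view_matrix.T
        # Recurrent smoothing couples adjacent frames for coherency.
        hidden = alpha * hidden + (1.0 - alpha) * projected
        out[t] = hidden
    return out
```

With `alpha=0.0` the recurrence is disabled and each frame is transformed independently, which makes the per-frame view mapping easy to inspect in isolation.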

