Multi-Frame Content Integration with a Spatio-Temporal Attention Mechanism for Person Video Motion Transfer

08/12/2019 · Kun Cheng et al. · Tsinghua University, Tencent, Columbia University

Existing person video generation methods either lack the flexibility to control both the appearance and the motion, or fail to preserve detailed appearance and temporal consistency. In this paper, we tackle the problem of motion transfer for generating person videos, which provides control over both the appearance and the motion. Specifically, we transfer the motion of one person in a target video to another person in a source video, while preserving the appearance of the source person. Unlike existing state-of-the-art methods, which rely on only one source frame, our proposed method integrates information from multiple source frames based on a spatio-temporal attention mechanism to preserve rich appearance details. In addition to a spatial discriminator employed to encourage frame-level fidelity, a multi-range temporal discriminator is adopted to encourage the generated video to resemble the temporal dynamics of a real video over various time ranges. A challenging real-world dataset, which contains about 500 dancing video clips with complex and unpredictable motions, is collected for training and testing. Extensive experiments show that the proposed method produces more photo-realistic and temporally consistent person videos than previous methods. As our method decomposes the synthesis of the foreground and background into two branches, a flexible background substitution application can also be achieved.


1 Introduction

Person video generation is a cutting-edge topic with plenty of essential applications. It can be used to generate synthetic training data for high-level vision tasks [18, 6, 11, 13, 27, 5, 21], or to help develop more powerful video editing tools. Current methods for person video generation can be roughly classified into three categories: unconditional video generation, video prediction, and motion transfer. Unconditional person video generation, which maps one or multiple 1D latent vectors to a person video [23], relies on the latent vectors to provide both the appearance and the motion information. After training, unconditional video generation models can produce various videos by sampling random latent vector sequences. However, unconditional video generation lacks the flexibility to control the exact motion of the generated video. In person video prediction, researchers strive to develop methods that predict subsequent frames conditioned on the preceding ones [24, 25]. Person video prediction generally divides the problem into two stages: the first stage concentrates on predicting a pose sequence given the past one, while the second stage generates a person video based on the predicted pose sequence. The second stage is in fact tackling the motion transfer problem. However, the proposed methods for video prediction pay most of their attention to the first stage, and lack a thorough consideration of preserving appearance details and temporal consistency during the second stage.

As shown in Fig. 1, in this paper we devote our efforts to the problem of person video motion transfer, which aims at transferring the motion of one person in a target video to another person in a source video, while preserving the appearance of the source. Unlike unconditional video generation or video prediction, we obtain full control of the exact motion in the generated video by assigning a target video with the desired motion sequence. Although plenty of approaches strive to solve the pose transfer problem for a single image [14, 15, 1, 17, 19, 20], directly applying them to videos introduces severe flicker artifacts when the motions are complex and unpredictable. Moreover, the appearance information from a single image is insufficient for synthesizing an image with a vastly different target pose. Some recent methods [26, 3] try to narrow this gap by training a video-to-video translator with a fixed-range temporal discriminator to ensure realistic temporal dynamics. In essence, the existing video-to-video methods learn a manifold of all the source frames, and then map the target pose sequence to a trajectory on that manifold to generate a new video with the motion from the target and the appearance from the source. Thus, they require training an individual model for each source video. In addition, both the background and the foreground of the generated video are bound together to be consistent with the source video, without the flexibility of background substitution.

On the contrary, we strive to provide a solution for general person video motion transfer, which can transfer complex and unpredictable motions between any pair of person videos with a single model. Instead of modeling a manifold of all the source frames for a single person, our model is capable of inferring the appearance of an arbitrary person in the target pose given only a few source frames. Rather than counting on a single frame to provide the appearance information, as previous methods do [14, 15, 1, 17, 19, 20], we propose a multi-frame content integration strategy to extract rich appearance information from the source video. The multi-frame content integration process is guided by a spatio-temporal attention mechanism. With separate foreground and background branches, our method can also accomplish background substitution with ease. In addition to a traditional spatial discriminator, which is adopted to enrich the frame-level photo-realistic details, a multi-range temporal discriminator is proposed to encourage the generated video to resemble the temporal dynamics of a real video over different time ranges. To evaluate model performance for general person video motion transfer, we collect a dataset with around 500 dancing video clips, namely Dance-500, which contains complex and unpredictable motions in the wild. The proposed dataset will be made publicly available for research purposes. Through quantitative and qualitative comparisons to previous methods, we demonstrate that our proposed method generates more realistic motion transfer results in both the spatial and temporal domains.

In summary, the main contributions of this paper are four-fold. i) We propose a flexible motion transfer pipeline, which can be adopted to decompose and recombine the foreground, background and motion information. ii) We introduce a multi-frame content integration module based on a spatio-temporal attention mechanism to produce sharper foreground details and more natural background completion results. iii) We propose a multi-range temporal discriminator to encourage the motion transfer results to be more temporally consistent. iv) We collect a challenging person video dataset with complex and unpredictable motions in wild scenes, which is more general for training and evaluating a person video motion transfer model.

2 Related Work

Figure 2: The pipeline of our method. For each time step t, our model feeds the source frames, the corresponding source poses and a target pose to a multi-frame preliminary feature extraction module. The computed preliminary features of the foreground and the background are processed separately to produce a synthetic background B_t, a synthetic foreground F_t and a corresponding foreground mask M_t. Finally, we composite the output frame by combining F_t and B_t under the guidance of M_t.

Video Prediction.

Most of the state-of-the-art methods decompose the generation problem into two stages: a pose sequence prediction stage and a pose-guided person video generation stage. The second stage indeed has the same goal as motion transfer. In the second stage, Villegas et al. [24, 25] and Yang et al. [29] exploit a simple encoder-decoder structure to combine the information of a single source image and a predicted motion code to produce a person image in the target pose. These methods only consider image-level adversarial losses, so the generated results exhibit obvious flicker artifacts. Recently, Zhao et al. [31] propose an additional refinement stage to post-process the generated video for better temporal consistency. At the second stage of Zhao et al.'s method, a conditional global temporal discriminator provides an adversarial loss to train the global refinement network with computationally heavy 3D convolutions. However, this temporal refinement is an individual post-processing stage that is neither end-to-end trainable nor practical for handling live streaming videos frame by frame.

Person Image Generation.

A wide range of methods have been proposed to generate a realistic person image based on a given pose. Ma et al. [14] transfer the human pose of one image to another in a coarse-to-fine manner. Afterwards, Ma et al. [15] propose to disentangle foreground, background and pose features to improve synthesis quality, which also makes the generation process more controllable. Meanwhile, Balakrishnan et al. [1] explicitly utilize pose estimation results to split and transform different body parts accordingly, which generates impressive results. However, directly applying the above methods to person video motion transfer frame by frame leads to obvious flicker artifacts, due to the lack of temporal consistency.

Video-to-Video Translation.

Wang et al. [26] propose a general video-to-video translation method, which can be applied to translate a pose sequence into a person dancing video. Chan et al. [3] propose a more complete system for translating a pose sequence into a person dancing video. Both of the above methods contain a local temporal discriminator, taking 2 or 3 frames as input, to enforce temporal consistency. Although these two methods generate impressive results, they need to train a new model for each source video, lacking the flexibility of applying the trained model to an arbitrary individual. The person video motion transfer problem can also be regarded as a more challenging video-to-video translation problem, in which two input videos offer information in different aspects.

Figure 3: Multi-frame preliminary feature extraction. We take the foreground and background features at the second-to-last layer of the single-frame baseline [1] as the preliminary features. The single-frame baseline contains two branches for the foreground and the background respectively. While the background branch learns to accomplish image inpainting to produce a complete background, the foreground branch first transfers different body parts using simple affine transforms and then integrates all the parts to produce a refined foreground. Finally, the foreground and background are combined according to a predicted foreground mask.

3 Method

We address the problem of person video motion transfer. Here a person video is a video containing frames x_1, ..., x_T, in which a person performs whole-body actions such as dancing. To simplify the problem, we assume that both the camera and the background are static, which is already a very challenging setting that remains unsolved. Given a source person video V^s and a target person video V^t, person video motion transfer aims at transferring the motion of V^t to V^s, while preserving the appearance of V^s. This gives explicit control of both the motion and the appearance of the generated video. A pretrained 2D pose estimator [4, 28] is used to extract motion information from a video in the form of a pose sequence. The pose in the t-th frame is represented by a K-channel heatmap, where K denotes the number of keypoints [1]. We denote the pose sequences for the source and target videos as P^s and P^t, respectively. More advanced pose estimators can be used in the future to acquire more accurate pose information.
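As a concrete illustration, a K-channel pose heatmap can be rendered by placing one Gaussian per keypoint. The Gaussian encoding and the sigma value below are common practice and our assumption; the text only specifies a K-channel heatmap per pose.

```python
import numpy as np

def pose_heatmaps(keypoints, h, w, sigma=6.0):
    """Render a K-channel heatmap from K (x, y) keypoints.

    One Gaussian bump per joint; sigma controls its spatial extent.
    This is a generic sketch, not the paper's exact encoding.
    """
    ys, xs = np.mgrid[0:h, 0:w]
    maps = np.zeros((len(keypoints), h, w), dtype=np.float32)
    for k, (x, y) in enumerate(keypoints):
        # Peak value 1.0 at the keypoint location, decaying with distance.
        maps[k] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps
```

A pose sequence is then simply the stack of these heatmaps over time, one per frame.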

Similar to the single-image pose transfer method proposed by Balakrishnan et al. [1], we handle the synthesis of the foreground and the background separately. However, instead of conducting pose transfer given only one source image and one target image, we introduce a multi-frame content integration mechanism with a spatio-temporal adversarial training strategy to improve the realism of the results in both the spatial and the temporal aspects. Specifically, as shown in Fig. 2, our person video motion transfer network contains four main components: a multi-frame preliminary feature extraction module, a multi-frame foreground refinement module, a multi-frame background completion module, and a final composition module. We solve the person video motion transfer problem frame by frame. For each time step t, our transfer network takes N source frames, the corresponding source poses and the target pose as input, and generates an output frame.

The N source frames are randomly chosen from the source video, and fixed thereafter when dealing with the same source video. In this paper, we mainly focus on demonstrating that exploiting multiple source frames sampled in a simple random manner already brings a significant improvement; a more sophisticated strategy for choosing the keyframes is left for future work. Based on a single-frame baseline [1] for extracting preliminary features, we integrate multiple source frames under the guidance of a spatio-temporal attention map to provide more content information, which alleviates the problem of occlusions for both the foreground and the background. During training, we enforce the network to generate temporally consistent results with more realistic details by coordinating the training of a series of consecutive frames with spatio-temporal discriminators.

3.1 Multi-Frame Preliminary Feature Extraction

To extract rich information from the source video, we propose a multi-frame preliminary feature extraction module. The preliminary features will be combined by a multi-frame content integration strategy in the subsequent stages. We employ a pretrained single-frame pose transfer model [1] to extract preliminary features for each source frame individually, as shown in Fig. 3. We adopt [1] as the feature extractor for the impressive visual quality of its generated results and its capability of handling the foreground and background features separately. For each source frame, we extract the foreground features F_k and the background features B_k (k = 1, ..., N) at the second-to-last layer of the single-frame model as the preliminary features. The N source frames are randomly chosen from the source video. In our experiment, we set N = 4 to balance the computational cost and the generation quality. Both F_k and B_k are feature maps of size C × H × W. We choose the features in the second-to-last layer as the preliminary features instead of the generated images of the single-frame model to reserve more information for the subsequent synthesis modules.

Figure 4: Multi-frame foreground refinement module. The inputs of this module include the preliminary foreground features, the source poses and the target pose. The structures of the two variants "SA3D+RB6" and "RB6+SA2D" are shown in this figure.

3.2 Multi-Frame Foreground Refinement

In the case of single-frame pose transfer, the quality of the synthetic foreground heavily relies on the choice of the source frame. For example, a source frame containing the back view of a person will generate blurry results when given a front-view target pose. The incompleteness of the information from a single view also leads to unstable synthetic results, which intensifies the temporal inconsistency in the generated video. In this paper, we propose a multi-frame foreground refinement module, which integrates the preliminary foreground features from the N source frames to generate a synthetic foreground with higher visual quality. For each time step t, the preliminary foreground features F_1, ..., F_N are fed to a feature fusion module to produce a fused feature map F. On the basis of F, a prediction module is used to generate a synthetic foreground image and a foreground mask. The prediction module is a convolutional layer with tanh as the activation function.

We investigate several feature fusion modules. The simplest and most intuitive ones add average or max pooling layers to fuse the preliminary features from different source frames. To further exploit the multi-frame information, we explore three variants based on the spatio-temporal attention mechanism. The inputs of all three variants are the same: the preliminary foreground features, the source poses and the target pose. The first variant, named "RB6", takes the concatenation of all the inputs and exploits six residual blocks to compute a spatio-temporal attention map. The spatio-temporal attention map can be regarded as N spatial attention maps A_1, ..., A_N of size H × W, one for each source frame. Then, the fused foreground feature map is calculated as F = Σ_k A_k ⊙ F_k. Here, F_k denotes the preliminary foreground feature map from the k-th source frame, A_k denotes the corresponding attention map, and ⊙ denotes element-wise multiplication. Finally, the fused feature map F is fed to a prediction module to generate a synthetic foreground image and a corresponding foreground mask.

The drawback of "RB6" is that although the attention map is computed based on spatio-temporal information, the features are fused only locally across the temporal domain. To alleviate this problem, we propose two more sophisticated variants, "SA3D+RB6" and "RB6+SA2D", as shown in Fig. 4. "SA3D+RB6" first exploits a 3D self-attention sub-module "SA3D" to fuse all features non-locally across both the spatial and temporal domains, and then an "RB6" sub-module refines the fused features. In "SA3D", a non-local attention map over all spatio-temporal positions is computed for the feature fusion, which differs from the spatial self-attention mechanism "SA2D" proposed in [30]. "RB6+SA2D" first exploits "RB6" to fuse the features locally across the temporal domain, and then "SA2D" fuses the features non-locally across the spatial domain. Through experiments, we find that by decomposing the spatio-temporal non-local fusion into two stages, "RB6+SA2D" alleviates the computational burden and achieves performance similar to "SA3D+RB6".
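The attention-guided fusion described above can be sketched in a few lines of NumPy. Normalizing the per-frame attention maps with a softmax over the frame axis is our assumption; the text only states that the spatial attention maps weight the per-frame features.

```python
import numpy as np

def fuse_features(feats, attn_logits):
    """Fuse per-frame feature maps with a spatio-temporal attention map.

    feats:       (N, C, H, W) preliminary foreground features, one per source frame.
    attn_logits: (N, H, W) unnormalized attention scores (e.g. from "RB6").
    """
    # Softmax over the N source frames, so the weights sum to 1 at each pixel
    # (our assumed normalization; subtract the max for numerical stability).
    a = np.exp(attn_logits - attn_logits.max(axis=0, keepdims=True))
    a = a / a.sum(axis=0, keepdims=True)
    # Weighted sum over frames: F = sum_k A_k * F_k, broadcast over channels.
    return (a[:, None] * feats).sum(axis=0)  # (C, H, W) fused feature map
```

With uniform attention this reduces to an average pooling of the per-frame features, which matches the simplest baseline fusion.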

3.3 Multi-Frame Background Completion.

Similar to the foreground synthesis, the background completion task can also benefit from multiple source frames. Since the pose of the person varies from frame to frame, the background area occluded in one frame may be observed in another. We integrate the multi-frame information to generate a more accurate background completion result. Specifically, the multi-frame background completion module takes the concatenated preliminary background features B_1, ..., B_N, the source poses and the target pose as input. A fused background feature map B is then calculated in the same way as in Section 3.2. Finally, B is fed to a prediction module to generate a synthetic background image.

3.4 Final Composition.

For each time step t, given the synthetic foreground image F_t, the synthetic background image B_t and the foreground mask M_t, the final composited frame is x_t = M_t ⊙ F_t + (1 − M_t) ⊙ B_t, where ⊙ denotes element-wise multiplication.
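The composition step amounts to soft alpha blending of the two branch outputs; a minimal sketch:

```python
import numpy as np

def composite(fg, bg, mask):
    """Alpha-composite the synthetic foreground over the synthetic background.

    mask holds the predicted soft foreground mask with per-pixel values in
    [0, 1]; where it is 1 the foreground shows through, where 0 the background.
    """
    return mask * fg + (1.0 - mask) * bg
```

Because the mask is soft, gradients flow to both branches during training, letting the foreground and background modules be optimized jointly.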

3.5 Spatio-Temporal Adversarial Training

To train the transfer network to generate photo-realistic and temporally consistent results, we adopt a multi-frame coordinated training strategy with the help of spatio-temporal adversarial losses. During training, given N randomly chosen source frames and a sequence of consecutive target frames, the transfer network generates the corresponding output frames one by one. On one hand, we encourage each generated frame to be photo-realistic and to contain the source person performing the target pose. On the other hand, we encourage the generated consecutive frames to resemble the temporal dynamics of the consecutive target frames. These two goals are achieved by a combination of a content loss, a spatial adversarial loss and a multi-range temporal adversarial loss.

Content Loss.

To enable supervised training, we use the same video to provide the source frames and the target frames during the training phase, making sure that the source frames and target frames have no overlap. After training, for an arbitrary source video, we can choose an arbitrary target video to provide the target pose sequence. Under the supervised training setting, a generated frame should be equal to the corresponding target frame. Thus, the most straightforward loss is the mean square error (MSE) between the generated frames and the ground-truth frames. However, this loss alone tends to produce blurry results, because the generator learns to agree with as many solutions as possible and finally settles on an average solution. To add more details, a perceptual loss suggested in [9, 12] is also adopted, which penalizes the distance between φ(x_t) and φ(x̂_t). Here, φ denotes the features extracted by a pretrained VGG19 model [22]; in our experiment, we use the features at the conv1_1, conv2_1, conv3_1 and conv4_1 layers. The perceptual loss encourages the generated frame to be close to the ground-truth frame in the semantic feature domain defined by the pretrained VGG network, which enhances the perceptual similarity.
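A sketch of the two content-loss terms. The `feature_fns` argument is a stand-in of ours for the VGG19 conv1_1–conv4_1 activations, and the L1 distance on features is an assumption; the source does not state the exact norm.

```python
import numpy as np

def mse_loss(gen, gt):
    # Pixel-wise mean square error between a generated and a ground-truth frame.
    return float(np.mean((gen - gt) ** 2))

def perceptual_loss(gen, gt, feature_fns):
    # feature_fns stands in for the pretrained VGG19 feature extractors
    # (conv1_1 .. conv4_1); here it is any list of callables image -> features.
    # L1 distance on features is our assumption, not the paper's stated norm.
    return float(sum(np.mean(np.abs(f(gen) - f(gt))) for f in feature_fns))
```

In practice `feature_fns` would wrap a frozen VGG19; the losses are summed over all frames of the generated clip.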

Spatial Adversarial Loss.

To encourage each generated frame to contain more photo-realistic details, we introduce a spatial adversarial loss. A single-frame conditional discriminator D_S is trained to distinguish the generated frame x̂_t from the ground-truth frame x_t. We use LSGAN [16] and PatchGAN [8] for stable training:

L_spa = E[(D_S(x_t, p_t) − 1)²] + E[D_S(x̂_t, p_t)²],    (1)

where p_t denotes the conditioning target pose.

Multi-Range Temporal Adversarial Loss.

Besides the spatial adversarial loss, we also introduce a multi-range temporal adversarial loss to encourage the generated video to resemble the temporal dynamics of a real video. Instead of using only one fixed-range temporal discriminator as in [26], multiple temporal discriminators are trained to verify the temporal consistency over different time ranges. The multi-range temporal adversarial loss is defined as:

L_tmp = Σ_n { E[(D_T^n(X_n, W_n) − 1)²] + E[D_T^n(X̂_n, W_n)²] }.    (2)

Here, W denotes the optical flow sequence calculated by FlowNet2 [7], which contains the optical flows between every pair of adjacent frames, and W_n denotes the flows within an n-frame window. D_T^n denotes a temporal discriminator, which takes every n consecutive frames X_n (or their generated counterparts X̂_n) and the optical flows between them as input, and learns to distinguish the generated consecutive frames from the ground-truth ones.
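The multi-range discriminator objective can be sketched as below. The LSGAN form follows the description in the text, while the concrete time ranges and the omission of the optical-flow inputs are simplifications of ours.

```python
import numpy as np

def multi_range_temporal_loss(real_frames, fake_frames, discriminators):
    """LSGAN-style multi-range temporal loss (discriminator side).

    discriminators maps a time range n to a callable that scores a clip of
    n consecutive frames (optical-flow inputs omitted here for brevity).
    LSGAN targets: real clips -> 1, generated clips -> 0.
    """
    loss = 0.0
    for n, d in discriminators.items():
        loss += float(np.mean((d(real_frames[:n]) - 1.0) ** 2))  # real -> 1
        loss += float(np.mean(d(fake_frames[:n]) ** 2))          # fake -> 0
    return loss
```

Each range-n discriminator sees a different temporal window, so short-range flicker and longer-range drift are penalized by different terms of the sum.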

Total Loss.

Finally, the total training loss is defined as:

L_total = L_mse + λ_vgg L_vgg + λ_spa L_spa + λ_tmp L_tmp,    (3)

where the λ terms balance the loss components. We aim to solve the minimax problem min_G max_{D_S, D_T} L_total, where D_T represents the collection of the temporal discriminators over different time ranges. This objective function is optimized by alternately updating the discriminators and the generator.

Figure 5: Results on the Cross-Video subset. The results of the single-frame method [1] are blurry. The max pooling variant introduces some sharp details but its colors are strange. “SA2D” and “SA3D” achieve the best overall performance.
Figure 6: Results on the Same-Video subset. The results of the single-frame method [1] are blurry. The max pooling variant introduces some sharp details but its colors are strange. "SA2D" and "SA3D" achieve the best overall performance.
Figure 7: Intermediate results of multi-frame content integration. The 1st row is the foreground result, the 2nd row is the background result. The attention maps indicate the “comfort zones” of different source frames.
Figure 8: Examples of background substitution. Motions 1 and 2 come from the 1st and 3rd target persons, respectively. Person A can perform actions like person B in front of different backgrounds.

4 Experiment

Dataset.

We construct a novel video dataset, Dance-500, for training and evaluating models for general person video motion transfer. We collect 163 dance videos from YouTube, which contain either different individuals or the same individual in different clothes. Each video presents a different dance, and both the camera and the background are static most of the time. The videos include both the front and back sides of a person, fast and large motions, and various foreground and background appearances. Each video is randomly cut into 3 to 4 clips lasting 10 seconds each. In total, we obtain about 500 clips with more than 125,000 frames, of which 400 clips are used for training, 50 for validation, and 50 for testing. The datasets used in [1, 23] only cover simple action categories, such as golf, yoga, tennis, Tai-Chi, jumping-jack and waving-hands, whose motions are monotonous and predictable. Compared to these datasets, our proposed Dance-500 contains a larger number of video clips with complex and unpredictable motions in wild scenes, providing rich information for training and challenging examples for testing. Some cross-dataset testing results are provided in the supplementary material to demonstrate the superiority of the proposed Dance-500.

Implementation Details.

In the preliminary feature extraction module, the single-frame model follows the network architecture of [1]. The structure of the self-attention sub-module is similar to [30]. The spatial and temporal discriminators consist of six residual blocks following [26]. Please refer to the supplementary material for the detailed architectures of our networks. We adopt the Adam optimizer [10] for training. For preliminary feature extraction, we pretrain the single-frame pose transfer model using the MSE loss and a single-frame GAN loss for 200,000 iterations with a batch size of 4; the single-frame model is fixed thereafter. Then, we train our full model for 35,000 iterations with a batch size of 2, where each sample in the batch consists of 8 consecutive target frames and 4 random source frames. The snapshot with the best validation performance is recorded as the training goes on. Each frame is cropped with the person in the center. The loss weights are set empirically in our experiment.

Losses               VFID    PSNR
MSE                  10.25   22.10
MSE + VGG            9.50    22.83
MSE + VGG + …        7.99    22.79
MSE + VGG + … + …    8.09    23.07
MSE + VGG + … + …    7.93    22.96
MSE + VGG + … + …    8.06    22.83
MSE + VGG + … + …    7.47    21.37
MSE + VGG + … + …    7.74    23.25
MSE + VGG + … + …    7.57    23.20
Table 1: The quantitative results on the Same-Video subset.
Methods                Single-Frame   Ours    Not Sure
Overall Fidelity       17%            77.5%   5.5%
Temporal Consistency   18.5%          65%     16.5%
Table 2: Human Preference Score.

4.1 Quantitative Comparisons to Existing Methods

The VFID [26] and PSNR scores are adopted for quantitative comparisons. To compute the VFID score, we first extract video features with a pretrained video classification network, I3D [2]. Then, the mean μ and the covariance matrix Σ of the feature vectors are computed over all videos in a dataset. Finally, the VFID score is calculated as the Fréchet distance ‖μ₁ − μ₂‖² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^{1/2}). The VFID score measures both the visual quality and the temporal consistency. For the Same-Video subset, the ground-truth video is equal to the target video, so we can also directly calculate the PSNR for each generated frame. For the Cross-Video subset, the appearances of the synthesized video and the target video are totally different, which means PSNR is not applicable. Nor can we compute the VFID scores, because besides the motion information, the appearance information of a video also affects the features extracted by I3D [2]. Thus, we only provide a quantitative evaluation on the Same-Video subset.
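The Fréchet distance underlying the (V)FID score can be computed in pure NumPy. Rewriting Tr((Σ₁Σ₂)^{1/2}) as Tr((Σ₁^{1/2} Σ₂ Σ₁^{1/2})^{1/2}) keeps every matrix symmetric positive semi-definite, so an eigendecomposition suffices. This is a generic sketch of the metric, not the authors' evaluation code.

```python
import numpy as np

def _sqrtm_psd(mat):
    # Square root of a symmetric positive semi-definite matrix via eigh.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu1, sigma1, mu2, sigma2):
    # ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}): the FID/VFID formula,
    # here evaluated with the PSD-symmetrized form of the cross term.
    s1_half = _sqrtm_psd(sigma1)
    covmean = _sqrtm_psd(s1_half @ sigma2 @ s1_half)
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * np.trace(covmean))
```

For VFID, `mu` and `sigma` would be the statistics of I3D features over the generated and the real video sets, respectively.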

Table 1 shows the VFID and PSNR scores of different methods on the Same-Video subset, with the top two scores under each metric highlighted. Comparing the results of "MSE" and "MSE+VGG" for the single-frame baseline, we see that introducing a VGG loss as part of the content loss in addition to the MSE loss improves both the frame-level quality and the video-level temporal dynamics. Comparing "MSE+VGG" with the different "MSE+VGG+Fusion" variants, we observe a significant improvement of the VFID score, which indicates that the multi-frame content integration mechanism greatly benefits the video-level perceptual quality. Comparing "RB6" with its counterpart trained with the multi-range temporal discriminator, we see that consistent improvements under both metrics can be achieved by introducing the multi-range temporal discriminator. Comparing the different temporal discriminator settings, we find that a slightly better PSNR can come at the cost of sacrificing the overall video-level perceptual quality. Among the different fusion manners, "Max" shows the best VFID score but the worst PSNR, indicating that the max pooling variant introduces spurious details that make the results look realistic. The last two rows show that "SA3D+RB6" achieves the best PSNR score, while "RB6+SA2D" performs strongly under both metrics.

We also conduct a user study comparing "RB6+SA2D" to the single-frame method [1]. For each method, we generate 5 videos under the Same-Video setting and 5 videos under the Cross-Video setting. The results from the different methods were shown in a random order for a fair comparison. We asked each user to choose the preferred result according to two criteria. The first criterion concerns overall fidelity: "which video looks more realistic". The second concerns temporal consistency: "which video contains fewer flicker artifacts". In total, 20 people aged from 20 to 30 participated. We report the average human preference scores in Table 2, which shows that our method surpasses the state-of-the-art single-frame method by a large margin.

4.2 Qualitative Comparisons to Existing Methods

We adopt two testing subsets to evaluate the different methods: i) a Cross-Video subset, where the source and the target come from different videos; ii) a Same-Video subset, where the source and the target come from the same video. We randomly choose 50 pairs of videos from the testing set to construct each subset. Note that for the Same-Video subset, we ensure that the source sequence and the target sequence have no overlap. Fig. 5 and Fig. 6 show some qualitative results on the two testing subsets. Obvious blurry artifacts can be observed in the results of the single-frame baseline. The max pooling variant tends to generate strange colors in both the foreground and the background. "RB6+SA2D" and "SA3D+RB6" show the best overall performance. By integrating the content information from multiple source frames based on a spatio-temporal attention mechanism, our background completion results are more accurate, while the foregrounds preserve more details. Please refer to the supplementary video for a clearer observation.

To further investigate the multi-frame content integration mechanism, we visualize some intermediate results of the "RB6+SA2D" variant in Fig. 7. Here, only the attention maps generated by the "RB6" sub-module are shown due to limited space. Obvious artifacts can be observed in the single-frame pose transfer results: each single-frame result may produce appealing details in one area but blurry textures in another. Our proposed method learns to compute a spatio-temporal attention map that locates the "comfort zones" of each source frame and guides the feature fusion process, producing a synthetic foreground and background with more accurate details.

4.3 Background Substitution

As our method conducts the foreground refinement and the background completion in two separate branches, with a slight modification our pipeline can also accomplish motion transfer and background substitution simultaneously. Specifically, we apply the background completion branch to a third video to provide a new background, while the foreground branch still transfers the target pose to the source person. Fig. 8 shows some background substitution results. Please refer to the supplementary video for more results.

5 Conclusion

In this paper, we have proposed a person video motion transfer approach that leverages a multi-frame source content integration mechanism and a spatio-temporal adversarial training procedure, encouraging the transfer network to generate temporally consistent videos with photo-realistic details. Extensive experiments on the proposed challenging Dance-500 dataset demonstrate that our approach outperforms previous methods in terms of both single-frame quality and video temporal consistency.

References

  • [1] G. Balakrishnan, A. Zhao, A. V. Dalca, F. Durand, and J. Guttag (2018) Synthesizing images of humans in unseen poses. In CVPR.
  • [2] J. Carreira and A. Zisserman (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In CVPR.
  • [3] C. Chan, S. Ginosar, T. Zhou, and A. A. Efros (2018) Everybody dance now. arXiv preprint arXiv:1808.07371.
  • [4] H. Fang, S. Xie, Y. Tai, and C. Lu (2017) RMPE: regional multi-person pose estimation. In ICCV.
  • [5] C. Feichtenhofer, A. Pinz, and A. Zisserman (2016) Convolutional two-stream network fusion for video action recognition. In CVPR.
  • [6] R. A. Güler, N. Neverova, and I. Kokkinos (2018) DensePose: dense human pose estimation in the wild. In CVPR.
  • [7] E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox (2017) FlowNet 2.0: evolution of optical flow estimation with deep networks. In CVPR.
  • [8] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In CVPR.
  • [9] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In ECCV.
  • [10] D. P. Kingma and J. Ba (2015) Adam: a method for stochastic optimization. In ICLR.
  • [11] M. Kocabas, S. Karagoz, and E. Akbas (2018) MultiPoseNet: fast multi-person pose estimation using pose residual network. In ECCV.
  • [12] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In CVPR.
  • [13] J. Liu, B. Ni, Y. Yan, P. Zhou, S. Cheng, and J. Hu (2018) Pose transferrable person re-identification. In CVPR.
  • [14] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool (2017) Pose guided person image generation. In NeurIPS.
  • [15] L. Ma, Q. Sun, S. Georgoulis, L. Van Gool, B. Schiele, and M. Fritz (2018) Disentangled person image generation. In CVPR.
  • [16] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley (2017) Least squares generative adversarial networks. In ICCV.
  • [17] N. Neverova, R. Alp Güler, and I. Kokkinos (2018) Dense pose transfer. In ECCV.
  • [18] A. Newell, Z. Huang, and J. Deng (2017) Associative embedding: end-to-end learning for joint detection and grouping. In NeurIPS.
  • [19] A. Pumarola, A. Agudo, A. Sanfeliu, and F. Moreno-Noguer (2018) Unsupervised person image synthesis in arbitrary poses. In CVPR.
  • [20] A. Siarohin, E. Sangineto, S. Lathuilière, and N. Sebe (2018) Deformable GANs for pose-based human image generation. In CVPR.
  • [21] K. Simonyan and A. Zisserman (2014) Two-stream convolutional networks for action recognition in videos. In NeurIPS.
  • [22] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. In ICLR.
  • [23] S. Tulyakov, M. Liu, X. Yang, and J. Kautz (2018) MoCoGAN: decomposing motion and content for video generation. In CVPR.
  • [24] R. Villegas, J. Yang, S. Hong, X. Lin, and H. Lee (2017) Decomposing motion and content for natural video sequence prediction. In ICLR.
  • [25] R. Villegas, J. Yang, Y. Zou, S. Sohn, X. Lin, and H. Lee (2017) Learning to generate long-term future via hierarchical prediction. In ICML.
  • [26] T. Wang, M. Liu, J. Zhu, G. Liu, A. Tao, J. Kautz, and B. Catanzaro (2018) Video-to-video synthesis. In NeurIPS.
  • [27] L. Wei, S. Zhang, W. Gao, and Q. Tian (2018) Person transfer GAN to bridge domain gap for person re-identification. In CVPR.
  • [28] Y. Xiu, J. Li, H. Wang, Y. Fang, and C. Lu (2018) Pose Flow: efficient online pose tracking. In BMVC.
  • [29] C. Yang, Z. Wang, X. Zhu, C. Huang, J. Shi, and D. Lin (2018) Pose guided human video generation. In ECCV.
  • [30] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena (2018) Self-attention generative adversarial networks. arXiv preprint arXiv:1805.08318.
  • [31] L. Zhao, X. Peng, Y. Tian, M. Kapadia, and D. Metaxas (2018) Learning to forecast and refine residual motion for image-to-video generation. In ECCV.