generate realistic colors and patterns, as the pixels are explicitly copied from given frames. However, for challenging scenes with heavy occlusion, complex deformation, or fast motion, flow-based interpolation suffers from inaccuracy in optical flow estimation. To compensate for optical flow error, one can add another network to refine the interpolation results, at the cost of much higher computation. Alternatively, kernel-based interpolation approaches seek to learn a pixel-level transformation from the input images to the target frame without requiring a precise per-pixel flow estimate. The size of the blending kernels in these methods directly restricts the motion that the network is able to capture. To allow larger motion, a big () kernel is used in , which results in heavy memory and computation usage.
Generative Adversarial Networks can hallucinate pixels on occluded objects or sharpen blurry images. However, without accurate motion estimation, such generative models are susceptible to mode collapse, resulting in a severe over-fitting issue: when an object is blurry, the model favors removing the object altogether.
We propose a lightweight video synthesis framework that combines the benefits of each of the approaches above. Our system consists of a two-stage interpolation network: a cascade design with a flow-based module followed by a kernel-based module. In the first module, we learn a bi-directional optical flow estimate and a sampling map that captures figure-ground (dis)occlusion. This flow-based interpolation module can deal with large motion displacement and generate a roughly correct interpolated image by flow-based warping.
An image synthesis module then combines the warped image and its CNN feature map to make fine-scale corrections. Our cascade network design substantially reduces the computational cost at the inference stage, as it requires neither 1) the large network size needed to estimate accurate optical flow, nor 2) the large kernel sizes needed to preserve clean boundaries and capture large motion.
We use a combination of a synthesis loss and an adversarial loss to train the network. Image-level supervision has a tendency to remove object details, particularly when the optical flow is blurry. We propose a region-of-interest (RoI) discriminator to focus our system on the fine details of individual moving objects.
Simply zooming in is not sufficient: if the ‘ground-truth’ reference images also lack details due to low resolution or motion blur, the network receives insufficient feedback on how to correct its mistakes.
Our key observation is that a video often contains similar objects that appear at high resolution with greater detail. This allows the algorithm to learn not just from the current reference frame, but also from semantically similar objects at higher resolution. This design allows our network to leverage instance-level attention in learning, and thus perform better in challenging scenes with large displacement motion and fine-grained details.
2 Related Work
Optical flow estimation is a basic building block for video frame interpolation , and frame interpolation has been used to evaluate the accuracy of optical flow estimation . With the rapidly improving quality of optical flow estimation, state-of-the-art optical flow methods  can serve as a strong baseline for video interpolation. However, the drawbacks of flow-based video interpolation include 1) artifacts around object boundaries due to a lack of occlusion reasoning, and 2) the lack of end-to-end trainability.
To correct this problem, Liu et al.  develop a network to extract a per-pixel 3D optical flow vector across space and time in the input video. The intermediate image is generated by trilinear interpolation across the input video volume. The method obtains high-quality frame interpolation results, and their unsupervised flow estimation results are also promising compared to the state of the art. Notably, however, the deep voxel flow method tends to fail when the scene contains repetitive patterns.
Similarly, a recent work by Jiang et al.  interpolates frames at an arbitrary time step between two frames. They adopt the U-Net architecture  for bidirectional flow and visibility mask estimation, followed by a flow refinement network.
Niklaus et al.  warp both the input frames and pixel-wise contextual information extracted from ResNet-18, and employ a synthesis network with a GridNet  architecture to generate the interpolated frame.
Alternatively, several methods interpolate frames without explicit motion estimation . Niklaus et al.  combine motion information and pixel synthesis in one step of local convolution over the two input frames. They employ a network to estimate a spatially-adaptive convolution kernel for each pixel. Although this method enables high-quality video frame interpolation, it is difficult to estimate all the kernels at once and the interpolation is very memory intensive. Niklaus et al.  improve the efficiency by approximating a 2D kernel with a pair of 1D kernels. This relieves the intensive memory requirement, but the fundamental limitation remains: the ability to capture large motion and the flexibility in frame resolution are still bounded by the kernel size, which can be prohibitively expensive to increase. Phase-based methods encode the motion in the phase shift between input frames. A recent work on video interpolation by Meyer et al.  propagates predicted phase information across oriented multi-scale pyramid levels to cope with large motions.
A related problem is video frame extrapolation. Earlier approaches focused on developing latent variable models that can represent the inherent underlying uncertainty in prediction. Mathieu et al.  develop a multi-scale conditional GAN architecture to improve the prediction; however, it still suffers from blurriness and contains artifacts for large motion. Vondrick et al.  use a network with two independent streams: a moving foreground and a static background. A recent work by Lee et al.  uses a stochastic video prediction model, based on VAE-GANs, which combines VAE-based latent variable models with an adversarial loss.
Several authors have developed methods that seek to learn a transformation from past pixels to the future. Vondrick et al.  untangle the memory of the past from the prediction of the future by learning to predict sampling kernels. The work of  can be extended to video extrapolation since it learns offset vectors for sampling. Reda et al.  combine the merits of both flow-based and kernel-based approaches by learning a model that predicts a motion vector and a kernel for each pixel.
We first introduce our forward image interpolation model: the flow estimation module in Sec.3.1 and the image synthesis module in Sec.3.2. We discuss our region-of-interest (ROI) discriminator in Sec.3.3, which is used to facilitate training our synthesis model. In Sec.3.4 we describe the overall losses used to train our network. Finally, we give our training details in Sec.3.5.
3.1 Coarse optical flow estimation
To compensate for large displacement motion, we first estimate coarse optical flow to generate an initial interpolated frame given two consecutive video frames and . We use a U-Net-like network to estimate bidirectional optical flows and , which can be used to warp and respectively to the designated time . In the meantime, our network also predicts a per-pixel weighting mask to blend the two warped images into one. The blending mask can be seen as a confidence mask and is designed to deal with occlusion. Meanwhile, inspired by , we employ a pre-trained feature extractor to extract high-level features on both and , denoted as and respectively. Empirically, flow-based methods perform satisfactorily over large areas but often fail to cope with fine-grained details and complex motions. Thus our flow estimation module only serves as an initial step for the video interpolation task.
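The warping-and-blending step described above can be sketched as follows. This is a minimal NumPy illustration with hypothetical names (`bilinear_warp`, `blend`), not the actual network code; it operates on single-channel images for brevity and clamps samples at the image border.

```python
import numpy as np

def bilinear_warp(img, flow):
    """Backward-warp `img` (H, W) by a per-pixel flow field (H, W, 2).

    For each target pixel p we sample img at p + flow(p) with bilinear
    interpolation, clamping the sample location to the image border.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

def blend(i0, i1, f_t0, f_t1, mask):
    """Fuse the two flow-warped frames with a per-pixel confidence mask."""
    return mask * bilinear_warp(i0, f_t0) + (1 - mask) * bilinear_warp(i1, f_t1)
```

With zero flow the warp is the identity, and with a constant mask of 0.5 the blend is a simple average of the two warped frames.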
3.2 Image synthesis module
We perform both pixel-level and semantic feature-level warping, as shown in Fig.2. In detail, we feed the images , , the corresponding deep feature maps , the flows , and the mask into the later module for further refinement. In the image synthesis module, we use the estimated bi-directional flows , and the blending mask to warp both images and features to time by bi-linear interpolation.
Here, the function is the warping function, which takes a warping map to warp an image tensor to , and the operator denotes element-wise multiplication. We then concatenate the warped features and images and feed them into the image synthesis layer. Different from , in which the authors used a giant GridNet to refine the image, we simply use three convolutional layers with kernel size 9 to approximate a large receptive field. We will show that this approximation is sufficient to achieve good performance with our proposed instance-level adversarial loss.
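As a quick sanity check on this design choice, the receptive field of a stack of stride-1 convolutions follows the standard formula rf = 1 + Σ (kᵢ − 1); the small helper below (a hypothetical name, not from the paper) confirms that three 9×9 layers cover a 25×25 neighborhood.

```python
def receptive_field(kernel_sizes, strides=None):
    """Receptive field of stacked conv layers (stride 1 by default):
    rf grows by (k - 1) * jump per layer, where jump is the cumulative stride."""
    strides = strides or [1] * len(kernel_sizes)
    rf, jump = 1, 1
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump
        jump *= s
    return rf

# Three stride-1 convolutions with kernel size 9: 1 + 3 * 8 = 25.
```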
3.3 Instance-level Discriminator
Directly using the flow-guided warped image as the synthesized image has two problems: (a) as the optical flow is trained on the whole image, twisted and blurry boundaries are likely to appear in image , as shown in Fig.1; (b) the bi-linear interpolation result is less realistic compared with the original target image . We employ adversarial learning  to solve these problems. Adversarial learning has shown promising performance in static image generation, activity prediction, and other computer vision problems. In this paper, we explore two algorithmic variations on the video interpolation problem: (a) applying adversarial learning directly on the whole image, and (b) applying adversarial learning conditioned on the instance area, as shown in Fig.3. Directly employing adversarial learning on the whole image makes the generated more realistic compared with the real image ; however, since the majority of the image is usually background, the adversarial loss of Eq.3.4 will be biased toward these areas, which hurts the optimization of foreground areas (see Fig.1, rows 2 and 3). To make the model focus more on the foreground, we further employ an instance-level discriminator. The instance-centered learning forces the model to pay more attention to instances, especially small-scale objects.
Here we introduce instance-level adversarial learning in more detail. Given the real image , we use a region proposal method to generate several regions of interest (ROIs), and the ROIs on and are fed into a bi-linear interpolation reshaping layer and reshaped into patches with a fixed size of . The bi-linear interpolation reshaping layer achieves two effects: a) through bi-linear interpolation, the estimated loss can be back-propagated to the exact pixel location and to previous modules, so the whole network can be updated end-to-end; b) the reshaping operation naturally realizes a zoom-in effect, balancing the network’s focus between near and far, large and small objects. The reshaped ROI results for different objects are illustrated in Fig.1. The number of generated ROIs varies across images, which makes it hard to construct data batches to feed into the network. To solve this issue, a fixed number of region proposals is assigned to each image, and empty proposal slots in each image are filled with zeros. An extra count recording the number of valid ROIs in each image is fed into the network as well, to guarantee that only valid ROIs are processed in the later discrimination step. A discriminator with spectral normalization  is employed to examine only the specific ROIs instead of the whole image. The details of the adversarial loss computation are described in the next section.
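The fixed-slot batching described above can be sketched as follows; `pad_rois` is a hypothetical helper, and the 64×64 patch size is an assumption for illustration (the paper's fixed size is elided in the text).

```python
import numpy as np

def pad_rois(rois, max_rois, roi_size=64):
    """Pad a variable-length list of (roi_size, roi_size) patches to a
    fixed-size batch entry, filling unused slots with zeros, and return
    the number of valid ROIs so the discriminator can skip the padding."""
    n = min(len(rois), max_rois)
    batch = np.zeros((max_rois, roi_size, roi_size), dtype=np.float64)
    for i in range(n):
        batch[i] = rois[i]
    return batch, n
```

The returned count lets the loss computation mask out the zero-filled slots, so padding never contributes gradient.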
3.4 Training Objectives
We use two losses to constrain the network: a synthesis loss and an adversarial loss .
Synthesis Loss For the synthesis loss, we first compute the robust norm on the per-pixel color difference, which is commonly used in recent self-supervised optical flow estimation work. Additionally, we penalize the difference of the per-pixel first-order color gradients, which further improves the reconstruction quality. The overall photometric loss is computed as
where is the robust norm, also known as the Charbonnier norm.
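A minimal sketch of this photometric term, assuming the common Charbonnier form sqrt(x² + ε²) with a small ε and an unweighted gradient term (the paper's exact weights are not given):

```python
import numpy as np

def charbonnier(x, eps=1e-3):
    """Robust (Charbonnier) norm: sqrt(x^2 + eps^2), a smooth |x|."""
    return np.sqrt(x * x + eps * eps)

def photometric_loss(pred, target, grad_weight=1.0):
    """Charbonnier penalty on the per-pixel color difference plus
    Charbonnier penalties on the first-order gradient differences."""
    color = charbonnier(pred - target).mean()
    gx = charbonnier(np.diff(pred, axis=1) - np.diff(target, axis=1)).mean()
    gy = charbonnier(np.diff(pred, axis=0) - np.diff(target, axis=0)).mean()
    return color + grad_weight * (gx + gy)
```

For identical images the loss reduces to the ε floor; any color or gradient mismatch increases it smoothly, which keeps gradients well behaved near zero error.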
The second term of the synthesis loss is a perceptual loss. It quantifies the reconstruction quality of the network’s higher-level features and thus produces more visually plausible interpolation results. Our results show that the perceptual loss enables the network to learn to reconstruct crisp images. The perceptual loss is defined as follows
Here, is the feature extraction function; in our work we use the latent-space features from the RPN in Faster R-CNN. We apply the photometric loss and perceptual loss on both the initial interpolated image and the synthesized image to stabilize the training of the flow estimation network. We also constrain the first-order gradients of the bi-directional optical flow and the corresponding blending mask to be locally smooth, resulting in a smoothness loss .
The above loss functions mainly guide the initial synthesis of our network and we group them as the synthesis loss,
Adversarial Loss Importantly, in order to deal with complex scenarios and enlarge model capacity, we utilize another network to discriminate synthesized images. The adversarial loss consists of two parts, namely the generator loss and the discriminator loss. As illustrated in Fig.3, a typical practice is to compute adversarial losses on the whole image, which leads to image-level adversarial training. This training provides a uniform gradient across the whole image, such that semantic details and fine-grained differences are ignored. In contrast, we propose instance-level adversarial learning. For each synthesized and ground truth image pair , we use a region proposal method to propose N region proposals and resize them into patches, resulting in ROI pairs. As shown in Fig.3, if we have access to high-resolution images during training, we crop the ROIs from the high-resolution images and use them to guide the synthesis of low-resolution results. The refers to a pair of synthesized and ground truth ROIs, where . The discriminator examines each one of them, and the adversarial losses are formulated as:
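As an illustration, the standard (non-saturating) GAN formulation evaluated per ROI pair could look like the sketch below. This is an assumption about the exact loss form, which the surrounding text leaves implicit, and the discriminator outputs are taken to be probabilities in [0, 1].

```python
import numpy as np

def roi_adversarial_losses(d_real, d_fake, eps=1e-8):
    """Standard (non-saturating) GAN losses averaged over N ROI pairs.

    `d_real`, `d_fake` are the discriminator's probabilities that each
    real / synthesized ROI patch is real; `eps` avoids log(0).
    """
    d_real = np.asarray(d_real, dtype=np.float64)
    d_fake = np.asarray(d_fake, dtype=np.float64)
    # Discriminator: score real ROIs high and synthesized ROIs low.
    d_loss = -(np.log(d_real + eps) + np.log(1 - d_fake + eps)).mean()
    # Generator: push the discriminator to rate synthesized ROIs as real.
    g_loss = -np.log(d_fake + eps).mean()
    return d_loss, g_loss
```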
The network is trained on a mixture of the UCF101 and Cityscapes datasets. We randomly pick four triplets from every video clip of UCF101 and one triplet from every sequence of the Cityscapes training set, which gives us 3198 triplets in all. In practice, as our proposed network is self-contained and does not need labels, any collection of video clips is sufficient to train our network. We keep the original UCF101 image size and downsample Cityscapes images to for training inputs. Notably, we use the high-resolution version of the Cityscapes images to supervise adversarial learning. Forming high-resolution and low-resolution training pairs is the key to our learning algorithm. During training, we select triplets with a duration of 80 ms and randomly crop a region of the input triplets for training. We also randomly flip images for data augmentation. The Adam optimizer  with and
is used with an initial learning rate of 0.0001, which is decayed exponentially by a factor of 0.1 every 10 epochs and clipped at 1e-8 during training. The weights for the different losses are . We trained a model with batch size 2 on an 11GB NVIDIA GeForce 1080 Ti GPU for around 12 hours.
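Read as a step schedule, the learning-rate rule above can be sketched as follows. `learning_rate` is a hypothetical helper, and the "every 10 epochs" reading of the decay interval is our interpretation of the text.

```python
def learning_rate(epoch, base_lr=1e-4, gamma=0.1, step=10, floor=1e-8):
    """Learning rate decayed by `gamma` every `step` epochs, clipped at `floor`."""
    return max(base_lr * gamma ** (epoch // step), floor)
```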
[Figure 5: qualitative comparison — Ground Truth, Ours, Deep Voxel Flow, Separable Convolution]
[Figure 6: qualitative comparison — Ground Truth, Ground Truth Enlarged, Ours, Deep Voxel Flow, Separable Convolution, Super SloMo]
To evaluate our method, we quantitatively and qualitatively compare it with several state-of-the-art video frame interpolation methods. DVF is a classic method that employs flow warping for video interpolation. We also select SepConv, which is based on adaptive separable convolutions. SuperSloMo is the current state-of-the-art method for video interpolation. We compare the algorithms on two different datasets, Cityscapes  and UCF101 . The Cityscapes dataset contains objects of many different sizes, such as cars, people, and traffic lights, at both near and far distances, which makes it well suited to differentiating the algorithms’ interpolation ability on small objects. UCF101 contains human activities such as boating and applying makeup, which makes it well suited to evaluating non-rigid body motion. In ablation experiments, we compare several variations of our proposed approach: the basic flow-warping structure, adversarial learning on the entire image, and the instance-level adversarial learning structure.
4.1 Ablation studies
Baseline. In this section, we evaluate the effectiveness of the different modules in our proposed system for video interpolation and show both quantitative and qualitative results on the Cityscapes dataset. We refer to Ours as our network trained with the proposed ROI adversarial loss, Ours as the model trained with the adversarial loss on the overall image, and Ours as the model trained without any adversarial loss. Note that without a discriminator in the training stage, our model mostly degenerates to deep voxel flow. For all studies in this section, we trained the networks on a mixed dataset consisting of Cityscapes and UCF101 for 300k iterations and report the interpolation error (IE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM).
Adversarial training. We verify the advantages of using adversarial learning to improve video interpolation performance. Experiments on both datasets show that training with the adversarial loss yields sharper boundaries in the images. In Tables 4 and 6, we find that our model with the adversarial loss consistently outperforms the baseline model. In Fig.1, we show an example of the effectiveness of the adversarial loss: from the zoomed-in figures, we can see the adversarial loss helps preserve edges. This can be attributed to the adversarial loss better facilitating the image synthesis module’s learning and potentially correcting inaccurate optical flow estimates.
Interestingly, we also find that training the model with the adversarial loss on the whole image can sometimes lead to a local-minimum solution. This phenomenon is especially noticeable when testing on the Cityscapes dataset: the network tends to erase uncertain objects in the scene and recover the background. This is because the discriminator during training is much stronger than the generator, and the data distribution is dominated by rigid objects and background, leading to a biased learning result. Our proposed ROI discriminator, discussed next, can potentially fix this issue.
ROI discriminator. We further verify the advantages of introducing a focus mechanism into adversarial training, which greatly improves video interpolation performance. Experiments show that training with the ROI zoom-in method yields sharper boundaries on small, thin objects and image details. With this adversarial loss, both rigid moving objects and non-rigid human body shapes are preserved better than with the baseline method using only the flow-warping loss, as we can see in Fig.1. Table 4 shows that the Ours method outperforms the Ours and Ours methods on all three metrics. One typical failure case for applying adversarial losses to the whole image is that it is prone to erasing large non-rigid moving body parts, as mentioned above. From Table 6, we can see that when the ROI size is close to the entire image size, the instance-level discriminator model and the entire-image-level discriminator model perform similarly. By introducing the region proposal method, we make the network focus more on instance-level semantics in the image. By formulating the video interpolation problem as perturbing semantic objects in image space, pixel-level motion estimation can be better grouped and updated.
Training with high-resolution patches. We also study the effect of training with different image resolutions. For data augmentation and training speed, researchers typically down-sample high-resolution images or crop parts of images for training. However, high-resolution images often provide fine-grained information that can improve algorithm performance. We train our proposed model with the ROI discriminator on real image patches from high-resolution images. More specifically, based on region proposals, we crop ”fake” ROIs from the synthesized images and the corresponding ”real” patches from their high-resolution counterparts, forming low-resolution/high-resolution pairs. The high-resolution patches ultimately force the generator to pay much more attention to details in the low-resolution images. Table 4 shows that training the network with high-resolution patches boosts performance beyond both the baseline model and the model using full-image adversarial training.
4.2 Quantitative evaluation
We compare our approach with state-of-the-art video interpolation methods, including separable adaptive convolution (SepConv)  and deep voxel flow (DVF) , on both the UCF101 and Cityscapes datasets. As shown in Table 4, our method achieves the best SSIM score. Table 6 shows the quantitative results on the UCF101 dataset, where we also compare with SuperSloMo . We re-run the evaluation on the images provided by  and , and on images generated from . All metrics are computed under the motion masks provided by , which highlights the capability to cope with regions of motion and occlusion. Our method achieves the highest SSIM score among the lightweight models and performs comparably to the heavy model, SuperSloMo .
4.3 Qualitative results
In this section, we present an intuitive comparison with state-of-the-art video frame interpolation methods, using multiple examples across scenes and datasets. In Fig.5, we present the comparison on the Cityscapes dataset across different street scenes under different lighting conditions. DVF clearly generates the most artifacts, such as distortion of the whole scene, unrealistic deformation of cars and buildings, and misalignment of white lines. SepConv is capable of dealing with motion within its kernel size, but it consistently produces severe blur and artifacts near the image boundary, as shown in all of our examples. Our proposed approach is particularly good at recovering fine-grained details, for example the traffic sign in the first example. It also fills in occluded regions in a natural and realistic way, such as the white lines on the road in the fourth example. Fig.6 shows a qualitative comparison on UCF101. DVF struggles to handle occlusion, as shown in the second example, even though it was trained on UCF101. SepConv frequently produces duplicate artifacts, such as split horse legs and a split pole vault pole. SuperSloMo performs well in most scenes but sometimes fails to refine small-scale details such as the chin of the boxer and the legs of running horses. Our proposed method enables reconstruction with fine-grained details and is thus capable of interpolating the most challenging fast-motion scenes, such as running horses and boxing.
Our network achieves state-of-the-art video interpolation accuracy with minimal model parameters and the fastest inference time. Only the image synthesis module is needed at inference time, which is very lightweight compared to SuperSloMo using two-stage U-Nets, SDC-Net using FlowNet2, and Context-aware Synthesis using an expensive GridNet. Our network takes 0.36s to run on a image, while SDC-Net needs 1.66s and Context-aware Synthesis needs 0.77s to run on a image. In terms of visual comparison, besides producing good interpolation results on rigid moving objects and background, our network preserves sharp instance details such as bicycle wheels and pen tips. The network also recovers non-rigid body movement remarkably well, keeping objects realistic when interpolating human faces, bodies, horse legs, etc. However, our network still has several limitations. For large non-rigid body movements, as with other state-of-the-art methods, it is still hard for the model to recover body shape during interpolation, and the interpolated objects are slightly distorted. Also, as the left figure in Fig.7 shows, cluttered scenes lead to failure: large overlapping instances and severe occlusion make it hard for the system to distinguish individual objects. Finally, larger models with mature optical flow estimation still generate better results than ours in clean and texture-rich areas, such as the ground, by taking advantage of accurate flow estimation.
We have demonstrated a lightweight video interpolation framework that retains object details in image synthesis. We use a flow estimation module to synthesize the intermediate frame, followed by a simple image synthesis module to correct detailed shape errors. The network is trained with a region-based discriminator that utilizes high-resolution image patches to supervise low-resolution ROI synthesis, constraining instances in the images to look realistic. Our proposed adversarial training strategy can be universally used as a training block to improve algorithm performance. In the future, we hope to improve the model design to compensate for some drawbacks of our model, for example by employing deformable convolutions to tackle large motions and deformations. We also hope to extend our work to the video prediction task.
-  S. Baker, D. Scharstein, J. P. Lewis, S. Roth, M. J. Black, and R. Szeliski. A database and evaluation methodology for optical flow. International Journal of Computer Vision, 92(1):1–31, Mar 2011.
-  M. Cordts, M. Omran, S. Ramos, T. Rehfeld, M. Enzweiler, R. Benenson, U. Franke, S. Roth, and B. Schiele. The cityscapes dataset for semantic urban scene understanding. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
-  A. Dosovitskiy, P. Fischer, E. Ilg, P. Hausser, C. Hazirbas, V. Golkov, P. Van Der Smagt, D. Cremers, and T. Brox. Flownet: Learning optical flow with convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 2758–2766, 2015.
-  D. Fourure, R. Emonet, E. Fromont, D. Muselet, A. Trémeau, and C. Wolf. Residual conv-deconv grid network for semantic segmentation. In Proceedings of the British Machine Vision Conference, 2017, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
-  K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
-  E. Ilg, N. Mayer, T. Saikia, M. Keuper, A. Dosovitskiy, and T. Brox. Flownet 2.0: Evolution of optical flow estimation with deep networks. In IEEE conference on computer vision and pattern recognition (CVPR), volume 2, page 6, 2017.
-  M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. In Advances in neural information processing systems, pages 2017–2025, 2015.
-  H. Jiang, D. Sun, V. Jampani, M. Yang, E. G. Learned-Miller, and J. Kautz. Super slomo: High quality estimation of multiple intermediate frames for video interpolation. CoRR, abs/1712.00080, 2017.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694–711. Springer, 2016.
-  D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
-  A. X. Lee, R. Zhang, F. Ebert, P. Abbeel, C. Finn, and S. Levine. Stochastic adversarial video prediction. arXiv preprint arXiv:1804.01523, 2018.
-  X. Liang, L. Lee, W. Dai, and E. P. Xing. Dual motion GAN for future-flow embedded video prediction. CoRR, abs/1708.00284, 2017.
-  M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. CoRR, abs/1511.05440, 2015.
-  S. Meyer, A. Djelouah, B. McWilliams, A. Sorkine-Hornung, M. H. Gross, and C. Schroers. Phasenet for video frame interpolation. CoRR, abs/1804.00884, 2018.
-  T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018.
-  S. Niklaus and F. Liu. Context-aware synthesis for video frame interpolation. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.
-  S. Niklaus, L. Mai, and F. Liu. Video frame interpolation via adaptive convolution. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  S. Niklaus, L. Mai, and F. Liu. Video frame interpolation via adaptive separable convolution. In IEEE International Conference on Computer Vision, 2017.
-  F. A. Reda, G. Liu, K. J. Shih, R. Kirby, J. Barker, D. Tarjan, A. Tao, and B. Catanzaro. Sdc-net: Video prediction using spatially-displaced convolution. In The European Conference on Computer Vision (ECCV), September 2018.
-  S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.
-  O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pages 234–241. Springer, 2015.
-  K. Soomro, A. R. Zamir, and M. Shah. UCF101: A dataset of 101 human actions classes from videos in the wild. CoRR, abs/1212.0402, 2012.
-  D. Sun, S. Roth, and M. J. Black. A quantitative analysis of current practices in optical flow estimation and the principles behind them. International Journal of Computer Vision, 106(2):115–137, 2014.
-  D. Sun, X. Yang, M.-Y. Liu, and J. Kautz. Pwc-net: Cnns for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8934–8943, 2018.
-  C. Vondrick, H. Pirsiavash, and A. Torralba. Generating videos with scene dynamics. CoRR, abs/1609.02612, 2016.
-  C. Vondrick and A. Torralba. Generating the future with adversarial transformers. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2992–3000, July 2017.
-  T.-C. Wang, J.-Y. Zhu, N. K. Kalantari, A. A. Efros, and R. Ramamoorthi. Light field video capture using a learning-based hybrid imaging system. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2017), 36(4), 2017.
-  A. Z. Zhu, L. Yuan, K. Chaney, and K. Daniilidis. Ev-flownet: Self-supervised optical flow estimation for event-based cameras. arXiv preprint arXiv:1802.06898, 2018.
-  Z. Liu, R. A. Yeh, X. Tang, Y. Liu, and A. Agarwala. Video frame synthesis using deep voxel flow. In Proceedings of International Conference on Computer Vision (ICCV), October 2017.