1 Introduction
Aspiring artists often learn their craft by following step-by-step tutorials. There are many tutorials that describe how to paint common scenes, but how does one learn to create novel pieces without such guidance? For instance, how might an artist paint a unique fantasy landscape, or mimic the striking style of Paul Cezanne? We present a new video synthesis problem: given a completed painting, can we synthesize a time lapse video depicting how an artist might have painted it?

Artistic time lapses present many challenges for video synthesis methods. There is a great deal of variation in how people create art. Suppose two artists are asked to paint the same landscape. One artist might start with the sky, while the other might start with the mountains in the distance. One might finish each object before moving onto the next, while the other might work a little at a time on each object. During the painting process, there are often few visual cues indicating where the artist will apply the next stroke. The painting process is also long, often spanning hundreds of paint strokes and dozens of minutes.
In this work, we present a solution to the painting time lapse synthesis problem. We begin by defining the problem and describing its unique challenges. We then derive a principled, learning-based model to capture a distribution of steps that a human might use to create a given painting. We introduce a training scheme that encourages the method to produce realistic changes over many time steps. We demonstrate that our model can learn to solve this task, even when trained using a small, noisy dataset of painting time lapses collected from the web. We show that human evaluators almost always prefer our method to an existing video synthesis baseline, and often find our results indistinguishable from time lapses produced by real artists.
This work presents several technical contributions:
- We demonstrate the use of a probabilistic model to capture stochastic decisions made by artists, thereby capturing a distribution of plausible ways to create a painting.
- Unlike work in future frame prediction or frame interpolation, we synthesize long-term videos spanning dozens of time steps and many real-time minutes.
- We demonstrate a model that successfully learns from painting time lapses "from the wild." This data is small and noisy, having been collected from uncontrolled environments with variable lighting, spatial resolution and video capture rates.
2 Related work
To the best of our knowledge, this is the first work that models and synthesizes distributions of videos of the past, given a single final frame. The most similar work to ours is a recent method called visual deprojection [5]. Given a single input image depicting a temporal aggregation of frames, their model captures a distribution of videos that could have produced that image. We compare our method to theirs in our experiments. Here, we review additional related literature in three main areas: video prediction, interpolation, and art synthesis.
2.1 Future frame prediction
Future video frame prediction is the problem of predicting the next frame or few frames of a video, given a sequence of past frames. Early work in this area focused on predicting motion trajectories [8, 16, 34, 50, 54] or synthesizing motions in small frames [40, 41, 49]. Recent methods train convolutional neural networks on large video datasets to synthesize videos of natural scenes and human actions [35, 38, 45, 51, 52]. Zhou et al. synthesize time lapse videos, but output only a few frames depicting specific physical processes: melting, rotting, or flowers blooming [68].
Our problem differs from these works in several key ways. First, most future frame prediction methods focus on short time scales, synthesizing frames on the order of seconds into the future, and encompassing relatively small changes. In contrast, painting time lapses span minutes or even hours, and depict dramatic content changes over time. Second, most future frame predictors output a single most likely sequence, making them ill-suited for capturing a variety of very different plausible painting trajectories. One study [62] uses a conditional variational autoencoder to model a distribution of plausible future frames of moving humans. We build upon these ideas to model paint strokes across multiple time steps. Finally, future frame prediction methods focus on natural videos, which depict the motions of people and objects [51, 52, 62] or physical processes [68]. The input frames often contain visual cues about how the motion, action or physical process will progress, limiting the space of possibilities that must be captured. In contrast, snapshots of paintings provide few visual cues, leading to many plausible trajectories.

2.2 Frame interpolation
Our problem can be thought of as a long-term frame interpolation task between a blank canvas and a completed work of art, with many possible painting trajectories between them. In frame interpolation, the goal is to temporally interpolate between two frames in time. Classical approaches focus on natural videos, and estimate dense flow fields [4, 57, 64] or phase [39] to guide the interpolation process. More recent methods use convolutional neural networks to directly synthesize the interpolated frame [44], or combine flow fields with estimates of scene information [28, 43]. Most frame interpolation methods predict a single or a few intermediate frames, and are not easily extended to predicting long sequences, or predicting distributions of sequences.

2.3 Art synthesis
The graphics community has long been interested in simulating physically realistic paint strokes in digital media. Many existing methods focus on physics-based models of fluids or brush bristles [6, 7, 9, 12, 56, 61]. More recent learning-based methods leverage datasets of real paint strokes [31, 36, 67], often posing the artistic stroke synthesis problem as a texture transfer or style transfer problem [3, 37]. Several works focus on simulating watercolor-specific effects such as edge darkening [42, 55]. We focus on capturing large-scale, long-term painting processes, rather than fine-scale details of individual paint strokes.
In style transfer, images are transformed to simulate a specific style, such as a painting-like style [20, 21] or a cartoon-like style [66]. More recently, neural networks have been used for generalized artistic style transfer [18, 69]. We leverage insights from these methods to synthesize realistic progressions of paintings.
Several recent works use reinforcement learning by first designing parameterized brush strokes, and then training an agent to apply strokes to produce a given painting [17, 22, 26, 27, 58, 59]. Some works focus on specific artistic tasks such as hatching or other repetitive strokes [29, 60]. These approaches require careful hand-engineering, and are not optimized to produce varied or realistic painting progressions. In contrast, we learn a broad set of effects from real painting time lapse data.

3 Problem overview
Given a completed painting, our goal is to synthesize different ways that an artist might have created it. We work with recordings of digital and watercolor painting time lapses collected from video websites. Compared to natural videos of scenes and human actions, videos of paintings present unique challenges.
High variability

- Painting trajectories: Even for the same scene, different artists will likely paint objects in different temporal orders (Figure 2).
- Painting rates: Artists work at different speeds, and apply paint in different amounts.
- Scales and shapes: Over the course of a painting, artists use strokes that vary in size and shape. Artists often use broad strokes early on, and add fine details later.
- Data availability: Due to the limited number of available videos in the wild, it is challenging to gather a dataset that captures the aforementioned types of variability.
Medium-specific challenges

- Non-paint effects: Tools that apply local blurring, smudging, or specialized paint brush shapes are common in digital art applications such as Procreate [23]. Artists can also apply global effects simulating varied lighting or tones.
- Erasing effects: In digital art programs, artists can erase or undo past actions, as shown in Figure 3.
- Physical effects in watercolor paintings: Watercolor painting videos exhibit distinctive effects resulting from the physical interaction of paint, water, and paper. These effects include specular lighting on wet paint, pigments fading as they dry, and water spreading from the point of contact with the brush (Figure 4).
In this work, we design a learning-based model to handle the challenges of high variability and painting medium-specific effects.


4 Method
We begin by formalizing the time lapse video synthesis problem. Given a completed painting $x_T$, our task is to synthesize the past frames $x_1, \dots, x_{T-1}$. Suppose we have a training set of real painting time lapse videos. We first define a principled probabilistic model, and then learn its parameters using these videos. At test time, given a completed painting, we sample from the model to create new videos that show realistic-looking painting processes.
4.1 Model
We propose a probabilistic, temporally recurrent paint strokes model. At each time instance $t$, the model predicts a pixel-wise intensity change $\Delta_t$ that should be added to the previous frame to produce the current frame; that is, $x_t = x_{t-1} + \Delta_t$. This change does not necessarily correspond to a single paint stroke; it could represent one or multiple physical or digital paint strokes, or other effects such as erasing or fading.
We model $\Delta_t$ as being generated from a random latent variable $z_t$, the completed piece $x_T$, and the image content at the previous time step $x_{t-1}$; the likelihood is $p(\Delta_t \mid z_t; x_{t-1}, x_T)$. Using a random variable $z_t$ helps to capture the stochastic nature of painting. Using both $x_{t-1}$ and $x_T$ enables the model to capture time-varying effects such as the progression of coarse to fine brush sizes, while the Markovian assumption facilitates learning from a small number of video examples.

It is common to define such image likelihoods as a per-pixel normal distribution, which results in an L2 image similarity loss term in maximum likelihood formulations [33]. In synthesis tasks, using L2 loss often produces blurry results [24]. We instead optimize both the L1 distance in pixel space and the L2 distance in a perceptual feature space. Perceptual losses are commonly used in image synthesis and style transfer tasks to produce sharper and more visually pleasing results [14, 24, 30, 44, 65]. We use the L2 distance between normalized VGG features [48] as described in [65]. We let the likelihood take the form:

$$p(\Delta_t \mid z_t; x_{t-1}, x_T) = \mathrm{Laplace}\big(\Delta_t;\, \hat{\Delta}_t, \sigma_1\big)\,\mathcal{N}\big(V(x_{t-1} + \Delta_t);\, V(x_{t-1} + \hat{\Delta}_t), \sigma_2^2\big), \quad (1)$$

where $\hat{\Delta}_t = g_\theta(z_t, x_{t-1}, x_T)$, $g_\theta$ represents a function parameterized by $\theta$, $V(\cdot)$ is a function that extracts normalized VGG features, and $\sigma_1, \sigma_2$ are fixed noise parameters.
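To make the corresponding training terms concrete, the following is a minimal PyTorch-style sketch of the image similarity losses implied by Eq. (1). It is a sketch rather than the authors' implementation (which uses Keras [11] and TensorFlow [1]); `feat_extractor` stands in for the normalized VGG feature function $V$, and the loss weights are illustrative placeholders for the fixed noise parameters.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(delta_pred, delta_true, x_prev, feat_extractor,
                        w_l1=1.0, w_percep=1.0):
    """Surrogate for the negative log-likelihood of Eq. (1):
    L1 distance between predicted and true changes in pixel space, plus
    L2 distance between perceptual features of the resulting frames.
    w_l1 and w_percep are illustrative placeholder weights."""
    l1 = torch.abs(delta_pred - delta_true).mean()
    feats_pred = feat_extractor(x_prev + delta_pred)   # features of x_{t-1} + predicted change
    feats_true = feat_extractor(x_prev + delta_true)   # features of the true next frame x_t
    percep = F.mse_loss(feats_pred, feats_true)
    return w_l1 * l1 + w_percep * percep
```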
We assume the latent variable $z_t$ is generated from the multivariate standard normal distribution:

$$p(z_t) = \mathcal{N}(z_t; 0, I). \quad (2)$$
We show a diagram of this model in Figure 5.
We aim to find model parameters $\theta$ that best explain all videos in our dataset:

$$\hat{\theta} = \arg\max_\theta \prod_{i} \prod_{t} \int_{z_t} p\big(\Delta_t^{(i)} \mid z_t; x_{t-1}^{(i)}, x_T^{(i)}\big)\, p(z_t)\, dz_t. \quad (3)$$
This integral is intractable, as is the posterior $p(z_t \mid \Delta_t; x_{t-1}, x_T)$, preventing the use of the EM algorithm. We instead use variational inference and introduce an approximate posterior distribution $q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)$ [32, 62, 63]. We let this approximate distribution take the form of a multivariate normal:

$$q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T) = \mathcal{N}\big(z_t;\, \mu_\phi(\Delta_t, x_{t-1}, x_T),\, \Sigma_\phi(\Delta_t, x_{t-1}, x_T)\big), \quad (4)$$

where $\mu_\phi, \Sigma_\phi$ are functions parameterized by $\phi$, and $\Sigma_\phi$ is diagonal.
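The sampling and regularization of this Gaussian posterior follow standard variational autoencoder practice [33]. Below is a minimal PyTorch-style sketch (not the authors' code); `mu` and `logvar` are assumed to be the encoder outputs parameterizing $q_\phi$.

```python
import torch

def sample_posterior(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, diag(exp(logvar))) || N(0, I) ), averaged over the batch."""
    kl_per_sample = 0.5 * torch.sum(torch.exp(logvar) + mu**2 - 1.0 - logvar, dim=-1)
    return kl_per_sample.mean()
```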


4.1.1 Neural network framework

We implement the decoder $g_\theta$ and the approximate posterior parameters $\mu_\phi, \Sigma_\phi$ as convolutional neural networks, building on conditional variational autoencoder designs [53, 63]; architectural details are provided in the appendix (Figure 13).
4.2 Learning
We learn model parameters using short sequences from the training video dataset, which we discuss in further detail in Section 5.1. We use two stages of optimization to facilitate convergence: pairwise optimization, and sequence optimization.
4.2.1 Pairwise optimization
From Equations (3) and (4), we obtain an expression for each pair of consecutive frames (a derivation is provided in the appendix):

$$\log p(\Delta_t \mid x_{t-1}, x_T) \geq \mathbb{E}_{q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)}\big[\log p(\Delta_t \mid z_t; x_{t-1}, x_T)\big] - KL\big[q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)\,\|\,p(z_t)\big], \quad (5)$$

where $KL[\cdot\,\|\,\cdot]$ denotes the Kullback-Leibler divergence. Combining Equations (1), (2), (4), and (5), we minimize:

$$\mathcal{L}_{pair}(\theta, \phi) = KL\big[q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)\,\|\,\mathcal{N}(0, I)\big] + \lambda_1 \big\|\Delta_t - \hat{\Delta}_t\big\|_1 + \lambda_2 \big\|V(x_{t-1} + \Delta_t) - V(x_{t-1} + \hat{\Delta}_t)\big\|_2^2, \quad (6)$$

where $\|\cdot\|_1$ and $\|\cdot\|_2$ represent the L1 and L2 distance, respectively, and $\lambda_1, \lambda_2$ are loss weights. We refer to the last two terms as image similarity losses.
We optimize Equation (6) on single time steps, which we obtain by sampling all pairs of consecutive frames from the dataset. We also train the model to produce the first frame from videos that begin with a blank canvas, given a white input frame $x_0$ and the completed painting $x_T$. These starter sequences are important for teaching the model how to start a painting at inference time.
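A minimal sketch of one pairwise optimization step is shown below, reusing `sample_posterior`, `kl_to_standard_normal`, and `reconstruction_loss` from the earlier sketches; `encoder` and `decoder` are assumed callables for $q_\phi$ and $g_\theta$, and the weighting is illustrative rather than the authors' exact configuration.

```python
def pairwise_loss(encoder, decoder, x_prev, x_t, x_T, feat_extractor, kl_weight=1.0):
    """One conditional-VAE training step on a pair of consecutive frames (Eq. 6)."""
    delta_true = x_t - x_prev                       # true change for this step
    mu, logvar = encoder(delta_true, x_prev, x_T)   # parameters of q_phi
    z = sample_posterior(mu, logvar)
    delta_pred = decoder(z, x_prev, x_T)            # g_theta
    return (kl_weight * kl_to_standard_normal(mu, logvar)
            + reconstruction_loss(delta_pred, delta_true, x_prev, feat_extractor))
```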
4.2.2 Sequence optimization
To synthesize an entire video, we run our model recurrently for multiple time steps, building upon its own predicted frames. It is common when making sequential predictions to observe compounding errors or artifacts over time [51]. We use a novel sequential training scheme to enforce that the outputs of the model are accurate and realistic over multiple time steps. We alternate between two sequential training modes.
- Sequential CVAE training encourages sequences of frames to be well-captured by the learned distribution, by reducing the compounding of errors. Specifically, we train the model sequentially for a few frames, predicting each intermediate frame using the output of the model at the previous time step: $\hat{x}_t = \hat{x}_{t-1} + \hat{\Delta}_t$. We compare each predicted frame to the corresponding training frame using the image similarity losses in Eq. (6). We illustrate this strategy in Figure 7.
- Sequential sampling training encourages random samples from our learned distribution to look like realistic partially-completed paintings. During inference (described below), we rely on sampling from the prior $p(z_t)$ at each time step to synthesize new videos. A limitation of the variational strategy is the limited coverage of the latent space during training [15], sometimes leading to unrealistic predictions for $z_t \sim \mathcal{N}(0, I)$. To compensate for this, we introduce supervision on such samples by amending the reconstruction term in Equation (5) using a conditional critic loss term [19]:

  $$\mathcal{L}_{critic} = -D_\psi\big(g_\theta(z_t, \hat{x}_{t-1}, x_T);\, \hat{x}_{t-1}, x_T\big), \quad z_t \sim \mathcal{N}(0, I), \quad (7)$$

  where $D_\psi$ is a critic function with parameters $\psi$. The critic encourages the distribution of sampled strokes to match the distribution of training strokes $\Delta_t$. We use a critic architecture based on [10] and optimize it using WGAN-GP [19].
In addition to the critic loss, we apply the image similarity losses discussed above after the final time step of each sampled rollout, to encourage the model to eventually produce the completed painting. We provide a sketch of both sequential training modes below.
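The following is a minimal PyTorch-style sketch of the two sequential training modes, reusing the helper functions from the previous sketches. The conditioning of the critic, the latent shape, the rollout lengths, and the loss weighting are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def frame_similarity(x_hat, x_true, feat_extractor):
    """Image similarity losses applied to frames: L1 in pixel space
    plus L2 between perceptual features."""
    return (torch.abs(x_hat - x_true).mean()
            + F.mse_loss(feat_extractor(x_hat), feat_extractor(x_true)))

def sequential_cvae_loss(encoder, decoder, frames, x_T, feat_extractor):
    """Roll the model forward on its own predictions for a short training
    sequence; each predicted frame is compared to the corresponding real frame."""
    x_hat = frames[0]
    loss = 0.0
    for t in range(1, len(frames)):
        delta_true = frames[t] - frames[t - 1]
        mu, logvar = encoder(delta_true, x_hat, x_T)   # condition on the *predicted* canvas
        z = sample_posterior(mu, logvar)
        x_hat = x_hat + decoder(z, x_hat, x_T)         # build on the model's own output
        loss = loss + frame_similarity(x_hat, frames[t], feat_extractor)
    return loss

def sequential_sampling_loss(decoder, critic, x0, x_T, num_steps, latent_dim,
                             feat_extractor):
    """Sample z_t from the prior at every step; a conditional critic scores each
    sampled change (WGAN generator term), and the image similarity losses are
    applied to the final frame so the rollout converges to the completed painting."""
    x_hat = x0
    adv = 0.0
    for _ in range(num_steps):
        z = torch.randn(x0.shape[0], latent_dim, device=x0.device)
        delta = decoder(z, x_hat, x_T)
        adv = adv - critic(delta, x_hat, x_T).mean()
        x_hat = x_hat + delta
    return adv + frame_similarity(x_hat, x_T, feat_extractor)
```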
4.3 Inference: video synthesis
Given a completed painting $x_T$ and learned model parameters $\hat{\theta}$, we synthesize videos by sampling from the model at each time step. Specifically, we synthesize each frame $\hat{x}_t = \hat{x}_{t-1} + g_{\hat{\theta}}(z_t, \hat{x}_{t-1}, x_T)$ using the synthesized previous frame $\hat{x}_{t-1}$ and a randomly sampled $z_t \sim \mathcal{N}(0, I)$. We start each video from $\hat{x}_0$, a blank frame.
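A minimal PyTorch-style sketch of this sampling loop, assuming frame intensities in [0, 1] so that a white canvas is all ones; `decoder` stands in for $g_\theta$ and `latent_dim` is an assumed hyperparameter.

```python
import torch

@torch.no_grad()
def synthesize_video(decoder, x_T, num_steps, latent_dim):
    """Sample one plausible painting time lapse for a completed painting x_T."""
    x_hat = torch.ones_like(x_T)    # blank (white) starting canvas, assuming [0, 1] intensities
    frames = [x_hat]
    for _ in range(num_steps):
        z = torch.randn(x_T.shape[0], latent_dim, device=x_T.device)
        x_hat = x_hat + decoder(z, x_hat, x_T)   # add the sampled change to the canvas
        frames.append(x_hat)
    return frames
```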
4.4 Implementation
We implement our model using Keras [11] and TensorFlow [1]. We selected the hyperparameters controlling the reconstruction loss weights experimentally, using the validation set.
5 Experiments
5.1 Datasets
We collected recordings of painting time lapses from YouTube and Vimeo. We selected digital and watercolor paintings (which are common painting methods on these websites), and focused on landscapes or still lifes (which are common subjects for both mediums). We downloaded each video and cropped it temporally and spatially to include only the painting process (excluding any introductions or sketching that might have preceded the painting). We split each dataset in a 70:15:15 ratio into training, validation, and held-out test video sets. While we cannot host individual video files, we make our download scripts available at https://xamyzhao.github.io/timecraft.
- Digital paintings: We collected digital painting time lapses. The average duration is 4 minutes, with many videos having already been sped up by artists using the Procreate application [23]. We selected videos with minimal zooming and panning, and manually removed segments that contained movements such as translations, flipping and zooming. Figure 3 shows example video sequences.
- Watercolor paintings: We collected watercolor time lapses, with an average duration of 20 minutes. We only kept videos that contained minimal movement of the paper, and manually corrected any small translations of the painting. We show examples in Figure 4. A challenge with videos of physical paintings is the presence of the hand, paintbrush and shadows in many frames. We trained a simple convolutional neural network to identify and remove frames that contained these artifacts.
- Sequence extraction: We synthesize time lapses at a lower temporal resolution than real time for computational feasibility. We extract training sequences from the raw videos at a fixed frame period (i.e., skipping several real frames in each synthesized time step), while allowing a bounded amount of variance in that period. Allowing some variance in the sampling rate enables us to extract sequences at an approximately constant period, which is useful for (1) improving robustness to varied painting rates, and (2) extracting sequences from watercolor painting videos where many frames containing hands or paintbrushes have been removed. We select the period and allowed variance independently for each dataset. We avoid capturing static segments of each video (e.g., when the artist is speaking) by requiring that adjacent frames in each sequence have at least a minimum fraction of pixels changing by a fixed intensity threshold (a sketch of this check follows this list). We use a dynamic programming method to find all sequences that satisfy these criteria. We train on sequences of length 3 or 5 for sequential CVAE training, and on sequences of a fixed length for sequential sampling training, which we determined using experiments on the validation set. For the test set, we extract a single sequence from each test video that satisfies the filtering criteria.
- Crop extraction: To facilitate learning from small numbers of videos, we extract multiple crops from each video. We first downsample each video spatially, so that most patches contain visually interesting content and spatial context.
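As referenced in the sequence extraction step above, the following is a minimal NumPy sketch of the adjacent-frame change check used to skip static segments. The threshold values are illustrative placeholders rather than the values used for the actual datasets, and frames are assumed to be H x W x C float arrays in [0, 1].

```python
import numpy as np

def enough_change(frame_a, frame_b, intensity_thresh=0.05, min_changed_frac=0.01):
    """Return True if enough pixels change between two frames to count as
    painting progress (i.e., the segment is not static)."""
    diff = np.abs(frame_b.astype(np.float32) - frame_a.astype(np.float32))
    changed = diff.max(axis=-1) > intensity_thresh   # per-pixel change, max over color channels
    return changed.mean() >= min_changed_frac
```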
5.2 Baselines
We compare our method to the following baselines:
- Deterministic video synthesis (unet): a deterministic baseline based on a U-Net architecture [46] that directly synthesizes a fixed-length video from the completed painting.
- Stochastic video synthesis (vdp): Visual deprojection synthesizes a distribution of videos from a single temporally-projected input image [5].

We design each baseline model architecture to have a comparable number of parameters to our model. Both baselines output videos of a fixed length, which we choose to be comparable to the sequence length described in Section 5.1.
5.3 Results
Table 1: Percentage of pairwise comparisons in which participants judged the first method's video to be more realistic.

Comparison | All paintings | Watercolor paintings | Digital paintings
---|---|---|---
real vs. vdp | 90% | 90% | 90%
real vs. ours | 55% | 60% | 51%
ours vs. vdp | 91% | 90% | 88%
We conducted both quantitative and qualitative evaluations. We first present a user study quantifying human perception of the realism of our synthesized videos. Next, we qualitatively examine our synthetic videos, and discuss characteristics that contribute to their realism. Finally, we discuss quantitative metrics for comparing sets of sampled videos to real videos. We show additional results, including videos and visualizations using the tipiX tool [13] on our project page at https://xamyzhao.github.io/timecraft.
We experimented with training each method on digital or watercolor paintings only, as well as on the combined paintings dataset. For all methods, we found that training on the combined dataset produced the best qualitative and quantitative results (likely due to our limited dataset size), and we only present results for those models.
5.3.1 Human evaluations
We surveyed 158 people using Amazon Mechanical Turk [2]. Participants compared the realism of pairs of videos, with each pair containing videos randomly sampled from ours, vdp, or the real videos. In this study, we omit the weaker baseline unet, which performed consistently worse on all metrics (discussed below).
We first trained the participants by showing them several examples of real painting time lapses. We then showed them a pair of time lapse videos generated by different methods for the center crop of the same painting, and asked “Which video in each pair shows a more realistic painting process?” We repeated this process for 14 randomly sampled paintings from the combined test set. We include full study details in the appendix. Table 1 indicates that almost every participant thought synthetic videos produced by our model looked more realistic than those produced by vdp. Furthermore, participants confused our synthetic videos with real videos nearly half of the time. In the next sections, we show example synthetic videos and discuss aspects that make the results of our model appear more realistic, offering an explanation for these promising user study results.
5.3.2 Qualitative results
Figure 9 shows sample sequences produced by our model, for two input paintings. Our model chooses different orderings of semantic regions from the beginning of each sequence, leading to different paths that still converge to the same completed painting.
Figure 10 shows sequences synthesized by each method. To objectively compare the stochastic methods vdp and ours, we show the most similar prediction by L1 distance to the ground truth sequence. The ground truth sequences show that artists tend to paint in a coarse-to-fine manner, using broad strokes near the start of a painting, and finer strokes near the end. As we highlight with arrows, our method captures this tendency better than baselines, having learned to focus on separate semantic regions such as mountains, cabins and trees. Our predicted trajectories are similar to the ground truth, showing that our sequential modeling approach is effective at capturing realistic temporal progressions. In contrast, the baselines tend to make blurry changes without separating the scene into components, a common result for methods that do not explicitly model stochastic processes.
We examine failure cases from our method in Figure 11, such as making many fine or disjoint changes in a single time step and creating an unrealistic effect.
Figure 12: Best video similarity as a function of the number of samples drawn from each stochastic method (solid lines). As we draw more samples, the best video similarity to the real video improves, indicating that some samples are close to the artist’s specific painting choices. We use L1 distance as the metric on the left (lower is better), and stroke IOU on the right (higher is better). Shaded regions show standard deviations of the stochastic methods. We highlight several insights from these plots. (1) Both our method and vdp produce samples that are comparably similar to the real video by L1 distance (left). However, our method synthesizes strokes that are more similar in shape to those used by artists (right). (2) At low numbers of samples, the deterministic unet method is closer (by L1 distance) to the real video than samples from vdp or ours, since L1 favors blurry frames that average many possibilities. (3) Our method shows more improvement in L1 distance and stroke area IOU than vdp as we draw more samples, indicating that our method captures a more varied distribution of videos.

5.3.3 Quantitative results
Comparing synthesized results to “ground truth” in a stochastic task is ill-defined, and developing quantitative measures of realism is difficult [25, 47]; these challenges motivated our user study above. In this section, we explore quantitative metrics designed to measure aspects of time lapse realism. For each video in the test set, we extract a 40-frame long sequence according to the criteria described in Section 5.1, and evaluate each method on 5 random crops using several video similarity metrics:
- Best (across samples) overall video similarity: For each test painting, we draw multiple sample videos from each model and report the closest sample to the true video using a per-pixel L1 loss [5]. A method that has captured the distribution of real time lapses well should produce better “best” estimates as the number of samples increases. This captures whether some samples drawn from a model get close to the real video, and also whether a method is diverse enough to capture each artist’s specific choices.
- Best (across samples) stroke shape similarity: We quantify how similar the set of stroke shapes are between the ground truth and each predicted video, disregarding the order in which they were performed. We define stroke shape as a binary map of the changes made in each time step. For each test video, we compare the artist’s stroke shape to the most similarly shaped stroke synthesized by each method, as measured by intersection-over-union (IOU), and report the average IOU over all ground truth strokes. This captures whether a method paints in similar semantic regions to the artist. We provide a sketch of both metrics after this list.
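Below is a minimal NumPy sketch of the two metrics as described above. The binarization threshold for stroke maps is an illustrative placeholder, and frames are assumed to be float arrays in [0, 1].

```python
import numpy as np

def best_sample_l1(real_frames, sampled_videos):
    """Best (across samples) overall video similarity: the minimum per-pixel L1
    distance between the real video and any sampled video."""
    real = np.asarray(real_frames, dtype=np.float32)
    return min(float(np.mean(np.abs(np.asarray(v, dtype=np.float32) - real)))
               for v in sampled_videos)

def stroke_iou(real_frames, pred_frames, change_thresh=0.05):
    """Average, over the artist's strokes, of the best IOU between each real
    stroke's binary change map and any predicted stroke's change map."""
    def stroke_maps(frames):
        frames = np.asarray(frames, dtype=np.float32)
        return [np.abs(frames[t] - frames[t - 1]).max(axis=-1) > change_thresh
                for t in range(1, len(frames))]
    real_strokes = stroke_maps(real_frames)
    pred_strokes = stroke_maps(pred_frames)
    ious = []
    for r in real_strokes:
        best = 0.0
        for p in pred_strokes:
            union = np.logical_or(r, p).sum()
            if union > 0:
                best = max(best, np.logical_and(r, p).sum() / union)
        ious.append(best)
    return float(np.mean(ious))
```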
In Table 2 we introduce the interp baseline, which linearly interpolates in time, as a quantitative lower bound. The deterministic interp and unet approaches perform poorly on both metrics. vdp and our method are able to produce samples that lead to comparable “best video similarity”, highlighting the strength of methods designed to capture distributions of videos. The stroke IOU metric shows that our method synthesizes strokes that are significantly more realistic than those of the other methods.
We show the effect of increasing the number of samples in Figure 12. At low sample counts, the blurry videos produced by interp and unet attain lower L1 distance to the real video than vdp and ours, likely because L1 distance penalizes samples with different painting progressions more than it penalizes blurry frames. In other words, a blurry, gradually fading video with “average” frames will typically have a lower L1 distance to the artist’s time lapse, compared to different plausible painting processes. As the number of samples increases, vdp and ours produce some samples that are close to the real video. Together with the user study described above, these metrics illustrate encouraging results that our method captures a realistic variety of painting time lapses.
Method | Digital paintings L1 | Digital paintings Stroke IOU | Watercolor paintings L1 | Watercolor paintings Stroke IOU
---|---|---|---|---
interp | | | |
unet | | | |
vdp | | | |
ours | | | |
6 Conclusion
In this work, we introduce a new video synthesis problem: making time lapse videos that depict the creation of paintings. We proposed a recurrent probabilistic model that captures the stochastic decisions of human artists. We introduced an alternating sequential training scheme that encourages the model to make realistic predictions over many time steps. We demonstrated our model on digital and watercolor paintings, and used it to synthesize realistic and varied painting videos. Our results, including human evaluations, indicate that the proposed model is a powerful first tool for capturing stochastic changes from small video datasets.
7 Acknowledgments
We thank Zoya Bylinskii of Adobe Inc. for her insights around designing effective and accurate user studies. This work was funded by Wistron Corporation.
References
- [1] (2016) Tensorflow: large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467. Cited by: §4.4.
- [2] (2005) Amazon mechanical turk: overview. Cited by: §5.3.1.
- [3] (2010) Segmental brush synthesis with stroke images. Cited by: §2.3.
- [4] (2011) A database and evaluation methodology for optical flow. International Journal of Computer Vision 92 (1), pp. 1–31. Cited by: §2.2.
- [5] (2019) Visual deprojection: probabilistic recovery of collapsed dimensions. In IEEE International Conference on Computer Vision (ICCV). Cited by: §2, §5.2, §5.3.3.
- [6] (2004) A viscous paint model for interactive applications. Computer Animation and Virtual Worlds 15 (3-4), pp. 433–441. Cited by: §2.3.
- [7] (2004) A versatile interactive 3d brush model. In Computer Graphics and Applications, 2004. PG 2004. Proceedings. 12th Pacific Conference on, pp. 319–328. Cited by: §2.3.
- [8] (2002) Learning motion patterns of persons for mobile service robots. In Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat. No. 02CH37292), Vol. 4, pp. 3601–3606. Cited by: §2.1.
- [9] (2015) Wetbrush: gpu-based 3d painting simulation at the bristle level. ACM Transactions on Graphics (TOG) 34 (6), pp. 200. Cited by: §2.3.
- [10] (2018) StarGAN: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: Figure 13, §4.2.2.
- [11] (2015) Keras. GitHub. Note: https://github.com/fchollet/keras. Cited by: §4.4.
- [12] (2005) MoXi: real-time ink dispersion in absorbent paper. In ACM Transactions on Graphics (TOG), Vol. 24, pp. 504–511. Cited by: §2.3.
- [13] TipiX: rapid visualization of large image collections. MICCAI-IMIC Interactive Medical Image Computing Workshop. Cited by: §5.3.
- [14] (2016) Generating images with perceptual similarity metrics based on deep networks. In Advances in neural information processing systems, pp. 658–666. Cited by: §4.1.
- [15] (2018) Latent constraints: learning to generate conditionally from unconditional generative models. In International Conference on Learning Representations, External Links: Link Cited by: §4.2.2.
- [16] (1999) Trajectory clustering with mixtures of regression models. In KDD, Vol. 99, pp. 63–72. Cited by: §2.1.
- [17] (2018) Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118. Cited by: §2.3.
- [18] (2015) A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576. Cited by: §2.3.
- [19] (2017) Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767–5777. Cited by: Figure 13, §4.2.2.
- [20] (1998) Painterly rendering with curved brush strokes of multiple sizes. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pp. 453–460. Cited by: §2.3.
- [21] (2015) Synthesis of oil-style paintings. In Pacific-Rim Symposium on Image and Video Technology, pp. 15–26. Cited by: §2.3.
- [22] (2019) Learning to paint with model-based deep reinforcement learning. In IEEE International Conference on Computer Vision (ICCV), Cited by: §2.3.
- [23] (2016) Procreate artists’ handbook. Savage. Cited by: §3, §5.1.
- [24] (2016) Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004. Cited by: §4.1, §5.2.
- [25] (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §5.3.3.
- [26] (2019) LPaintB: learning to paint from self-supervision. CoRR abs/1906.06841. External Links: Link, 1906.06841 Cited by: §2.3.
- [27] (2019) PaintBot: a reinforcement learning approach for natural media painting. CoRR abs/1904.02201. External Links: Link, 1904.02201 Cited by: §2.3.
- [28] (2018) Super slomo: high quality estimation of multiple intermediate frames for video interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9000–9008. Cited by: §2.2.
- [29] (2002) Hatching by example: a statistical approach. In Proceedings of the 2nd international symposium on Non-photorealistic animation and rendering, pp. 29–36. Cited by: §2.3.
- [30] (2016) Perceptual losses for real-time style transfer and super-resolution. In European conference on computer vision, pp. 694–711. Cited by: §4.1.
- [31] (2010) An example-based approach to synthesize artistic strokes using graphs. In Computer Graphics Forum, Vol. 29, pp. 2145–2152. Cited by: §2.3.
- [32] (2014) Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pp. 3581–3589. Cited by: Appendix A, §4.1.
- [33] (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §4.1.
- [34] (2010) Sift flow: dense correspondence across scenes and its applications. IEEE transactions on pattern analysis and machine intelligence 33 (5), pp. 978–994. Cited by: §2.1.
- [35] (2017) Video frame synthesis using deep voxel flow. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4463–4471. Cited by: §2.1.
- [36] (2013) RealBrush: painting with examples of physical media. ACM Transactions on Graphics (TOG) 32 (4), pp. 117. Cited by: §2.3.
- [37] (2015) Brushables: example-based edge-aware directional texture painting. In Computer Graphics Forum, Vol. 34, pp. 257–267. Cited by: §2.3.
- [38] (2016-11) Deep multi-scale video prediction beyond mean square error. Cited by: §2.1.
- [39] (2015) Phase-based frame interpolation for video. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1410–1418. Cited by: §2.2.
- [40] (2014) Modeling deep temporal dependencies with recurrent "grammar cells". In Advances in neural information processing systems, pp. 1925–1933. Cited by: §2.1.
- [41] (2014) Structured recurrent temporal restricted boltzmann machines. In International Conference on Machine Learning, pp. 1647–1655. Cited by: §2.1.
- [42] (2017) Edge- and substrate-based effects for watercolor stylization. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, pp. 2. Cited by: §2.3.
- [43] (2018) Context-aware synthesis for video frame interpolation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1710. Cited by: §2.2.
- [44] (2017) Video frame interpolation via adaptive separable convolution. In Proceedings of the IEEE International Conference on Computer Vision, pp. 261–270. Cited by: §2.2, §4.1.
- [45] (2014) Video (language) modeling: a baseline for generative models of natural videos. arXiv preprint arXiv:1412.6604. Cited by: §2.1.
- [46] (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §5.2.
- [47] (2016) Improved techniques for training gans. In Advances in neural information processing systems, pp. 2234–2242. Cited by: §5.3.3.
- [48] (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Cited by: §4.1.
- [49] (2009) The recurrent temporal restricted boltzmann machine. In Advances in neural information processing systems, pp. 1601–1608. Cited by: §2.1.
- [50] (2004) Motion prediction for moving objects: a statistical approach. In IEEE International Conference on Robotics and Automation, 2004. Proceedings. ICRA’04. 2004, Vol. 4, pp. 3931–3936. Cited by: §2.1.
- [51] (2017) Learning to generate long-term future via hierarchical prediction. In ICML, Cited by: §2.1, §2.1, §4.2.2.
- [52] (2016) Generating videos with scene dynamics. In Advances In Neural Information Processing Systems, pp. 613–621. Cited by: §2.1, §2.1.
- [53] (2016) An uncertain future: forecasting from static images using variational autoencoders. In European Conference on Computer Vision, pp. 835–851. Cited by: §4.1.1.
- [54] (2014) Patch to the future: unsupervised visual prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3302–3309. Cited by: §2.1.
- [55] (2014) Towards photo watercolorization with artistic verisimilitude. IEEE transactions on visualization and computer graphics 20 (10), pp. 1451–1460. Cited by: §2.3.
- [56] (2001) The Synthesis of Rock Textures in Chinese Landscape Painting. Computer Graphics Forum. External Links: ISSN 1467-8659, Document Cited by: §2.3.
- [57] (2011) Optical flow guided tv-l 1 video interpolation and restoration. In International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 273–286. Cited by: §2.2.
- [58] (2012) Artist agent: A reinforcement learning approach to automatic stroke generation in oriental ink painting. CoRR abs/1206.4634. External Links: Link, 1206.4634 Cited by: §2.3.
- [59] (2015) Stroke-based stylization learning and rendering with inverse reinforcement learning. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pp. 2531–2537. External Links: ISBN 978-1-57735-738-4, Link. Cited by: §2.3.
- [60] (2014) Autocomplete painting repetitions. ACM Transactions on Graphics (TOG) 33 (6), pp. 172. Cited by: §2.3.
- [61] (2002) A solid model based virtual hairy brush. In Computer Graphics Forum, Vol. 21, pp. 299–308. Cited by: §2.3.
- [62] (2016) Visual dynamics: probabilistic future frame synthesis via cross convolutional networks. In Advances in neural information processing systems, pp. 91–99. Cited by: Appendix A, §2.1, §4.1.
- [63] (2016) Attribute2image: conditional image generation from visual attributes. In European Conference on Computer Vision, pp. 776–791. Cited by: Appendix A, §4.1.1, §4.1.
- [64] (2013) Multi-level video frame interpolation: exploiting the interaction among different levels. IEEE Transactions on Circuits and Systems for Video Technology 23 (7), pp. 1235–1248. Cited by: §2.2.
- [65] (2018) The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595. Cited by: §4.1.
- [66] (2017) Data-driven synthesis of cartoon faces using different styles. IEEE Transactions on image processing 26 (1), pp. 464–478. Cited by: §2.3.
- [67] (2017) Example-based brushes for coherent stylized renderings. In Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, pp. 3. Cited by: §2.3.
- [68] (2016-10) Learning temporal transformations from time-lapse videos. Vol. 9912, pp. 262–277. External Links: ISBN 978-3-319-46483-1, Document Cited by: §2.1, §2.1.
- [69] (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §2.3.
Appendix A ELBO derivation
We provide the full derivation of our model and losses from Equation (3). We start with our goal of finding model parameters $\theta$ that maximize the following probability for all videos and all time steps $t$:
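Below is a sketch of the standard conditional variational lower-bound argument, following [32, 62]; the notation matches Equations (3) and (5), but the presentation is a reconstruction rather than the authors' exact derivation.

$$\begin{aligned}
\log p(\Delta_t \mid x_{t-1}, x_T)
  &= \mathbb{E}_{q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)}\!\left[\log p(\Delta_t \mid x_{t-1}, x_T)\right] \\
  &= \mathbb{E}_{q_\phi}\!\left[\log \frac{p(\Delta_t \mid z_t; x_{t-1}, x_T)\, p(z_t)}{p(z_t \mid \Delta_t; x_{t-1}, x_T)}\right] \\
  &= \mathbb{E}_{q_\phi}\!\left[\log \frac{p(\Delta_t \mid z_t; x_{t-1}, x_T)\, p(z_t)\, q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)}{q_\phi(z_t \mid \Delta_t; x_{t-1}, x_T)\, p(z_t \mid \Delta_t; x_{t-1}, x_T)}\right] \\
  &= \mathbb{E}_{q_\phi}\!\left[\log p(\Delta_t \mid z_t; x_{t-1}, x_T)\right]
     - KL\!\left[q_\phi \,\|\, p(z_t)\right]
     + KL\!\left[q_\phi \,\|\, p(z_t \mid \Delta_t; x_{t-1}, x_T)\right] \\
  &\geq \mathbb{E}_{q_\phi}\!\left[\log p(\Delta_t \mid z_t; x_{t-1}, x_T)\right]
     - KL\!\left[q_\phi \,\|\, p(z_t)\right],
\end{aligned}$$

where the final inequality follows because the KL divergence is non-negative; this yields the lower bound in Equation (5).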
Appendix B Network architecture
We provide details about the architecture of our recurrent model and our critic model in Figure 13.

Figure 13: Architecture details of our recurrent model and our conditional critic. We train the critic using WGAN-GP [19] with a gradient penalty weight of 10 and 5 critic training iterations for each iteration of our model. All strided convolutions and downsampling layers reduce the size of the input volume by a factor of 2.
Appendix C Human study
We surveyed 150 human participants. Each participant took a survey containing a training section followed by 14 questions.
- Calibration: We first trained the participants by showing them several examples of real digital and watercolor painting time lapses.
- Evaluation: We then showed each participant 14 pairs of time lapse videos, comprised of a mix of watercolor and digital paintings selected randomly from the test sets. Although each participant only saw a subset of the test paintings, every test painting was included in the surveys. Each pair contained videos of the same center-cropped painting. The videos were randomly chosen from all pairwise comparisons between real, vdp, and ours, with the ordering within each pair randomized as well. Samples from vdp and ours were generated randomly.
- Validation: Within the survey, we also showed two repeated questions comparing a real video with a linearly interpolated video (which we described as interp in Table 2 in the paper) to validate that users understood the task. We did not use results from users who chose incorrect answers for one or both validation questions.
Appendix D Additional results
We include additional qualitative results in Figures 14 and 15. We encourage the reader to view the supplementary video, which illustrates many of the discussed effects.