Deep Line Art Video Colorization with a Few References

03/24/2020 ∙ Min Shi et al. ∙ Victoria University of Wellington; Institute of Computing Technology, Chinese Academy of Sciences

Coloring line art images based on the colors of reference images is an important stage in animation production, which is time-consuming and tedious. In this paper, we propose a deep architecture to automatically color line art videos with the same color style as given reference images. Our framework consists of a color transform network and a temporal constraint network. The color transform network takes the target line art image as well as the line art and color images of one or more reference images as input, and generates the corresponding target color image. To cope with large differences between the target line art image and the reference color images, our architecture uses non-local similarity matching to determine region correspondences between the target and reference images, which are used to transform local color information from the references to the target. To ensure global color style consistency, we further incorporate Adaptive Instance Normalization (AdaIN), with the transformation parameters obtained from a style embedding vector that describes the global color style of the references, extracted by an embedder network. The temporal constraint network takes the reference images and the target image together in chronological order, and learns spatiotemporal features through 3D convolution to ensure temporal consistency between the target image and the reference images. When dealing with an animation of a new style, our model can achieve even better coloring results after fine-tuning its parameters with only a small number of samples. To evaluate our method, we build a line art coloring dataset. Experiments show that our method achieves the best performance on line art video coloring compared with state-of-the-art methods and other baselines.




1 Introduction

The process of animation production requires high labor input. Coloring is one of the important stages after the line art images are created, which is a time-consuming and tedious task. Usually, “inbetweeners” colorize a series of line art images according to several reference color images drawn by artists. Some commercial devices and software can be used to speed up the workflow of line art image coloring. But it still needs a lot of repetitive work in each frame. Therefore, automatic methods for coloring line art images based on reference color images are highly demanded, which can greatly reduce the costs in animation production.

Early research on line art image coloring mostly relies on manually specified colors, which are then spread out to similar regions [28, 32, 19]. However, these methods cannot significantly improve the efficiency of the coloring stage in animation production. Inspired by the success of generative models on image synthesis tasks in recent years, researchers have used deep convolutional neural networks (CNNs) to automatically color line art images [8, 39, 10]. However, user interaction is still required to achieve satisfactory coloring results. To encourage temporal consistency in the colored animation, Thasarathan et al. [33] input the previous frame together with the current frame to a discriminator to improve the colorization of neighboring frames. However, their model cannot fully guarantee consistency between the color styles of the results and the reference image, which is important for the color quality of the animation.

Coloring line art videos based on given reference images is challenging. First, unlike real-life videos, animation videos do not maintain pixel-wise continuity between successive frames. For example, lines corresponding to the limbs of an animated character may jump from one shape to another to depict fast motion, so temporal continuity cannot be directly used to guide the learning process to maintain color consistency between regions in adjacent frames. Furthermore, line art images consist only of black and white lines, lacking the rich texture and intensity information of grayscale images, which makes colorization a harder task. Finally, in practice, when coloring a new anime video from line art images, only a few examples are available; this requires the model to be trainable with a small number of samples when applied to animation videos with a new color style.

In this paper, we propose a deep architecture to automatically color line art videos based on reference images. To avoid the error accumulation that arises when coloring a sequence of line art images frame by frame, we determine region correspondences between the target image and the reference images with a similarity-based color transform layer, which matches similar region features extracted by convolutional encoders from the target and reference images, and uses these matches to transform local color information from the references to the target. Then, to ensure global color consistency, the Adaptive Instance Normalization (AdaIN) [13] parameters are learned from embeddings that describe the global color style of the reference images. To our knowledge, this is the first use of such a combined global and local network architecture for line art image coloring. In addition, to keep the coloring result of the target line art image consistent with the order of the reference images, we use 3D convolution in the refinement network to achieve temporal stability. Moreover, our model can achieve even better coloring results by fine-tuning its parameters with only a small number of samples when dealing with animations of a new color style.

2 Related Work

2.1 Sketch colorization without references

Early research on line drawing colorization [28, 32, 19] allows the user to specify colors with brushes and then propagates those colors to similar areas. The range of propagation can be determined by region similarity or specified by the user. Qu et al. [28] use a level-set method to find similar regions and propagate users' scribbles to those regions. Orzan et al. [27] require the user to set the gradient range of curves to control the extent of their scribbles; given a set of diffusion curves as constraints, the final image is generated by solving a Poisson equation. These methods require extensive manual interaction to achieve the desired coloring results. With the development of deep learning, researchers have applied it to automatic or interactive coloring [7, 34]. However, considerable manual interaction is still required to obtain reliable coloring results.

Fig. 1: Comparison of different line art image extraction methods. (a) original color image; (b) Canny  [4]; (c) XDoG  [37]; (d) Coherent Line Drawing  [16]; (e) SketchKeras  [23]; (f) distance field map from SketchKeras results.

2.2 Sketch colorization with references

To reduce manual workload, colorization methods using reference images have been proposed to achieve a specified color style for the target sketch. Sato et al. [29] represent the relationships between regions of line art images with a graph structure and solve the matching problem through quadratic programming; however, it is difficult to accurately segment complex line art images. Deep learning methods have been proposed to avoid the need for accurate segmentation. Zhang et al. [39] extract VGG features from the reference image as its description, but their results have blurred object boundaries and mixed colors, probably due to the differing characteristics of line art and natural images. To refine the results further, Zhang et al. [40] divide line art image colorization into a drafting stage and a refinement stage, where users can input manual hints or provide a reference image to control the color style of the results [24]. However, these methods still require user interaction to achieve satisfactory coloring results. Hensman et al. [10] use cGANs (conditional Generative Adversarial Networks) to colorize grayscale images with little need for user refinement, but the method only learns relationships between grayscale and color images, not line art images. In addition, Liao et al. [21] use the PatchMatch [2] algorithm to match high-dimensional features extracted from the reference and target images, realizing style conversion between image pairs. It can be used for line art image coloring, but it does not match images with different global structures very well.

Fig. 2: The overall structure of our network. To produce a color image, the color transform network combines the style latent features extracted from the reference images (colored key frames) with the content latent features extracted from the input image to be colored (an in-between frame). The temporal constraint network then constrains the colored image to be temporally consistent with the reference images.

2.3 Video colorization

In video coloring research, animation coloring is much less explored than natural video coloring. Work on coloring normal grayscale videos [20, 14, 38, 18] learns temporal color consistency by calculating optical flow. However, animated video frames do not in general maintain pixel-level continuity, so optical flow algorithms fail to produce good results. Sýkora et al. [31] propose a method that uses path-pasting to colorize black-and-white cartoons for continuous animation; however, when the given sketch images contain discontinuous lines, path-pasting struggles to find accurate correspondences. Automatic Temporally Coherent Video Colorization (TCVC) [33] feeds the previous colored frame into the generator as a condition for the current line art image, and feeds both the previous and current colored frames into the discriminator to enforce temporal coherence. However, this method causes error propagation in the generated coloring results, as errors introduced in one frame affect the coloring of all future frames. Instead, we use region matching with reference images to find the color features of the target image. Our method can thus process long sequences of line art images without color error accumulation.

3 Method

In animation production, coloring the in-between line art frames according to key frames drawn and colored by the original artists is a necessary step. To automate this process, we design a generative network structure. Our first design aim is to maintain local color consistency by matching corresponding regions in the reference images: we expect the model to learn the transformation of local color information from the reference images to the target line art image based on the matching of local features, which is implemented as a similarity-based color transform layer after the convolutional feature encoding stage. However, due to the large deformations of anime line art images, accurate colors cannot be obtained simply by matching between line art images. We therefore also expect the network to control the color style globally and to correct coloring errors caused by line art similarity. To ensure that the coloring style is globally consistent with the reference images, the reference style is represented by embeddings extracted by a sub-module, the Embedder, which controls the global style of the generated color image. Finally, to keep the coloring of the target line drawing temporally consistent with the reference images, we add a 3D convolutional network to control temporal stability.

3.1 Data Preparation

Since there is no public dataset for evaluating line art animation coloring, we built one to train and evaluate our model. We collect a set of animation videos and divide them into shots using the color difference between successive frames; the line art images corresponding to all color frames of each shot are then extracted. See Section 4.1 for details.

To obtain high-quality line art images from color animations, we tried various line extraction methods. Line art images extracted by the traditional Canny edge detection algorithm [4] are quite different from the line drawings produced by cartoonists. Other extractors, such as the XDoG edge extraction operator [37] and the Coherent Line Drawing method [16], are too sensitive to user-specified parameters. SketchKeras [23] is a deep line drafting model trained not only to adapt to color images of varying quality but also to extract lines for the important details. We therefore use SketchKeras to generate line art images: it extracts the best lines overall, captures rich details, and is closest to what cartoonists draw. Due to the sparsity of lines in line art images, and inspired by SketchyGAN [6], we also convert line art images into distance field maps to improve matching between images when training our model. Comparisons of the different methods are shown in Fig. 1.

3.2 Overall Network Architecture

Fig. 3: Similarity-based color transform layer. It first extracts feature maps from the input distance field maps of the target and reference images through the encoder. A similarity map is then calculated and used to transform the color features of the reference images to those of the target image. Finally, the matched features and the color features of the reference images are dynamically combined to obtain the color feature map of the target image.

Our network consists of two sub-networks: a color transform network and a temporal constraint network. The color transform network consists of a conditional generator and an adversarial discriminator. Inspired by the image-to-image translation framework [22], our generator learns global style information through the Embedder and provides local color features through the similarity-based color transform layer. The generator takes a target line art image, the target distance field map, and two reference images (the line art, distance field map, and color image of each) as input, and produces the target color image, which is the preliminary coloring result of the color transform network. The subscripts 0 and 1 denote the two reference images, corresponding to the beginning and end of the video sequence, respectively.

Figure 2 shows the architecture of the color transform network, which is mainly composed of six parts: the encoders, a similarity-based color transform layer, a group of middle residual blocks, a style information Embedder, a decoder, and a discriminator. First, we use the encoders to extract feature maps from the target line art image, the target distance field map, the reference distance field maps, and the reference color images. Three identically constructed encoders extract the features of line art images, distance field maps, and color images, respectively. The extracted feature maps of the distance field maps and color images are then fed into the similarity-based color transform layer to obtain the local color features of the target line art image, which provide a local colorization for the target image. After concatenating the line art images and the reference color images, we also feed them separately into an Embedder module [22] to obtain intermediate latent vectors for the reference images, and then average these vectors to obtain the final style embedding vector (SEV). The SEV is used to adaptively compute the affine transformation parameters of the adaptive instance normalization (AdaIN) residual blocks [13] via a two-layer fully connected network, which learns the global color information for the target image. Then, with the SEV controlling the AdaIN parameters, the output features of the similarity-based color transform layer (capturing similarity matching between the target and references) and of the line art encoder (capturing local image information of the target) are summed to form the input to the middle residual blocks, whose output is fed to the decoder to generate the final target color image. The design of our color transform network thus ensures that the coloring process considers both the local similarity to the reference image regions and the global color style of the references.
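The AdaIN step described above can be sketched as follows: a minimal NumPy illustration of AdaIN plus a two-layer fully connected mapping from the style embedding vector to the affine parameters. All shapes, names, and the ReLU choice inside the MLP are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def adain(content, gamma, beta, eps=1e-5):
    """Adaptive Instance Normalization: normalize each channel of the content
    feature map, then re-scale and shift it with style-derived parameters.
    content: (C, H, W); gamma, beta: (C,)."""
    mu = content.mean(axis=(1, 2), keepdims=True)
    sigma = content.std(axis=(1, 2), keepdims=True)
    normalized = (content - mu) / (sigma + eps)
    return gamma[:, None, None] * normalized + beta[:, None, None]

def style_to_adain_params(sev, w1, b1, w2, b2):
    """Hypothetical two-layer fully connected network mapping the style
    embedding vector (SEV) to per-channel (gamma, beta) AdaIN parameters."""
    hidden = np.maximum(sev @ w1 + b1, 0.0)  # ReLU hidden layer
    params = hidden @ w2 + b2                # (2*C,) -> split into gamma, beta
    gamma, beta = np.split(params, 2)
    return gamma, beta
```

Because the normalization statistics come from the content while gamma and beta come from the SEV, the same residual blocks can render one target sketch in the global color style of any set of references.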

The color transform network discriminator takes as input a pair consisting of a line art image and a corresponding color image, either a real color image from the training set or one produced by the generator. It is trained on a binary classification task to determine whether the color image is real or fake (generated), in an adversarial way that pushes the generated images to be indistinguishable from real color animation images.

To improve the continuity of the coloring results over a line art video sequence while avoiding error accumulation, we apply a 3D convolutional generative adversarial network to learn the temporal relationship between the target coloring result and the reference images. To reduce the training time and parameter count of 3D convolution, we use the learnable gated temporal shift module (LGTSM) [5], based on the temporal shift module (TSM), which achieves the effect of 3D convolution using 2D convolutions with comparable performance. We input the reference images and the target image (as pairs of line art and corresponding color images) into the generator, which produces the final coloring result of the target image. The discriminator is trained on a binary classification task to determine whether the color images are real or fake.
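To illustrate the idea behind TSM, on which LGTSM builds, the sketch below shifts a fraction of the channels one step forward and backward along the time axis so that a following 2D convolution can mix information across frames. This is the plain, ungated shift; LGTSM additionally learns gating weights, which are omitted here:

```python
import numpy as np

def temporal_shift(x, fold_div=8):
    """Plain temporal shift over a (T, C, H, W) feature tensor: 1/fold_div of
    the channels are pulled from the next frame, another 1/fold_div from the
    previous frame, and the rest are left unchanged."""
    t, c, h, w = x.shape
    fold = c // fold_div
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                   # pull from the next frame
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]   # pull from the previous frame
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels unchanged
    return out
```

After the shift, an ordinary per-frame 2D convolution sees channels from three neighboring frames at once, approximating a 3D convolution at a fraction of the cost.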

3.3 Similarity-based Color Transform Layer

To cope with large changes between adjacent sketch frames, existing work [33] learns the matching of adjacent frames by increasing the number of intermediate residual blocks in the generator network. However, our method needs to match references with the target line art drawing, which can exhibit even more significant differences. We therefore utilize the global relationships used in non-local neural networks [35] and video colorization [38]. Unlike grayscale image feature matching [38], line art image feature matching cannot directly produce good results. We want the network to learn the credibility of the matching results by itself, so it learns masks over the different reference image features and adaptively selects the positions in the reference images with the highest matching probability. In addition, due to the large deformations of line art images, accurate matching cannot be obtained from high-dimensional feature matching alone. We therefore dynamically combine the matching features with the reference image color features: the color features provide accurate colors in corresponding areas, while the matching features provide reference colors everywhere.

The overall structure of the similarity-based color transform layer is shown in Fig. 3. We calculate the similarity of the high-dimensional features extracted from the target line art image and the reference line art images at a global scale. To make matching more effective, all input line art images are first converted into distance field maps, from which feature maps are extracted by the distance-field encoder; similarly, the color information of the reference images is extracted through the color encoder. The feature maps are reduced relative to the input image size to keep the computational cost of the similarity calculation reasonable. The module uses two learned internal masks: one selects new features from the reference color image features, and the other combines the matching features with the selected reference color image features, yielding the final similarity color output feature.

The matching feature is obtained through feature matching. To reduce the complexity of global feature map matching, we apply convolutions to reduce the number of channels of the feature maps and reshape them for matrix operations. A similarity map measuring the similarity between pairs of feature map locations from the target and each reference image is obtained through matrix multiplication; the similarity maps over all references are concatenated, and a softmax is applied to form the matching matrix. Similarly, the color features of the reference images are passed through convolutions to reduce their channels, then reshaped and concatenated to form the reference color matrix. Multiplying the matching matrix by the reference color matrix transforms the local color information from the reference images to the target based on the similarity of the feature maps.
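The matching step above amounts to softmax-normalized attention from target feature locations to reference feature locations. A minimal NumPy sketch with flattened feature maps and illustrative shapes follows; the learned masks and the channel-reduction convolutions are omitted:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def similarity_color_transform(f_target, f_refs, c_refs):
    """Non-local matching sketch. f_target: (HW, K) target sketch features;
    f_refs: list of (HW, K) reference sketch features; c_refs: list of
    (HW, C) reference color features. Each target location receives a
    softmax-weighted blend of reference colors, weighted by feature
    similarity, as in Fig. 3."""
    sims = [f_target @ f.T for f in f_refs]          # (HW, HW) per reference
    match = softmax(np.concatenate(sims, axis=1))    # rows sum to 1 over all refs
    colors = np.concatenate(c_refs, axis=0)          # (n_refs * HW, C)
    return match @ colors                            # (HW, C) matched colors
```

Concatenating the similarity maps before the softmax lets every target location compete over all locations of all references at once, rather than picking a best match per reference.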

Fig. 4: Network model component analysis.
Fig. 5: Intermediate results. (a) Reference color image 1, (b) Reference color image 2, (c) Latent decoder image , (d) Latent decoder image , (e) No temporal color image, (f) Final color image, (g) Ground truth.
Fig. 6: Sim module reconstruction results. The first row shows the network generation results, and the second row shows the Sim module reconstruction results. (a) all convolutional layers use spectral normalization; (b) no convolutional layers use spectral normalization; (c) only the encoders and decoder use spectral normalization.
Fig. 7: Comparison with existing methods. (a) input sketch image; (b) the reference image; (c) results of cGAN [10]; (d) results of Deep Image Analogy [21]; (e) results of Two-stage method [40]; (f) results of TCVC [33]; (g) our results; (h) Ground truth.

3.4 Loss

To enforce the color coherence between similar regions and penalize the color style difference with the reference images, we define our loss function based on the following loss terms.

Meanwhile, in the temporal constraint network, the network output is split into multiple images and the following loss terms are applied to each; the loss values computed over the multiple images are then averaged to obtain the final loss value.

L1 loss. To encourage the generated results to be similar to the ground truth, we use a pixel-level L1 loss measuring the difference between the network output and the ground truth color image:


Perceptual loss. To ensure that the generated results are perceptually consistent with the ground truth images, we use the perceptual loss introduced in [15], computed on feature maps extracted from the VGG19 network [30] at layers ReLU1_1, ReLU2_1, ReLU3_1, ReLU4_1, and ReLU5_1, with each term normalized by the number of elements in the corresponding layer's feature map.

Style loss. Similar to image style conversion work [9, 36, 33], we calculate the similarity of the Gram matrices on the high-dimensional feature maps of the generated image and the ground truth to encourage the generated result to have the same style as the ground truth.


where the Gram matrix is computed from the input feature maps. We use the VGG19 network to extract image feature maps from the same layers as used for the perceptual loss.
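For reference, a Gram-matrix style loss over one feature layer can be sketched as below. The normalization constant and the use of a mean absolute difference between Gram matrices follow common practice and may differ in detail from the paper's exact formulation:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-by-channel inner
    products, normalized by the number of elements. Captures which feature
    channels co-activate, i.e. the 'style' of the layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def style_loss(feat_gen, feat_gt):
    """Mean absolute difference between the Gram matrices of the generated
    and ground-truth feature maps (one VGG layer; the full loss sums over
    the selected layers)."""
    return np.abs(gram_matrix(feat_gen) - gram_matrix(feat_gt)).mean()
```

Because the Gram matrix discards spatial arrangement, this term penalizes differences in overall color statistics and texture rather than pixel placement.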

Latent constraint loss. To improve the stability of the generated results, inspired by [39, 17], in addition to constraining the final output we introduce further constraints on intermediate results of the network. Specifically, we add multi-supervision constraints to the output of the similarity-based color transform layer and the output of the middle residual blocks.

To make the perceptual similarity easier to measure, these intermediate features first pass through latent decoders (each a single convolution layer) to produce 3-channel color images, whose similarity to the ground truth is measured with an L1 loss as follows:


Adversarial Loss. The adversarial loss promotes correct classification of real images and generated images.


Overall objective loss function. Combining all of the above loss terms together, we set the optimization goal for our model:


where the weights control the relative importance of the terms. Since the resulting style loss value is relatively small in our experiments, we set its weight large enough to make its contribution comparable with the GAN loss.

3.5 Implementation details

Our network consists of two sub-networks: the color transform network and the temporal constraint network.

The color transform network is composed of a generator network and a discriminator network. The generator network consists of the encoders, the similarity-based color transform layer, middle residual convolution blocks, a Decoder, and an Embedder. The encoders for line art images, distance field maps, and color images share the same network architecture: they are composed of convolution layers with ReLU activations and use instance normalization, since colorization should not be affected by other samples in the same batch. The middle residual blocks [13] use AdaIN [12] as the normalization layer, with ReLU activations. The Decoder is composed of convolutional layers, also with instance normalization and ReLU activations. Before each convolution in the Decoder, we use nearest-neighbor upsampling to enlarge the feature map along each spatial dimension; this eliminates the checkerboard artifacts in the generated results of GANs [26]. The Embedder consists of convolution layers followed by a mean operation along the sample axis: it maps multiple reference images to latent vectors and then averages them to get the final style latent vector. The affine transformation parameters are adaptively computed from the style latent vector by a two-layer fully connected network. The encoders and Decoder apply spectral normalization [25] to their convolutional layers to increase the stability of training. The discriminator network structure is similar to the Embedder, consisting of several convolution layers; except for the last convolution layer, the discriminator also applies spectral normalization [25], and it uses LeakyReLU activations.
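The upsample-then-convolve decoder trick and instance normalization can be sketched as below (NumPy, feature maps as (C, H, W); the convolution that follows the upsampling is omitted for brevity):

```python
import numpy as np

def nearest_upsample(x, scale=2):
    """Nearest-neighbor upsampling of a (C, H, W) feature map. Used before
    each decoder convolution instead of a transposed convolution, which
    avoids checkerboard artifacts [26]."""
    return x.repeat(scale, axis=1).repeat(scale, axis=2)

def instance_norm(x, eps=1e-5):
    """Per-channel instance normalization of a (C, H, W) feature map,
    independent of other samples in the batch, as used in the
    encoders and Decoder."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    return (x - mu) / (x.std(axis=(1, 2), keepdims=True) + eps)
```

Nearest-neighbor upsampling spreads each value over a uniform block, so the subsequent convolution sees evenly overlapping inputs, unlike a stride-2 transposed convolution whose uneven overlap produces the checkerboard pattern.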

The temporal constraint network is composed of a generator network and a patch discriminator network. The generator consists of a 3D gated convolutional Encoder, dilated 3D gated convolutional layers, and a 3D gated convolutional Decoder. Before each convolution in the Decoder, we also use nearest-neighbor upsampling to enlarge the feature map along each spatial dimension. The patch discriminator is composed of 3D convolutional layers. Spectral normalization is applied to both the generator and the discriminator to enhance training stability.

Fig. 8: Video sequence coloring result comparison. We compare our method with the cGAN-based method [10], Deep Image Analogy [21], the two-stage method [40], and TCVC [33] on a long sequence of coloring results. We show several frames from the long sequence at equal intervals.

4 Experiments

We evaluate the line art video coloring results on our line art colorization dataset. We show that our model outperforms other methods both quantitatively and qualitatively. In the following, we first introduce the data collection details. Then we analyze the effectiveness of the various components of the proposed method, and report the comparison between our method and the state-of-the-art methods both qualitatively and quantitatively.

Fig. 9: Comparison of coloring results with different numbers of reference images.

4.1 Data Collection and Training

We extract video frames from selected animations and extract the corresponding line art images to form our training dataset. For each frame, we calculate a 768-dimensional feature vector composed of the histograms of the R, G, and B channels. The difference between frames is measured by the mean squared error of these feature vectors, which is used to split the source animations into shots: when the difference between neighboring frames is greater than 200, they are considered to belong to different shots. To improve the quality of the data, we remove shots in which the mean squared errors between all pairs of frames are less than 10 (as they are too uniform), as well as shots shorter than 8 frames. We then filter out frames that are too dark or too faded in color. Finally, we obtain a total of 1,096 video sequences from 6 animations, with a total of 29,834 images; each video sequence has 27 frames on average. The 6 anime videos are: 1) Little Witch Academia; 2) Dragon Ball; 3) Amanchu; 4) SSSS.Gridman; 5) No.6; 6) Soul Eater. All the images are scaled to the same size.
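The shot-splitting procedure can be sketched directly from the description above. The function names are our own; the threshold of 200 matches the paper, though its effect implicitly depends on frame resolution:

```python
import numpy as np

def frame_feature(frame):
    """768-dim color histogram: 256 bins for each of the R, G, B channels.
    frame: (H, W, 3) uint8 image."""
    return np.concatenate([
        np.histogram(frame[..., ch], bins=256, range=(0, 256))[0]
        for ch in range(3)
    ]).astype(np.float64)

def split_shots(frames, threshold=200.0):
    """Split a frame sequence into shots: start a new shot whenever the mean
    squared error between successive histogram features exceeds the
    threshold. Returns a list of lists of frame indices."""
    feats = [frame_feature(f) for f in frames]
    shots, current = [], [0]
    for i in range(1, len(frames)):
        mse = np.mean((feats[i] - feats[i - 1]) ** 2)
        if mse > threshold:
            shots.append(current)
            current = []
        current.append(i)
    shots.append(current)
    return shots
```

The uniformity and minimum-length filters described above would then be applied to the resulting shot lists.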

To evaluate versatility, we use the data of Anime 1 (416 video sequences, 11,241 images) for training and testing the network: 50% of the Anime 1 data is used for training and the remaining 50% for testing. The other anime data is mainly used for testing, apart from a few sequences used for fine-tuning.

We use the Adam optimizer (β1 = 0.5, β2 = 0.999) with separate learning rates for the generator and the discriminator, and a batch size of 4. The color transform network is trained for 40 epochs and the temporal constraint network for 10 epochs. The experiment is performed on a computer with an Intel i7-6900K CPU and a GTX 1080Ti GPU; training takes about 2 days.

It takes 71ms to color a line art image.

Since each video sequence has a varying number of frames but always at least 8, during training we randomly extract 8 consecutive frames, using the first and last frames as reference images and a middle frame as the target image. When comparing with other methods at test time, the first frame of the sequence is usually selected as the reference image for coloring the other frames of the sequence.

Method            MSE     PSNR   SSIM  FID
No Sim            0.0220  17.27  0.77  48.37
No Emb            0.0149  19.92  0.86  26.97
No Line Art       0.0198  18.71  0.77  37.06
No Distance       0.0126  20.46  0.84  30.75
No Perc. Loss     0.0122  20.57  0.83  33.65
No Latent Loss    0.0125  20.51  0.84  30.86
No Style Loss     0.0123  20.56  0.84  34.41
No Temporal       0.0111  21.60  0.86  27.67
Full              0.0101  22.81  0.87  26.92

TABLE I: Ablation studies for different components on the coloring results, using mean MSE, PSNR, SSIM and FID.
Method            MSE     PSNR   SSIM  FID
cGAN [10]         0.0366  15.07  0.72  63.48
TCVC [33]         0.0426  14.95  0.73  50.75
Two-stage [40]    0.0352  14.91  0.65  42.08
Deep IA [21]      0.0478  15.36  0.67  38.22
Ours              0.0104  22.41  0.86  27.93

TABLE II: Quantitative comparison with [10, 21, 33, 40] using mean MSE, PSNR, SSIM and FID. Some examples of visual comparison are shown in Fig. 7.

4.2 Ablation Studies

We perform ablation studies to evaluate the contribution of each module and loss term. The quantitative results comparing our full pipeline against versions with one component disabled are reported in Table I, using the standard metrics MSE, PSNR, SSIM, and Fréchet Inception Distance (FID) [11]. They show that the similarity-based color transform layer (Sim), the Embedder (Emb), the use of distance field maps, the perceptual loss, the latent constraint loss, the style loss, and the temporal constraint network are all essential to the performance of our method. During testing, we use the first and last frames of each video sequence as reference images to test all intermediate frames. A visual comparison of an example is shown in Figure 4: the Sim module contributes greatly to the generation result, the Emb module constrains the result to conform to the style of the reference images (for example, the character's hair region in the figure), and the temporal constraint network makes the results more stable. To further illustrate the importance of the network structure, we show the reconstruction results of each network component in Figure 5. When there is large deformation between the target image and the reference images, the Sim module alone cannot learn accurate results, while the Emb module constrains the generated style well.

In experiments, we found that adding spectral normalization to the networks improves training stability, but applying it to the Embedder of the color transform network causes the generated colors to be dim rather than bright. Figure 6 shows the similarity-based (Sim) reconstruction and the network generation results in three cases: spectral normalization applied to the entire color transform network, to none of it, and to everything except the Embedder. As Figure 6 shows, applying spectral normalization to the Embedder yields dim final results, while omitting it entirely prevents the network from learning specific color information.
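For intuition, spectral normalization [25] divides each weight matrix by its largest singular value, estimated by power iteration. The numpy sketch below illustrates the estimate on a toy matrix; in practice this is applied per layer inside the training framework (e.g. PyTorch provides `torch.nn.utils.spectral_norm`), not hand-rolled like this.

```python
import numpy as np

def spectral_norm(W, n_iter=30):
    """Estimate the largest singular value of W by power iteration,
    the quantity used to rescale weights in spectral normalization."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    return float(u @ W @ v)

W = np.diag([3.0, 1.0, 0.5])   # toy weight matrix, largest singular value 3
sigma = spectral_norm(W)
W_sn = W / sigma               # normalized weight has spectral norm ~1
print(round(sigma, 3))         # 3.0
```

Capping the spectral norm at 1 Lipschitz-constrains each layer, which stabilizes GAN training but, as observed above, can over-restrict the Embedder's style statistics.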

Method              MSE      PSNR    SSIM   FID
Basic [33] -        0.0523   13.32   0.63   78.64
Fine-tuned [33] -   0.0282   16.17   0.71   72.12
Basic ours -        0.0132   19.57   0.85   66.72
Fine-tuned ours -   0.0073   23.03   0.89   33.71
Basic [33] +        0.0951   10.71   0.56   86.04
Fine-tuned [33] +   0.0584   12.94   0.59   77.81
Basic ours +        0.0197   18.06   0.82   68.23
Fine-tuned ours +   0.0138   20.50   0.86   37.79

TABLE III: Quantitative evaluation with [33] using mean MSE, PSNR, SSIM and FID on the animation Dragon Ball. “-” indicates a short sequence of length 8 is used for testing; “+” indicates the entire sequence is tested, regardless of its length.

4.3 Comparisons of Line Art Image Colorization

We compare our approach with the state-of-the-art line art image colorization methods: cGAN-based [10], Deep Image Analogy (IA) [21], the Two-stage method [40], and TCVC [33]. For fairness, we train all methods on the training set described in Sec. 4.1 and evaluate them on the same test set, using only one reference image per sequence since the other methods cannot take multiple references. When testing our method, both reference inputs are set to the first frame of the video sequence. Figure 7 shows the coloring results of all methods. We further use several standard metrics to quantitatively evaluate the coloring results, as presented in Table II. The experimental results show that our method not only produces better coloring results but also maintains style consistency with the reference image.

Anime   Method      MSE      PSNR    SSIM   FID
3       TCVC [33]   0.0345   15.23   0.66   76.34
3       Ours        0.0105   20.91   0.83   51.99
4       TCVC [33]   0.0412   14.52   0.66   99.13
4       Ours        0.0090   22.16   0.85   56.30
5       TCVC [33]   0.0455   14.67   0.59   95.68
5       Ours        0.0095   21.91   0.82   58.27
6       TCVC [33]   0.0585   13.14   0.62   91.38
6       Ours        0.0113   20.97   0.83   54.52

TABLE IV: Quantitative evaluation on different anime videos, using 32 video sequences to fine-tune the network parameters.

4.4 Comparisons on Video Colorization

We compare our method with other colorization methods on animation coloring, as shown in Figure 8. We select 32 sequences from Anime 2 to fine-tune the models of all methods and then test them on the remaining sequences. When coloring each sequence, the first frame is treated as the reference image, and the coloring results of the remaining frames are evaluated. For the cGAN-based [10], Deep Image Analogy [21], and Two-stage generation [40] methods, the selected reference image is used to color the remaining images in the sequence. For TCVC [33], the selected first frame is used as the condition input to color the second frame, and each subsequent frame uses the generated result of the previous frame as its condition input. As a result, TCVC [33] keeps propagating color errors to subsequent frames, leading to error accumulation, while our method effectively avoids this problem. The cGAN-based [10] and Deep Image Analogy [21] methods cannot handle line art colorization with large differences in shape and motion, and the results of Two-stage generation [40] do not maintain consistency with the reference images well.
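The error-accumulation argument can be illustrated with a deliberately simplified numeric model (not the actual networks): assume each colorization step adds a small fresh error on top of the error already present in its conditioning input. Conditioning on the previous output compounds the error linearly, while conditioning on a fixed reference keeps it bounded.

```python
def colorize(source_error, step_noise=0.05):
    """Toy stand-in for one colorization step: the output error is the
    conditioning input's error plus fresh per-frame noise (assumed constant)."""
    return source_error + step_noise

def sequential(n_frames):
    """TCVC-style inference: each frame is conditioned on the previous output."""
    err, errors = 0.0, []
    for _ in range(n_frames):
        err = colorize(err)
        errors.append(err)
    return errors

def reference_based(n_frames):
    """Our setting: every frame is conditioned on the (error-free) reference."""
    return [colorize(0.0) for _ in range(n_frames)]

print([round(e, 2) for e in sequential(4)])       # [0.05, 0.1, 0.15, 0.2]
print([round(e, 2) for e in reference_based(4)])  # [0.05, 0.05, 0.05, 0.05]
```

The growing first list mirrors the drift visible in TCVC's later frames, while the flat second list mirrors reference-based inference.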

Our model trained on one animation also generalizes well to new animations. When the color styles differ, our method benefits from fine-tuning on a small number of sequences from the new animation. To demonstrate this, we apply the network trained on Anime 1 to color Anime 2 (basic model), optionally fine-tuning the network with 32 sequences from Anime 2 (fine-tuned model). We compare the results with TCVC [33] under the same settings. As shown in Table III, our method achieves better coloring results after fine-tuning with only a small amount of new animation data. Table IV shows quantitative results of our method and TCVC on four other anime videos, with models fine-tuned on 32 video sequences. This demonstrates the strong generalizability of our network to unseen animation styles: our method outperforms TCVC [33] by a large margin in all settings. For a fair comparison, our method uses only the first frame as the reference image in these tests.

Fig. 10: Comparison of coloring results after fine-tuning the network with different numbers of input sequences.

4.5 More Discussions

Number of reference images N. We test the effect of feeding different numbers of reference images when coloring long sequences. We divide a video sequence into multiple segments based on the reference images, select the frames at the two ends of each segment as references, and test on the middle frame of each segment. We set N to 1, 2, 3, 4, and 5, respectively. Our results, shown in Figure 9, demonstrate that the coloring results improve with an increasing number of reference images. Quantitative performance is shown in Table V: more reference images generally lead to better results, although when N exceeds 3 the improvement becomes less obvious.
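The segment-based protocol above can be sketched as follows. The exact placement of the reference frames is not spelled out in the text, so this sketch assumes they are spaced evenly over the sequence; `split_segments` and its output format are illustrative names, not the paper's code.

```python
def split_segments(n_frames, n_segments):
    """Divide frame indices 0..n_frames-1 into contiguous segments.
    Each segment's two end frames serve as references, and its middle
    frame is the one colored and evaluated."""
    # evenly spaced segment boundaries (assumption, see lead-in)
    bounds = [round(i * (n_frames - 1) / n_segments)
              for i in range(n_segments + 1)]
    plan = []
    for a, b in zip(bounds, bounds[1:]):
        plan.append({"refs": (a, b), "test": (a + b) // 2})
    return plan

print(split_segments(21, 2))
# [{'refs': (0, 10), 'test': 5}, {'refs': (10, 20), 'test': 15}]
```

With one segment this reduces to the first/last-frame protocol used in the ablation studies; more segments place references closer to every tested frame, which is why accuracy improves with N.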

Number of sequences for fine-tuning. For new animations whose style is similar to the training dataset, we obtain high-quality coloring results directly. For anime with a different style, however, a small amount of data is needed to fine-tune the network to learn the new style. We test different numbers of sequences (Seq-Num) used to fine-tune the network parameters, setting Seq-Num to 8, 16, 24, 32, and 40, respectively. The fine-tuning phase takes about 2 hours of training for 30 epochs. As shown in Figure 10 and the quantitative comparison in Table VI, the network generally produces better coloring results with increasing Seq-Num, and the results stabilize when Seq-Num exceeds 32. Thus, to balance coloring quality against the amount of required training data, we set Seq-Num to 32 by default.

N   MSE      PSNR    SSIM   FID
1   0.0170   19.87   0.83   31.09
2   0.0111   21.59   0.86   27.67
3   0.0084   22.76   0.88   24.18
4   0.0069   23.44   0.89   22.20
5   0.0062   23.83   0.89   21.79

TABLE V: Quantitative comparison of different numbers of reference images N on the colorization of video sequences.
Seq-Num   MSE      PSNR    SSIM   FID
8         0.0204   18.19   0.78   85.81
16        0.0157   19.44   0.80   78.42
24        0.0147   19.91   0.81   77.12
32        0.0127   20.21   0.83   71.84
40        0.0128   20.55   0.83   70.74

TABLE VI: Quantitative evaluation of fine-tuning with different numbers of sequences.

5 Conclusions

In this paper, we propose a new reference-based method for line art video colorization. The architecture exploits both local similarity and global color style for improved results. Our method does not color consecutive frames by sequential propagation, which avoids error accumulation. We also collect a dataset for evaluating line art colorization. Extensive evaluation shows that our method not only performs well when coloring the same video used for training, but also generalizes well to new anime videos, where better results are obtained by fine-tuning with a small amount of data.


  • [1] (2018) 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. External Links: Link Cited by: 25.
  • [2] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman (2009-07) PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28 (3), pp. 24:1–24:11. External Links: ISSN 0730-0301, Link, Document Cited by: §2.2.
  • [3] Y. Bengio and Y. LeCun (Eds.) (2015) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. External Links: Link Cited by: 30.
  • [4] J. Canny (1986) A computational approach to edge detection. IEEE Transactions on pattern analysis and machine intelligence (6), pp. 679–698. Cited by: Fig. 1, §3.1.
  • [5] Y. Chang, Z. Y. Liu, K. Lee, and W. Hsu (2019) Free-form video inpainting with 3d gated convolution and temporal patchgan. In Proceedings of the International Conference on Computer Vision (ICCV). Cited by: §3.2.
  • [6] W. Chen and J. Hays (2018) SketchyGAN: towards diverse and realistic sketch to image synthesis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9416–9425. Cited by: §3.1.
  • [7] Y. Ci, X. Ma, Z. Wang, H. Li, and Z. Luo (2018) User-guided deep anime line art colorization with conditional adversarial networks. In Proceedings of the 26th ACM International Conference on Multimedia, MM ’18, New York, NY, USA, pp. 1536–1544. External Links: ISBN 978-1-4503-5665-7, Link, Document Cited by: §2.1.
  • [8] C. Furusawa, K. Hiroshiba, K. Ogaki, and Y. Odagiri (2017) Comicolorization: semi-automatic manga colorization. In SIGGRAPH Asia 2017 Technical Briefs, SA ’17, New York, NY, USA, pp. 12:1–12:4. External Links: ISBN 978-1-4503-5406-6, Link, Document Cited by: §1.
  • [9] L. A. Gatys, A. S. Ecker, and M. Bethge (2016) Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2414–2423. Cited by: §3.4.
  • [10] P. Hensman and K. Aizawa (2017) CGAN-based manga colorization using a single training image. CoRR abs/1706.06918. External Links: Link, 1706.06918 Cited by: §1, §2.2, Fig. 7, Fig. 8, §4.3, §4.4, TABLE II.
  • [11] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter (2017) GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637. Cited by: §4.2.
  • [12] X. Huang and S. Belongie (2017) Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510. Cited by: §3.5.
  • [13] X. Huang, M. Liu, S. Belongie, and J. Kautz (2018) Multimodal unsupervised image-to-image translation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 172–189. Cited by: §1, §3.2, §3.5.
  • [14] V. Jampani, R. Gadde, and P. V. Gehler (2017) Video propagation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 451–461. Cited by: §2.3.
  • [15] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711. Cited by: §3.4.
  • [16] H. Kang, S. Lee, and C. K. Chui (2007) Coherent line drawing. In Proceedings of the 5th International Symposium on Non-photorealistic Animation and Rendering, NPAR ’07, New York, NY, USA, pp. 43–50. External Links: ISBN 978-1-59593-624-0, Link, Document Cited by: Fig. 1, §3.1.
  • [17] H. Kim, H. Y. Jhoo, E. Park, and S. Yoo (2019) Tag2Pix: line art colorization using text tag with secat and changing loss. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9056–9065. Cited by: §3.4.
  • [18] C. Lei and Q. Chen (2019-06) Fully automatic video colorization with self-regularization and diversity. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §2.3.
  • [19] A. Levin, D. Lischinski, and Y. Weiss (2004-08) Colorization using optimization. ACM Trans. Graph. 23 (3), pp. 689–694. External Links: ISSN 0730-0301, Link, Document Cited by: §1, §2.1.
  • [20] A. Levin, D. Lischinski, and Y. Weiss (2004-08) Colorization using optimization. ACM Trans. Graph. 23 (3), pp. 689–694. External Links: ISSN 0730-0301, Link, Document Cited by: §2.3.
  • [21] J. Liao, Y. Yao, L. Yuan, G. Hua, and S. B. Kang (2017-07) Visual attribute transfer through deep image analogy. ACM Trans. Graph. 36 (4), pp. 120:1–120:15. External Links: ISSN 0730-0301, Link, Document Cited by: §2.2, Fig. 7, Fig. 8, §4.3, §4.4, TABLE II.
  • [22] M. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz (2019) Few-shot unsupervised image-to-image translation. CoRR abs/1905.01723. External Links: Link, 1905.01723 Cited by: §3.2, §3.2.
  • [23] lllyasviel (2018) SketchKeras. Note: April 4, 2019 Cited by: Fig. 1, §3.1.
  • [24] lllyasviel (2018) Style2paints. Note: April 3, 2019 Cited by: §2.2.
  • [25] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. See 1, External Links: Link Cited by: §3.5.
  • [26] A. Odena, V. Dumoulin, and C. Olah (2016) Deconvolution and checkerboard artifacts. Distill. External Links: Link, Document Cited by: §3.5.
  • [27] A. Orzan, A. Bousseau, P. Barla, H. Winnemöller, J. Thollot, and D. Salesin (2013-07) Diffusion curves: a vector representation for smooth-shaded images. Commun. ACM 56 (7), pp. 101–108. External Links: ISSN 0001-0782, Link, Document Cited by: §2.1.
  • [28] Y. Qu, T. Wong, and P. Heng (2006-07) Manga colorization. ACM Trans. Graph. 25 (3), pp. 1214–1220. External Links: ISSN 0730-0301, Link, Document Cited by: §1, §2.1.
  • [29] K. Sato, Y. Matsui, T. Yamasaki, and K. Aizawa (2014) Reference-based manga colorization by graph correspondence using quadratic programming. In SIGGRAPH Asia 2014 Technical Briefs, SA ’14, New York, NY, USA, pp. 15:1–15:4. External Links: ISBN 978-1-4503-2895-1, Link, Document Cited by: §2.2.
  • [30] K. Simonyan and A. Zisserman (2015) Very deep convolutional networks for large-scale image recognition. See 3rd international conference on learning representations, ICLR 2015, san diego, ca, usa, may 7-9, 2015, conference track proceedings, Bengio and LeCun, External Links: Link, 1409.1556 Cited by: §3.4.
  • [31] D. Sýkora, J. Buriánek, and J. Žára (2004) Unsupervised colorization of black-and-white cartoons. In Proceedings of the 3rd International Symposium on Non-photorealistic Animation and Rendering, NPAR ’04, New York, NY, USA, pp. 121–127. External Links: ISBN 1-58113-887-3, Link, Document Cited by: §2.3.
  • [32] D. Sýkora, J. Dingliana, and S. Collins (2009) LazyBrush: flexible painting tool for hand-drawn cartoons. Computer Graphics Forum 28 (2), pp. 599–608. Cited by: §1, §2.1.
  • [33] H. Thasarathan, K. Nazeri, and M. Ebrahimi (2019) Automatic temporally coherent video colorization. CoRR abs/1904.09527. External Links: Link, 1904.09527 Cited by: §1, §2.3, Fig. 7, Fig. 8, §3.3, §3.4, §4.3, §4.4, §4.4, TABLE II, TABLE III, TABLE IV.
  • [34] D. Varga, C. A. Szabó, and T. Szirányi (2017) Automatic cartoon colorization based on convolutional neural network. In Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, CBMI ’17, New York, NY, USA, pp. 28:1–28:6. External Links: ISBN 978-1-4503-5333-5, Link, Document Cited by: §2.1.
  • [35] X. Wang, R. Girshick, A. Gupta, and K. He (2018) Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7794–7803. Cited by: §3.3.
  • [36] P. Wilmot, E. Risser, and C. Barnes (2017) Stable and controllable neural texture synthesis and style transfer using histogram losses. CoRR abs/1701.08893. External Links: Link, 1701.08893 Cited by: §3.4.
  • [37] H. WinnemöLler, J. E. Kyprianidis, and S. C. Olsen (2012) XDoG: an extended difference-of-gaussians compendium including advanced image stylization. Computers & Graphics 36 (6), pp. 740–753. Cited by: Fig. 1, §3.1.
  • [38] B. Zhang, M. He, J. Liao, P. V. Sander, L. Yuan, A. Bermak, and D. Chen (2019) Deep exemplar-based video colorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8052–8061. Cited by: §2.3, §3.3.
  • [39] L. Zhang, Y. Ji, X. Lin, and C. Liu (2017) Style transfer for anime sketches with enhanced residual U-Net and auxiliary classifier GAN. In 2017 4th IAPR Asian Conference on Pattern Recognition (ACPR), pp. 506–511. Cited by: §1, §2.2, §3.4.
  • [40] L. Zhang, C. Li, T. Wong, Y. Ji, and C. Liu (2018-12) Two-stage sketch colorization. ACM Trans. Graph. 37 (6), pp. 261:1–261:14. External Links: ISSN 0730-0301, Link, Document Cited by: §2.2, Fig. 7, Fig. 8, §4.3, §4.4, TABLE II.