Learning Diverse Image Colorization

12/06/2016 ∙ by Aditya Deshpande, et al.

Colorization is an ambiguous problem, with multiple viable colorizations for a single grey-level image. However, previous methods only produce the single most probable colorization. Our goal is to model the diversity intrinsic to the problem of colorization and produce multiple colorizations that display long-scale spatial co-ordination. We learn a low-dimensional embedding of color fields using a variational autoencoder (VAE). We construct loss terms for the VAE decoder that avoid blurry outputs and take into account the uneven distribution of pixel colors. Finally, we build a conditional model for the multi-modal distribution between the grey-level image and the color field embeddings. Samples from this conditional model result in diverse colorizations. We demonstrate that our method obtains better diverse colorizations than a standard conditional variational autoencoder (CVAE) model, as well as a recently proposed conditional generative adversarial network (cGAN).


Code Repositories

  • divcolor — implementation of "Learning Diverse Image Colorization" (CVPR 2017)
  • pytorch_divcolor — diverse colorization in PyTorch
1 Introduction

In colorization, we predict the 2-channel color field for an input grey-level image. This is an inherently ill-posed and ambiguous problem: multiple different colorizations are possible for a single grey-level image. For example, different shades of blue for the sky, different colors for a building, different skin tones for a person, and other stark or subtle color changes are all acceptable colorizations. In this paper, our goal is to generate multiple colorizations for a single grey-level image that are diverse and, at the same time, each realistic. This is a demanding task, because color fields are not only cued to the local appearance but also have a long-scale spatial structure. Sampling colors independently from per-pixel distributions makes the output spatially incoherent and does not generate a realistic color field (see Figure 2). Therefore, we need a method that generates multiple colorizations while balancing per-pixel color estimates and long-scale spatial co-ordination. This paradigm is common to many ambiguous vision tasks where multiple predictions are desired, viz. generating motion fields from a static image [25], synthesizing future frames [27], generating time-lapse videos [31], interactive segmentation and pose-estimation [1], etc.

A natural approach to the problem is to learn a conditional model P(C|G) for a color field C conditioned on the input grey-level image G. We can then draw samples from this conditional model to obtain diverse colorizations. Building this conditional model explicitly is difficult, because C and G are high-dimensional spaces. The distribution of natural color fields and grey-level features in these high-dimensional spaces is therefore scattered, and does not expose the sharing required to learn a multi-modal conditional model. We therefore seek feature representations of C and G that allow us to build a conditional model.

Our strategy is to represent C by its low-dimensional latent variable embedding z. This embedding is learned by a generative model, the Variational Autoencoder (VAE) [14] (see Step 1 of Figure 1). Next, we leverage a Mixture Density Network (MDN) to learn a multi-modal conditional model P(z|G) (see Step 2 of Figure 1). Our feature representation for the grey-level image G comprises the features from the conv-7 layer of a colorization CNN [30]. These features encode spatial structure and per-pixel affinity to colors. Finally, at test time we sample multiple z from the conditional model and use the VAE decoder to obtain the corresponding colorization for each z (see Figure 1). Note that our low-dimensional embedding encodes the spatial structure of color fields, and we obtain spatially coherent diverse colorizations by sampling the conditional model.
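To make this pipeline concrete, the following is a minimal, illustrative sketch (in PyTorch) of the test-time sampling loop; the module names mdn and vae_decoder, the tensor shapes, and the spherical standard deviation sigma are assumptions for illustration, not the authors' released code.

```python
import torch

def diverse_colorizations(mdn, vae_decoder, grey_feats, num_samples=5, sigma=0.1):
    """Sample several embeddings z from the conditional model p(z|G) and decode each."""
    pi, mu = mdn(grey_feats)                          # mixture weights (M,), component means (M, d)
    colorizations = []
    for _ in range(num_samples):
        k = torch.multinomial(pi, 1).item()           # choose a mixture component
        z = mu[k] + sigma * torch.randn_like(mu[k])   # sample from that spherical Gaussian
        ab = vae_decoder(z.unsqueeze(0))              # decode to a 2-channel 'ab' color field
        colorizations.append(ab)
    return colorizations
```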

The contributions of our work are as follows. First, we learn a smooth low-dimensional embedding along with a decoder that generates the corresponding color fields with high fidelity (Sections 3, 7.2). Second, we learn a multi-modal conditional model between the grey-level features and the low-dimensional embedding that is capable of producing diverse colorizations (Section 4). Third, we show that our method outperforms the strong baselines of the conditional variational autoencoder (CVAE) and the conditional generative adversarial network (cGAN) [10] for obtaining diverse colorizations (Section 7.3, Figure 6).

Figure 1: In Step 1, we learn a low-dimensional embedding z for a color field C. In Step 2, we train a multi-modal conditional model P(z|G) that generates the low-dimensional embedding z from grey-level features G. At test time, we sample the conditional model and use the VAE decoder to generate the corresponding diverse color fields.

2 Background and Related Work

Colorization. Early colorization methods were interactive; they used a reference color image [26] or scribble-based color annotations [18]. Subsequently, [3, 4, 5, 11, 20] performed automatic image colorization without any human annotation or interaction. However, these methods were trained on datasets of limited size, ranging from a few tens to a few thousands of images. Recent CNN-based methods have been able to scale to much larger datasets of a million images [8, 16, 30]. All these methods are aimed at producing only a single color image as output. [3, 16, 30] predict a multi-modal distribution of colors over each pixel, but [3] performs a graph-cut inference to produce a single color field prediction, [30] takes the expectation after making the per-pixel distribution peaky, and [16] samples the mode or takes the expectation at each pixel to generate a single colorization. To obtain diverse colorizations from [16, 30], colors have to be sampled independently for each pixel. This leads to speckle noise in the output color fields, as shown in Figure 2, and one obtains little diversity beyond this noise. Isola et al. [10] use conditional GANs for the colorization task, but their focus is to generate a single colorization for a grey-level input. We produce diverse colorizations for a single input, each of which is realistic.

(a) Sampling per-pixel distribution of [30]
(b) Ground truth
Figure 2: Zhang et al. [30] predict a per-pixel probability distribution over colors. The first three images are diverse colorizations obtained by sampling the per-pixel distributions independently. The last image is the ground-truth color image. These images demonstrate the speckled noise and lack of spatial co-ordination resulting from independent sampling of pixel colors.

Variational Autoencoder. As discussed in Section 1, we wish to learn a low-dimensional embedding z of a color field C. Kingma and Welling [14] demonstrate that this can be achieved using a variational autoencoder comprising an encoder network and a decoder network. They derive the following lower bound on the log-likelihood:

    log P(C) >= E_{z ~ Q(z|C)} [ log P(C|z) ] - KL[ Q(z|C) || P(z) ] .    (1)

The lower bound is maximized by maximizing Equation 1 with respect to the encoder and decoder parameters. They assume the posterior P(C|z) is a Gaussian distribution; therefore, the first term of Equation 1 reduces to a decoder network trained with an L2 loss. Further, they assume the prior P(z) is a zero-mean, unit-variance Gaussian distribution N(0, I); therefore, the encoder network Q(z|C) is trained with a KL-divergence loss to the distribution N(0, I). Sampling z ~ Q(z|C) is performed with the re-parameterization trick to enable backpropagation and the joint training of the encoder and decoder. VAEs have been used to embed and decode digits [6, 12, 14], faces [15, 28] and, more recently, CIFAR images [6, 13]. However, they are known to produce blurry and over-smooth outputs. We carefully devise loss terms that discourage blurry, greyish outputs and incorporate specificity and colorfulness (Section 3).
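For concreteness, a minimal sketch of the standard VAE objective with the re-parameterization trick is shown below (PyTorch-style); the encoder and decoder modules are assumed interfaces, and in our case the plain L2 reconstruction term is replaced by the decoder loss of Section 3.

```python
import torch

def vae_objective(encoder, decoder, color_field, kl_weight=1.0):
    """Negative ELBO: L2 reconstruction + KL( Q(z|C) || N(0, I) )."""
    mu, logvar = encoder(color_field)            # parameters of Q(z|C)
    eps = torch.randn_like(mu)
    z = mu + torch.exp(0.5 * logvar) * eps       # re-parameterization keeps sampling differentiable
    recon = decoder(z)                           # reconstructed 2-channel color field
    recon_loss = torch.sum((recon - color_field) ** 2)
    # closed-form KL divergence between N(mu, diag(exp(logvar))) and N(0, I)
    kl = -0.5 * torch.sum(1 + logvar - mu ** 2 - torch.exp(logvar))
    return recon_loss + kl_weight * kl
```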

3 Embedding and Decoding a Color Field

We use a VAE to obtain a low-dimensional embedding for a color field. In addition to this, we also require an efficient decoder that generates a realistic color field from a given embedding. Here, we develop loss terms for the VAE decoder that avoid the over-smooth and washed-out (or greyish) color fields obtained with the standard L2 loss.

3.1 Decoder Loss

Specificity. The top-k principal components p_1, ..., p_k are the directions of maximum variance in the high-dimensional space of color fields. Therefore, producing color fields that vary primarily along the top-k principal components reduces the L2 loss at the expense of specificity in the generated color fields. To disallow this, we project the generated color field C and the ground-truth color field C_gt onto the top-k principal components; we use a fixed number k of components in our implementation. Next, we divide the difference between these projections along each principal component by the corresponding standard deviation sigma_j estimated from the training set. This encourages changes along all principal components to be on an equal footing in our loss. The residue, i.e. the component of C - C_gt orthogonal to the top-k principal components, is divided by the standard deviation sigma_k of the k-th (last retained) component. We write the specificity loss as the squared sum of these distances and the residue,

    L_mah = sum_{j=1}^{k} ( p_j^T (C - C_gt) / sigma_j )^2 + || r ||^2 / sigma_k^2,  where  r = (C - C_gt) - sum_{j=1}^{k} p_j p_j^T (C - C_gt) .

The above loss is a combination of the Mahalanobis distance [19] between the vectors C and C_gt with a diagonal covariance matrix, and an additional residual term.
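A rough sketch of this specificity loss is given below, assuming the top-k principal components and their standard deviations have been precomputed from the training color fields; the names pcs and stds and the flattened-vector interface are illustrative assumptions.

```python
import torch

def specificity_loss(pred, target, pcs, stds):
    """Mahalanobis-style distance in PCA space plus a scaled residual term.

    pred, target: flattened color fields, shape (N,)
    pcs:  top-k (orthonormal) principal components, shape (k, N)
    stds: per-component standard deviations from the training set, shape (k,)
    """
    diff = pred - target                    # (N,)
    proj = pcs @ diff                       # projection differences along each component, (k,)
    mah = torch.sum((proj / stds) ** 2)     # puts all components on an equal footing
    residue = diff - pcs.t() @ proj         # part of the difference outside the top-k subspace
    return mah + torch.sum(residue ** 2) / stds[-1] ** 2
```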

Colorfulness. The distribution of colors in images is highly imbalanced, with more greyish colors than others. This biases the generative model to produce color fields that are washed out. Zhang et al. [30] address this by performing a re-balancing in the loss that takes into account the different populations of colors in the training data. The goal of re-balancing is to give higher weight to rarer colors with respect to the common colors.

We adopt a similar strategy that operates in the continuous color field space instead of the discrete color field space of Zhang et al. [30]. We use the empirical probability estimates (or normalized histogram) of colors in the quantized 'ab' color field computed by [30]. For pixel p, we quantize its ground-truth color to obtain its bin and retrieve the inverse of the bin probability. This inverse probability is used as a weight w_p in the squared difference between the predicted color and the ground truth at pixel p. Writing this loss in vector form,

    L_hist = (C - C_gt)^T diag(w) (C - C_gt) ,    (2)

where w collects the per-pixel inverse-probability weights.
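A minimal sketch of this weighted term is below; the bin size, color-range offset and the lookup table inv_prob of inverse bin probabilities (built from the empirical color histogram of [30]) are illustrative assumptions.

```python
import torch

def colorfulness_loss(pred_ab, target_ab, inv_prob, bin_size=10.0, ab_min=-110.0):
    """Weight each pixel's squared error by the inverse frequency of its ground-truth 'ab' bin.

    pred_ab, target_ab: (2, H, W) color fields in 'ab' space
    inv_prob: (n_bins, n_bins) table of 1 / empirical bin probability
    """
    a_idx = ((target_ab[0] - ab_min) / bin_size).long().clamp(0, inv_prob.shape[0] - 1)
    b_idx = ((target_ab[1] - ab_min) / bin_size).long().clamp(0, inv_prob.shape[1] - 1)
    weights = inv_prob[a_idx, b_idx]                        # (H, W); rarer colors get larger weight
    sq_err = torch.sum((pred_ab - target_ab) ** 2, dim=0)   # (H, W)
    return torch.sum(weights * sq_err)
```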

Gradient. In addition to the above, we also use a first-order loss term that encourages generated color fields to have the same gradients as the ground truth. Write grad_h and grad_v for the horizontal and vertical gradient operators. The loss term is

    L_grad = || grad_h(C) - grad_h(C_gt) ||^2 + || grad_v(C) - grad_v(C_gt) ||^2 .    (3)

We write the overall loss on the decoder as

    L_dec = L_mah + lambda_hist * L_hist + lambda_grad * L_grad .    (4)

We set the hyper-parameters lambda_hist and lambda_grad to fixed values. The loss on the encoder is the KL-divergence to N(0, I), the same as in [14]. We down-weight this encoder loss with respect to the decoder loss. This relaxes the regularization of the low-dimensional embedding, but gives greater importance to the fidelity of the color fields produced by the decoder. Our relaxed constraint on the embedding space does not have adverse effects, because our conditional model (Section 4) still produces low-dimensional embeddings that decode to natural colorizations (see Figures 5 and 6).
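Putting the terms together, a hedged sketch of the overall decoder loss of Equation 4 is shown below, reusing the specificity_loss and colorfulness_loss sketches above; the forward-difference gradients and the lambda values are placeholders, not the paper's settings.

```python
import torch

def gradient_loss(pred_ab, target_ab):
    """First-order term of Equation 3 with simple forward differences."""
    dh = lambda x: x[:, :, 1:] - x[:, :, :-1]   # horizontal gradient
    dv = lambda x: x[:, 1:, :] - x[:, :-1, :]   # vertical gradient
    return (torch.sum((dh(pred_ab) - dh(target_ab)) ** 2)
            + torch.sum((dv(pred_ab) - dv(target_ab)) ** 2))

def decoder_loss(pred_ab, target_ab, pcs, stds, inv_prob,
                 lambda_hist=1.0, lambda_grad=1.0):
    """Overall decoder loss of Equation 4; lambda_hist and lambda_grad are placeholders."""
    l_mah = specificity_loss(pred_ab.flatten(), target_ab.flatten(), pcs, stds)
    l_hist = colorfulness_loss(pred_ab, target_ab, inv_prob)
    l_grad = gradient_loss(pred_ab, target_ab)
    return l_mah + lambda_hist * l_hist + lambda_grad * l_grad
```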

4 Conditional Model (G to z)

We want to learn a multi-modal (one-to-many) conditional model P(z|G) between the grey-level image G and the low-dimensional embedding z. Mixture density networks (MDNs) model the conditional probability distribution of target vectors, conditioned on the input, as a mixture of Gaussians [2]. This takes into account the one-to-many mapping and allows the target vectors to take multiple values conditioned on the same input vector, providing diversity.

MDN Loss. We now formulate the loss function for an MDN that models the conditional distribution P(z|G). Here, P(z|G) is a Gaussian mixture model with M components. The loss function minimizes the conditional negative log-likelihood of this distribution. Write L_mdn for the MDN loss, pi_i for the mixture coefficients, mu_i for the means and sigma for the fixed spherical co-variance of the GMM. The pi_i and mu_i are produced by a neural network parameterized by theta with input G. The MDN loss is

    L_mdn(z, G; theta) = - log sum_{i=1}^{M} pi_i(G) N( z | mu_i(G), sigma^2 I ) .    (5)

It is difficult to optimize Equation 5 directly, since it involves a log of a summation over exponents of the form exp( -||z - mu_i(G)||^2 / (2 sigma^2) ). The distance ||z - mu_i(G)|| is high when training commences, which leads to numerical underflow in the exponent. To avoid this, we pick the Gaussian component m = argmin_i ||z - mu_i(G)|| whose predicted mean is closest to the ground-truth code z, and only optimize that component per training step. This reduces the loss function to

    L_mdn(z, G; theta) = || z - mu_m(G) ||^2 / (2 sigma^2) - log pi_m(G) .    (6)

Intuitively, this min-approximation resolves the identifiability (or symmetry) issue within the MDN, as we tie a grey-level feature to a single component (the m-th component above). The other components are free to be optimized by nearby grey-level features. Therefore, clustered grey-level features jointly optimize the entire GMM, resulting in diverse colorizations. In Section 7.3, we show that this MDN-based strategy produces better diverse colorizations than the CVAE and cGAN baselines (Section 5).
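A compact sketch of the reduced loss in Equation 6, assuming an MDN head that outputs softmax-ed mixture weights pi and component means mu for a batch; the shapes and the small epsilon for numerical safety are illustrative assumptions.

```python
import torch

def mdn_min_loss(pi, mu, z, sigma=0.1):
    """Closest-component MDN loss of Equation 6.

    pi: (B, M) mixture weights, mu: (B, M, d) component means,
    z:  (B, d) ground-truth embeddings from the VAE encoder.
    """
    dist_sq = torch.sum((mu - z.unsqueeze(1)) ** 2, dim=2)   # (B, M) squared distances to each mean
    min_dist, m = torch.min(dist_sq, dim=1)                  # closest component per example
    pi_m = pi.gather(1, m.unsqueeze(1)).squeeze(1)           # its mixture weight
    loss = min_dist / (2 * sigma ** 2) - torch.log(pi_m + 1e-8)
    return loss.mean()
```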

5 Baseline

Conditional Variational Autoencoder (CVAE). A CVAE conditions the generative process of a VAE on a specific input; therefore, sampling from a CVAE produces diverse outputs for a single input. Walker et al. [25] use a fully convolutional CVAE for diverse motion prediction from a static image. Xue et al. [27] introduce cross-convolutional layers between the image and motion encoders in a CVAE to obtain diverse future frame synthesis. Zhou and Berg [31] generate diverse time-lapse videos by incorporating conditional, two-stack and recurrent architecture modifications to standard generative models.

Recall that, for our problem of image colorization, the input to the CVAE is the grey-level image G and the output is the color field C. Sohn et al. [23] derive a lower bound on the conditional log-likelihood of the CVAE. They show that training a CVAE consists of training an encoder network with a KL-divergence loss and a decoder network with an L2 loss. The difference with respect to the VAE is that both the encoder that generates the embedding z and the decoder network have an additional input G.

Conditional Generative Adversarial Network (cGAN). Isola et al. [10] recently proposed a cGAN-based architecture to solve various image-to-image translation tasks, one of which is colorizing grey-level images. They use an encoder-decoder architecture along with skip connections that propagate low-level detail. The network is trained with a patch-based adversarial loss in addition to an L1 loss. The noise (or embedding z) is provided in the form of dropout [24]. At test time, we use dropout to generate diverse colorizations, which we cluster into k cluster centers (see cGAN in Figure 6).

An illustration of these baseline methods is in Figure 3. We compare the CVAE and cGAN to our strategy of using a VAE and an MDN (Figure 1) for the problem of diverse colorization (Figure 6).

Figure 3: Illustration of the CVAE baseline (left) and the cGAN baseline (right). For the CVAE, the embedding z is generated using both the grey-level image G and the color field C, and the decoder network is conditioned on G in addition to z. At test time, we do not use the highlighted encoder and the embedding z is sampled randomly. The cGAN consists of an encoder-decoder network with skip connections, and the noise (or embedding z) is due to dropout.

6 Architecture and Implementation Details

Notation. Before we describe the network architectures, note the following notation. We specify a convolution by its kernel size, stride, number of output channels and activation; BN denotes batch normalization; bilinear up-sampling is specified by its scale factor; and a fully connected layer is specified by its number of output channels. Note that we perform convolutions with zero-padding and our fully connected layers use dropout regularization [24].

Figure 4: An illustration of our VAE architecture. The dimensions of feature maps are at the bottom and the operations applied to the feature map are indicated at the top. This figure shows the encoder. For the decoder architecture, refer to the details in Section 6.1.

6.1 VAE

Radford et al. [21] propose a DCGAN architecture with a generator (or decoder) network that can model the complex spatial structure of images. We model the decoder network of our VAE on the generator network of Radford et al. [21]. We follow their best practices: strided convolutions instead of pooling, batch normalization [9], ReLU activations for intermediate layers and tanh for the output layer, and avoiding fully connected layers except where decorrelation is required to obtain the low-dimensional embedding. The encoder network is roughly the mirror of the decoder network, as per standard practice for autoencoder networks. See Figure 4 for an illustration of our VAE architecture.

Encoder Network. The encoder network accepts a color field and outputs a d-dimensional embedding through a series of strided convolutions, with a fully connected layer producing the embedding (see Figure 4).

Decoder Network. The decoder network accepts the d-dimensional embedding and performs 5 stages of bilinear up-sampling and convolution to output a 2-channel color field (the a and b channels of the Lab color space comprise the two output channels).

We use the same embedding dimension d for all three of our datasets (Section 7.1).
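Purely as an illustration, a DCGAN-style decoder of the kind described above could be sketched as follows; the embedding dimension, channel widths and output resolution are placeholders and do not reproduce the paper's exact specification.

```python
import torch
import torch.nn as nn

class ColorDecoder(nn.Module):
    """Sketch of a DCGAN-style decoder: a fully connected layer, then five stages of
    bilinear up-sampling + convolution, ending in a tanh 'ab' output. Sizes are placeholders."""

    def __init__(self, d=64):
        super().__init__()
        self.fc = nn.Linear(d, 512 * 2 * 2)          # expand the embedding into a small feature map
        channels = [512, 256, 128, 64, 32]
        blocks = []
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            blocks += [nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                       nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True)]
        # fifth up-sampling stage produces the 2-channel 'ab' color field
        blocks += [nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False),
                   nn.Conv2d(channels[-1], 2, kernel_size=3, padding=1),
                   nn.Tanh()]
        self.net = nn.Sequential(*blocks)

    def forward(self, z):
        x = self.fc(z).view(-1, 512, 2, 2)
        return self.net(x)                            # (B, 2, 64, 64) with these placeholder sizes
```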

6.2 MDN

The input to the MDN is the grey-level feature map G from [30]. We use a GMM with M components at the output of the MDN. The output layer comprises activations for the means and softmax-ed activations for the mixture weights of the M components, and we use a fixed spherical variance sigma. The MDN network uses 5 convolutional layers followed by two fully connected layers. Equivalently, the MDN is a network of convolutional and 2 fully connected layers, in which the initial convolutional layers are pre-trained on the task of [30] and held fixed.

At test time, we can sample multiple embeddings from the MDN and then generate diverse colorizations using the VAE decoder. However, to study diverse colorizations in a principled manner, we adopt a different procedure: we order the predicted means in descending order of mixture weight and decode the top-k means into the diverse colorizations shown in Figure 6 (see Ours, Ours+skip).
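A minimal sketch of this deterministic procedure, reusing the hypothetical mdn and vae_decoder interfaces from the sketch in Section 1:

```python
import torch

def topk_colorizations(mdn, vae_decoder, grey_feats, k=5):
    """Decode the k component means with the largest mixture weights."""
    pi, mu = mdn(grey_feats)                              # pi: (M,), mu: (M, d)
    order = torch.argsort(pi, descending=True)[:k]        # components in descending weight order
    return [vae_decoder(mu[i].unsqueeze(0)) for i in order]
```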

6.3 CVAE

In the CVAE, the encoder and the decoder both take the grey-level image G as an additional input. We therefore need an encoder for grey-level images, as shown in Figure 3. The color-image encoder and the decoder are the same as in the VAE (Section 6.1). The grey-level encoder of the CVAE maps G to a spatial feature map. The d-dimensional latent variable generated by the VAE (or color) encoder is spatially replicated and multiplied with the output of the grey-level encoder, which forms the input to the decoder. Additionally, we add skip connections from the grey-level encoder to the decoder, similar to [10].
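A small sketch of this conditioning step, assuming the grey-level encoder output has the same number of channels as the latent dimension (an assumption made here only so that the element-wise product is well defined):

```python
import torch

def condition_decoder_input(z, grey_feat_map):
    """Spatially replicate z and multiply it with the grey-level feature map.

    z: (B, d) latent code; grey_feat_map: (B, d, H, W) grey-level encoder output.
    """
    z_map = z.unsqueeze(-1).unsqueeze(-1).expand_as(grey_feat_map)  # replicate over the H x W grid
    return z_map * grey_feat_map                                    # forms the decoder input
```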

At test time, we feed multiple randomly sampled embeddings to the CVAE decoder along with the fixed grey-level input, and cluster the outputs into k colorizations (see CVAE in Figure 6).

7 Results

Figure: Test-set colorizations produced by VAE decoders trained with the standard L2 loss, with the specificity loss, and with all our loss terms, compared to the ground truth (discussed in Section 7.2).

In Section 7.2, we evaluate the performance improvement from the loss terms we construct for the VAE decoder. Section 7.3 shows the diverse colorizations obtained by our method and compares them to the CVAE and the cGAN. We also demonstrate the performance of another variant of our method, "ours+skip": a VAE with an additional grey-level encoder and skip connections to the decoder (similar to the cGAN in Figure 3), with the MDN step unchanged. The grey-level encoder architecture is the same as that of the CVAE described above.

Dataset        L2-Loss        Mah-Loss       Mah-Loss + Colorfulness + Gradient
               All    Grid    All    Grid    All    Grid
LFW            .034   .035    .034   .032    .029   .029
Church         .024   .025    .026   .026    .023   .023
ImageNet-Val   .031   .031    .039   .039    .039   .039
Table 1: On the test set, our loss terms give a better mean absolute error per pixel (with respect to the ground-truth color field) than the standard L2 loss on LFW and Church.
Dataset        L2-Loss        Mah-Loss       Mah-Loss + Colorfulness + Gradient
               All    Grid    All    Grid    All    Grid
LFW            7.20   11.29   6.69   7.33    2.65   2.83
Church         4.9    4.68    6.54   6.42    1.74   1.71
ImageNet-Val   10.02  9.21    12.99  12.19   4.82   4.66
Table 2: On the test set, our loss terms give a better weighted absolute error per pixel (with respect to the ground-truth color fields) than the L2 loss on all datasets. Note that a lower weighted error implies that, in addition to the common colors, the rarer colors are also predicted correctly. This implies a higher-quality colorization, one that is not washed out.

7.1 Datasets

We use three datasets with varying complexity of color fields. First, we use the Labelled Faces in the Wild (LFW) dataset [17], which consists of face images aligned by deep funneling [7]. Since the face images are aligned, this dataset has some structure to it. Next, we use the LSUN-Church dataset [29]. These images are not aligned and lack the structure present in LFW; they are, however, images of the same scene category and are therefore more structured than images in the wild. Finally, we use the validation set of ILSVRC-2015 [22] (called ImageNet-Val) as our third dataset. These images are the most unstructured of the three datasets. For each dataset, we randomly choose a subset of images as the test set and use the remaining images for training.

Method      LFW (Eob.)   Church (Eob.)   ImageNet-Val (Eob.)
CVAE        .031         .029            .037
cGAN        .047         .048            .048
Ours        .030         .036            .043
Ours+skip   .031         .036            .041
Table 3: For every dataset, we obtain high variance (a proxy measure for diversity) and often a low error-of-best per pixel (Eob.) with respect to the ground truth using our method. This shows our methods generate color fields closer to the ground truth, with more diversity, than the baselines.

7.2 Effect of Loss terms on VAE Decoder

We train VAE decoders with the standard L2 loss, with the specificity loss of Section 3.1, and with all our loss terms of Equation 4. The figure at the start of Section 7 shows the colorizations obtained for the test set with these different losses. To generate these colorizations, we sample the embedding from the encoder network; therefore, this is not a true colorization task, but it allows us to evaluate the decoder network when the best possible embedding is available. The colorizations obtained with the L2 loss are greyish. In contrast, using all our loss terms we obtain plausible and realistic colorizations with vivid colors; note the yellow shirt and the yellow equipment, the brown desk and the green trees in the third row. For all datasets, using all our loss terms provides better colorizations than the standard L2 loss. Note also that the face images in the second row have more contained skin colors than those in the first row, which shows the subtle benefits obtained from the specificity loss.

In Table 1, we compare the mean absolute error per pixel with respect to the ground truth for the different loss terms, and in Table 2 we compare the mean weighted absolute error per pixel. The weighted error uses the same weights as the colorfulness loss of Section 3.1. We compute the error over all pixels (All) and over a uniformly spaced grid in the center of the image (Grid); computing the error on a grid avoids using too many correlated neighboring pixels. On the absolute error metric of Table 1, for LFW and Church, we obtain lower errors with all our loss terms than with the standard L2 loss. Note that, unlike the L2 loss, we do not specifically train for this absolute error metric and yet achieve reasonable performance with our loss terms. On the weighted error metric of Table 2, our loss terms outperform the standard L2 loss on all datasets.

7.3 Comparison to baseline

In Figure 6, we compare the diverse colorizations generated by our strategy (Sections 3, 4) and the baseline methods, CVAE and cGAN (Section 5). Qualitatively, we observe that our strategy generates better-quality diverse colorizations, each of which is realistic. Note that for each dataset, the different methods use the same train/test split and are trained for the same number of epochs. The diverse colorizations have good quality for LFW and LSUN Church: we observe different skin tones, hair, cloth and background colors for LFW, and different brick, sky and grass colors for LSUN Church. Additional colorizations are shown in Figures 5 and 6.

In Table 3, we show the error-of-best (i.e., the error of the colorization closest to the ground truth) and the variance of the diverse colorizations. A lower error-of-best implies that one of the diverse predictions is close to the ground truth. Note that our method reliably produces high variance with an error-of-best comparable to the other methods. Our goal is to generate diverse colorizations; however, since diverse colorizations are not observed in the ground truth for a single image, we cannot evaluate them directly. Therefore, we use the weaker proxy of variance to evaluate diversity. Large variance is desirable for diverse colorization, and our method obtains it. We rely on qualitative evaluation to verify the naturalness of the different colorizations in the predicted pool.

Figure 5: Diverse colorizations from our method. The top two rows are LFW, the next two LSUN Church and the last two ImageNet-Val. See Figure 6 for comparisons to the baselines.

8 Conclusion

Our loss terms help us build a variational autoencoder for high-fidelity color fields. The multi-modal conditional model produces embeddings that decode to realistic, diverse colorizations. The colorizations obtained from our methods are more diverse than those of the CVAE and cGAN. The proposed method can be applied to other ambiguous problems. Our low-dimensional embeddings allow us to predict diversity with multi-modal conditional models, but they do not encode high spatial detail. In future work, we will focus on improving spatial detail along with diversity.

Acknowledgements. We thank Arun Mallya and Jason Rock for useful discussions and suggestions. This work is supported in part by ONR MURI Award N00014-16-1-2007, and in part by NSF under Grants No. NSF IIS-1421521.

Figure 6: Diverse colorizations from our methods compared to the CVAE, the cGAN and the ground truth (GT). We can generate diverse colorizations, which the cGAN [10] does not. CVAE colorizations have low diversity and artifacts.

References

  • [1] D. Batra, P. Yadollahpour, A. Guzmán-Rivera, and G. Shakhnarovich. Diverse m-best solutions in markov random fields. In ECCV (5), volume 7576 of Lecture Notes in Computer Science, pages 1–16. Springer, 2012.
  • [2] C. M. Bishop. Mixture density networks, 1994.
  • [3] G. Charpiat, M. Hofmann, and B. Schölkopf. Automatic image colorization via multimodal predictions. In Proceedings of the 10th European Conference on Computer Vision: Part III, ECCV '08, pages 126–139, 2008.
  • [4] Z. Cheng, Q. Yang, and B. Sheng. Deep colorization. In 2015 IEEE International Conference on Computer Vision (ICCV), pages 415–423, Dec 2015.
  • [5] A. Deshpande, J. Rock, and D. A. Forsyth. Learning large-scale automatic image colorization. In ICCV, pages 567–575. IEEE Computer Society, 2015.
  • [6] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pages 1462–1471, 2015.
  • [7] G. B. Huang, M. Mattar, H. Lee, and E. Learned-Miller. Learning to align from scratch. In NIPS, 2012.
  • [8] S. Iizuka, E. Simo-Serra, and H. Ishikawa. Let there be Color!: Joint End-to-end Learning of Global and Local Image Priors for Automatic Image Colorization with Simultaneous Classification. ACM Transactions on Graphics (Proc. of SIGGRAPH 2016), 35(4), 2016.
  • [9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015.
  • [10] P. Isola, J. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks. In Computer Vision and Pattern Recognition, 2017.
  • [11] J. Jancsary, S. Nowozin, and C. Rother. Loss-specific training of non-parametric image restoration models: A new state of the art. Proceedings of the 12th European Conference on Computer Vision - Volume Part VII, pages 112–125, 2012.
  • [12] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3581–3589. 2014.
  • [13] D. P. Kingma, T. Salimans, R. Józefowicz, X. Chen, I. Sutskever, and M. Welling. Improving variational autoencoders with inverse autoregressive flow. In NIPS, pages 4736–4744, 2016.
  • [14] D. P. Kingma and M. Welling. Auto-encoding variational bayes. International Conference on Learning Representations (ICLR), 2014.
  • [15] T. D. Kulkarni, W. F. Whitney, P. Kohli, and J. Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems 28, pages 2539–2547. 2015.
  • [16] G. Larsson, M. Maire, and G. Shakhnarovich. Learning representations for automatic colorization. In European Conference on Computer Vision (ECCV), 2016.
  • [17] E. Learned-Miller, G. B. Huang, A. RoyChowdhury, H. Li, and G. Hua. Labeled Faces in the Wild: A Survey, pages 189–248. Springer International Publishing, Cham, 2016.
  • [18] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. ACM Trans. Graph., 23(3):689–694, Aug. 2004.
  • [19] P. C. Mahalanobis. On Tests and Measures of Group Divergence. International Journal of the Asiatic Society of Bengal, 26, 1930.
  • [20] Y. Morimoto, Y. Taguchi, and T. Naemura. Automatic colorization of grayscale images using multiple images on the web. In SIGGRAPH 2009: Talks, SIGGRAPH ’09, New York, NY, USA, 2009. ACM.
  • [21] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
  • [22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
  • [23] K. Sohn, X. Yan, and H. Lee. Learning structured output representation using deep conditional generative models. In Proceedings of the 28th International Conference on Neural Information Processing Systems, NIPS’15, pages 3483–3491, Cambridge, MA, USA, 2015. MIT Press.
  • [24] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res., 15(1):1929–1958, Jan. 2014.
  • [25] J. Walker, C. Doersch, A. Gupta, and M. Hebert. An uncertain future: Forecasting from static images using variational autoencoders. In European Conference on Computer Vision, 2016.
  • [26] T. Welsh, M. Ashikhmin, and K. Mueller. Transferring color to greyscale images. In SIGGRAPH, 2002.
  • [27] T. Xue, J. Wu, K. L. Bouman, and W. T. Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. In NIPS, 2016.
  • [28] X. Yan, J. Yang, K. Sohn, and H. Lee. Attribute2image: Conditional image generation from visual attributes. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, pages 776–791, 2016.
  • [29] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. CoRR, abs/1506.03365, 2015.
  • [30] R. Zhang, P. Isola, and A. A. Efros. Colorful image colorization. ECCV, 2016.
  • [31] Y. Zhou and T. L. Berg. Learning Temporal Transformations from Time-Lapse Videos, pages 262–277. Springer International Publishing, 2016.