End-to-End Conditional GAN-based Architectures for Image Colourisation

08/26/2019 · Marc Gorriz et al.

In this work, recent advances in conditional adversarial networks are investigated to develop an end-to-end architecture based on Convolutional Neural Networks (CNNs) that directly maps realistic colours onto an input greyscale image. Observing that existing colourisation methods sometimes exhibit a lack of colourfulness, this paper proposes a method to improve colourisation results. In particular, the method uses Generative Adversarial Neural Networks (GANs) and focuses on improving training stability to enable better generalisation in large multi-class image datasets. Additionally, the integration of instance and batch normalisation layers in both generator and discriminator is introduced to the popular U-Net architecture, boosting the network's ability to generalise over style changes in the content. The method has been tested using the ILSVRC 2012 dataset, achieving improved automatic colourisation results compared to other methods based on GANs.


Code repository: ColorGAN, an open-source GitHub repository for End-to-End Conditional GAN-based Architectures for Image Colourisation.
I Introduction

Colourisation refers to the process of adding colours to greyscale or other monochrome images such that the colourised results are perceptually meaningful and visually appealing. In general, greyscale content is present in many multimedia applications: from “black and white” videos in old archives and videos with faded colours, to computer vision applications that discard the chroma component in order to simplify processing. However, while the luminance information provides valuable content-related information regarding shapes and structures, the perception of colour is important for modern video viewing. It is also essential for understanding the visual world, allowing the distinction between objects and physical variations, such as shadow gradations, light source reflections or reflectance variations on video frames. For this reason, adding chromatic information to images and improving the quality of colour has become a research area of significant interest for a wide variety of domains that traditionally have resorted to using luminance data alone. This includes medical imaging [11], surveillance systems [6] or restoration of degraded historical images [16].

Recently, the emergence of deep learning has enabled the development of new colourisation algorithms which better generalise the natural distribution of colours. Convolutional Neural Networks (CNNs) outperform many state-of-the-art methods based on hand-crafted features in tasks such as image enhancement, image classification or object detection [10, 12]. State-of-the-art colourisation methods based on Generative Adversarial Neural Networks (GANs) [8] aim to mimic the natural colour distribution of the training data by forcing the generated samples to be indistinguishable from natural images. Moreover, using an adversarial loss, the discriminator can learn a trainable loss function that guarantees a correct adaptation to the differences between generated and real images in the target domain. However, existing methods still suffer from ambiguity when trying to predict realistic colours, causing desaturated results in most cases. Nevertheless, GANs are a suitable basis for further tackling the desaturation problem and gaining colourfulness.

Motivated by the recent success of Conditional Adversarial Networks in image-to-image translation tasks, including colourisation [8, 25], this paper proposes an automatic colourisation paradigm using end-to-end convolutional neural network architectures. Improved colourisation is achieved by introducing techniques that improve the stability of the adversarial loss during training, leading to better colourisation of a variety of images from large multi-class datasets. Further enhancements are achieved by applying feature normalisation techniques which are widely used in style transfer models. The capabilities of adversarial models in image colourisation are improved by adapting an Instance-Batch Normalisation (IBN) convolutional architecture [18] to conditional GANs. Some examples of the results achieved by the proposed method are presented in Figure 1. The main contributions of this work are the following:

  1. Analysis of drawbacks in state-of-the-art methods for automatic image colourisation.

  2. Identification and integration of appropriate architectural features and training procedures which lead to boosted GAN performance for image colourisation. The proposed improvements include:

    1. A novel generator-discriminator setting which adapts the IBN paradigm to an encoder-decoder architecture, enabling generalisation of the content’s style changes while encouraging stabilisation during GAN training.

    2. The use of Spectral Normalisation (SN) [15] for improving the generalisation of the adversarial colourisation and preventing training instability.

    3. The use of multi-scale discriminators to achieve improved colour generation in small areas and local details, and boosted colourfulness.

The paper has the following structure: Section II reviews related work in the literature, identifying the main drawbacks and possible improvements, Section III details the proposed methodology, Section IV provides information about the implementation and data used in the experiments and a quantitative evaluation of the results while Section V provides conclusions and identifies future work.

II Background

Automatic colourisation was originally introduced in 1970 to describe a novel computer-assisted technology for adding colour to black and white movies and TV programs [14]. Although such a semi-automatic method improved the efficiency of traditional hand-crafted techniques, it still required a considerable amount of manual effort and artistic experience to achieve acceptable results. Since then, it has been shown that the task is complex, ill-conditioned and inherently ambiguous due to the large degrees of freedom in the assignment of colour information [26].

In some cases, the semantics of the scene and the variations of the luminance intensity can help to infer priors on the colour distribution of the image. For example, an algorithm can successfully associate rapid changes with vegetation areas, assigning ranges of green, or smooth areas with sky, inferring blue tones. Nevertheless, in most cases the ambiguity in the decisions can lead a system to make random choices. For instance, the hypothetical prior of a car being red is the same as it being green or blue, although in practice the decision will converge towards the dominant samples in the training data. This fact motivated research into conservative solutions, such as scribble-based colourisation [19, 27] and exemplar-based colourisation [1], which involve user interaction through semi-automatic methods. In the first approach, the user annotates ground truth colours at certain informative points and the system learns how to propagate them to the entire image. Alternatively, a whole colour reference image with similar content and semantics to the target is carefully selected, and the system attempts to transfer the colours from reliably estimated correspondences. However, the quality of the final results depends on the choice of the reference samples and the style transfer methodology used to estimate the correspondences.

Another common issue is the well-known desaturation effect [26, 13], which is associated with treating automatic colourisation as a standard regression problem. Taking a greyscale input image, a parametric model can learn to predict the corresponding chrominance channels by minimising the Euclidean distance between the estimations and the ground truth. Nevertheless, basic solutions effectively average the colours of the training examples, and thus produce desaturated results characterised by low absolute values in the colour channels when trained on large databases of natural images. This problem has previously been addressed through a deep learning approach which introduced a rebalancing process during training, with the aim of penalising the predicted colours based on their likelihood in the prior distribution of the training data [26]. Such a method outperforms previous state-of-the-art approaches, including recent GAN-based successes, whose more complex architectures need to adopt a similar methodology to generalise the predicted colours.

As proposed in the pix2pix framework [8], a more traditional regression loss such as an L1 or L2 distance is beneficial when included in the final objective function. This enables a conditional GAN to increase the error rate of the discriminator while producing realistic results close to the ground truth. Although such a framework achieves state-of-the-art performance across a range of image-to-image translation tasks, it still requires the aforementioned rebalancing method, targeting colour rarity far from the desaturated mean of natural data distributions. The high instability during training when a GAN deals with complex generator architectures and high-resolution training images can lead the pix2pix framework to mode collapse, converging towards undesirable local equilibria between the generator and discriminator [22, 25]. This effect reduces the contribution of the adversarial loss in the multi-loss objective, giving the total weight of convergence to the regression loss and hence leading the system again to desaturated results.
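The regression-to-the-mean behaviour described above can be illustrated with a small numerical toy (not from the paper; the chroma values below are hypothetical):

```python
import numpy as np

# Toy illustration: minimising Euclidean distance to multimodal ground-truth
# colours drives predictions towards the average of the modes. In CIE Lab,
# a/b values near zero are grey, so that average is desaturated.

# Hypothetical ab chroma values for equally likely car colours.
red_ab = np.array([60.0, 40.0])
blue_ab = np.array([-20.0, -50.0])

# The L2-optimal single prediction for a 50/50 mixture is the mean of the modes.
l2_optimal = (red_ab + blue_ab) / 2.0

# Saturation (chroma) of each mode versus the averaged prediction.
chroma = lambda ab: float(np.hypot(*ab))
print(chroma(red_ab), chroma(blue_ab), chroma(l2_optimal))
```

Averaging two plausible but opposite colour modes yields a low-chroma, desaturated prediction, which is exactly the failure that rebalancing and adversarial approaches target.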

Fig. 2: Generator and discriminator architectures with IBN adaptation to the U-Net and PatchGAN architectures. Note the fake input to the discriminator is composed of the concatenation of the original greyscale input and the two generated colour channels.

III Proposed Method

Aiming at colourisation of images, the goal of our method is to enable automatic CNN-based colourisation of an input greyscale image x, where H × W is the image dimension in pixels, represented by the lightness channel L in the CIE Lab colour space. To achieve this, it is essential to train an end-to-end CNN architecture capable of learning the direct mapping from x to the two associated colour channels (a, b). As commonly used in the literature, the CIE Lab colour space is chosen as it is designed to maintain perceptual uniformity and is more perceptually linear than other colour spaces [2]. The mapping function can be expressed in a neural network form as:

ŷ = F(x; θ)   (1)

where θ is the set of learning parameters of the CNN, omitting the bias terms for simplicity, with a non-linear activation function applied after each layer, and ŷ the estimated pair of colour channels.
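At the interface level, the mapping above takes the lightness channel and predicts the two chrominance channels; a minimal numpy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def colourise(l_channel: np.ndarray, predict_ab) -> np.ndarray:
    """Hypothetical wrapper around a trained generator.

    l_channel: (H, W) lightness channel of a CIE Lab image, L in [0, 100].
    predict_ab: callable mapping (H, W, 1) -> (H, W, 2), the learned F(x; theta).
    Returns the reassembled (H, W, 3) Lab image.
    """
    x = l_channel[..., np.newaxis]           # (H, W, 1) network input
    ab = predict_ab(x)                       # (H, W, 2) predicted chrominance
    return np.concatenate([x, ab], axis=-1)  # stack L with a, b channels

# Stand-in "generator" that predicts zero chroma (a greyscale Lab image).
dummy_generator = lambda x: np.zeros(x.shape[:2] + (2,))
lab = colourise(np.full((4, 4), 50.0), dummy_generator)
```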

III-A Conditional Adversarial Networks

A mapping convolutional model is trained using a generative adversarial methodology with conditional GANs. This work uses the pix2pix framework [8] as a baseline for solving image-to-image translation tasks such as generating realistic street scenes from semantic segmentation maps, aerial photography from cartographic maps, or image colourisation from greyscale inputs. As in the traditional GAN setting [4], two CNNs (a generator G and a discriminator D) are trained simultaneously in a minimax two-player game, with the objective of reaching the Nash equilibrium between them. Given an input greyscale image x and a vector of random noise z, the aim of the generator is to capture the original colour distribution of the training data and to learn a realistic mapping G(x, z) to the colourisation result. On the other hand, the discriminator aims to distinguish real images from colourised ones through the mapping D(x, y), estimating the probability that a sample came from the training data rather than from G. Therefore, the conditional GAN framework will model the colour distribution of the training data following the minimax training strategy:

G* = arg min_G max_D L(G, D)   (2)

where the objective function is given by

L(G, D) = L_cGAN(G, D) + λ L_L1(G)   (3)

L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 − D(x, G(x, z)))]   (4)

L_L1(G) = E_{x,y,z}[||y − G(x, z)||_1]   (5)

using λ to control the contribution of the regression loss.

As suggested in recent works [5, 17], the standard loss function for the generator is redefined to guarantee non-saturation, by maximising the probability of the discriminator being mistaken and thus converting the loss into a strictly decreasing function. Moreover, note the aforementioned regression distance introduced into the final generator objective to encourage colourisation close to the ground truth. Regarding the GAN architectures, the pix2pix framework uses a U-Net [21] as generator and a Markovian PatchGAN [8] as discriminator, yielding output probability maps based on the discrimination of patches in the input domain. It exploits the intrinsically fully convolutional architecture of the discriminator to control the input patch size via its receptive field.
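The minimax objective with the λ-weighted regression term can be sketched numerically; this assumes the standard pix2pix form, with a non-saturating generator loss and an L1 regression term (λ = 100 is the pix2pix default, not stated in this paper):

```python
import numpy as np

eps = 1e-8  # numerical safety for log

def d_loss(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    """Discriminator objective: classify real patches as 1, fake patches as 0."""
    return float(-np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)))

def g_loss(d_fake, y_real, y_fake, lam=100.0):
    """Non-saturating generator loss plus lambda-weighted L1 regression term."""
    adv = -np.mean(np.log(d_fake + eps))   # maximise the discriminator's mistake
    l1 = np.mean(np.abs(y_real - y_fake))  # pull the output towards ground truth
    return float(adv + lam * l1)
```

A fooled discriminator (high `d_fake`) lowers the generator loss, while the L1 term keeps the colourised output near the ground truth.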

III-B Mini-batch Normalisation

The application of mini-batch normalisation techniques such as batch normalisation (BN) [7] has become common practice in deep learning to accelerate the training of deep neural networks. In the case of GANs, it was shown with the DCGAN architecture that applying batch normalisation in both the generator and the discriminator can be very beneficial for stabilising GAN learning and preventing mode collapse due to poor initialisation [20]. Internally, batch normalisation preserves content-related information by reducing the covariate shift within a mini-batch during training, using the internal mean and variance of the batch to normalise each feature channel.

On the other hand, Instance Normalisation (IN) [24] was shown to be beneficial in style transfer, speeding up stylisation. Image colourisation, like other style transfer techniques, aims to capture style information by learning features that are invariant to appearance changes, with the aim of generalising the colourisation process within a mini-batch of variable content. Therefore, unlike batch normalisation, IN uses the statistics of an individual sample instead of the whole mini-batch to normalise features.
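The difference between the two normalisers reduces to which axes the statistics are pooled over; a minimal numpy sketch (learnable affine scale/shift parameters omitted):

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Normalise each channel with statistics pooled over the whole mini-batch.
    x: (N, H, W, C) feature maps."""
    mu = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    """Normalise each channel per individual sample, as used in style transfer."""
    mu = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)
```

Under IN every sample is standardised on its own, so appearance statistics cannot leak between samples; under BN a sample's normalised features still depend on the rest of the mini-batch.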

Inspired by IBN-Net [18], in the presented approach BN and IN are combined in the same convolutional architecture with the aim of exploiting the instance-based normalisation capabilities in style transfer while encouraging stabilisation during training, improving both the learning and generalisation capacities of the GAN. This work adapts the residual IBN-Net architecture to a U-Net generator and a patch-based discriminator. The IBN-Net authors observed that appearance variance in a deep convolutional model mainly lies in shallow layers, while feature discrimination for content is higher in deep layers. Therefore, IBN-Net avoids IN in deep layers to preserve content discrimination in deep features, while it keeps batch normalisation throughout the architecture to preserve content-related features at different levels. Figure 2 shows the final proposed architectures for the generator and discriminator. Note that normalisation is not applied to the input layers in order to avoid sample oscillation and model instability.
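A shape-level sketch of an IBN-a-style layer (after Pan et al.; the half-and-half channel split is the IBN-Net convention, not necessarily the exact split used in this paper):

```python
import numpy as np

def ibn_layer(x, eps=1e-5):
    """IBN-a-style normalisation sketch: instance-normalise the first half of
    the channels to absorb appearance/style variance, and batch-normalise the
    second half to keep content-related statistics. x: (N, H, W, C), C even."""
    c = x.shape[-1] // 2
    a, b = x[..., :c], x[..., c:]
    # Instance norm: per-sample, per-channel statistics.
    a = (a - a.mean(axis=(1, 2), keepdims=True)) / np.sqrt(
        a.var(axis=(1, 2), keepdims=True) + eps)
    # Batch norm: statistics pooled over the whole mini-batch.
    b = (b - b.mean(axis=(0, 1, 2), keepdims=True)) / np.sqrt(
        b.var(axis=(0, 1, 2), keepdims=True) + eps)
    return np.concatenate([a, b], axis=-1)
```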

III-C Weight regularisation

One common strategy to improve the generalisation of the network and to prevent instability during training is the use of weight regularisation. This technique penalises the weights of the network in proportion to their magnitude, aiming to keep values small during training and hence preventing small changes in the input from leading to large changes in the output. In the context of GANs, the use of sigmoid activations in the discriminator can lead the optimisation process towards unbounded gradients. To prevent such a situation, Spectral Normalisation (SN) [15] was introduced as a regularisation step to control the Lipschitz constant of the discriminator.

In the context of convolutional neural networks, the authors proved that the Lipschitz constant of a linear mapping, e.g. between the pre-activations of two layers, is its largest singular value or spectral norm σ(W). Spectral normalisation is then performed by replacing the layer weights W with W / σ(W), where σ(W) is the square root of the largest eigenvalue of WᵀW.
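Spectral normalisation is typically implemented with power iteration rather than a full SVD; a numpy sketch (Miyato et al. persist a single iteration across training steps, whereas this sketch simply iterates until the estimate settles):

```python
import numpy as np

def spectral_normalise(w, n_iters=50):
    """Divide a weight matrix by an estimate of its spectral norm (largest
    singular value), obtained with power iteration."""
    u = np.random.RandomState(0).randn(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u            # project onto the right singular direction
        v /= np.linalg.norm(v)
        u = w @ v              # project onto the left singular direction
        u /= np.linalg.norm(u)
    sigma = u @ w @ v          # estimated largest singular value
    return w / sigma, sigma
```

After normalisation the layer's spectral norm is (approximately) 1, bounding how much it can amplify its input.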

III-D Multi-scale discriminators

A challenge in colourisation is to achieve precision in small areas and local details. Using the Markovian PatchGAN discrimination in the pix2pix framework, colourfulness can be boosted by increasing the receptive field of the discriminator, albeit at the price of increasing complexity with deeper architectures and losing spatial information, commonly leading to blurry effects and tiling artefacts. A better solution is to use the multi-scale discrimination setting to tackle high-resolution image processing without varying the discriminator architecture [25]. This is achieved using discriminators at different scales by downsampling the actual inputs. Therefore, keeping the original discriminator architecture fixed, variable receptive fields are obtained. These fields are larger at the coarsest levels, and the modified objective function in the GAN context is given as:

min_G max_{D_1, …, D_S} Σ_{k=1}^{S} L_cGAN(G, D_k)   (6)

where each discriminator D_k operates on a progressively downsampled version of the inputs, with k = 1, …, S, and S the number of discriminator scales.
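A shape-level numpy sketch of the multi-scale setup (the discriminator below is a stand-in patch scorer, and average pooling is assumed for the downsampling):

```python
import numpy as np

def downsample2x(x):
    """2x2 average pooling between discriminator scales. x: (N, H, W, C)."""
    n, h, w, c = x.shape
    return x.reshape(n, h // 2, 2, w // 2, 2, c).mean(axis=(2, 4))

def multiscale_scores(x, discriminator, n_scales=3):
    """Run the same discriminator on progressively downsampled inputs; the
    fixed architecture then covers an effectively larger receptive field at
    the coarser scales. `discriminator` is any callable returning a patch
    score map."""
    scores = []
    for _ in range(n_scales):
        scores.append(discriminator(x))
        x = downsample2x(x)
    return scores

# Stand-in patch "discriminator": mean score over 4x4 patches.
toy_d = lambda x: downsample2x(downsample2x(x)).mean(axis=-1)
outs = multiscale_scores(np.ones((1, 16, 16, 3)), toy_d)
```

Each score map halves in resolution, so the coarsest map judges patches that cover a proportionally larger region of the original image.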

IV Experiments

IV-A Data

Training examples are generated from the ImageNet dataset [3], in particular from the 1,000 synsets selected for the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012. Samples are selected from the reduced validation set, containing RGB images uniformly distributed across the classes. The test dataset is created by randomly selecting a fixed number of images per class from the training set. All images are resized to a fixed resolution and converted to the CIE Lab colour space.

IV-B Network architectures

The pix2pix framework is used as a GAN baseline. The generator consists of a U-Net encoder-decoder architecture. The encoder's blocks consist of strided convolutions with spectral normalisation, followed by the normalisation layers explained in Section III-B and a Leaky ReLU activation. The decoder blocks apply the same block composition but using ReLU activations. The last layer is a convolution with a tanh activation producing the two-dimensional output colour space. Skip connections between mirrored layers at the encoder and decoder are applied in order to recover the information lost during the downsampling operations. After the generator, the discriminator is used in the form of a fully convolutional PatchGAN. Its output layer is a convolution producing the output probability maps, and its input layer takes the concatenation of the original greyscale input and the original or generated colour channels. Regarding multi-scale discrimination, a setup of several discriminators is used, each receiving a further-downsampled version of the original input volumes.
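The skip-connection wiring of the U-Net generator can be sketched at the level of tensor shapes (convolutions and normalisation elided; only resolutions and channel counts are tracked):

```python
import numpy as np

def unet_forward(x, depth=3):
    """Shape-level sketch of U-Net skip connections: encoder feature maps at
    level i are concatenated with the decoder activations at the mirrored
    level, restoring spatial detail lost during downsampling. x: (H, W, C)."""
    skips = []
    for _ in range(depth):                      # encoder: halve resolution
        skips.append(x)
        x = x[::2, ::2, :]
    for skip in reversed(skips):                # decoder: double resolution
        x = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)
        x = np.concatenate([x, skip], axis=-1)  # skip connection
    return x
```

The output recovers the input resolution, with extra channels contributed by the concatenated encoder features at each level.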

Fig. 3: Learning curves of the generator during training.

Fig. 4: Comparison of colour histograms over the test data.

Fig. 5: Visual comparison of colourisation techniques. Note the improvement of the proposed configurations over the BL model: (1) the gain in colourfulness after applying SN, (2) the localisation improvement of MD and (3) the benefits of the IBN architecture over applying BN and IN separately.

IV-C Quantitative evaluation

Figure 3 illustrates the convergence behaviour of the adversarial and regression losses that form the generator's objective function. A poor response from the adversarial loss can be observed for the baseline pix2pix method, represented by the BN line, which rapidly collapses to a local minimum, giving all the weight of global convergence to the regression loss. A loss of colourfulness occurs after this point, where the regression loss abruptly starts to overfit, leading to the generation of desaturated colours. A considerable improvement results after adding spectral normalisation (the BN+SN line), where weight regularisation helps to stabilise the adversarial loss and slows down the convergence of the regression term, hence preserving colourfulness and preventing overfitting. The aforementioned behaviour can be validated by observing the IBN+SN line. Although instance normalisation leads to instability by increasing the variance of content-based features during training, a sudden improvement of the adversarial loss can be observed after a number of epochs, where the combination of both normalisation techniques leads to colour generalisation while penalising the regression loss and helping the system to prevent desaturation.

The effect of overfitting and lack of colourfulness can be evaluated by comparing deterministic measures, such as an averaged L1 or L2 distance, with perceptually-based ones designed to better capture the visual plausibility of the results. From the results summarised in Table I it can be observed that, unlike the perceptual evaluation, deterministic measures reflect poor performance for those models generating wider colour distributions; e.g. the chrominance distance of a red car colourised with a plausible blue will always be higher than that of the same car colourised with a desaturated colour. Additionally, the perceptual loss is computed using a VGG19 model for image classification [23] pretrained on ImageNet. As proposed in previous works [9, 25], the distance between the convolutional features produced by classifying real and generated samples is averaged as:

L_VGG = (1/|Λ|) Σ_{l ∈ Λ} (1/N_l) || φ_l(y) − φ_l(ŷ) ||   (7)

where y and ŷ are real and generated input tensors, φ_l(·) are the convolutional features from layers l ∈ Λ, and N_l is the number of features of each volume.
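Eq. (7) amounts to a size-normalised distance between feature volumes, averaged over the selected layers; a numpy sketch (the choice of L1 distance here is an assumption, since the norm is elided in the source):

```python
import numpy as np

def perceptual_loss(feats_real, feats_fake):
    """Average, over selected feature layers, the per-feature distance between
    activations of real and generated images. Each entry of the input lists is
    one feature volume; dividing by its size plays the role of N_l."""
    total = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        total += np.abs(fr - ff).sum() / fr.size
    return total / len(feats_real)
```

In practice the feature volumes would come from a pretrained VGG19; here any arrays of matching shapes stand in for them.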

Method PSNR []
IN 9.92 26.70 63.85
BN + SN 10.76 25.69 58.58
IN + SN 9.89 26.73 60.36
BN + SN + MD 11.50 25.11 58.52
IBN + SN + MD 11.20 25.32 57.77
BN (baseline) 9.83 26.77 64.05
TABLE I: Quantitative evaluation

Finally, colourfulness is evaluated by estimating the logarithmic colour distribution of the generated samples in the test dataset, comparing the proposed configurations with the prior distribution of the real data. As shown in Figure 4, SN provides improved colourfulness for both channels, reducing the distance to the real data distribution with respect to the baseline methodology (BN), which uses batch normalisation. The BN+SN setting is further improved by applying Multi-scale Discrimination (MD), which enables an increase in colourfulness by gaining detail in local and small areas. Examples of colourisation achieved by all analysed methods are presented in Figure 5.
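The histogram comparison of Figure 4 can be sketched as a histogram-intersection score between chroma distributions (the bin count and value range below are illustrative, not taken from the paper):

```python
import numpy as np

def histogram_intersection(samples_a, samples_b, bins=64, rng=(-110, 110)):
    """Compare colourfulness via normalised histograms of an ab chroma channel
    and their intersection area: 1.0 for identical distributions, lower as the
    distributions diverge."""
    ha, _ = np.histogram(samples_a, bins=bins, range=rng, density=True)
    hb, _ = np.histogram(samples_b, bins=bins, range=rng, density=True)
    bin_width = (rng[1] - rng[0]) / bins
    return float(np.minimum(ha, hb).sum() * bin_width)
```

A desaturated model concentrates its chroma values near zero, so its histogram overlaps the wide real-data distribution far less than that of a colourful model.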

V Conclusions

The work presented in this paper improved the state of the art for automatic colourisation using conditional adversarial networks. The proposed GAN architecture integrates techniques from the literature to ensure good training stability and to increase the contribution of the adversarial loss during training, which prevents the GAN from collapsing into desaturated colours. It was also shown that batch normalisation and instance normalisation can be integrated together in a fully convolutional encoder-decoder architecture within a GAN framework without lowering performance, while encouraging the assignment of more plausible colours. Finally, this work shows that boosting the performance of the adversarial framework reduces the desaturation effect, due to improved discrimination of unreliable colours.

References

  • [1] A. Bugeau, V. Ta, and N. Papadakis (2013) Variational exemplar-based image colorization. IEEE Trans. on Image Proc. 23 (1). Cited by: §II.
  • [2] C. Connolly and T. Fleiss (1997) A study of efficiency and accuracy in the transformation from RGB to CIELAB color space. IEEE Trans. on Image Proc. 6 (7). Cited by: §III.
  • [3] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In IEEE Conf. on Computer Vision and Pattern Recognition, Cited by: §IV-A.
  • [4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Proc. Systems, Cited by: §III-A.
  • [5] I. Goodfellow (2016) NIPS 2016 tutorial: Generative adversarial networks. arXiv:1701.00160. Cited by: §III-A.
  • [6] A. Haq, I. Gondal, and M. Murshed (2010) Automated multi-sensor color video fusion for nighttime video surveillance. In The IEEE symposium on Computers and Communications, Cited by: §I.
  • [7] S. Ioffe and C. Szegedy (2015) Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167. Cited by: §III-B.
  • [8] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Cited by: §I, §I, §II, §III-A, §III-A.
  • [9] J. Johnson, A. Alahi, and L. Fei-Fei (2016) Perceptual losses for real-time style transfer and super-resolution. In European Conf. on Computer Vision, Cited by: §IV-C.
  • [10] A. Krizhevsky, I. Sutskever, and G. E. Hinton (2012) Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Proc. Systems, Cited by: §I.
  • [11] P. Lagodzinski and B. Smolka (1995) Colorization of medical images. Pattern Recognition Letters 17. Cited by: §I.
  • [12] S. Lawrence, C. L. Giles, A. C. Tsoi, and A. D. Back (1997) Face recognition: A convolutional neural-network approach. IEEE Trans. on Neural Networks 8 (1). Cited by: §I.
  • [13] A. Levin, D. Lischinski, and Y. Weiss (2004) Colorization using optimization. In ACM Trans. on Graphics, Vol. 23. Cited by: §II.
  • [14] W. Markle and B. Hunt (1988-July 5) Coloring a black and white signal using motion detection. Google Patents. Note: US Patent 4,755,870 Cited by: §II.
  • [15] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida (2018) Spectral normalization for generative adversarial networks. arXiv:1802.05957. Cited by: item 2b, §III-C.
  • [16] S. G. Narasimhan and S. K. Nayar (2003) Contrast restoration of weather degraded images. IEEE Trans. on Pattern Analysis & Machine Intelligence (6). Cited by: §I.
  • [17] K. Nazeri, E. Ng, and M. Ebrahimi (2018) Image colorization using generative adversarial networks. In Int’l Conf. on Articulated Motion and Deformable Objects, Cited by: §III-A.
  • [18] X. Pan, P. Luo, J. Shi, and X. Tang (2018) Two at once: enhancing learning and generalization capacities via IBN-Net. In Proc. of the European Conf. on Computer Vision, Cited by: §I, §III-B.
  • [19] Y. Qu, T. Wong, and P. Heng (2006) Manga colorization. In ACM Trans. on Graphics, Vol. 25. Cited by: §II.
  • [20] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434. Cited by: §III-B.
  • [21] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In Int’l Conf. on Medical Image Computing and Computer-Assisted Intervention, Cited by: §III-A.
  • [22] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen (2016) Improved techniques for training GANs. In Advances in Neural Information Proc. Systems, Cited by: §II.
  • [23] K. Simonyan and A. Zisserman (2014) Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. Cited by: §IV-C.
  • [24] D. Ulyanov, A. Vedaldi, and V. Lempitsky (2016) Instance normalization: the missing ingredient for fast stylization. arXiv:1607.08022. Cited by: §III-B.
  • [25] T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz, and B. Catanzaro (2018) High-resolution image synthesis and semantic manipulation with conditional GANs. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, Cited by: §I, §II, §III-D, §IV-C.
  • [26] R. Zhang, P. Isola, and A. A. Efros (2016) Colorful image colorization. In European Conf. on Computer Vision, Cited by: §II, §II.
  • [27] R. Zhang, J. Zhu, P. Isola, X. Geng, A. S. Lin, T. Yu, and A. A. Efros (2017) Real-time user-guided image colorization with learned deep priors. arXiv:1705.02999. Cited by: §II.