Deep Inverse Tone Mapping Using LDR Based Learning for Estimating HDR Images with Absolute Luminance

by Yuma Kinoshita, et al., 02/28/2019

In this paper, a novel inverse tone mapping method using a convolutional neural network (CNN) with LDR based learning is proposed. In conventional inverse tone mapping with CNNs, generated HDR images cannot have absolute luminance, although they can have relative luminance. Moreover, designing loss functions suitable for learning HDR images is problematic, so it is difficult to train CNNs directly on HDR images. In contrast, the proposed method enables us not only to estimate absolute luminance but also to train a CNN by using LDR images. The CNN used in the proposed method learns a transformation from various input LDR images to LDR images mapped by Reinhard’s global operator. Experimental results show that HDR images generated by the proposed method have higher quality than those generated by conventional inverse tone mapping methods, in terms of HDR-VDP-2.2 and PU encoding + MS-SSIM.


1 Introduction

The low dynamic range (LDR) of modern digital cameras is a major factor preventing cameras from capturing scenes as well as human vision. This is due to the limited dynamic range of imaging sensors. For this reason, interest in high dynamic range (HDR) imaging has been increasing.

To generate an HDR image from a single LDR image, various research works on inverse tone mapping have been reported [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. The traditional approach to inverse tone mapping is to expand the dynamic range of input LDR images by using a fixed function or a specific parameterized function [1, 2, 3, 4, 5, 6, 7]. However, inverse tone mapping without prior knowledge is generally an ill-posed problem for the following two reasons: pixel values might be lost due to sensor saturation, and the non-linear camera response function (CRF) used at the time of photographing is unknown. Hence, HDR images produced by these methods have limited quality. To obtain high-quality HDR images, inverse tone mapping methods based on deep learning have recently attracted attention.

Several convolutional neural network (CNN) based inverse tone mapping methods have been proposed in the past [9, 8, 10]. These methods significantly improved the performance of inverse tone mapping, but they still have some problems. Eilertsen et al. [9] aim to reconstruct saturated areas in input LDR images via a CNN using a new loss function calculated in the logarithmic domain, but their method is applicable only when the CRF used at the time of photographing is given. Endo et al. [8] have proposed a CNN based method that produces a set of differently exposed images from a single LDR image, to avoid the difficulty of designing loss functions for HDR images. In the work by Marnerides et al. [10], an HDR image is directly produced by a CNN, where all HDR images are normalized into the interval [0, 1] instead of designing loss functions for HDR images. HDR images produced by Endo’s and Marnerides’ methods cannot have absolute luminance, although they can have relative luminance. Therefore, luminance calibration is necessary for all predicted HDR images.

Thus, in this paper, we propose a novel inverse tone mapping method using a CNN with LDR based learning. The proposed method enables us not only to estimate absolute luminance but also to train a CNN by using LDR images. Instead of learning a map from LDR images to HDR ones, the CNN used in the proposed method learns a transformation from various input LDR images to LDR images mapped by Reinhard’s global operator. Our inverse tone mapping is done by applying the inverse transform of Reinhard’s global operator to the LDR images produced by the CNN.

We evaluated the effectiveness of the proposed method in terms of the quality of generated HDR images, by a number of simulations. In the simulations, the proposed method was compared with state-of-the-art inverse tone mapping methods. Visual comparison results show that the proposed method can produce higher-quality HDR images than conventional methods. Moreover, the proposed method outperforms the conventional methods in terms of two objective quality metrics: HDR-VDP-2.2 and PU encoding + MS-SSIM.

2 Preparation

Figure 1 shows a typical imaging pipeline for a digital camera [11]. HDR images have pixel values that denote the radiant power density at the sensor, i.e., irradiance E. The goal of inverse tone mapping is to restore irradiance E from a single LDR image distorted by sensor saturation and a non-linear CRF.

Figure 1: Imaging pipeline of digital camera

2.1 Tone mapping

Tone mapping is an operation that generates an LDR image from an HDR image. Since HDR images correspond to irradiance E, tone mapping can be interpreted as virtually taking photos.

Reinhard’s global operator is one of the typical tone mapping methods [12]. Under this operator, each pixel value of LDR image l is calculated from HDR image E by

l(p) = L(p) / (1 + L(p)),   (1)
l_c(p) = (E_c(p) / E(p)) l(p),  c ∈ {R, G, B},   (2)

where p denotes a pixel, E(p) and E_c(p) denote the luminance and the color channels of E, and the scaled luminance L(p) is given by using two parameters α and Ḡ(E) as

L(p) = (α / Ḡ(E)) E(p).   (3)

In Reinhard’s global operator, the two parameters α and Ḡ(E) are used. Parameter α determines the brightness of an output LDR image, and Ḡ(E) is the geometric mean of the luminance of E, given by

Ḡ(E) = exp( (1 / |P|) Σ_{p∈P} log(max(E(p), ε)) ),   (4)

where P is the set of all pixels and ε is a small value to avoid singularities at E(p) = 0.
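
To make the operator concrete, the following is a minimal NumPy sketch of eqs. (1), (3), and (4) for the luminance channel. The function names and the default values α = 0.18 and ε = 10^−6 are our assumptions for illustration, not values prescribed by the paper.

```python
import numpy as np

def geometric_mean(lum, eps=1e-6):
    """Geometric mean of luminance, eq. (4); eps avoids log(0)."""
    return np.exp(np.mean(np.log(np.maximum(lum, eps))))

def reinhard_tonemap(hdr_lum, alpha=0.18, eps=1e-6):
    """Reinhard's global operator, eqs. (1) and (3).

    hdr_lum: array of HDR luminance values E(p).
    Returns the LDR luminance l(p), with values in [0, 1).
    """
    scaled = alpha / geometric_mean(hdr_lum, eps) * hdr_lum  # eq. (3): L(p)
    return scaled / (1.0 + scaled)                           # eq. (1): l(p)
```

Color images are handled by applying the operator to luminance and then restoring the RGB ratios as in eq. (2).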

2.2 Inverse transform of Reinhard’s global operator

Reinhard’s global operator is invertible when two conditions are satisfied: the two parameters α and Ḡ(E) are given, and LDR image l is not quantized. The inverse transform is given from eqs. (1), (2), and (3), as follows:

L(p) = l(p) / (1 − l(p)),   (5)
E(p) = (Ḡ(E) / α) L(p).   (6)

Therefore, when LDR images are generated by Reinhard’s global operator, HDR images can be reconstructed under these conditions, and the reconstructed HDR images have the same absolute luminance as the original HDR ones.

The literature [7] showed that parameter Ḡ(E) can be calculated by using parameter α and LDR image l, as

Ḡ(E) = (Ḡ(E) / α) exp( (1 / |P|) Σ_{p∈P} log(max(K(p), ε')) ),   (7)

where eq. (7) is obtained by substituting E(p) of eq. (6) into eq. (4), K(p) = l(p) / (1 − l(p)), and ε' = αε / Ḡ(E). Since eq. (7) can be solved for Ḡ(E), the inverse transform of Reinhard’s global operator is done by using only α.
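
A corresponding sketch of the inverse transform of eqs. (5) and (6), assuming Ḡ(E) has already been obtained (e.g., by solving eq. (7)); the guard against l(p) = 1 is ours:

```python
import numpy as np

def inverse_reinhard(ldr_lum, g, alpha=0.18):
    """Inverse of Reinhard's global operator, eqs. (5) and (6).

    ldr_lum: LDR luminance l(p) in [0, 1); g: geometric mean G(E).
    Returns the reconstructed HDR luminance E(p) in absolute scale.
    """
    l = np.minimum(ldr_lum, 1.0 - 1e-6)  # guard against division by zero at l(p) = 1
    scaled = l / (1.0 - l)               # eq. (5): L(p)
    return g / alpha * scaled            # eq. (6): E(p)
```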

However, ordinary LDR images are not produced by Reinhard’s global operator. In this paper, we aim to transform any LDR image into an LDR image as produced by Reinhard’s global operator, by using a CNN. This idea leads to a novel inverse tone mapping method that consists of the CNN and the inverse transform of Reinhard’s global operator.

3 Proposed inverse tone mapping

The proposed inverse tone mapping operation is described here.

3.1 Overview

The following is an overview of our training procedure and predicting procedure (see Fig. 2).

Figure 2: Proposed inverse tone mapping

Training

  1. Generate input LDR image l_in from HDR image E by using a virtual camera with various CRFs. This corresponds to assuming that input LDR images are captured with various cameras.

  2. Generate target LDR image l_t from HDR image E by using Reinhard’s global operator (eq. (1) to eq. (4)) with parameter α, where α is chosen so that the average luminance of E is mapped to the middle gray [12].

  3. Train a CNN to transform input LDR image l_in into target LDR image l_t.

Detailed training conditions will be shown in 3.3.

Predicting

  1. Let l be an LDR image taken with a digital camera.

  2. Transform l into l̂ by using the trained CNN. This transformation aims to estimate an LDR image generated by Reinhard’s global operator.

  3. Generate an HDR image Ê from l̂ by using eq. (5) to eq. (7), where parameter α used for generating target images is also utilized in inverse tone mapping.

In the proposed method, all target LDR images are generated with a fixed parameter α. Therefore, α can be used when the inverse transform of Reinhard’s global operator is applied to LDR images generated by our CNN. As described in 2.2, the other parameter Ḡ(E) can be calculated by using eq. (7) with α and LDR image l̂, so HDR images with absolute luminance can be estimated.
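
Putting the pieces together, the predicting procedure can be sketched as follows. Here, `cnn`, `rgb_to_luminance`, `estimate_g` (standing in for eq. (7)), and `scale_rgb` are hypothetical helpers, and α = 0.18 is an assumed value of the fixed parameter.

```python
def predict_hdr(ldr_rgb, cnn, alpha=0.18):
    # Step 2: estimate a Reinhard-mapped LDR image with the trained CNN.
    l_hat = cnn(ldr_rgb)                       # sigmoid output, values in [0, 1)
    lum = rgb_to_luminance(l_hat)              # hypothetical helper
    # Step 3: invert Reinhard's global operator, eqs. (5) to (7).
    g = estimate_g(lum, alpha)                 # hypothetical helper for eq. (7)
    hdr_lum = inverse_reinhard(lum, g, alpha)  # eqs. (5) and (6), see 2.2
    return scale_rgb(l_hat, lum, hdr_lum)      # restore RGB ratios, cf. eq. (2)
```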

3.2 Network architecture

Figure 3 shows the network architecture of the CNN used in the proposed method. The CNN is designed on the basis of U-Net [13]. The input of this CNN is a 24-bit color LDR image with a fixed size.

Figure 3: Network architecture. Each box denotes a multi-channel feature map produced by each layer (or block). The number of channels is denoted on the top or bottom of each box. The resolution of each feature map is provided at the left edge of the box.

Each convolutional block consists of two convolutional layers that have the same number of filters, and all filters in the convolutional blocks have a size of 3 × 3. Following the U-Net design [13], the number of filters per block increases block by block from the first block to the bottleneck and decreases again toward the last block. Max pooling layers with a kernel size of 2 × 2 and a stride of 2 are utilized for image downsampling.

For image upsampling, transposed convolutional layers with a stride of 2 and a filter size of 2 × 2 are used in the proposed method. The numbers of filters in the transposed convolutional layers mirror those of the corresponding convolutional blocks in the contracting path. Finally, an output LDR image is produced by a convolutional layer that has three filters with a size of 1 × 1.

The rectified linear unit (ReLU) activation function [14] is used for all convolutional and transposed convolutional layers except the final convolutional layer. Further, batch normalization [15] is applied to the outputs of the ReLU functions after the convolutional layers. The activation function of the final layer is a sigmoid function.
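
As a concrete illustration, the following is a compact PyTorch sketch of a U-Net with eleven convolutional blocks and five transposed convolutional layers, matching the counts above. The channel widths in `chs` are our assumption, since the exact numbers of filters are not reproduced here.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions, each followed by ReLU and then batch normalization."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(c_out),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(), nn.BatchNorm2d(c_out),
    )

class UNet(nn.Module):
    def __init__(self, chs=(64, 128, 256, 512, 1024, 1024)):  # assumed widths
        super().__init__()
        self.down = nn.ModuleList()
        c_prev = 3                                    # 24-bit color input
        for c in chs:
            self.down.append(conv_block(c_prev, c))   # six encoder blocks
            c_prev = c
        self.pool = nn.MaxPool2d(2, stride=2)         # 2x2 max pooling, stride 2
        self.up = nn.ModuleList()
        self.dec = nn.ModuleList()
        for c in reversed(chs[:-1]):                  # five upsampling stages
            self.up.append(nn.ConvTranspose2d(c_prev, c, 2, stride=2))
            self.dec.append(conv_block(2 * c, c))     # skip concatenation doubles channels
            c_prev = c
        self.final = nn.Conv2d(c_prev, 3, 1)          # three 1x1 filters

    def forward(self, x):
        skips = []
        for i, block in enumerate(self.down):
            x = block(x)
            if i < len(self.down) - 1:                # no pooling after the bottleneck
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.up, self.dec, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.final(x))           # sigmoid output
```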

3.3 Training

Many LDR images taken under various conditions, together with corresponding LDR images generated by Reinhard’s global operator, are needed for training the CNN in the proposed method. However, collecting such images in a sufficient amount is difficult. We therefore utilize HDR images to generate both input images and target images, by using a virtual camera [9] and Reinhard’s global operator, respectively. For training, a total of 978 HDR images were collected from online available databases [16, 17, 18, 19, 20, 21].

The training procedure of our CNN is as follows.

  i. Select eight HDR images from the 978 HDR images at random.

  ii. Generate eight pairs of an input LDR image and a target LDR image, one pair from each of the eight HDR images. Each pair is generated according to the following steps; a sketch of steps (e) to (g) is given after this list.

    (a) Crop an HDR image E to a square image patch E'. The patch size is given as the product of a uniform random number and the length of the short side of E. In addition, the position of the patch in the HDR image is also determined at random.

    (b) Resize E' to the input resolution of the CNN.

    (c) Flip E' upside down with probability 0.5.

    (d) Flip E' left and right with probability 0.5.

    (e) Calculate exposure X from E' as X = Δt E', where shutter speed Δt is determined by using a uniform random number.

    (f) Generate an input LDR image l_in from X by a virtual camera f, as

      l_in(p) = f(X(p)) = (1 + σ) X(p)^n / (X(p)^n + σ),   (8)

      where n and σ are random numbers that follow normal distributions with mean 0.9 and variance 0.1 and with mean 0.6 and variance 0.1, respectively, following the camera model in [9].

    (g) Generate a target LDR image l_t from E' by Reinhard’s global operator (see eq. (1) to eq. (4)) with parameter α.

  iii. Predict eight LDR images from the eight input LDR images by the CNN.

  iv. Evaluate the errors between the predicted images and the target images by using the mean squared error.

  v. Update the filter weights and biases in the CNN by back-propagation.

Note that steps ii(f) and ii(g) are applied to the luminance of E', and RGB pixel values of l_in and l_t are then obtained so that the ratios of the RGB values of the LDR images are equal to those of the HDR images.
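
The following sketch summarizes steps ii(e) to ii(g) for the luminance channel, reusing `geometric_mean` and `reinhard_tonemap` from the sketch in 2.1. The shutter-speed model, the range of the random number u, and the clipping of the exposure to [0, 1] are our assumptions, since the corresponding values are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng()

def virtual_camera(exposure, n, sigma):
    """Virtual camera CRF of eq. (8), following [9]."""
    x = np.clip(exposure, 0.0, 1.0)                   # assumed sensor saturation
    return (1.0 + sigma) * x**n / (x**n + sigma)

def make_training_pair(hdr_lum, alpha=0.18):
    # Step ii(e): random exposure; this shutter-speed model is an assumption.
    u = rng.uniform(-1.0, 1.0)
    dt = alpha * 2.0**u / geometric_mean(hdr_lum)
    # Step ii(f): input LDR image via the virtual camera, eq. (8);
    # a variance of 0.1 corresponds to a standard deviation of sqrt(0.1).
    n = max(rng.normal(0.9, np.sqrt(0.1)), 0.1)       # guards keep eq. (8) well defined
    sigma = max(rng.normal(0.6, np.sqrt(0.1)), 1e-3)
    ldr_in = virtual_camera(dt * hdr_lum, n, sigma)
    # Step ii(g): target LDR image via Reinhard's global operator, eqs. (1) to (4).
    ldr_target = reinhard_tonemap(hdr_lum, alpha)
    return ldr_in, ldr_target
```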

In our experiments, the CNN was trained for 5000 epochs, where the above procedure was repeated 122 times in each epoch. In addition, each HDR image had only one chance to be selected in step i in each epoch. He’s method [22] was used for initialization of the CNN. In addition, the Adam optimizer [23] was utilized for optimization, where the parameters of Adam were set to fixed values.

4 Simulation

We evaluated the effectiveness of the proposed method by using two objective quality metrics.

4.1 Simulation conditions

In this experiment, test LDR images were generated from seven HDR images that were not used for training, according to steps ii(a) to ii(f) in 3.3.

The quality of HDR images generated by the proposed method is evaluated by two objective quality metrics: HDR-VDP-2.2 [24] and PU encoding [25] with MS-SSIM [26], both of which utilize an original HDR image as a reference. The literature [27] showed that these metrics are suitable for evaluating the quality of HDR images.

The proposed method is compared with three state-of-the-art methods: the direct inverse tone mapping operator (Direct ITMO) [7], pseudo-multi-exposure-based tone fusion (PMET) [5], and deep reverse tone mapping (DrTMO) [8], where the third method is based on CNNs and the other two are not based on machine learning.

4.2 Results

Figures 4 and 5 show examples of HDR images generated by the four methods. Here, these images were tone-mapped from the generated HDR images because HDR images cannot be displayed on commonly used LDR devices.

(a) Input
(b) Direct ITMO[7]
(c) PMET[5]
(d) DrTMO[8]
(e) Proposed
(f) Ground truth
Figure 4: Experimental Results (Image 1)
(a) Input
(b) Direct ITMO[7]
(c) PMET[5]
(d) DrTMO[8]
(e) Proposed
(f) Ground truth
Figure 5: Experimental Results (Image 5)

From Fig. 4, it is confirmed that the proposed method produced a higher-quality HDR image, closer to the original HDR one, than the other methods. The image generated by DrTMO includes unnatural color distortion because DrTMO produced extremely bright multi-exposure images. Figure 5 shows that the proposed method and DrTMO produced images that clearly represent details in dark regions, whereas these details are unclear in the images generated by Direct ITMO and PMET. Therefore, the proposed method can produce higher-quality HDR images, which are similar to the original HDR images having absolute luminance, than the other methods can. In addition, the proposed method has a lower computational cost than DrTMO because DrTMO utilizes two CNNs having 3D deconvolution layers.

Tables 1 and 2 show the results of objective assessment in terms of HDR-VDP-2.2 and PU encoding + MS-SSIM. As shown in Table 1, the proposed method provided the highest HDR-VDP scores among the four methods for five of the seven images. Since HDR-VDP evaluates HDR images on an absolute luminance scale, this result indicates that the proposed method estimates absolute luminance with higher accuracy than the other methods. Moreover, the proposed method provided the highest MS-SSIM scores for all images (see Table 2). For these reasons, the proposed method outperforms the conventional methods in terms of both HDR-VDP-2.2 and PU encoding + MS-SSIM.

          Direct ITMO [7]   PMET [5]   DrTMO [8]   Proposed
Image 1        32.89          32.89      47.92       50.94
Image 2        31.19          31.19      31.41       35.17
Image 3        28.57          28.57      45.83       44.86
Image 4        56.93          49.94      48.47       56.39
Image 5        39.38          39.38      34.33       48.24
Image 6        33.06          33.06      48.26       57.61
Image 7        44.77          44.77      43.82       61.49
Table 1: HDR-VDP-2.2 scores
          Direct ITMO [7]   PMET [5]   DrTMO [8]   Proposed
Image 1        0.6399         0.7750     0.4912      0.8828
Image 2        0.0599         0.0641     0.2382      0.8059
Image 3        0.7444         0.7393     0.5566      0.8502
Image 4        0.8445         0.5893     0.6624      0.9143
Image 5        0.4358         0.5734     0.1729      0.6641
Image 6        0.9156         0.7982     0.4083      0.9696
Image 7        0.5906         0.6997     0.6042      0.9804
Table 2: PU encoding + MS-SSIM scores

These experimental results show that the proposed method is effective for generating high-quality HDR images from single LDR images.

5 Conclusion

In this paper, a novel inverse tone mapping method using a CNN with LDR based learning was proposed. By using LDR images mapped by Reinhard’s global operator for training, the proposed method enables us to estimate absolute luminance without specific loss functions for HDR images. Moreover, the proposed method has not only higher performance but also a simpler network architecture than conventional CNN based methods. Experimental results showed that the proposed method outperforms state-of-the-art inverse tone mapping methods in terms of visual comparison, HDR-VDP-2.2, and PU encoding + MS-SSIM.

In future work, we will compare the computational cost of the proposed method with that of conventional methods.

References

  • [1] F. Banterle, P. Ledda, K. Debattista, and A. Chalmers, “Inverse tone mapping,” in Proceedings of the 4th international conference on Computer graphics and interactive techniques in Australasia and Southeast Asia.   ACM, 2006, pp. 349–356.
  • [2] A. G. Rempel, M. Trentacoste, H. Seetzen, H. D. Young, W. Heidrich, L. Whitehead, and G. Ward, “Ldr2hdr: on-the-fly reverse tone mapping of legacy video and photographs,” ACM Transactions on Graphics (TOG), vol. 26, no. 3, p. 39, 2007.
  • [3] P.-H. Kuo, C.-S. Tang, and S.-Y. Chien, “Content-adaptive inverse tone mapping,” in Visual Communications and Image Processing (VCIP).   IEEE, 2012, pp. 1–6.
  • [4] H. Youngquing, Y. Fan, and V. Brost, “Dodging and burning inspired inverse tone mapping algorithm,” Journal of Computational Information Systems, vol. 9, no. 9, pp. 3461–3468, 2013.
  • [5] T.-H. Wang, C.-W. Chiu, W.-C. Wu, J.-W. Wang, C.-Y. Lin, C.-T. Chiu, and J.-J. Liou, “Pseudo-multiple-exposure-based tone fusion with local region adjustment,” IEEE Transactions on Multimedia, vol. 17, no. 4, pp. 470–484, 2015.
  • [6] Y. Kinoshita, S. Shiota, and H. Kiya, “Fast inverse tone mapping with Reinhard’s global operator,” in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).   IEEE, 2017, pp. 1972–1976.
  • [7] ——, “Fast inverse tone mapping based on reinhard’s global operator with estimated parameters,” IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol. 100, no. 11, pp. 2248–2255, 2017.
  • [8] Y. Endo, Y. Kanamori, and J. Mitani, “Deep reverse tone mapping,” ACM Transactions on Graphics (Proc. of SIGGRAPH ASIA 2017), vol. 36, no. 6, pp. 177:1–177:10, Nov. 2017.
  • [9] G. Eilertsen, J. Kronander, G. Denes, R. Mantiuk, and J. Unger, “HDR image reconstruction from a single exposure using deep CNNs,” ACM Transactions on Graphics (TOG), vol. 36, no. 6, pp. 178:1–178:15, 2017.
  • [10] D. Marnerides, T. Bashford-Rogers, J. Hatchett, and K. Debattista, “Expandnet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content,” in Computer Graphics Forum, vol. 37, no. 2.   Wiley Online Library, 2018, pp. 37–49.
  • [11] F. Dufaux, P. L. Callet, R. Mantiuk, and M. Mrak, High Dynamic Range Video, From Acquisition, to Display and Applications.   Elsevier Ltd., 2016.
  • [12] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic tone reproduction for digital images,” ACM Transactions on Graphics (TOG), vol. 21, no. 3, pp. 267–276, 2002.
  • [13] O. Ronneberger, P.Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Medical Image Computing and Computer-Assisted Intervention (MICCAI), ser. LNCS, vol. 9351.   Springer, 2015, pp. 234–241, (available on arXiv:1505.04597 [cs.CV]). [Online]. Available: http://lmb.informatik.uni-freiburg.de/Publications/2015/RFB15a
  • [14] X. Glorot, A. Bordes, and Y. Bengio, “Deep sparse rectifier neural networks,” in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 2011, pp. 315–323.
  • [15] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” arXiv preprint arXiv:1502.03167, pp. 1–11, 2015.
  • [16] “Github - openexr.” [Online]. Available: https://github.com/openexr/
  • [17] “High dynamic range image examples.” [Online]. Available: http://www.anyhere.com/gward/hdrenc/pages/originals.html
  • [18] “The HDR photographic survey.” [Online]. Available: http://rit-mcsl.org/fairchild/HDRPS/HDRthumbs.html
  • [19] “EMPA HDR images dataset,” this dataset is unavailable now. [Online]. Available: http://empamedia.ethz.ch/hdrdatabase/index.php
  • [20] H. Nemoto, P. Korshunov, P. Hanhart, and T. Ebrahimi, “Visual attention in LDR and HDR images,” in 9th International Workshop on Video Processing and Quality Metrics for Consumer Electronics (VPQM), 2015, pp. 1–6. [Online]. Available: http://mmspg.epfl.ch/hdr-eye
  • [21] “Max planck institut informatik.” [Online]. Available: http://resources.mpi-inf.mpg.de/hdr/gallery.html
  • [22] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
  • [23] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, pp. 1–15, 2014.
  • [24] M. Narwaria, R. K. Mantiuk, M. P. Da Silva, and P. Le Callet, “HDR-VDP-2.2: A calibrated method for objective quality prediction of high-dynamic range and standard images,” Journal of Electronic Imaging, vol. 24, no. 1, p. 010501, 2015.
  • [25] T. O. Aydın, R. Mantiuk, and H.-P. Seidel, “Extending quality metrics to full luminance range images,” in Electronic Imaging 2008.   International Society for Optics and Photonics, 2008, p. 68060B.
  • [26] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in Conference Record of the Thirty-Seventh Asilomar Conference on Signals, Systems and Computers, vol. 2.   IEEE, 2003, pp. 1398–1402.
  • [27] P. Hanhart, M. V. Bernardo, M. Pereira, A. M. Pinheiro, and T. Ebrahimi, “Benchmarking of objective quality metrics for hdr image quality assessment,” EURASIP Journal on Image and Video Processing, vol. 2015, no. 1, pp. 1–18, 2015.