1 Introduction
There is no doubt that high-quality images play a critical role in computer vision tasks such as object detection and scene understanding. Unfortunately, the images obtained in reality are often degraded. For example, images captured in low-light conditions suffer from very low contrast and brightness, which greatly increases the difficulty of subsequent high-level tasks. Figure 1(a) provides one case, in which many details are buried in the dark background. Because in many situations only low-light images can be captured, several low-light image enhancement methods have been proposed to overcome this problem. In general, these methods can be categorized into two groups: histogram-based methods and Retinex-based methods.

In this paper, we propose a novel low-light image enhancement model based on convolutional neural networks and Retinex theory. To the best of our knowledge, this is the first work to combine a convolutional neural network with Retinex theory for low-light image enhancement. First, we show that multi-scale Retinex is equivalent to a feed-forward convolutional neural network with different Gaussian convolution kernels. The main drawback of multi-scale Retinex is that the kernel parameters depend on manual settings rather than being learned from data, which limits the accuracy and flexibility of the model. Motivated by this fact, we put forward a convolutional neural network (MSR-net) that directly learns an end-to-end mapping between dark and bright images. Our method differs fundamentally from existing approaches: we regard low-light image enhancement as a supervised learning problem. Furthermore, the surround functions in Retinex theory [19] are formulated as convolutional layers whose parameters are optimized by backpropagation.

Overall, the contribution of our work can be summarized in three aspects. First, we establish a relationship between multi-scale Retinex and a feed-forward convolutional neural network. Second, we formulate low-light image enhancement as a supervised learning problem in which dark and bright images are treated as input and output, respectively. Finally, experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods. Figure 1 gives an example: our method achieves a brighter and more natural result with clearer texture and richer details.
2 Related Work
2.1 Low-light Image Enhancement
In general, low-light image enhancement methods can be categorized into two groups: histogram-based methods and Retinex-based methods.
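As a concrete reference for the histogram-based family, the two simplest operations, histogram equalization and gamma correction, can be sketched in numpy (illustrative implementations, not the exact variants evaluated in this paper):

```python
import numpy as np

def histogram_equalization(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size            # normalized cumulative histogram
    lut = np.round(255 * cdf).astype(np.uint8)
    return lut[img]                           # remap every pixel through the CDF

def gamma_correction(img, gamma=0.5):
    """Gamma < 1 expands dark regions and compresses bright ones."""
    x = img.astype(np.float64) / 255.0
    return np.round(255 * x ** gamma).astype(np.uint8)

dark = (np.random.rand(64, 64) * 60).astype(np.uint8)     # synthetic dark image
print(histogram_equalization(dark).mean() > dark.mean())  # True: brighter after HE
print(gamma_correction(dark).mean() > dark.mean())        # True: brighter after gamma < 1
```

Both operations remap each pixel by a single global curve, which is exactly the per-pixel limitation the text discusses next.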
Directly amplifying the low-light image by a histogram transformation is probably the most intuitive way to lighten a dark image. One of the simplest and most widely used techniques is histogram equalization (HE), which makes the histogram of the whole image as balanced as possible. Gamma correction is another common method, enhancing contrast and brightness by expanding dark regions and compressing bright ones at the same time. However, the main drawback of these methods is that each pixel is treated individually, without considering its neighborhood, which can make the result look inconsistent with real scenes. To resolve the problems mentioned above, variational methods that place different regularization terms on the histogram have been proposed. For example, contextual and variational contrast enhancement
[5] tries to find a histogram mapping that produces large gray-level differences.

In this work, Retinex-based methods receive more attention. Retinex theory was introduced by Land [19] to explain the color perception property of the human visual system. The dominant assumption of Retinex theory is that an image can be decomposed into reflection and illumination. Single-scale Retinex (SSR) [17], based on the center/surround Retinex, is similar to the difference-of-Gaussian (DoG) function widely used in natural vision science, and it treats the reflectance as the final enhanced result. Multi-scale Retinex (MSR) [16] can be considered a weighted sum of several different SSR outputs. However, the results of these methods often look unnatural. Modified MSR [16] therefore applies a color restoration function (CRF) in the chromaticity space to eliminate the color distortions and gray zones evident in the MSR output. Recently, the method proposed in [11] estimates the illumination of each pixel by finding the maximum value over the R, G and B channels, and then refines the initial illumination map by imposing a structure prior on it. Park et al. [23] use a variational-optimization-based Retinex algorithm to enhance low-light images. Fu et al. [10] propose a weighted variational model to estimate both the reflection and the illumination; unlike conventional variational models, it preserves more details in the estimated reflectance. Inspired by the dark channel prior for dehazing, [8] observes that an inverted low-light image looks like a hazy image; they dehaze the inverted image using the method proposed in [12] and then invert the result again to obtain the final output.

2.2 Convolutional Neural Network for Low-level Vision Tasks
Recently, the powerful capability of deep neural networks [18] has led to dramatic improvements in object recognition [13], object detection [24], object tracking [29], semantic segmentation [20] and so on. Besides these high-level vision tasks, deep learning has also shown great ability in low-level vision tasks. For instance, Dong et al. [7] train a deep convolutional neural network (SRCNN) to accomplish image super-resolution. Fu et al. [9] remove rain from single images via a deep detail network. Cai et al. [4] propose a trainable end-to-end system named DehazeNet, which takes a hazy image as input and outputs its medium transmission map, which is subsequently used to recover a haze-free image via the atmospheric scattering model.

3 CNN Network for Low-light Image Enhancement
We first show, from a novel perspective, that multi-scale Retinex as a low-light image enhancement method is equivalent to a feed-forward convolutional neural network with different Gaussian convolution kernels. Subsequently, we propose a convolutional neural network (MSR-net) that directly learns an end-to-end mapping between dark and bright images.
3.1 Multi-scale Retinex is a CNN
The dominant assumption of Retinex theory is that an image can be decomposed into reflection and illumination:
$S(x,y) = R(x,y) \cdot L(x,y)$  (1)
where $S(x,y)$ and $R(x,y)$ represent the captured image and the desired recovery (the reflectance), respectively, and $L(x,y)$ is the illumination. Single-scale Retinex (SSR) [17], based on the center/surround Retinex, is similar to the difference-of-Gaussian (DoG) function widely used in natural vision science. Mathematically, this takes the form
$R_i(x,y) = \log S_i(x,y) - \log\big(F(x,y) * S_i(x,y)\big)$  (2)
where $R_i$ is the associated Retinex output, $S_i$ is the image distribution in the $i$-th color spectral band, $*$ denotes the convolution operation, and $F$ is the Gaussian surround function
$F(x,y) = K \exp\!\big(-(x^2+y^2)/c^2\big)$  (3)

where $c$ is the scale of the surround and $K$ is selected such that $\iint F(x,y)\,dx\,dy = 1$.
By changing the position of the logarithm in Equation 2 and setting

$s_i = \log S_i$,  (4)

we obtain

$r_i(x,y) = s_i(x,y) - F(x,y) * s_i(x,y)$  (5)
In this way we obtain a classic high-pass linear filter, but applied to $\log S$ instead of $S$. The two formulas, Equation 2 and Equation 5, are of course not equivalent in mathematical form: the former is the logarithm of the ratio between the image and a weighted average of it, while the latter is the logarithm of the ratio between the image and a weighted product. In effect, this amounts to choosing between an arithmetic mean and a geometric mean. Experiments show that the two choices make little practical difference, so in this work we choose the latter for simplicity.
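The SSR high-pass filtering described above (Equation 5) can be sketched in one dimension with numpy, assuming simple edge padding at the boundary; the two-dimensional case is analogous:

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian surround with standard deviation sigma."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def single_scale_retinex_1d(signal, sigma):
    """SSR in the log domain (a 1-D sketch of Equation 5):
    r = log S - F * log S, a high-pass filter applied to the log signal."""
    log_s = np.log1p(np.asarray(signal, dtype=np.float64))  # log(1 + S) avoids log(0)
    k = gaussian_kernel(sigma)
    padded = np.pad(log_s, len(k) // 2, mode='edge')        # avoid zero-pad artifacts
    surround = np.convolve(padded, k, mode='valid')         # F * log S
    return log_s - surround

s = 50.0 + 150.0 * (np.arange(256) > 128)   # dark-to-bright step edge
r = single_scale_retinex_1d(s, sigma=10)
print(np.abs(r[:50]).max() < 1e-9)          # True: flat regions map to ~0
print(np.abs(r).max() > 0.5)                # True: response concentrates at the edge
```

As a high-pass operator, the filter suppresses the smooth illumination component and keeps the local reflectance detail around the edge.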
Further, multi-scale Retinex (MSR) [16] is a weighted sum of several different SSR outputs. Mathematically,
$R_{MSR_i} = \sum_{n=1}^{N} w_n R_{n_i}$  (6)
where $N$ is the number of scales, $R_{n_i}$ denotes the $i$-th component of the $n$-th scale, $R_{MSR_i}$ represents the $i$-th spectral component of the MSR output, and $w_n$ is the weight associated with the $n$-th scale.
After experimenting with one small scale (standard deviation $c_1$) and one large scale ($c_3$), the need for a third intermediate scale ($c_2$) is immediately apparent in order to eliminate the visible "halo" artifacts near strong edges [16]. Thus, with $N = 3$, the formula is as follows:

$r_i = \sum_{n=1}^{3} w_n \big( s_i - F_n * s_i \big)$  (7)
More concretely, we have

$r_i = \sum_{n=1}^{3} w_n \Big( s_i - K_n\, e^{-(x^2+y^2)/c_n^2} * s_i \Big), \quad c_1 < c_2 < c_3$  (8)
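The cascading construction used next relies on the fact that convolving two Gaussian kernels yields a Gaussian whose variance is the sum of the two variances. A quick numerical check in numpy:

```python
import numpy as np

def gaussian_kernel(sigma, radius=60):
    """Normalized 1-D Gaussian kernel with standard deviation sigma."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

g1 = gaussian_kernel(3.0)
g2 = gaussian_kernel(4.0)
cascade = np.convolve(g1, g2)                 # cascading the two smoothing layers

# The cascaded kernel is Gaussian with variance 3^2 + 4^2 = 25.
x = np.arange(len(cascade)) - (len(cascade) - 1) / 2
variance = (cascade * x**2).sum()
print(round(variance, 3))                     # 25.0
```

This is why a stack of small Gaussian layers can reproduce smoothing at progressively larger scales.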
Note that the convolution of two Gaussian functions is still a Gaussian function, whose variance equals the sum of the two original variances. Therefore, we can represent Equation 8 using the cascading structure shown in Figure 2(a). The three cascading convolution layers act as three different Gaussian kernels: the kernel of the first convolution layer is a Gaussian with variance $c_1^2$, while the second and third convolution layers use Gaussians with variances $c_2^2 - c_1^2$ and $c_3^2 - c_2^2$, so that their cascaded outputs correspond to smoothing with variances $c_2^2$ and $c_3^2$, respectively. Finally, the concatenation and convolution layers implement the weighted average. In a word, multi-scale Retinex is practically equivalent to a feed-forward convolutional neural network with a residual structure.

3.2 Proposed Method
In the previous section, we established that multi-scale Retinex is equivalent to a feed-forward convolutional neural network. In this section, inspired by this fact, we design a convolutional neural network to solve the low-light image enhancement problem. Our method, outlined in Figure 2(b), differs fundamentally from existing approaches in that it treats low-light image enhancement as a supervised learning problem. The input and output data correspond to the low-light and bright images, respectively. More details about our training dataset are given in Section 4.
Our model consists of three components: multi-scale logarithmic transformation, difference-of-convolution, and the color restoration function. Compared to the single-scale logarithmic transformation in MSR, our model uses a multi-scale logarithmic transformation, which we have verified achieves better performance in practice; Figure 7 gives an example. Difference-of-convolution plays a role analogous to the difference-of-Gaussian in MSR, and so does the color restoration function. The main difference between our model and the original MSR is that most of the parameters in our model are learned from the training data, while the parameters in MSR, such as the variances and other constants, depend on manual settings.
Formally, we denote the low-light input image by $X$ and the corresponding bright image by $Y$. Let $f_1$, $f_2$ and $f_3$ denote the three sub-functions: multi-scale logarithmic transformation, difference-of-convolution, and color restoration. Our model can be written as a composition of the three functions:

$f(X) = f_3\!\big(f_2\big(f_1(X)\big)\big)$  (9)
Multi-scale logarithmic transformation: The multi-scale logarithmic transformation $f_1$ takes the original low-light image $X$ as input and computes an output $X_1$ of the same size. First, the dark image is enhanced by several logarithmic transformations with different scales:

$M_j = \log_{v_j+1}\!\big(1 + v_j \cdot X\big), \quad j = 1, \dots, n$  (10)
where $M_j$ denotes the output of the $j$-th scale with logarithmic scale $v_j$, and $n$ denotes the number of logarithmic transformation functions. Next, we concatenate these 3D tensors (3 channels × width × height) into a larger 3D tensor ($3n$ channels × width × height) and pass it through convolutional and ReLU layers:
$M = \max\!\big(0,\; [M_1, M_2, \dots, M_n] * W_{-1} + b_{-1}\big)$  (11)

$X_1 = M * W_0 + b_0$  (12)
where $*$ denotes the convolution operator, $W_{-1}$ is a convolution kernel that shrinks the $3n$ channels to 3 channels, $\max(0, \cdot)$ corresponds to a ReLU, and $W_0$ is a convolution kernel with three output channels for better nonlinear representation. As the above operations show, this part is mainly designed to obtain a better image via weighted sums of multiple logarithmic transformations, which accelerates the convergence of the network.
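The multi-scale logarithmic transformation can be sketched in numpy, assuming the form $\log_{v+1}(1 + v \cdot x)$ for Equation 10 (equivalently $\log(1 + v x)/\log(1 + v)$) and illustrative scale values:

```python
import numpy as np

def multi_scale_log(x, scales=(1.0, 10.0, 100.0)):
    """Apply log_{v+1}(1 + v*x) for several scales v and stack the results
    along a new channel axis. x is expected in [0, 1]; each transform maps
    [0, 1] onto [0, 1], brightening dark values more for larger v.
    The scale values here are illustrative, not the trained network's."""
    outs = [np.log1p(v * x) / np.log1p(v) for v in scales]  # change-of-base form
    return np.stack(outs, axis=-1)

x = np.linspace(0.0, 1.0, 5)
m = multi_scale_log(x)
print(m.shape)       # (5, 3)
print(m[-1])         # [1. 1. 1.] : every scale maps 1.0 to 1.0
```

Stacking the transforms gives the $3n$-channel tensor that the subsequent convolution collapses back to 3 channels.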
Difference-of-convolution: The difference-of-convolution function $f_2$ takes $X_1$ as input and computes an output $X_2$ of the same size. First, $X_1$ passes through multiple convolutional layers:
$H_0 = X_1$  (13)

$H_m = \max\!\big(0,\; H_{m-1} * W_m + b_m\big), \quad m = 1, \dots, K$  (14)
where $H_m$ denotes the output of the $m$-th convolutional layer, $K$ is the number of convolutional layers, and $W_m$ represents the $m$-th kernel. As mentioned in Section 3.1, $H_1, \dots, H_K$ can be considered smoothed images at different scales. We then concatenate these 3D tensors into a larger 3D tensor and pass it through a convolutional layer:
$H = [H_1, H_2, \dots, H_K]$  (15)

$H_{K+1} = H * W_{K+1} + b_{K+1}$  (16)
where $W_{K+1}$ is a convolutional kernel with three output channels and a 1×1 receptive field, which is equivalent to taking a weighted average of these smoothed images. Similar to MSR, the output of $f_2$ is the subtraction between $X_1$ and $H_{K+1}$:
$X_2 = X_1 - H_{K+1}$  (17)
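A 1-D numpy sketch of the difference-of-convolution idea, with fixed Gaussian smoothers standing in for the learned layers (the actual trained kernels are not assumed here):

```python
import numpy as np

def smooth(x, sigma):
    """1-D Gaussian smoothing with edge padding (stand-in for a learned layer)."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    return np.convolve(np.pad(x, radius, mode='edge'), k, mode='valid')

def difference_of_convolution(x1, sigmas=(1.0, 2.0, 4.0)):
    """Sketch of f2: smooth X1 at several scales (stand-ins for H_1..H_K),
    average them (the final layer acting as a weighted average), and
    subtract the average from X1, i.e. X2 = X1 - H."""
    h = np.mean([smooth(x1, s) for s in sigmas], axis=0)
    return x1 - h

x1 = np.sin(np.linspace(0, 4 * np.pi, 256)) + np.linspace(0, 2, 256)  # detail + slow trend
x2 = difference_of_convolution(x1)
print(abs(x2.mean()) < abs(x1.mean()))  # True: the smooth component is suppressed
```

Subtracting the averaged smoothed versions keeps the detail (the oscillation) and discards the slowly varying illumination-like trend, mirroring the residual structure of MSR.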
Color restoration function: Considering that the MSR result often looks unnatural, modified MSR [16] applies a color restoration function (CRF) in the chromaticity space to eliminate the color distortions and gray zones evident in the MSR output. In our model the CRF is imitated by a convolutional layer with three output channels:
$\hat{Y} = X_2 * W_c + b_c$  (18)
where $\hat{Y}$ is the final enhanced image. For better visualization, a low-light image and the corresponding outputs of $f_1$, $f_2$ and $f_3$ are shown in Figure 3.
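Since a convolutional layer with three output channels and a 1×1 receptive field is just a per-pixel linear color map, the CRF stage can be sketched as follows (the weights below are illustrative only; in MSR-net they are learned from data):

```python
import numpy as np

# A 1x1 convolution with 3 output channels acts as a per-pixel 3x3 color
# matrix plus bias. Illustrative weights, not the learned ones.
W = np.array([[1.10, -0.05, -0.05],
              [-0.05, 1.10, -0.05],
              [-0.05, -0.05, 1.10]])
b = np.zeros(3)

x2 = np.random.rand(8, 8, 3)       # H x W x 3 output of the previous stage
x_hat = x2 @ W.T + b               # applied independently at every pixel
print(x_hat.shape)                 # (8, 8, 3)
print(np.allclose(x_hat[0, 0], W @ x2[0, 0] + b))  # True: purely per-pixel
```

Because the map is per-pixel, it can rebalance color channels without altering spatial structure.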
3.3 Objective function
The goal of our model is to train a deep convolutional neural network that makes the output $f(X)$ and the label $Y$ as close as possible under the Frobenius-norm criterion:
$L = \frac{1}{N} \sum_{i=1}^{N} \big\| f(X_i) - Y_i \big\|_F^2 + \lambda \sum_m \| W_m \|_F^2$  (19)
where $N$ is the number of training samples and $\lambda$ is the regularization parameter.
The weights $W$ and biases $b$ are the parameters of our model. The regularization parameter $\lambda$, the number of logarithmic transformation functions $n$, the scales of the logarithmic transformations $v_j$ and the number of convolutional layers $K$ are the hyperparameters of the model. The parameters are optimized by backpropagation, while the hyperparameters are chosen by grid search. More details about the sensitivity analysis of the hyperparameters are given in Section 4.
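A minimal numpy sketch of the objective in Equation 19, with toy tensors standing in for network outputs, labels and kernels:

```python
import numpy as np

def msrnet_loss(outputs, labels, weights, lam=1e-4):
    """Frobenius reconstruction loss averaged over N samples, plus weight
    decay on the kernels (Equation 19)."""
    n = len(outputs)
    data_term = sum(np.linalg.norm(o - y, 'fro')**2
                    for o, y in zip(outputs, labels)) / n
    reg_term = lam * sum(np.linalg.norm(w)**2 for w in weights)
    return data_term + reg_term

outs = [np.ones((4, 4))]           # toy "network output"
labs = [np.zeros((4, 4))]          # toy label
ws = [np.ones((3, 3))]             # toy kernel
print(round(msrnet_loss(outs, labs, ws, lam=0.1), 6))  # 16.9 = 16 + 0.1 * 9
```

The weight-decay term is what grid search tunes through $\lambda$; the data term is what backpropagation minimizes over $W$ and $b$.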
Table 1: Quantitative comparison on synthetic test data (each entry is SSIM/NIQE; higher SSIM and lower NIQE are better).

Dataset           | Ground truth | Synthetic image | MSRCR [16] | Dong [8]  | LIME [11] | SRIE [10] | Ours
1st row           | 1/2.69       | 0.35/6.44       | 0.89/3.29  | 0.64/3.57 | 0.74/3.04 | 0.58/3.72 | 0.91/2.54
2nd row           | 1/3.17       | 0.23/4.43       | 0.90/3.61  | 0.42/4.69 | 0.68/4.23 | 0.39/3.94 | 0.94/3.54
3rd row           | 1/3.46       | 0.26/3.73       | 0.81/3.34  | 0.47/3.75 | 0.62/3.89 | 0.48/3.57 | 0.91/3.33
2,000 test images | 1/3.67       | 0.74/3.53       | 0.90/3.50  | 0.69/4.16 | 0.84/3.89 | 0.63/3.66 | 0.92/3.46
4 Experiments
In this section, we construct an image dataset and spend about 10 hours training the end-to-end network using the Caffe software package [15]. To evaluate the performance of our method, we use both synthetic test data and public real-world datasets, and compare with four recent state-of-the-art low-light image enhancement methods. We also analyse the running time and evaluate the effect of the hyperparameters on the final results.

Table 2: Quantitative comparison on real-world data (each entry is Discrete Entropy/NIQE; higher entropy and lower NIQE are better).

Image/Dataset | Real-world input | MSRCR [16] | Dong [8]   | LIME [11]  | SRIE [10]  | Ours
Madison       | 9.56/3.05        | 15.01/3.13 | 11.19/3.63 | 10.74/3.43 | 10.97/4.18 | 15.49/3.06
Cave          | 13.75/3.21       | 14.91/5.91 | 14.54/6.35 | 12.66/6.19 | 14.66/4.44 | 15.75/5.84
Kluki         | 10.36/3.19       | 14.62/2.33 | 12.71/2.43 | 12.86/2.78 | 12.60/2.42 | 15.91/1.81
MEF dataset   | 11.02/4.31       | 15.47/3.32 | 13.27/4.06 | 13.52/3.69 | 13.16/3.63 | 16.28/3.03
NPE dataset   | 12.54/4.13       | 14.61/4.37 | 14.41/4.12 | 13.93/4.25 | 14.12/4.05 | 15.34/3.73
VV dataset    | 12.47/3.52       | 17.87/2.54 | 14.80/2.76 | 15.79/2.48 | 14.25/2.78 | 18.26/2.29
4.1 Image Dataset Generation
In order to learn the parameters of MSR-net, we construct a new image dataset containing a large number of high-quality (HQ) and low-light (LL) natural images. An important consideration is that all images should come from real-world scenes. We collect more than 20,000 images from the UCID dataset [26], the BSD dataset [2] and Google image search. Unfortunately, many of these images suffer from significant distortions or contain inappropriate content. Images with obvious distortions, such as heavy compression, strong motion blur, out-of-focus blur, low contrast, under- or overexposure and substantial sensor noise, are deleted first. After this, we exclude unsuitable images, such as those that are too small or too large, cartoons and computer-generated content, to obtain 1,000 good source images. Then, for each image, we use the Photoshop workflow of [1] to find the ideal brightness and contrast settings and process the images one by one to obtain high-quality (HQ) images with the best visual effect. Finally, each HQ image is used to generate 10 low-light (LL) images by randomly reducing brightness and contrast and applying gamma correction with stochastic parameters, yielding a dataset of 10,000 HQ/LL image pairs. Further, 8,000 images in the dataset are randomly selected to generate one million HQ/LL patch pairs for training, and the remaining 2,000 image pairs are used to test the trained network during training (please see the supplemental material for more details about dataset generation).
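The low-light synthesis step can be sketched as follows; the sampling ranges for contrast, brightness and gamma are illustrative guesses, not the paper's exact values:

```python
import numpy as np

rng = np.random.default_rng(0)

def synthesize_low_light(hq):
    """Degrade a high-quality image (float array in [0, 1]) the way the
    dataset is built: random contrast/brightness reduction followed by
    gamma correction with a stochastic parameter. The sampling ranges
    below are illustrative, not the paper's exact values."""
    contrast = rng.uniform(0.5, 0.9)       # compress contrast around mid-gray
    brightness = rng.uniform(0.0, 0.2)     # subtract a random brightness offset
    gamma = rng.uniform(1.5, 3.5)          # gamma > 1 darkens
    ll = np.clip(contrast * (hq - 0.5) + 0.5 - brightness, 0.0, 1.0)
    return ll ** gamma

hq = rng.random((32, 32, 3))               # stand-in for a real HQ image
ll = synthesize_low_light(hq)
print(ll.mean() < hq.mean())               # True: the synthetic copy is darker
```

Drawing fresh random parameters per call is what yields 10 distinct LL variants from each HQ image.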
4.2 Training Setup
We set the depth of MSR-net (the number of convolutional layers $K$) as studied in Section 4.7, and use Adam with weight decay and a minibatch size of 64. We start from an initial learning rate that is divided by 10 at 100K and 200K iterations, and terminate training at 300K iterations. During our experiments, we found that the network with the multi-scale logarithmic transformation performs better than the one with a single-scale logarithmic transformation, so we set the number of logarithmic transformation functions $n$ and the scales $v_j$ accordingly. The sizes of the convolution kernels were partially described in Section 3.2, and the specific values are shown in Table 3.
Table 3: Sizes of the convolution kernels in MSR-net (only fragments of the layer entries are recoverable: conv1-32, conv1-3, conv3-3, conv1-3, conv3-32).
4.3 Results on synthetic test data
Figure 4 shows a visual comparison for three synthesized low-light images. As we can see, the result of MSRCR [16] looks unnatural, the method of Dong et al. [8] tends to generate unexpected black edges, and the result of SRIE [10] tends to be somewhat dark. LIME [11] produces results similar to ours, while our method achieves better performance in dark regions.
Since the ground truth is known for the synthetic test data, we use SSIM [31] for quantitative evaluation and NIQE [22] to assess naturalness preservation. A higher SSIM indicates that the enhanced image is closer to the ground truth, while a lower NIQE value represents higher image quality. All the best results are boldfaced. As shown in Table 1, our method achieves a higher average SSIM and a lower average NIQE than the other methods on the 2,000 test images.
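For reference, a simplified single-window SSIM can be computed as below; the standard metric [31] averages this statistic over local Gaussian windows, so this is only a sketch:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image as a single window;
    the standard metric [31] averages this statistic over local windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

img = np.random.rand(64, 64)
print(round(global_ssim(img, img), 4))        # 1.0 for identical images
print(global_ssim(img, 1.0 - img) < 1.0)      # True: lower for a mismatched image
```

The covariance term is what makes SSIM sensitive to structure, not just brightness, which is why it suits comparing an enhanced image against its bright ground truth.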
4.4 Results on realworld data
Figure 5 shows the visual comparison for three real-world low-light images. As shown in each red rectangle, our MSR-net consistently achieves better performance in dark regions. More specifically, in the first and second images we obtain brighter results, and in the third image we achieve a more natural result; for instance, the tree is enhanced to a bright green. Besides, in the Garden image in Figure 1, our result has clearer texture and richer details than the other methods.
Besides NIQE for image quality, we assess detail enhancement through the discrete entropy [32]; a higher discrete entropy indicates richer color and clearer outlines. We delete the highlight images and keep only the low-light images in the MEF dataset [21, 33], NPE dataset [30] and VV dataset [27, 28] to evaluate our method. As shown in Table 2, across these datasets MSR-net also obtains lower NIQE and higher discrete entropy.
Considering that enhancing real-world low-light images sometimes amplifies noise, we use the denoising algorithm BM3D [6] as a post-processing step. An example is shown in Figure 6, where removing the noise after our deep network further improves the visual quality on a real-world low-light image.
4.5 Color Constancy
In addition to enhancing the dark image, our model also does a good job of correcting color. Figure 4 provides some examples; as we can see, our enhanced images are much more similar to the ground truth. To evaluate the performance of the different algorithms, the angular error [14] between the ground-truth image and the model result is used:
$\varepsilon = \cos^{-1}\!\left( \dfrac{\langle I_{gt},\, I_{out} \rangle}{\| I_{gt} \| \, \| I_{out} \|} \right)$  (20)
A lower error indicates that the color of the enhanced image is closer to that of the ground truth. Table 4 gives specific results for the images in Figure 4.
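Equation 20, evaluated per pixel on RGB vectors and averaged, can be sketched as:

```python
import numpy as np

def angular_error_deg(gt, out):
    """Mean per-pixel angle (in degrees) between ground-truth and enhanced
    RGB vectors; 0 means the colors match exactly."""
    gt = gt.reshape(-1, 3).astype(np.float64)
    out = out.reshape(-1, 3).astype(np.float64)
    dot = (gt * out).sum(axis=1)
    norms = np.linalg.norm(gt, axis=1) * np.linalg.norm(out, axis=1)
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)  # guard zero vectors
    return np.degrees(np.arccos(cos)).mean()

red = np.array([[[1.0, 0.0, 0.0]]])
green = np.array([[[0.0, 1.0, 0.0]]])
print(round(angular_error_deg(red, green), 3))   # 90.0

img = np.random.rand(16, 16, 3) + 0.1            # keep color vectors away from zero
print(angular_error_deg(img, 2 * img) < 1e-4)    # True: the angle is scale-invariant
```

Because the angle ignores vector length, this metric measures hue fidelity independently of how much the image was brightened.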
4.6 Running time on test data
Compared with other non-deep methods, our approach processes low-light images efficiently. Table 5 shows the average running time for test images of three different sizes, each averaged over 100 test images. The experiments are run on a PC with Windows 10, 64 GB RAM, a 3.6 GHz CPU and an Nvidia GeForce GTX 1080 GPU. All methods are implemented in Matlab, which ensures a fair time comparison. The methods of [16], [8], [11] and [10] run on the CPU, while our method is tested on both CPU and GPU. Because our method is a purely feed-forward process after training, our GPU implementation is significantly faster than [16], [8] and [10], though not [11].
4.7 Study of MSRnet Parameters
The number of logarithmic transformation functions $n$ and the number of convolutional layers $K$ are the two main hyperparameters of MSR-net. In this subsection, we examine the effect of these hyperparameters on the final results. As is well known, the effectiveness of deeper structures in low-level image tasks is not as apparent as in high-level tasks [3, 25]. Specifically, we test several values of the number of logarithmic transformation functions $n$ and of the depth $K$. For the sake of fairness, all networks are trained for 100K iterations, and 100 synthetic images are used to measure the results by averaging their SSIM.
As shown in Table 6, adding more hidden layers yields higher SSIM and better results. We believe that, with an appropriate design to avoid overfitting, a deeper structure can improve the network's nonlinear capacity and learning ability. In Figure 7, judging from the color of the boy's skin and clothes, the network using the multi-scale logarithmic transformation performs better; the multi-scale logarithmic transformation is thus also essential for improving the nonlinear capacity of MSR-net. To obtain good performance within our running-time and hardware limits, we finally chose the values of $n$ and $K$ used in the experiments above.
Table 6: Average SSIM for different hyperparameter settings (rows: number of logarithmic transformation functions, increasing downward; columns: network depth, increasing rightward; the exact labels are omitted).

0.879 | 0.897 | 0.904
0.890 | 0.902 | 0.914
0.893 | 0.905 | 0.920
5 Conclusion
In this paper, we propose a novel deep learning approach for low-light image enhancement. We show that multi-scale Retinex is equivalent to a feed-forward convolutional neural network with different Gaussian convolution kernels. Building on this, we construct a convolutional neural network (MSR-net) that directly learns an end-to-end mapping between dark and bright images with little extra pre-/post-processing beyond the optimization. Experiments on synthetic and real-world data reveal the advantages of our method in comparison with other state-of-the-art methods, both qualitatively and quantitatively. Nevertheless, some problems remain: because of the limited receptive field of our model, very smooth regions such as a clear sky sometimes suffer from halo effects. Enlarging the receptive field or adding hidden layers may solve this problem.
References
 [1] http://www.photoshopessentials.com/photoediting/addingabrightnesscontrastadjustmentlayerinphotoshop.html.
 [2] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. IEEE transactions on pattern analysis and machine intelligence, 33(5):898–916, 2011.
 [3] J. Bruna, P. Sprechmann, and Y. LeCun. Image super-resolution using deep convolutional networks. Computer Science, 2015.
 [4] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao. DehazeNet: An end-to-end system for single image haze removal. IEEE Transactions on Image Processing, 25(11):5187–5198, 2016.
 [5] T. Celik and T. Tjahjadi. Contextual and variational contrast enhancement. IEEE Transactions on Image Processing, 20(12):3431–3441, 2011.
 [6] K. Dabov, A. Foi, V. Katkovnik, and K. O. Egiazarian. Image restoration by sparse 3D transform-domain collaborative filtering. In Image Processing: Algorithms and Systems, page 681207, 2008.
 [7] C. Dong, C. C. Loy, K. He, and X. Tang. Image super-resolution using deep convolutional networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(2):295–307, 2016.
 [8] X. Dong, Y. A. Pang, and J. G. Wen. Fast efficient algorithm for enhancement of low lighting video. In ACM SIGGRAPH 2010 Posters, page 69. ACM, 2010.
 [9] X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley. Removing rain from single images via a deep detail network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 [10] X. Fu, D. Zeng, Y. Huang, X.-P. Zhang, and X. Ding. A weighted variational model for simultaneous reflectance and illumination estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2782–2790, 2016.
 [11] X. Guo, Y. Li, and H. Ling. LIME: Low-light image enhancement via illumination map estimation. IEEE Transactions on Image Processing, 26(2):982–993, 2017.
 [12] K. He, J. Sun, and X. Tang. Single image haze removal using dark channel prior. IEEE transactions on pattern analysis and machine intelligence, 33(12):2341–2353, 2011.
 [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
 [14] S. D. Hordley and G. D. Finlayson. Re-evaluating colour constancy algorithms. In Proceedings of the International Conference on Pattern Recognition, volume 1, pages 76–79, 2004.
 [15] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675–678. ACM, 2014.
 [16] D. J. Jobson, Z.u. Rahman, and G. A. Woodell. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image processing, 6(7):965–976, 1997.
 [17] D. J. Jobson, Z.u. Rahman, and G. A. Woodell. Properties and performance of a center/surround retinex. IEEE transactions on image processing, 6(3):451–462, 1997.
 [18] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
 [19] E. H. Land. The retinex theory of color vision. Scientific American, 237(6):108–129, 1977.
 [20] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
 [21] K. Ma, K. Zeng, and Z. Wang. Perceptual quality assessment for multi-exposure image fusion. IEEE Transactions on Image Processing, 24(11):3345–3356, 2015.
 [22] A. Mittal, R. Soundararajan, and A. C. Bovik. Making a completely blind image quality analyzer. IEEE Signal Processing Letters, 20(3):209–212, 2013.
 [23] S. Park, S. Yu, B. Moon, S. Ko, and J. Paik. Lowlight image enhancement using variational optimizationbased retinex model. IEEE Transactions on Consumer Electronics, 63(2):178–184, 2017.
 [24] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
 [25] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang. Single image dehazing via multi-scale convolutional neural networks. In European Conference on Computer Vision, pages 154–169, 2016.
 [26] G. Schaefer and M. Stich. Ucid: An uncompressed color image database. In Storage and Retrieval Methods and Applications for Multimedia 2004, volume 5307, pages 472–481. International Society for Optics and Photonics, 2003.
 [27] V. Vonikakis, D. Chrysostomou, R. Kouskouridas, and A. Gasteratos. Improving the robustness in feature detection by local contrast enhancement. In Imaging Systems and Techniques (IST), 2012 IEEE International Conference on, pages 158–163. IEEE, 2012.
 [28] V. Vonikakis, D. Chrysostomou, R. Kouskouridas, and A. Gasteratos. A biologically inspired scalespace for illumination invariant feature detection. Measurement Science and Technology, 24(7):074024, 2013.
 [29] L. Wang, W. Ouyang, X. Wang, and H. Lu. Visual tracking with fully convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 3119–3127, 2015.
 [30] S. Wang, J. Zheng, H.M. Hu, and B. Li. Naturalness preserved enhancement algorithm for nonuniform illumination images. IEEE Transactions on Image Processing, 22(9):3538–3548, 2013.
 [31] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE transactions on image processing, 13(4):600–612, 2004.
 [32] Z. Ye, H. Mohamadian, and Y. Ye. Discrete entropy and relative entropy study on nonlinear clustering of underwater and arial images. In Control Applications, 2007. CCA 2007. IEEE International Conference on, pages 313–318. IEEE, 2007.
 [33] K. Zeng, K. Ma, R. Hassen, and Z. Wang. Perceptual evaluation of multi-exposure image fusion algorithms. In Quality of Multimedia Experience (QoMEX), 2014 Sixth International Workshop on, pages 7–12. IEEE, 2014.