I. Introduction
High-visibility images with sufficient details of target scenes are essential for many multimedia and computer vision applications. However, in most real-world scenarios the captured images often suffer from low visibility due to nighttime capture, backlighting, or other light-limited conditions. Underexposed image correction is therefore in demand in many practical fields, and various correction techniques have been proposed during the past few years.
In the early stage, researchers tended to design methods based on histogram modification ([1, 2]) for underexposed image correction. This type of technique does improve the luminance, but it cannot handle non-uniform illumination well. Retinex-based image decomposition approaches ([3, 4, 5]) are now widely used for this task; they follow a physical law and thus achieve excellent performance. However, they typically require complex prior regularizations to narrow down the solution space, which not only increases the time cost but also leads to insufficient improvement of luminance and depiction of details. Network-based techniques ([6, 7]) can of course be directly adopted to address this task. Unfortunately, the difficulty of obtaining training pairs limits the development of deep networks here; in fact, most existing network-based works tend to generate unnatural enhanced results in real scenarios.
Fig. 1: Visual comparison on an example image. (a) Input, (b) LIME, (c) HDRNet, (d) RetinexNet, (e) Ours.
I-A. Our Contributions
As discussed above, to compensate for the ill-posedness of the image decomposition, strong priors on both the reflectance and the illumination are required to regularize the solution space. However, designing such exact priors by hand is challenging and demands considerable mathematical skill. More importantly, purely hand-designed priors may only suit data with given distributions, which limits their applications in more complex real-world scenarios (see Fig. 1 (b)). Additionally, due to the lack of exact references for training, networks learned in an end-to-end manner may struggle to enhance all the details in the dark regions (see Fig. 1 (c)) or may generate unnatural results (see Fig. 1 (d)).
In this work, we propose a novel underexposed image correction framework in which domain knowledge and training data are integrated to generate hybrid priors for Retinex decomposition. Specifically, we establish a generic energy-inspired deep propagation framework based on the image decomposition model in Eq. (1). By introducing a schematic alternating half-quadratic splitting scheme, the fundamental propagations of the reflectance and illumination are established. To navigate the coupled iterations towards the desired solutions, we develop hybrid priors that consist of both explicitly designed distribution constraints (i.e., knowledge) and implicitly trained deep architectures (i.e., data). The advantage of our methodology is initially verified by comparing it with one representative decomposition-based method (i.e., LIME [3]) and two end-to-end discriminative learning approaches (i.e., HDRNet [6] and RetinexNet [7]) on an example image in Fig. 1.
The main contributions of our proposed method can be summarized in the following four aspects:

- We provide a generic energy-inspired deep propagation perspective to formulate the image decomposition problem. We demonstrate that both underexposed image correction and other related vision tasks (e.g., dehazing) can be addressed within this framework.

- Thanks to the flexibility of our framework, we can successfully combine domain knowledge (e.g., the Retinex principle, structure priors) and data-dependent architectures (e.g., learnable descent directions) to navigate the propagations of the coupled image components.

- To fully demonstrate the naturalness and practical value of our method, we not only conduct extensive experiments on challenging underexposed images, but also perform face detection on the enhanced results.

- To evaluate the scalability of our framework, the task of single image haze removal is also considered. We present visual comparisons on challenging hazy images from real-world scenarios, which indicate our superiority.
II. Related Works
In this section, we provide a brief review of the related works. Generally, existing underexposed image correction techniques can be divided into three categories: methods based on histogram modification, image decomposition, and discriminative learning.
Histogram Modification: Histogram-based methods modify the histogram distribution to restore the visibility of dark regions. Histogram equalization [8] is one of the most commonly used techniques of this kind, but it tends to cause over-enhancement. Different constraints have also been designed for brightness preservation [1, 2, 9] and weight adjustment [10]. However, these methods cannot handle non-uniform illumination well.
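As a concrete illustration (a minimal numpy sketch, not the exact algorithm of any of [1, 2, 8, 9, 10]), classical histogram equalization remaps intensities through the normalized cumulative histogram:

```python
import numpy as np

def histogram_equalization(img):
    """Classical histogram equalization for an 8-bit grayscale image:
    map each intensity through the normalized cumulative histogram so
    the output histogram becomes approximately uniform."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size               # normalized CDF in [0, 1]
    lut = np.round(255.0 * cdf).astype(np.uint8)
    return lut[img]

# A dark image concentrated in [0, 64) is stretched across [0, 255],
# illustrating both the visibility gain and the over-enhancement risk.
dark = np.random.default_rng(0).integers(0, 64, size=(32, 32), dtype=np.uint8)
eq = histogram_equalization(dark)
```

Because the mapping depends only on the global histogram, dark regions under non-uniform illumination are stretched together with already-bright ones, which is exactly why the constrained variants above were introduced.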
Image Decomposition: Retinex theory assumes that the scene perceived by human eyes is the product of reflectance and illumination layers [11], in which the illumination represents the light intensity and the reflectance denotes the physical characteristics of objects [12]. Given the observation $\mathbf{S}$, this model can be formulated as

$$\mathbf{S} = \mathbf{R} \circ \mathbf{L}, \qquad (1)$$

where "$\circ$" denotes pixel-wise multiplication, and $\mathbf{R}$ and $\mathbf{L}$ are the reflectance and illumination parts, respectively. Following basic physical principles, it is also necessary to assume that the pixel values of $\mathbf{R}$ and $\mathbf{L}$ lie in definite ranges, i.e., $\mathbf{R} \in [0, 1]$ and $\mathbf{L} \in [0, 1]$.
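The decomposition in Eq. (1) and its range constraints can be sketched numerically; here the illumination is initialized as the channel-wise maximum, a common choice in decomposition methods such as LIME [3] (an initialization only, not the full model of any cited method):

```python
import numpy as np

def retinex_decompose(S, eps=1e-6):
    """Toy Retinex decomposition of an RGB image S in [0, 1].

    Initializes the illumination L as the channel-wise maximum and
    recovers the reflectance via R = S / L, so that S = R o L holds
    pixel-wise and both components stay in [0, 1]."""
    L = S.max(axis=2, keepdims=True)   # illumination estimate
    R = S / np.maximum(L, eps)         # reflectance, in [0, 1]
    return R, L

rng = np.random.default_rng(1)
S = rng.uniform(0.0, 1.0, size=(8, 8, 3))
R, L = retinex_decompose(S)
```

Since L is the per-pixel maximum over channels, R = S / L never exceeds 1, which realizes the range constraints of Eq. (1).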
Even with the additional value constraints, the decomposition above is highly ill-posed [13]; thus strong priors on both the reflectance and illumination are required to regularize the solution space. The work in [14] adopted a quadratic smoothness regularizer to estimate the illumination in the logarithmic domain. The method in [15] adopted a Total Variation (TV) based model in the logarithmic domain for intrinsic image decomposition. Fu et al. directly designed probabilistic formulations to simultaneously estimate reflectance and illumination in the image domain [16], and further considered a weighted variational decomposition formulation in the logarithmic domain [17]. In [3], the illumination is refined by preserving only the main contour of an initial illumination estimate. Similarly, [4] proposed a perceptually bidirectional similarity to produce natural-looking results based on illumination optimization. The work in [12] combined different priors to build a complex energy model. As can be seen, these methods all adopt contrived priors to constrain their decompositions. However, it is hard for human-designed priors to capture the intrinsic structures of the underlying illumination and reflectance, especially in the image domain, because the exact distributions of these latent components are still unknown.

Discriminative Learning:
Very recently, discriminative learning based methods have been proposed. Convolutional Neural Networks (CNNs) in particular have been shown to learn realistic natural image distributions from large collections of images [18]. Thus, several approaches apply implicit CNN priors to low-level vision tasks, such as super-resolution ([19, 20]), deconvolution ([21, 22]), dehazing ([23]), and others ([24, 25, 26]). However, because of the highly coupled variables and complex constraints, both synthetic and real-world datasets are difficult to obtain, so it is extremely challenging to apply CNNs to infer the decomposition model in Eq. (1). To date, there do exist some CNN-based approaches for the underexposed image correction task ([6, 7, 27, 28, 29]), but the training dataset remains the key restraint on their practical performance. In [6], professional photographers are employed to generate the training pairs. The work in [27] trains its network on a synthetic dataset generated by Gamma correction. Considering multi-exposure images, [28] builds a large-scale multi-exposure image dataset and generates high-quality reference images from 13 MEF and HDR algorithms for network training. A turning point is the work [7], which builds a new dataset, the LOw-Light dataset (LOL), generated by adjusting the exposure time; it also proposes an end-to-end Retinex-based deep network that combines Retinex theory with a learnable architecture. Lately, a practical network-based algorithm was proposed in [29] based on the LOL dataset. Overall, these network-based approaches tend to produce unnatural results with insufficient details and under-/over-exposure, because the training pairs depict the real distribution inaccurately; e.g., LOL only varies the exposure time, while many other physical factors (e.g., illumination conditions) exist in real scenarios.

In summary, training pairs are hard to generate with existing techniques for underexposed image correction, which severely limits the development of deep learning in this area. It is therefore intuitive to introduce learnable architectures in a deductive way, to sidestep the difficulty of directly generating training pairs for this task.
III. The Proposed Framework
Most existing decomposition-based models [14, 15, 17] rely on a logarithmic transformation. Its side effect is that undesired structures are amplified in low-magnitude stimulus areas and edges may become fuzzy. Therefore, in this work we directly formulate our Retinex decomposition in the image domain as the following regularized variational energy model:

$$\min_{\mathbf{L},\mathbf{R}}\ \frac{1}{2}\|\mathbf{S} - \mathbf{R} \circ \mathbf{L}\|^{2} + \Phi(\mathbf{L}) + \Psi(\mathbf{R}), \qquad (2)$$

where $\frac{1}{2}\|\mathbf{S} - \mathbf{R} \circ \mathbf{L}\|^{2}$ is the fidelity term derived from the model in Eq. (1), and $\Phi(\mathbf{L})$ and $\Psi(\mathbf{R})$ are the prior regularization terms of the illumination and reflectance, respectively.
Notice that, unlike most existing decomposition-based methods, which design prior penalties purely from intuition, we provide in the next section a new way to integrate knowledge and data to obtain more effective hybrid priors for our decomposition problem.
III-A. Hybrid Priors Navigated Deep Propagation
Since explicit formulations of the prior terms in Eq. (2) are hard to obtain, it is challenging to adopt standard iteration schemes to optimize this variational energy. We therefore provide a new propagation framework that integrates an alternating half-quadratic splitting scheme with hybrid priors to obtain the desired illumination and reflectance, respectively.
III-A.1 Illumination Propagation
We first consider the prior term of the illumination $\mathbf{L}$ as the combination of one principled assumption (i.e., spatial smoothness) and one implicit data-dependent term as follows:

$$\Phi(\mathbf{L}) = \|\nabla \mathbf{L}\|^{2} + \lambda\,\phi(\mathbf{L}), \qquad (3)$$

where $\lambda$ is a trade-off parameter, $\|\nabla \mathbf{L}\|^{2}$ enforces the smoothness constraint, and $\phi$ denotes our implicit prior submodule (learned from data).
Then, utilizing the half-quadratic splitting technique with an auxiliary variable $\mathbf{G}$ (with penalty parameter $\mu$), we have the following subproblem for the illumination update:

$$\min_{\mathbf{L}}\ \frac{1}{2}\|\mathbf{S} - \mathbf{R} \circ \mathbf{L}\|^{2} + \|\nabla \mathbf{L}\|^{2} + \frac{\mu}{2}\|\mathbf{L} - \mathbf{G}\|^{2}. \qquad (4)$$
Rather than explicitly formulating and calculating $\phi$, we directly update the auxiliary variable via the following learnable descent scheme:

$$\mathbf{G}^{k+1} = \mathbf{L}^{k} - \mathcal{A}_{\vartheta}(\mathbf{L}^{k}), \qquad (5)$$

where $\mathcal{A}_{\vartheta}$ denotes the parameterized descent directions (e.g., CNN architectures) with parameters $\vartheta$. We will discuss the details of these learnable architectures in the next part. It should be emphasized that we actually provide a way to learn guidance from training data to navigate our illumination propagation. We are then ready to update $\mathbf{L}$ for the $(k+1)$-th stage. By further reformulating the fidelity as $\frac{1}{2}\|\mathbf{S}/\mathbf{R}^{k} - \mathbf{L}\|^{2}$ (with "/" denoting pixel-wise division), we obtain the updating scheme of $\mathbf{L}$ as

$$\mathbf{L}^{k+1} = \mathcal{P}_{\mathcal{L}}\!\left(\left(\mathbf{I} + 2\nabla^{\top}\nabla + \mu\mathbf{I}\right)^{-1}\left(\mathbf{S}/\mathbf{R}^{k} + \mu\,\mathbf{G}^{k+1}\right)\right), \qquad (6)$$

where $\mathcal{P}_{\mathcal{L}}$ denotes the projection onto the feasible range of $\mathbf{L}$.
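The illumination subproblem can be sketched numerically: with the reflectance and the auxiliary variable fixed, the energy is quadratic in the illumination, so a few projected gradient steps suffice to decrease it. All parameter values and the finite-difference stencil below are illustrative assumptions, not the authors' exact solver:

```python
import numpy as np

def grad(x):
    """Forward differences with replicated boundary."""
    gx = np.diff(x, axis=1, append=x[:, -1:])
    gy = np.diff(x, axis=0, append=x[-1:, :])
    return gx, gy

def illum_energy(L, S, R, G, mu):
    """Quadratic objective: fidelity + smoothness + proximal penalty."""
    gx, gy = grad(L)
    return (0.5 * ((S - R * L) ** 2).sum()
            + (gx ** 2 + gy ** 2).sum()
            + 0.5 * mu * ((L - G) ** 2).sum())

def update_illumination(S, R, G, mu=1.0, step=0.04, iters=100):
    """Projected gradient descent on the L-subproblem, clipping onto [0, 1]."""
    L = G.copy()
    for _ in range(iters):
        gx, gy = grad(L)
        lap = (np.diff(gx, axis=1, prepend=gx[:, :1])
               + np.diff(gy, axis=0, prepend=gy[:1, :]))  # divergence of grad
        g = R * (R * L - S) - 2.0 * lap + mu * (L - G)    # dE/dL
        L = np.clip(L - step * g, 0.0, 1.0)
    return L

rng = np.random.default_rng(2)
S = rng.uniform(0.1, 0.5, size=(16, 16))
R = np.full_like(S, 0.8)
G = S / R                              # warm start consistent with S = R o L
L_new = update_illumination(S, R, G)
e0 = illum_energy(G, S, R, G, 1.0)
e1 = illum_energy(L_new, S, R, G, 1.0)
```

The step size is kept below the reciprocal of the quadratic term's Lipschitz constant, so each iteration monotonically decreases the energy while the clipping realizes the range projection.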
III-A.2 Reflectance Propagation
As for the reflectance $\mathbf{R}$, we would like to preserve its sharp edge structure during the enhancement process. Thus we consider the following hybrid regularization term:

$$\Psi(\mathbf{R}) = \sum_{i} \log\!\left(1 + \frac{(\nabla_{i}\mathbf{R})^{2}}{\varepsilon^{2}}\right) + \lambda_{r}\,\psi(\mathbf{R}), \qquad (7)$$

where the first term is a widely used nonconvex potential function (whose sparsity-control parameter $\varepsilon$ can be used to reveal the sharp edge structure) [30], $\lambda_{r}$ is the trade-off parameter, and $\psi$ denotes the data-dependent prior for the reflectance. Here $\nabla_{i}$ denotes the $i$-th element of $\nabla\mathbf{R}$. Using the half-quadratic reformulation technique, we obtain the $\mathbf{R}$ subproblem as

$$\min_{\mathbf{R}}\ \sum_{i} \log\!\left(1 + \frac{(\nabla_{i}\mathbf{R})^{2}}{\varepsilon^{2}}\right) + \frac{\eta}{2}\|\mathbf{R} - \mathbf{H}\|^{2}, \qquad (8)$$

where $\mathbf{H}$ is an auxiliary variable and $\eta$ is the penalty parameter.
Intuitively, we could follow the idea of the illumination subproblem and introduce another network to calculate $\mathbf{H}$. However, by recalling the physical rule in Eq. (1), we obtain a much simpler updating scheme for $\mathbf{H}$:

$$\mathbf{H}^{k+1} = (1 - w)\,\mathbf{R}^{k} + w\,\mathbf{S}/\mathbf{L}^{k+1}, \qquad (9)$$

where $w$ denotes the weight coefficient.
However, due to the nonconvex potential function, we cannot obtain a closed-form solution of the subproblem in Eq. (8). Thus, we adopt a projected gradient type rule to update $\mathbf{R}$ as follows:

$$\mathbf{R}^{k+1} = \mathcal{P}_{\mathcal{R}}\!\left(\mathbf{R}^{k} - \tau\left(\nabla^{\top}\rho'(\nabla \mathbf{R}^{k}) + \eta\,(\mathbf{R}^{k} - \mathbf{H}^{k+1})\right)\right), \qquad (10)$$

where $\tau$ is the step size, $\rho'$ is the derivative of the nonconvex potential, and $\mathcal{P}_{\mathcal{R}}$ is the projection onto the feasible range of $\mathbf{R}$.
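The reflectance updates can be sketched as follows; the potential rho(x) = log(1 + x^2 / eps^2) and all parameter values are illustrative assumptions rather than the authors' exact choices:

```python
import numpy as np

def rho_prime(x, eps=0.5):
    """Derivative of the assumed nonconvex, edge-preserving potential
    rho(x) = log(1 + x^2 / eps^2); it saturates for large gradients,
    so strong edges are penalized less than small oscillations."""
    return 2.0 * x / (eps ** 2 + x ** 2)

def update_reflectance(S, L, R, w=0.5, eta=1.0, tau=0.02, iters=50):
    """Auxiliary update from the physical rule S = R o L, followed by
    projected gradient steps on the nonconvex subproblem, clipping
    R onto [0, 1]."""
    H = (1.0 - w) * R + w * S / np.maximum(L, 1e-6)        # auxiliary variable
    for _ in range(iters):                                  # projected gradient
        gx = np.diff(R, axis=1, append=R[:, -1:])
        gy = np.diff(R, axis=0, append=R[-1:, :])
        px, py = rho_prime(gx), rho_prime(gy)
        div = (np.diff(px, axis=1, prepend=px[:, :1])
               + np.diff(py, axis=0, prepend=py[:1, :]))
        g = -div + eta * (R - H)                            # subproblem gradient
        R = np.clip(R - tau * g, 0.0, 1.0)
    return R

rng = np.random.default_rng(3)
L = rng.uniform(0.3, 0.9, size=(16, 16))
S = 0.6 * L                            # ground-truth reflectance is flat 0.6
R0 = rng.uniform(0.0, 1.0, size=(16, 16))
R1 = update_reflectance(S, L, R0)
```

Starting from a random reflectance, the iterations pull the estimate toward the physically consistent S / L while the saturating potential smooths small fluctuations without destroying edges.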
III-A.3 Illumination Adjustment
It is known that the illumination contains the lightness information, so the underexposed image correction task now reduces to adjusting the illumination to generate highly visible reconstructions. Gamma correction is a common way to encode and decode luminance by exploiting the nonlinear manner in which humans perceive light and color [31]. We therefore adopt the following Gamma correction operation to adjust the obtained illumination:

$$\mathbf{S}_{en} = \mathbf{R} \circ \mathbf{L}^{1/\gamma}, \qquad (11)$$

where $\mathbf{S}_{en}$ denotes the final enhanced result and $\gamma$ is a tuning parameter (empirically set to 2.2). We are now ready to summarize our algorithm in Alg. 1 and Fig. 2.
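A one-line numerical sketch of the Gamma adjustment in Eq. (11):

```python
import numpy as np

def adjust_illumination(R, L, gamma=2.2):
    """Eq. (11): lift the illumination by Gamma correction and recompose
    the enhanced result S_en = R o L**(1/gamma)."""
    return R * np.power(L, 1.0 / gamma)

# A dim illumination of 0.1 is lifted to 0.1**(1/2.2) ~ 0.35, so the
# recomposed image is much brighter than the original R o L = 0.09.
R = np.full((4, 4), 0.9)
L = np.full((4, 4), 0.1)
S_en = adjust_illumination(R, L)
```

Since L is in [0, 1], the exponent 1/gamma < 1 raises dark illumination values while leaving bright ones almost unchanged, which is why Gamma correction brightens underexposed regions without blowing out highlights.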
Fig. 3: Illumination, reflectance, output, and zoomed-in comparisons under different updating strategies: (a) "(RP)"+"(EP)"+"(SS)"; (b) "(RP)"+"(EP)"+"(SS)"+"(IA)"; (c) "(RP)"+"(LDD)"+"(IA)"; (d) Ours ("(RP)"+"(EP)"+"(SS)"+"(LDD)"+"(IA)").
To demonstrate the necessity of our propagation with hybrid prior navigation, we provide an illustrative comparison of different updating strategies for the decomposition, along with the corresponding enhancement performance, in Fig. 3. Concretely, we consider three variations of the priors: data-dependent only (i.e., "(LDD)"), knowledge-based only (i.e., "(EP)"+"(SS)"), and our proposed hybrid prior (i.e., "(LDD)"+"(EP)"+"(SS)"). We also consider the knowledge-based prior without illumination adjustment (i.e., "(IA)"). As subfigures (a) and (b) of Fig. 3 show, the illumination adjustment is evidently essential for correcting the luminance of the illumination. Some details are clearly missing in the reflectance generated by the knowledge-based prior alone, so those details cannot be recovered in the enhanced results. The reflectance estimated by the data-dependent prior alone is over-smooth, which leads to the absence of some structural information in the enhanced result. In contrast, the reflectance obtained by the proposed hybrid prior is felicitous, and the enhanced result exhibits distinctly better enhancement effects (see the zoomed-in regions).
III-B. Learnable Architecture
As for the learnable architecture, we would like to point out that we actually produce a learnable descent direction derived from the data distributions (see "(LDD)" in Fig. 2) to assist in searching for the desired solution. Additionally, since the knowledge-based submodule in our hybrid priors can roughly estimate the latent image structures, the main task left to the data-dependent submodule is refining the rich details and removing small corruptions. It is therefore natural to adopt a denoising-type strategy for our learnable architecture: we generate training image pairs by adding different levels of Gaussian noise to simulate the corruptions, and use the clean images as the target outputs of the architecture.
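The denoising-type pair generation can be sketched as follows (the 35x35 patch size is taken from the paper; the noise levels and crops-per-image are illustrative assumptions):

```python
import numpy as np

def make_training_pairs(images, patch=35, per_image=4,
                        sigmas=(5.0, 15.0, 25.0), seed=0):
    """Build (noisy, clean) patch pairs: random crops of size patch x patch,
    corrupted by Gaussian noise with a randomly chosen level, with the
    clean patch kept as the regression target."""
    rng = np.random.default_rng(seed)
    noisy, clean = [], []
    for img in images:
        h, w = img.shape[:2]
        for _ in range(per_image):
            y = rng.integers(0, h - patch + 1)
            x = rng.integers(0, w - patch + 1)
            p = img[y:y + patch, x:x + patch].astype(np.float32)
            n = rng.normal(0.0, rng.choice(sigmas), size=p.shape)
            noisy.append(np.clip(p + n, 0.0, 255.0).astype(np.float32))
            clean.append(p)
    return np.stack(noisy), np.stack(clean)

imgs = [np.random.default_rng(7).uniform(0, 255, size=(64, 64))
        for _ in range(2)]
X, Y = make_training_pairs(imgs)
```

The rotation and flip augmentation described below can be applied to each patch before stacking, e.g. via np.rot90 and np.fliplr.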
Specifically, a simple CNN is adopted as our learnable architecture, consisting of 7 dilated convolution layers with 64 kernels of size 3×3 each. A ReLU nonlinear activation is placed between every two convolution layers, and batch normalization is applied to the convolution operations from the 2nd to the 6th layer. We adopt the mean squared error as the training loss. As for the training data, we randomly select 800 images from the ImageNet database [32], crop them into small patches of size 35×35, and augment the training set by rotation and flipping.

IV. Experimental Results
In this section, we conduct a series of experiments to evaluate our algorithm. Since reference images are unavailable, it is hard to assess the quantitative performance with full-reference metrics (e.g., PSNR). We therefore follow most existing works and adopt the Natural Image Quality Evaluator (NIQE) [33] as the quantitative metric in all experiments; note that a lower NIQE value indicates higher image quality. The decomposition is applied to the V channel in the HSV (Hue, Saturation, Value) space, and the result is then transformed back to the RGB domain. All experiments are conducted on a PC with an Intel Core i7-8700 CPU at 3.70 GHz, 32 GB RAM, and an NVIDIA GeForce GTX 1080 Ti 11 GB GPU.
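The V-channel processing just described can be sketched as follows; the enhancement operator here is a placeholder Gamma curve standing in for the full decomposition pipeline:

```python
import numpy as np

def enhance_v_channel(rgb, enhance):
    """Apply an enhancement operator only to the HSV Value channel,
    V = max(R, G, B), and rescale the RGB channels by the ratio of
    enhanced to original V, which preserves hue and saturation."""
    v = rgb.max(axis=2, keepdims=True)
    ratio = enhance(v) / np.maximum(v, 1e-6)
    return np.clip(rgb * ratio, 0.0, 1.0)

rng = np.random.default_rng(5)
dark_rgb = rng.uniform(0.01, 0.3, size=(8, 8, 3))
out = enhance_v_channel(dark_rgb, lambda v: np.power(v, 1.0 / 2.2))
```

Scaling all three channels by the same per-pixel ratio changes only the Value component of HSV, so colors keep their hue and saturation while the brightness is corrected.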
Fig. 5: Visual comparison with discriminative learning approaches: Input, HDRNet, RetinexNet, Ours.
Fig. 6: Quantitative performance with varied algorithmic parameters: (a) NIQE vs. $\lambda$, (b) NIQE vs. $\mu$.
Fig. 7: Averaged NIQE values on different datasets: (a) NASA, (b) NPE, (c) LIME.
Fig. 8: Visual comparison with NIQE scores: Input, HE (3.9318), SRIE (4.0469), WVM (3.9181), JIEP (4.2309), LIME (4.3105), HDRNet (4.1184), Ours (3.8763).
Fig. 9: Visual comparison with NIQE scores: Input, HE (3.0200), SRIE (3.0925), WVM (2.8731), LIME (2.9322), JIEP (2.8731), HDRNet (3.0753), RetinexNet (2.8091), Ours (2.6186).
Fig. 10: Visual comparison on a real-world scenario: Input, LIME, JIEP, HDRNet, RetinexNet, Ours.
IV-A. Methodology Comparisons
We first provide a range of experiments comparing the performance of different methodologies on underexposed image correction, and then illustrate the roles of the data-driven and knowledge-based submodules in our deep model. In Fig. 4, we compare our method with state-of-the-art decomposition-based enhancement approaches [16, 17, 12] by illustrating the decomposed components and the final enhanced results. The illumination of our method is clearly smoother than that of the other approaches, so our reflectance and enhanced result preserve most details and are clearer than those of the compared methods. These results verify that our method obtains more realistic constraints for the Retinex-type intrinsic image decomposition and is therefore more suitable for underexposed image correction.
Furthermore, we compare with recent discriminative deep learning approaches (i.e., HDRNet [6] and RetinexNet [7]). Notice that since no physical knowledge is considered in HDRNet, this method can only obtain enhanced results by learning a heuristically designed network model from synthesized training data. RetinexNet does consider the Retinex decomposition, but due to the naive generation of its training data (i.e., changing the exposure time), its predictions usually contain too many details and lack naturalness in real-world scenarios. As shown in Fig. 5, HDRNet fails to recover many details in the enhanced result; RetinexNet recovers more details, but its result is extremely unnatural. Our method successfully recovers most of the details in the dark region while keeping the naturalness to the greatest extent, and our result also shows a distinct improvement in brightness, presenting much higher visibility.

In the third experiment, we explore the performance of our hybrid prior navigated deep propagation (based on the ensemble of two different methodologies). In Fig. 6, we plot the visual performance and quantitative results of our method with varied algorithmic parameters $\lambda$ and $\mu$. The parameter $\lambda$ balances the principally designed and data-driven priors. We tune it over a range of values and plot the visual and quantitative results in subfigure (a): the performance of our method is stable, and the NIQE scores change only slightly within a small interval. The parameter $\mu$ penalizes the auxiliary variable (calculated through the network propagation); we observe in subfigure (b) that the performance with varied $\mu$ is also stable. We attribute this stability of our hybrid prior mainly to the fact that the knowledge-based submodule provides a baseline performance guarantee while the data-driven submodule enriches details to further improve the performance.
IV-B. Underexposed Image Correction
In this part, we first make a series of quantitative and qualitative comparisons with many state-of-the-art approaches to underexposed image correction. We then conduct experiments in real scenarios to test the visual performance. Finally, face detection based on YOLOv3 [34] is executed to further verify the naturalness of our results.
Challenging Benchmarks: We evaluate our method against state-of-the-art methods, including HE [8], MSRCR [35], GOLW [36], NPEA [37], LIME [3], SRIE [16], WVM [17], JIEP [12], HDRNet [6], RRM [5], and RetinexNet [7], on different benchmarks: NASA (https://dragon.larc.nasa.gov/retinex/pao/news/; 23 images of indoor and outdoor scenes), NPE [37] (130 images of different natural scenes), and LIME [3] (10 images of different challenging scenes).
Fig. 7 reports the averaged NIQE values on the different datasets; our method clearly obtains better quantitative performance than the other state-of-the-art methods. We also show visual comparisons on example images in Figs. 8-9. Most methods can partially improve the visual quality of the given observations. However, the results of SRIE, WVM, and JIEP still exhibit low visibility, especially on the most challenging example in Fig. 9. Although it improves contrast, LIME tends to produce images with severe over-exposure. Additionally, all the compared methods fail to recover the detailed information in the dark, such as the rose in the second zoomed-in region of Fig. 9. The learning-based approaches (i.e., HDRNet and RetinexNet) generate unrealistic results with color distortion, especially RetinexNet. In contrast, our proposed method not only enhances the visibility but also preserves most of the details, providing much better enhancement performance.
We also compare our method with four recently proposed methods in terms of computational cost. Table I shows the average running time in seconds on different benchmarks. Our method is the fastest among the compared methods except on the NASA dataset, indicating a significant advantage in terms of time cost.
Fig. 11: Face detection results with detection scores: underexposed image with labels, underexposed input (0 / 0), LIME [3] (60.00 / 42.86), HDRNet [6] (1.00 / 42.86), RetinexNet [7] (66.67 / 57.14), Ours (83.33 / 71.43).
Table I: Average running time (in seconds) on different benchmarks.

Dataset | LIME   | JIEP   | HDRNet  | RetinexNet | Ours
NASA    | 0.0336 | 0.9858 | 4.8329  | 0.1618     | 0.0477
NPE     | 0.1902 | 5.0584 | 13.6522 | 0.2948     | 0.1112
LIME    | 0.2149 | 2.5536 | 14.9201 | 0.3356     | 0.1758
Real-world Scenarios: We also evaluate our method on real-world underexposed scenarios. We select an example image from the HDR+ Burst Photography Dataset [38], captured by Android mobile cameras through the public Android Camera2 API. As Fig. 10 shows, our method, LIME, and RetinexNet all perform better than the other state-of-the-art methods in the dark regions. However, the zoomed-in regions of LIME are over-exposed and contain color distortion, and RetinexNet generates an unnatural result that looks like style transfer. In contrast, our proposed method obtains more natural visual quality with clear details on the test image.
Face Detection Based on YOLOv3: The naturalness of enhanced results cannot be captured precisely enough by the NIQE value, which is derived from statistical regularities. Visual inspection of enhanced results supports judgments of naturalness but is overly subjective due to personal preference. To address these problems, we evaluate the naturalness property through the performance of a face detection task.
Table II: Face detection performance after different enhancement methods.

Metric          | Input | LIME  | HDRNet | RetinexNet | Ours
mAP (%)         | 16.77 | 53.49 | 34.25  | 37.10      | 54.28
Avg. Recall (%) | 12.67 | 74.17 | 35.67  | 42.99      | 78.43
To be specific, we adopt a well-known object detection framework, YOLOv3 [34], for the face detection task. Following most existing face detection works, we use the WIDER Face dataset [39] as our training data. Note that illumination variation is present in this dataset, but its images are still easy for human eyes to recognize. To fully verify the capability of underexposed image correction algorithms, we select 100 challenging images from the DARK FACE dataset (https://flyywh.github.io/CVPRW2019LowLight/), which comes from a sub-challenge of the UG2 PRIZE CHALLENGE held at CVPR 2019. Fig. 11 shows some images from our built test set, illustrating the difficulty of recognition and detection.
As Table II shows, our method achieves the best quantitative performance on all metrics (i.e., mAP and Average Recall) against the other state-of-the-art methods. The end-to-end network-based methods (i.e., HDRNet and RetinexNet) obtain the worst quantitative performance. It is worth noting that RetinexNet is superior to HDRNet, possibly because RetinexNet adopts the Retinex decomposition, which is more favorable for detection. The representative Retinex-based method LIME, which only considers designed knowledge-driven priors, indeed presents fine numerical results. However, the average recall of our method is about four percentage points higher than that of LIME, which means our method enables more objects to be detected. A detection network trained on many natural images performs best when its input follows the distribution of natural images; in this view, our method indeed exhibits better naturalness. Admittedly, our numerical results for the face detection task are not outstanding in absolute terms, possibly because noise and artifacts produced during the enhancement procedure influence detection (as Fig. 12 shows). We will consider a noise removal procedure to further improve the enhancement performance in future work.
Fig. 13: Visual comparison of haze removal: Input, DCP, NLD, AODNet, Ours.
IV-C. Single Image Haze Removal
Finally, we conduct an experiment on single image haze removal to further verify the effectiveness of our framework, following the duality between Retinex and image dehazing [40]. Visual comparisons with three representative methods (the classical dehazing method DCP [41], the traditional optimization-based method NLD [42], and the end-to-end network AODNet [23]) are presented in Fig. 13. The traditional methods (DCP and NLD) generate dehazed results with many artifacts and color distortion, which indicates that relying on prior regularization alone makes it extremely hard to achieve the desired results in complex real scenarios, while AODNet (a network-based method) produces results with unclear details and underexposure. By comparison, our results clearly have more prominent visual quality and look more natural than those of the other state-of-the-art approaches.
V. Conclusions
In this paper, we provided a new perspective for formulating the problem of underexposed image correction within an energy-inspired model with hybrid priors (i.e., knowledge and learnable architectures). We designed a propagation procedure navigated by the hybrid priors to simultaneously propagate the reflectance and illumination. Extensive experiments on underexposed image correction validated the effectiveness and superiority of our method against other state-of-the-art approaches in both enhancement quality and running time. A face detection task was then executed to further verify the naturalness and practical value of our hybrid priors navigated deep propagation. Finally, the single image haze removal task was also performed to again illustrate our advantage over other state-of-the-art approaches.
Acknowledgment
This work is partially supported by the National Natural Science Foundation of China (Nos. 61672125, 61300086, and 61632019) and the Fundamental Research Funds for the Central Universities.
References
 [1] C. Wang and Z. Ye, “Brightness preserving histogram equalization with maximum entropy: a variational perspective,” IEEE Transactions on Consumer Electronics, vol. 51, no. 4, pp. 1326–1334, 2005.
 [2] D. Sheet, H. Garud, A. Suveer, M. Mahadevappa, and J. Chatterjee, “Brightness preserving dynamic fuzzy histogram equalization,” IEEE Transactions on Consumer Electronics, vol. 56, no. 4, 2010.
 [3] X. Guo, Y. Li, and H. Ling, “LIME: Low-light image enhancement via illumination map estimation,” IEEE Transactions on Image Processing, vol. 26, no. 2, pp. 982–993, 2017.
 [4] Q. Zhang, W.S. Zheng, G. Yuan, C. Xiao, and L. Zhu, “Highquality exposure correction of underexposed photos,” in ACM Multimedia, 2018.
 [5] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, “Structurerevealing lowlight image enhancement via robust retinex model,” IEEE Transactions on Image Processing, vol. 27, no. 6, pp. 2828–2841, 2018.
 [6] M. Gharbi, J. Chen, J. T. Barron, S. W. Hasinoff, and F. Durand, “Deep bilateral learning for realtime image enhancement,” ACM Transactions on Graphics, vol. 36, no. 4, p. 118, 2017.
 [7] C. Wei, W. Wang, W. Yang, and J. Liu, “Deep retinex decomposition for lowlight enhancement,” in British Machine Vision Conference, 2018.
 [8] H. Cheng and X. Shi, “A simple and effective histogram equalization approach to image enhancement,” Digital Signal Processing, vol. 14, no. 2, pp. 158–170, 2004.
 [9] M. J. Power, C. Whitlock, P. Bartlein, and L. R. Stevens, “Multi-histogram equalization methods for contrast enhancement and brightness preserving,” IEEE Transactions on Consumer Electronics, vol. 53, no. 3, pp. 1186–1194, 2011.
 [10] S. H. Yun, H. K. Jin, and S. Kim, “Contrast enhancement using a weighted histogram equalization,” in IEEE International Conference on Consumer Electronics, 2011, pp. 203–204.
 [11] J. McCann, Retinex Theory. Springer New York, 2016.

 [12] B. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao, “A joint intrinsic-extrinsic prior model for retinex,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
 [13] E. H. Land and J. J. McCann, “Lightness and retinex theory,” Journal of the Optical Society of America, vol. 61, no. 1, pp. 1–11, 1971.
 [14] R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel, “A variational framework for retinex,” International Journal of Computer Vision, vol. 52, no. 1, pp. 7–23, 2003.
 [15] M. K. Ng and W. Wang, “A total variation model for retinex,” SIAM Journal on Imaging Sciences, vol. 4, no. 1, pp. 345–365, 2011.
 [16] X. Fu, Y. Liao, D. Zeng, Y. Huang, X.P. Zhang, and X. Ding, “A probabilistic method for image enhancement with simultaneous illumination and reflectance estimation,” IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 4965–4977, 2015.
 [17] X. Fu, D. Zeng, Y. Huang, X.P. Zhang, and X. Ding, “A weighted variational model for simultaneous reflectance and illumination estimation,” in Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 [18] D. Ulyanov, A. Vedaldi, and V. S. Lempitsky, “Deep image prior,” in Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
 [19] J. Kim, J. K. Lee, and K. M. Lee, “Accurate image superresolution using very deep convolutional networks,” in Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.
 [20] Y. Zhang, Y. Tian, Y. Kong, B. Zhong, and Y. Fu, “Residual dense network for image superresolution,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2472–2481.

 [21] R. Liu, X. Fan, S. Cheng, X. Wang, and Z. Luo, “Proximal alternating direction network: A globally converged deep unrolling framework,” in Association for the Advancement of Artificial Intelligence, 2018.
 [22] R. Liu, Y. He, S. Cheng, X. Fan, and Z. Luo, “Learning collaborative generation correction modules for blind image deblurring and beyond,” in ACM Multimedia, 2018.
 [23] B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aodnet: Allinone dehazing network,” in Proceedings of the IEEE International Conference on Computer Vision, 2017.
 [24] R. Liu, L. Ma, Y. Wang, and L. Zhang, “Learning converged propagations with deep prior ensemble for image enhancement,” IEEE Transactions on Image Processing, vol. 28, no. 3, pp. 1528–1543, 2018.
 [25] R. Liu, S. Cheng, L. Ma, X. Fan, and Z. Luo, “Deep proximal unrolling: Algorithmic framework, convergence analysis and applications,” IEEE Transactions on Image Processing, 2019.
 [26] R. Liu, S. Cheng, Y. He, X. Fan, Z. Lin, and Z. Luo, “On the convergence of learningbased iterative methods for nonconvex inverse problems,” IEEE transactions on pattern analysis and machine intelligence, 2019.

 [27] K. G. Lore, A. Akintayo, and S. Sarkar, “LLNet: A deep autoencoder approach to natural low-light image enhancement,” Pattern Recognition, vol. 61, pp. 650–662, 2017.
 [28] J. Cai, S. Gu, and L. Zhang, “Learning a deep single image contrast enhancer from multi-exposure images,” IEEE Transactions on Image Processing, vol. 27, no. 4, pp. 2049–2062, 2018.
 [29] Y. Zhang, J. Zhang, and X. Guo, “Kindling the darkness: A practical lowlight image enhancer,” in ACM Multimedia, 2019.
 [30] S. Roth and M. J. Black, “Fields of experts,” International Journal of Computer Vision, vol. 82, no. 2, pp. 205–229, 2009.
 [31] C. Poynton, Digital video and HD: Algorithms and Interfaces, 2012.
 [32] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Neural Information Processing Systems, 2012.
 [33] A. Mittal, R. Soundararajan, and A. C. Bovik, “Making a completely blind image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209–212, 2013.
 [34] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.
 [35] Z.u. Rahman, D. J. Jobson, and G. A. Woodell, “Retinex processing for automatic image enhancement,” Journal of Electronic Imaging, vol. 13, no. 1, pp. 100–111, 2004.
 [36] Q. Shan, J. Jia, and M. S. Brown, “Globally optimized linear windowed tone mapping,” IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 4, pp. 663–675, 2010.
 [37] S. Wang, J. Zheng, H.M. Hu, and B. Li, “Naturalness preserved enhancement algorithm for nonuniform illumination images,” IEEE Transactions on Image Processing, vol. 22, no. 9, pp. 3538–3548, 2013.
 [38] S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, “Burst photography for high dynamic range and lowlight imaging on mobile cameras,” ACM Transactions on Graphics, vol. 35, no. 6, 2016.
 [39] S. Yang, P. Luo, C. C. Loy, and X. Tang, “Wider face: A face detection benchmark,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.
 [40] A. Galdran, A. AlvarezGila, A. Bria, J. VazquezCorral, and M. Bertalmıo, “On the duality between retinex and image dehazing,” in Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.
 [41] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341–2353, 2011.
 [42] D. Berman, S. Avidan et al., “Nonlocal image dehazing,” in Proceeding of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1674–1682.