Outdoor images are often degraded by a loss of visibility caused by small particles suspended in the atmosphere between the imaged scene and the observer. This physical phenomenon is known as haze, fog, or mist, and it causes the radiance captured by the camera to be attenuated along its path. Haze removal, or image dehazing, is an image processing task concerned with mitigating this effect, thereby increasing the quality of outdoor images, with the goal of improving the performance of subsequent computer vision algorithms, or simply enhancing image visualization.
In turn, Retinex [24, 26] was originally defined as a color vision model of human perception. It aims to explain the human ability to perceive color as stable regardless of changes in global illumination. Retinex is based on the observation that color sensation is not related to the radiance values that reach the eye, but to the integrated reflectance, defined as the ratio at each waveband between the value of the object and the value of a white object under the same illuminant. Retinex was promptly adopted by researchers in color photography due to its effectiveness for image enhancement. Since then, variations of the Retinex model have been applied to many different image processing tasks, from non-uniform (local) color constancy, to shadow removal, gamut mapping, or contrast enhancement. In this paper, we consider Retinex as an image enhancement technique, in accordance with these last methods.
Retinex has been related to image dehazing in the past, either explicitly or implicitly. In one line of work, multi-scale Retinex was applied to increase contrast in the luminance channel; the result was then median-filtered and used as an estimate of the scene's depth. Elsewhere, single-scale Retinex was employed after a wavelet transform to enhance the chromatic aspect of the result, whereas in another approach the Stress (Spatio-Temporal Retinex-inspired Envelope with Stochastic Sampling) framework was applied for image dehazing. Stress is a general image enhancement technique, and the authors adapt the behavior of the algorithm to achieve image dehazing through a heuristic adjustment of its parameters.
In contrast with previous works, in this paper we do not intend to adapt Retinex-like ideas to the essentially different problem of image dehazing. Instead, our main contribution is a formal proof of the following direct relationship between Retinex and image dehazing:
Furthermore, we show that this equivalence holds not only at the algorithmic level, but at the modeling level too. This enables existing Retinex-based algorithms to dehaze images directly by incorporating two intensity-inversion operations: we do not need to adjust or modify Retinex-based algorithms to perform image dehazing, we only need to transform their input and output by simple intensity inversions. A schematic representation of this process is shown in Fig. 1. In addition, we demonstrate, through a wide set of experimental results, that this new approach to image dehazing competes surprisingly well with current state-of-the-art fog removal techniques.
2 Previous Approaches to Image Dehazing
Many image dehazing techniques have been proposed in recent years. They can be grouped into two main approaches: Machine Learning and Image Processing methods.
Machine Learning techniques learn visual features relevant for classifying an image as hazy or haze-free. These features can be manually specified [7, 44] or automatically learned in the framework of Deep Convolutional Neural Networks [5, 27]. A model is then trained to learn a mapping between hazy and haze-free images. In this case, training examples need to be annotated in advance, which is a complex task. A common approach consists of synthesizing hazy images from natural haze-free images, which is usually accomplished through a physical model of image acquisition under hazy conditions, due to Koschmieder:
where $I$ is the degraded image, $J$ contains the intensities of the haze-free image, $t$ is the medium transmission, a scalar quantity describing the amount of light that reaches the receiver, inversely related to depth, and $A$ is a constant (RGB)-vector known as atmospheric light. The additive combined degradation of transmission and atmospheric light, $A(1 - t)$, is usually known as airlight, and it accounts for a possible shift in scene colors due to the presence of sources of illumination other than sunlight.
Koschmieder's model also lies at the heart of image dehazing techniques belonging to the category of Image Processing. In this case, the goal is to solve the above underconstrained model (2) by introducing a prior assumption that is fulfilled by a haze-free image. This prior is then imposed on eq. (2) in order to infer $t$ and $A$. Once estimates for $t$ and $A$ have been obtained, eq. (2) can be inverted:
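As an illustration, a minimal sketch of this inversion step (assuming estimates of the transmission and atmospheric light, here called `t` and `A`, are already available; the clamp `t_min` is a common practical safeguard, not part of the model itself) could look as follows:

```python
import numpy as np

def invert_koschmieder(I, t, A, t_min=0.1):
    """Recover a haze-free estimate J by inverting the model:
    J = (I - A * (1 - t)) / t.

    I : (H, W, 3) hazy image with values in [0, 1]
    t : (H, W) estimated medium transmission in (0, 1]
    A : (3,) estimated atmospheric light (RGB vector)
    t_min : lower clamp on t, avoiding noise amplification in dense haze
    """
    t = np.clip(t, t_min, 1.0)[..., None]  # broadcast transmission over channels
    J = (I - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```

Where the transmission approaches zero, the division amplifies sensor noise, hence the clamp used in most practical implementations.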
Image Processing techniques are thus spatially-variant contrast enhancement methods that attempt to increase detail visibility and saturation on degraded areas while leaving unaltered regions that already have good contrast. Several priors can be imposed on the structure of $J$ in order to estimate $t$ and $A$. For instance, the Dark Channel Prior imposes that most local patches in a haze-free image contain pixels which have very low intensity in at least one color channel:
being $\Omega(x)$ a local neighborhood of $x$. Assuming the Dark Channel Prior is fulfilled by the haze-free image $J$, we can take minima in eq. (2) after normalizing by $A$, cancel the term associated to $J$, and recover an estimate of $t$:
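In code, this estimation could be sketched as follows (the square patch size and the usual slack factor `omega` are illustrative choices, not values prescribed by this paper):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(I, patch=15):
    """Minimum over color channels, then over a square neighborhood."""
    return minimum_filter(I.min(axis=2), size=patch)

def estimate_transmission(I, A, patch=15, omega=0.95):
    """Dark-Channel transmission estimate: t(x) = 1 - omega * dark_channel(I / A).

    omega < 1 keeps a small amount of haze for distant objects, so the
    result does not look unnaturally flat.
    """
    return 1.0 - omega * dark_channel(I / A, patch)
```

On a haze-free image that satisfies the prior (dark channel close to zero), the estimated transmission is close to one, i.e. no correction is applied.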
Other haze-free priors can be imposed on $J$, such as maximal local contrast/saturation, or a certain distribution of color pixels in the RGB space [2, 10]. Different alternatives exist: the reader can find comprehensive reviews in [29, 41].
A variation of the above methods consists of dehazing techniques attempting to recover the true physical radiance of the scene objects. These techniques typically require external sources of information, or multiple images of the same scene [33, 39]. Remarkably, in [23, 34] the authors overcome this need through a joint probabilistic estimation of depth and true radiance with a two-latent-layer Markov random field. The method requires radiometrically calibrated input, and assumes the atmospheric light is known in advance, which can result in chromatic distortions.
3 The Retinex Theory of Color Vision
Edwin H. Land introduced the Retinex theory as a color vision model of human perception. He named it Retinex as a portmanteau of Retina and Cortex, since Land did not want to venture where exactly this process was carried out in the visual pathway. In short, the original Retinex color vision model can be defined as a theoretical spectral channel that makes spatial comparisons between scene regions so as to calculate "Lightness" sensations. It became rapidly apparent to Land and his collaborators that Retinex was also useful for the enhancement of color photographs, replacing the human cone photoreceptors (L, M, S) by the camera sensors (R, G, B). From now on, we will focus on this second meaning of Retinex, which has been widely applied in image processing tasks [9, 30, 45].
When applied to digital color images, the Retinex model computes a triplet of lightness values for each pixel. In the original Retinex implementation, lightness is computed through a chain of intensity comparisons with respect to the intensities at other image locations. Land suggested that this comparison cannot occur directly, but needs to be computed by comparing adjacent pixels. Given an image $I$ taking values in $(0, 1]$, two points $x$ and $y$, and a path $\gamma = \{x = x_0, x_1, \ldots, x_n = y\}$ joining them, we compute their ratio through the consecutive ratios $I(x_{k+1})/I(x_k)$:
The unfolding of this computation is non-trivial due to the addition of two supplementary mechanisms, called threshold and reset. The threshold mechanism sets to $1$ the ratios in eq. (6) that are close to $1$: for a small $\varepsilon > 0$, when $|I(x_{k+1})/I(x_k) - 1| < \varepsilon$ we set the ratio to $1$. This disregards unwanted effects in the lightness estimation due to a smooth, spatially varying illumination. However, it has been shown that the threshold parameter is redundant in Retinex computations: ignoring it does not have a critical impact on the algorithm. Hence, in this work we will not consider threshold-based Retinex variants.
The reset mechanism acts as follows: when the chain of computations in (6) reaches a pixel with intensity greater than that of all previous points in $\gamma$, the sequential product up to that pixel resets to $1$, and the lightness computation restarts from it:
Eq. (7) shows that the chain of ratios (6) simplifies to $I(y)/I(x_{\max})$, where $x_{\max}$ is the pixel of maximum intensity along $\gamma$. This reveals the local white balance character of Retinex: points activating the reset mechanism become local references for white.
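The simplification above can be checked with a direct sketch of the ratio chain with reset, eqs. (6)-(7) (scaling function omitted; intensities assumed to lie in $(0, 1]$):

```python
import numpy as np

def path_lightness(vals):
    """Sequential product of consecutive ratios along a path, with reset.

    vals: 1-D array of intensities along the path; the last entry is the
    target pixel. Whenever the running product exceeds 1 (i.e. a new maximum
    is reached), it resets to 1 and the computation restarts from that pixel.
    """
    L = 1.0
    for k in range(1, len(vals)):
        L *= vals[k] / vals[k - 1]
        if L > 1.0:  # reset mechanism: brighter than all previous points
            L = 1.0
    return L

# the chain collapses to target intensity / maximum intensity along the path
path = np.array([0.5, 0.8, 0.4])
assert np.isclose(path_lightness(path), 0.4 / 0.8)
```

A short induction shows the equivalence exactly: after the reset at each running maximum, the partial product is always the current intensity divided by the maximum seen so far.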
The sequential product of eq. (6) is scaled by a non-decreasing function $f$, often a logarithm to simplify calculations, and gives an estimate of the lightness at $y$. To improve this estimate, several paths ending at $y$ but starting at different initial points are considered, and the result is averaged, obtaining the Retinex lightness estimate at $y$:
In the above form, Retinex contains unspecified parameters, such as the number of paths, or the way in which we sample the image to build them. Also, the reset mechanism in eq. (8) makes much of the path information redundant. For this reason, path-based sampling was later replaced by sampling through random sprays with radially decreasing density. Subsequently, random sprays were replaced by a 2-dimensional representation, with a kernel modeling the sampling density of the spray in the limit, leading to Kernel-Based Retinex:
Here, the kernel models the probability of selecting a pixel in the proximity of the target pixel, and the reset mechanism is implemented automatically by the way the kernel-based lightness is defined.
Center-surround techniques were first proposed as a simple alternative that still preserves the characteristic features of Retinex. They compute the ratio between the image intensity at a pixel and that of its surround:
where $w$ is a weighted average operator. This amounts to integrating local information instead of sampling it. The first practical implementation of this idea was proposed in later work, where the average operator was a Gaussian kernel $G_\sigma$:
The scaling function is here a logarithm. Homomorphic filtering can also be seen as a particular case of this model, in which the logarithm and the convolution occur in inverted order in the right-hand term of (11). This was later extended to multi-scale Retinex, a normalized linear combination of (11) applied with different standard deviations.
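The center-surround formulation and its multi-scale extension admit a compact sketch (the Gaussian scales below are illustrative choices, not values from this paper; the combination weights are taken uniform):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(I, sigma, eps=1e-6):
    """Log-ratio of each pixel to its Gaussian-weighted surround, as in eq. (11)."""
    return np.log(I + eps) - np.log(gaussian_filter(I, sigma) + eps)

def multi_scale_retinex(I, sigmas=(15, 80, 250)):
    """Uniform linear combination of single-scale outputs at several scales."""
    return sum(single_scale_retinex(I, s) for s in sigmas) / len(sigmas)
```

Note that on a uniformly illuminated, constant image the surround equals the center and the output is zero everywhere, consistent with Retinex discounting smooth illumination.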
4 The Duality between Retinex and Image Dehazing
We begin by observing that any solution to the haze formation model should decrease the intensities of the input hazy image. This can easily be seen by rearranging (2) into:
Since the transmission always lies in $[0, 1]$, and the scene radiance does not exceed the atmospheric light, then $(A - J(x))(1 - t(x)) \geq 0$, which implies $J(x) \leq I(x)$.
At this point, it is useful to make a simplifying assumption on the haze formation model (2). As is often done in the image dehazing literature, we assume the input image is globally white-balanced, i.e. no chromatic component dominates the scene. This amounts to fixing $A = 1$ in eq. (2), and a solution of the image dehazing problem can be rewritten, after a simple manipulation, as:
In this paper we consider Retinex as an image enhancement technique that can produce, by imposing a local color constancy hypothesis, a uniformly illuminated image from one acquired under irregular illumination:
where $L$ is a slowly varying illumination field affecting the scene. While Retinex produces good results in this ill-posed task, the property of always increasing intensities is a known limitation of most Retinex implementations: they are only able to enhance under-exposed images affected by shadows, while over-exposed images will not be enhanced. This limitation is usually circumvented by further post-processing operations, typically image-dependent and hard to tune. In this work, we turn this limitation into an advantage through the definition of the following operator, acting on an image with inverted intensities:
Note that, according to the above observations, if an image solves the Image Dehazing problem, it must share the intensity-decreasing property given by eq. (13), i.e. its intensities must be smaller than those of the input. This is guaranteed by the following lemma:
The operator defined in eq. (15) always decreases intensities.
Since Retinex increases intensities in any image $I$, for the inverted image $1 - I$ we have $R(1 - I) \geq 1 - I$. This implies that $1 - R(1 - I) \leq I$.
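The lemma can be verified numerically with any intensity-non-decreasing Retinex implementation. Below, a simple local white-patch surrogate (dividing each pixel by its neighborhood maximum, mimicking the reset-as-local-white behavior; the patch size is an illustrative choice) stands in for the Retinex operator:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def white_patch_retinex(I, patch=15):
    """Retinex surrogate: normalize by the local maximum (the local white
    reference). For intensities strictly inside (0, 1) this never decreases
    values, since the local maximum is at most 1."""
    return np.clip(I / maximum_filter(I, size=patch), 0.0, 1.0)

def dehaze_operator(I, retinex=white_patch_retinex):
    """Eq. (15): invert intensities, apply Retinex, invert back."""
    return 1.0 - retinex(1.0 - I)

rng = np.random.default_rng(0)
I = rng.uniform(0.1, 0.9, (32, 32))  # intensities strictly inside (0, 1)
D = dehaze_operator(I)
assert np.all(D <= I + 1e-12)        # the operator decreases intensities
```

Any of the actual Retinex implementations used later in the paper (SSR, MSR, RSR, LRSR) could be substituted for the surrogate, provided its output is mapped back to the intensity range without breaking the non-decreasing property.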
This justifies the suitability of the operator for the image dehazing task. Now we are ready to prove the central result of this paper.
Applying the operator of eq. (15) to a hazy image provides a solution of the Image Dehazing problem (2).
Assuming $A = 1$, the haze formation model can be written as:
where $J$ is a solution to the dehazing problem, i.e. the haze-free image we seek. Eq. (16) can be rearranged as:
Consider a second image resulting from inverting the intensities of the initial hazy image, i.e. $\tilde{I} = 1 - I$. Eq. (17) can then be written as:
Since $t$ is piecewise smooth, the application of a Retinex method can remove it from eq. (18), resulting in:
But $J$ was a solution for the image dehazing problem:
which shows the initial statement.
The implications of this relationship are manifold. First, the above connection between Retinex and Image Dehazing has the advantage of being valid not only at an algorithmic level, but also at a modeling level. It provides a powerful mechanism: if we have a numerical technique to solve Retinex, we can solve Dehazing by applying it to inverted intensities and inverting the result.
Second, since eq. (1) holds, a question arises: is it possible to employ dehazing techniques to solve (14), i.e. can dehazing on inverted intensities remove a smooth illumination field from an irregularly illuminated image? In the considered case of a neutral-color illumination, through a change of variables, eq. (1) can be rewritten as:
This implies that inverting the result of running a dehazing method on inverted intensities returns an illumination-free image. In addition, it can easily be shown that this operator is non-decreasing, which means it can be applied to remove illumination, although it will only work for under-exposed images.
Indeed, eqs. (21) and (22) build a bidirectional image processing tool. Not only can algorithms for illumination factorization be applied to remove fog, but image dehazing techniques can also factor out non-uniform illumination from under-exposed images, as Retinex does. An example of the effect of applying formula (22) with an existing dehazing method is shown in Fig. 3. This idea has been recently explored in several works related to low-light image enhancement [4, 15, 28, 35]. The above result can be regarded as providing theoretical support to these works.
4.1 Dark Channel as Retinex
Consider a monochromatic hazy image acquired under neutral illumination. From eq. (5), the transmission reduces to:
Consider a pixel $x$, and a neighborhood around it given by $\Omega(x)$. The following property holds:
Thus, the transmission that the Dark Channel computes from an image after inverting its intensities is given by:
It becomes apparent now that the solution the Dark Channel computes for the Retinex problem relates the denominator of the haze inversion formula (3) to the denominator of the Retinex equation (8). However, in this case the scaling function is the identity, and the geometry of the neighborhoods is the simplest one: square neighborhoods with no weighting factor. Hence the need for the refining step that affects this algorithm, as well as other techniques derived from it.
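The neighborhood min/max identity underlying this correspondence (the minimum of the inverted image over a patch equals one minus the local maximum of the original) can be checked directly:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

rng = np.random.default_rng(1)
I = rng.random((32, 32))

# min over a neighborhood of the inverted image = 1 - local maximum of I,
# so the Dark Channel applied to 1 - I behaves as a local white-patch
# (Retinex-like) estimator on I
lhs = minimum_filter(1.0 - I, size=7)
rhs = 1.0 - maximum_filter(I, size=7)
assert np.allclose(lhs, rhs)
```

The patch size here is arbitrary; the identity holds for any neighborhood shape, since minimizing $1 - I$ is the same as maximizing $I$.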
5 Experiments and Results
The connections demonstrated in Section 4 are not tied to one specific Retinex method; they hold at a fundamental level. Hence, in order to verify the validity of the proposed image dehazing approach, we only require that the applied algorithm is able to separate a smoothly varying illumination field from the reflectance of the scene, in a way consistent with the assumptions outlined in Section 4. For this reason, we have selected four different popular implementations of Retinex: Single-Scale Retinex (SSR), Multi-Scale Retinex (MSR), Random Spray Retinex (RSR), and its faster version, Light Random Spray Retinex (LRSR). In addition, we include Homomorphic Filtering (HF), which can be interpreted as a member of the Retinex family, as well as a recent illumination-reflectance separation technique (WVRI).
These techniques are executed on inverted intensities, and the results are inverted afterwards, following Theorem 4.2. We deliberately prefer not to perform extensive parameter optimization over the Retinex implementations, so as to show their general behavior for the Image Dehazing task. Since MSR and SSR operate in the logarithmic domain, we map their results back to the original range by a simple affine translation, saturating a small percentage of pixels at both extremes. This operation is applied on the final inverted result, maintaining the non-decreasing intensity property of Retinex. The spray sizes and number of sprays for RSR and LRSR, the kernel and step sizes for LRSR, and all remaining parameters are fixed to the default values proposed by their respective authors.
Below we compare, both qualitatively and quantitatively, the results of our proposed approach with a wide set of well-established image dehazing techniques: the popular Dark Channel Prior (DCP), the Fast Visibility Restoration (FVR) technique, Image Dehazing with Robust Artifact Suppression (RAS), DEFADE, Bayesian Defogging (BYD), the Boundary-Constrained Contextual Regularization technique (BCCR), EVID, FVID, and the Color Attenuation Prior (CAP) technique. We also consider Histogram Equalization (HE), to analyze the comparative performance of a simple contrast enhancement method. We must stress that our goal is not to produce results largely improving on the Image Dehazing state of the art, but to demonstrate the general usability of existing Retinex implementations for the task of fog removal.
5.1 Qualitative Evaluation
In this section, we show several visual examples of the application of the proposed operator with the Retinex algorithms mentioned above, compared to the result of applying Image Dehazing techniques (further qualitative results can be found in the supplementary material). Fig. 4 displays a first example of such results. As predicted by Theorem 4.2, Retinex-based techniques can improve visibility to an extent similar to that of other specialized fog removal algorithms, showing good contrast and saturation in areas that are far away from the camera. Even the simple Homomorphic Filtering performs well in retrieving visibility in those areas.
Figure 5 provides an interesting example. Application of formula (15) again leads to a visibility increase in areas at the bottom of the scene. In this case, the different implementations of Retinex produce colors that are sometimes unnatural. This is related to the per-channel processing Retinex performs: the role of the atmospheric light in eq. (2) is ignored in this implementation, leading to disparate color recovery in different images. Although the performance of Retinex in the presence of color shifts is reasonable, the relationship between Retinex and Image Dehazing when the atmospheric light term is considered is complex, and remains a topic for future research.
5.2 Quantitative Evaluation
There exist two different approaches to quantitatively assess the quality of an image dehazing method, namely full-reference metrics and no-reference metrics. In the first case, a ground-truth optimal solution is assumed to exist, and the error between the result of a dehazing technique and the corresponding clean scene can be computed. In the second case, a score describing the quality of a hazy image and its dehazed counterpart is computed without the need for a clean version of the original image. Below we follow both approaches to verify the applicability of the operator defined in eq. (15) to increase the visual quality of images degraded by haze.
5.2.1 Full-Reference Quality Assessment
We first assess the performance of Retinex-based techniques for the dehazing problem by means of full-reference metrics. We use a set of outdoor images on which synthetic fog is added through perturbed versions of the haze formation model of eq. (2). On this dataset it is possible to compute full-reference error measurements. Table 1 shows the results obtained after applying all considered image dehazing and Retinex-based techniques, measuring the deviation with respect to the haze-free ground-truth image in terms of the well-known Structural Similarity Index (SSIM), the Color Peak Signal-to-Noise Ratio (CPSNR), and mean error measures across the dataset. The numerical results confirm that the proposed approach shows a dehazing capability in line with that of current fog removal methods, sometimes even outperforming them. Overall, the best-performing techniques were the DCP and the weighted variational method for illumination separation, acting on inverted intensities; these methods achieved a first and a second place under two different metrics. The first-, second-, and third-best performing methods were relatively well distributed between image dehazing and Retinex-based techniques, which supports the hypothesis that Retinex methods can compete with specialized fog removal algorithms.
(Table 1: full-reference scores; compared methods include BYD, HE, DCP, EVID, FVID, FVR, BCCR, and DEFADE.)
5.2.2 No-Reference Quality Assessment
(Table 2: no-reference scores; compared methods include the unprocessed input (None), HE, DCP, EVID, FVID, FVR, BCCR, and DEFADE.)
For the no-reference assessment, we evaluate the proposed Retinex-based approach by conducting a series of experiments on a dataset which is publicly available online (http://live.ece.utexas.edu/research/fog/index.html). This dataset comprises natural hazy images of varying size, fog density, and content, and includes most of the typical test images used in previous works.
We now compare the Retinex-based implementations with results obtained on the same dataset by the set of state-of-the-art image dehazing algorithms from the previous section. The Perceptual Fog Density measure (FADE) is employed. We also consider three further quality metrics, $e$, $\bar{r}$, and $\sigma$, reflecting different aspects of the quality of dehazed images: the percentage of new visible edges after the enhancement process ($e$), the increase of the visibility/contrast level ($\bar{r}$), and the percentage of pixels becoming saturated after processing an image ($\sigma$).
We report in Table 2 the mean of the FADE metric and of the $e$, $\bar{r}$, $\sigma$ coefficients for the aforementioned set of images. Several interesting conclusions can be drawn. First, notice that in terms of the FADE score the best-performing technique is DEFADE. However, this is a machine learning approach that was trained to remove fog on the same image set we analyze here, so its good performance is expected. Apart from this, in terms of the FADE score the Retinex-based methods perform on par with image dehazing techniques, which verifies the duality proposed in this paper. This is confirmed by the $e$, $\bar{r}$, $\sigma$ scores, which point to the RSR technique as capable of revealing new visible edges while avoiding saturating previously unsaturated pixels. Finally, we notice that Histogram Equalization performs poorly compared to the other techniques, confirming that the task of fog removal is substantially different from simple spatially-invariant contrast increase, and that Retinex on inverted intensities can fulfill that task successfully.
In this work, we have provided a rigorous mathematical proof of the dual relationship connecting the problems of image dehazing and non-uniform illumination separation, showing that applying a Retinex operation on an inverted image, followed by inverting the result again, provides a dehazed result, and vice versa. Rather than being limited to a particular algorithm, we have formally and experimentally shown that this duality holds for a wide range of Retinex methods. Qualitative and quantitative experiments showed competitive results when compared to current dehazing algorithms.
JVC was supported by the Spanish government grant ref. IJCI-2014-19516, and MB by the European Research Council Starting Grant ref. 306337, by the Spanish government grant ref. TIN2015-71537-P, and by an ICREA Academia Award.
-  N. Banić and S. Lončarić. Light Random Sprays Retinex: Exploiting the Noisy Illumination Estimation. IEEE Signal Processing Letters, 20(12):1240–1243, Dec. 2013.
-  D. Berman, T. Treibitz, and S. Avidan. Non-local Image Dehazing. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1674–1682, June 2016.
-  M. Bertalmío, V. Caselles, and E. Provenzi. Issues About Retinex Theory and Contrast Enhancement. International Journal of Computer Vision, 83(1):101–119, June 2009.
-  B. Cai, X. Xu, K. Guo, K. Jia, B. Hu, and D. Tao. A Joint Intrinsic-Extrinsic Prior Model for Retinex. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4000–4009, 2017.
-  B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Transactions on Image Processing, 25(11):5187–5198, Nov. 2016.
-  C. Chen, M. N. Do, and J. Wang. Robust Image and Video Dehazing with Visual Artifact Suppression via Gradient Residual Minimization. In Computer Vision – ECCV 2016, pages 576–591. Springer, Cham, Oct. 2016.
-  L. K. Choi, J. You, and A. C. Bovik. Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging. IEEE Transactions on Image Processing, 24(11):3888–3901, Nov. 2015.
-  V. W. D. Dravo and J. Y. Hardeberg. Stress for dehazing. In 2015 Colour and Visual Computing Symposium (CVCS), pages 1–6, Aug. 2015.
-  M. Ebner. Color Constancy. John Wiley & Sons, Apr. 2007.
-  R. Fattal. Dehazing Using Color-Lines. ACM Trans. Graph., 34(1):13:1–13:14, Dec. 2014.
-  G. D. Finlayson, S. D. Hordley, and M. S. Drew. Removing Shadows from Images Using Retinex. In Color Imaging Conference, CIC, Nov. 2002.
-  X. Fu, D. Zeng, Y. Huang, X. P. Zhang, and X. Ding. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2782–2790, June 2016.
-  A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío. Enhanced Variational Image Dehazing. SIAM Journal on Imaging Sciences, 8(3):1519–1546, Jan. 2015.
-  A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmío. Fusion-Based Variational Image Dehazing. IEEE Signal Processing Letters, 24(2):151–155, Feb. 2017.
-  X. Guo, Y. Li, and H. Ling. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Transactions on Image Processing, 26(2):982–993, Feb. 2017.
-  N. Hautière, J.-P. Tarel, D. Aubert, and E. Dumont. Blind Contrast Enhancement Assessment by Gradient Ratioing at Visible Edges. Image Analysis & Stereology, 27(2):87–95, May 2011.
-  K. He, J. Sun, and X. Tang. Single Image Haze Removal Using Dark Channel Prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12):2341–2353, Dec. 2011.
-  D. J. Jobson, Z. Rahman, and G. A. Woodell. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7):965–976, July 1997.
-  D. J. Jobson, Z. Rahman, and G. A. Woodell. Properties and performance of a center/surround retinex. IEEE Transactions on Image Processing, 6(3):451–462, Mar. 1997.
-  R. Kimmel, M. Elad, D. Shaked, R. Keshet, and I. Sobel. A Variational Framework for Retinex. International Journal of Computer Vision, 52(1):7–23, Apr. 2003.
-  J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski. Deep Photo: Model-based Photograph Enhancement and Viewing. In ACM SIGGRAPH Asia 2008 Papers, SIGGRAPH Asia ’08, pages 116:1–116:10, New York, NY, USA, 2008. ACM.
-  H. Koschmieder. Theorie der horizontalen Sichtweite: Kontrast und Sichtweite. Keim & Nemnich, 1925.
-  L. Kratz and K. Nishino. Factorizing Scene Albedo and Depth from a Single Foggy Image. In 2009 IEEE 12th International Conference on Computer Vision, pages 1701–1708, Sept. 2009.
-  E. H. Land. The retinex theory of color vision. Scientific American, 237(6):108–128, Dec. 1977.
-  E. H. Land. An alternative technique for the computation of the designator in the retinex theory of color vision. Proceedings of the National Academy of Sciences of the United States of America, 83(10):3078–3080, May 1986.
-  E. H. Land and J. J. McCann. Lightness and Retinex Theory. JOSA, 61(1):1–11, Jan. 1971.
-  B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng. AOD-Net: All-In-One Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4770–4778, 2017.
-  L. Li, R. Wang, W. Wang, and W. Gao. A low-light image enhancement method for both denoising and contrast enlarging. In 2015 IEEE International Conference on Image Processing (ICIP), pages 3730–3734, Sept. 2015.
-  Y. Li, S. You, M. S. Brown, and R. T. Tan. Haze visibility enhancement: A Survey and quantitative benchmarking. Computer Vision and Image Understanding, 165:1–16, Dec. 2017.
-  J. J. McCann. Color gamut mapping using spatial comparisons. In Color Imaging: Device-Independent Color, Color Hardcopy, and Graphic Arts VI, volume 4300, pages 126–131. International Society for Optics and Photonics, Dec. 2000.
-  J. J. McCann. Retinex at 50: color theory and spatial algorithms, a review. Journal of Electronic Imaging, 26(3):031204, Feb. 2017.
-  G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the 2013 IEEE International Conference on Computer Vision, ICCV ’13, pages 617–624, Washington, DC, USA, 2013. IEEE Computer Society.
-  S. G. Narasimhan and S. K. Nayar. Chromatic framework for vision in bad weather. In Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000, volume 1, pages 598–605 vol.1, 2000.
-  K. Nishino, L. Kratz, and S. Lombardi. Bayesian Defogging. International Journal of Computer Vision, 98(3):263–278, July 2012.
-  A. Panagopoulos, C. Wang, D. Samaras, and N. Paragios. Estimating Shadows with the Bright Channel Cue. In Trends and Topics in Computer Vision, Lecture Notes in Computer Science, pages 1–12. Springer, Berlin, Heidelberg, Sept. 2010.
-  E. Provenzi, M. Fierro, A. Rizzi, L. De Carli, D. Gadia, and D. Marini. Random spray Retinex: a new Retinex implementation to investigate the local properties of the model. IEEE transactions on image processing: a publication of the IEEE Signal Processing Society, 16(1):162–171, Jan. 2007.
-  E. Provenzi, D. Marini, L. D. Carli, and A. Rizzi. Mathematical definition and analysis of the Retinex algorithm. JOSA A, 22(12):2613–2621, Dec. 2005.
-  Z. Rong and W. L. Jun. Improved wavelet transform algorithm for single image dehazing. Optik - International Journal for Light and Electron Optics, 125(13):3064–3066, July 2014.
-  Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, volume 1, pages I–325–I–332 vol.1, 2001.
-  G. Sharma, W. Wu, and E. N. Dalal. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Research & Application, 30(1):21–30, Feb. 2005.
-  D. Singh and V. Kumar. Comprehensive survey on haze removal techniques. Multimedia Tools and Applications, pages 1–26, Nov. 2017.
-  M. Sulami, I. Glatzer, R. Fattal, and M. Werman. Automatic recovery of the atmospheric light in hazy images. In 2014 IEEE International Conference on Computational Photography (ICCP), pages 1–11, May 2014.
-  R. T. Tan. Visibility in bad weather from a single image. In 2008 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2008.
-  K. Tang, J. Yang, and J. Wang. Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 2995–3002, June 2014.
-  M. F. Tappen, E. H. Adelson, and W. T. Freeman. Estimating Intrinsic Component Images using Non-Linear Regression. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1992–1999, 2006.
-  J. P. Tarel and N. Hautière. Fast visibility restoration from a single color or gray level image. In 2009 IEEE 12th International Conference on Computer Vision, pages 2201–2208, Sept. 2009.
-  J. Vazquez-Corral, S. W. Zamir, A. Galdran, D. Pardo, and M. Bertalmío. Image processing applications through a variational perceptually-based color correction related to Retinex. Electronic Imaging, 2016(6):1–6, Feb. 2016.
-  V. Vonikakis, D. Chrysostomou, R. Kouskouridas, and A. Gasteratos. A biologically inspired scale-space for illumination invariant feature detection. Measurement Science and Technology, 24(7):074024, 2013.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, Apr. 2004.
-  B. Xie, F. Guo, and Z. Cai. Improved Single Image Dehazing Using Dark Channel Prior and Multi-scale Retinex. In Proceedings of the 2010 International Conference on Intelligent System Design and Engineering Application - Volume 01, ISDEA ’10, pages 848–851, Washington, DC, USA, 2010. IEEE Computer Society.
-  Q. Zhao, P. Tan, Q. Dai, L. Shen, E. Wu, and S. Lin. A Closed-Form Solution to Retinex with Nonlocal Texture Constraints. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(7):1437–1444, July 2012.
-  Q. Zhu, J. Mai, and L. Shao. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Transactions on Image Processing, 24(11):3522–3533, Nov. 2015.