Images captured in the wild are often degraded in visibility, color, and contrast by haze, fog, and smoke. Recovering high-quality clear images from degraded images (a.k.a. image dehazing) benefits both low-level image processing and high-level computer vision tasks. For image processing, dehazed images are more visually appealing; for vision systems, which often assume clear images as input, dehazing improves robustness. Typical applications that benefit from image dehazing include image super-resolution, visual surveillance, and autonomous driving. Image dehazing is thus highly desired given the increasing demand for deploying visual systems in real-world applications.
Image dehazing is a challenging problem. The effect of haze is caused by atmospheric absorption and scattering that depend on the distance of the scene points from the camera. In computer vision, the hazy image is often described by a simplified physical model, i.e., the atmospheric scattering model [25, 27, 15, 23],

I(x) = J(x) t(x) + A (1 − t(x)),    (1)

where I(x) is the observed hazy image, J(x) is the scene radiance (clear image), t(x) is the medium transmission map, and A is the global atmospheric light. When the atmosphere is homogeneous, t(x) can be further expressed as a function of the scene depth d(x) and the scattering coefficient β of the atmosphere as t(x) = e^{−β d(x)}. The goal of image dehazing is to recover the clear image J from the hazy image I. Single image dehazing is particularly challenging. It is under-constrained because haze depends on many factors, including the unknown depth information that is difficult to recover from a single image.
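The atmospheric scattering model above can be sketched directly in NumPy. The parameter values below (A = 0.9, β = 1.0) and the toy depth map are illustrative assumptions, not values from any dataset:

```python
import numpy as np

def synthesize_haze(clear, depth, A=0.9, beta=1.0):
    """Apply the atmospheric scattering model I = J*t + A*(1-t),
    with transmission t(x) = exp(-beta * d(x)) for a homogeneous atmosphere.

    clear: HxWx3 scene radiance J in [0, 1]
    depth: HxW scene depth d(x)
    A: global atmospheric light, beta: scattering coefficient
    """
    t = np.exp(-beta * depth)[..., None]  # HxWx1 transmission map
    return clear * t + A * (1.0 - t)

# Toy example: a mid-gray scene whose depth grows from left to right.
J = np.full((4, 4, 3), 0.5)
d = np.tile(np.linspace(0.0, 3.0, 4), (4, 1))
I = synthesize_haze(J, d)
```

At depth 0 the transmission is 1 and the pixel is unchanged; as depth grows, t decays exponentially and the pixel approaches the atmospheric light A, which is exactly the visual effect of haze.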
The atmospheric scattering model (1) has been extensively used in previous methods for single image dehazing [10, 36, 38, 15, 26, 11, 4, 6]. These works either separately or jointly estimate the transmission map and the atmospheric light to generate the clear image from a hazy image. Due to the under-constrained nature of single image dehazing, the success of previous methods often relies on hand-crafted priors such as the dark channel prior, contrast, color-lines, the color attenuation prior, and the non-local prior. However, it is difficult for these priors to always hold in practice. For example, the dark channel prior is known to be unreliable for areas that are similar to the atmospheric light.
More recent works learn convolutional neural networks (CNNs) to estimate components in the atmospheric scattering model for image dehazing [5, 28, 22, 24, 42, 41]. These methods are often trained with limited (synthetic) images and use only a few layers of convolutional filters. The learned shallow networks have limited capacity to represent or process images, making it difficult for them to surpass the prior-based methods. In contrast, training deep neural networks with large-scale data has made significant progress and achieved state-of-the-art performance in many vision tasks [21, 35, 16]. Moreover, the deep features extracted by a pre-trained deep network are used as a powerful image representation in many applications, such as domain-invariant recognition, perceptual evaluation, and characterizing image statistics. More recently, the architecture of CNNs itself has been recognized as a prior for image processing. In this paper, we study how to unleash the power of deep networks for single image dehazing.
We propose an encoder-decoder architecture as an end-to-end system for single image dehazing. We exploit the representation power of deep features by adopting the convolutional layers of the deep VGG net as our encoder, pre-trained on a large-scale image classification task. We add skip connections with instance normalization between the encoder and decoder, and then train the decoder with both a reconstruction loss and a VGG perceptual loss. We show that the recently proposed instance normalization, which was designed for image style transfer, is also effective for image dehazing. The proposed method effectively learns the statistics of clear images from the deep feature representation, which benefits the dehazing of the input image. Our approach outperforms state-of-the-art results by a large margin on a recently released benchmark dataset, and performs surprisingly well in several cross-domain experiments. Our method depends on neither the explicit atmospheric scattering model nor hand-crafted image priors; it only exploits the deep network architecture and pre-trained models to tackle the under-constrained single image dehazing problem. Our simple yet effective network can serve as a strong baseline for future study on this topic.
2 Related work
Traditional methods focus on encoding human knowledge as priors for image processing. Tan assumes higher contrast in clear images and proposes a patch-based contrast-maximization method. Fattal assumes the transmission and surface shading are locally uncorrelated, and estimates the albedo of the scene. The dark channel prior (DCP) assumes local patches contain low-intensity pixels in at least one color channel, and hence estimates the transmission map. Fast visibility restoration (FVR) is a filtering approach based on atmospheric veil inference and corner-preserving smoothing. Meng et al. use boundary constraints and contextual regularization (BCCR), and Chen et al. use gradient residual minimization (GRM) to suppress artifacts. Tang et al. combine priors by learning a random forest model. The color attenuation prior (CAP) assumes a linear model of brightness and saturation and then learns the coefficients. Berman et al. assume each color cluster in the clear image becomes a line in RGB space, and propose non-local image dehazing (NLD).
There is increasing interest in applying convolutional neural networks (CNNs) to image dehazing. DehazeNet and multi-scale convolutional neural networks (MSCNN) are trained to estimate the transmission map. AOD-Net estimates a new variable based on a transformation of the atmospheric scattering model. Zhang and Patel and Li et al. estimate the transmission map and atmospheric light with separate CNNs. Yang et al. adversarially train generators for the components of the atmospheric scattering model. These methods use relatively small CNNs and do not exploit pre-trained deep networks for image representation. A few days before our submission, we noticed a preprint that also uses pre-trained deep networks. Our proposed method is quite different: we use an encoder-decoder with skip connections, while Cheng et al. only use feature maps extracted from one layer of the pre-trained network as input; we study instance normalization and demonstrate its effectiveness; we train an end-to-end system from hazy image to clear image, while Cheng et al. estimate the transmission map and atmospheric light; and we generate impressive results without explicitly applying the atmospheric scattering model.
Deep neural networks can be used as “priors” for image generation and image processing. The architecture of CNNs itself can be a constraint for image processing and image generation [20, 14]. A pre-trained deep network can be used as a general-purpose feature extractor and perceptual metric. The second-order statistics of the features extracted by a pre-trained network describe the style of images. Instance normalization layers, which effectively change the statistics of deep features, are widely used for image style transfer [39, 9, 13, 17].
3 VGG-based U-Net with instance normalization
We propose an end-to-end encoder-decoder network architecture for single image dehazing, as shown in fig. 1. The input is a hazy image, and the output is the desired clear image. We introduce different components of the network in the following paragraphs of this section.
Encoder. Our encoder uses the convolutional layers of the VGG net pre-trained on the ImageNet large-scale image classification task. The VGG net contains five blocks of convolutional layers; we use the first three blocks and the first convolutional layer of the fourth block. Each block contains several convolutional layers, and each convolutional layer is equipped with a ReLU activation function. The widths (numbers of channels) and sizes (heights and widths) of the convolutional layers are shown in fig. 1. There is a maxpooling layer of stride two between blocks, which enlarges the receptive field of higher layers. The width of the convolutional layers is doubled after each subsampling of the feature maps by maxpooling.
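The shape bookkeeping described above (stride-2 maxpool halving the spatial size between blocks, width doubling) can be traced in a few lines of plain Python. The 224x224 input size and the widths 64/128/256/512 are our assumption of the standard VGG configuration, since the exact numbers live in fig. 1:

```python
def encoder_shapes(h=224, w=224):
    """Track (channels, height, width) through a VGG-style encoder:
    blocks of width 64/128/256/512, with a stride-2 maxpool between
    blocks that halves the spatial size while the width doubles."""
    widths = [64, 128, 256, 512]  # blocks 1-3 plus first conv of block 4
    shapes = []
    for i, c in enumerate(widths):
        if i > 0:              # maxpool between blocks
            h, w = h // 2, w // 2
        shapes.append((c, h, w))
    return shapes

print(encoder_shapes())
# -> [(64, 224, 224), (128, 112, 112), (256, 56, 56), (512, 28, 28)]
```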
The pre-trained VGG net is a powerful feature extractor for perceptual metrics and image statistics. Our encoder is deep and wide, and the extracted deep features are capable of capturing the semantic information of the input image. We fix the encoder during training to exploit the power of the pre-trained VGG net as a “prior”, and to avoid overfitting to the relatively small number of samples in the image dehazing dataset.
Decoder and skip connection. Our decoder is designed to be roughly symmetric to the encoder. The decoder also contains four blocks, and each block contains several convolutional layers. The last layer of each of the first three decoder blocks is a transposed convolutional layer that upsamples the feature maps. We use ReLU activations for the convolutional and transposed convolutional layers except for the last layer, where we use Tanh as the activation function.
We add skip connections from the output of the first convolutional layer of encoder blocks 1, 2, and 3 to the input of decoder blocks 4, 3, and 2, respectively, by concatenating (cat) the feature maps. Hence our deep encoder-decoder network has a U-Net [29, 19] structure, except that our skip connections are based on blocks instead of layers. We use trainable instance normalization on the skip connections, and place instance normalization before each convolutional layer in the decoder except the first one. Our deep encoder-decoder network has large capacity, and the skip connections let information flow smoothly, making the large network easy to train.
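The “cat” merge on a skip connection is a channel-axis concatenation, as in U-Net. A minimal NumPy sketch (the shapes below are illustrative, not read from fig. 1):

```python
import numpy as np

# Hypothetical feature maps: a decoder activation and the encoder
# feature map routed to it by a skip connection (N x C x H x W).
decoder_feat = np.zeros((1, 128, 56, 56))
skip_feat = np.zeros((1, 256, 56, 56))   # e.g. from an encoder block

# "cat": concatenate along the channel axis; spatial sizes must match.
merged = np.concatenate([decoder_feat, skip_feat], axis=1)
print(merged.shape)  # (1, 384, 56, 56)
```

The next decoder convolution then consumes the summed channel count, which is why the decoder widths are only roughly, not exactly, symmetric to the encoder.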
Instance normalization. We briefly review instance normalization, and discuss our motivation for applying it to single image dehazing. Let x ∈ R^{N×C×H×W} represent the feature maps of a convolutional layer for a minibatch of samples, where N is the batch size, C is the width of the layer (number of channels), and H and W are the height and width of the feature map. x_{nchw} denotes the element at height h and width w of the c-th channel from the n-th sample, and the instance normalization layer can be written as

y_{nchw} = γ_c (x_{nchw} − μ_{nc}) / sqrt(σ_{nc}^2 + ε) + β_c,    (2)

where γ_c and β_c are learnable affine parameters, ε is a very small constant, and

μ_{nc} = (1/HW) Σ_{h,w} x_{nchw},    σ_{nc}^2 = (1/HW) Σ_{h,w} (x_{nchw} − μ_{nc})^2

represent the mean and variance of each feature map per channel per sample. If we replace the instance-level statistics μ_{nc} and σ_{nc}^2 with batch-level statistics μ_c and σ_c^2 that are estimated over all samples of a minibatch, we obtain the well-known batch normalization layer. Our experimental ablation study shows that instance normalization is preferable to batch normalization for single image dehazing.
The learnable affine parameters of instance normalization shift the first- and second-order statistics (mean and variance) of the feature maps. Instance normalization is effective for image style transfer, and the style of images can be represented by the learned affine parameters. Shifting the statistics of deep features extracted by pre-trained networks has achieved impressive results for arbitrary style transfer. Shifting the statistics of images is intuitive for dehazing; however, it can be difficult to decide the exact amount to shift because haze depends on the unknown depth. The deep features extracted by a pre-trained VGG net contain semantic information that helps to infer the depth underlying the haze, and hence the learned affine parameters can effectively shift the statistics of images. We therefore apply instance normalization to the deep features extracted by the pre-trained VGG net for single image dehazing.
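The instance-vs-batch distinction above comes down to which axes the mean and variance are computed over. A minimal NumPy sketch of both layers (inference-mode only; no running statistics):

```python
import numpy as np

def instance_norm(x, gamma, beta, eps=1e-5):
    """Instance normalization on an N x C x H x W feature map:
    statistics are computed per sample and per channel (over H, W only),
    then a learnable per-channel affine (gamma, beta) is applied."""
    mu = x.mean(axis=(2, 3), keepdims=True)     # shape N x C x 1 x 1
    var = x.var(axis=(2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization differs only in also averaging over the batch."""
    mu = x.mean(axis=(0, 2, 3), keepdims=True)  # shape 1 x C x 1 x 1
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]
```

With gamma = 1 and beta = 0, instance normalization leaves every sample's channels zero-mean and unit-variance regardless of the other images in the minibatch, which is why it suits per-image statistics shifting better than batch normalization.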
Training loss. Our network is trained with both a reconstruction loss and a VGG perceptual loss. Denoting the training pairs of hazy image and clear image as (I_i, J_i), i = 1, …, M, we use the mean squared loss

L = (1/M) Σ_i ( ||f(I_i) − J_i||^2 + λ ||φ(f(I_i)) − φ(J_i)||^2 ),    (3)

where f represents the trainable instance normalization and decoder layers in our network, φ represents the perceptual function, and λ is a hyperparameter. We use the features extracted by the first convolutional layer of the third block of the pre-trained VGG net as the perceptual function.
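The combined loss has a simple shape. In the sketch below, `phi` stands in for the fixed VGG feature extractor, and lam = 0.1 is an illustrative value, not the paper's setting:

```python
import numpy as np

def combined_loss(output, target, phi, lam=0.1):
    """Reconstruction (MSE) loss plus a perceptual loss computed on
    features from a fixed extractor `phi`, weighted by `lam`."""
    rec = np.mean((output - target) ** 2)
    per = np.mean((phi(output) - phi(target)) ** 2)
    return rec + lam * per

# Sanity check: with an identity "feature extractor", the perceptual
# term reduces to another MSE, so the loss equals (1 + lam) * MSE.
val = combined_loss(np.zeros((2, 2)), np.ones((2, 2)), phi=lambda z: z)
```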
4 Experiments

In this section, we conduct experiments on both synthetic and natural images to demonstrate the effectiveness of the proposed method. The atmospheric scattering model is widely used to synthesize images for both training and testing. The hazy images are synthesized from groundtruth clear images and groundtruth depth images [23, 3], or estimated depth images.
We train our model on the recently released RESIDE-standard dataset. RESIDE-standard contains 13,990 images for training and 500 images for testing. These images are generated from existing indoor depth datasets, NYU2 and Middlebury stereo. The atmospheric scattering model is used, where the atmospheric light is randomly chosen from (0.7, 1.0) for each channel, and the scattering coefficient is randomly selected from (0.6, 1.8).
We also apply our model trained on RESIDE-standard for cross-domain evaluation on the D-Hazy, I-Haze, and O-Haze datasets. D-Hazy is another synthetic dataset, which contains 23 images synthesized from Middlebury and 1449 images synthesized from NYU2, with given atmospheric light and scattering coefficient. Though the D-Hazy dataset uses the same clean images as RESIDE-standard, the generated hazy images are quite different. I-Haze and O-Haze are two recently released datasets of natural indoor and outdoor images, respectively. I-Haze contains 35 pairs of indoor images and O-Haze contains 45 pairs of outdoor images, where the hazy images are generated by a physical haze machine.
We compare our results quantitatively and qualitatively with previous methods. We compare with the prior-based methods DCP, FVR, BCCR, GRM, CAP, and NLD. We also compare with the learning-based methods DehazeNet, MSCNN, and AOD-Net. We provide a brief review of these baseline methods in section 2. We use peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics for quantitative evaluation. For the benchmark evaluation on RESIDE-standard, all the learning-based methods are trained on the same dataset. For the cross-domain evaluation on D-Hazy, O-Haze, and I-Haze, we use the best released pre-trained models for the learning-based baseline methods.
We train our model by SGD with minibatch size 16 and learning rate 0.1 for 60 epochs, and linearly decrease the learning rate after 30 epochs. We use momentum 0.9 and weight decay for all our experiments. We will release our PyTorch code and pre-trained models.
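The schedule above can be sketched as a per-epoch learning-rate function. Our reading of “linearly decrease the learning rate after 30 epochs” is a linear ramp to zero at epoch 60, which is an assumption:

```python
def learning_rate(epoch, base_lr=0.1, total=60, decay_start=30):
    """Constant base_lr for the first `decay_start` epochs, then a
    linear decrease toward zero at epoch `total` (one plausible
    reading of "linearly decrease after 30 epochs")."""
    if epoch < decay_start:
        return base_lr
    return base_lr * (total - epoch) / (total - decay_start)

# learning_rate(0) -> 0.1; the rate reaches 0.0 at epoch 60.
```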
4.1 Quantitative evaluation on benchmark dataset
[Table 1: PSNR and SSIM of DCP, FVR, BCCR, GRM, CAP, NLD, DehazeNet, MSCNN, AOD-Net, and ours on the RESIDE-standard test set.]
We present the performance of our network and the baseline methods on the RESIDE-standard benchmark dataset in table 1. Our network and the learning-based baselines [5, 28, 22] are trained on the provided synthetic data and evaluated on the separate testing set. We evaluate our results with the provided metrics, and compare with the reported baseline results. The learning-based methods perform slightly better than the prior-based methods. CAP performs best among the prior-based methods; it has a learning phase for the coefficients of its linear model. DehazeNet performs best among the baseline methods; it uses a relatively small network to predict components of the scattering model.
Our approach outperforms all the baseline methods on both PSNR and SSIM by a large margin. The synthetic data for both training and testing are generated by the atmospheric scattering model, and the baseline methods explicitly use the atmospheric scattering model. In contrast, our approach only uses instance normalization to transform the statistics of deep features. The superior performance of our network on the benchmark dataset demonstrates the effectiveness of deep networks and instance normalization for single image dehazing.
4.2 Ablation study
We provide more discussion of the proposed network. We verify the effectiveness of instance normalization with an ablation study on network structures, as shown in table 2. We use no normalization (NA), batch normalization (BN), or instance normalization (IN) for the skip connections and the decoder, respectively. The normalization layers are added before each convolutional layer of the decoder except for the first layer. All the results in table 2 are obtained using only the reconstruction loss (λ = 0 in loss function (3)) except for the last one, where IN and the combined loss are used. We train and evaluate our network on the RESIDE-standard dataset.
First, comparing the NA results in table 2 with the previous best results in table 1, our encoder-decoder alone achieves only competitive results. Second, adding normalization to either the skip connections or the decoder significantly improves the performance of our network. The normalization layers in the decoder are implicitly applied to the features from the skip connections, which makes the result of normalizing only the decoder slightly better than normalizing only the skip connections. Third, instance normalization works better than batch normalization, which demonstrates the effectiveness of shifting the mean and variance of deep features at the instance level.
Finally, the perceptual loss helps only a little in quantitative evaluation, but it helps generate more visually appealing output images. We show a qualitative example in fig. 2, with the hazy input, the groundtruth clear image, and the outputs of our network without normalization layers or perceptual loss (NA-NA), with instance normalization but no perceptual loss (IN-IN), and with both instance normalization and perceptual loss (IN-IN-Percep). We enlarge the bottom-left corner of the results to show more detail. The results of IN-IN look much better than those of NA-NA, and the enlarged area of the result with perceptual loss (IN-IN-Percep) looks sharper and clearer.
4.3 Cross-domain evaluation
[Table 3: cross-domain results on D-Hazy-NYU, D-Hazy-MB, I-Haze, and O-Haze.]
In this section, we focus on cross-domain performance by evaluating our network, trained on RESIDE-standard, on the cross-domain datasets D-Hazy, I-Haze, and O-Haze. We compare with the baseline methods that have publicly available code; these are strong baselines according to the benchmark evaluation in table 1. For the learning-based methods DehazeNet, MSCNN, and AOD-Net, we use the best models the authors have released. MSCNN and AOD-Net are trained with synthetic images similar to RESIDE-standard, while DehazeNet is trained with patches of web images.
We present the quantitative results in table 3, where bold labels the best results and underline labels the second-best results. Our approach achieves the best, or close to the best, results in all the cross-domain evaluations. Our first observation is that the learning-based methods [5, 28, 22], including ours, generalize reasonably well and perform on par with or better than the prior-based methods [15, 44].
Our network performs well on the cross-domain D-Hazy dataset. In particular, our approach outperforms all baseline methods by a large margin on the images synthesized from the NYU depth dataset. The D-Hazy dataset is synthesized from the same clear images as our training data RESIDE-standard, but uses different parameters of the atmospheric scattering model. Our trained network has effectively captured the statistics of the deep features of the desired clear images.
I-Haze and O-Haze images look quite different from our training images, and our network may have difficulty inferring the exact statistics of deep features for these images. DehazeNet may have gained some advantage on these two datasets because it is trained on patches of web images. Our approach still produces competitive results compared with DehazeNet, and outperforms all the other baselines. Notice again that our network does not use the powerful atmospheric scattering model, and is trained only on a limited number of indoor synthetic images. The cross-domain evaluation further demonstrates the power of deep features and instance normalization in our approach.
4.4 Qualitative evaluation
We present qualitative results from the cross-domain evaluation in fig. 3. The images are from D-Hazy-NYU, D-Hazy-MB, I-Haze, and O-Haze, respectively. We show the hazy image and the groundtruth clear image, and compare our results with DCP, CAP, DehazeNet, MSCNN, and AOD-Net. We use the best released models for the learning-based baselines [5, 28, 22], and train our network on RESIDE-standard.
Our network does its best to remove haze and recover the true colors of the images, as shown in fig. 3. The results of the baselines still contain a large amount of undesired haze and look blurry (rows 2, 3, 4). In particular, the baselines have difficulty in dark areas of the image, and DCP also has difficulty in areas of white and blue walls (rows 1, 3). For the outdoor image (row 4), our network produces a small artifact due to the significant domain difference between the desired images and the indoor training images. Using regularizers such as total variation may help reduce these artifacts, and we plan to investigate this in the future. Our simple yet effective network generates visually appealing results without depending on extra constraints like the atmospheric scattering model.
5 Conclusion

We proposed a simple yet effective end-to-end system for single image dehazing. Our network has an encoder-decoder architecture with skip connections. We manipulated the statistics of deep features extracted by the pre-trained VGG net and demonstrated the effectiveness of instance normalization for image dehazing. Moreover, without explicitly using the atmospheric scattering model, our approach outperforms previous methods by a large margin on the benchmark datasets. Notice that both the training and testing data are generated by the atmospheric scattering model, and the baseline methods all explicitly use the model. Our network effectively learns the transformation from hazy image to clear image with limited synthetic data, and generalizes reasonably well.
The atmospheric scattering model is powerful and has been successfully deployed for image dehazing over the past decade. However, the atmospheric scattering model, as a simplified model, also constrains the learnable components to be combined “linearly” by element-wise multiplication and summation, which may not be ideal for training deep models. Our study sheds light on the power of deep neural networks and of deep features extracted by pre-trained networks for single image dehazing, and encourages rethinking how to effectively exploit the physical model of haze. How will the physical model help when training powerful deep networks? This is still an open question, and our approach serves as a strong baseline for future study.
Our network outperforms state-of-the-art methods by a large margin on the benchmark dataset, and achieves competitive results in cross-domain evaluation. The key idea of our approach is to apply instance normalization to shift the statistics of deep features for image dehazing. In cross-domain evaluation, it may be difficult to effectively infer the desired statistics of deep features of clear images that are quite different from the training data. The generalization ability of our approach could be significantly improved by training on large-scale natural images. In the future, we will explore adversarial training with unpaired hazy and clear images, which are easier to collect from the web.
- Ancuti et al. [2018a] Codruta O Ancuti, Cosmin Ancuti, Radu Timofte, and Christophe De Vleeschouwer. I-haze: a dehazing benchmark with real hazy and haze-free indoor images. arXiv preprint arXiv:1804.05091, 2018a.
- Ancuti et al. [2018b] Codruta O Ancuti, Cosmin Ancuti, Radu Timofte, and Christophe De Vleeschouwer. O-haze: a dehazing benchmark with real hazy and haze-free outdoor images. arXiv preprint arXiv:1804.05101, 2018b.
- Ancuti et al.  Cosmin Ancuti, Codruta O Ancuti, and Christophe De Vleeschouwer. D-hazy: A dataset to evaluate quantitatively dehazing algorithms. In ICIP, pages 2226–2230. IEEE, 2016.
- Berman et al.  Dana Berman, Shai Avidan, et al. Non-local image dehazing. In CVPR, 2016.
- Cai et al.  Bolun Cai, Xiangmin Xu, Kui Jia, Chunmei Qing, and Dacheng Tao. Dehazenet: An end-to-end system for single image haze removal. IEEE TIP, 25(11):5187–5198, 2016.
- Chen et al.  Chen Chen, Minh N Do, and Jue Wang. Robust image and video dehazing with visual artifact suppression via gradient residual minimization. In ECCV, pages 576–591. Springer, 2016.
- Cheng et al.  Ziang Cheng, Shaodi You, Viorela Ila, and Hongdong Li. Semantic single-image dehazing. arXiv preprint arXiv:1804.05624, 2018.
- Donahue et al.  Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric Tzeng, and Trevor Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. In ICML, pages 647–655, 2014.
- Dumoulin et al.  Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. ICLR, 2017.
- Fattal  Raanan Fattal. Single image dehazing. ACM TOG, 27(3):72, 2008.
- Fattal  Raanan Fattal. Dehazing using color-lines. ACM TOG, 34(1):13, 2014.
- Gatys et al.  Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In CVPR, pages 2414–2423. IEEE, 2016.
- Ghiasi et al.  Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, and Jonathon Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. BMVC, 2017.
- Goodfellow et al.  Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
- He et al.  Kaiming He, Jian Sun, and Xiaoou Tang. Single image haze removal using dark channel prior. IEEE TPAMI, 33(12):2341–2353, 2011.
- He et al.  Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
- Huang and Belongie  Xun Huang and Serge Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. In CVPR, pages 1501–1510, 2017.
- Ioffe and Szegedy  Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
- Isola et al.  Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017.
- Kingma and Welling  Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
- Krizhevsky et al.  Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
- Li et al. [2017a] Boyi Li, Xiulian Peng, Zhangyang Wang, Jizheng Xu, and Dan Feng. Aod-net: All-in-one dehazing network. In ICCV, pages 4770–4778, 2017a.
- Li et al. [2017b] Boyi Li, Wenqi Ren, Dengpan Fu, Dacheng Tao, Dan Feng, Wenjun Zeng, and Zhangyang Wang. Reside: A benchmark for single image dehazing. arXiv preprint arXiv:1712.04143, 2017b.
- Li et al.  Chongyi Li, Jichang Guo, Fatih Porikli, Huazhu Fu, and Yanwei Pang. A cascaded convolutional neural network for single image dehazing. arXiv preprint arXiv:1803.07955, 2018.
- McCartney  Earl J McCartney. Optics of the atmosphere: scattering by molecules and particles. John Wiley and Sons, 1976.
- Meng et al.  Gaofeng Meng, Ying Wang, Jiangyong Duan, Shiming Xiang, and Chunhong Pan. Efficient image dehazing with boundary constraint and contextual regularization. In ICCV, pages 617–624. IEEE, 2013.
- Narasimhan and Nayar  Srinivasa G Narasimhan and Shree K Nayar. Vision and the atmosphere. International Journal of Computer Vision, 48(3):233–254, 2002.
- Ren et al.  Wenqi Ren, Si Liu, Hua Zhang, Jinshan Pan, Xiaochun Cao, and Ming-Hsuan Yang. Single image dehazing via multi-scale convolutional neural networks. In ECCV, pages 154–169. Springer, 2016.
- Ronneberger et al.  Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241. Springer, 2015.
- Rudin et al.  Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: nonlinear phenomena, 60(1-4):259–268, 1992.
- Russakovsky et al.  Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. IJCV, 2015.
- Sakaridis et al.  Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Semantic foggy scene understanding with synthetic data. arXiv preprint arXiv:1708.07819, 2017.
- Scharstein et al.  Daniel Scharstein, Heiko Hirschmüller, York Kitajima, Greg Krathwohl, Nera Nešić, Xi Wang, and Porter Westling. High-resolution stereo datasets with subpixel-accurate ground truth. In German Conference on Pattern Recognition, pages 31–42. Springer, 2014.
- Silberman et al.  Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. ECCV, pages 746–760, 2012.
- Simonyan and Zisserman  Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint, 2014.
- Tan  Robby T Tan. Visibility in bad weather from a single image. In CVPR, 2008.
- Tang et al.  Ketan Tang, Jianchao Yang, and Jue Wang. Investigating haze-relevant features in a learning framework for image dehazing. In CVPR, pages 2995–3000, 2014.
- Tarel and Hautiere  Jean-Philippe Tarel and Nicolas Hautiere. Fast visibility restoration from a single color or gray level image. In ICCV, pages 2201–2208. IEEE, 2009.
- Ulyanov et al.  Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Instance normalization: The missing ingredient for fast stylization. CoRR, abs/1607.08022, 2016.
- Ulyanov et al.  Dmitry Ulyanov, Andrea Vedaldi, and Victor S. Lempitsky. Deep image prior. CoRR, abs/1711.10925, 2017.
- Yang et al.  Xitong Yang, Zheng Xu, and Jiebo Luo. Towards perceptual image dehazing by physics-based disentanglement and adversarial training. AAAI, 2018.
- Zhang and Patel  He Zhang and Vishal M Patel. Densely connected pyramid dehazing network. arXiv preprint arXiv:1803.08396, 2018.
- Zhang et al.  Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924, 2018.
- Zhu et al.  Qingsong Zhu, Jiaming Mai, and Ling Shao. A fast single image haze removal algorithm using color attenuation prior. IEEE TIP, 24(11):3522–3533, 2015.