Haze is the obscuration of the lower atmosphere, typically caused by suspended particles in the air such as dust, smoke and other dry particulates. The presence of haze reduces the visibility range, thus degrading the quality of images captured by camera sensors that are subsequently processed by computer vision systems. A sample hazy image is shown on the left side of Figure 1. It can be clearly observed that the presence of haze greatly obscures the background scene. The problem of estimating a clear image from a single hazy input image is commonly referred to as dehazing. Image dehazing has attracted significant interest in the computer vision and image processing communities in recent years [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21].
The deterioration of image quality is captured by the following mathematical model:

I(x) = J(x) t(x) + A (1 − t(x)), (1)
where x denotes a location in image co-ordinates, I(x) represents the observed hazy image, J(x) is the image before degradation, A is the global atmospheric light, and t(x) is the transmission map. The transmission map contains the per-pixel attenuation information that affects the light reaching the camera sensor, and it is a function of the scene depth as shown below:

t(x) = e^{−β d(x)}, (2)
where β is the attenuation coefficient of the atmosphere and d(x) is the depth map. One can view (1) as the superposition of two components: 1. Direct attenuation J(x)t(x), and 2. Airlight A(1 − t(x)). Direct attenuation represents the scattering of light and its eventual decay before it reaches the camera sensor. Airlight results from the scattering of environmental light, causing a shift in the apparent brightness of the scene. Note that airlight is a function of the scene depth d(x) and the global atmospheric light A. As can be observed from Eq. 1, image dehazing is an inherently ill-posed problem which has been addressed in different ways. Many previous methods overcome this issue by relying on extra information such as multiple images of the same scene or depth information to determine a solution. However, no such extra information is available for the problem of single image dehazing. To tackle this issue, different priors have to be incorporated into the optimization framework, such as the dark-channel prior [5], color-lines and the haze-line prior [4]. For example, based on the observation that in captured outdoor images there always exists one channel that is significantly dark, the dark-channel prior [5] is leveraged in the optimization framework to guarantee that the dehazed images satisfy the dark-channel assumption. Different from dark-channel prior,
Berman et al. [4] leverage the haze-line prior, based on the observation that color clusters in a clear image can be approximated by haze-lines in RGB space. More recently, several learning-based methods have also been proposed, where different learning algorithms such as random forest regression and Convolutional Neural Networks (CNNs) are trained for predicting the transmission map [3, 1, 2, 8]. Many existing methods make an important assumption of constant atmospheric light (meaning that the intensity of the atmospheric light is independent of its spatial location) in the image degradation model (1) and tend to follow a two-step procedure. First, they learn the mapping from the input hazy image to its corresponding transmission map, and then, using the estimated transmission map, they calculate the clear image by reformulating Eq. 1 as

J(x) = (I(x) − A) / t(x) + A. (3)
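The degradation model (1)–(2) and its inversion (3) can be sketched in a few lines of NumPy. This is a minimal illustration under the assumption that images are float arrays in [0, 1], not the paper's implementation:

```python
import numpy as np

def synthesize_haze(J, d, A, beta):
    """Forward model of Eqs. (1)-(2): t(x) = exp(-beta * d(x)),
    I(x) = J(x) t(x) + A (1 - t(x)).
    J: clear image in [0, 1], shape (H, W, 3); d: depth map, shape (H, W)."""
    t = np.exp(-beta * d)                          # transmission map
    I = J * t[..., None] + A * (1.0 - t[..., None])
    return I, t

def dehaze(I, t, A, t_min=0.1):
    """Inversion of Eq. (3): J(x) = (I(x) - A) / t(x) + A, with the
    transmission clipped away from zero for numerical stability."""
    t = np.clip(t, t_min, 1.0)
    return np.clip((I - A) / t[..., None] + A, 0.0, 1.0)
```

Given the true transmission map and atmospheric light, the inversion recovers the clear image exactly; the difficulty of dehazing lies entirely in estimating t(x) and A from the hazy image alone.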
As a result, most of the previous methods consider transmission map estimation and dehazing as two separate tasks, with the exception of Li et al. By doing so, they are unable to accurately capture the transformation between the transmission map and the dehazed image. Motivated by this observation, we relax the constant atmospheric light assumption [24, 25] and propose to jointly learn the transmission map and the dehazed image from an input hazy image using a deep CNN-based network. Relaxing the constant atmospheric light hypothesis within a certain adjustable limit not only allows us to exploit the benefits of multi-task learning, but also enables us to regress on losses defined in the image space. By enforcing the network to learn the transmission map, we still follow the popular image degradation model (1). This joint learning enables the network to implicitly learn the atmospheric light, avoiding the need for manual estimation. On the other hand, previous learning-based CNN methods [1, 2] use the Euclidean loss to generate the corresponding transmission map, which may result in blurry output and hence poor quality dehazed images. To tackle this issue, we incorporate a gradient loss combined with an adversarial loss to generate better transmission maps with sharper edges.
Figure 2 gives an overview of the proposed single image dehazing method. Our network consists of three parts: 1. Transmission map estimation, 2. Hazy image feature extraction, and 3. Dehazing network guided by transmission map and hazy image features. The transmission map estimation is learned using a combination of adversarial loss, gradient loss and pixel-wise Euclidean loss. The transmission maps from this module are concatenated with the output of hazy image feature extraction module and processed by the dehazing network. Hence, the transmission maps are also involved in the dehazing procedure via the concatenation operator. The dehazing network is learned by optimizing a weighted combination of perceptual loss and pixel-wise Euclidean loss to generate perceptually better results. Shown in Figure 1 is a sample dehazed image using the proposed method.
This paper makes the following contributions:
A novel joint transmission map estimation and image dehazing using deep networks is proposed. This is enabled by relaxing the constant atmospheric light assumption, thus allowing the network to implicitly learn the transformation from input hazy image to transmission map and transmission map to dehazed image.
We propose to use the recently introduced Generative Adversarial Network (GAN) framework for learning the transmission map.
By performing a joint learning of transmission map and image dehazing, we are able to minimize losses defined in the image space such as perceptual loss and pixel-wise Euclidean loss, thereby generating perceptually better results with high quality details.
Extensive experiments on synthetic and real image datasets are conducted to demonstrate the effectiveness of the proposed method.
II Related Work
We briefly review recent works on image dehazing and some commonly used losses in various CNN-based image reconstruction tasks.
II-A Single Image Dehazing
Early methods tend to address the dehazing problem by incorporating certain prior assumptions. For example, the authors in  recover the contrast of each patch, relying on the assumption that haze greatly decreases the contrast of color images. Kratz and Nishino  proposed to model the image with a factorial Markov random field in which the scene albedo and depth are two statistically independent latent layers. He et al.  proposed the dark-channel prior based on the surprising observation that RGB images of outdoor scenes tend to have one channel that is significantly dark. Building on the dark-channel prior, Meng et al.  imposed a specific boundary constraint during the estimation of the transmission map. More recently, Berman et al.  proposed a non-local prior method based on the observation that the colors of a haze-free image can be well represented by a few hundred distinct colors that fall into several tight clusters in RGB space.
The success of CNNs in modeling non-linear mappings between input and output has also inspired researchers to explore CNN-based algorithms for low-level vision tasks such as image dehazing [1, 2, 8]. Unlike previous prior-based methods for estimating the transmission map, Cai et al.  train an end-to-end CNN to directly estimate the transmission map from the input hazy image. More recently, Ren et al.  proposed a multi-scale deep architecture to directly regress the transmission map in a coarse-to-fine fashion. However, the methods of both Ren et al.  and Cai et al.  still follow a two-step procedure, and hence the whole algorithm is not optimized end-to-end. Most recently, Li et al.  proposed an all-in-one dehazing network, where a linear embedding is leveraged to encode the transmission map and the atmospheric light into a single variable. Though these CNN-based learning methods achieve superior performance over recent state-of-the-art methods, they limit their capabilities by learning a mapping only between the input hazy image and the transmission map. This is mainly because these methods are based on the popular image degradation model (1), which assumes a constant atmospheric light. In contrast, we relax this assumption, thus enabling the network to learn a transformation from the input hazy image to the transmission map and from the transmission map to the dehazed image. By doing this, we are also able to train the network with losses defined in the image domain. In the following sub-sections, two different losses that we use to improve the performance of the proposed network are reviewed.
II-B Loss Functions
Loss functions form an important and integral part of the learning process, especially in CNN-based reconstruction tasks. Initial work on CNN-based image regression tasks optimized a pixel-wise L2-norm (Euclidean loss) or L1-norm between the predicted and ground truth images [30, 31, 32]. Since these losses operate at the per-pixel level, their ability to capture high-level perceptual/contextual details such as sharp edges and complicated contours is limited, and they tend to produce blurred results. To overcome this issue, we use two different loss functions: an adversarial loss and a perceptual loss for learning the transmission map and the dehazed image, respectively.
II-B1 Adversarial loss
The adversarial loss, formulated in the Generative Adversarial Networks (GAN) work by Goodfellow et al. , has been widely used in generating realistic images. A GAN consists of a generator and a discriminator that are jointly optimized. While the generator's goal is to synthesize images that are similar in distribution to the training images, the discriminator's job is to identify whether the images fed to it are real or synthesized (fake). After the success of this method in generating realistic images, the concept has been explored in different formulations for various applications such as data augmentation , paired and unpaired 2D/3D image-to-image translation [35, 36, 37, 38], image super-resolution [40, 41, 42] and image de-raining . In our work, we propose to use the GAN framework as an additional loss function to guide the learning of the transmission map, which, when optimized appropriately, will generate realistic transmission maps.
II-B2 Perceptual loss
Many researchers have argued and demonstrated through their results that it is often better to optimize a perceptual loss function in various applications [44, 45, 46, 47]. The perceptual loss is usually defined using high-level features extracted from a pre-trained convolutional network, with the aim of minimizing the perceptual difference between the reconstructed image and the ground truth image. Perceptually superior results were obtained for both super-resolution and artistic style transfer [48, 49, 15, 50]. In this work, a perceptual loss based on the VGG-16 architecture  is used to train the network for dehazing.
III Proposed Method
The proposed network is illustrated in Figure 2 and consists of the following modules: 1. Transmission map estimation, 2. Hazy image feature extraction, and 3. Transmission-guided image dehazing. The first module learns to estimate transmission maps from corresponding input hazy images, the second module extracts haze-relevant features from the input hazy image, and the third module learns to perform image dehazing by combining the features extracted from the hazy image with the estimated transmission map. In what follows, we explain these modules in detail.
III-A Transmission Map Estimation
The task of predicting the transmission map from a given input hazy image is considered a pixel-level image regression task. In other words, the aim is to learn a pixel-wise non-linear mapping from a given input image to the corresponding transmission map by minimizing the loss between them. In contrast to the method of Ren et al. in , our method uses an adversarial loss in addition to the pixel-wise Euclidean loss to learn better quality transmission maps. Also, the network architecture used in this work is very different from the one used in .
For incorporating the adversarial loss, the transmission map estimation is learned in the Conditional Generative Adversarial Network (CGAN) framework . Similar to earlier works on GANs for image reconstruction tasks [43, 53, 39], the proposed network for learning the transmission map consists of two sub-networks: a generator G and a discriminator D. The goal of the GAN is to train G to produce samples from the training distribution such that the synthesized samples are indistinguishable from the actual distribution by the discriminator D. The generator sub-network is motivated by the success of encoder-decoder structures in pixel-wise image reconstruction [54, 55, 53]. In this work, we adopt a 'U-Net'-based structure  as the generator for transmission map estimation. Rather than concatenating the symmetric layers during training, shortcut connections
are used to connect the symmetric layers, with the aim of addressing the vanishing gradient problem in deep networks. To better capture semantic information and make the generated transmission map indistinguishable from the ground truth transmission map, a CNN-based differentiable discriminator is used as 'guidance' to steer the generator toward better transmission maps. The proposed generator network is as follows (shortcut connections are omitted here):
where Conv represents a convolutional layer, TConv represents a transposed convolution layer, PReLU  indicates the parametric ReLU non-linearity, and BN indicates batch normalization. The number in brackets represents the number of output feature maps of the corresponding layer.
To ensure that the estimated transmission map is indistinguishable from the ground truth, a learned discriminator sub-network is designed to classify whether each input image is real or fake. Inspired by the success of patch discriminators in distinguishing real from fake, we adopt a 70×70 patch discriminator, where 70×70 indicates the receptive field of the discriminator, to generate visually pleasing and sharper results. Other ways of making images sharper have also been explored in . The structure of the discriminator follows the standard 70×70 patch-discriminator design.
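The 70×70 receptive field follows from composing the discriminator's kernels and strides back-to-front. The layer configuration below is a common 70×70 patch-discriminator stack and is an illustrative assumption, not the paper's exact architecture:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers, given (kernel, stride)
    pairs ordered input-to-output. Computed back-to-front using
    rf_in = (rf_out - 1) * stride + kernel."""
    rf = 1
    for k, s in reversed(layers):
        rf = (rf - 1) * s + k
    return rf

# A typical 70x70 patch-discriminator stack (assumption): three stride-2
# 4x4 convolutions followed by two stride-1 4x4 convolutions.
patchgan_70 = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```

Each output score of such a discriminator therefore judges the realism of one 70×70 patch of the input, which is what encourages locally sharp structure in the generated transmission maps.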
III-B Hazy Feature Extraction and Guided Image Dehazing
A possible solution to image dehazing is to directly learn an end-to-end non-linear mapping between the estimated transmission map and the desired dehazed output. However, as shown in , while learning a mapping from a transmission map to an RGB color image is possible, one may lose information due to the absence of albedo and lighting information.
To generate better dehazed images and make the whole process (estimation of the transmission map and of the dehazed image) end-to-end, we propose a deep transmission-guided network for single image dehazing, relaxing the assumption of constant atmospheric light. Inspired by guided filtering [62, 63, 64], where a guidance image is leveraged to steer the generation of high-quality results (e.g. depth maps), a set of convolutional layers with symmetric skip connections is stacked at the front and serves as a hazy image feature extractor. The hazy feature extraction part extracts deep features from the input hazy image. These feature maps are concatenated with the estimated transmission map, and the concatenation is fed into the guided image dehazing module. This module consists of another set of CNN layers with non-linearities and essentially acts as a fusion CNN whose task is to learn a mapping from the transmission map and the high-dimensional feature maps to the dehazed image. (Note that our network is quite different from the network proposed in  in the sense that the proposed network is a multi-task learning network with a single input, while the network in  is a single-task network with two inputs.) To learn this network, a perceptual loss based on the VGG-16 architecture  is used in addition to the pixel-wise Euclidean loss. The use of the perceptual loss greatly enhances the visual appeal of the results. Details of the network structure for the hazy feature extraction and guided image dehazing module are as follows:
where Concat indicates the concatenation operator.
In summary, a non-linear mapping from the input hazy image and transmission map to the dehazed image is learned in a multi-task, end-to-end fashion. By learning this mapping, we enforce our network to implicitly learn the estimation of the atmospheric light, thereby avoiding the "manual" estimation followed by some of the existing methods.
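The concatenation that feeds the guided dehazing module can be sketched in a couple of lines; the array shapes here are illustrative assumptions:

```python
import numpy as np

def fuse_features(feats, t_map):
    """Concatenate deep features extracted from the hazy image with the
    estimated transmission map along the channel axis; the result is fed
    to the guided dehazing module.
    feats: (C, H, W) feature maps; t_map: (H, W) transmission map."""
    return np.concatenate([feats, t_map[None, ...]], axis=0)
```

The transmission map thus participates in dehazing as an extra input channel, so gradients from the dehazing loss also flow back into the transmission estimation branch through this operator.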
III-C Training Loss
As discussed earlier, the proposed method involves joint learning of two tasks: transmission map estimation and dehazing. Accordingly, to train the network, we define two losses, L_t and L_d, respectively for the two tasks.
III-C1 Transmission map loss
To overcome the issue of blurred results caused by minimizing only the Euclidean error, the transmission map estimation network is learned by minimizing a weighted combination of the Euclidean error, an adversarial error and a gradient loss. The transmission map loss is defined as

L_t = L_{E,t} + λ_a L_A + λ_g L_{GRA}, (4)
where λ_a and λ_g are two weights, L_{E,t} is the pixel-wise Euclidean loss, L_A is the adversarial loss and L_{GRA} is the two-directional gradient loss. The first two losses are defined as follows:

L_{E,t} = (1 / WH) Σ_{w,h} || G_t(I)^{w,h} − t^{w,h} ||_2^2, (5)

L_A = −log( D(G_t(I)) ), (6)
where I is the 3-channel input hazy image, t is the ground truth transmission map, W × H indicates the dimension of the input image and transmission map, G_t is the generator sub-network that produces the transmission map, and D is the discriminator sub-network. The two-directional gradient loss, which has been discussed in other applications [65, 66], is defined as:

L_{GRA} = Σ_{w,h} || H_h(G_t(I))^{w,h} − H_h(t)^{w,h} ||_2^2 + || H_v(G_t(I))^{w,h} − H_v(t)^{w,h} ||_2^2, (7)
where H_h and H_v are operators that compute image gradients along rows (horizontal) and columns (vertical), respectively, and W × H indicates the width and height of the output feature map.
Traditional techniques for transmission map estimation employ only the Euclidean loss (L_{E,t}) to learn the network weights. However, the Euclidean loss is known to introduce blur in the generated output. Hence, the additional loss functions (adversarial loss and gradient loss) incorporate further constraints into the learning framework. Specifically, the adversarial loss (L_A) enforces the network to generate transmission maps that are closer to the input distribution, and the gradient loss (L_{GRA}) ensures consistency between the gradients of the target and estimated transmission maps. The weights are set using validation.
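As a concrete sketch of the combined transmission loss of Eq. (4), the terms can be written as follows. This is an illustrative NumPy version, not the training code: simple finite differences stand in for the gradient operators, and the discriminator's output probability for the predicted map is assumed given as d_pred:

```python
import numpy as np

def gradient_loss(pred, gt):
    """Two-directional gradient loss: squared error between horizontal and
    vertical image gradients of the predicted and ground-truth maps."""
    lh = np.sum((np.diff(pred, axis=1) - np.diff(gt, axis=1)) ** 2)
    lv = np.sum((np.diff(pred, axis=0) - np.diff(gt, axis=0)) ** 2)
    return (lh + lv) / pred.size

def transmission_loss(pred, gt, d_pred, lam_a, lam_g):
    """Weighted combination of Eq. (4): Euclidean + adversarial + gradient.
    d_pred plays the role of D(G_t(I)) in the adversarial term."""
    l_e = np.mean((pred - gt) ** 2)
    l_a = -np.log(d_pred)          # adversarial term, -log D(G_t(I))
    return l_e + lam_a * l_a + lam_g * gradient_loss(pred, gt)
```

Note that the gradient term is invariant to a constant intensity offset between prediction and target; it penalizes only disagreement at edges, which is exactly the sharpening behavior the text describes.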
III-C2 Dehazing loss
The dehazing network is learned by minimizing a weighted combination of the pixel-wise Euclidean loss and the perceptual loss between the ground-truth dehazed image and the network output, defined as follows:

L_d = L_{E,d} + λ_p L_P, (8)
where λ_p is a weighting factor, L_{E,d} is the pixel-wise Euclidean loss and L_P is the perceptual loss; they are respectively defined as

L_{E,d} = (1 / WH) Σ_{w,h} || G_d(I)^{w,h} − J^{w,h} ||_2^2, (9)

L_P = (1 / (C_j W_j H_j)) || V(G_d(I)) − V(J) ||_2^2, (10)
where I is the 3-channel input hazy image, J is the ground truth dehazed image, W × H is the dimension of the input image and the dehazed image, G_d denotes the proposed dehazing network, V represents a non-linear CNN transformation, and C_j, W_j, H_j are the dimensions of a certain high-level layer of V. Similar to the idea proposed in , we aim to minimize the distance between high-level features along with the pixel-wise Euclidean loss. In our method, we compute the feature loss at layer relu3_1 of the VGG-16 model  (https://github.com/ruimashita/caffe-train/blob/master/vgg.trainval.prototxt). Note that the dehazing loss is also propagated to the transmission estimation part.
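The perceptual term of Eq. (10) reduces to a normalized squared distance between feature maps. The sketch below assumes the VGG-16 features for both images have already been extracted (a real pipeline would run both images through a frozen VGG-16 up to the chosen layer):

```python
import numpy as np

def perceptual_loss(feat_pred, feat_gt):
    """Squared L2 distance between CNN feature activations (e.g. VGG-16
    relu3_1), normalized by C_j * W_j * H_j as in Eq. (10).
    feat_pred, feat_gt: feature maps of shape (C, H, W)."""
    c, h, w = feat_pred.shape
    return np.sum((feat_pred - feat_gt) ** 2) / (c * h * w)
```

Because the comparison happens in a deep feature space rather than pixel space, small spatial misalignments are tolerated while semantic and structural differences are penalized, which is what improves the visual quality of the dehazed output.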
Relaxing the condition of constant atmospheric light enables the network to be trained in an end-to-end fashion, thus allowing the network to implicitly learn the transformation from the input hazy image to the transmission map and from the transmission map to the dehazed image. While this allows more flexibility in the learning process, it also makes the model more complex. Hence, to efficiently learn the network parameters, the transmission map is explicitly estimated, since it preserves information about the portion of unscattered light that reaches the camera. Furthermore, the additional adversarial and gradient loss functions introduce strong regularization, enabling better estimation of the transmission map.
IV Experimental Results

In this section, we present the details and results of various experiments conducted on synthetic and real datasets that contain a variety of hazy conditions. First, we describe the datasets used in our experiments. Then, we discuss the details of the training procedure. Next, we discuss the results of the ablation study conducted to understand the improvements obtained by the various modules of the proposed method. Finally, we compare the results of the proposed network with recent state-of-the-art methods. Through these experiments, we attempt to demonstrate the superiority of the proposed method and the effectiveness of its various components.
IV-A Datasets

Since it is extremely difficult to collect a dataset that contains a large number of hazy/clear/transmission-map image pairs, training and test datasets are synthesized using (1), following the idea proposed in [3, 2, 1]. All the training and test samples are obtained from the NYU Depth dataset . More specifically, given a haze-free image, we randomly sample four pairs of atmospheric light A and scattering coefficient β to generate the corresponding hazy images and transmission maps. An initial set of 600 images is randomly chosen from the NYU dataset. From each image in this initial set, 4 training images are generated using randomly sampled atmospheric light and scattering coefficient, yielding a total of 2400 training images. In a similar way, a test dataset consisting of 300 images is obtained. We ensure that none of the training images are in the test set. By varying A and β, we generate training data with a variety of different conditions.
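The synthesis procedure above can be sketched as follows. The sampling ranges for A and β are illustrative assumptions; the paper's exact ranges are not reproduced here:

```python
import numpy as np

def make_training_set(clear_images, depth_maps, per_image=4, seed=0,
                      A_range=(0.5, 1.0), beta_range=(0.4, 1.6)):
    """Generate (hazy, transmission, clear) triplets via the model (1)-(2),
    sampling atmospheric light A and scattering coefficient beta per sample.
    clear_images: list of (H, W, 3) arrays; depth_maps: list of (H, W)."""
    rng = np.random.default_rng(seed)
    samples = []
    for J, d in zip(clear_images, depth_maps):
        for _ in range(per_image):
            A = rng.uniform(*A_range)
            beta = rng.uniform(*beta_range)
            t = np.exp(-beta * d)
            I = J * t[..., None] + A * (1.0 - t[..., None])
            samples.append((I, t, J))
    return samples
```

With 600 source images and four samples per image, this procedure yields the 2400 training triplets described above.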
As discussed in [1, 3], the image content is independent of its corresponding depth. Even though the training images are from an indoor dataset  and hence the depths of all the images are relatively shallow, we can adjust the value of the attenuation coefficient β to vary the haze concentration, ensuring that the datasets can also be used for outdoor image dehazing. The experimental results demonstrate the effectiveness of the resulting training datasets.
To demonstrate the effectiveness of the proposed method on real-world data, we also created a test dataset including 30 hazy images downloaded from the Internet.
IV-B Training Details
The entire network is trained on an Nvidia Titan-X GPU using the Torch framework. The loss weights λ_a and λ_g for transmission map estimation and λ_p for dehazing are chosen via validation. During training, we use ADAM  as the optimization algorithm with a fixed learning rate and a batch size of 10 images. All the training samples are resized to a fixed resolution. To efficiently train the multi-task network, we leverage a stage-wise training strategy. First, the transmission map estimation module is trained using the loss in Eq. 4. Then, the entire network is fine-tuned using both Eq. 8 and Eq. 4.
IV-C Ablation Study
In order to demonstrate the improvements obtained by different modules for both transmission maps and dehazed images, we conduct two ablation studies for estimating transmission maps and dehazed images, separately.
Ablation 1: This ablation study demonstrates the effectiveness of different modules in the transmission map estimation block and it consists of the following experiments:
1) Transmission map estimation using only L2 loss (T-L2),
2) Transmission map estimation using L2 loss and gradient loss (T-L2-G), and
3) Transmission map estimation using L2 loss, gradient loss and adversarial loss (T-L2-G-GAN).
Sample results are shown in Fig. 3. It can be observed that the introduction of the gradient loss (T-L2-G) eliminates halo artifacts near complicated edges. Furthermore, the introduction of the discriminator (GAN framework, T-L2-G-GAN) effectively refines local regions and enables sharper reconstructions, thereby preserving the structure of each object. Results of the quantitative analysis on synthetic datasets are presented in Table I. The effect of the different modules of the proposed network can be clearly observed from this table.
Ablation 2: Similarly, another ablation study is conducted to demonstrate the improvements obtained by different modules for dehazing images. This ablation study involves the following experiments:
1) Image dehazing using L2 loss without estimation of transmission map (I-L2-noT),
2) Image dehazing using L2 loss with estimation of transmission map (I-L2-T), and
3) Image dehazing using L2 loss and perceptual loss with estimation of transmission map (I-L2-Per-T).
Sample results are shown in Fig. 4. It can be observed that the method without transmission estimation (I-L2-noT) is unable to accurately estimate the haze level and depth (both are inherently captured in the transmission map), and hence its dehazed results tend to contain some color distortion. The introduction of the transmission map estimation branch helps to generate better quality images, as can be seen by comparing the second and third columns of Fig. 4. Furthermore, the addition of the perceptual loss (I-L2-Per-T) generates better dehazed images with high quality details (observe the zoomed-in parts of Fig. 4). We also compare the inference running time for each ablation study, as tabulated in Table III. It can be observed that multi-task learning results in a slight increase in training complexity and inference time; however, it leads to substantial improvements in dehazing quality. The introduction of the different loss functions, such as the gradient loss and the perceptual loss, increases the training time but does not affect the inference time.
IV-D Comparison with State-of-the-art Methods
To demonstrate the improvements achieved by the proposed method, it is compared against recent state-of-the-art methods on synthetic and real datasets.
Evaluation on synthetic dataset: The synthetic dataset, described in Section IV-A, is used for training and evaluating the network. Due to the availability of ground-truth images, we conduct both qualitative and quantitative evaluations.
Figure 5 shows results of the proposed method compared with recent state-of-the-art methods [5, 70, 4, 71, 1, 8] on a sample image from the test split of the synthetic dataset. After carefully analyzing these results, we observed that the recent best methods result in either incomplete removal of haze or over-correction, which reduces the visual appeal of the image. Even though some of these methods achieve good performance in the presence of moderate haze, their dehazed results tend to contain color shift. In contrast, the proposed method achieves better dehazing for a variety of haze concentrations. Similar observations hold for the quality of the transmission maps estimated by the proposed multi-task method compared with the existing methods. It can be noted that the previous methods are unable to accurately estimate the relative depth in a given image, resulting in lower quality dehazed images. In contrast, the proposed method not only estimates high quality transmission maps, but also achieves better quality dehazing.
The quantitative performance of the proposed method is compared against five state-of-the-art methods [5, 70, 1, 4, 8] using SSIM . The quantitative results are tabulated in Table IV. It can be observed from this table that the proposed method achieves the best performance in terms of SSIM. Note that we have attempted to obtain the best possible results for the other methods by fine-tuning their respective parameters based on the source code released by the authors, and we kept the parameters consistent across all experiments. As the code released by [1, 8] does not output the predicted transmission map, the transmission estimation results for [1, 8] are not included in the discussion.
Furthermore, we also evaluate the proposed method on the synthetic images used by previous methods [2, 8]. Results are shown in Fig. 6. It can be clearly observed that Berman et al. [4, 71] and the proposed method achieve the best visual performance among all. However, looking closer at the upper right part of Fig. 6, the method of Berman et al. [4, 71] tends to introduce color shift and hence degrades the overall performance.
Evaluation on real dataset: In addition to the synthetic dataset, we also conducted evaluation experiments on a real dataset consisting of hazy images from the real world, collected from the Internet. Since ground truths are not available for such images, we do not use this dataset for training and perform only qualitative evaluations.
A comparison of the results of various approaches on four sample images used in earlier methods is shown in Figure 7. Yellow rectangles highlight the improvements obtained by the proposed method. Though the existing methods seem to achieve good visual performance in the top row, it can be observed from the highlighted region that these methods may produce undesirable effects such as artifacts and color over-saturation in the output images. For the bottom two rows, the existing methods either make the image darker due to overestimation of dark pixels or are unable to perform complete dehazing. For example, learning-based methods [1, 8] underestimate the thickness of the haze, resulting in partial dehazing. Even though Berman et al. [4, 71] leave less haze in the output, the resulting image tends to be darker, as the haze-line is difficult to detect under heavy haze conditions. In contrast, the proposed method achieves near-complete dehazing with visually appealing results, avoiding undesirable effects in the output images.
Furthermore, we also illustrate three qualitative examples of dehazing results on real-world hazy images by different methods. The methods of He et al. , Li et al.  and Ren et al.  perform well but tend to leave haze in the output, leading to a loss of color contrast. Even though Berman et al. [4, 71] perform better, they tend to over-estimate the haze level, resulting in darker output images. Overall, the proposed method is able to overcome the problems of the other methods and achieves the best visual performance.
In Fig. 9, we present a very challenging hazy image. The visual comparison here also confirms our findings from the previous experiments. In particular, from the highlighted yellow rectangle, it can be observed that our method better recovers the Chinese characters hidden behind the haze.
Through these experiments on the real dataset, we demonstrate that the proposed method, although trained on a synthetic dataset, generalizes well to real-world conditions.
Run Time Comparison: The proposed method is also evaluated for its computational complexity. On average, our method processes 512×512 images at 18 frames per second (fps), thus providing real-time performance. Furthermore, the proposed method is compared against several recent methods, as shown in Table V. The running time of the proposed method is comparable to that of Li et al. , but with better performance. On average, it takes about 3.3 s to dehaze one image.
IV-E Failure Cases
Although the proposed method generalizes well to most outdoor scenes, it can saturate certain regions of specific images. For example, as shown in the dehazed images in Fig. 11, the central part of the sky is not recovered appropriately and looks over-exposed. This is primarily due to the rarity of similar samples during training, and is a common problem in most existing methods.
Though the use of synthetic samples avoids the need for expensive annotations and has proven effective for single image dehazing, the performance gap between results on synthetic and real-world images illustrates some of the limitations of learning from synthetic data. Hence, it is necessary to explore new ways of leveraging synthetic data in order to obtain better generalization to real-world images.
This paper presented a new multi-task end-to-end CNN-based network that jointly learns to estimate the transmission map and perform image dehazing. In contrast to existing methods that treat transmission estimation and single image dehazing as two separate tasks, we bridge the gap between them through multi-task learning. This is achieved by relaxing the constant atmospheric light assumption in the standard image degradation model. In other words, the network is enforced to estimate the transmission map and use it for the subsequent dehazing, thereby following the standard image degradation model. Experiments were conducted on multiple datasets (synthetic and real) and the results were compared against several recent methods. Furthermore, detailed ablation studies were conducted to understand the significance of the different components of the proposed method.
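The final dehazing step in such a pipeline amounts to inverting the standard degradation model I(x) = J(x)t(x) + A(1 − t(x)) given an estimated transmission map and atmospheric light. A minimal NumPy sketch of that inversion follows; the function name and the t_min floor are illustrative choices, not the authors' implementation.

```python
import numpy as np

def recover_scene(I, t, A, t_min=0.1):
    """Invert the haze model I = J * t + A * (1 - t) to recover J.

    I: hazy image, float array in [0, 1], shape (H, W, 3)
    t: estimated transmission map, shape (H, W)
    A: global atmospheric light, shape (3,)
    t_min: floor on t to avoid amplifying noise where haze is dense
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast t over color channels
    J = (I - A * (1.0 - t)) / t             # per-pixel model inversion
    return np.clip(J, 0.0, 1.0)
```

Clamping the transmission away from zero is a common safeguard, since the division magnifies estimation noise in heavily hazed regions.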
This work was supported by an ARO grant W911NF-16-1-0126.
-  W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in ECCV. Springer, 2016, pp. 154–169.
-  B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE TIP, vol. 25, no. 11, pp. 5187–5198, 2016.
-  K. Tang, J. Yang, and J. Wang, “Investigating haze-relevant features in a learning framework for image dehazing,” in CVPR, 2014, pp. 2995–3000.
-  D. Berman, S. Avidan et al., “Non-local image dehazing,” in CVPR, 2016, pp. 1674–1682.
-  K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. on PAMI, vol. 33, no. 12, pp. 2341–2353, 2011.
-  J. Kopf, B. Neubert, B. Chen, M. Cohen, D. Cohen-Or, O. Deussen, M. Uyttendaele, and D. Lischinski, “Deep photo: Model-based photograph enhancement and viewing,” in ACM TOG, vol. 27, no. 5. ACM, 2008, p. 116.
-  Z. Li, P. Tan, R. T. Tan, D. Zou, S. Zhiying Zhou, and L.-F. Cheong, “Simultaneous video defogging and stereo reconstruction,” in CVPR, 2015, pp. 4988–4997.
-  B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “An all-in-one network for dehazing and beyond,” ICCV, 2017.
-  X. Yang, Z. Xu, and J. Luo, “Towards perceptual image dehazing by physics-based disentanglement and adversarial training,” 2018.
-  J.-H. Kim, W.-D. Jang, J.-Y. Sim, and C.-S. Kim, “Optimized contrast enhancement for real-time image and video dehazing,” Journal of Visual Communication and Image Representation, vol. 24, no. 3, pp. 410–425, 2013.
-  A. Galdran, J. Vazquez-Corral, D. Pardo, and M. Bertalmio, “Fusion-based variational image dehazing,” IEEE Signal Processing Letters, vol. 24, no. 2, pp. 151–155, 2017.
-  W. Wang, X. Yuan, X. Wu, and Y. Liu, “Fast image dehazing method based on linear transformation,” IEEE Transactions on Multimedia, vol. 19, no. 6, pp. 1142–1155, 2017.
-  H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 3194–3203.
-  W. Ren, J. Zhang, X. Xu, L. Ma, X. Cao, G. Meng, and W. Liu, “Deep video dehazing with semantic segmentation,” IEEE Transactions on Image Processing, vol. 28, no. 4, pp. 1895–1908, 2019.
-  H. Zhang, V. Sindagi, and V. M. Patel, “Multi-scale single image dehazing using perceptual pyramid deep network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2018, pp. 902–911.
-  R. Li, J. Pan, Z. Li, and J. Tang, “Single image dehazing via conditional generative adversarial network,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8202–8211.
-  D. Yang and J. Sun, “Proximal dehaze-net: A prior learning-based deep network for single image dehazing,” in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 702–717.
-  X. Liu, M. Suganuma, Z. Sun, and T. Okatani, “Dual residual networks leveraging the potential of paired operations for image restoration,” arXiv preprint arXiv:1903.08817, 2019.
-  Z. Xu, X. Yang, X. Li, X. Sun, and P. Harbin, “Strong baseline for single image dehazing with deep features and instance normalization.”
-  D. Chen, M. He, Q. Fan, J. Liao, L. Zhang, D. Hou, L. Yuan, and G. Hua, “Gated context aggregation network for image dehazing and deraining,” in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2019, pp. 1375–1383.
-  C. Sakaridis, D. Dai, and L. Van Gool, “Semantic foggy scene understanding with synthetic data,” International Journal of Computer Vision, pp. 1–20.
-  R. Fattal, “Single image dehazing,” in ACM SIGGRAPH 2008 Papers, ser. SIGGRAPH ’08. New York, NY, USA: ACM, 2008, pp. 72:1–72:9. [Online]. Available: http://doi.acm.org/10.1145/1399504.1360671
-  ——, “Dehazing using color-lines,” ACM Transactions on Graphics, vol. 34, no. 13. New York, NY, USA: ACM, 2014.
-  Y. Li, R. T. Tan, and M. S. Brown, “Nighttime haze removal with glow and multiple light colors,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 226–234.
-  J.-P. Tarel, N. Hautiere, L. Caraffa, A. Cord, H. Halmaoui, and D. Gruyer, “Vision enhancement in homogeneous and heterogeneous fog,” IEEE Intelligent Transportation Systems Magazine, vol. 4, no. 2, pp. 6–20, 2012.
-  S.-C. Huang, B.-H. Chen, and W.-J. Wang, “Visibility restoration of single hazy images captured in real-world weather conditions,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 10, pp. 1814–1824, 2014.
-  R. T. Tan, “Visibility in bad weather from a single image,” in CVPR. IEEE, 2008, pp. 1–8.
-  L. Kratz and K. Nishino, “Factorizing scene albedo and depth from a single foggy image,” in ICCV. IEEE, 2009, pp. 1701–1708.
-  G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in ICCV, 2013, pp. 617–624.
-  J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015, pp. 3431–3440.
-  D. Eigen and R. Fergus, “Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture,” in ICCV, 2015, pp. 2650–2658.
-  X. Fu, J. Huang, X. Ding, Y. Liao, and J. Paisley, “Clearing the skies: A deep network architecture for single-image rain removal,” IEEE Transactions on Image Processing, vol. 26, no. 6, pp. 2944–2956, 2017.
-  I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in NIPS, 2014, pp. 2672–2680.
-  X. Peng, Z. Tang, F. Yang, R. Feris, and D. Metaxas, “Jointly optimize data augmentation and network training: Adversarial data augmentation in human pose estimation,” arXiv preprint arXiv:1805.09707, 2018.
-  Z. Zhang, L. Yang, and Y. Zheng, “Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network,” arXiv preprint arXiv:1802.09655, 2018.
-  H. Zhang, B. S. Riggan, S. Hu, N. J. Short, and V. M. Patel, “Synthesis of high-quality visible faces from polarimetric thermal faces using generative adversarial networks,” International Journal of Computer Vision, pp. 1–18.
-  R. Natsume, S. Saito, Z. Huang, W. Chen, C. Ma, H. Li, and S. Morishima, “Siclope: Silhouette-based clothed people,” CoRR, vol. abs/1901.00049, 2019. [Online]. Available: http://arxiv.org/abs/1901.00049
-  C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv preprint arXiv:1609.04802, 2016.
-  J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, “Generative image inpainting with contextual attention,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5505–5514.
-  ——, “Free-form image inpainting with gated convolution,” arXiv preprint arXiv:1806.03589, 2018.
-  Y. Zhao, W. Chen, J. Xing, X. Li, Z. Bessinger, F. Liu, W. Zuo, and R. Yang, “Identity preserving face completion for large ocular region occlusion,” arXiv preprint arXiv:1807.08772, 2018.
-  H. Zhang, V. Sindagi, and V. M. Patel, “Image de-raining using a conditional generative adversarial network,” arXiv preprint arXiv:1701.05957, 2017.
-  A. Dosovitskiy and T. Brox, “Generating images with perceptual similarity metrics based on deep networks,” in Advances in Neural Information Processing Systems, 2016, pp. 658–666.
-  J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in European Conference on Computer Vision. Springer, 2016, pp. 694–711.
-  F. Luan, S. Paris, E. Shechtman, and K. Bala, “Deep photo style transfer,” in CVPR, 2017.
-  Y. Zhang, K. Li, K. Li, B. Zhong, and Y. Fu, “Residual non-local attention networks for image restoration,” CoRR, vol. abs/1903.10082, 2019. [Online]. Available: http://arxiv.org/abs/1903.10082
-  C. Li and M. Wand, “Precomputed real-time texture synthesis with markovian generative adversarial networks,” in ECCV, 2016, pp. 702–716.
-  L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” arXiv preprint arXiv:1508.06576, 2015.
-  W. Xiong, J. Yu, Z. Lin, J. Yang, X. Lu, C. Barnes, and J. Luo, “Foreground-aware image inpainting,” CoRR, vol. abs/1901.05945, 2019. [Online]. Available: http://arxiv.org/abs/1901.05945
-  K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
-  M. Mirza and S. Osindero, “Conditional generative adversarial nets,” arXiv preprint arXiv:1411.1784, 2014.
-  P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” arxiv, 2016.
-  O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2015, pp. 234–241.
-  X.-J. Mao, C. Shen, and Y.-B. Yang, “Image denoising using very deep fully convolutional encoder-decoder networks with symmetric skip connections,” arXiv preprint arXiv:1603.09056, 2016.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
-  ——, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1026–1034.
-  S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in Proceedings of The 32nd International Conference on Machine Learning, 2015, pp. 448–456.
-  C. Yan, H. Xie, J. Chen, Z. Zha, X. Hao, Y. Zhang, and Q. Dai, “A fast uyghur text detector for complex background images,” IEEE Transactions on Multimedia, vol. 20, no. 12, pp. 3389–3398, 2018.
-  B. Ummenhofer, H. Zhou, J. Uhrig, N. Mayer, E. Ilg, A. Dosovitskiy, and T. Brox, “Demon: Depth and motion network for learning monocular stereo,” CVPR, 2017.
-  J. Li, R. Klein, and A. Yao, “A two-streamed network for estimating fine-scaled depth maps from single rgb images,” in The IEEE International Conference on Computer Vision (ICCV), Oct 2017.
-  Y. Li, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep joint image filtering,” in European Conference on Computer Vision. Springer, 2016, pp. 154–169.
-  X. Shen, C. Zhou, L. Xu, and J. Jia, “Mutual-structure for joint filtering,” in ICCV, 2015, pp. 3406–3414.
-  D. Ferstl, C. Reinbacher, R. Ranftl, M. Rüther, and H. Bischof, “Image guided depth upsampling using anisotropic total generalized variation,” in ICCV, 2013, pp. 993–1000.
-  C. Yan, H. Xie, D. Yang, J. Yin, Y. Zhang, and Q. Dai, “Supervised hash coding with deep neural network for environment perception of intelligent vehicles,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 284–295, 2018.
-  C. Yan, H. Xie, S. Liu, J. Yin, Y. Zhang, and Q. Dai, “Effective uyghur language text detection in complex background images for traffic prompt identification,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 220–229, 2018.
-  P. K. Nathan Silberman, Derek Hoiem and R. Fergus, “Indoor segmentation and support inference from rgbd images,” in ECCV, 2012.
-  R. Collobert, K. Kavukcuoglu, and C. Farabet, “Torch7: A matlab-like environment for machine learning,” in BigLearn, NIPS Workshop, 2011.
-  D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014.
-  Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3522–3533, 2015.
-  D. Berman, T. Treibitz, and S. Avidan, “Air-light estimation using haze-lines,” in Computational Photography (ICCP), 2017 IEEE International Conference on. IEEE, 2017, pp. 1–9.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE TIP, vol. 13, no. 4, pp. 600–612, 2004.