Learning Aggregated Transmission Propagation Networks for Haze Removal and Beyond

11/18/2017 · Risheng Liu et al. · Dalian University of Technology

Single image dehazing is an important low-level vision task with many applications. Early research investigated various visual priors to address this problem. However, such priors may fail when their assumptions do not hold on specific images. Recent deep networks also achieve relatively good performance on this task. Unfortunately, because they disregard the rich physical rules of haze, large amounts of data are required for their training. More importantly, they may still fail when the testing images contain completely different haze distributions. By considering the collaboration of these two perspectives, this paper designs a novel residual architecture that aggregates both prior (i.e., domain knowledge) and data (i.e., haze distribution) information to propagate transmissions for scene radiance estimation. We further present a variational energy based perspective to investigate the intrinsic propagation behavior of our aggregated deep model. In this way, we bridge the gap between prior driven models and data driven networks, leveraging the advantages while avoiding the limitations of previous dehazing approaches. A lightweight learning framework is proposed to train our propagation network. Finally, by introducing a task-aware image decomposition formulation with a flexible optimization scheme, we extend the proposed model to more challenging vision tasks, such as underwater image enhancement and single image rain removal. Experiments on both synthetic and real-world images demonstrate the effectiveness and efficiency of the proposed framework.


I Introduction

Fig. 1: Single image dehazing results by different propagation networks on a real-world example with dense hazes (panels: input, ours, Cai et al., Ren et al.). The zoomed-in comparisons of the red rectangle regions are shown in the supplemental materials.

Haze is a slight obscuration of the lower atmosphere, typically caused by fine suspended particles. Due to this atmospheric phenomenon, images captured by a camera fade and the contrast of observed objects is reduced. Mathematically, a hazy image can be formulated as a per-pixel convex combination of the haze-free image and the global atmospheric light. Image dehazing is the task of recovering haze-free images from hazy observations, and it has wide applications in the computer vision community (e.g., outdoor surveillance, object detection, remote sensing and underwater photography, to name just a few).

Image degradation caused by haze increases with the distance from the camera, since the scene radiance decreases while the contribution of the atmospheric light increases. Haze removal therefore relies mainly on scene depth information. Unfortunately, estimating per-pixel depth from an image is itself a challenging task. Early approaches [1] often utilize multiple images of the same scene to deal with this problem. However, in real-world scenarios there may exist only a single image of a specific scene. Thus recent literature tends to address the more challenging single image dehazing task, for which even less scene information is available [2].

I-A Related Works

We first survey some related works on single image dehazing, which can be roughly divided into two categories, i.e., prior driven models and data driven networks.

Prior Driven Models: In past years, various priors (also known as domain knowledge based cues) have been investigated to capture deterministic (e.g., physical rules and variational energies) or statistical (e.g., distributions) properties of hazy images. For instance, Fattal [3] estimated depth maps and haze-free images based on the assumption that surface shading and transmission should be locally uncorrelated. He et al. [4] observed that most local patches in haze-free images contain extremely low intensities in at least one color channel, so the thickness of the haze can be estimated by the so-called dark channel prior (DCP). The works in [5, 6] improved DCP by replacing the minimum operator with the median operator. In [7], Meng et al. reformulated DCP as a boundary constraint and incorporated a contextual regularization to model the transmissions of foggy images. Fan et al. [8] proposed a two-layer Gaussian process regression model to refine DCP. Lai et al. [9] proposed two scene priors to estimate the optimal transmission map. By creating a linear formulation for the scene depth under a color attenuation prior, Zhu et al. [10] designed a supervised haze removal model. Berman et al. [11] presented the concept of the haze-line and adopted it to estimate the transmission factors. The works in [12, 13] solve variational models to suppress artifacts in hazy images.

All of the above conventional prior driven dehazing methods heavily rely on certain assumptions about hazy images, so they may fail on specific data when these cues are not valid. For example, the method of Fattal performs well on thinly hazed images, but it cannot successfully recover images with thick haze since it needs rich color information and color differences among pixels. DCP may break down in bright areas of the scene (e.g., sky regions). As for the method of Berman et al., it may not perform well when the airlight is significantly brighter than the scene. And though they suppress artifacts, variational models often need to perform time-consuming iterations to obtain (locally) optimal solutions.

Data Driven Networks: Very recently, training heuristically constructed deep convolutional neural networks (CNNs) on large-scale datasets has delivered record-breaking performance in many vision and recognition tasks [14, 15]. Some existing methods suggest that CNNs can also benefit single image dehazing and are effective at handling complex scenes through accurate depth information estimation. Cai et al. [16] proposed a trainable end-to-end CNN, which takes a hazy image as input and outputs the corresponding transmission map, to recover the haze-free image. Ren et al. [17] further introduced multi-scale CNNs for single image dehazing. They first used a coarse-scale network to roughly estimate the transmission structure and then refined it by another fine-scale network.

Although relatively good results have been achieved by existing deep networks, two major issues of these fully data driven models remain to be addressed. Firstly, the domain knowledge of hazy images, which has been shown to be effective in previous works, is completely discarded by these networks. Secondly, the performance of current network architectures is tightly dependent on the quality and quantity of the training data, so large amounts of data are required for network training. For example, the training sets required in [16] and [17] are both on the order of tens of thousands of images. It can also be seen that even with such amounts of training images, standard deep models may still fail on images with different haze distributions due to their fully-training-data-dependent nature.

I-B Contributions

As prior driven dehazing models are mainly derived from particular visual cues, they may fail when their assumptions are not valid on specific images. On the other hand, recent deep networks tend to learn dehazing processes fully from training data, so they completely discard the physical principles of haze. To mitigate these issues, we propose a novel propagation formulation, named data-and-prior-aggregated transmission network (DPATN), to leverage the advantages while avoiding the limitations of domain knowledge and training data for single image dehazing. Specifically, we first build an aggregated residual architecture to formulate transmission propagation. The intrinsic propagation behavior of DPATN is then investigated from an energy-based-modeling perspective [18]. A lightweight learning framework is also established for network training. Figure 1 shows the performance of DPATN together with two recently proposed CNNs on a challenging real-world example. Finally, we also extend DPATN to more challenging tasks, such as underwater image enhancement and single image rain removal. In summary, our DPATN framework has at least four advantages over existing dehazing architectures.

  • Specific domain knowledge of the haze removal task is explicitly incorporated into the transmission propagation in DPATN, yielding more accurate transmission estimations and realistic dehazing results.

  • DPATN needs far fewer training images than existing dehazing networks (as shown in Section V-C, dozens of images are enough for training DPATN, while existing dehazing networks, e.g., those of Cai et al. and Ren et al., all require tens of thousands of training images). Furthermore, in contrast to conventional feed-forward networks, DPATN is able to directly propagate transmissions through the network in both the forward and backward passes, leading to nice propagation properties.

  • Different from most current deep formulations, which build their architectures in heuristic ways, we provide a novel manner to investigate the intrinsic propagation behaviors of our architectures following an energy minimization perspective. These insights can also be extended to guide other complex vision tasks.

  • To address more challenging image enhancement tasks, such as underwater image enhancement and single image rain removal, we extend DPATN with a task-aware separation formulation and prove the global convergence property of our final propagation.

II The Proposed Framework

We start by designing the data-and-prior-aggregated transmission network (DPATN), which bridges the gap between domain knowledge and training data for haze removal.

II-A Atmospheric Scattering Model

To formulate hazy images, we first describe the following widely known atmospheric scattering model [19]:

$$\mathbf{I}(x) = \mathbf{J}(x)\,t(x) + \mathbf{A}\,(1 - t(x)), \quad t(x) = e^{-\beta d(x)}, \tag{1}$$

where $\mathbf{A}$ is the global atmospheric light, $\mathbf{I}(x)$, $\mathbf{J}(x)$, $d(x)$ and $t(x)$ are respectively the observed hazy image, the latent scene radiance, the scene depth and the medium transmission at pixel location $x$, and $\beta$ is the medium extinction coefficient. The value of $t(x)$ lies in the range $[0, 1]$ and describes the portion of light that is not scattered and reaches the camera sensor. The second equality further indicates that $t(x)$ is exponentially attenuated with the scene depth. The main goal of image dehazing is to recover $\mathbf{J}$ from $\mathbf{I}$. Following Eq. (1), estimating an accurate transmission map plays the core role in this task. However, since multiple solutions exist for a single hazy image, the problem is highly ill-posed. To facilitate the presentation, hereafter we denote the discrete form of $t$ as $\mathbf{t} \in \mathbb{R}^{N}$, where $N$ is the number of pixels in the hazy image.
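For concreteness, here is a minimal NumPy sketch of the forward model in Eq. (1), rendering a hazy image from a clean image and its depth map; the default parameter values are illustrative only:

```python
import numpy as np

def synthesize_haze(J, d, A=0.9, beta=1.0):
    """Render a hazy image via Eq. (1): I = J*t + A*(1 - t), t = exp(-beta*d).
    J is an (H, W, 3) clean image in [0, 1], d an (H, W) depth map; the
    default A and beta are illustrative values only."""
    t = np.exp(-beta * d)                 # transmission decays exponentially with depth
    t3 = t[..., None]                     # broadcast over the color channels
    return J * t3 + A * (1.0 - t3), t     # hazy image and its transmission map
```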

Fig. 2: The overview of transmission propagation in DPATN (along the arrow direction). The red, pink and blue dashed rectangles denote the prior driven (i.e., $\mathcal{F}_{\mathcal{P}}$), data driven (i.e., $\mathcal{F}_{\mathcal{D}}$) and aggregated (i.e., $\mathcal{F}$) architectures, respectively. The right-most column shows the estimated transmission map and the restored image of DPATN.

II-B Aggregated Architecture for Transmission Propagation

We first design the general propagation scheme of the transmission as the following residual architecture:

$$\mathbf{t}^{l+1} = \mathbf{t}^{l} + \mathcal{F}(\mathbf{t}^{l}; \boldsymbol{\vartheta}^{l}), \quad l = 0, \ldots, L-1, \tag{2}$$

where $\mathcal{F}$ is a residual function with parameters $\boldsymbol{\vartheta}^{l}$ and $\mathbf{t}^{l+1}$ is the output of the $l$-th module with an identity skip connection. Using this basic model, our goal reduces to learning a mapping $\mathcal{F}$ that formulates the propagation residual of transmissions. Different from strictly sequential networks, the module in Eq. (2) introduces identity skip-connections that bypass the residual layers, allowing transmissions to flow from any layer directly to subsequent layers.

As for the specific form of $\mathcal{F}$, we define it by aggregating a data driven submodule (denoted as $\mathcal{F}_{\mathcal{D}}$) and a prior driven submodule (denoted as $\mathcal{F}_{\mathcal{P}}$), i.e.,

$$\mathcal{F}(\mathbf{t}; \boldsymbol{\vartheta}) = \mathcal{F}_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}) + \lambda\, \mathcal{F}_{\mathcal{P}}(\mathbf{t}). \tag{3}$$

Here $\boldsymbol{\vartheta} = \{\boldsymbol{\vartheta}_{\mathcal{D}}, \lambda\}$ are learnable parameters, in which $\boldsymbol{\vartheta}_{\mathcal{D}}$ denotes the propagation parameters of $\mathcal{F}_{\mathcal{D}}$ and $\lambda$ is a weight parameter to penalize $\mathcal{F}_{\mathcal{P}}$. In the following, we deduce the particular formulations of these architectures based on training data and physical principles, respectively.

Data Submodule: We first design the data driven submodule as a cascade of two convolutions and one activation to fit the transmission propagation:

$$\mathcal{F}_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}) = \sum_{k=1}^{K} \bar{\mathbf{w}}_{k} * \phi_{k}(\mathbf{w}_{k} * \mathbf{t}), \tag{4}$$

where $\phi_{k}$ denote nonlinear activations, $*$ denotes the convolution operator and $(\mathbf{w}_{k}, \bar{\mathbf{w}}_{k})$ are pairs of convolutional filters applied before and after each activation. Clearly, propagating only with $\mathcal{F}_{\mathcal{D}}$ would discard the rich prior information of hazy images. Thus it is necessary to further utilize our domain knowledge (e.g., Eq. (1)) to steer this transmission propagation toward the desired stable state.
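The following sketch shows how one propagation stage of Eqs. (2)-(4) could be realized. Treating the prior term as a precomputed transmission estimate fed through the weight lambda is our reading of the aggregation in Eq. (3), not the authors' reference implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def data_submodule(t, filter_pairs, activations):
    """Sketch of F_D in Eq. (4): convolve, apply the pointwise nonlinearity,
    convolve again, and sum over the K filter pairs."""
    out = np.zeros_like(t)
    for (w_k, w_bar_k), phi_k in zip(filter_pairs, activations):
        out += convolve(phi_k(convolve(t, w_k)), w_bar_k)
    return out

def propagation_step(t, filter_pairs, activations, t_prior, lam):
    """One residual stage of Eqs. (2)-(3): identity skip connection plus the
    aggregated residual F = F_D + lambda * F_P (here F_P is assumed to be a
    precomputed prior transmission estimate)."""
    return t + data_submodule(t, filter_pairs, activations) + lam * t_prior
```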

Prior Submodule: We now discuss how to design our prior submodule to incorporate visual cues that guide the data driven transmission propagation. Specifically, by reformulating Eq. (1), we have that $t(x)$ is actually the ratio of two line segments [4] at each pixel location, i.e.,

$$t(x) = \frac{I^{c}(x) - A^{c}}{J^{c}(x) - A^{c}} = \frac{\tilde{I}^{c}(x)}{J^{c}(x) - A^{c}}, \tag{5}$$

where $c$ is the color channel index and $\tilde{\mathbf{I}} = \mathbf{I} - \mathbf{A}$ is the translated observation such that the airlight is at the origin. Since the pixel values of the latent scene radiance should be bounded, it is natural to enforce the following constraints on $\mathbf{J}$: $\alpha_{0} I_{\min} \leq J^{c}(x) \leq \alpha_{1} I_{\max}$, where $\alpha_{0}, \alpha_{1}$ are two scaling parameters and $I_{\max}$ and $I_{\min}$ denote the maximum and minimum value of $\mathbf{I}$, respectively. Then by combining this inequality with Eq. (5), we can obtain the prior module as

$$\mathcal{F}_{\mathcal{P}}(x) = \mathcal{P}_{[0,1]}\left(\max_{c} \max\left(\frac{A^{c} - I^{c}(x)}{A^{c} - \alpha_{0} I_{\min}},\; \frac{I^{c}(x) - A^{c}}{\alpha_{1} I_{\max} - A^{c}}\right)\right), \tag{6}$$

where $\mathcal{P}_{[0,1]}$ denotes the projection onto the range $[0, 1]$. By setting the lower bound $\alpha_{0} I_{\min}$ to zero in Eq. (6), we can observe that the DCP formulation is just a special case of $\mathcal{F}_{\mathcal{P}}$. However, the original DCP is known to provide inexact transmission estimates in specific regions (e.g., bright sky and the headlights of cars). Fortunately, as demonstrated in the experimental part, our proposed prior module in Eq. (6) provides an effective manner to avoid incorrect transmission estimations on these challenging hazy images.
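A small sketch of this prior submodule, consistent with the reconstruction of Eq. (6) above; the epsilon guard against vanishing denominators is an implementation assumption:

```python
import numpy as np

def prior_submodule(I, A, a0, a1, eps=1e-6):
    """Sketch of the reconstructed Eq. (6): a boundary-constrained transmission
    estimate projected onto [0, 1]. I: (H, W, 3) hazy image, A: (3,) atmospheric
    light; a0, a1 scale the bounds on the latent radiance."""
    J_lo, J_hi = a0 * I.min(), a1 * I.max()        # bounds on the latent radiance
    lo = (A - I) / np.maximum(A - J_lo, eps)       # lower-bound branch (I below A)
    hi = (I - A) / np.maximum(J_hi - A, eps)       # upper-bound branch (I above A)
    t_hat = np.maximum(lo, hi).max(axis=2)         # tightest bound over channels
    return np.clip(t_hat, 0.0, 1.0)                # projection P_[0,1]
```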

We illustrate the overall propagation pipeline of DPATN in Figure 2. The dehazing results of DPATN with different architectures (i.e., $\mathcal{F}_{\mathcal{P}}$, $\mathcal{F}_{\mathcal{D}}$ and $\mathcal{F}$) are further demonstrated in Figure 3. In this experiment, we also plot results generated by the naive combination of the prior and the median filter (denoted as "P + MF") as our baseline. In the sky region, the prior tends to estimate excessive depth and thus leads to obvious artifacts, while the purely data-based $\mathcal{F}_{\mathcal{D}}$ may result in unclear restoration (e.g., low contrast near the building boundary). The median filter only slightly improves the performance of $\mathcal{F}_{\mathcal{P}}$; it cannot correct the improper scene depth, so severe artifacts still exist in its result. In contrast, our aggregated architecture provides a more accurate transmission map and a much better dehazing result. Moreover, the aggregated $\mathcal{F}$ obtains the highest quantitative score (i.e., PSNR) among all the compared strategies.




Fig. 3: The dehazing results obtained by DPATN with different components: "Prior" (23.25), "Data" (23.13) and their aggregation ("Agg" for short, 26.82), compared with the input, the ground truth and the "Prior + MF" strategy (24.56). Top two rows: the haze removal results with zoomed-in comparisons. Bottom row: the estimated transmission maps. Here "MF" denotes the median filtering operation and the quantitative scores (i.e., PSNR) are reported in brackets.

II-C Lightweight Learning Framework

As our goal is to learn an aggregated residual network that maps a hazy image to its corresponding transmission, we first collect a set of training pairs $\{(\mathbf{I}_{i}, \mathbf{t}_{i}^{gt})\}_{i=1}^{M}$, where $\mathbf{I}_{i}$ is a hazy image and $\mathbf{t}_{i}^{gt}$ is the corresponding ground-truth transmission map. Then we define a quadratic training cost function to measure the difference between the output of DPATN and the ground-truth transmission map as

$$\ell(\mathbf{t}_{i}^{L}; \mathbf{t}_{i}^{gt}) = \frac{1}{2}\left\| \mathbf{t}_{i}^{L} - \mathbf{t}_{i}^{gt} \right\|^{2}. \tag{7}$$

Therefore, the learning process of DPATN can be formulated as a deep network constrained optimization model:

$$\min_{\{\boldsymbol{\vartheta}^{l}\}} \sum_{i=1}^{M} \ell(\mathbf{t}_{i}^{L}; \mathbf{t}_{i}^{gt}), \quad \text{s.t. } \mathbf{t}_{i}^{l+1} = \mathbf{t}_{i}^{l} + \mathcal{F}(\mathbf{t}_{i}^{l}; \boldsymbol{\vartheta}^{l}), \; l = 0, \ldots, L-1. \tag{8}$$

The key is to compute the gradients of the training cost with respect to the parameters $\{\boldsymbol{\vartheta}^{l}\}$. By the chain rule of backpropagation, we have

$$\frac{\partial \ell}{\partial \boldsymbol{\vartheta}^{l}} = \frac{\partial \mathbf{t}^{l+1}}{\partial \boldsymbol{\vartheta}^{l}} \cdot \frac{\partial \ell}{\partial \mathbf{t}^{l+1}}, \quad \frac{\partial \ell}{\partial \mathbf{t}^{l}} = \frac{\partial \mathbf{t}^{l+1}}{\partial \mathbf{t}^{l}} \cdot \frac{\partial \ell}{\partial \mathbf{t}^{l+1}}. \tag{9}$$

In general, one may follow standard training schemes (e.g., [15]) to learn all the network parameters from training data using Eq. (9). However, such a direct strategy may lead to an extremely high computational burden. Therefore, we instead adopt a lightweight learning framework for DPATN. That is, we parameterize the network architectures based on their specific structures to reduce the number of network parameters as follows. We represent each filter as $\mathbf{w}_{k} = \mathcal{B}\mathbf{c}_{k}$, where $\mathcal{B}$ is the DCT basis and $\mathbf{c}_{k}$ is the set of filter coefficients to be learned. As for each transformation $\phi_{k}$, we parameterize it using piecewise linear functions determined by a set of control points, i.e., learnable function values placed at predefined positions uniformly located within an interval. In this way, the number of parameters in DPATN can be significantly reduced. Besides, the differentiability of $\phi_{k}$ required by the analysis in Section III is also guaranteed by such a parameterization. Finally, we follow the suggestions in [20] and utilize the L-BFGS algorithm to optimize Eq. (8).
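To illustrate the parameterization, the sketch below builds filters from an average-discarded 2D DCT basis and evaluates a piecewise-linear activation through its control points; the basis-construction details are our assumptions, not the paper's exact code:

```python
import numpy as np

def dct_basis(size):
    """2D DCT-II atoms for size x size filters, with the constant (DC) atom
    discarded so that learned filters stay zero-mean; this mirrors the
    'average-discarded DCT basis' mentioned in Section V."""
    n = np.arange(size)
    C = np.array([np.cos(np.pi * (n + 0.5) * k / size) for k in range(size)])
    C[0] /= np.sqrt(2.0)
    C *= np.sqrt(2.0 / size)                       # orthonormal 1D DCT-II basis
    atoms = [np.outer(C[i], C[j]) for i in range(size) for j in range(size)]
    return np.stack(atoms[1:])                     # shape (size**2 - 1, size, size)

def make_filter(coeffs, atoms):
    """w_k = sum_m c_m B_m : a filter expressed by its DCT coefficients."""
    return np.tensordot(coeffs, atoms, axes=1)

def piecewise_linear(z, positions, values):
    """phi_k as a piecewise-linear function through control points: positions
    are fixed and uniform, the values at them are the learnable parameters."""
    return np.interp(z, positions, values)
```

For a 5x5 filter, for example, the basis has 24 atoms after dropping the DC component, so each filter is described by 24 coefficients rather than 25 free weights.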

II-D Latent Scene Radiance Recovery

Now we are ready to discuss how to recover the scene radiance $\mathbf{J}$ from the hazy observation $\mathbf{I}$. In addition to the medium transmission $\mathbf{t}$, we need to estimate the global atmospheric light $\mathbf{A}$ in Eq. (1).

The most commonly used strategy [4, 17] is to select several of the brightest pixels in the dark channel; among these pixels, the one with the highest intensity in the image is selected as the atmospheric light. Here we improve this strategy based on a filtering process [7], i.e., $A^{c} = \max_{x} \big( \min_{y \in \Omega(x)} I^{c}(y) \big)$, where the "max" operation outputs the maximum value and the inner minimum is a sliding-window minimum filter over the window $\Omega(x)$. With the estimated $\mathbf{t}$ and $\mathbf{A}$, we can recover the haze-free image using Eq. (1). To avoid division by zero, $\mathbf{J}$ is calculated as

$$\mathbf{J}(x) = \frac{\mathbf{I}(x) - \mathbf{A}}{\max(t(x), \varepsilon)} + \mathbf{A}, \tag{10}$$

where $\varepsilon$ is a small constant.
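Putting the pieces together, a minimal sketch of the recovery stage (the window size `win` and the constant `eps` are assumed values):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def estimate_atmospheric_light(I, win=15):
    """A^c = max over x of the sliding-window minimum filter of I^c: the
    filtering based estimate in the spirit of [7]."""
    return np.array([minimum_filter(I[..., c], size=win).max() for c in range(3)])

def recover_radiance(I, t, A, eps=0.05):
    """Invert Eq. (1) with the clipped transmission of Eq. (10):
    J = (I - A) / max(t, eps) + A; eps avoids division by zero."""
    t_clipped = np.maximum(t, eps)[..., None]      # broadcast over color channels
    return (I - A) / t_clipped + A
```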

III Energy-Based Propagation Investigation

The concept of "energy" in LeCun's energy-based model (EBM) [18] refers to a parameterized function that maps points in the input space to a scalar such that desired points are assigned low energies, while incorrect ones are assigned high energies. It has been demonstrated that various learning frameworks (e.g., probabilistic models [21], partial differential equation models [22] and deep networks [23, 24, 25]) can be reformulated in the light of "energy minimization".

In this section, we utilize the EBM perspective to investigate the propagation behavior of our aggregated transmission network. That is, we aim to deduce a variational formulation from the developed DPATN, thus providing a more straightforward way to understand the insights behind DPATN. Indeed, different from the standard methodology, which deduces a practical calculation scheme from a designed variational energy, here we inversely establish the underlying energy model from our DPATN propagation scheme.

Firstly, based on the recursive architecture in Eq. (2), we can explicitly build the connection between the input and output of the network (with $L$ residual modules) as follows:

$$\mathbf{t}^{L} = \mathbf{t}^{0} + \sum_{l=0}^{L-1} \mathcal{F}(\mathbf{t}^{l}; \boldsymbol{\vartheta}^{l}), \tag{11}$$

where $\mathbf{t}^{1}, \ldots, \mathbf{t}^{L-1}$ are intermediate transmissions. Suppose there exist derivable functions $\rho_{k}$ satisfying (as presented in Section II-C, this property can be guaranteed by a specifically designed parameterization scheme)

$$\rho_{k}'(z) = -\phi_{k}(z) \tag{12}$$

for each $k$; then we can obtain an energy

$$E_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}) = \sum_{k=1}^{K} \sum_{x} \rho_{k}\big((\mathbf{w}_{k} * \mathbf{t})(x)\big). \tag{13}$$

Thus it is easy to see that the data submodule is just the negative gradient direction of $E_{\mathcal{D}}$, i.e.,

$$\mathcal{F}_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}) = -\nabla_{\mathbf{t}} E_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}), \tag{14}$$

in which we implicitly enforce that $\bar{\mathbf{w}}_{k}$ is obtained by rotating $\mathbf{w}_{k}$ by 180 degrees. So in summary, DPATN actually builds a learnable propagation (i.e., a gradient descent trajectory) with initial status $\mathbf{t}^{0}$ to minimize the following transmission energy:

$$E(\mathbf{t}; \{\boldsymbol{\vartheta}^{l}\}) = E_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}) + \lambda E_{\mathcal{P}}(\mathbf{t}), \tag{15}$$

where $E_{\mathcal{P}}$ denotes the energy induced by the prior submodule and we explicitly consider $\{\boldsymbol{\vartheta}^{l}\}$ as parameters of $E$ to emphasize the recursive nature of this energy.
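To make the negative-gradient relation in Eq. (14) concrete, applying the chain rule to the energy in Eq. (13) under the condition of Eq. (12) gives

$$\nabla_{\mathbf{t}} E_{\mathcal{D}}(\mathbf{t}) = \sum_{k=1}^{K} \bar{\mathbf{w}}_{k} * \rho_{k}'(\mathbf{w}_{k} * \mathbf{t}) = -\sum_{k=1}^{K} \bar{\mathbf{w}}_{k} * \phi_{k}(\mathbf{w}_{k} * \mathbf{t}) = -\mathcal{F}_{\mathcal{D}}(\mathbf{t}; \boldsymbol{\vartheta}_{\mathcal{D}}),$$

where the 180-degree rotated filter $\bar{\mathbf{w}}_{k}$ arises as the adjoint of the convolution with $\mathbf{w}_{k}$, which is exactly why this rotation is enforced in Eq. (14).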

By investigating the derived form of $E$ in Eq. (13), we can link the energy minimization model in Eq. (15) to a maximum likelihood estimation problem with the following probability distribution:

$$p(\mathbf{t}; \{\boldsymbol{\vartheta}^{l}\}) \propto \exp\big(-E(\mathbf{t}; \{\boldsymbol{\vartheta}^{l}\})\big). \tag{16}$$

Therefore, we can also interpret DPATN from a Bayesian viewpoint: the behavior of DPATN can be understood as a prior guided propagation with a learnable Gibbs distribution [26] on each intermediate and output transmission.

In summary, DPATN provides a new way to aggregate the descriptive power obtained from data dependent deep architectures with the visual priors revealed by physical rules, thus bridging the gap between domain knowledge and training data in the dehazing task.

Remarks: As a nontrivial byproduct, the above analysis may also be helpful for investigating insights and deducing new architectures for other vision tasks. For example, the residual network (ResNet) [15] has achieved great success in high-level recognition/classification tasks. Though some efforts from the experimental point of view have been made [27, 28], it is still difficult to interpret the intrinsic mechanism of ResNet. We want to point out that the Gibbs distribution in Eq. (16) can be directly utilized to interpret the propagation behaviors of ResNet, because the data driven submodule in DPATN shares the same structure as the basic unit of ResNet. Furthermore, following our methodology, it is also possible to incorporate additional human perspectives to improve the recognition/classification performance of standard ResNet.

IV Extensions with Task-Aware Image Separation

To formulate more challenging image enhancement problems, we extend the proposed propagation network using a generalized atmospheric scattering model, i.e.,

$$\mathbf{I}(x) = \mathbf{J}(x)\,t(x) + \mathbf{A}\,(1 - t(x)) + \mathbf{H}(x), \tag{17}$$

where $\mathbf{H}$ is an additional problem-dependent image layer. We actually combine three different terms, i.e., the direct transmission $\mathbf{J}(x)t(x)$, the airlight $\mathbf{A}(1 - t(x))$ and the task-related layer $\mathbf{H}(x)$, and assume that the incoming light intensity at the camera is linearly proportional to the camera's pixel values. Similar to Eq. (1), within this extended framework the transmission map still plays the main role in eliminating the haze effect in the image. Meanwhile, the additional term $\mathbf{H}$ will be used to incorporate image priors into the process.

Iv-a Task-Aware Image Layer Separation

In this part, we develop a new image separation formulation together with a flexible plug-and-play optimization scheme to extend DPATN for more challenging image enhancement tasks, such as underwater image enhancement and singe image rain removal. We first recognize that it is necessary to explicitly formulate the unknown corruptions (e.g. color-shift or rain layer) in these problems. Therefore, we consider as the corruption layer and introduce a latent observation variable in Eq. (17). In this way, these problems can be addressed by first estimating the latent image layer from corrupted observation and then calculating intrinsic transmission from .

Specifically, we consider the following task-aware image layer separation problem (to remove the effects of $\mathbf{H}$):

$$\min_{\mathbf{Y}, \mathbf{H}} \Psi_{\mathbf{Y}}(\mathbf{Y}) + \Psi_{\mathbf{H}}(\mathbf{H}), \quad \text{s.t. } \mathbf{I} = \mathbf{Y} + \mathbf{H}, \; \mathbf{0} \leq \mathbf{Y} \leq \mathbf{1}, \tag{18}$$

where $\Psi_{\mathbf{Y}}$ and $\Psi_{\mathbf{H}}$ are the regularization terms on $\mathbf{Y}$ and $\mathbf{H}$, respectively. Rather than directly solving Eq. (18) using standard solvers, this work provides a flexible plug-and-play scheme to incorporate different strategies to optimize the general image layer separation model for particular problems. Specifically, we first introduce a half-quadratic formulation [29] with two auxiliary variables $\mathbf{U}$ and $\mathbf{V}$ into Eq. (18):

$$\min_{\mathbf{Y}, \mathbf{H}, \mathbf{U}, \mathbf{V}} Q_{\mathbf{Y}}(\mathbf{Y}, \mathbf{U}) + Q_{\mathbf{H}}(\mathbf{H}, \mathbf{V}), \quad \text{s.t. } \mathbf{I} = \mathbf{Y} + \mathbf{H}. \tag{19}$$

The half-quadratic penalized priors $Q_{\mathbf{Y}}$ and $Q_{\mathbf{H}}$ are defined as

$$Q_{\mathbf{Y}}(\mathbf{Y}, \mathbf{U}) = \Psi_{\mathbf{Y}}(\mathbf{U}) + \frac{\mu_{\mathbf{Y}}}{2}\|\mathbf{Y} - \mathbf{U}\|^{2}, \quad Q_{\mathbf{H}}(\mathbf{H}, \mathbf{V}) = \Psi_{\mathbf{H}}(\mathbf{V}) + \frac{\mu_{\mathbf{H}}}{2}\|\mathbf{H} - \mathbf{V}\|^{2}, \tag{20}$$

where $\mu_{\mathbf{Y}}, \mu_{\mathbf{H}}$ are penalty parameters. Then our task-aware image separation can be summarized as the alternating iterations

$$\mathbf{U}^{k+1} = \mathrm{prox}_{\Psi_{\mathbf{Y}}}(\mathbf{Y}^{k}; \mu_{\mathbf{Y}}), \quad \mathbf{V}^{k+1} = \mathrm{prox}_{\Psi_{\mathbf{H}}}(\mathbf{H}^{k}; \mu_{\mathbf{H}}), \quad (\mathbf{Y}^{k+1}, \mathbf{H}^{k+1}) = \arg\min_{\mathbf{I} = \mathbf{Y} + \mathbf{H}} \frac{\mu_{\mathbf{Y}}}{2}\|\mathbf{Y} - \mathbf{U}^{k+1}\|^{2} + \frac{\mu_{\mathbf{H}}}{2}\|\mathbf{H} - \mathbf{V}^{k+1}\|^{2}. \tag{21}$$

We will demonstrate in the following section that the proximal operators $\mathrm{prox}_{\Psi_{\mathbf{Y}}}$ and $\mathrm{prox}_{\Psi_{\mathbf{H}}}$ are related to specific tasks. Notice that we also composite a normalization process provided in [30] with these two operations to fit the general bound constraints in Eq. (18). To end this part, we give a minimal sketch of this alternating scheme below and then provide a theorem claiming that the proposed iterations always converge to a fixed-point.
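The following Python sketch illustrates one way to realize the plug-and-play iterations of Eq. (21). The closed-form (Y, H) update follows from the quadratic coupling under I = Y + H, while the penalty-increase ratio and the concrete prox operators are assumptions (they are task-specific, as discussed above):

```python
import numpy as np

def separate_layers(I, prox_Y, prox_H, mu_Y=1.0, mu_H=1.0, rho=1.5, iters=20):
    """Plug-and-play sketch of Eq. (21). prox_Y / prox_H are task-specific
    operators (e.g., an edge-preserving smoother for Y, a GMM-based step for H);
    the closed-form (Y, H) update solves the quadratic coupling under I = Y + H."""
    Y, H = I.copy(), np.zeros_like(I)
    for _ in range(iters):
        U = prox_Y(Y, mu_Y)                        # auxiliary update for the latent layer
        V = prox_H(H, mu_H)                        # auxiliary update for the corruption layer
        Y = (mu_Y * U + mu_H * (I - V)) / (mu_Y + mu_H)
        Y = np.clip(Y, 0.0, 1.0)                   # normalization / bound constraint [30]
        H = I - Y                                  # enforce the equality constraint
        mu_Y *= rho                                # increase the penalties each iteration
        mu_H *= rho
    return Y, H
```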

Theorem 1.

(Fixed-Point Convergence) The proposed propagation of task-aware image layer separation is globally convergent. (Here, "globally convergent" means that the whole sequence generated by Eq. (21), not merely some subsequence, converges; this notion is fundamental and widely used in the non-convex optimization community, see [31] for example.) That is, the sequence generated by Eq. (21) is a Cauchy sequence and thus converges globally to a fixed-point.

Proof.

It is easy to check that the constraint $\mathbf{0} \leq \mathbf{Y} \leq \mathbf{1}$ actually enforces a boundedness assumption on the operations $\mathrm{prox}_{\Psi_{\mathbf{Y}}}$ and $\mathrm{prox}_{\Psi_{\mathbf{H}}}$. So we have the following two inequalities

$$\|\mathbf{U}^{k+1} - \mathbf{Y}^{k}\| \leq c_{\mathbf{U}}, \quad \|\mathbf{V}^{k+1} - \mathbf{H}^{k}\| \leq c_{\mathbf{V}}, \tag{22}$$

where $c_{\mathbf{U}}, c_{\mathbf{V}}$ are two constants. By the optimality conditions of Eq. (19) (together with the increasing penalty parameters), we further have that

$$\|\mathbf{Y}^{k+1} - \mathbf{Y}^{k}\| \leq C \eta^{k}, \tag{23}$$

where $\eta \in (0, 1)$ and $C$ is a constant. So it can be derived that

$$\sum_{k=0}^{K} \|\mathbf{Y}^{k+1} - \mathbf{Y}^{k}\| \leq C \sum_{k=0}^{K} \eta^{k} \leq \frac{C}{1 - \eta}, \tag{24}$$

$$\text{and hence} \quad \sum_{k=0}^{\infty} \|\mathbf{Y}^{k+1} - \mathbf{Y}^{k}\| < \infty. \tag{25}$$

Using the same methodology as in Eqs. (24)-(25), we also have the finite length property for $\{\mathbf{U}^{k}\}$, $\{\mathbf{V}^{k}\}$ and $\{\mathbf{H}^{k}\}$, i.e.,

$$\sum_{k=0}^{\infty} \|\mathbf{U}^{k+1} - \mathbf{U}^{k}\| < \infty, \quad \sum_{k=0}^{\infty} \|\mathbf{V}^{k+1} - \mathbf{V}^{k}\| < \infty, \quad \sum_{k=0}^{\infty} \|\mathbf{H}^{k+1} - \mathbf{H}^{k}\| < \infty. \tag{26}$$

Thus $\|\mathbf{Y}^{k+1} - \mathbf{Y}^{k}\| \to 0$, $\|\mathbf{U}^{k+1} - \mathbf{U}^{k}\| \to 0$, $\|\mathbf{V}^{k+1} - \mathbf{V}^{k}\| \to 0$ and $\|\mathbf{H}^{k+1} - \mathbf{H}^{k}\| \to 0$ as $k \to \infty$, and each sequence is a Cauchy sequence, thus globally converging to a fixed-point, which concludes the proof. ∎

In the following, we demonstrate how to apply the proposed image separation based extension of DPATN for more challenging image enhancement tasks, such as underwater image enhancement and single image rain removal.

IV-B Underwater Image Enhancement

As shown in [32], we can directly use the background light to approximate the true in-scattering term in the full radiative transport equation and thus obtain the following underwater imaging model:

$$I^{c}(x) = J^{c}(x)\,t^{c}(x) + B^{c}\,(1 - t^{c}(x)), \quad c \in \{r, g, b\}, \tag{27}$$

where $B^{c}$ is the homogeneous background light of each channel. Notice that all the variables in Eq. (27) suffer from the effects of both light scattering and color change caused by the wavelength dependence of light, and thus should be considered as functions of both the wavelength and the object-camera distance. The above underwater imaging model shares a similar formulation with the atmospheric scattering model in Eq. (1) in each wavelength domain, so it is possible to apply DPATN to enhance the quality of degraded underwater images.

Both light scattering and color change are sources of distortion in underwater photography. So, different from the standard dehazing task, it is necessary to incorporate the image layer separation process in Eq. (18) to address the color shift in underwater images. Inspired by the color constancy assumption in [30], the following cross-channel constraint is incorporated into Eq. (18) to balance the range of intensity values in the RGB channels:

$$\sum_{x} Y^{r}(x) = \sum_{x} Y^{g}(x) = \sum_{x} Y^{b}(x). \tag{28}$$

As for the prior functions, we follow [30] and specify $\Psi_{\mathbf{Y}}$ to preserve large gradients while $\Psi_{\mathbf{H}}$ smooths the color shift. The transmission propagation in Eq. (2) is then performed on $\mathbf{Y}$ to enhance the given underwater images.

As red light attenuates faster than its blue counterpart during underwater propagation, degraded images often show a bluish tone rather than the whitish tone of hazy images. To avoid interference from white objects and obtain a more precise $\mathbf{B}$ in the initial prior submodule, we build a subset $\Omega$ with the brightest 0.1% pixels of the image and calculate $\mathbf{B}$ as

$$B^{c} = \frac{1}{|\Omega|} \sum_{x \in \Omega} I^{c}(x). \tag{29}$$
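A sketch of this estimate, assuming (as in the reconstruction of Eq. (29) above) that the background light is the channel-wise mean over the brightest 0.1% pixels:

```python
import numpy as np

def background_light(I, frac=0.001):
    """Sketch of Eq. (29): estimate B^c from the subset Omega of the brightest
    0.1% pixels; taking the channel-wise mean over Omega is our assumption."""
    h, w, _ = I.shape
    brightness = I.sum(axis=2).ravel()             # overall per-pixel brightness
    k = max(1, int(frac * h * w))
    omega = np.argsort(brightness)[-k:]            # indices of the candidate set
    return I.reshape(-1, 3)[omega].mean(axis=0)    # B^c, one value per channel
```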

IV-C Single Image Rain Removal

Single image rain removal is a challenging task, as raindrops obstruct background scenes and cause several types of visibility degradation. Most existing models directly decompose images into a rain layer and a background layer. However, due to the intrinsic overlap between rain streaks and background texture patterns, such a simple separation model is insufficient to cover some important factors in real rain images. For example, distant rain streaks accumulate and generate atmospheric veiling effects similar to mist or fog, which severely reduce visibility by scattering light out of and into the line of sight.

Therefore, it is more reasonable to utilize our generalized atmospheric scattering model in Eq. (17) to formulate the rainy observation $\mathbf{I}$, in which $t$ is still the transmission, while $\mathbf{H}$ and $\mathbf{J}$ denote the rain streaks and the background scene, respectively. The deraining task can then be considered as removing the rain layer $\mathbf{H}$ from the observation $\mathbf{I}$, estimating the transmission and then recovering the latent background based on Eq. (17). So we can address this task by extending DPATN with layer separation. As for the prior on $\mathbf{H}$, we adopt a Gaussian mixture model (GMM) [33] to capture the implicit distribution of the rain streaks. Specifically, we follow [33] and define $\Psi_{\mathbf{H}}(\mathbf{H}) = -\sum_{x} \log \mathcal{G}(\mathbf{P}_{x}\mathbf{H})$, where $\mathbf{P}_{x}$ extracts the patch around $x$ (and removes its mean) and $\mathcal{G}$ stands for the GMM model.
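As an illustration, the following sketch evaluates such a GMM patch prior with scikit-learn; the patch size, the non-overlapping stride and the pretrained mixture `gmm` are assumptions:

```python
import numpy as np
from sklearn.mixture import GaussianMixture  # the expected type of `gmm`

def gmm_rain_prior(H, gmm, patch=8):
    """Sketch of the rain-layer prior following [33]:
    Psi_H(H) = -sum_x log G(P_x H), with P_x extracting the mean-removed patch
    around x. A GaussianMixture pretrained on rain patches is assumed."""
    h, w = H.shape
    patches = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = H[i:i + patch, j:j + patch].ravel()
            patches.append(p - p.mean())            # remove the patch mean
    return -gmm.score_samples(np.asarray(patches)).sum()   # negative log-likelihood
```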

V Experimental Results

Fig. 4: Comparison with He et al. and Meng et al. (i.e., physical priors) on a synthetic hazy image (panels: input, ground truth, He et al., Meng et al., ours). The dehazing results and estimated transmission maps are presented in the top and bottom rows, respectively.

We first consider the task of single image haze removal and evaluate the proposed algorithm (i.e., DPATN) on both synthetic datasets [34, 35] and real-world hazy images [4, 35], with comparisons to eight recent state-of-the-art methods, including prior driven approaches [4, 10, 7, 11, 36, 13] and fully data driven CNNs [16, 17]. We use either their original implementations or the results provided by the authors for fair comparison.

As for DPATN, we cascade 5 residual modules, each of which consists of 24 convolutional filters and 24 nonlinear activation functions. The filters and activations are initialized by the average-discarded DCT basis and a unified influence function, respectively. We train DPATN on a synthesized hazy image set, in which the observations are simulated from clean images and known depth maps, all from the NYU depth database [34]. We generate an atmospheric light and a medium extinction coefficient for each clean image; the corresponding synthesized hazy images are then generated by Eq. (1). Finally, we crop a region from each transmission, resulting in our training set. The parameters of DPATN are set experimentally: the scaling parameters $\alpha_{0}, \alpha_{1}$ in Eq. (6) and the constant $\varepsilon$ in Eq. (10) are fixed for the phase of latent scene radiance recovery, and for the image layer separation formulation we initialize the penalty parameters $\mu_{\mathbf{Y}}, \mu_{\mathbf{H}}$ and then increase them by a fixed ratio at each iteration.
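A sketch of this training-pair synthesis, reusing the `synthesize_haze` sketch from Section II-A (the sampling ranges and crop size are illustrative assumptions):

```python
import numpy as np

def make_training_pair(J, depth, rng, crop=128):
    """Sample a global atmospheric light and extinction coefficient, render the
    hazy image via Eq. (1), and crop a training patch of the (I, t) pair."""
    A = rng.uniform(0.7, 1.0)                      # assumed atmospheric light range
    beta = rng.uniform(0.5, 1.5)                   # assumed extinction coefficient range
    I, t = synthesize_haze(J, depth, A=A, beta=beta)
    y0 = int(rng.integers(0, J.shape[0] - crop + 1))
    x0 = int(rng.integers(0, J.shape[1] - crop + 1))
    window = (slice(y0, y0 + crop), slice(x0, x0 + crop))
    return I[window], t[window]                    # hazy patch and its GT transmission
```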

To perform quantitative comparisons, both the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) metrics are used to measure the similarity between the restored results and the ground truth (GT). We also report one no-reference metric, the Natural Image Quality Evaluator (NIQE) [37], for real-world hazy images (without GTs); a lower NIQE score indicates better performance. All the experiments are conducted on a PC with an Intel Core i7-3770 CPU at 3.4 GHz, 8 GB RAM and an NVIDIA GeForce GTX 1050 Ti GPU.

Fig. 5: Comparison with Berman et al. (2.46) and Zhu et al. (2.73) (i.e., distribution priors) against ours (2.37) on a real hazy image. The non-reference NIQE scores are reported in brackets after each method.
Fig. 6: Comparison with Li et al. and Chen et al. (i.e., variational priors) on a synthetic hazy image (panels: input, ground truth, Li et al., Chen et al., ours).
Fig. 7: Comparison with prior based methods on a real-world, low-resolution outdoor scene image with extremely dense hazes: Chen et al. (5.17), Meng et al. (3.36), Li et al. (4.83) and ours (3.16). The non-reference NIQE scores are reported in brackets after each method.
Fig. 8: Comparison with Ren et al. (3.74 & 3.04) and Cai et al. (3.74 & 3.29) (i.e., data driven deep networks) against ours (3.53 & 2.96) on outdoor scenes with dense hazes. The non-reference NIQE scores are reported in brackets, in which the first and second scores are for results on the top and second rows, respectively.

V-A Qualitative Results

This part conducts experiments on both synthetic and real-world hazy images to qualitatively compare the performance of our DPATN against state-of-the-art dehazing techniques. Please note the detailed comparisons of the zoomed-in regions in Figures 4, 5, 6, 7 and 8.

Prior Driven Models: We first compare DPATN with the well-known physical principle based dark channel prior (DCP) [4] and its recent improvement [7] in Figure 4. We observe that the results of DCP have apparent color distortions and serious artifacts. This is because pixels in sky regions may have high intensity in all three color channels, leading to inexact transmission estimations (see the bottom row of Figure 4). In contrast, our DPATN estimates a more accurate transmission, so the restored image is closer to the GT. In Figure 5, we also make a visual comparison on a real hazy image with Berman et al. [11] and Zhu et al. [10], which can be categorized as distribution prior based approaches. These two algorithms may not work well on distant regions (see the sky, lights and trees in the red rectangle), while our method successfully removes the haze from the image. Both Li et al. [12] and Chen et al. [13] design variational energies to suppress artifacts during haze removal, but we see in Figure 6 that these methods tend to over-smooth local details in their dehazing results (see the white board region). Thanks to the aggregation of both priors and data, DPATN can successfully remove haze while preserving rich details in the latent image. Finally, we demonstrate results on a real-world outdoor scene with extremely dense hazes in Figure 7. Most prior-based methods fail on this challenging example. Though small artifacts remain due to the low-resolution nature of the input image, details in the distant region (e.g., the airplane in the red rectangle) are still restored by DPATN.

Data              | He et al.    | Zhu et al.   | Berman et al. | Cai et al.   | Ren et al.   | Meng et al.  | Ours
Fattal [35] (#11) | 13.62 / 0.77 | 16.66 / 0.80 | 14.94 / 0.79  | 15.76 / 0.79 | 15.42 / 0.80 | 14.30 / 0.79 | 17.91 / 0.86
Synth T (#60)     | 12.32 / 0.75 | 14.21 / 0.72 | 15.69 / 0.71  | 16.15 / 0.76 | 14.11 / 0.70 | 13.95 / 0.64 | 15.89 / 0.76
Synth M (#60)     | 11.00 / 0.60 | 12.75 / 0.67 | 10.20 / 0.60  | 12.79 / 0.67 | 10.10 / 0.59 | 9.21 / 0.57  | 15.78 / 0.74
Synth D (#60)     | 10.31 / 0.51 | 11.80 / 0.58 | 9.49 / 0.49   | 11.70 / 0.56 | 9.47 / 0.49  | 9.25 / 0.52  | 15.42 / 0.68
Synth A (#180)    | 11.21 / 0.62 | 12.95 / 0.66 | 11.79 / 0.60  | 13.55 / 0.67 | 11.23 / 0.59 | 10.80 / 0.58 | 15.70 / 0.73
TABLE I: Average PSNR / SSIM on the benchmark datasets (i.e., Fattal's benchmark and the newly generated one). For our dataset (denoted as "Synth"), "T", "M", "D" and "A" denote the subsets of test images with thin, medium and dense hazes and all of these hazy images, respectively. The numbers of images are listed in brackets. The best two results are shown in red and green fonts.
Method | He et al. | Zhu et al. | Berman et al. | Cai et al. | Ren et al. | Meng et al. | Li et al. | Chen et al. | Ours
Time   | 17.20s    | 4.38s      | 3.73s         | 5.77s      | 5.87s      | 6.95s       | 83.44s    | 105.13s     | 6.46s
TABLE II: Average running time (seconds per image) of the dehazing methods in the testing phase on Fattal's benchmark.
Method     | church       | road1        | lawn2        | mansion      | raindeer     | Average
Ours (#30) | 15.74 / 0.78 | 12.59 / 0.68 | 14.01 / 0.71 | 16.59 / 0.78 | 14.29 / 0.76 | 14.64 / 0.74
Ours (#50) | 18.38 / 0.89 | 18.16 / 0.86 | 17.43 / 0.86 | 21.36 / 0.94 | 18.95 / 0.80 | 18.86 / 0.87
Ours (#80) | 18.24 / 0.89 | 18.48 / 0.87 | 18.15 / 0.85 | 19.90 / 0.93 | 19.27 / 0.86 | 18.81 / 0.88
Cai et al. | 14.98 / 0.78 | 13.62 / 0.73 | 13.34 / 0.72 | 16.99 / 0.85 | 18.23 / 0.82 | 15.43 / 0.78
Ren et al. | 15.01 / 0.84 | 14.17 / 0.78 | 14.29 / 0.78 | 17.97 / 0.89 | 17.80 / 0.81 | 15.85 / 0.82
TABLE III: The quantitative results (i.e., PSNR / SSIM) of DPATN with different training data sizes (i.e., #30, #50, #80) on five example images in Fattal's benchmark. The results of the two CNN based methods (i.e., Ren et al. and Cai et al.) are reported in the bottom two rows. The best two results are shown in red and blue fonts, respectively.

Data Driven Deep Models: We then compare our DPATN with recently developed deep models (e.g., Ren et al. [17] and Cai et al. [16]) for natural outdoor scene dehazing in Figure 8. The performances of the two CNNs are not very good on these challenging real-world scenarios. This is not very surprising, because these networks are established based on intuitions and fully trained on synthetic hazy images; they completely ignore the priors of haze, so the estimated transmissions will be biased if the haze in the testing images has a significantly different distribution from the training data. In contrast, the propagation of DPATN is not only learned from training data (i.e., $\mathcal{F}_{\mathcal{D}}$) but also controlled by an adaptive domain knowledge based guidance (i.e., $\mathcal{F}_{\mathcal{P}}$), so we actually leverage the advantages of different dehazing methodologies. We can see that DPATN removes most of the haze and produces a clear scene with vivid color information and rich details, which verifies the effectiveness of the proposed aggregation strategy.

V-B Quantitative Results

Benchmarks: We evaluate DPATN and report quantitative results on different benchmark datasets. The first one is from Fattal [35] and has been widely used for dehazing evaluation. It contains 11 images, including architecture, natural scenery and indoor images. We then generate a larger testing dataset with 180 synthetic hazy images using the NYU depth database (completely different from our training images). Three subsets are further selected from this dataset based on the haze concentration (i.e., thin, medium and dense, each consisting of 60 images).

Performance Evaluation: We report the average PSNR and SSIM on the two benchmarks in Table I. Prior based methods perform relatively well on Fattal's small dataset, while the results of the deep models are better than the conventional ones on the subset with thin haze concentrations. For example, the method of Cai et al. achieves the second best results in this case. This is because these CNNs are trained on synthetic images, so they may perform well when testing images share similar haze distributions with their training data. Thanks to the aggregation of data and priors, DPATN achieves very good performance on all of these benchmark datasets.

Running Time: We report the average running time of the dehazing methods in the testing phase on Fattal's benchmark in Table II. The speed of DPATN is faster than most prior based methods, but a little slower than the existing CNNs (e.g., Ren et al. and Cai et al.). This is mainly because their networks are shallower and thus contain fewer convolution operations. Notice, however, that even with a deeper architecture, the number of parameters in DPATN is still far smaller than in existing fully data driven deep models, so our training cost is much lower than that of these CNNs.

V-C Ablation Studies

We now analyze the training phase of DPATN in detail.

Training Data Size: Generally, existing dehazing networks all require large amounts of training data. For example, Ren et al. collected 6,000 images and generated transmissions with 3 different medium extinction coefficients, resulting in 18,000 hazy images. Similarly, the dehazing network proposed by Cai et al. required 20,000 training images. In contrast, by incorporating rich priors into the network, DPATN can yield accurate results with only dozens of training images. To verify this, we report the quantitative performance of DPATN with different training data sizes in Table III. DPATN with 50 training pairs performs much better than with 30, but we cannot achieve significant improvements even when the data size increases to 80. Overall, 50 training pairs are sufficient, and we follow this setting in all experiments.

Fig. 9: The performances of DPATN with greedy layer-wise optimization and joint backpropagation on example images (i.e., couch, flower1, lawn1, moebius, road1) in Fattal’s benchmark.

Optimization Strategy: In this experiment, we compare two different optimization strategies for DPATN training, i.e., greedy layer-wise optimization and joint backpropagation. In the first scheme, we calculate the gradients of the training cost with respect to the parameters of each layer separately. In contrast, the second scheme performs joint gradient descent updates for all the parameters. The PSNR and SSIM scores of DPATN with the two training strategies are illustrated in Figure 9. One can observe that the joint strategy (with layer-wise initialization) outperforms the greedy one on all test examples, so we adopt the joint strategy for training DPATN in this paper.

V-D Underwater Image Enhancement

In this part, we test the proposed framework on a more challenging underwater image enhancement task.

Training Set: Different from the standard haze removal problem, which assumes a uniform scattering component, here we follow Eq. (27) to generate underwater training data based on the NYU depth database. Specifically, as light of different colors has different attenuation degrees, we generate underwater images by adjusting the per-channel background lights $B^{r}, B^{g}, B^{b}$ and medium extinction coefficients in Eq. (27). We then crop a region from each transmission map to build our training set.

Performance Evaluation: We first evaluate the effectiveness of the image separation component of our proposed framework. That is, we compare the performance of DPATN with and without task-aware image separation (i.e., Eq. (18)) in Figure 10; the zoomed-in results are shown in Figure 11. It can be observed that DPATN with task-aware separation suppresses the influence of the forward scattering component and removes the haze effect at the edges of objects. Moreover, it also balances the colors of the final results. We then compare our complete framework with existing state-of-the-art underwater enhancement algorithms (e.g., Rahman et al. [38], Chiang et al. [32] and Li et al. [12]) on several challenging degraded underwater images (all downloaded from https://github.com/agaldran/UnderWater). It can be seen from Figure 12 and the zoomed-in results in Figure 13 that though [38] can remove the blue and green tones well, the haze effect still exists in its results, and [32] is very sensitive to its parameters. We can also see that [12] tends to over-smooth local details, especially in scenes far away from the camera. In contrast, our proposed method can simultaneously recover image details and balance the image colors.

Fig. 10: The performance of our framework without and with task-aware image separation (respectively denoted as "WO" and "W") on underwater degraded images (panels: input, ours (WO), ours (W)).
Fig. 11: Zoomed-in comparisons on the underwater degraded images in Figure 10 (panels: input, ours (WO), ours (W)).
Fig. 12: Comparison with Rahman et al., Chiang et al. and Li et al. on underwater degraded images (panels: input, Rahman et al., Chiang et al., Li et al., ours).
Fig. 13: Zoomed-in comparison with Rahman et al., Chiang et al. and Li et al. on the underwater degraded images in Figure 12.

V-E Single Image Rain Removal

Fig. 14: Comparison with SR, DSC and LP on images with synthesized rain streaks and hazes. The last two columns are results of our proposed framework without and with task-aware image separation (respectively denoted as "WO" and "W").
Fig. 15: Comparison with SR, DSC and LP on images with real heavy rain streaks. The last two columns are results of our proposed framework without and with task-aware image separation (respectively denoted as "WO" and "W").

Finally, we conduct experiments on rain removal and compare the performance of our proposed framework with state-of-the-art deraining approaches, including sparse representation based dictionary learning [39] (denoted as SR), discriminative sparse coding [40] (denoted as DSC) and the layer prior based method [33] (denoted as LP). We first collect 5 images from [33] and follow [41] and [4] to generate rain streaks and hazes as our synthesized testing images. It can be observed in Figure 14 that the SR approach tends to over-smooth the background and thus degrades the image quality. Though LP performs well on rain streak removal, haze still remains in its results. In contrast, by simultaneously performing rain streak and haze removal, our proposed method achieves the best performance among all compared approaches. We then evaluate all these methods on 4 real-world heavy rain images from [40]. Again, we obtain the best qualitative performance among all the compared approaches.

VI Conclusions

This paper developed DPATN for single image dehazing. Different from previous works, which either design models based on priors or build networks in heuristic ways, DPATN performs transmission propagation by aggregating priors and data in a deep residual architecture. We investigated its propagation behavior from an energy perspective, and a lightweight training framework was developed for DPATN. Finally, we proposed a task-aware image separation technique to extend DPATN to more challenging vision tasks, such as underwater image enhancement and single image rain removal, and proved the global convergence property of the final propagation. Experiments demonstrated that DPATN achieves favorable performance against state-of-the-art approaches.

References

  • [1] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, “Instant dehazing of images using polarization,” in CVPR, 2001.
  • [2] Y. Xu, J. Wen, L. Fei, and Z. Zhang, “Review of video and image defogging algorithms and related studies on image restoration and enhancement,” IEEE Access, vol. 4, pp. 165–188, 2016.
  • [3] R. Fattal, “Single image dehazing,” ACM ToG, vol. 27, no. 3, pp. 72:1–72:9, 2008.
  • [4] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE TPAMI, vol. 33, no. 12, pp. 2341–2353, 2011.
  • [5] J.-P. Tarel and N. Hautiere, “Fast visibility restoration from a single color or gray level image,” in ICCV, 2009.
  • [6] K. B. Gibson, D. T. Vo, and T. Q. Nguyen, “An investigation of dehazing effects on image and video coding,” IEEE TIP, vol. 21, no. 2, pp. 662–673, 2012.
  • [7] G. Meng, Y. Wang, J. Duan, S. Xiang, and C. Pan, “Efficient image dehazing with boundary constraint and contextual regularization,” in ICCV, 2013.
  • [8] X. Fan, Y. Wang, X. Tang, R. Gao, and Z. Luo, “Two-layer gaussian process regression with example selection for image dehazing,” IEEE TCSVT, vol. 27, no. 12, pp. 2505–2517, 2016.
  • [9] Y.-H. Lai, Y.-L. Chen, C.-J. Chiou, and C.-T. Hsu, “Single-image dehazing via optimal transmission map under scene priors,” IEEE TCSVT, vol. 25, no. 1, pp. 1–14, 2015.
  • [10] Q. Zhu, J. Mai, and L. Shao, “A fast single image haze removal algorithm using color attenuation prior,” IEEE TIP, vol. 24, no. 11, pp. 3522–3533, 2015.
  • [11] D. Berman and S. Avidan, “Non-local image dehazing,” in CVPR, 2016.
  • [12] Y. Li, F. Guo, R. T. Tan, and M. S. Brown, “A contrast enhancement framework with jpeg artifacts suppression,” in ECCV, 2014.
  • [13] C. Chen, M. N. Do, and J. Wang, “Robust image and video dehazing with visual artifact suppression via gradient residual minimization,” in ECCV, 2016.
  • [14] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no. 7553, pp. 436–444, 2015.
  • [15] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in CVPR, 2016.
  • [16] B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE TIP, vol. 25, no. 11, pp. 5187–5198, 2016.
  • [17] W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in ECCV, 2016.
  • [18] Y. LeCun, S. Chopra, R. Hadsell, M. Ranzato, and F. Huang, “A tutorial on energy-based learning,” Predicting structured data, 2006.
  • [19] E. J. McCartney, “Optics of the atmosphere: scattering by molecules and particles,” New York, John Wiley and Sons, Inc., vol. 1, 1976.
  • [20] Y. Chen, W. Yu, and T. Pock, “On learning optimized reaction diffusion processes for effective image restoration,” in CVPR, 2015.
  • [21] M. Ranzato, Y.-L. Boureau, S. Chopra, and Y. LeCun, “A unified energy-based framework for unsupervised learning,” in AISTATS, 2007.
  • [22] R. Liu, G. Zhong, J. Cao, Z. Lin, S. Shan, and Z. Luo, “Learning to diffuse: A new perspective to design pdes for visual analysis,” IEEE TPAMI, vol. 38, no. 12, pp. 2457–2471, 2016.
  • [23] J. Zhao, M. Mathieu, and Y. LeCun, “Energy-based generative adversarial network,” in ICLR, 2017.
  • [24] R. Liu, X. Fan, S. Cheng, X. Wang, and Z. Luo, “Proximal alternating direction network: A globally converged deep unrolling framework,” in AAAI, 2018.
  • [25] R. Liu, S. Cheng, Y. He, X. Fan, and Z. Luo, “Toward designing convergent deep operator splitting methods for task-specific nonconvex optimization,” in IJCAI, 2018.
  • [26] U. Schmidt, J. Jancsary, S. Nowozin, S. Roth, and C. Rother, “Cascades of regression tree fields for image restoration,” IEEE TPAMI, vol. 38, no. 4, pp. 677–689, 2016.
  • [27] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” in ECCV, 2016.
  • [28] A. Veit, M. J. Wilber, and S. Belongie, “Residual networks behave like ensembles of relatively shallow networks,” in NIPS, 2016.
  • [29] D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE TIP, vol. 4, no. 7, pp. 932–946, 1995.
  • [30] Y. Li and M. S. Brown, “Single image layer separation using relative smoothness,” in CVPR, 2014.
  • [31] J. Bolte, S. Sabach, and M. Teboulle, “Proximal alternating linearized minimization for nonconvex and nonsmooth problems,” Mathematical Programming, vol. 146, no. 1-2, pp. 459–494, 2014.
  • [32] J. Y. Chiang and Y.-C. Chen, “Underwater image enhancement by wavelength compensation and dehazing,” IEEE TIP, vol. 21, no. 4, pp. 1756–1769, 2012.
  • [33] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” in CVPR, 2016.
  • [34] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, “Indoor segmentation and support inference from rgbd images,” in ECCV, 2012.
  • [35] R. Fattal, “Dehazing using color-lines,” ACM ToG, vol. 34, no. 1, p. 13, 2014.
  • [36] Y. Li, S. You, M. S. Brown, and R. T. Tan, “Haze visibility enhancement: A survey and quantitative benchmarking,” CVIU, 2017.
  • [37] L. Liu, B. Liu, H. Huang, and A. C. Bovik, “No-reference image quality assessment based on spatial and spectral entropies,” SPIC, vol. 29, no. 8, pp. 856–863, 2014.
  • [38] Z. Rahman, D. J. Jobson, and G. A. Woodell, “Multi-scale retinex for color image enhancement,” in ICIP, 1996.
  • [39] L.-W. Kang, C.-W. Lin, and Y.-H. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE TIP, vol. 21, no. 4, pp. 1742–1755, 2012.
  • [40] Y. Luo, Y. Xu, and H. Ji, “Removing rain from a single image via discriminative sparse coding,” in ICCV, 2015.
  • [41] K. Garg and S. K. Nayar, “Photorealistic rendering of rain streaks,” in ACM ToG, vol. 25, no. 3, 2006, pp. 996–1002.