Removing rain streaks by a linear model

12/19/2018
by   Yinglong Wang, et al.

Removing rain streaks from a single image continues to draw attention in outdoor vision systems. In this paper, we present an efficient method to remove rain streaks. First, the location map of rain pixels needs to be known as precisely as possible, to which end we implement a relatively accurate detection of rain streaks by utilizing two characteristics of rain streaks. The key component of our method is to represent the intensity of each detected rain pixel using a linear model: p = αs + β, where p is the observed intensity of a rain pixel and s represents the intensity of the background (i.e., before being affected by rain). To solve α and β for each detected rain pixel, we concentrate on a window centered around it and form an L_2-norm cost function by considering all detected rain pixels within the window, where the corresponding rain-removed intensity of each detected rain pixel is estimated from some neighboring non-rain pixels. By minimizing this cost function, we determine α and β so as to construct the final rain-removed pixel intensity. Compared with several state-of-the-art works, our proposed method removes rain streaks from a single color image much more efficiently - it offers not only a better visual quality but also a speed-up of several times up to one order of magnitude.

I Introduction

Because of rain's high reflectance of light, rain is usually imaged as bright streaks that degrade the visual quality of an image. Hence, removing rain streaks from images is a common need for photographers. Garg and Nayar revealed that the visibility of rain is strongly related to some camera parameters [1]. Photographers can tune these parameters (e.g., exposure time and depth of field) to suppress the captured rain streaks. However, this method can avoid rain streaks only to a small extent. In addition, the majority of rain images are obtained by outdoor vision systems where it is difficult to tune the camera's parameters in a timely manner.

Rain streaks change the information conveyed by the original image. Therefore, the effectiveness of many computer vision algorithms that rely on small image features can be degraded severely. Although the majority of tracking and recognition algorithms are implemented on videos, rain removal in the key frames of a video still plays an important role.

Due to the inevitability of rainy weather and the wide deployment of vision systems in practice, rain removal has long been an important problem in computer vision. Research related to rain dates back to 1948, when Marshall and Palmer analyzed the distribution of raindrop sizes [2]. Later, Nayar et al. studied the visual manifestations of different weather conditions, including rain and snow [3]. Because of the randomness of rain's location, accurate detection of rain is a very difficult task. Early research mainly concentrated on rain removal in videos [4, 5], thanks to the strong correlation among neighboring video frames. Later on, the research focus gradually shifted to rain removal in a single (color) image [16, 17, 19, 6, 7, 8, 14, 15, 18, 20, 21].

The recent methods for rain removal from a single image can be classified into four categories. The first category is purely filtering-based, where a nonlocal mean filter or a guided filter is often used [16, 17, 19]. Because only simple filtering is involved, these methods are very fast. However, they can hardly produce a satisfactory performance consistently - either some rain streaks are left in the output image or quite a few image details are lost so that the output image becomes blurred. The second category builds models for rain streaks [18, 20, 21]. These models attempt to discriminate rain streaks from the background. However, it often happens that some details of the image are mistreated as rain streaks. The third category, which seems more reasonable, adopts a two-step processing [6, 7, 8]. Specifically, a low-pass filtering is first used to decompose a rain image into a low-frequency part and a high-frequency part. While the low-frequency part can be made free of rain as much as possible, some descriptors can be applied to the high-frequency part to further extract the image details to be added back to the low-frequency part. The last category combines deep learning methods with the rain removal task and obtains excellent results by designing appropriate deep networks [9, 10, 11, 12].

Fig. 1: An example of the rain-removed results: (a) original rain image, (b) result of Li et al. (in CVPR-2016) [21], (c) result of Luo et al. (in ICCV-2015) [20], (d) result of Zhang et al. (in CVPR-2018) [13], (e) result of Fu et al. (in CVPR-2017) [9], (f) our result.

Our formulation to remove rain streaks in a single color image also includes two steps. In the first step, we try to detect rain streaks by utilizing two characteristics of rain streaks. Then, a linear model is built to remove rain streaks in the second step.

Step-1: In order to remove rain streaks as much as possible, we hope that all rain streaks can be detected. However, it is difficult for existing methods to obtain the locations of rain accurately, and two kinds of detection errors usually occur:

  1. some rain streaks are missed, and

  2. some non-rain image details are mis-detected as rain.

If rain streaks are missed in the detection, they will remain in the final result. Hence, we try our best to avoid this detection error in our work. In our extensive tests, we found that our detection method captures nearly all rain streaks for a large majority of the test images. This is largely because raindrops usually reflect light strongly, so that their pixel intensities are noticeably larger than those of the background pixels. Even when some rain streaks are missed, the final results could still be acceptable visually, because those missed rain streaks have color components that are very similar to the background.

On the other hand, it is inevitable that some non-rain image details will be mis-detected as rain streaks. To reduce their influence, we try to correct these mis-detections by an eigen color method [24]. To this end, two characteristics of rain streaks are used:

  • rain usually possesses a higher reflection to light as compared to its neighboring non-rain objects, thus leading to larger intensities, and

  • rain is semi-transparent and colorless so as to present a gray color in the image.

These two characteristics are rather robust and have been utilized in previous rain removal works, such as [8] and [25].

Step-2: We follow the imaging principle of rain pixels to build a physical model to represent each rain pixel. In reality, there are many factors influencing the imaging of rain pixels, such as light, wind, and even the background. By a reasonable approximation, we simplify the imaging of rain into a linear function:

p = αs + β,    (1)

where s is the pixel intensity of the scene before being affected by rain (which is unknown), p is the observed intensity of the rain pixel, and α and β are the parameters of the linear model. Our goal is to determine α and β for each rain pixel so that s can be reconstructed optimally.

Advantages of our approach:

In our work, we propose to perform a relatively accurate detection of rain streaks to make sure that nearly no rain streaks are missed. Then, based on the linear model for rain pixels, we determine the involved parameters by optimizing a convex loss function. Since our processing is applied only to the detected rain pixels, all other non-rain image details remain in the final result, which plays an important role in preserving image details. It can be seen from later sections that our algorithm produces higher PSNR/SSIM values for most rendered rain images compared with some state-of-the-art traditional methods. The optimization formulated in our work is convex, so we obtain the globally optimal solution and avoid complex iterative calculation. Hence, our algorithm offers a speed-up of several times to more than one order of magnitude when compared with several recent state-of-the-art works, thanks to the linear model and the detection of rain streaks developed in our work.

Furthermore, our linear model shows good robustness for removing rain streaks, both on ordinary rain images and on challenging heavy rain images. Another important point is that our algorithm is not memory-consuming. During the experiments, we found that some algorithms (e.g., [20]) have high requirements on computer memory: if the input rain images are relatively large, they easily lead to memory overflow. By testing, our algorithm can deal with large rain images even on an ordinarily configured computer. Through a relatively complete comparison, our results prove to outperform those of the state-of-the-art traditional works and to be comparable to the deep learning based works, both subjectively and objectively - see Fig. 1 for one set of results.

The remainder of this paper is organized as follows. We briefly review the existing rain-removal methods in Section II. In Section III, we present the details of our rain detection method. The linear model for the imaging of rain pixels and the associated optimization to determine the involved parameters are described in Section IV. In Section V, we show the experimental results of our algorithm and make objective and subjective comparisons with several state-of-the-art works. Finally, we conclude this paper in Section VI.

II Related Works

Rain removal from videos in the spatial domain: Early work on detection and removal of rain is mainly focused on videos by making use of the correlation among video frames. In [4], Garg and Nayar analyzed the visual effect of rain on an imaging system. In order to detect and remove rain streaks from videos, they developed a correlation model to describe the dynamics of rain and a motion blur model to explain the photometry of rain. To make the study more complete, they further revealed that the appearance of rain is the interaction result of the lighting direction, the viewing direction, and the oscillating shape of a raindrop [27]. Then, they built a new appearance model for rain (which is based on the raindrop’s oscillation model) and rendered rain streaks. In [28], they further analyzed the visual effect of rain and the factors that influence it. In order to detect and remove rain streaks in videos, they developed a photometric model that describes the intensities caused by individual rain streaks and a dynamic model that reflects the spatio-temporal properties of rain.

Based on the temporal and chromatic characteristics of rain streaks in videos, Zhang et al. proposed another rain detection and removal algorithm in [29]. This work shows that rain streaks do not always influence a certain area in videos. Besides, the intensity changes of the three color components (namely, R, G, and B) of a pixel are approximately equal to each other, i.e., ΔR ≈ ΔG ≈ ΔB. By these two characteristics, rain streaks are detected and removed in videos. However, this method can only deal with videos that are captured by a stationary camera.

In [30], Brewer and Liu utilized three characteristics of rain to detect rain streaks in videos, and the detected rain streaks are removed by calculating the mean value of two neighbouring frames. In [5], Bossu et al. proposed the histogram of orientation of streaks (HOS) to detect rain streaks in image sequences. Specifically, this method decomposes an image sequence into foreground and background by a Gaussian mixture model, and rain streaks are separated into the foreground. Then, the HOS, which follows a Gaussian-uniform mixture model, is calculated to detect rain streaks more accurately.

Rain removal from videos in the frequency domain: In [31], Barnum et al. detected rain streaks in the frequency domain. Specifically, they developed a physical model to simulate the general shape of rain and its brightness. Combined with the statistical properties of rain, this model is utilized to determine the influence of rain on the frequencies of image sequences. Once the frequencies of rain are detected, they are suppressed to obtain the rain-removed image sequences. Later on, in order to analyze the global effect of rain in the frequency space, they built a shape and appearance model for a single rain streak in the image space [32]. Then, this model is combined with the statistical properties of rain to create another model that describes the global effect of rain in the frequency space.

Single image rain removal: In [33], monocular rain detection is implemented by Roser and Geiger in a single image, in which a photometric raindrop model is utilized. Meanwhile, Halimeh and Roser detected raindrops on the car windshield in a single image by utilizing a standard interest point detector [34]. In this algorithm, they built a model for the geometric shape of raindrops and studied the relationship between raindrops and the environment.

To the best of our knowledge, Fu et al. [6] accomplished the rain-removal task on single images for the first time by representing the image signal sparsely [22]. Kang et al. [7] and Chen et al. [8] followed a similar framework and demonstrated improved results. In particular, Kang et al. identified the dictionary atoms [23] of rain streaks by utilizing the histogram of oriented gradients (HOG) [35], while Chen et al. exploited the depth of field (DoF) to extract more image details from some high-frequency components. Some other learning-based image decomposition methods were also proposed to remove rain in a single image [14, 15]. They follow a formulation similar to that of [6, 7, 8]. In [14], a context-constrained image segmentation of the input image is implemented. In [15], an unsupervised clustering of the observed dictionary atoms is implemented via affinity propagation, which allows image-dependent components with similar context information to be identified.

Meanwhile, Xu et al. [16] developed a rain-free guidance image and then utilized the guided filter [36] to remove rain. Luo et al. separated a rain image into a rain layer and a de-rained layer by a nonlinear generative model (screen blend model) [20]. Specifically, they approximated the rain and de-rained layers by highly discriminative codes over a learned dictionary. Kim et al. [17] detected rain streaks in a single image by combining an elliptical shape model of rain with a kernel regression method [37]. Then, they removed rain streaks by a non-local mean filter [38]. Based on the fact that rain streaks usually reveal similar and repeated patterns in the image, Chen et al. captured the spatio-temporally correlated rain streaks by a generalized low-rank model, extended from matrix to tensor structure [18].

In [19], Ding et al. designed a guided smoothing filter and obtained a coarse rain-free image. The final rain-removed image is then acquired by a further minimization operation. Wang et al. analyzed the characteristics of rain and proposed a rain-removal framework [25]. Li et al. utilized some patch-based priors of both the background and the rain to separate a rain image into a rain layer and a de-rained layer [21]. In [26], we proposed a rain or snow removal algorithm based on a hierarchical approach. First, a rain/snow image is decomposed into a rain/snow-free low-frequency part and a high-frequency part by combining rain/snow detection and the guided filter [36]. Then, we extract three layers of non-rain/snow details from the high-frequency part by utilizing sparse coding. Finally, we add the low-frequency part and the three layers of image details together to obtain the rain/snow-removed image.

Recently, deep learning has been applied to the rain removal task. For instance, Fu et al. extended ResNet to a deep detail network that reduces the input-to-output mapping range and makes the learning process easier [9], and the de-rained result is further improved by using some image-domain prior knowledge. They also built DerainNet to remove rain streaks [12], in which the high-frequency part of an image, rather than the image itself, is used during the training process. In the meantime, Zhang et al. proposed a de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN) [11], which adds quantitative, visual, and discriminative performance terms into the objective function and obtains good results. Yang et al. designed a new rain image model and constructed a new deep learning architecture [10], which also achieves good rain-removed results. In a most recent work [13], Zhang et al. proposed a density-aware multi-stream densely connected convolutional neural network, called DID-MDN, to remove rain from single images. This network can automatically determine the rain-density information and then remove rain efficiently.

III Detection of Rain

Because of the randomness of rain streaks in an image, the accurate detection of rain streaks is a challenging problem in rain removal tasks, even for deep learning based methods. Our goal here is to detect nearly all rain pixels in the input rain image and, at the same time, to avoid the mis-detection of non-rain details as much as possible.

In [26], we proposed a rough detection method of rain streaks based on the fact that the intensity of a rain pixel is usually larger than its neighboring non-rain pixels. In this work, we follow this approach but optimize it by utilizing the color characteristics of rain to reduce the mis-detection of non-rain details.

Fig. 2: The five windows utilized to implement the rain detection in our work: the pixel under test is located at a different position in each of the five windows. Four different colors denote the four windows with the pixel located at the bottom-right (green), bottom-left (yellow), top-right (orange), and top-left (blue) corner; the red square denotes the window with the pixel at the center.

Let us use I to denote an input rain image, which consists of three color channels as the input image is assumed to be a color one. Because the intensity of a rain pixel is larger than that of its neighboring non-rain pixels in a small window, a pixel p is considered as a rain pixel if its intensity is larger than the mean intensity of a window that includes p. In this work, we modify the detection rule slightly as follows: p is regarded as a rain pixel candidate if the inequality

I_c(p) > m_c(W) + Δ    (2)

holds for all three color channels c ∈ {R, G, B} and for all five windows W in which p is located at the center, top-left, top-right, bottom-left, and bottom-right position respectively, as shown in Fig. 2, where

m_c(W) = (1/|W|) Σ_{q∈W} I_c(q)    (3)

and W represents a window with |W| pixels. The parameter Δ is an empirical value which will be given later in the experimental part. For all pixels in image I, we implement the same detection. After the detection is completed, a binary location map L is generated in which L(p) is set to 1 if a rain pixel candidate is detected at p.

As compared to [26], a small increment Δ is added in (2). The reason is as follows: the intensity of some non-rain pixels could be smaller than the intensity of the neighboring rain pixels but larger than the window mean m_c(W), so that adding Δ can avoid mis-detecting this kind of non-rain pixels as rain.
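To make the detection rule concrete, below is a minimal NumPy sketch of the candidate detection in (2)-(3). The window size, the increment Δ, and the use of a box filter for the window means are illustrative assumptions, not the settings of the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_rain_candidates(img, w=5, delta=0.02):
    """Rough rain-candidate detection, a sketch of (2)-(3).

    img   : HxWx3 float image normalized to [0, 1]
    w     : window size (assumed value, for illustration)
    delta : small increment added to the window mean (assumed value)

    A pixel is kept as a candidate only if, in every color channel, its
    intensity exceeds the mean of each of the five windows that contain it
    (center, plus the four corner placements of Fig. 2) by more than delta.
    """
    H, W, _ = img.shape
    mask = np.ones((H, W), dtype=bool)
    half = w // 2
    # Shifts that align the window mean with the tested pixel: (0, 0) puts the
    # pixel at the window center; the four others put it at a window corner.
    offsets = [(0, 0), (half, half), (half, -half), (-half, half), (-half, -half)]
    for c in range(3):
        channel = img[..., c]
        local_mean = uniform_filter(channel, size=w, mode="nearest")
        for dy, dx in offsets:
            shifted_mean = np.roll(local_mean, shift=(dy, dx), axis=(0, 1))
            mask &= channel > shifted_mean + delta
    return mask  # binary location map of rain-pixel candidates
```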

By this detection method, nearly all rain pixels can be detected. The failure case is that, in some rain images, rain pixels whose color components are very similar to the background may be missed. Even if some rain streaks are missed, the final results could still be acceptable visually, because those missed rain streaks usually merge into the background. At the same time, some non-rain details of the image are unfortunately mis-detected as rain pixels. An efficient way of identifying these mis-detections is described in the following.

Rain streaks usually possess a neutral color. Based on this feature, Chen et al. [8] proposed to identify rain dictionary atoms by the eigen color feature [24]. In our work, we utilize the color characteristics of rain to correct the mis-detected non-rain pixels. For a given pixel p, we use (R, G, B) to represent its color vector in the RGB space. We transform this 3-D RGB space into a 2-D eigen color space as follows:

(4)

It is clear from (4) that, after the transformation, any pixel having a neutral color will be clustered around the origin of the 2-D space. For each rain pixel candidate detected above, we transform its RGB values into this 2-D space to form a 2-D vector. If the magnitude of this 2-D vector (i.e., the Euclidean distance to the origin of the 2-D space) is larger than a pre-set threshold, the candidate is recognized as a mis-detected pixel. Consequently, we set its corresponding value in the location map L back to 0.

Now, the remaining detections consist of real rain pixels and a few mis-detected non-rain details that are similar to rain in their size and eigen color. We cannot correct these remaining image details any further. From the statistics presented in Section V, however, we will see that these mis-detected pixels contribute only a small percentage. Based on hundreds of tests on various kinds of rain images, their influence on the final rain-removed results is visually tolerable; we will analyze the reason later.
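A sketch of the revision step follows. Since the exact transform of (4) is not reproduced above, the sketch substitutes a simple mean-subtracted chromaticity pair that shares the key property used here: a neutral (gray) pixel maps exactly to the origin. The threshold value is likewise an assumption.

```python
import numpy as np

def revise_by_color(img, mask, thresh=0.05):
    """Drop candidates whose color is far from neutral (gray).

    Stand-in for the eigen-color test: the pair (u, v) below is NOT the
    transform of (4) in the paper, only a simple chromaticity pair with the
    same key property -- for a neutral pixel (R = G = B) it is (0, 0).
    `thresh` is an assumed value.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mean = (r + g + b) / 3.0
    u, v = r - mean, b - mean                # deviation of R and B from the gray axis
    magnitude = np.sqrt(u ** 2 + v ** 2)
    return mask & (magnitude <= thresh)      # keep only near-neutral candidates
```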

IV A linear model and its optimization

In this section, we build a linear model that is based on the imaging principle of a pixel to describe the influence of rain on the pixel intensity, followed by an optimization in which the involved parameters can be determined according to a convex loss function.

IV-A The linear model

When a scene is photographed, the intensity of each image pixel is determined by an integral of the scene's irradiance over the entire exposure time T:

I = ∫_0^T E(t) dt.    (5)

In a rainy scene, the intensity of a rain pixel can be expressed as:

p = ∫_0^τ E_r(t) dt + ∫_τ^T E_s(t) dt,    (6)

where E_r and E_s are the irradiance of the raindrop and the scene, respectively, and τ is the occupation time of the raindrop during the entire exposure time T.

Raindrops have no regular shape. Hence, the irradiance of a raindrop is non-constant, and we utilize Ē_r to denote the time-averaged irradiance of the raindrop. For a given pixel, the corresponding scene may change during the exposure time in a video. However, in photography, the scene is stationary. Therefore, the irradiance of the scene, E_s, is constant. Then, (6) can be rewritten as

p = Ē_r τ + E_s (T − τ).    (7)

Assuming that rain lasts for the entire exposure time, we have

r = Ē_r T,    (8)

where r is the intensity that would be recorded if the pixel were covered by rain during the whole exposure. For the scene, we make the same assumption and obtain

s = E_s T.    (9)

By combining (7), (8), and (9) together, we obtain

p = αs + β,    (10)

where α = (T − τ)/T and β = (τ/T)·r. This linear model establishes the relationship between the original scene intensity and the intensity after the scene is affected by rain.

As mentioned in [28], Gunn and Kinzer made an empirical study of the raindrop's terminal velocity v in terms of the raindrop size and found v ≈ 200√a, where a is the radius of a raindrop (in meters). In [28], Garg et al. obtained a conservative upper bound on τ by simulating the process of a raindrop passing through the pinhole of a camera: τ < 2a/v. The bound on τ can thus be rewritten as τ < √a/100. Garg et al. also found that the maximum radius of a raindrop is about 3.5 mm, so that the maximum value of τ is a fraction of a millisecond [28].
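As a quick check of these bounds (plugging the largest radius into the relations above; the typical exposure time used below is a common camera setting, not a value taken from the paper):

v ≈ 200√0.0035 m/s ≈ 11.8 m/s,    τ < 2a/v = √a/100 ≈ 5.9 × 10⁻⁴ s,

so even for the largest raindrops τ is below one millisecond, whereas a typical exposure time T is on the order of tens of milliseconds (e.g., 1/60 s ≈ 16.7 ms), giving α = (T − τ)/T ≳ 0.96.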

As analyzed above, the time τ is small compared with the entire exposure time T. Therefore, the values of α for all rain pixels in an image are nearly the same. Clearly, α lies in the range [0, 1]. There are three situations according to the value of α:

  1. α = 1 - the scene is not influenced by rain.

  2. 0 < α < 1 - the most common situation, where the pixel intensity follows the linear model described above.

  3. α = 0 - implying that only rain is captured during the whole exposure time. In reality, this situation is impossible.

When a raindrop falls from the sky, its velocity increases as it gets closer to the ground. As we mentioned above, the maximum value of τ is a fraction of a millisecond, which is usually small as compared with the exposure time T of a camera. This implies that α is close to 1. We can further infer some imaging principles of rain from this model. For example, a low-intensity pixel receives a larger intensity enhancement than a high-intensity pixel when being affected by rain. To prove this, the intensity enhancement is defined as:

Δp = p − s.    (11)

After applying our model, it becomes:

Δp = (αs + β) − s = β − (1 − α)s.    (12)

Because α is less than 1, a larger s will lead to a smaller Δp.
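For illustration only (the values of α and β below are hypothetical, chosen merely to make the monotonicity visible): with α = 0.96 and β = 0.1 on normalized intensities,

Δp = 0.1 − 0.04 × 0.2 = 0.092 for a dark pixel (s = 0.2),
Δp = 0.1 − 0.04 × 0.8 = 0.068 for a bright pixel (s = 0.8),

so the darker pixel is brightened more by the same raindrop.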

Furthermore, our proposed linear model can be used to infer some other results obtained in previous works. For instance, in [29], Zhang et al. demonstrated that the intensity changes of the three color channels of a pixel are approximately equal to each other after being affected by rain. To prove this result, let us assume that (R, G, B) is the color vector of a pixel and (R', G', B') is the color vector after the pixel is affected by rain. Then, we have

R' = αR + β,  G' = αG + β,  B' = αB + β,    (13)

where the same β applies to the three channels because rain is colorless (gray). The intensity change of each color channel is then:

ΔR = R' − R = β − (1 − α)R,  ΔG = β − (1 − α)G,  ΔB = β − (1 − α)B.    (14)

As we analyzed above, α is close to 1. Therefore, the intensity changes of the three color channels are approximately equal to each other, namely,

ΔR ≈ ΔG ≈ ΔB.    (15)
Fig. 3: Step-by-step experimental results: (a) original rain image, (b) rough location map, (c) the distribution of detected candidate rain pixels in the 2-D eigen color space, (d) revised location map.

IV-B Optimization

In order to train the linear model for a given image, we need to know the values of the original pixels. Unfortunately, none of them is known in reality. The best we can do is to approximate each of them. In [30], Brewer and Liu pointed out that a majority of rain streaks appear in relatively constant areas of an image. This can also be observed in rain images: even in an image with a complex background, there is still a small constant area around each rain streak; otherwise, the rain streak could not be seen. In such a relatively constant area, a weighted average intensity of the neighboring non-rain pixels around each rain pixel can be utilized to approximate its original intensity. Such an approximation is supported by two reasons: (1) in the case of light rain, the intensity of the neighboring non-rain pixels is very close to the original intensity of the rain-affected pixel in a small window, and (2) in the case of heavy rain, where a fog effect appears, the weighted average serves as a de-hazing preprocessing and the situation then turns back to (1).

Approximation of the original intensity of a rain pixel.  For each rain pixel p, let us denote by S the set of color vectors of all non-rain pixels in the window W_p that is centered at p, i.e., S = {s_1, s_2, …, s_n}, where s_i represents the color vector of the i-th non-rain pixel in W_p, and p is the observed color vector of the rain pixel. To compute a weighted average of these non-rain pixels, the involved weights w_i, normalized so that they sum to one, are calculated as in

(16)

For the rain pixel p, the approximation of its original color vector is

ŝ = Σ_{i=1}^{n} w_i s_i.    (17)

We apply (17) to all detected rain pixels to obtain the approximations of their original intensities.
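The approximation step can be sketched as follows. The weight function of (16) is not reproduced above, so the sketch substitutes a simple color-similarity (Gaussian) weighting normalized to sum to one; the window size and sigma are assumed values.

```python
import numpy as np

def approximate_original(img, rain_mask, y, x, half=2, sigma=0.1):
    """Approximate the original color of rain pixel (y, x), cf. (17).

    Uses the non-rain pixels inside a small window centered at (y, x).
    The weights are an assumed Gaussian of color similarity to the observed
    rain pixel (the exact weights of (16) are not reproduced here).
    """
    H, W, _ = img.shape
    y0, y1 = max(0, y - half), min(H, y + half + 1)
    x0, x1 = max(0, x - half), min(W, x + half + 1)
    patch = img[y0:y1, x0:x1].reshape(-1, 3)
    patch_mask = rain_mask[y0:y1, x0:x1].reshape(-1)
    non_rain = patch[~patch_mask]                  # color vectors s_1..s_n
    if non_rain.shape[0] == 0:                     # no non-rain neighbors found
        return img[y, x]
    p = img[y, x]
    dist2 = np.sum((non_rain - p) ** 2, axis=1)
    w = np.exp(-dist2 / (2.0 * sigma ** 2))
    w /= w.sum()                                   # weights sum to one
    return (w[:, None] * non_rain).sum(axis=0)     # weighted average, cf. (17)
```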

Fig. 4: A linear relationship between the intensities of rain pixels and the approximations of their original intensities can be seen clearly in three color channels: (a) R channel, (b) G channel, and (c) B channel. Then, we manually revise the mis-detected non-rain pixels and reduce the influence of non-rain pixels on the linear model: (a1) R channel, (b1) G channel, and (c1) B channel.

Training the parameters of our linear model.  As analyzed above, the time τ is small compared with the entire exposure time T, and the parameters α and β for all rain pixels in an image are approximately the same. Hence, we train the parameters of our model in a relatively large window, but only apply the resulting parameters to the rain pixel located at the window center.

Suppose that p is a detected rain pixel. In one color channel c ∈ {R, G, B}, we form a relatively large window W centered at p, and all detected rain pixels in this window will be utilized. Let ŝ_1, ŝ_2, …, ŝ_n, as obtained by (17), be the approximations of the original intensities of the rain pixels in W, and let p_1, p_2, …, p_n be their observed intensities. According to our linear model (10), we have:

p_i ≈ α ŝ_i + β,  i = 1, 2, …, n.    (18)

In order to determine α and β, we minimize the loss function defined below:

E(α, β) = (1/n) Σ_{i=1}^{n} (α ŝ_i + β − p_i)² + λα²,    (19)

where λ is the regularization parameter. This is a convex loss function and the solution is as follows:

α = ( (1/n) Σ_{i=1}^{n} ŝ_i p_i − μ_ŝ μ_p ) / (σ_ŝ² + λ),    (20)

β = μ_p − α μ_ŝ,    (21)

where μ_ŝ = (1/n) Σ_{i=1}^{n} ŝ_i, μ_p = (1/n) Σ_{i=1}^{n} p_i, and σ_ŝ² is the variance of ŝ_i in the window W.

After obtaining α and β from all detected rain pixels in the window W, we apply them only to the rain pixel p at the center of W, so that its rain-removed intensity can be obtained as follows:

s = (p − β) / α.    (22)

We implement the same processing on all detected rain pixels in each color channel c so that the rain-removed image can be obtained.
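Assuming the closed form written above, the per-pixel training and removal can be sketched as follows; lam (the regularization parameter λ) is a placeholder value.

```python
import numpy as np

def remove_rain_at_pixel(p_center, p_obs, s_hat, lam=1e-3):
    """Fit p_i ≈ α·ŝ_i + β over the rain pixels of one window, cf. (19)-(22).

    p_center : observed intensity of the rain pixel at the window center
    p_obs    : observed intensities p_1..p_n of the rain pixels in the window
    s_hat    : approximated original intensities ŝ_1..ŝ_n from (17)
    lam      : regularization parameter λ (assumed value)
    """
    p_obs = np.asarray(p_obs, dtype=float)
    s_hat = np.asarray(s_hat, dtype=float)
    mu_s, mu_p = s_hat.mean(), p_obs.mean()
    alpha = ((s_hat * p_obs).mean() - mu_s * mu_p) / (s_hat.var() + lam)   # (20)
    alpha = max(alpha, 1e-3)          # guard against degenerate fits
    beta = mu_p - alpha * mu_s                                             # (21)
    return (p_center - beta) / alpha                                       # (22)
```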

In order to make our work clearer, we summarize our algorithm in Algorithm 1; the values of the parameters used in our algorithm will be given in the next section.

Input:  rain image I
Output: rain-removed image O
  obtain the location map L using (2)
  transform the RGB values of all candidates into the 2-D eigen color space by (4)
  revise L by thresholding the magnitude of each candidate in the 2-D space
  for each color channel c ∈ {R, G, B} do
     for each detected rain pixel p do
        train α and β by (20) and (21)
        calculate the rain-removed intensity by (22)
     end for
  end for
  return the rain-removed image O
Algorithm 1 Removing rain streaks by a linear model
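Putting the pieces together, a compact driver matching the structure of Algorithm 1 might look like the sketch below; it reuses the hypothetical helper functions and parameter values introduced in the earlier sketches and processes each color channel independently.

```python
import numpy as np

def remove_rain(img, w_detect=5, delta=0.02, color_thresh=0.05,
                half_approx=2, half_train=7, lam=1e-3):
    """End-to-end sketch of Algorithm 1 (helpers and parameter values are the
    assumed ones from the previous sketches, not the paper's settings)."""
    mask = detect_rain_candidates(img, w=w_detect, delta=delta)   # (2)-(3)
    mask = revise_by_color(img, mask, thresh=color_thresh)        # eigen-color revision
    out = img.copy()
    ys, xs = np.nonzero(mask)
    # Pre-compute the approximated original colors of all rain pixels, cf. (17).
    s_hat_map = {(y, x): approximate_original(img, mask, y, x, half=half_approx)
                 for y, x in zip(ys, xs)}
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - half_train), min(img.shape[0], y + half_train + 1)
        x0, x1 = max(0, x - half_train), min(img.shape[1], x + half_train + 1)
        neighbors = [(yy, xx) for yy, xx in zip(ys, xs)
                     if y0 <= yy < y1 and x0 <= xx < x1]
        for c in range(3):                                        # each color channel
            p_obs = [img[yy, xx, c] for yy, xx in neighbors]
            s_hat = [s_hat_map[(yy, xx)][c] for yy, xx in neighbors]
            out[y, x, c] = remove_rain_at_pixel(img[y, x, c], p_obs, s_hat, lam=lam)
    return np.clip(out, 0.0, 1.0)
```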

V Experimental results

In this section we apply our algorithm to rain images. We first show some detailed intermediate experimental results and analysis for one test image. Then, several state-of-the-art methods are selected to perform subjective and objective comparisons.

V-A Detailed experimental results

We use the rain image in Fig. 3(a) as an example to run our algorithm and show the implementation details of our algorithm.

Step-1. We first detect the rain streaks by (2). The resulting binary location map is shown in Fig. 3(b), in which the black areas stand for the detected rain pixels. In order to revise the mis-detected image details, we transform the RGB values of each detected rain pixel candidate into the 2-D eigen color space by (4) and calculate the magnitude of the resulting 2-D vector. The distribution of the detected rain pixels in this space is shown in Fig. 3(c). Then, we revise all rain pixel candidates whose magnitudes are larger than the preset threshold (the green part in Fig. 3(c)). The revised result is shown in Fig. 3(d), from which one can see that many rain pixel candidates have been revised.

Fig. 5: (a) the result of multiplying binary location map with original rain image; (b) rain-removed result by our linear model; (c) the result of adding rain by the linear model on (b); (d) simulated diagram of the value of mis-detected non-rain details in rain-removed result.
Method [8] [20] [21] Ours
Time(s) 110.1 75.7 1257.4 7.68
TABLE I: The average time consumed by different methods on color images.
        Image 1      Image 2      Image 3      Image 4      Image 5      Image 6      Image 7      Image 8      Image 9      Image 10
[8]     32.96/0.738  34.95/0.774  38.17/0.776  31.83/0.602  32.10/0.704  34.14/0.784  34.37/0.788  35.41/0.755  35.11/0.725  34.83/0.816
[20]    32.42/0.820  33.71/0.838  35.92/0.784  29.44/0.790  30.41/0.878  32.96/0.843  27.51/0.864  32.07/0.797  31.31/0.776  31.44/0.865
[21]    32.96/0.691  33.03/0.746  36.39/0.682  29.52/0.669  30.31/0.686  32.36/0.747  31.21/0.706  33.49/0.720  29.90/0.697  31.83/0.734
Ours    33.77/0.869  39.34/0.847  38.23/0.794  33.33/0.796  34.86/0.896  33.38/0.827  34.89/0.871  34.55/0.813  35.92/0.788  36.68/0.882
TABLE II: Rain image performances (PSNR/SSIM) of different methods (rows) on synthesized rain images (columns) against the ground truth.
Method [8] [20] [21] Ours
Percentage 10.54 % 12.16 % 7.50 % 69.80 %
TABLE III: User study result. The numbers are the percentages of votes which are obtained by each method.
Fig. 6: Some rain-removed results for synthetic rain images: (a) ground truths, (b) synthetic rain images, (c) results of Chen et al. [8], (d) results of Li et al. [21], (e) results of Luo et al. [20], (f) results of Fu et al. [9], (g) results of Zhang et al. [13], and (h) our results. The PSNR/SSIM values of the different methods are placed at the top-left corner of the images to facilitate comparisons.
Fig. 7: (a) The PSNR values of three methods. (b) The SSIM values of three methods.

Step-2. For all detected rain pixels (i.e., those remaining after the revision in Step-1), we calculate the approximations of their original intensities by (17). We plot in Fig. 4(a)-(c) the relationship between the two, where 300 rain pixels are selected randomly in each of the R, G, and B channels. It can be seen from Fig. 4 that the observed intensities of rain pixels and the approximations of their original intensities (before being affected by rain) indeed exhibit a good linear relationship in each of the three color channels.

In the meantime, we find that a few points deviate from the linear line in Fig. 4(a)-(c); they are the mis-detected non-rain pixels. In order to verify this, we select true rain pixels manually, repeat the same statistics, and show the results in Fig. 4(a1)-(c1). We can see that most isolated points have been eliminated. Because those isolated points contribute only a small percentage, their influence on the training of the linear models is within a tolerable range. To further verify our detection method visually, we multiply the original rain image by the binary location map and show the result in Fig. 5(a). It can be seen that no rain streaks are missed by the detection. We would like to point out that very similar results have also been found on many other rain images.

By the optimization in Algorithm 1, we train the linear model and obtain the rain-removed result shown in Fig. 5(b). In order to verify the linear model further, we utilize the trained linear model to add rain streaks back onto the rain-removed image in Fig. 5(b) and obtain the image in Fig. 5(c). We can see that this re-rendered rain image is very similar to the original rain image in Fig. 3(a). In this particular example, the trained values of α stay close to 1 and vary only within a small range, while the values of β remain small. For other rain images, we obtain similar ranges of α and β, which is consistent with our previous analysis.

In our work, the parameters of our algorithm, including the detection increment Δ, the threshold on the eigen-color magnitude, the window sizes, and the regularization parameter λ, are set to fixed empirical values. Note that the input rain image has been normalized into the range [0, 1].

Finally, we present an analysis of the intensities of mis-detected non-rain details in the rain-removed results. The simulated diagram is shown in Fig. 5(d), where the observed intensity of a mis-detected non-rain pixel is already rain-free. According to the statistics in Fig. 4, mis-detected non-rain pixels often have high intensities and are therefore located above the linear line drawn by the trained linear model. When α of the model is close to 1, the recovered intensity obtained by (22) would be slightly lower than the observed one. Hence, the intensities of mis-detected non-rain pixels will usually be reduced a little in the rain-removed results. As we analyzed previously, the non-rain pixels that get mis-detected are those with high intensities and nearly equal color channels. After applying our linear model, their color channels are still approximately equal to each other, except for the slightly reduced intensities. This leads to a similar color appearance in the final result, so the image details are maintained well even when mis-detection exists.

Fig. 8: Some rain-removed results: (a) original light rain images, (b) results by Chen et al. [8]; (c) results of Li et al. [21], (d) results of Luo et al. [20], (e) results of Zhang et al. [13], (f) results of Fu et al. [9] and (g) our results.
Fig. 9: (a) original heavy rain image; (b) the same image after down-sizing. We can see that the intensity and size of rain streaks are reduced in the resized image.

V-B Performance evaluation

In this subsection, we evaluate the performance of our proposed rain-removal algorithm. We conduct both objective and subjective evaluations through a comparison with several state-of-the-art works. Specifically, three recent works based on traditional methods are selected: the work of Li et al. in 2016, in which some patch-based priors of both the background and the rain are used to separate a rain image into a rain layer and a de-rained layer [21]; the work of Luo et al. in 2015, which utilizes a nonlinear generative model to remove rain streaks from single images [20]; and the work of Chen et al. in 2014, which removes rain streaks by sparse coding [8]. For deep learning based rain-removal works, we select the work [9] by Fu et al. for comparison. This work utilizes the well-known ResNet architecture, which has shown its powerful ability in dealing with various computer vision tasks.

Complexity analysis.  We implement the selected methods on an Intel(R) Xeon(R) CPU E5-2643 v2 @ 3.5 GHz (2 processors) with 64 GB RAM and use a set of color images to test the time consumption. The average time consumed by our algorithm is 7.68 seconds, which is spent mainly on the detection step, the approximation step, and the optimization step; the small remaining portion is consumed by other intermediate steps. The run times of the works [8], [21], and [20] are listed in Table I. Apparently, our algorithm provides a significant speed-up (of several times to more than one order of magnitude).

The consumed time varies with the size of the rain image. In our paper, we simplify the rain imaging into a linear model and thereby largely reduce the computational complexity. Suppose that N is the number of pixels in a given rain image, N_1 is the number of candidate rain pixels after the first detection, and N_2 is the number of detected rain pixels after the revision by the eigen color. Then the complexities of the first detection, the revision of the candidate rain pixels, and the approximation of the rain pixels are proportional to N, N_1, and N_2, respectively (each with a constant factor determined by the window size). For the optimization of the linear model, assume that n_i is the number of rain pixels in the window centered at the i-th detected rain pixel; the complexity of the optimization is then proportional to Σ_i n_i.

Objective assessment.  In order to assess different methods quantitatively, we synthesize rain images by the screen blend model [20] (the rendering of rain images follows [20]) and calculate PSNR/SSIM [39] as the objective indices. The PSNR/SSIM values of ten synthesized rain images processed by the traditional methods are shown in Table II, and several ground truths, the corresponding rendered rain images, and their rain-removed results are shown in Fig. 6. In order to facilitate the comparison, we list the PSNR/SSIM values at the top left of each rain-removed image in Fig. 6.
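For reference, the two objective indices can be computed with scikit-image as follows; the file paths are placeholders, and the channel_axis argument requires scikit-image 0.19 or later (older versions use multichannel=True instead).

```python
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = img_as_float(io.imread("ground_truth.png"))      # placeholder path
derained = img_as_float(io.imread("derained.png"))    # placeholder path

psnr = peak_signal_noise_ratio(gt, derained, data_range=1.0)
# SSIM over the color image, treating the last axis as the channel axis
ssim = structural_similarity(gt, derained, data_range=1.0, channel_axis=-1)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```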

Overall, the method by Li et al. [21] provides lower PSNR values. In particular, this method loses a lot of useful information for images that possess many details (e.g., the second row in Fig. 8). Hence, the resulting SSIM values are much lower. In contrast, the method by Luo et al. [20] can remove light rain streaks and blurs the relatively heavy rain streaks. Therefore, it leads to high SSIM values. However, the PSNR values obtained by this method are still much lower. The method by Chen et al. [8] also loses image details, thus leading to lower SSIM values. However, the loss is not as severe as with the method by Li et al., so its PSNR values remain comparatively high. It is clear that our algorithm produces better PSNR/SSIM values for a large majority of test images compared with the works that use traditional methods. The comparisons of PSNR and SSIM with the deep learning methods [9] and [13] are shown in Fig. 7. We can see from these results that our method obtains PSNR values comparable to those of [9] and [13], while the SSIM values of the deep detail network [9] are consistently higher than those of DID-MDN and our method for this group of images. The networks of the deep learning methods are trained on thousands of rendered rain images and the corresponding ground truths. Hence, it is not surprising that their PSNR/SSIM values are higher.

Fig. 10: Some rain-removed results on heavy rain images: (a) original heavy rain images, (b) de-hazed results; (c) results by Chen et al. [8]; (d) results of Li et al. [21], (e) results of Luo et al. [20], (f) results of Zhang et al. [13], (g) results of Fu et al. [9] and (h) our results.

User study.  To conduct a visual (subjective) evaluation on the performances of different traditional methods, 20 viewers are invited to evaluate the visual quality of different methods in terms of the following three aspects:

  • less rain residual,

  • the maintenance of image details,

  • overall perception.

In the evaluation, 20 groups of results are selected and every group involves the results of Chen et al. [8], Li et al. [21], Luo et al. [20], and our method. To ensure fairness, the results in each group are arranged randomly. For each group, the viewers are asked to select only the one result that they like most. The evaluation result is shown in Table III. It is clear that our rain removal results are favored by a majority of viewers (69.80%).

Real rain images.  Taking practical utility into consideration, we implement our algorithm on several real rain images and compare with the selected works. The results are shown in Fig. 8. The work by Li et al. can remove rain streaks completely, but loses many image details, especially for images like the one in the second row. This is because the patch-based priors cannot separate small image details from rain streaks well. Because the HOG descriptor used in the work by Chen et al. cannot identify small image details either, this work is not suitable for rain images with small details (e.g., the second and third rows) - edges in the rain-removed images are blurred. Besides, when the intensities of rain become higher (e.g., the fifth row), rain streaks cannot be removed well by this work. This is because the guided filter used in this work cannot filter out all relatively bright rain streaks. For light rain streaks (e.g., the second row), the work by Luo et al. removes rain streaks well and offers a good visual quality. Once rain streaks become relatively stronger, its rain-removal performance decreases severely. Thanks to the detection of rain and the reasonable linear model of rain imaging, the visual quality of our rain-removed results is better than that of the selected traditional works. We can also see from Fig. 8 that, for the majority of light real rain images, our linear model obtains rain-removed results comparable to the deep learning works [9] and [13].

Fig. 11: From top to bottom: rain images, results by the hierarchical approach, results by the linear model.
Fig. 12: (a) The PSNR values of two methods. (b) The SSIM values of two methods.

V-C Challenges posed by heavy rain images

In order to further verify our linear model, we consider the challenge posed by heavy rain images. In this scenario, fog often appears in the image, so we implement a de-hazing preprocessing before removing rain streaks.

There are several excellent algorithms for the de-hazing task. In [40], the authors learnt the mapping between hazy images and their corresponding transmission maps by a multi-scale deep neural network, and the resulting performance outperforms many previous de-hazing works (e.g., the dark channel method [41] by He et al.). Hence, [40] is selected in our work.

It is necessary to point out that the rain-removal methods presented in [8] and [20] are very memory-consuming, which may lead to memory overflow even on a highly-configured computer. Hence, a down-sizing step is applied in [8] and [20] before removing rain, and the output image is up-sized back afterwards. One example is shown in Fig. 9, from which we can see that the intensity and size of rain streaks are reduced. This would make the removal of rain streaks much easier.

In order to make the performance comparison among different algorithms fair and accurate, we need to implement them on the same rain image. To this end, we crop a patch from each original rain image. Then, the rain-removing methods in [8] and [20] can be implemented without resizing.

Some comparison results are presented in Fig. 10. Several observations can be made from these results. (1) The method of [21] cannot deal with heavy rain images well, and the performance of [20] is also sensitive to the scale of rain streaks. (2) The method of [8] removes the majority of rain streaks and produces better results as compared to the above two methods. (3) Overall, our linear model produces the best results among the selected traditional methods, in terms of both image-detail preservation and rain-streak removal. Besides, our linear model has a very low computational complexity and is not memory-consuming. (4) The work by Fu et al. cannot remove the very heavy rain in the image of the first row. The work by Zhang et al. does a good job in removing heavy rain streaks. Our method also removes heavy rain streaks and makes the image details clearer. For the other two images, these three methods obtain comparable rain-removed results. Although deep learning methods have made great advances in many fields, they still cannot obtain very good results for some images whose patterns are not included in the training set. One advantage of the traditional methods is that they usually produce relatively stable results.

V-D Comparisons with our previous hierarchical approach

In [26], we developed a rain/snow removal algorithm based on a hierarchical approach and it also needs to detect rain/snow. We make some comparisons between these two works in this subsection.

In this work, we use the rain/snow detection method in [26] to detect rain streaks initially. As we mentioned in [26], this is an over-detection method and some non-rain details will be mis-detected as rain streaks. In [26], the lost information is complemented by a three-layer extraction of non-rain/snow details. In this work, we improve the rain detection with the eigen color property, which corrects many mis-detected non-rain details.

More specifically, in [26], an over-complete dictionary is learnt first and we identify rain/snow dictionary atoms by the characteristics of rain/snow. Rain/snow and non-rain/snow components are reconstructed by sparse coding, and the non-rain/snow component forms the first layer of the non-rain/snow details. Then, by combining the detection of rain/snow and the guided filter, we extract the second layer of non-rain/snow details from the rain/snow component obtained by the sparse reconstruction. Finally, in order to enhance image contrast, the third layer of non-rain/snow details is extracted by the developed sensitivity of variance across color channels (SVCC) descriptor. After adding the low-frequency part and the three layers of image details together, we obtain the final rain/snow-removed results. In this work, by contrast, we derive a linear imaging model of rain and remove rain by training this linear model.

Because of the dictionary learning used in the hierarchical approach [26], its time consumption is much larger than that of the linear model. When tested on the same images, the hierarchical approach consumes considerably more time on average, while our linear model only needs 7.68 seconds as mentioned earlier. In Fig. 11, we show some rain removal results of these two methods. For light to medium rain streaks (the first five images), the two methods produce comparable results. However, for rain images that contain many small image details (e.g., the first and fourth ones), the linear model maintains more image details and keeps a better structural similarity with the original images. When encountering heavy rain images (the last three images), the hierarchical approach removes more rain streaks, but the linear model preserves more image details.

Some objective comparisons between these two methods are evaluated in terms of PSNR/SSIM, as listed in Fig. 12. It can be seen that the linear model obtains slightly better PSNR/SSIM values for most images. This is mainly because our linear model does not change the non-rain areas of an image and describes the imaging of rain more accurately.

V-E Limitations and future works

Our linear model has shown good rain removal performance in several important aspects and possesses good robustness, both on common rain images and on challenging heavy rain images. Like most algorithms, our method still has its own limitations. Rain streaks are over-detected in our algorithm: a few non-rain details that have intensity and color similar to rain streaks can be mistaken for rain. Though these mis-detections are only a small portion, the performance of our algorithm would become better if we could make the detection more accurate. In future work, we will try to improve the detection of rain streaks by deep learning methods. Besides, we will try to combine our linear model with deep learning to handle heavy rain. We believe this will make a great difference if it can be realized.

VI Conclusions

In this paper, we derive a simple linear model to describe the physical principle of imaging rain pixels. In order to remove rain streaks from a rain image, we first detect rain streaks by two characteristics of rain streaks. Once the binary location map of rain pixels is obtained, the original intensity of each rain pixel is approximated by a weighted average of its neighboring non-rain pixels. For every rain pixel, we train the parameters involved in the linear model. Once the parameters are determined for a rain pixel, its rain-removed intensity can be calculated by plugging the observed intensity of the rain pixel into the model. Subjective and objective evaluations demonstrate that our algorithm outperforms several state-of-the-art traditional methods for rain removal. Compared with deep learning based methods, our linear model obtains comparable rain-removed results for light rain images as well as for the majority of heavy rain images. For some very heavy rain images, our model even outperforms the deep learning based algorithms. Moreover, our algorithm offers a significant speed-up of several times to more than one order of magnitude compared with the selected traditional methods.

References

  • [1] K. Garg and S. K. Nayar, “When does a camera see rain?,” IEEE International Conference on Computer Vision (ICCV-2005), pp. 1067-1074, Beijing, China, Oct., 2005.
  • [2] J. S. Marshall and W. McK. Palmer, “The distribution of raindrops with size,” Journal of the Atmospheric Sciences, vol. 5, no. 4, pp. 165-166, 1948.
  • [3] S. K. Nayar and S. G. Narasimhan, “Vision in bad weather,” IEEE International Conference on Computer Vision (ICCV-1999), vol. 2, pp. 820-827, Kerkyra, Greece, Sep. 20-27, 1999.
  • [4] K. Garg and S. K. Nayar, “Detection and removal of rain from videos,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2004), pp. 528-535, Washington DC, USA, June 27-July 2, 2004.
  • [5] J. Bossu, N. Hautiere, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” International Journal of Computer Vision, vol. 93, no. 3, pp. 348-367, July 2011.
  • [6] Y. H. Fu, L. W. Kang, C. W. Lin, and C. T. Hsu, “Single-frame-based rain removal via image decomposition,” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-2011), pp. 1453-1456, Prague, Czech Republic, May 22-27, 2011.
  • [7] L. W. Kang, C. W. Lin, and Y. H. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Transactions on Image Processing, vol. 21, no. 4, pp. 1742-1755, April 2012.
  • [8] D. Y. Chen, C. C. Chen, and L. W. Kang, “Visual depth guided color image rain streaks removal using sparse coding,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 8, pp. 1430-1455, Aug. 2014.
  • [9] X. Fu, J Huang, D. Zeng, Y. Huang, X. Ding and J. Paisley, “Removing rain from single images via a deep detail network,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2017), pp. 1715-1723, Honolulu, HI, USA, July 21-26, 2017.
  • [10] W. Yang, R Tan, J. Feng, J. Liu, Z. Guo and S. Yan, “Deep joint rain detection and removal from a single image,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2017), pp. 1685-1694, Honolulu, HI, USA, July 21-26, 2017.
  • [11] H. Zhang, V. Sindagi and V. Patel, “Image de-raining using a conditional generative adversarial network,” arXiv preprint arXiv:1701.05957, 2017.
  • [12] X. Fu, J. Huang, X. Ding, Y. Liao and J. Paisley, “Clearing the skies: a deep network architecture for single-image rain removal ,” IEEE Transactions on Image Processing, vol. 26, no. 6, pp. 2944-2956, June 2017.
  • [13] H. Zhang and V. Patel, “Density-aware single image de-raining using a multi-stream dense network,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2018), 2018.
  • [14] D. A. Huang, L. W. Kang, M. C. Yang, C. W. Lin, and Y. C. F. Wang, “Context-aware single image rain removal,” IEEE International Conference on Multimedia and Expo (ICME-2012), pp. 164-169, Melbourne, Australia, July 9-13, 2012.
  • [15] D. A. Huang, L. W. Kang, Y. C. F. Wang, and C. W. Lin, “Self-learning based image decomposition with applications to single image denoising,” IEEE Transactions on Multimedia, vol. 16, no. 1, pp. 83-93, January 2014.
  • [16] J. Xu, W. Zhao, P. Liu, and X. Tang, “An improved guidance image based method to remove rain and snow in a single image,” Computer and Information Science, vol. 5, no. 3, pp. 49-55, May 2012.
  • [17] J. H. Kim, C. Lee, J. Y. Sim, and C. S. Kim, “Single-image deraining using an adaptive nonlocal means filter,” IEEE International Conference on Image Processing (ICIP-2013), Melbourne, Australia, Sep. 15-18, 2013.
  • [18] Y. L. Chen and C. T. Hsu, ”A generalized low-rank appearance model for spatio-temporally correlated rain streaks,” IEEE International Conference on Computer Vision (ICCV-2013), pp. 1968-1975, Sydney, Australia, Dec. 1-8, 2013.
  • [19] X. H. Ding, L. Q. Chen, X. H.  Zheng, Y. Huang, and D. L. Zeng, “Single image rain and snow removal via guided l0 smoothing filter,” Multimedia Tools and Applications, vol. 24, no. 8, pp. 1-16, 2014.
  • [20] Y. Luo, Y. Xu, and H. Ji, “Removing rain from a single image via discriminative sparse coding,” IEEE International Conference on Computer Vision (ICCV-2015), pp. 3397-3405, Santiago, Chile, Dec. 7-13, 2015.
  • [21] Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2016), pp. 2736-2744, Las Vegas, Nevada, USA, June 26-July 1, 2016.
  • [22] M. Aharon, M. Elad, and A. M. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4311-4322, Nov. 2006.
  • [23] J. Mairal, F. Bach, J. Ponce, and G. Sapiro, “Online learning for matrix factorization and sparse coding,” Journal of Machine Learning Research, vol. 11, pp. 19-60, Jan. 2010.
  • [24] L. Tsai, J. W. Hsieh, C. H. Chuang, Y. J. Tseng, K. Fan, and C. C. Lee, “Road sign detection using eigen colour,” IET Computer Vision, vol. 2, no. 3, pp. 164-177, Sep., 2008.
  • [25] Y. L. Wang, C. Chen, S. Y. Zhu, and B. Zeng, “A framework of single-image deraining method based on analysis of rain characteristics,” IEEE International Conference on Image Processing (ICIP-2016), pp. 4087-4091, Phoenix, USA, Sep., 2016.
  • [26] Y. L. Wang, S. C. Liu, C. Chen, and B. Zeng, “A hierarchical approach for rain or snow removing in a single color image,” IEEE Transactions on Image Processing, vol. 26, no. 8, pp. 3936-3950, May 2017.
  • [27] K. Garg and S. K. Nayar, “Photorealistic rendering of rain streaks,” ACM Transactions on Graphics, vol. 25, no. 3, pp. 996-1002, July 2006.
  • [28] K. Garg and S. K. Nayar, “Vision and rain,” International Journal of Computer Vision, vol. 75, no. 1, pp. 3-27, 2007.
  • [29] X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, “Rain removal in video by combining temporal and chromatic properties,” IEEE International Conference on Multimedia and Expo (ICME-2006), pp. 461-464, Toronto, Ontario, Canada, July 9-12, 2006.
  • [30] N. Brewer and N. Liu, “Using the shape characteristics of rain to identify and remove rain from video,” Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition, vol. 5342, pp. 451-458, Orlando, USA, Dec. 2008.
  • [31] P. Barnum, T. Kanade, and S. Narasimhan, “Spatio-temporal frequency analysis for removing rain and snow from videos,” International Workshop on Photometric Analysis For Computer Vision (PACV-2007), Rio de Janeiro, Brazil, Oct. 2007.
  • [32] P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” International Journal of Computer Vision, vol. 86, no. 2, pp. 256-274, 2010.
  • [33] M. Roser and A. Geiger, “Video-based raindrop detection for improved image registration,” IEEE International Conference on Computer Vision Workshops (ICCV Workshops 2009), pp. 570-577, Kyoto, Japan, Sept. 29-Oct. 2, 2009.
  • [34] J. C. Halimeh and M. Roser, “Raindrop detection on car windshields using geometric-photometric environment construction and intensity-based correlation,” Intelligent Vehicles Symposium, 2009, Xi’an, China, June 3-5, 2009.
  • [35] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2005), vol. 1, pp. 886-893, San Diego, CA, USA, June 20-25, 2005.
  • [36] K. He, J. Sun, and X. Tang, “Guided image filtering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, June 2013.
  • [37] H. Takeda, S. Farsiu, and P. Milanfar, “Kernel regression for image processing and reconstruction,” IEEE Transactions on Image Processing, vol. 16, no. 2, pp. 349-366, Feb. 2007.
  • [38] A. Buades, B. Coll, and J. M. Morel, “A non-local algorithm for image denoising,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR-2005), pp. 60-65, San Diego, CA, USA, June, 2005.
  • [39] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004.
  • [40] W. Ren, S. Liu, H. Zhang, J. S. Pan, X. C. Cao, and M. H. Yang, “Single Image Dehazing via Multi-Scale Convolutional Neural Networks,” European Conference on Computer Vision (ECCV-2016), pp. 154-169, Amsterdam, The Netherlands, Oct. 11-14, 2016.
  • [41] K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 12, pp. 2341-2353, Dec. 2011.