Images and videos captured by outdoor vision systems are often affected by rain. As a complicated atmospheric process, rain can cause different types of visibility degradation: nearby raindrops/streaks tend to obstruct or distort background scene contents, while distant rain streaks generate atmospheric veiling effects, like mist or fog, that blur the image contents [14, 15]. Rain removal has thus become a necessary preprocessing step for subsequent tasks, such as object detection [1, 2], tracking, segmentation and recognition, scene analysis, person re-identification, and event detection, to further enhance their performance. Removing rain streaks from videos and images is therefore an important research topic that has recently attracted much attention in the field of computer vision and pattern recognition [8, 29, 9, 10, 12, 11, 13].
In recent years, various methods have been proposed for rain removal from both videos and single images [45, 43, 42, 17, 64, 66]. Comparatively, removing rain from an individual image is evidently more challenging than from a video composed of a sequence of frames, due to the lack of beneficial temporal information in the former case [16, 46]. The methodologies designed for the two cases thus differ significantly. Yet for both problems, conventional methods mainly adopt a model-driven methodology, focusing on encoding the physical properties of rain and prior knowledge of background scenes into an optimization problem and designing rational algorithms to solve it. More recent methods instead follow a data-driven manner, designing specific network architectures and pre-collecting rainy/clean image pairs to learn network parameters that realize complex rain removal functions [18, 19, 68]. Most of these methods target certain insightful aspects of the rain removal problem and have their own suitability and superiority on specific occasions.
Despite the many methods proposed for rain removal from both videos and single images, to the best of our knowledge there is still no comprehensive survey that summarizes and categorizes current developments along this research line. In particular, there is no easily usable source providing an off-the-shelf platform for general users to obtain the source codes of current methods for easy performance comparison and capability evaluation. Such a resource would be very meaningful for further pushing the frontier of this research topic, for facilitating easy performance reproduction of previous algorithms, and for discovering intrinsic problems existing in current methods.
To this end, in this study we aim to present a comprehensive review of current rain removal methods for videos and single images, as well as to evaluate and analyze the intrinsic capabilities, especially the generalization, of representative state-of-the-art methods. Our contributions can be mainly summarized as follows:
Firstly, we comprehensively introduce the main ideas of current rain removal methods for both videos and single images. Specifically, we summarize the physical properties of rain commonly used for rain modeling in previous research. For video and single image rain removal methods developed in both the conventional model-driven and the latest data-driven manners, we elaborately categorize them into several hierarchical branches, as shown in Fig. 1, and introduce the main methodology and representative methods of each branch.
Secondly, we provide a comprehensive performance comparison of representative rain removal methods and evaluate their respective capacities, especially generalization capability, both visually and quantitatively, based on typical synthetic and real datasets containing diverse rain configurations. The implemented deraining methods, including 7 for video and 10 for single image, cover recent state-of-the-art model-driven and data-driven rain removal algorithms.
Most importantly, in this study we release a comprehensive repository to facilitate easy usage and performance reproduction/comparison of current rain removal methods for general users. In particular, this repository includes direct links to 74 rain removal papers, source codes of 9 methods for video deraining and 20 for single image deraining, 19 related project pages, 6 synthetic datasets and 4 real ones, and 4 commonly used image quality metrics.
The rest of the paper is organized as follows. Section II surveys the main contents of recent literature on rain removal from videos and single images. Comprehensive experiments are then presented in Section III for performance evaluation. Section IV concludes the paper and lists some limitations and research issues worthy of further investigation in this direction.
II. Review of Current Rain Removal Methods
In this section, we first introduce some physical properties of rain, which constitute the modeling foundation of most rain removal methods, and then review the deraining methods for video and single image, respectively, according to the categorization as displayed in Fig. 1.
II-A Physical Properties of Rain
A falling raindrop undergoes rapid shape distortions caused by many factors, such as surface tension, hydrostatic pressure, ambient illumination, and aerodynamic pressure [20, 28]. These distortions appear in the form of rain streaks with different brightness and directions, and distort background objects/scenes in videos and images. In the following, we introduce some intrinsic properties of rain as demonstrated in a video or a single image, which represent the typical clues for optimization or network modeling when constructing a rain removal method.
II-A1 Geometric Property
Beard and Chuang described the shape of a small raindrop as a (distorted) sphere, whose polar radius along the direction of the rainfall is determined by the radius of the undistorted sphere, a shape coefficient that depends on the raindrop radius, and the polar angle of elevation.
As a raindrop falls, it attains a constant velocity, called the terminal velocity. By fitting a large amount of experimental data with least squares, Foote and du Toit obtained a relationship between the terminal velocity (m/s) of a raindrop and its diameter (mm). The relation also involves the air density at the location of the raindrop, and its coefficients were obtained under 1013 mb atmospheric conditions. Although a strong wind tends to change the rain orientation, the direction of rain streaks captured within the limited range of a video frame or an image is almost consistent.
II-A2 Brightness Property
Garg and Nayar pointed out that raindrops can be viewed as optical lenses that refract and reflect light, so that when a raindrop passes through a pixel, the imaged intensity is brighter than that of the background. The imaging process was modeled in terms of the time during which a raindrop projects onto the pixel location, the exposure time of the camera, the irradiance caused by the raindrop, and the average irradiance of the background [29, 26].
II-A3 Chromatic Property
Zhang et al. further investigated the brightness property of rain and showed that the increase in the intensities of the R, G, and B channels depends on the background scene. Through empirical examples, they found that the fields of view (FOVs) of red, green, and blue light are all approximately equal. For ease of computation, the authors directly assumed that the mean changes in the three color components are roughly equivalent for pixels covered by raindrops, where the changes are measured between two consecutive frames.
II-A4 Spatial and Temporal Property
As raindrops are randomly distributed in space and move at high velocities, they often cause spatial and temporal intensity fluctuations in a video, and a pixel at a particular position is not covered by raindrops in every frame. Therefore, in a video of a stationary scene captured by a stationary camera, the intensity histogram of a pixel sometimes covered by rain exhibits two peaks: one for the background intensity distribution and the other for the rain intensity distribution. In contrast, the intensity histogram of a pixel never covered by rain throughout the entire video exhibits only one peak [9, 27].
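The two-peak behavior described above can be illustrated with a tiny simulation (all intensity values below are hypothetical, chosen only to make the two modes visible):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one pixel of a static scene over 500 frames: the background sits
# near intensity 60, and in roughly 20% of frames a raindrop brightens it.
n_frames = 500
background = rng.normal(60.0, 2.0, size=n_frames)
rain_mask = rng.random(n_frames) < 0.2
intensity = np.where(rain_mask, rng.normal(110.0, 4.0, size=n_frames), background)

# The histogram of such a rain-affected pixel is bimodal: one peak for the
# background distribution, one for the rain distribution.  A threshold
# between the two modes recovers the background intensity.
threshold = 85.0
estimated_background = intensity[intensity < threshold].mean()
```

Methods exploiting this property essentially estimate such a threshold (or mixture) per pixel to decide which frames show the true background.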
II-B Video Rain Removal Methods
Garg and Nayar [29, 22] made an early attempt at rain removal from videos, and proposed that by directly increasing the exposure time or reducing the depth of field of a camera, the effects of rain can be reduced or even removed without altering the appearance of the scene. However, this approach fails to deal with heavy rain and fast-moving objects close to the camera, and adjusting camera settings in this way can substantially degrade video quality.
In the past few years, more intrinsic properties of rain streaks have been explored and formulated into algorithms for rain removal from videos of static/dynamic scenes. These algorithms can be mainly divided into four categories: time domain based ones, frequency domain based ones, low rank and sparsity based ones, and deep learning based ones. The first three categories follow hand-crafted pipelines to model the rain context and can thus be seen as model-driven methodologies, whereas the last follows a data-driven manner in which features are automatically learned from pre-collected training data (rainy/clean frame pairs) [42, 19].
II-B1 Time domain based methods
Garg and Nayar first presented a comprehensive analysis of the visual effects of rain on an imaging system and then developed a rain detection and removal algorithm for videos, which utilized a space-time correlation model to capture the dynamics of rain and a physics-based motion blur model to explain the photometry of rain. Here the authors assumed that since raindrops fall at high velocity, each affects only a single frame; rain streaks can thus be removed by exploiting the difference between consecutive frames.
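The single-frame assumption suggests a simple baseline: if each pixel is rain-free in most frames, a temporal median over a short window recovers the background. A minimal sketch on toy data (an illustration of the assumption, not Garg and Nayar's actual algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 5-frame "video" of a static 8x8 scene.  Each pixel is corrupted by a
# bright rain streak in exactly one frame (a deterministic stand-in for the
# assumption that a falling drop affects only a single frame).
scene = rng.uniform(40.0, 80.0, size=(8, 8))
frames = np.repeat(scene[None], 5, axis=0)
phase = np.add.outer(np.arange(8), np.arange(8)) % 5
for t in range(5):
    frames[t][phase == t] += 100.0   # brightness property: rain brightens pixels

# Since every pixel is rain-free in 4 of the 5 frames, the temporal median
# recovers the background exactly.
derained = np.median(frames, axis=0)
```

Real videos break this idealization through scene and camera motion, which is exactly what the more sophisticated time domain methods below address.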
To further improve rain detection accuracy, Zhang et al. incorporated both temporal and chromatic properties of rain and utilized K-means clustering to identify the background and rain streaks in videos. The idea works well for light and heavy rain, as well as rain in and out of focus. However, the method often tends to blur images due to temporal averaging of the background. To alleviate this problem, Park et al. further proposed to estimate the intensity of pixels and then remove rain recursively with a Kalman filter, which performs well in videos with stationary backgrounds.
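Such recursive estimation can be sketched for one pixel with a scalar Kalman filter; the innovation gate that skips rain-contaminated measurements and all parameter values below are our own illustrative assumptions, not Park et al.'s exact scheme:

```python
import numpy as np

def kalman_pixel(measurements, q=1e-4, r=4.0, gate=20.0):
    """Recursively estimate a (nearly static) background pixel intensity.

    A hypothetical simplification of Kalman-filter-based deraining:
    measurements whose innovation exceeds `gate` are treated as
    rain-covered, so only the prediction step is applied for them.
    """
    x, p = measurements[0], 1.0
    for z in measurements[1:]:
        p = p + q                      # predict (static background model)
        if abs(z - x) > gate:          # innovation gating: likely a rain spike
            continue                   # skip the update, keep the prediction
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # correct the state with the measurement
        p = (1.0 - k) * p              # update the error covariance
    return x

rng = np.random.default_rng(2)
z = rng.normal(50.0, 1.5, size=300)
z[5::10] += 90.0                       # periodic rain spikes at this pixel
estimate = kalman_pixel(z)
```

The recursion needs only the previous estimate, which is why such filters suit streaming video with stationary backgrounds.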
Later, by introducing both optical and physical properties of rain streaks, Brewer et al. proposed to first identify rain-affected regions exhibiting a short-duration intensity spike, and then replace each rain-affected pixel with the average value of the corresponding pixels in consecutive frames. Naturally, the method is able to distinguish intensity changes caused by rain from those caused by scene motion. Yet it is not well suited to detecting heavy rain, where multiple rain streaks overlap and form undesirable shapes.
Zhao et al. used temporal and spatial properties of rain streaks to design a histogram model for rain detection and removal, which embedded a concise K-means clustering algorithm of low complexity. To handle both dynamic backgrounds and camera motion, Bossu et al. utilized a Gaussian mixture model to separate the foreground from the background, and detected rain with a histogram of orientations of streaks.
Inspired by Bayesian theory, Tripathi et al. relied on the temporal property of rain and proposed a probabilistic model for rain streak removal. Since the intensity variations of rain-affected and rain-free pixels differ in the symmetry of their waveforms, the authors used two statistical features (intensity fluctuation range and spread asymmetry) to distinguish rain from rain-free moving objects. As the method makes no assumption about the shape and size of raindrops, it is robust to varied rain conditions. To further reduce the number of consecutive frames used, the authors later turned to a spatiotemporal process, which trades slightly lower detection accuracy for better perceptual quality.
II-B2 Frequency domain based methods
Barnum et al. [34, 35] demonstrated a spatio-temporal frequency based method for globally detecting rain and snow with a physical and statistical model, in which a blurred Gaussian model approximates the blurring effects produced by raindrops and a frequency-domain filter reduces the visibility of raindrops/snow. The idea still works in videos with both scene and camera motion and can efficiently analyze repeated rain patterns. Nevertheless, the blurred Gaussian model cannot always cover rain streaks that are not sharp enough. Besides, the frequency-based detection often produces errors when the frequency components of rain are irregular.
II-B3 Low rank and sparsity based methods
In the recent decade, low rank and sparsity properties have been extensively studied for rain/snow removal from videos. Chen et al. first considered the similarity and repeatability of rain streaks and generalized a low-rank model from matrix to tensor structure to capture spatio-temporally correlated rain streaks. In the case of 2-dimensional images, the authors formulated rain streak estimation as a decomposition in which the Frobenius norm measures the discrepancy between the rainy image and the sum of the background scene and the rain layer, a patch map function extracts local patches, and a total variation (TV) regularization term discriminates natural image content from highly patterned rain streaks.
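The low-rank intuition can be sketched numerically: stacking frames of a static scene as columns yields a (nearly) low-rank matrix, and projecting the rainy observation onto its best rank-1 approximation pushes the sparse streaks into the residual. This is a crude illustration of the assumption only, not the paper's TV-regularized model, and all sizes and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# A static "video" whose frames are the columns of a rank-1 matrix,
# plus sparse bright rain streaks.
bg = np.outer(rng.uniform(40.0, 80.0, 60), np.ones(20))   # 60 pixels x 20 frames
rain = (rng.random(bg.shape) < 0.05) * 100.0
D = bg + rain

# Project onto the best rank-1 approximation (Eckart-Young) and treat the
# residual as the rain layer.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
B = s[0] * np.outer(U[:, 0], Vt[0])    # low-rank background estimate
R = D - B                              # residual: dominated by rain streaks
```

Because the background's dominant singular value far exceeds the spectral norm of the sparse streaks, the residual concentrates at the rain locations; the actual methods refine this with sparsity and TV regularization.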
To deal with highly dynamic scenes [4, 31, 33, 21], Chen et al. further designed an algorithm based on motion segmentation of the dynamic scene, which first utilized photometric and chromatic constraints for rain detection and then applied rain removal filters on pixels so that their dynamic properties as well as motion occlusion clues are incorporated. Spatial and temporal information is thus adaptively exploited during rain pixel recovery, although the method still does not consider camera jitter.
Later, Kim et al. proposed to subtract temporally warped frames from the current frame to obtain an initial rain map, and then decomposed it into two types of basis vectors (rain streaks and outliers) via a support vector machine (SVM). Next, by refining the rain map to exclude the outliers and executing low rank matrix completion, rain streaks could be removed. Obviously, the method needs extra supervised samples to train the SVM.
Considering heavy rain and dynamic scenes, Ren et al. divided rain streaks into sparse and dense layers and modeled them in a matrix decomposition framework. In the model, two terms denote the intensity fluctuations caused by sparse and dense rain streaks, respectively; operators perform foreground extraction from the video and block matching; a pseudo-matrix norm encourages group sparsity of the foreground; and several regularization parameters balance the terms. Besides, the detection of moving objects and sparse snowflakes/rain streaks was formulated as a multi-label Markov random field (MRF), and dense streaks were assumed to obey a Gaussian distribution.
Jiang et al. [40, 41] proposed a novel tensor based video rain streak removal approach by fully analyzing the discriminative intrinsic characteristics of rain streaks and clean videos. Specifically, rain streaks are sparse and smooth along the direction of the raindrops, while clean videos possess smoothness along the rain-perpendicular direction as well as global and local correlation along the time direction. Mathematically, the authors formulated these properties as regularization terms:
Here two unidirectional TV operators act along the rain direction and the perpendicular direction, respectively, and a third operator takes differences along the time direction. Using the alternating direction method of multipliers (ADMM), the authors obtained an approximate solution of (6).
Different from previous rain removal methods that formulate rain streaks as a deterministic message, Wei et al. first encoded the rain layer as a patch-based mixture of Gaussians (P-MoG). By integrating the spatio-temporal smoothness of moving objects and the low rank structure of the background scene, the authors proposed a concise P-MoG model for rain streak removal from an input rainy video:
Here each rain patch is modeled as a sample from a mixture of Gaussian distributions, each with its own mean and covariance matrix, and the mixing coefficients over the components sum to one. The background is modeled in a low-rank form, and a binary tensor with an MRF prior describes the moving objects. Considering the sparsity and continuity of moving objects in space and time, the authors employ sparsity and weighted 3-dimensional total variation (3DTV) penalties to regularize this tensor. Such a stochastic manner makes the model capable of adapting to a wider range of rain variations, instead of the specific rain configurations assumed by traditional methods.
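The core of this stochastic view, fitting a mixture of Gaussians to rain-affected intensities by expectation-maximization, can be sketched in one dimension. This is a heavily simplified, hypothetical stand-in for the patch-based model: single pixel intensities instead of patches, the component count fixed at two, and all numbers illustrative:

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM."""
    mu = np.array([x.min(), x.max()])      # deterministic, well-separated init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        logp = (-0.5 * (x[:, None] - mu) ** 2 / var
                - 0.5 * np.log(2.0 * np.pi * var) + np.log(pi))
        logp -= logp.max(axis=1, keepdims=True)
        gamma = np.exp(logp)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances.
        nk = gamma.sum(axis=0)
        pi = nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / nk
        var = (gamma * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

rng = np.random.default_rng(4)
# 80% background pixels around 60, 20% rain-brightened pixels around 130.
x = np.concatenate([rng.normal(60.0, 3.0, 800), rng.normal(130.0, 5.0, 200)])
pi, mu, var = em_gmm_1d(x)
```

The recovered mixing weights and means separate rain from background without any deterministic streak template, which is the appeal of the MoG formulation.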
Motivated by this work, Li et al. considered two intrinsic characteristics of rain streaks in videos, i.e., repetitive local patterns sparsely scattered over different positions of the video and multiscale configurations caused by their occurrence at different distances from the camera. The authors formulated this understanding as a multi-scale convolutional sparse coding (MS-CSC) model:
Here a set of sparse feature maps approximates rain streak positions, and a set of filters, distributed over several scales, depicts the repetitive local patterns of rain streaks. Similar to the P-MoG model, the authors additionally employ sparsity and TV penalties to regularize the feature maps and the smoothness of the moving object layer, respectively. Such an encoding manner makes the model interpretable and capable of properly extracting rain streaks from rainy videos.
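The generative view behind convolutional sparse coding, rain as a sum of convolutions between small streak-shaped filters and sparse feature maps, can be sketched as follows; the diagonal filters and spike densities are illustrative assumptions, not the learned MS-CSC dictionary:

```python
import numpy as np

def conv2_same(fmap, filt):
    """'Same'-size 2-D correlation, used to render one rain streak layer."""
    fh, fw = filt.shape
    pad = np.pad(fmap, ((fh // 2, fh // 2), (fw // 2, fw // 2)))
    out = np.zeros_like(fmap, dtype=float)
    for i in range(fh):
        for j in range(fw):
            out += filt[i, j] * pad[i:i + fmap.shape[0], j:j + fmap.shape[1]]
    return out

rng = np.random.default_rng(5)

# Hypothetical multi-scale dictionary: two diagonal streak filters of
# different sizes, mimicking streaks at different camera distances.
filters = [np.eye(3), np.eye(5)]

# Sparse feature maps encode where each streak pattern appears.
h, w = 24, 24
rain = np.zeros((h, w))
for filt in filters:
    fmap = (rng.random((h, w)) < 0.02) * 1.0
    rain += conv2_same(fmap, filt)

bg = rng.uniform(0.2, 0.6, (h, w))
rainy = bg + rain     # the CSC generative view: background + coded rain
```

Inference in MS-CSC runs this picture in reverse, estimating the sparse feature maps and filters from the rainy video.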
II-B4 Deep learning based methods
Very recently, deep learning based methods have also been investigated for the video rain removal task. For example, Chen et al. proposed a convolutional neural network (CNN) framework for video rain streak removal that can handle torrential rainfall with opaque streak occlusions. In this work, superpixels are utilized as the basic processing units for content alignment and occlusion removal in videos with highly complex and dynamic scenes.
By exploring the wealth of temporal redundancy in videos, Liu et al. built a hybrid rain model depicting both rain streaks and occlusions. For each time step of the video, the model combines the rainy frame, the background frame, and the rain streak frame through a rain reliance map and an alpha matting map, where the matting map takes effect over the rain occlusion region in which the light transmittance of raindrops is low.
Based on the model (9), the authors utilized a deep recurrent neural network (RNN) to design a joint recurrent rain removal and reconstruction network (J4R-Net) that seamlessly integrates rain degradation classification, spatial texture based rain removal, and temporal coherence based background detail reconstruction. To address deraining with dynamically detected video contexts, the authors chose a parallel technical route and further developed a dynamic routing residue recurrent network (D3R-Net), together with an effective basic component, i.e., spatial temporal residue learning, for video rain removal.
II-C Single Image Rain Removal Methods
In contrast to video based deraining methods with temporal redundancy to exploit, removing rain from an individual image is more challenging since less information is available. The algorithm design for single image rain removal has therefore drawn increasing research attention. Generally, existing single image rain removal methods can be divided into three categories: filter based ones, prior based ones, and deep learning based ones.
II-C1 Filter based methods
Xu et al. proposed a single image rain removal algorithm based on the guided filter. Specifically, using the chromatic property of rain streaks, the authors first obtained a coarse rain-free image (the guidance image) and then filtered the rainy image to obtain the derained result. For better visual quality, the authors further incorporated the brightness property of rain streaks to refine the guidance image.
Zheng et al. later presented a multiple guided filtering based single image rain/snow removal method. In this work, the rain-removed image is obtained by taking the minimum value of the rainy image and a coarse recovery image, the latter obtained by merging the low frequency part (LFP) of the rainy image with the high frequency part (HFP) of the rain-free image. To further improve rain removal performance, Ding et al. designed a guided smoothing filter to obtain the coarse rain-/snow-free image.
Considering that a typical rain streak has an elongated elliptical shape with a vertical orientation, Kim et al. proposed to detect rain streak regions by analyzing the rotation angle and aspect ratio of the elliptical kernel at each pixel, and then executed nonlocal means filtering on the detected regions by adaptively selecting nonlocal neighbor pixels and the corresponding weights.
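The guided filter underlying several of the methods above can be sketched compactly; the version below follows the standard local linear model with box-filter statistics (the radius and regularization values are illustrative defaults, not those of any particular paper):

```python
import numpy as np

def box_mean(img, r):
    """Mean over a (2r+1)x(2r+1) window, computed with integral images."""
    pad = np.pad(img, r, mode='edge')
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    k = 2 * r + 1
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / k ** 2

def guided_filter(I, p, r=2, eps=1e-2):
    """He-style guided filter: the output is locally a linear transform
    of the guidance image I, so edges of I are preserved while the
    filtering input p is smoothed."""
    mI, mp = box_mean(I, r), box_mean(p, r)
    cov_Ip = box_mean(I * p, r) - mI * mp
    var_I = box_mean(I * I, r) - mI * mI
    a = cov_Ip / (var_I + eps)     # local linear coefficients
    b = mp - a * mI
    return box_mean(a, r) * I + box_mean(b, r)

# Sanity check: filtering a flat image with itself leaves it unchanged.
flat = np.full((6, 6), 0.5)
out = guided_filter(flat, flat)
```

In the deraining pipelines above, a coarse rain-free estimate serves as the guidance image while the rainy image is the filtering input.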
II-C2 Prior based Methods
Prior based methods generally formulate single image deraining as a maximum a posteriori (MAP) estimation problem over the observed rainy image, the rain-free image, and the rain streaks. The posterior probability factorizes into a likelihood function and prior terms defined over the solution space. Generally, the MAP problem can be equivalently reformulated as the following energy minimization problem:
Here the first term is a fidelity term measuring the discrepancy between the input rainy image and the recovered image, while two regularization terms model image priors on the background and rain layers. Since single image rain removal is an ill-posed inverse problem, these priors play an important role in constraining the solution space and enforcing desired properties of the output.
Various methods have been proposed for designing the forms of the terms involved in (12). By using certain optimization algorithms, generally involving an iterative process, the recovered image can then be obtained. We introduce representative works along this line as follows.
Fu et al. utilized morphological component analysis (MCA) to formulate rain removal as an image decomposition problem. Specifically, a rainy image was divided into an LFP and an HFP with a bilateral filter, and the derained result was obtained by merging the LFP with a rain-free component extracted from the HFP via dictionary learning and sparse coding. For a more accurate HFP, Chen et al. exploited sparse representation and then separated rain streaks from the HFP using a hybrid feature set, including histogram of oriented gradients (HOG), depth of field, and Eigen color. Similarly, Kang et al. [54, 55] exploited HOG features of rain streaks to cluster dictionary atoms into rain and non-rain dictionaries.
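The LFP/HFP decomposition shared by these methods can be sketched with a plain mean filter standing in for the bilateral filter (purely illustrative):

```python
import numpy as np

def low_pass(img, r=2):
    """Crude mean-filter low-pass, a hypothetical stand-in for the
    bilateral filter used in the decomposition-based methods."""
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * r + 1) ** 2

rng = np.random.default_rng(6)
rainy = rng.uniform(0.0, 1.0, (16, 16))

lfp = low_pass(rainy)      # low-frequency part: mostly background structure
hfp = rainy - lfp          # high-frequency part: rain streaks + fine details
# A decomposition method would now split `hfp` into rain and detail
# components (e.g. via sparse coding over a learned dictionary) and merge
# the LFP with the rain-free component to produce the derained image.
```

The decomposition is exact by construction (LFP + HFP reproduces the input), so all the modeling effort goes into separating rain from detail inside the HFP.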
To remove rain and snow from a single image, Wang et al. designed a 3-layer hierarchical scheme. With a guided filter, the authors obtained an HFP consisting of rain/snow and image details, and then decomposed it into rain/snow-free parts and rain/snow-affected parts via dictionary learning and a three-way classification of dictionary atoms. Finally, with a sensitivity of variance of color image (SVCC) map and the combination of rain/snow detection and the guided filter, the useful image details could be extracted.
Sun et al. exploited the structural similarity of image bases for single image rain removal. By focusing on basis selection and incorporating a strategy of incremental dictionary learning, the method is unaffected by rain patterns and preserves image information well.
To finely separate the rain layer and the derained image layer, Luo et al. proposed a dictionary learning based single image rain removal method. The main idea is to sparsely approximate the patches of the two layers with highly discriminative codes over a learned dictionary with strong mutual exclusivity. The optimization problem was expressed as:
Here a linear operator maps each layer to an array of patches, sparsity constraints are imposed on each column of the two layers' sparse codes, and a weight vector balances the terms.
To remove more rain streaks while better preserving the background layer, Li et al. introduced GMM based patch priors to accommodate multiple orientations and scales of rain streaks, and the optimization problem takes the following form:
Here an operator extracts the patch around each pixel, and a gradient based term encodes the observation that natural images are largely piecewise smooth and their gradient fields are typically sparse.
For the progressive separation of rain streaks from background details, Zhu et al. modeled three regularization terms covering various aspects: integrating local and nonlocal sparsity via a centralized sparse representation, measuring the deviation of gradients from the estimated rain direction by analyzing gradient statistics, and measuring the visual similarity between image patches and rain patches to filter the rain layer. The authors solved the resulting model with a joint bi-layer optimization method.
Very recently, Gu et al. proposed a joint convolutional analysis and synthesis (JCAS) sparse representation model, in which large-scale image structures are approximated by analysis sparse representation (ASR) and fine-scale image textures are described by synthesis sparse representation (SSR). The single image layer separation is achieved by solving the following minimization problem:
Here two regularization parameters weight the analysis and synthesis prior terms, respectively. The SSR component is the sum of convolutions between the atoms of a convolutional synthesis dictionary and their coefficient maps, while the analysis prior characterizes the ASR component by regularizing the sparseness of its filter responses over a set of analysis filters. The complementary properties of ASR and SSR allow JCAS to effectively extract the image texture layer without oversmoothing the background layer.
Considering the challenge of establishing effective regularization priors and optimizing the objective function in (12), Mu et al. introduced an unrolling strategy to incorporate data-dependent network architectures into the established iterations, i.e., a learned bilevel layer prior method that jointly investigates the learnable feasibility and optimality of the rain streak removal problem. This is a beneficial attempt to integrate the model-driven and data-driven methodologies for the deraining task.
II-C3 Deep learning based methods
Eigen et al. first utilized a CNN to remove dirt and water droplets adhered to a glass window or camera lens. However, the method fails to handle relatively large/dense raindrops and dynamic rain streaks, and produces blurry outputs. To deal with a substantial presence of raindrops, Qian et al. designed an attentive generative network. The basic idea is to inject visual attention into both the generative and discriminative networks: the generative network focuses on raindrop regions and their surroundings, while the discriminative network assesses the local consistency of restored regions. The training loss combines terms from the generative network and the discriminative network.
To specifically deal with single image rain streak removal, Fu et al. first designed the CNN based DerainNet, which automatically learns the nonlinear mapping function between rainy and clean image details from data. The objective minimizes, over the network parameters, the mean squared error between the network output on each rainy detail layer and the corresponding clean detail layer. To improve restoration quality, the authors additionally introduced image processing domain knowledge.
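This training objective, minimizing mean squared error between predicted and clean detail layers over network parameters, can be sketched on a toy problem where a linear map stands in for the CNN; all sizes and values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy stand-in for the DerainNet objective: learn a mapping from rainy
# detail patches to clean detail patches by minimizing MSE over pairs.
X = rng.normal(size=(200, 16))             # rainy detail patches (flattened)
W_true = 0.3 * rng.normal(size=(16, 16))
Y = X @ W_true                             # clean detail targets of the toy task

W = np.zeros((16, 16))
lr = 0.01
for _ in range(500):
    grad = 2.0 * X.T @ (X @ W - Y) / len(X)   # gradient of the MSE loss
    W -= lr * grad

loss = np.mean((X @ W - Y) ** 2)
```

Replacing the linear map with a deep network and the toy pairs with synthesized rainy/clean detail layers recovers the shape of the actual training setup.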
Motivated by the great success of the deep residual network (ResNet), Fu et al. further proposed a deep detail network (DDN) to reduce the mapping range from input to output and thus make the learning process significantly easier. Fan et al. then proposed a residual-guided feature fusion network (ResGuideNet), in which a coarse-to-fine estimation of the negative residual is progressively obtained.
Instead of relying on an image decomposition framework like [16, 17], Zhang et al. proposed a conditional generative adversarial network (GAN) for single image deraining that incorporates quantitative, visual, and discriminative performance into the objective function. Since a single network may not learn all patterns in the training samples, the authors further presented a density-aware image deraining method using a multistream dense network (DID-MDN). By integrating a residual-aware classifier process, DID-MDN can adaptively determine the rain-density information (heavy/medium/light).
Recently, Yang et al. reformulated the atmospheric process of rain as a new model in which a binary map indicates the locations of individually visible rain streaks, the rain is decomposed into a set of streak layers (each with a consistent direction) up to a maximum number of layers, and global atmospheric light and atmospheric transmission account for the veiling effect.
The authors developed a multi-task architecture that successively learns the binary rain streak map, the appearance of rain streaks, and the clean background. By utilizing an RNN and a contextualized dilated network, the method can remove rain streaks and rain accumulation iteratively and progressively, even in the presence of heavy rain. For better deraining performance, the authors further proposed an enhanced version, JORDER-E, which includes an extra detail preserving step.
Similarly, Li et al. proposed a recurrent squeeze-and-excitation (SE) based context aggregation network (CAN) for single image rain removal, where the SE block assigns different alpha values to the various rain streak layers and the CAN acquires a large receptive field to better fit the rain removal task.
Existing deep learning methods usually treat the network as an encapsulated end-to-end mapping module, without delving into its rationality for more effective rain streak removal [67, 68]. Li et al. proposed a non-locally enhanced encoder-decoder network to efficiently learn increasingly abstract feature representations for more accurate rain streak removal while finely preserving image details.
As seen, the constructed deep network structures have become more and more complicated, making network design hardly reproducible or attainable for many beginners in this area. To alleviate this issue, Ren et al. presented a simple and effective progressive recurrent deraining network (PReNet) obtained by repeatedly unfolding a shallow ResNet with a recurrent layer.
A practical issue for data-driven single image rain removal methods is their reliance on synthetic rainy/clean image pairs, which cannot sufficiently cover the wide range of rain streak patterns found in real rainy images, such as rain shape, direction, and intensity. In addition, there are no public benchmarks for quantitative comparisons on real rainy images, which makes current evaluations less objective. To handle these problems, Wang et al. semi-automatically constructed a large-scale dataset of rainy/clean image pairs covering a wide range of natural rain scenes, and proposed a spatial attentive network (SPANet) to remove rain streaks in a local-to-global manner.
The main problem with recent data-driven single image rain removal methods is that they generally need to pre-collect sufficient supervised samples, which is time-consuming and cumbersome. Besides, most of these methods are trained on synthetic samples, making them less able to generalize well to real test samples. To alleviate these problems, Wei et al. adopted DDN as the backbone (the supervised part) and regularized the rain layer with a GMM to exploit unsupervised rainy images. In this semi-supervised manner, the method ameliorates both the hard-to-collect-training-sample and the overfitting-to-training-sample issues.
II-D A Comprehensive Repository for Rain Removal
To facilitate easy use and performance reproduction/comparison of current rain removal methods for general users, we build a repository for current research development of rain removal (https://github.com/hongwang01/Video-and-Single-Image-Deraining). Specifically, this repository includes direct links to 74 rain removal papers, source codes of 9 methods for video rain removal and 20 for single image rain removal, 19 related project pages, 6 synthetic datasets and 4 real ones, and 4 commonly used image quality metrics together with their computation codes: peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual information fidelity (VIF), and feature similarity (FSIM). State-of-the-art performance can thus be easily reproduced by general users. All our experiments were implemented using this repository.
III Experiments and Analysis
In this section, we compare the performance of different competing methods for rain removal from video and from a single image. The implementation environment is as follows: the operating system is Windows 10; the computation platforms are Matlab (R2018b), PyTorch (version 1.0.1), and Tensorflow (version 1.12.0), running on an Intel(R) Core(TM) i7-8700K CPU at 3.70 GHz with 32 GB RAM and two Nvidia GeForce GTX 1080Ti GPUs.
III-A Video Deraining Experiments
In this section, we evaluate the video deraining performance of recent state-of-the-art methods on synthetic and real benchmark datasets. These methods include Garg et al.'s (http://www.cs.columbia.edu/CAVE/projects/camera_rain/), based on space-time correlation; Kim et al.'s (http://mcl.korea.ac.kr/deraining/), with temporal correlation and low rank; Jiang et al.'s (code provided directly by the authors), with sparsity and smoothness; Ren et al.'s (http://vision.sia.cn/our%20team/RenWeihong-homepage/vision-renweihong%28English%29.html), with matrix decomposition; Wei et al.'s (http://gr.xjtu.edu.cn/web/dymeng/2), with P-MoG; Li et al.'s (https://github.com/MinghanLi/MS-CSC-Rain-Streak-Removal), with MS-CSC; and Liu et al.'s (https://github.com/flyywh/J4RNet-Deep-Video-Deraining-CVPR-2018), with deep learning.
III-A1 Synthetic Data
Here we utilize the dataset released by the authors. They chose two videos from the CDNET database, containing various moving objects and background scenes, and added different types of rain streaks on a black background to these videos, ranging from light drizzle to heavy rain storm and from vertical streaks to slanted lines. For synthetic data, since the rain-free groundtruth videos are available, we can compare all competing methods both visually and quantitatively. Four typical metrics for video have been employed: PSNR, SSIM, VIF, and FSIM.
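The additive synthesis protocol described above, i.e., drawing streaks on a black canvas and compositing them onto a clean frame as O = clip(B + R), can be sketched as follows; the streak count, length, slant, and intensity are illustrative values of our own choosing, not parameters from the dataset.

```python
import numpy as np

def add_synthetic_rain(frame, num_streaks=200, length=9, intensity=0.25, seed=0):
    """Draw a rain layer R of short, slightly slanted bright streaks on a
    black canvas and composite it onto clean frame B (values in [0, 1])."""
    rng = np.random.default_rng(seed)
    h, w = frame.shape[:2]
    rain = np.zeros((h, w), dtype=float)
    for _ in range(num_streaks):
        r, c = rng.integers(0, h), rng.integers(0, w)
        for t in range(length):          # near-vertical streak
            rr, cc = r + t, c + t // 3   # slight slant
            if rr < h and cc < w:
                rain[rr, cc] = intensity
    if frame.ndim == 3:                  # broadcast over color channels
        rain = rain[..., None]
    return np.clip(frame + rain, 0.0, 1.0), rain
```

Returning the rain layer alongside the composite mirrors how such datasets provide both the rainy frame and its groundtruth decomposition.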
Fig. 2 illustrates the deraining performance of all compared methods on videos with usual rain. As displayed in the first row, the rain removal results show that Garg et al.'s, Kim et al.'s, Jiang et al.'s, and Liu et al.'s methods do not finely detect rain streaks, and Ren et al.'s method improperly removes moving objects along with the rain streaks. The corresponding rain layers in the second row show that, apart from Li et al.'s method, which preserves texture details well, the rain layers extracted by the other methods contain different degrees of background information.
We also evaluate all competing methods under the heavy rain scenario shown in Fig. 3. The rain removal results displayed in the first row indicate that Garg et al.'s, Kim et al.'s, Jiang et al.'s, and Liu et al.'s methods do not well detect heavy rain streaks, and Ren et al.'s method, in particular, does not properly handle moving objects. In comparison with Wei et al.'s method, which treats rain streaks as an aggregation of noise rather than as natural streamlines, Li et al.'s method presents natural rain patterns and has a better visual effect.
[Table: quantitative comparison of all competing methods on the synthetic videos of Fig. 2 and Fig. 3.]
III-A2 Real-World Data
We then show the rain streak removal results on real videos. As there is no groundtruth in this case, we only provide visual comparisons.
Fig. 4 presents the deraining results on a video with complex moving objects, including walking pedestrians and moving vehicles, captured by a street surveillance system. It can be seen that Garg et al.'s, Kim et al.'s, Jiang et al.'s, and Wei et al.'s methods cause different degrees of artifacts at the location of the moving car. Comparatively, Li et al.'s method performs relatively well in this complicated scenario.
Fig. 5 displays the rain removal performance on a real video captured at night. Comparatively, Wei et al.'s and Li et al.'s methods better detect all the rain streaks.
III-B Single Image Deraining Experiments
In this section, we evaluate the single image deraining performance of recent state-of-the-art methods, including typical model-driven methods: Luo et al.'s (https://github.com/hongwang01/Video-and-Single-Image-Deraining; denoted as DSC), Li et al.'s (http://yu-li.github.io/; denoted as GMM), and Gu et al.'s (https://sites.google.com/site/shuhanggu/home; denoted as JCAS); representative data-driven methods: Fu et al.'s (https://xueyangfu.github.io/projects/tip2017.html; denoted as Clear), Fu et al.'s (https://xueyangfu.github.io/projects/cvpr2017.html; denoted as DDN), Li et al.'s (https://github.com/XiaLiPKU/RESCAN; denoted as RESCAN), Ren et al.'s (https://github.com/csdwren/PReNet; denoted as PReNet), Wang et al.'s (https://stevewongv.github.io/derain-project.html; denoted as SPANet), and Yang et al.'s (https://github.com/flyywh/JORDER-E-Deep-Image-Deraining-TPAMI-2019-Journal; denoted as JORDER_E); and the semi-supervised method of Wei et al. (https://github.com/wwzjer/Semi-supervised-IRR; denoted as SIRR).
III-B1 Synthetic Data
For synthetic data, we utilize four frequently used benchmark datasets: Rain1400 synthesized by Fu et al., Rain12 provided by Li et al., and Rain100L and Rain100H provided by Yang et al. Specifically, Rain1400 includes 14000 rainy images synthesized from 1000 clean images with 14 different rain streak orientations and magnitudes. Among these, 900 clean images (12600 rainy images) are chosen for training and 100 clean images (1400 rainy images) are selected as testing samples. Rain12 consists of 12 rainy/clean image pairs. Rain100L is selected from BSD200 with only one type of rain streaks, and consists of 200 image pairs for training and 100 image pairs for testing. Compared with Rain100L, Rain100H, with five types of streak directions, is more challenging; it contains 1800 image pairs for training and 100 image pairs for testing. As for SIRR, we use the 147 real rainy images released by Wei et al. as unsupervised training data. Since Rain12 has few samples, we directly adopt the model trained on Rain100L to evaluate on Rain12.
As the groundtruth in the synthetic datasets is available, we evaluate all competing methods with two commonly used metrics, i.e., PSNR and SSIM. Since the human visual system is mainly sensitive to the luminance (Y) channel in YCbCr space, we compute all quantitative results on that channel.
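A minimal sketch of this evaluation protocol, assuming images in [0, 1] and the ITU-R BT.601 luma weights for the Y channel (SSIM is omitted here for brevity):

```python
import numpy as np

def rgb_to_y(img):
    # ITU-R BT.601 luma: the Y channel of YCbCr, for img in [0, 1]
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def psnr(ref, est, peak=1.0):
    # Peak signal-to-noise ratio in dB
    mse = np.mean((ref - est) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def y_psnr(ref_rgb, est_rgb):
    # Evaluate on the luminance channel only, as in the protocol above
    return psnr(rgb_to_y(ref_rgb), rgb_to_y(est_rgb))
```

For example, a uniform error of 0.1 on every channel gives a Y-channel PSNR of 20 dB.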
Fig. 6 shows the visual and quantitative comparisons of rain streak removal results for one synthesized rainy image from Rain100L. As displayed, the three model-driven methods, DSC, GMM, and JCAS, leave many rain streaks in the recovered image, and JCAS in particular tends to oversmooth the background details. This implies that hand-crafted model priors are not expressive enough to convey the complex rain streak shapes in the synthetic dataset. Compared with these conventional model-driven methods, the six data-driven methods, Clear, DDN, RESCAN, PReNet, SPANet, and JORDER_E, are able to remove the rain streaks more completely. However, they damage the image content and lose detail information to a certain extent. Although SIRR focuses on domain adaptation, it fails to remove most rain streaks. This can be explained by the obvious difference in distribution between Rain100L and real rainy images.
We further evaluate these single image deraining methods on Rain100H. As shown in Fig. 7, due to the complicated rain patterns in heavy rain cases, the rain detection capability of most competing methods is weakened. By observing the zoomed red boxes, we find that for all competing methods the rain removal results are not very satisfactory when rain streaks and the image background merge with each other. A more rational and insightful understanding of the intrinsic imaging process of rain streaks still needs to be discovered and utilized.
We additionally conduct an evaluation on Rain1400 and Rain12, which contain different rain patterns, as presented in Fig. 8 and Fig. 9. From these, we can see that the data-driven methods generally achieve a better rain removal effect than the model-driven ones. However, due to the overfitting-to-training-sample issue, these deep learning methods produce derained results that lack some image details.
Table II and Table III report the quantitative results of all competing methods on the synthetic datasets. From these tables, we can conclude that, owing to the strong nonlinear fitting ability of deep networks, the rain removal effect of most data-driven methods is evidently superior to that of model-driven methods. Besides, compared with its backbone network, DDN, SIRR hardly obtains any performance gain on these datasets. This can be explained by the fact that the use of real unsupervised training samples makes the data distribution deviate from the synthetic datasets.
III-B2 Real-World Data
For real applications, what we really care about is the deraining ability of all competing methods on real rainy images. Here we give a fair evaluation on two real-world datasets: one with 147 rainy images released by Wei et al., called Internet-Data, and the other with 1000 image pairs collected by Wang et al., called SPA-Data. Note that as Internet-Data has no groundtruth, we can only provide visual comparisons on it.
Fig. 10 demonstrates a hard sample with various rain densities selected from Internet-Data. As seen, almost none of the competing methods can completely remove the rain streaks and perfectly clear up the rain accumulation effect. Even though PReNet, RESCAN, and JORDER_E achieve significant deraining performance on synthetic datasets, they oversmooth the background information to some extent. This can be interpreted as follows: for model-driven methods, the adopted priors do not comprehensively cover the complicated distribution of real rain; for data-driven methods, they tend to learn the specific rain patterns of the synthesized data and cannot properly generalize to real test samples with diverse rain types.
These comparisons tell us that in this case, the model-driven method JCAS, with meaningful priors, even performs better than some data-driven works, i.e., DDN and RESCAN. It is worth mentioning that although the rain removal performance of SPANet on synthesized datasets with imprecise rain masks is not very satisfying, it exhibits outstanding generalization ability on the real dataset, whose rain masks are easily extracted. Additionally, compared with DDN, SIRR accomplishes a better transfer learning effect, which benefits from its unsupervised module.
IV Conclusions and Future Works
In this paper, we have presented a comprehensive survey of the rain removal methods for video and a single image developed in the past few years. Both the conventional model-driven and the latest data-driven methodologies raised for the deraining task have been thoroughly introduced. Recent representative state-of-the-art algorithms have been run on both synthetic and real benchmark datasets, and their deraining performance, especially the generalization capability, has been empirically compared and quantitatively analyzed. In particular, to let general users easily access rain removal resources, we release a repository that includes direct links to 74 rain removal papers, source codes of 9 methods for video rain removal and 20 for single image rain removal, 19 related project pages, 6 synthetic datasets and 4 real ones, and 4 commonly used image quality metrics. We believe this repository should help promote further advancement of this meaningful research topic. Here we summarize some limitations still existing in current deraining methods:
Due to the intrinsic overlap between rain streaks and background texture patterns, most deraining methods tend to remove, to a greater or lesser degree, texture details in rain-free regions, resulting in an oversmoothing effect in the recovered background.
Although current model-driven methods try to portray complex rain streaks with diverse well-designed priors, these priors are only applicable to specific patterns rather than the irregular distributions of real rainy images. Another obvious drawback is that the optimization algorithms employed by these methods generally require many iterations, making them inefficient in real scenarios [11, 12, 13, 43, 81].
Most data-driven methods require a great many training samples, which are time-consuming and cumbersome to collect [16, 17, 14, 70, 71, 18], and they generally have unsatisfactory generalization capability because of the overfitting-to-training-sample issue. Besides, the designed networks are black boxes with little interpretability and few insights [19, 69, 89].
For the video deraining task, most model-driven methods cannot be directly applied to streaming video data [38, 42, 43] in real time. Meanwhile, the deep learning methods require a large amount of supervised video, exhibit high computational complexity in the training stage, and cannot guarantee favorable rain removal performance, especially in complex scenes [44, 45].
Rain removal for video and a single image thus remains an open problem worthy of further investigation. Based on our evaluation and research experience, we present the following remarks on meaningful future research directions along this line:
Due to the diversity and complexity of real rain, a meaningful direction is to skillfully combine the model-driven and data-driven methodologies into a unified framework that possesses the advantages of both learning manners. A hopeful approach is the deep unrolling strategy, which may yield networks with both better interpretability and better generalization ability [89, 84].
To deal with the hard-to-collect-training-example and overfitting-to-training-example issues, semi-supervised/unsupervised learning, as well as domain adaptation and transfer learning regimes, should be explored to transfer the knowledge learned from limited training cases to a wider range of diverse testing scenarios [18, 72].
To better serve real applications, we should emphasize efficiency and real-time requirements. Especially for videos, it is necessary to construct online rain removal techniques that meet three crucial properties: persistence (processing streaming video data in real time), low time and space complexity, and universality (applicability to complex video scenes). Similarly, fast test speed for a single image is also required.
Generally speaking, deraining serves as a pre-processing step for certain subsequent computer vision tasks. It is thus also critical to develop task-specific deraining algorithms.
-  S. Maji, A. C. Berg, and J. Malik, “Classification using intersection kernel support vector machines is efficient,” In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-8, 2008.
-  O. L. Junior, D. Delgado, V. Gonalve, and U. Nunes, “Trainable classifier-fusion schemes: an application to pedestrian detection,” In Intelligent Transportation Syst., vol. 2, 2009.
-  D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” IEEE Trans. on Pattern Anal. and Machine Intell., vol. 25, no. 5, pp. 564-577, 2003.
-  K. Garg and S. K. Nayar, “Detection and removal of rain from videos,” IEEE Comput. Soc. Conf. on Comput. Vision and Pattern Recognition, vol. 1, pp. 528-535, 2004.
-  L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. on Pattern Anal. and Machine Intell., vol. 20, no. 11, pp. 1254-1259, 1998.
-  M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani, “Person re-identification by symmetry-driven accumulation of local features,” in Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, 2010.
-  M. S. Shehata et al., “Video-based automatic incident detection for smart roads: the outdoor environmental challenges regarding false alarms,” IEEE Trans. on Intelligent Transportation Syst., vol. 9, no. 2, pp. 349-360, 2008.
-  T. Bouwmans, “Traditional and recent approaches in background modeling for foreground detection: an overview,” Comput. Sci. Review, vol. 11, pp. 31-66, 2014.
-  X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng, “Rain removal in video by combining temporal and chromatic properties,” IEEE Int. Conf. on Multimedia and Expo., 2006.
-  M. Zhou, Z. Zhu, R. Deng, and S. Fang, “Rain detection and removal of sequential images,” Chinese Control and Decision Conf., 2011.
-  Y. Luo, Y. Xu, and H. Ji, “Removing rain from a single image via discriminative sparse coding,” IEEE Int. Conf. on Comput. Vision, pp. 3397-3405, 2015.
-  Y. Li, R. Tan, X. Guo, J. Lu, and M. S. Brown, “Rain streak removal using layer priors,” IEEE Conf. on Comput. Vision and Pattern Recognition, 2016.
-  S. Gu, D. Meng, W. Zuo, and L. Zhang, “Joint convolutional analysis and synthesis sparse representation for single image layer separation,” IEEE Int. Conf. on Comput. Vision, pp. 1717-1725, 2017.
-  W. Yang, R. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan, “Deep joint rain detection and removal from a single image,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1685-1694, 2017.
-  S. Li et al., “Single image deraining: a comprehensive benchmark analysis,” In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 3838-3847, 2019.
-  X. Fu, J. Huang, and X. Ding, “Clearing the skies: a deep network architecture for single-image rain streaks removal,” IEEE Trans. on Image Process., vol. 1, no. 1, pp. 99, 2017.
-  X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley, “Removing rain from single images via a deep detail network,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1715-1723, 2017.
-  W. Wei, D. Meng, Q. Zhao, Z. Xu, and Y. Wu, “Semi-supervised transfer learning for image rain removal,” In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 3877-3886, 2019.
-  Y. Cheng, R. Liu, L. Ma, X. Fan, H. Li, and M. Zhang, “Unrolled optimization with deep priors for intrinsic image decomposition,” In IEEE International Conf. on Multimedia Big Data, pp. 1-7, 2018.
-  K. V. Beard and C. Chuang, “A new model for the equilibrium shape of raindrops,” J. of the Atmospheric Sci., vol. 44, no. 11, pp. 1509-1524, 1987.
-  A. K. Tripathi and S. Mukhopadhyay, “A probabilistic approach for detection and removal of rain from videos,” IEEE J. of Research, vol. 57, no. 1, pp. 82-91, 2011.
-  K. Garg and S. K. Nayar, “When does camera see rain?” IEEE Int. Conf. Comput. Vision, vol. 2, pp. 1067-1074, 2005.
-  G. B. Foote and P. S. Du Toit, “Terminal velocity of raindrops aloft,” J. of Applied Meteorology, vol. 8, no. 2, pp. 249-253, 1969.
-  J. Bossu, N. Hautière, and J. P. Tarel, “Rain or snow detection in image sequences through use of a histogram of orientation of streaks,” Int. J. of Comput. Vision, vol. 93, no. 3, pp. 348-367, 2011.
-  K. Garg and S. K. Nayar, “ Photometric model of a raindrop,” Technical Report, Comput. Sci. Department, Columbia University, 2004.
-  P. Liu, J. Xu, J. Liu, and X. L. Tang, “ Pixel based temporal analysis using chromatic property for removing rain from videos,” Comput. Inf. Sci., vol. 2, no. 1, pp. 50-53, 2009.
-  S. Starik and M. Werman, “Simulation of rain in videos,” in Proc. of Texture: the 3rd Int. Workshop on Texture Anal. and Synthesis, pp. 95-100, 2003.
-  A. K. Tripathi and S. Mukhopadhyay, “Removal of rain from videos: a review,” Signal, Image and Video Process., vol. 8, no. 8, pp. 1421-1430, 2014.
-  K. Garg and S. K. Nayar, “Vision and rain,” Int. J. of Comput. Vision, pp. 3-27, 2007.
-  A. K. Tripathi and S. Mukhopadhyay, “Video post processing: low-latency spatiotemporal approach for detection and removal of rain,” IET Image Process., vol. 6, no. 2, pp. 181-196, 2012.
-  W. J. Park and K. H. Lee, “Rain removal using kalman filter in video,” Int. Conf. on Smart Manufacturing Applicat., vol. 1, no. 4, 2008.
-  N. Brewer and N. Liu, “Using the shape characteristics of rain to identify and remove rain from video,” Joint IAPR Int. Workshops on Statistical Techniques in Pattern Recognition and Structural and Syntactic Pattern Recognition, pp. 451-458, 2008.
-  X. Zhao, P. Liu, J. Liu, and X. Tang , “The application of histogram on rain detection in video,” Joint Conf. on Inform. Sci., vol. 1, no. 6, 2008.
-  P. Barnum, T. Kanade, and S. Narasimhan, “Spatio-temporal frequency analysis for removing rain and snow from videos,” Int. Workshop on Photometric Anal. for Comput. Vision, pp. 1-8, 2007.
-  P. C. Barnum, S. Narasimhan, and T. Kanade, “Analysis of rain and snow in frequency space,” Int. J. of Comput. Vision, vol. 86, no. 2-3, pp. 256, 2010.
-  Y. L. Chen and C. T. Hsu, “A generalized low-rank appearance model for spatio-temporally correlated rain streaks,” IEEE Int. Conf. on Comput. Vision, 2013.
-  J. Chen and L. P. Chau, “A rain pixel recovery algorithm for videos with highly dynamic scenes,” IEEE Trans. on Image Process., vol. 23, no. 3, pp. 1097-1104, 2013.
-  W. Ren, J. Tian, Z. Han, A. Chan, and Y. Tang, “Video desnowing and deraining based on matrix decomposition,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 2838-2847, 2017.
-  J. H. Kim, J. Y. Sim, and C. S. Kim, “Video deraining and desnowing using temporal correlation and low-rank matrix completion,” IEEE Trans. on Image Process., vol. 24, no. 9, pp. 2658-2670, 2015.
-  T. Jiang, T. Z. Huang, X. L. Zhao, L. J. Deng, and Y. Wang, “A novel tensor-based video rain streaks removal approach via utilizing discriminatively intrinsic priors,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 2818-2827, 2017.
-  T. X. Jiang, T. Huang, X. Zhao, L. Deng, and Y. Wang, “Fastderain: a novel video rain streak removal method using directional gradient priors,” IEEE Trans. on Image Process., vol. 28, no. 4, pp. 2089-2102, 2019.
-  W. Wei, L. Yi, Q. Xie, Q. Zhao, D. Meng, and Z. Xu, “Should We encode rain streaks in video as deterministic or stochastic?,” IEEE Int. Conf. on Comput. Vision, pp. 2535-2544, 2017.
-  M. Li, Q. Xie, Q. Zhao, W. Wei, S. Gu, J. Tao, and D. Meng, “Video rain streak removal by multiscale convolutional sparse coding,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-10, 2018.
-  J. Chen, C. H. Tan, J. Hou, L. P. Chau, and L. He, “Robust video content alignment and compensation for rain removal in a cnn framework,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-10, 2018.
-  J. Liu, W. Yang, S. Yang, and Z. Guo, “Erase or fill? deep joint recurrent rain removal and reconstruction in videos,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-10, 2018.
-  J. Liu, W. Yang, S. Yang, and Z. Guo, “D3r-net: dynamic routing residue recurrent network for video rain removal,” IEEE Trans. on Image Process., vol. 28, no. 2, pp. 699-712, 2018.
-  J. Xu, W. Zhao, P. Liu, and X. Tang, “Removing rain and snow in a single image using guided filter,” IEEE Int. Conf. on Comput. Sci. and Automation Eng., pp. 304-307, 2012.
-  K. He, J. Sun, and X. Tang, “Guided image filtering,” European Conf. on Comput. Vision, pp. 1-14, 2010.
-  J. Xu, W. Zhao, P. Liu, and X. Tang, “An improved guidance image based method to remove rain and snow in a single image,” Comput. Inf. Sci., vol. 5, no. 3, 2012.
-  X. Zheng, Y. Liao, W. Guo, X. Fu, and X. Ding, “Single-image-based rain and snow removal using multi-guided filter,” Neural Inform. Process., pp. 258-265, 2013.
-  X. Ding, L. Chen, X. Zheng, Y. Huang, and D. Zeng, “Single image rain and snow removal via guided L0 smoothing filter,” Multimedia Tools and Applicat., vol. 75, no. 5, pp.2697-2712, 2016.
-  J. H. Kim, C. Lee, J. Y. Sim, and C. S. Kim, “Single-image deraining using an adaptive nonlocal means filter,” IEEE Int. Conf. on Image Process., pp. 914-917, 2013.
-  Y. Fu, L. Kang, C. Lin, and C. T. Hsu, “Single-frame-based rain removal via image decomposition,” IEEE Int. Conf. on Acoustics, pp. 914-917, 2013.
-  L. Kang, C. Lin, and Y. Fu, “Automatic single-image-based rain streaks removal via image decomposition,” IEEE Trans. on Image Process., vol. 24, no. 4, pp. 1742-1755, 2012.
-  L. Kang, C. Lin, C. Lin, and Y. Lin, “Self-learning-based rain streak removal for image/video,” IEEE Int. Symp. Circuits Syst., vol. 57, no. 1, pp. 1871-1874, 2012.
-  Y. Wang, S. Liu, C. Chen, and B. Zeng, “A hierarchical approach for rain or snow removing in a single color image,” IEEE Trans. on Image Process., vol. 26, no. 8, pp. 3936-3950, 2017.
-  S. H. Sun, S. P. Fan, and Y. C. F. Wang, “Exploiting image structural similarity for single image rain removal,” IEEE Int. Conf. on Image Process., pp. 4482-4486, 2014.
-  L. Zhu, C. Fu, D. Lischinski, and P. Heng, “Joint bi-layer optimization for single-image rain streak removal,” IEEE Int. Conf. on Comput. Vision, pp. 2545-2553, 2017.
-  D. Eigen, D. Krishnan, and R. Fergus, “Restoring an image taken through a window covered with dirt or rain,” IEEE Int. Conf. on Comput. Vision, pp. 633-640, 2013.
-  R. Qian, R. Tan, W. Yang, J. Su, and J. Liu, “Attentive generative adversarial network for raindrop removal from a single image,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-1, 2018.
-  K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” In Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 770-778, 2016.
-  Z. Fan, H. Wu, X. Fu, Y. Huang, and X. Ding, “Residual guide feature fusion network for single image deraining,” In ACM Multimedia, 2018.
-  H. Zhang, V. Sindagi, and V. M. Patel, “Image de-raining using a conditional generative adversarial network,” IEEE Trans. on Circuits and Syst. for Video Technology, 2019.
-  H. Zhang and V. M. Patel, “Density-aware single image de-raining using a multi-stream dense network,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-10, 2018.
-  F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” In Int. Conf. on Learning Representation, 2016.
-  X. Li, J. Wu, Z. Lin, H. Liu, and H. Zha, “Recurrent squeeze-and-excitation context aggregation net for single image deraining,” In European Conf. on Comput. Vision, pp. 262-277, 2018.
-  Y. Wang, X. Zhao, T. Jiang, L. Deng, Y. Chang, and T. Huang, “Rain streak removal for single image via kernel guided cnn,” arXiv:1808.08545, 2018.
-  J. Pan, S. Liu, J. Zhang, Y. Liu, J. Ren, and Zechao Li, “Learning dual convolutional neural networks for low-level vision,” IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1-10, 2018.
-  G. Li, X. He, W. Zhang, H. Chang, L. Dong, and L. Lin, “Non-locally enhanced encoder-decoder network for single image de-raining,” In 2018 ACM Multimedia Conf. on Multimedia Conf., pp. 1056-1064, 2018.
-  D. Ren, W. Zuo, Q. Hu, P. Zhu, and D. Meng, “Progressive image deraining networks: a better and simpler baseline,” IEEE Conf. on Comput. Vision and Pattern Recognition, 2019.
-  T. Wang, X. Yang, K. Xu, S. Chen, Q. Zhang, and R. W. H. Lau, “Spatial attentive single-image deraining with a high quality real rain dataset,” IEEE Conf. on Comput. Vision and Pattern Recognition, 2019.
-  X. Jin, Z. Chen, J. Lin, Z. Chen, and W. Zhou, “Unsupervised single image deraining with self-supervised constraints,” IEEE Int. Conf. on Image Process., pp. 2761-2765, 2018.
-  H. Quan and M. Ghanbari, “Scope of validity of psnr in image/video quality assessment,” Electronics letters, vol. 44, no. 13, pp. 800-801, 2008.
-  Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process., vol. 13, no. 4, pp. 600-612, 2004.
-  H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Int. Conf. on Acoustics, Speech, and Signal Process., vol. 3, 2004.
-  L. Zhang, L. Zhang, X. Mou, and D. Zhang, “Fsim: A feature similarity index for image quality assessment,” IEEE Trans.on Image Process., vol. 20, no. 8, pp. 2378-2386, 2011.
-  N. Goyette, P. M. Jodoin, F. Porikli, J. Konrad, and P. Ishwar, “Change detection. net: A new change detection benchmark dataset,” in IEEE Conf. on Comput. Vision and Pattern Recognition Workshops, pp. 1-8, 2012.
-  D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” IEEE Int. Conf. on Comput. Vision, Vol. 2, pp. 416-423, 2001.
-  H. Lin, Y. Li, X. Ding, W. Zeng, Y. Huang, and J. Paisley, “Rain o’er me: synthesizing real rain to derain with data distillation,” arXiv:1904.04605, 2019.
-  D. Chen, C. Chen, and L. Kang, “Visual depth guided color image rain streaks removal using sparse coding,” IEEE Trans. on Circuits and Syst. for Video Technology, vol. 24, no. 24, pp. 1430-1455, 2014.
-  P. Mu, J. Chen, R. Liu, X. Fan, and Z. Luo, “Learning bilevel layer priors for single image rain streaks removal,” IEEE Signal Process. Letters, vol. 26, no. 2, pp. 307-311, 2019.
-  K. Zhang, W. Zuo, S. Gu, and L. Zhang, “Learning deep cnn denoiser prior for image restoration,” in Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 3929-3938, 2017.
-  Q. Xie, M. Zhou, Q. Zhao, D. Meng, W. Zuo, and Z. Xu, “Multispectral and hyperspectral image fusion by MS/HS fusion net,” in Proc. of the IEEE Conf. on Comput. Vision and Pattern Recognition, pp. 1585-1594, 2019.
-  D. Meng, Q. Zhao, and Z. Xu, “Improve robustness of sparse PCA by L1-norm maximization,” Pattern Recognition, vol. 45, no. 1, pp. 487-497, 2012.
-  W. Yang, R. Tan, J. Feng, J. Liu, S. Yan, and Z. Guo, “Joint rain detection and removal from a single image with contextualized deep networks,” IEEE Trans. on Pattern Anal. and Machine Intell., vol. PP, no. 99, 2019.
-  H. Zhang and V. M. Patel, “Convolutional sparse and low-rank coding based rain streak removal,” in Proc. of IEEE Winter Conf. on Applications of Comput. Vision, pp. 1259-1267, 2017.
-  A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, and Z. DeVito, “Automatic differentiation in PyTorch,” 2017.
-  R. Liu, S. Cheng, L. Ma, X. Fan, and Z. Luo, “Deep proximal unrolling: algorithmic framework, convergence analysis and applications,” IEEE Trans. on Image Process., vol. 28, no. 10, pp. 5013-5026, 2019.