Modern intelligent systems rely increasingly on visual information as input. In outdoor settings, however, visual input quality, and in turn system performance, can be seriously degraded by atmospheric conditions [11, 23]. One such condition, rain streaks, degrades image contrast and visibility, obscures scene features, and can be misconstrued as scene motion by computer vision algorithms. Rain removal is therefore vital to ensure the robustness of outdoor vision-based systems.
There are two categories of rain removal methods: image-based methods, which rely solely on the information within the processed frame, and video-based methods, which also utilize temporal clues from neighboring frames. Due to the lack of temporal information, image-based methods struggle to recover scenes occluded by torrential rain with large, opaque streaks.
To properly utilize temporal information, video-based methods require scene content to be aligned across consecutive frames. This requirement is challenged by two factors: motion of the camera, and dynamic scene content, i.e., the presence of moving objects. Previous works tackle these two issues separately. Camera motion-induced content shifts can be reversed using global frame alignment [29, 27]; however, the granularity of global alignment is too coarse when the scene depth range is large, and parts of the scene content will be poorly aligned. Content shifts due to object motion can cause moving objects to be misclassified as rain. One solution is to identify and exclude these pixels; this approach, however, cannot remove rain that overlaps moving objects.
In this paper, we propose a novel and elegant framework that simultaneously solves both issues for the video-based approach: rain removal based on robust SuperPixel (SP) Alignment between video frames, followed by detail Compensation in a CNN framework (SPAC-CNN). First, the target video frame is segmented into SPs, and each SP is aligned with its temporal neighbors. This step simultaneously aligns both the scene background and moving objects without prior assumptions about the moving objects, and scene content is much better aligned at SP-level granularity. An intermediate derain output can be obtained by averaging the aligned SPs, which unavoidably introduces blurring. We restore the rain-free details to this intermediate output by extracting information from the aligned SPs with a convolutional neural network (CNN).
Extensive experiments show that our proposed algorithm achieves up to a 5dB reconstruction advantage over state-of-the-art rain removal methods. Visual inspection shows that rain is much better removed, especially in heavy and opaque rainfall regions over highly dynamic scene content. Fig. 1 illustrates the advantage of our algorithm over existing methods on a challenging video sequence. The contributions of this work can be summarized as follows:
We propose a novel spatial-temporal content alignment algorithm at the SP level, which handles fast camera motion and dynamic scene content in one framework. This mechanism greatly outperforms existing scene motion analysis methods that model background and foreground motion separately.
The strong local properties of SPs robustly counter heavy rain interference and facilitate much more accurate alignment. Owing to such robust alignment, accurate temporal correspondence can be established for rain occlusions, so that heavily occluded backgrounds can be faithfully restored. This greatly outperforms image-based derain methods, for which the recovery of large and opaque rain occlusions remains the biggest challenge.
We propose a set of very efficient spatial-temporal features for compensating the high frequency details lost during the deraining process. An efficient CNN is designed, and a synthetic rain video dataset is created for training it.
2 Related Work
Rain removal based on a single image is intrinsically challenging, since it relies only on visual features and priors to distinguish rain from the background. Local photometric, geometric, and statistical properties of rain have been studied in [11, 10, 36, 15]. Li et al. model background and rain streaks as layers to be separated. Under the sparse coding framework, rain and background can be efficiently separated either with classified dictionary atoms [13, 6] or via discriminative sparse coding. Convolutional neural networks have been very effective in both high-level vision tasks and low-level vision applications for capturing signal characteristics [14, 34]. Hence, different network structures and features have been explored for rain removal, such as the deep detail network and the joint rain detection and removal model. Due to the lack of temporal information, heavy and opaque rain is difficult to distinguish from scene structures, and full recovery of a seriously occluded scene is almost impossible.
The temporal information in a video sequence provides a huge advantage for rain removal [9, 3, 25, 26, 33]. True rain pixels are separated from moving-object pixels based on statistics of intensity or chromatic values, on geometric properties of connected candidate pixels, or on segmented motion regions. Kim's work compensates for scene content motion by using optical flow for content alignment. Ren et al. decompose a video into background, rain, and moving objects using matrix decomposition: moving objects are derained by temporally aligning them with patch matching, while the moving-camera effect is modeled with a frame transform variable. Temporal derain methods can handle occlusions much better than image-based methods; however, they perform poorly on complex dynamic scenes shot from fast-moving cameras.
3 Proposed Model
Throughout the paper, scalars are denoted by italic lower-case letters, 2D matrices by upper-case letters, 3D tensors, functions, and operators by script letters.
Given a target video frame to be derained, we look at its immediate past and future neighbor frames to create a sliding buffer window, where negative and positive frame offsets indicate past and future frames, respectively. We only derain the luminance (Y) channel. The derain output is used to update the history buffer (Fig. 2); this history update mechanism ensures cleaner derain results in heavy rainfall scenarios.
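The sliding-window buffering can be illustrated with a minimal sketch. This is illustrative only: the helper name `build_buffer` is ours, and a buffer length of 5 frames (consistent with the buffer dimensions given in Sec. 4) is assumed.

```python
import numpy as np

def build_buffer(frames, t, half_window=2):
    """Collect the target frame and its temporal neighbors into a sliding
    buffer window of length 2*half_window + 1 (clamped at sequence ends)."""
    T = len(frames)
    idx = [min(max(k, 0), T - 1) for k in range(t - half_window, t + half_window + 1)]
    return np.stack([frames[k] for k in idx], axis=-1)  # H x W x (2*half_window+1)

# toy sequence of 6 single-channel 4x4 frames
frames = [np.full((4, 4), k, dtype=np.float32) for k in range(6)]
buf = build_buffer(frames, t=2)
print(buf.shape)  # (4, 4, 5)
print(buf[0, 0])  # frame indices 0..4 along the temporal axis
```

In the actual pipeline, past entries of such a buffer would hold already-derained frames rather than raw input, matching the history update mechanism described above.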
The system diagram of the proposed SPAC-CNN rain removal algorithm is shown in Fig. 2. The algorithm can be divided into two parts. First, video content alignment is carried out at the SP level, which consists of two SP template matching operations that produce two output tensors: the optimal temporal match tensor and the sorted spatial-temporal match tensor. An intermediate derain output is calculated by averaging the slices of the latter tensor (a slice is a two-dimensional section of a higher-dimensional tensor, defined by fixing all but two indices). Second, these two tensors are prepared as input features to a CNN that compensates for the high frequency details lost to mis-alignment blur. The details of each component are explained in this section.
3.1 Robust Content Alignment via Superpixel Spatial-Temporal Matching
One of the most important procedures for video-based derain algorithms is the estimation of content correspondence between video frames. With accurate content alignment, rain occlusions can easily be detected and removed using information from the temporal axis.
3.1.1 Content Alignment: Global vs. Superpixel
The popular solution for compensating camera motion between two frames is a homography transform matrix estimated from the global consensus of a group of matched feature points [4, 28]. For the reasons analyzed in Sec. 1, perfect content alignment can never be achieved for all pixels with a global transform at the whole-frame level, especially for dynamic scenes with a large depth range.
The solution naturally turns to pixel-level alignment, which faces no fewer challenges: feature points are sparse, and feature-less regions are difficult to align; more importantly, rain streak occlusions seriously interfere with feature matching at the single-pixel level. Information from larger areas is required to overcome rain interference. This leads us to our final solution: decomposing images into smaller, depth-consistent units.
The concept of the SuperPixel (SP) is to group pixels into perceptually meaningful atomic regions [2, 30, 21], whose boundaries usually coincide with those of the scene content. Comparing Fig. 3(a) and (b), SPs are very adaptive in shape and are more likely than rectangular units to segment regions of uniform depth. We therefore adopt the SP as the basic unit for content alignment.
3.1.2 Optimal Temporal Matching for Rain Detection
Consider an SP on the target frame, i.e., a set of pixels grouped into one segment, and let a bounding box be defined that covers all of its pixels. A spatial-temporal buffer is centered on this bounding box. As illustrated in Fig. 2, the buffer spans the entire sliding window along the temporal axis, and its spatial range is set to cover the possible motion range of the SP in its neighboring frames.
Pixels within the same SP are very likely to belong to the same object and to share identical motion between adjacent frames. We can therefore approximate an SP's appearance in adjacent frames from its appearance in the current frame via linear translation.
Searching for the reference SP is done by template matching of the target SP at all candidate locations in the buffer; a match location is found at each buffered frame by minimizing a template matching cost.
As shown in Fig. 4(d), a binary mask indicates the SP pixels within the bounding box and is applied through element-wise multiplication, so that only SP pixels contribute to the cost. The best match at each frame becomes a temporal slice of the optimal temporal match tensor.
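A rough sketch of this masked template matching follows. The helper name `match_sp` and the sum-of-absolute-differences cost are our assumptions for illustration; the key point is that the SP mask restricts the cost to SP pixels only.

```python
import numpy as np

def match_sp(template, sp_mask, search_region):
    """Find the translation that best aligns an SP template inside a larger
    search region, using a masked sum of absolute differences so that only
    SP pixels (sp_mask == 1) contribute to the matching cost."""
    th, tw = template.shape
    sh, sw = search_region.shape
    best, best_cost = (0, 0), np.inf
    for dy in range(sh - th + 1):          # exhaustive search over candidate
        for dx in range(sw - tw + 1):      # translations inside the buffer
            cand = search_region[dy:dy + th, dx:dx + tw]
            cost = np.sum(np.abs(cand - template) * sp_mask)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

rng = np.random.default_rng(0)
region = rng.random((20, 20))
mask = np.ones((5, 5))
tmpl = region[7:12, 9:14].copy()          # plant the template at offset (7, 9)
(dy, dx), cost = match_sp(tmpl, mask, region)
print(dy, dx, cost)  # recovers the planted offset with zero cost
```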
Based on the temporal clues provided by this tensor, a rain mask can be estimated. Since rain increases the intensity of the pixels it covers, rain pixels in the target frame are expected to have higher intensity than their collocated temporal neighbors in the tensor. We first compute a binary tensor that detects positive temporal fluctuations.
Here, a replication operator copies the 2D target slice along the third dimension to match the tensor size. To robustly handle re-occurring rain streaks, we classify a pixel as rain only when at least 3 positive fluctuations are detected across the buffer, which yields an initial rain mask.
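The fluctuation-counting rule can be sketched as follows. The fluctuation threshold `eps`, the helper name, and the toy values are assumptions; only the "at least 3 positive fluctuations" vote comes from the text.

```python
import numpy as np

def initial_rain_mask(target, matched_slices, eps=0.01, min_votes=3):
    """Mark a pixel as rain when its intensity in the target frame exceeds
    the co-located pixel in at least `min_votes` aligned temporal slices."""
    # broadcast the target frame against the temporal slices (H x W x K)
    votes = (target[..., None] - matched_slices) > eps
    return votes.sum(axis=-1) >= min_votes

clean = np.full((4, 4), 0.5)
slices = np.stack([clean] * 4, axis=-1)  # 4 well-aligned rain-free neighbors
rainy = clean.copy()
rainy[1, 2] += 0.3                       # one rain-brightened pixel
mask = initial_rain_mask(rainy, slices)
print(mask[1, 2], mask[0, 0])  # True False
```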
Due to possible mis-alignment, edges of the background could be misclassified as rain. Since rain streaks do not affect the chroma channels (Cb and Cr), a rain-free edge map can be calculated by thresholding the sum of gradients of these two channels. The final rain mask is obtained by removing these edge pixels from the initial mask.
A visual demonstration of the initial mask, the edge map, and the final mask is shown in Fig. 4(a), (b), and (c), respectively. Both thresholds are fixed empirically in our implementation.
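The chroma-edge rejection step can be sketched as below. The threshold value `tau`, the gradient operator choice, and the helper name are assumptions; the idea that strong Cb/Cr gradients mark true scene edges rather than rain comes from the text.

```python
import numpy as np

def chroma_edge_map(cb, cr, tau=0.05):
    """Rain streaks leave the chroma channels largely untouched, so strong
    Cb/Cr gradients indicate true scene edges rather than rain."""
    def grad_mag(c):
        gy, gx = np.gradient(c.astype(np.float64))
        return np.abs(gx) + np.abs(gy)
    return (grad_mag(cb) + grad_mag(cr)) > tau

cb = np.zeros((6, 6)); cb[:, 3:] = 0.5        # a vertical chroma edge
cr = np.zeros((6, 6))
init_mask = np.ones((6, 6), dtype=bool)       # pretend everything was flagged as rain
final_mask = init_mask & ~chroma_edge_map(cb, cr)
print(final_mask[0, 3], final_mask[0, 0])  # False True
```

Pixels on the chroma edge are removed from the rain mask, while the rest of the (here all-true) initial mask survives.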
3.1.3 Sorted Spatial-Temporal Template Matching for Rain Occlusion Suppression
The second round of template matching will be carried out based on the following cost function:
The rain-free matching template is obtained by removing the detected rain pixels from the SP mask.
As shown in Fig. 4(e), only the rain-free background SP pixels are used for matching. The candidate locations in the buffer (excluding the current frame) are sorted in ascending order of the cost defined in Eq. (6). The top candidates with the smallest costs are stacked as slices to form the sorted spatial-temporal match tensor.
The slices of this tensor are expected to be well aligned with the current target SP and robust to interference from rain. Since rain pixels are randomly and sparsely distributed along the temporal axis, a good estimate of the rain-free image can be obtained by averaging the tensor slices when enough slices are available: the averaging suppresses rain-induced intensity fluctuations and brings out the occluded background pixels.
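The rain-suppression effect of slice averaging can be demonstrated on toy data, where sparse positive spikes stand in for rain streaks (all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
clean = rng.random((8, 8))
K = 10
slices = np.stack([clean] * K, axis=-1)
# sprinkle sparse positive rain spikes at random, different spots per slice
for k in range(K):
    ys, xs = rng.integers(0, 8, 3), rng.integers(0, 8, 3)
    slices[ys, xs, k] += 1.0

avg = slices.mean(axis=-1)
# worst-case pixel error of the best single slice vs. of the average
best_slice_err = min(np.abs(slices[..., k] - clean).max() for k in range(K))
avg_err = np.abs(avg - clean).max()
print(avg_err < best_slice_err)  # True: averaging suppresses the sparse spikes
```

Because each spike appears in only a few slices at any given pixel, its contribution is divided by the number of slices, which is exactly the suppression mechanism described above.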
Fig. 5 gives a visual example of this averaged output and its calculation flow. All rain streaks are suppressed after the averaging.
3.2 Detail Compensation for Mis-Alignment Blur
Averaging the slices provides a good estimate of the rain-free image; however, it introduces noticeable blur due to unavoidable mis-alignment, especially when the camera motion is fast. To compensate for the lost high frequency content details without reintroducing rain streaks, we propose a CNN model for the task.
3.2.1 Occluded Background Feature
The averaged output from Eq. (8) can be used as an important clue to recover rain-occluded pixels. Rain streak pixels indicated by the rain mask are replaced with the corresponding pixels from the averaged output to form the first feature, the occluded background feature.
Note that this feature by itself is already a reasonable derain output. However, its quality is limited by the correctness of the rain mask. For false positive rain pixels (background pixels falsely classified as rain), the replacement introduces content detail loss; for false negative pixels (rain pixels falsely classified as background), rain streaks are added back from the target frame. This calls for more informative features.
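The masked replacement behind this feature is a one-liner; a minimal sketch (helper name and toy values are ours):

```python
import numpy as np

def occluded_background_feature(target, rain_mask, temporal_avg):
    """Replace pixels flagged as rain with the co-located pixels of the
    rain-suppressed temporal average; keep the sharp original elsewhere."""
    return np.where(rain_mask, temporal_avg, target)

target = np.array([[0.2, 0.9], [0.3, 0.4]])   # 0.9 is a rain-brightened pixel
avg = np.array([[0.2, 0.1], [0.3, 0.4]])      # blurred but rain-free estimate
mask = np.array([[False, True], [False, False]])
f1 = occluded_background_feature(target, mask, avg)
print(f1)  # [[0.2 0.1]
           #  [0.3 0.4]]
```

As the text notes, the output is only as good as `rain_mask`: a wrong `True` swaps in a blurred pixel, and a wrong `False` keeps a rain pixel.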
3.2.2 Temporal Consistency Feature
The temporal consistency feature is designed to handle false negative rain pixels, which falsely add rain streaks back to the first feature. For a correctly classified and recovered pixel (i.e., a true positive) in Eq. (9), intensity consistency should hold: among the collocated pixels in the neighboring frames, there should be only positive intensity fluctuations, caused by rain in those frames. Any obvious negative intensity drop along the temporal axis is a strong indication that the pixel is a false negative.
The temporal slices of the optimal temporal match tensor establish optimal temporal correspondence at each frame, which embeds enough information for the CNN to deduce the false-negative logic analyzed above; they therefore serve as the second feature.
3.2.3 High Frequency Detail Feature
The matched slices of the sorted spatial-temporal match tensor are ordered by their rain-free resemblance to the target SP, and thus provide a good reference for content details with supposedly small mis-alignment. We directly use this tensor as the last group of features, which compensates for the detail loss introduced by the operations in Eq. (9) at false positive rain pixels.
To facilitate network training, we limit the mapping range between the input features and the regression output by removing the low frequency component from these input features. Pixels inside the bounding box but outside the SP are masked out.
The final input feature set is the concatenation of these three features. The feature preparation process is summarized in Fig. 5.
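The high-pass preparation step can be sketched as follows. The box blur standing in for the low-frequency component, the kernel size, and the helper name are assumptions; the masking of pixels outside the SP follows the text.

```python
import numpy as np

def high_pass_feature(feature, sp_mask, ksize=5):
    """Remove the low-frequency component with a simple box blur and zero out
    pixels that fall inside the bounding box but outside the superpixel."""
    pad = ksize // 2
    padded = np.pad(feature, pad, mode="edge")
    low = np.zeros_like(feature, dtype=np.float64)
    H, W = feature.shape
    for y in range(H):
        for x in range(W):
            low[y, x] = padded[y:y + ksize, x:x + ksize].mean()
    return (feature - low) * sp_mask

flat = np.full((8, 8), 0.7)   # pure low-frequency content
mask = np.ones((8, 8))
hp = high_pass_feature(flat, mask)
print(np.allclose(hp, 0.0))  # True: a constant patch has no high-frequency detail
```

Limiting the features to their high-frequency residual narrows the range the CNN must regress over, which is the stated motivation for this step.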
3.2.4 CNN Structure and Training Details
| Camera Motion | Clip No. | Rain (PSNR / SSIM) | DSC-ICCV15 (PSNR / SSIM / F) | DDN-CVPR17 (PSNR / SSIM / F) | VMD-CVPR17 (PSNR / SSIM / F) | SPAC-Avg (PSNR / SSIM / F) | SPAC-CNN (PSNR / SSIM / F) |
|---|---|---|---|---|---|---|---|
| panning, unstable camera | a1 | 28.46 / 0.94 | 25.61 / 0.93 / 0.38 | 28.02 / 0.95 / 0.47 | 26.96 / 0.92 / 0.47 | 24.78 / 0.87 / 0.39 | 29.78 / 0.97 / 0.51 |
| camera speed 20-30 km/h | b1 | 28.72 / 0.92 | 28.78 / 0.92 / 0.42 | 29.48 / 0.96 / 0.53 | 24.09 / 0.84 / 0.35 | 26.35 / 0.89 / 0.47 | 31.19 / 0.96 / 0.55 |
The CNN architecture is shown in Fig. 6. The network consists of four convolutional layers with decreasing kernel sizes of 11, 5, 3, and 1, each followed by a rectified linear unit (ReLU). Our experiments show this fully convolutional network is capable of extracting useful information from the input features and efficiently providing reliable predictions of the content detail. The final rain removal output is the intermediate derain result plus the predicted detail.
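Assuming stride-1 convolutions (the stride is not stated in the text, so this is an assumption), the receptive field of the four-layer stack can be checked in a few lines:

```python
# kernel sizes of the four convolutional layers, stride 1 assumed
kernels = [11, 5, 3, 1]
rf = 1
for k in kernels:
    rf += k - 1   # each stride-1 conv grows the receptive field by k - 1
print(rf)  # 17: each output pixel sees a 17 x 17 input neighborhood
```

A 17-pixel receptive field is small relative to the SP bounding box, consistent with the network's role of predicting a local detail residual rather than global structure.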
For CNN training, we minimize the distance between the derain output and the ground truth scene.
The ground truth clean image serves as the regression target. We use stochastic gradient descent (SGD) to minimize the objective function, with a mini-batch size of 50 for a good trade-off between speed and convergence. The Xavier approach is used for network initialization, and the ADAM solver is adopted for training, with momentum parameters 0.9 and 0.999 and a learning rate of 0.0001.
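For reference, a single ADAM update with these settings can be sketched in NumPy. This is a textbook implementation with a hypothetical helper name, not the authors' MatConvNet code.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update with the paper's settings (b1=0.9, b2=0.999, lr=1e-4)."""
    m = b1 * m + (1 - b1) * grad            # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2       # biased second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
w, m, v = adam_step(w, np.array([2.0]), m, v, t=1)
print(w)  # first step moves by roughly the learning rate: ~[0.9999]
```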
To create the training rain dataset, we first shot a set of 8 rain-free VGA resolution video clips of various city and natural scenes. Camera motion was diverse across clips, e.g., panning slowly with unstable movements, or mounted on a vehicle moving at up to 30 km/h. Next, rain was synthesized over these clips with the commercial editing software Adobe After Effects, which creates realistic synthetic rain for videos with adjustable parameters such as raindrop size, opacity, scene depth, wind direction, and camera shutter speed. This provides diverse rain appearances for network training.
We synthesized 3 to 4 different rain appearances with different synthesis parameters over each video clip, providing 25 rainy scenes in total. For each scene, 21 frames were randomly extracted (together with their immediate buffer windows for calculating features). Each frame was segmented into approximately 300 SPs; we therefore have around 157,500 patches in the training dataset.
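The dataset size follows directly from these counts:

```python
scenes = 25           # 8 clips x 3-4 rain variants each
frames_per_scene = 21
sps_per_frame = 300   # approximate SP count per VGA frame
print(scenes * frames_per_scene * sps_per_frame)  # 157500
```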
4 Performance Evaluation
We set the sliding video buffer window length to 5 frames. Each VGA resolution frame was segmented into around 300 SPs using the SLIC method. The bounding box size was 80 pixels, and the spatial-temporal buffer dimension was 30×30×5. MatConvNet was adopted for model training, which took approximately 54 hours to converge on the training dataset introduced in Sec. 3.2.4. Training and all subsequent experiments were carried out on a desktop with an Intel E5-2650 CPU, 56GB RAM, and an NVIDIA GeForce GTX 1070 GPU.
4.1 Quantitative Evaluation
To quantitatively evaluate our proposed algorithm, we took a set of 8 videos (different from the training set) and synthesized rain over them with varying parameters. Each video is around 200 to 300 frames, and all reported results are averaged over all frames.
To test the algorithm's performance under different camera motion, we divided the 8 testing scenes into two groups: Group a consists of scenes shot from a panning and unstable camera; Group b consists of scenes shot from a fast-moving camera (speeds between 20 and 30 km/h). Thumbnails and the labeling of each testing scene are shown in Fig. 7.
Three state-of-the-art methods were chosen for comparison: two image-based derain methods, i.e., discriminative sparse coding (DSC) and the deep detail network (DDN), and one video-based method via matrix decomposition (VMD). The intermediate derain output is also used as a baseline (abbreviated SPAC-Avg).
4.1.1 Rain Streak Edge Precision Recall Rates
Rainfall introduces edges and textures over the background. To evaluate how much of a derain algorithm's modification contributes to removing only the rain pixels, we calculated rain streak edge precision-recall (PR) curves. Absolute difference values were calculated between the derain output and the scene ground truth; different thresholds were applied to retrieve a set of binary maps, which were then compared against the ground truth rain pixel map to calculate precision and recall.
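The PR computation described above can be sketched as follows (the helper name and toy data are ours; the thresholding of absolute differences against the ground-truth rain map follows the text):

```python
import numpy as np

def rain_edge_pr(derain, ground_truth, rain_map, thresholds):
    """Sweep thresholds over |derain - ground_truth| and score each binary
    map against the ground-truth rain pixel map."""
    diff = np.abs(derain - ground_truth)
    pr = []
    for th in thresholds:
        detected = diff > th
        tp = np.logical_and(detected, rain_map).sum()
        precision = tp / max(detected.sum(), 1)
        recall = tp / max(rain_map.sum(), 1)
        pr.append((float(precision), float(recall)))
    return pr

gt = np.zeros((4, 4))
rain_map = np.zeros((4, 4), dtype=bool); rain_map[0, 0] = True
derain = gt.copy(); derain[0, 0] = 0.5   # the only residual change is at a rain pixel
pr = rain_edge_pr(derain, gt, rain_map, [0.1])
print(pr)  # [(1.0, 1.0)]: every modified pixel is a rain pixel, and all rain is covered
```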
Average PR curves of the different algorithms over the two groups of testing scenes are shown in Fig. 10. For both Group a and Group b, SPAC-CNN shows a consistent advantage over SPAC-Avg, which demonstrates that the CNN model can efficiently compensate scene content details while suppressing the influence of rain streak edges.
Video-based derain methods (i.e., VMD and SPAC-CNN) perform better than image-based methods (i.e., DSC and DDN) on Group a: with slow camera motion, temporal correspondence can be accurately established, which greatly benefits video-based methods. With fast camera motion, however, the performance of VMD deteriorates seriously on Group b, where rain removal comes at the cost of background distortion, and image-based methods show their relative strength in this scenario. Even so, SPAC-CNN holds an advantage over image-based methods at all recall rates on Group b, which demonstrates its robustness to fast camera motion.
4.1.2 Scene Reconstruction PSNR/SSIM
We calculated the reconstruction PSNR/SSIM of the different state-of-the-art methods against the ground truth; the results are shown in Table 1. The F-measure of the rain streak edge PR curves is also listed for each clip.
As can be seen, SPAC-CNN is consistently around 5 dB higher than SPAC-Avg for both Group a and Group b, and its SSIM is at least 0.06 higher. This further validates the effectiveness of the CNN detail compensation network.
Video-based methods (VMD and SPAC-CNN) show great advantages over image-based methods on Group a (around 2dB and 5dB higher, respectively, than DSC). On Group b, the image-based methods outperform VMD; however, SPAC-CNN still holds a 3dB advantage over DDN and 4dB over DSC.
4.1.3 Feature Evaluation
We evaluated the roles the input features play in the final derain PSNR on two testing clips, a1 and b4. Three baseline CNNs with different combinations of features as input were independently trained for this evaluation. As shown in Table 2, the combination of all three features provides the highest PSNR, and visual inspection of the derain output shows that both two-feature combinations leave a significant amount of rain unremoved. Comparing the last two columns, the high frequency detail feature works more efficiently on a1 than on b4, which makes sense since these features are better aligned for slow cameras, leading to more accurate detail compensation.
4.2 Visual Comparison
We carried out visual comparisons to examine the derain performance of the different algorithms. Fig. 8 shows the derain output for the testing clips a3, b1, and b4; two consecutive frames are shown for b1 and b4 to demonstrate the camera motion. As can be seen, image-based derain methods handle only light and transparent rain occlusions well; for opaque rain streaks that cover a large area, they fail unavoidably. Temporal information proves critical in faithfully restoring the occluded details.
Rain is much better removed by the video-based methods; however, the VMD method creates serious blur when the camera motion is fast. The derain effect of SPAC-CNN is the most impressive among all methods. The red dotted rectangles highlight the high frequency details that SPAC-CNN restores relative to SPAC-Avg.
Although the network was trained on synthetic rain data, experiments show that it generalizes well to real-world rain. Fig. 9 shows the derain results: the advantage of SPAC-CNN is very obvious under heavy rain, and the method is robust to fast camera motion.
4.3 Execution Efficiency
Table 3: average runtime per VGA frame for DSC, DDN, VMD, SPAC-Avg, and SPAC-CNN.
We compared the average runtime of the different methods for deraining one VGA resolution frame; results are shown in Table 3. As can be seen, SPAC-Avg is much faster than all other methods. SPAC-CNN is much faster than the other video-based method (VMD), and its runtime is comparable to that of DDN.
For SPAC-CNN, the choice of the SP as the basic operation unit is key to its performance. When other decomposition units are used instead (e.g., rectangular blocks), matching accuracy deteriorates, and obvious averaging blur is introduced, especially at object boundaries.
Although the SP template matching handles only translational motion, alignment errors caused by other types of motion, such as rotation, scaling, and non-rigid transforms, can be mitigated by globally aligning frames before they are buffered (as shown in Fig. 2). Furthermore, these errors can be efficiently compensated by the CNN.
When the camera moves even faster, the SP search range needs to be enlarged accordingly, which increases the computational load. We tested scenarios with camera speeds of up to 50 km/h: the PSNR becomes lower due to larger mis-alignment blur, and alignment errors are also possible, as showcased in the blue rectangles in Fig. 9. We believe a CNN re-trained with data from such fast-moving cameras would help improve the performance.
5 Conclusion
We have proposed a video-based rain removal algorithm that can handle torrential rainfall with opaque streak occlusions shot from a fast-moving camera. SPs are utilized as the basic processing unit for content alignment and occlusion removal, and a CNN has been designed and trained to efficiently compensate the mis-alignment blur introduced by the deraining operations. Over a series of experiments, the whole system shows its efficiency and robustness, significantly outperforming state-of-the-art methods.
The research was partially supported by the ST Engineering-NTU Corporate Lab through the NRF corporate lab@university scheme.
-  Adobe After Effects Software. Available at www.adobe.com/AfterEffects.
-  R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Susstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(11):2274–2282, 2012.
-  P. C. Barnum, S. Narasimhan, and T. Kanade. Analysis of rain and snow in frequency space. International Journal of Computer Vision, 86(2):256, Jan 2009.
-  H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110(3):346 – 359, 2008.
-  J. Bossu, N. Hautière, and J.-P. Tarel. Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International Journal of Computer Vision, 93(3):348–367, Jul 2011.
-  D.-Y. Chen, C.-C. Chen, and L.-W. Kang. Visual depth guided color image rain streaks removal using sparse coding. IEEE Transactions on Circuits and Systems for Video Technology, 24(8):1430–1455, Aug. 2014.
-  J. Chen and L.-P. Chau. A rain pixel recovery algorithm for videos with highly dynamic scenes. IEEE Transactions on Image Processing, 23(3):1097–1104, 2014.
-  X. Fu, J. Huang, D. Zeng, Y. Huang, X. Ding, and J. Paisley. Removing rain from single images via a deep detail network. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.
-  K. Garg and S. K. Nayar. Detection and removal of rain from videos. In IEEE Conference on Computer Vision and Pattern Recognition, volume 1, pages 528–535, 2004.
-  K. Garg and S. K. Nayar. When does a camera see rain? In IEEE International Conference on Computer Vision, volume 2, pages 1067–1074, Oct 2005.
-  K. Garg and S. K. Nayar. Vision and rain. International Journal of Computer Vision, 75(1):3–27, 2007.
-  X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
-  L.-W. Kang, C.-W. Lin, and Y.-H. Fu. Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing, 21(4):1742–1755, Apr. 2012.
-  J. Kim, J. K. Lee, and K. M. Lee. Accurate image super-resolution using very deep convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1646–1654, June 2016.
-  J.-H. Kim, C. Lee, J.-Y. Sim, and C.-S. Kim. Single-image deraining using an adaptive nonlocal means filter. In IEEE International Conference on Image Processing, pages 914–917, Sept. 2013.
-  J. H. Kim, J. Y. Sim, and C. S. Kim. Video deraining and desnowing using temporal correlation and low-rank matrix completion. IEEE Transactions on Image Processing, 24(9):2658–2670, Sept 2015.
-  D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
-  T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pages 1097–1105, 2012.
-  Y. Li, R. T. Tan, X. Guo, J. Lu, and M. S. Brown. Rain streak removal using layer priors. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2736–2744, June 2016.
-  Z. Li and J. Chen. Superpixel segmentation using linear spectral clustering. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1356–1363, June 2015.
-  Y. Luo, Y. Xu, and H. Ji. Removing rain from a single image via discriminative sparse coding. In IEEE International Conference on Computer Vision, pages 3397–3405, 2015.
-  S. G. Narasimhan and S. K. Nayar. Vision and the atmosphere. International Journal of Computer Vision, 48(3):233–254, 2002.
-  W. Ren, J. Tian, Z. Han, A. Chan, and Y. Tang. Video desnowing and deraining based on matrix decomposition. In IEEE Conference on Computer Vision and Pattern Recognition, pages 4210–4219, 2017.
-  V. Santhaseelan and V. K. Asari. A phase space approach for detection and removal of rain in video. In Intelligent Robots and Computer Vision XXIX: Algorithms and Techniques, volume 8301, page 830114. International Society for Optics and Photonics, Jan. 2012.
-  V. Santhaseelan and V. K. Asari. Utilizing local phase information to remove rain from video. International Journal of Computer Vision, 112(1):71–89, 2015.
-  C.-H. Tan, J. Chen, and L.-P. Chau. Dynamic scene rain removal for moving cameras. In IEEE International Conference on Digital Signal Processing, pages 372–376. IEEE, 2014.
-  P. H. Torr and A. Zisserman. MLESAC: a new robust estimator with application to estimating image geometry. Computer Vision and Image Understanding, 78(1):138–156, 2000.
-  A. Tripathi and S. Mukhopadhyay. Video post processing: low-latency spatiotemporal approach for detection and removal of rain. IET Image Processing, 6(2), 2012.
-  M. Van den Bergh, X. Boix, G. Roig, B. de Capitani, and L. Van Gool. SEEDS: Superpixels Extracted via Energy-Driven Sampling, pages 13–26. Springer Berlin Heidelberg, 2012.
-  A. Vedaldi and K. Lenc. Matconvnet: Convolutional neural networks for matlab. In ACM International Conference on Multimedia, MM ’15, pages 689–692, 2015.
-  W. Yang, R. T. Tan, J. Feng, J. Liu, Z. Guo, and S. Yan. Deep joint rain detection and removal from a single image. In IEEE Conference on Computer Vision and Pattern Recognition, pages 1357–1366, 2017.
-  S. You, R. T. Tan, R. Kawakami, Y. Mukaigawa, and K. Ikeuchi. Adherent raindrop modeling, detection and removal in video. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(9):1721–1733, Sept 2016.
-  K. Zhang, W. Zuo, Y. Chen, D. Meng, and L. Zhang. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Transactions on Image Processing, 26(7):3142–3155, July 2017.
-  X. Zhang, H. Li, Y. Qi, W. K. Leow, and T. K. Ng. Rain removal in video by combining temporal and chromatic properties. In IEEE International Conference on Multimedia and Expo, pages 461–464, July 2006.
-  X. Zheng, Y. Liao, W. Guo, X. Fu, and X. Ding. Single-image-based rain and snow removal using multi-guided filter. In International Conference on Neural Information Processing, pages 258–265. Springer Berlin Heidelberg, 2013.