Semi-supervised Video Deraining with Dynamical Rain Generator (CVPR, 2021, Pytorch)
While deep learning (DL)-based video deraining methods have achieved significant success recently, they still suffer from two major drawbacks. Firstly, most of them do not sufficiently model the characteristics of the rain layers in rainy videos. In fact, rain layers exhibit strong physical properties (e.g., direction, scale and thickness) in the spatial dimension and natural continuity in the temporal dimension, and can thus be generally modelled by a spatial-temporal process in statistics. Secondly, current DL-based methods depend heavily on labeled synthetic training data, whose rain types often deviate from those in unlabeled real data. This gap between the synthetic and real data sets leads to poor performance when such methods are applied in real scenarios. To address these issues, this paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer, aiming to better depict its intrinsic characteristics. Specifically, the dynamic generator consists of an emission model and a transition model that simultaneously encode the spatially physical structure and the temporally continuous changes of rain streaks, respectively, both of which are parameterized as deep neural networks (DNNs). Furthermore, different prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them. Last but not least, we also design a Monte Carlo EM algorithm to solve this model. Extensive experiments are conducted to verify the superiority of the proposed semi-supervised deraining model.
Rain is a common adverse weather condition that appears in many videos. Its presence not only degrades the visual quality of a video, but also seriously deteriorates the performance of subsequent video processing algorithms, e.g., semantic segmentation, object detection, and autonomous driving
. Therefore, as a necessary video pre-processing step, video deraining has attracted increasing attention in the computer vision community. Since Garg and Nayar first raised it as an ill-posed inverse problem, various methods have been proposed to handle the video deraining task. Most traditional methods focus on exploiting rational prior knowledge about the background or rain layers so as to obtain a proper separation between them. For example, low-rankness [23, 52, 24] is widely used to encode the temporal correlations of the background video. As for rain streaks, many physical characteristics, such as photometric appearance, geometrical features, chromatic consistency, local structure correlations and multi-scale convolutional sparse coding, have been explored in past years. Different from such deterministic assumptions on rain streaks, Wei et al. were the first to regard them as random variables, modelling them with a mixture of Gaussians (MoG) distribution. Albeit substantiated to be effective in some ideal scenarios, these traditional methods are mainly limited by subjective, manually designed priors and a heavy computational burden.
Recently, owing to the powerful nonlinear fitting capability of DNNs, DL-based methods have brought significant improvements to the video deraining task. The core idea of this methodology is to directly train a derainer parameterized by DNNs on synthetic rainy/clean video pairs in an end-to-end manner. Most of these methods leverage different techniques, e.g., superpixel alignment, dual-level flow and self-learning, to extract the clean background from a rainy video. In addition, Liu et al. [35, 34] design a recurrent network to jointly perform the rain degradation classification and rain removal tasks. Even though these DL-based methods have achieved impressive deraining results on some synthetic benchmarks, there is still large room to improve their performance and generalization capability in real applications. On one hand, most of these methods focus on depicting the background, but neglect to model the intrinsic characteristics of the rain layer. In fact, the rain layers in a video can be understood as a dynamic sequence in both spatial and temporal spaces. Specifically, along the spatial dimension, the randomly scattered rain streaks in each frame exhibit evident physical structure (e.g., direction, scale and thickness), while along the temporal dimension the rain layers of different frames form a continuous time series. Therefore, elaborately exploiting and encoding such insightful knowledge underlying the rain layers of video data is expected to facilitate the rain removal task. On the other hand, it is well known that the performance of DL-based methods relies heavily on a large amount of pre-collected training data, i.e., rainy/clean video pairs. Due to the high labor cost of obtaining such video pairs in real scenes, most current methods have to use synthetic ones, which are manually simulated based on photo-realistic rendering techniques or professional photography under human supervision.
Fig. 1 shows several typical frames of synthetic and real rainy images in the NTURain data set, which is widely used as a benchmark by current video deraining methods. It can easily be seen that the rain patterns in the synthetic and real rainy images differ evidently, and the real ones obviously contain more complex and diverse rain types. Because of this gap between the synthetic and real data sets, DL-based methods deteriorate seriously in real cases. To deal with the general video deraining task, it is thus critical to build a rational semi-supervised learning scheme that sufficiently exploits the common knowledge in labeled synthetic and unlabeled real data. To address these issues, in this paper we propose a semi-supervised video deraining method, in which a dynamic rain generator is adopted to mimic the generation process of rain layers in video, hopefully better characterizing their intrinsic knowledge from both the spatial and temporal dimensions. Besides, real rainy videos are taken into account in our model as unlabeled data, in order to achieve more robust deraining results. In summary, the contributions of this work are as follows: Firstly, we propose a new probabilistic video deraining method, in which a dynamic rain generator, consisting of a transition model and an emission model, is employed to fit the rain layers in videos. Specifically, the transition model is used to encode the continuous changes of rain among adjacent frames, while the emission model maps the state space to the observed rain streaks. To increase the capacity of this generator, both the transition and emission models are parameterized as DNNs. Secondly, a semi-supervised learning mechanism is designed by constructing different prior formats for the labeled synthetic data and the unlabeled real data. Specifically, for the labeled synthetic data, the corresponding ground-truth rain-free videos are embedded into an elaborate prior distribution as a strong constraint.
As for the unlabeled real data, we introduce a 3-D Markov Random Field (MRF) to encode the temporal consistency and correlations of the underlying background. Thirdly, a Monte Carlo EM algorithm is designed to solve our model. In the expectation step, the posterior of the latent variables is intractable because of the DNNs employed in the generator and derainer, so Langevin dynamics are adopted to approximate the expectation.
In this section, we briefly review developments in video and single-image deraining methods.
To the best of our knowledge, Garg and Nayar first posed the problem of video deraining, and developed a rain detector based on the photometric appearance of rain. Later, they further explored the relationships between rain effects and certain camera parameters [16, 17, 18]. Inspired by these seminal works, various video deraining methods have been proposed in past years, focusing on seeking more rational prior knowledge about the rain or background. For example, both the chromatic properties [62, 36] and shape characteristics [3, 2] of rain in the time domain have been employed to identify and remove rain layers from captured rainy videos, while the regular visual effects of rain in the global frequency space have also been exploited. Besides, Santhaseelan and Asari employed local phase congruency to detect rain based on chromatic constraints. Notably, Wei et al.
first regarded rain streaks as random variables and modelled them with a patch-based MoG distribution. In addition, matrix/tensor factorization techniques were also very popular in the field of video deraining, mainly used to encode the correlations of the background video along the time dimension, including [8, 27, 23, 24, 41]. In recent years, DL-based methods represent a new trend along this research line. Li et al. employed multi-scale convolutional sparse coding to encode the repetitive local patterns of rain streaks at different scales. Chen et al. proposed to decompose the scene into superpixels and align the scene content at the superpixel segmentation level, after which a CNN is used to compensate the lost details and add normal textures to the deraining results. Liu et al. designed a recurrent neural network to jointly handle the rain degradation classification and rain removal tasks, and a hybrid rain model was proposed to model both rain streaks and occlusions. Besides, Yang et al. built a two-stage recurrent network that utilizes dual-level regularizations for video deraining. Very recently, Yang et al. proposed a self-learning scheme for this task that takes both temporal correlations and consistency into consideration. While DL-based methods have achieved impressive performance on some synthetic benchmarks, they are still hard to apply in real situations due to the large gap between the synthetic data they use and real data. Therefore, in order to increase the generalization capacity of deraining models in real tasks, it is critical to design a semi-supervised learning framework that fully mines the information in both the labeled synthetic data and the unlabeled real data. This paper mainly focuses on this issue.
For comprehensiveness, we also briefly review single image deraining methods, which can be roughly divided into two categories, i.e., model-based and DL-based methods. Most model-based methods formulate the deraining task as a decomposition problem between the rain and background layers, and various techniques have been employed to solve it, such as morphological component analysis, non-local means filtering, and sparse coding [5, 37]. Besides, some prior knowledge about rain and background has also been explored in this field, mainly including sparsity and low-rankness [58, 4, 19], the narrow directions of rain and the similarities of rain patches, and Gaussian mixture models (GMM). The earliest DL-based methods were proposed by Fu et al. [12, 13], in which CNNs are adopted to remove rain from the high-frequency part of rainy images. Led by these two works, DL-based methods began to dominate research in this field. Many effective and advanced network architectures [30, 32, 40, 49, 14, 21] have been put forward in recent years, and some works attempt to handle the rain removal task jointly with other related tasks, such as rain detection and rain density estimation, so as to obtain better deraining performance. Besides, some useful priors, e.g., multi-scale structure [57, 63, 22], convolutional sparse coding and bilevel layer priors, have also been embedded into DL-based methods to sufficiently mine the potential of DNNs. Different from the above methods, Zhang et al. and Wang et al. both introduced adversarial learning to enhance the realism of the derained images, and Wei et al. proposed a semi-supervised deraining model that generalizes better to real tasks. Naturally, single image deraining methods can be directly used for the video deraining task by treating each frame of a video as an independent image. However, because this ignores the abundant temporal information contained in the video, it is very hard to obtain satisfactory performance in this manner. It is thus necessary to design rational deraining models dedicated to video data.
Given a labeled data set $\mathcal{D}_l=\{(\mathcal{Y}_n, \mathcal{X}_n)\}_{n=1}^{N_l}$ and an unlabeled data set $\mathcal{D}_u=\{\mathcal{Y}_m\}_{m=1}^{N_u}$, where $\mathcal{Y}_n$ and $\mathcal{X}_n$ denote the $n$-th rainy and clean videos, respectively, we aim to construct a semi-supervised probabilistic model on them and then design an EM algorithm to solve it.
where $\mathcal{B}$, $\mathcal{R}$ and $\mathcal{E}$ are the recovered rain-free background, rain layer and residual term, respectively, and $b_{ijt}$ is the element of $\mathcal{B}$ at location $(i,j,t)$. The mapping $f(\cdot;\mathcal{W})$, which is parameterized by DNNs, maps the observed rainy video $\mathcal{Y}$ to the underlying rain-free background, and is called the "derainer" in this paper. Next, we consider how to model the derainer parameters $\mathcal{W}$ and the rain layer $\mathcal{R}$. Modelling the background layer: As is well known, one general piece of prior knowledge about video data is that the rain-free background has strong correlations and similarities along the spatial and temporal dimensions. Therefore, for any rainy video $\mathcal{Y}$, we encode this knowledge through the following MRF prior distribution on $\mathcal{B}$:
where $b_{ijt}$ denotes the element of $\mathcal{B}$ at location $(i,j,t)$, and $\varepsilon_0$ and $\lambda$ are both manually set hyper-parameters, the latter representing the strength of the smoothness constraint along the spatial and temporal dimensions. As for a labeled rainy video $\mathcal{Y}_n$, the known rain-free background $\mathcal{X}_n$ can be further embedded into Eq. (2) as another strong prior, i.e.,
where $\varepsilon$ is a very small hyper-parameter close to zero. As for the derainer $f(\cdot;\mathcal{W})$, we adopt a simple network architecture, shown in Fig. 2. Without any special design, it contains only several 3-D convolution layers and residual blocks. To accelerate computation, pixel-unshuffle and pixel-shuffle layers are added at its head and tail, respectively. Modelling the rain layer: Intuitively, the rain layer is a dynamic sequence along both the spatial and temporal directions, so we naturally employ the spatial-temporal process [11, 53] from statistics to characterize it. Let $\mathcal{R}_t$ denote the $t$-th frame of the rain layer $\mathcal{R}$; our dynamic rain generator can then be formulated as follows,

$s_t = F(s_{t-1}, \epsilon_t; \alpha)$, (4)
$\mathcal{R}_t = G(s_t; \beta)$, (5)
where $s_t$ represents the hidden state variable of the $t$-th frame, and $\epsilon_t$ the noise vector. Specifically, Eq. (4) is the transition model, with parameters $\alpha$, expected to depict the changes of rain between two adjacent frames, and Eq. (5) is the emission model, with parameters $\beta$, that maps the hidden state space to the observed rain layer. Note that the noise vectors $\epsilon_t$ are independent of each other, encoding the random factors that affect the rain (e.g., wind, camera motion) in the transition from $s_{t-1}$ to $s_t$. Furthermore, we extend this generator to an advanced version for multiple rain videos. Specifically, for the $n$-th rain video, another vector $c_n$ is introduced to account for the variations of rain patterns, and thus the transition model of Eq. (4) can be reformulated as:

$s_t^{(n)} = F(s_{t-1}^{(n)}, \epsilon_t^{(n)}, c_n; \alpha)$. (8)
In practice, we use the extended version of Eq. (8) to simultaneously fit the rain layers of each mini-batch of data. To increase the capacity of this dynamic generator, both the transition and emission models are parameterized as DNNs. Following prior work, we use a two-layer multi-layer perceptron (MLP), shown in Fig. 3 (a), as the transition model. For the emission model, we elaborately design a CNN architecture that takes the state variable as input and outputs the rain image, as shown in Fig. 3 (b); this is mainly inspired by a recent work that uses a CNN as a latent variable model to generate rain streaks. Remark: The employment of such a dynamic generator to fit the rain layers is one of the main contributions of this work, and it directly affects the deraining performance of the entire model. It is therefore necessary to validate the capability of this generator to simulate rain layers. To verify this, we pre-collected from YouTube some rain layer videos synthesized with the commercial Adobe After Effects software (https://www.adobe.com/products/aftereffects.html) as source videos, and trained the dynamic generator to recover them. Empirically, we found that the generator is able to faithfully mimic the given rain layer videos. Due to page limitations, these experiments are provided in the supplementary material.
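As an illustration, the two-part generator described above can be sketched in PyTorch. The state dimension, noise dimension and all layer sizes below are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class Transition(nn.Module):
    # Two-layer MLP: next hidden state from the previous state and a noise vector.
    # (state_dim / noise_dim / hidden are assumed sizes for illustration.)
    def __init__(self, state_dim=64, noise_dim=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim))

    def forward(self, s_prev, eps):
        return self.net(torch.cat([s_prev, eps], dim=-1))

class Emission(nn.Module):
    # Small deconvolutional CNN mapping a state vector to one rain-layer frame.
    def __init__(self, state_dim=64, size=32):
        super().__init__()
        self.size = size
        self.fc = nn.Linear(state_dim, 32 * (size // 4) ** 2)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1))

    def forward(self, s):
        x = self.fc(s).view(-1, 32, self.size // 4, self.size // 4)
        return self.net(x)

def generate_rain(trans, emit, s0, noises):
    # Roll the transition model forward and emit one rain frame per time step.
    frames, s = [], s0
    for eps in noises:                  # noises: (T, B, noise_dim)
        s = trans(s, eps)
        frames.append(emit(s))
    return torch.stack(frames, dim=1)   # (B, T, 1, H, W)
```

Because `generate_rain` carries the state forward with one fresh noise vector per frame, nearby frames share state and vary smoothly, which is exactly the temporal continuity the transition model is meant to capture.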
Finally, we directly optimize the problem of Eq. (9) on the whole labeled and unlabeled data sets, i.e.,
The insight behind Eq. (10) is to learn a general mapping from rainy videos to clean ones based on a large number of samples in the labeled and unlabeled data sets, which is expected to yield a more efficient and robust derainer than the traditional inference paradigm operating on a single video. Most notably, if only the labeled data set is considered, our method naturally degenerates into a supervised deraining model. The addition of unlabeled real data, however, increases the generalization capacity in real deraining tasks, as shown by the ablation studies in Sec. 4.2.2.
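For concreteness, the derainer described in the model above (pixel-unshuffle head, a body of 3-D convolutions and residual blocks, pixel-shuffle tail) can be sketched in PyTorch. Channel widths, block counts and the scale factor here are illustrative assumptions, not the paper's exact settings:

```python
import torch
import torch.nn as nn

def pixel_unshuffle3d(x, r):
    # Space-to-depth on the spatial dims of a (N, C, T, H, W) video tensor.
    n, c, t, h, w = x.shape
    x = x.view(n, c, t, h // r, r, w // r, r)
    return x.permute(0, 1, 4, 6, 2, 3, 5).reshape(n, c * r * r, t, h // r, w // r)

def pixel_shuffle3d(x, r):
    # Inverse of pixel_unshuffle3d: depth-to-space on the spatial dims.
    n, c, t, h, w = x.shape
    x = x.view(n, c // (r * r), r, r, t, h, w)
    return x.permute(0, 1, 4, 5, 2, 6, 3).reshape(n, c // (r * r), t, h * r, w * r)

class ResBlock3d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class Derainer(nn.Module):
    """Minimal sketch of the derainer: unshuffle head, 3-D conv/residual
    body, shuffle tail. Widths and depths are assumptions for illustration."""
    def __init__(self, in_ch=3, feat=32, n_blocks=2, r=2):
        super().__init__()
        self.r = r
        self.head = nn.Conv3d(in_ch * r * r, feat, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock3d(feat) for _ in range(n_blocks)])
        self.tail = nn.Conv3d(feat, in_ch * r * r, 3, padding=1)

    def forward(self, y):
        x = pixel_unshuffle3d(y, self.r)
        x = self.tail(self.body(self.head(x)))
        return pixel_shuffle3d(x, self.r)
```

Working on the unshuffled (lower-resolution, higher-channel) tensor is what gives the speed-up mentioned in the text: the 3-D convolutions run on spatial grids that are r× smaller per side.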
For notational brevity, we consider only one data sample in this part. Inspired by the technique of alternating back-propagation through time, a Monte Carlo EM algorithm is designed to maximize the marginal likelihood, in which the expectation step samples the latent variables from their posterior, and the maximization step then updates the model parameters based on the current samples. E-Step: Given the current model parameters and the posterior under them, we sample the latent variables using Langevin dynamics:
where $k$ indexes the time step of the Langevin dynamics and $\delta$ denotes the step size, and the Gaussian white noise term is added to prevent the sampler from being trapped in local modes. The key quantity in Eq. (11) is the gradient of the log-posterior, whose right-hand term can be easily calculated.
In practice, to avoid the high computational cost of MCMC, Eq. (11) starts from the previously updated samples. As for the initial state vector and the rain variation vector of Eq. (8), we also sample them jointly using the Langevin dynamics.
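One Langevin update of the E-step can be written in a few lines with automatic differentiation. The toy standard-normal `log_post` used below is an assumed stand-in for the true posterior, which in the full model involves the derainer and generator networks:

```python
import torch

def langevin_step(s, log_post, step):
    """One Langevin update for a latent sample `s`:
        s <- s + (step**2 / 2) * d/ds log p(s | ...) + step * N(0, I).
    `log_post` returns the (unnormalised) log-posterior of `s`; the Gaussian
    noise term keeps the sampler from collapsing into a local mode."""
    s = s.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_post(s).sum(), s)[0]
    return (s + 0.5 * step ** 2 * grad + step * torch.randn_like(s)).detach()
```

Iterating this update against a standard normal log-density drives the samples toward unit variance, which is a convenient sanity check for the step size.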
Table 1: Average PSNR/SSIM of the rainy input and of DSC, FastDerain, DDN, PReNet, SpacCNN, SLDNet and S2VD on each testing clip.
M-Step: Denoting the latent variables sampled in the E-Step as above, the M-Step aims to maximize the approximate upper bound with respect to the derainer and generator parameters as follows:
Equivalently, Eq. (13) can be further rewritten as the following minimization problem, i.e.,
where an indicator equals 1 when the sample comes from the labeled data set and 0 otherwise. Naturally, we can update the derainer and generator parameters by gradient descent, using the back-propagation (BP) algorithm, as follows,
where the step size is a learning-rate hyper-parameter. Due to capacity limitations, we empirically find it very difficult to fit the rain layers of all the training videos using only the single generator defined in Eq. (8). Therefore, we adopt one generator per mini-batch of data. With this strategy, our model performs stably well with a mini-batch size of 12 throughout all our experiments. The detailed steps of our algorithm are listed in Algorithm 1.
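The M-step objective combines a data-fidelity term with a ground-truth prior on labeled samples and the 3-D MRF smoothness prior on unlabeled ones. The sketch below is a simplified, hypothetical rendering of this combination: the quadratic potentials and the weight `alpha` are illustrative assumptions, not the paper's exact loss:

```python
import torch

def semi_loss(y, rain_hat, background, x_gt=None, alpha=0.1):
    """Toy semi-supervised objective for one sample.

    y          : rainy video, shape (B, C, T, H, W)
    rain_hat   : rain layer produced by the generator
    background : derainer output f(y; W)
    x_gt       : ground-truth clean video for labeled samples, else None
    """
    # Data fidelity: rainy frame should decompose into background + rain.
    loss = ((y - background - rain_hat) ** 2).mean()
    if x_gt is not None:
        # Labeled branch: the known clean video acts as a strong prior.
        loss = loss + ((background - x_gt) ** 2).mean()
    else:
        # Unlabeled branch: 3-D MRF smoothness over W, H and T dimensions.
        for d in (-1, -2, -3):
            loss = loss + alpha * (background.diff(dim=d) ** 2).mean()
    return loss
```

The `if/else` plays the role of the indicator in the objective: each sample contributes either the ground-truth term or the MRF term, never both.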
In this section, we conduct experiments to evaluate the effectiveness of the proposed semi-supervised video deraining model on synthetic and real data sets, and then provide some additional analysis. In the following, we denote our Semi-Supervised Video Deraining model as S2VD.
Training Details: To train S2VD, we employ the synthesized training data of NTURain as the labeled data set, which contains 8 rain-free video clips of various scenes. For each rain-free video, 3 or 4 rain layers are synthesized by Adobe After Effects with different settings and then added to it to produce the rainy counterparts. As for the unlabeled data, 7 real rainy videos without ground truth from the testing data of NTURain are employed. To relieve the burden on GPU memory, we use truncated back-propagation through time during training, meaning that each whole training sequence is divided into non-overlapping chunks of length 20 for forward and backward propagation. The Adam algorithm is used to optimize the model parameters in the M-Step of Algorithm 1. All the network parameters are initialized following a standard scheme. The initial learning rates for the transition model, emission model and derainer are decayed by a factor of 0.5 after 30 epochs. The mini-batch size is set to 12, and each video is clipped into small spatial blocks. Note that for the first 5 epochs we only update the derainer parameters to pretrain the derainer, which makes training more stable. Further analysis of the hyper-parameter settings is presented in Sec. 4.2.
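The truncated back-propagation-through-time scheme above amounts to slicing each training sequence into non-overlapping temporal chunks (length 20 in our setting), with any recurrent state detached by the caller at chunk boundaries so gradients do not flow across chunks. A minimal helper:

```python
import torch

def tbptt_chunks(video, chunk_len=20):
    """Split a (T, ...) video tensor into non-overlapping chunks along the
    time axis for truncated back-propagation through time. The trailing
    remainder is kept as a shorter final chunk."""
    return [video[t:t + chunk_len] for t in range(0, video.shape[0], chunk_len)]
```

A typical training loop would call `state = state.detach()` after processing each chunk, so memory stays bounded by the chunk length rather than the full sequence length.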
We test our S2VD on the synthetic testing data of NTURain, which consists of two groups of video clips. The videos in the first group (with prefix "a" in Table 1) are captured by a panning, unstable camera, and those in the second group (with prefix "b" in Table 1) by a camera moving quickly at speeds between 20 and 30 km/h. As for the compared methods, six SOTAs are considered: one model-based image deraining method, DSC; one model-based video deraining method, FastDerain; two DL-based image deraining methods, DDN and PReNet; and two DL-based video deraining methods, SpacCNN and SLDNet. Average PSNR and SSIM are used as quantitative metrics, evaluated only in the luminance channel since human vision is most sensitive to luminance information. Table 1 lists the average PSNR/SSIM results on the 8 testing video clips. Evidently, our S2VD method attains the best (7 out of 8) or second best (1 out of 8) performance in all cases. Compared with the current SOTAs (SpacCNN and SLDNet), it achieves gains of at least 2.5 dB PSNR and 0.01 SSIM. The visual results are shown in Fig. 4; due to page limitations, we display only the results of the DL-based methods. It can be observed that: 1) the derained result of PReNet still contains some rain streaks; 2) DDN and SpacCNN both lose some image content; 3) SLDNet cannot finely preserve the original colors. Our S2VD evidently alleviates these deficiencies and obtains the result closest to the ground truth, which indicates the effectiveness of the proposed semi-supervised deraining model.
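For reference, PSNR restricted to the luminance channel can be computed as below. The BT.601 luma coefficients follow the common studio-range RGB-to-YCbCr conversion used in deraining evaluations; the paper's exact conversion routine is not specified here, so treat this as one reasonable choice:

```python
import numpy as np

def rgb_to_luma(img):
    # ITU-R BT.601 studio-range luma (Y of YCbCr); img in [0, 255], H x W x 3.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.257 * r + 0.504 * g + 0.098 * b + 16.0

def psnr_luma(img1, img2, peak=255.0):
    # PSNR evaluated only on the luminance channel of the two images.
    mse = np.mean((rgb_to_luma(img1) - rgb_to_luma(img2)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Evaluating in luminance rather than per RGB channel matches the convention of the benchmark comparisons and weights the error toward what viewers actually perceive.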
To further test the generalization of S2VD to real tasks, we evaluate it on two kinds of real rainy videos, i.e., the real testing data set of NTURain and several other real rainy videos. Note that the former is included in our training set as unlabeled data, while the latter is not. Fig. 5 illustrates typical deraining results of the different methods on these two kinds of data. It can be seen that S2VD clearly achieves the best visual results compared with the other methods. In particular, its superiority on the second data set substantiates that S2VD is able to handle real rainy videos that do not appear in the unlabeled training data; such generalization capability should be particularly useful in real deraining tasks.
The hyper-parameter in Eqs. (2) and (3) controls the relative importance of the MRF prior in S2VD. The quantitative performance on the synthetic testing data set and the qualitative performance on the real testing data set of NTURain under different values are shown in Table 2 and Fig. 6, respectively. On one hand, as this hyper-parameter grows, the performance on the synthetic testing set tends to decrease, as shown in Table 2, since the constraint induced by the ground truth in Eq. (3) gradually weakens. On the other hand, the MRF prior is able to prevent the derainer from overfitting to the synthetic data and thus improves the generalization capability in real cases, as verified by the visual comparisons in Fig. 6. Considering both aspects, we adopt a compromise setting.
As shown in Eq. (14), our S2VD degenerates into the Mean Squared Error (MSE) loss under a special setting of its hyper-parameters. Compared with this special case, our full model introduces an additional likelihood term, an MRF regularizer and the semi-supervised learning paradigm. To clarify the effect of each part, we compare S2VD with three baselines: 1) Baseline1: we train the derainer with the MSE loss on the labeled data set only. 2) Baseline2: we train S2VD on the labeled data set only, with the MRF regularizer disabled, so as to measure the marginal gain of the likelihood term over MSE (i.e., Baseline1). 3) Baseline3: on the basis of Baseline2, we enable the MRF regularizer. The quantitative comparisons on the synthetic testing data set of NTURain are listed in Table 3, and the visual results on the real testing data set are displayed in Fig. 6. In summary, we can see that: 1) the performance improvement (1.01 dB PSNR and 0.0071 SSIM) of Baseline2 over Baseline1 substantiates that the likelihood term plays a substantial role in our model; 2) under the supervised learning regime, the MRF prior benefits our model in both the synthetic and real cases, according to the performance of Baseline3; 3) the addition of unlabeled data in S2VD clearly increases the generalization capability on real tasks, as shown in Fig. 6 (d) and (i), although it leads to a slight deterioration on synthetic data, mainly because of the large gap between the rain types in the synthetic labeled and real unlabeled data sets.
Although it achieves impressive deraining results as shown above, our method may still fail in some real scenarios, e.g., under large camera motion between adjacent frames or heavy rain streaks, as shown in Fig. 7. This is mainly because the adopted MRF prior for unlabeled real data is not strong enough to guarantee satisfactory deraining results in such complex cases. It is therefore necessary to exploit better prior knowledge in the future, in order to handle more general real deraining tasks.
In this paper, we have constructed a dynamic rain generator based on the spatial-temporal process in statistics, and with this generator proposed a semi-supervised video deraining method. Specifically, we elaborately model the rain layer with the rain generator, which facilitates the rain removal task. To handle the generalization issue in real cases, we propose a semi-supervised learning scheme that exploits the common knowledge underlying the synthetic labeled and real unlabeled data sets. Besides, a Monte Carlo EM algorithm is designed to solve the model. Extensive experimental results demonstrate the effectiveness of the proposed video deraining method. We believe that our work can benefit research on rain removal in the computer vision community. Acknowledgement: This research was supported by the National Key R&D Program of China (2020YFA0713900) and the China NSFC projects under contracts 11690011, 61721002, U1811461, 62076196.