1 Introduction
Rain is a common weather phenomenon in real life, but it degrades visibility. In heavy rain especially, rain streaks from various directions accumulate and make the background scene misty, which seriously impairs the accuracy of many computer vision systems, including video surveillance and object detection and tracking in autonomous driving. It is therefore an important task to remove the rain and recover the background from rain images.
Image deraining has attracted much attention in the past decade, and many methods have been proposed to solve this problem. Existing methods can be divided into two categories: video based approaches and single image based approaches. Since video based methods can utilize the relationship between frames, it is relatively easy to remove rain from videos [1, 2, 3, 4]. Single image deraining is more challenging, and it is the task we focus on in this paper. For single image deraining, traditional methods, such as discriminative sparse coding, low-rank representation, and the Gaussian mixture model, have been applied and work quite well. Recently, deep learning based deraining methods [8, 9] have received extensive attention due to their powerful feature representation ability. All these approaches achieve good performance, but there is still much room for improvement.
There are mainly two limitations of existing approaches. On the one hand, according to [10, 11, 12], spatial contextual information is very useful for deraining. However, many current methods remove rain streaks based on image patches and neglect the contextual information in large regions. On the other hand, as rain streaks in heavy rain have various directions and shapes, they blur the scene in different ways. It is common [9, 13] to decompose the overall rain removal problem into multiple stages, so that rain streaks can be removed iteratively. Since these stages work together to remove rain streaks, the deraining information from previous stages is useful to guide and benefit the rain removal in later stages. However, existing methods treat these stages independently and do not consider their correlations.
To address the above two issues, we propose a novel deep network for single image deraining. The pipeline of our proposed network is shown in Fig. 2. We remove rain streaks stage by stage. At each stage, we use a context aggregation network with multiple fully convolutional layers to remove rain streaks. As rain streaks have various directions and shapes, each channel in our network corresponds to one kind of rain streak. Squeeze-and-Excitation (SE) blocks are used to assign different alpha-values to the channels according to their interdependencies in each convolution layer. Benefiting from the exponentially increasing convolution dilations, our network has a large receptive field with low depth, which helps it acquire more contextual information. To better utilize the useful deraining information from previous stages, we further incorporate a Recurrent Neural Network (RNN) architecture with three kinds of recurrent units to guide the deraining in later stages. We name the proposed deep network the REcurrent SE Context Aggregation Net (RESCAN).
Main contributions of this paper are listed as follows:
We propose a novel unified deep network for single image deraining, by which we remove the rain stage by stage. Specifically, at each stage, we use the contextual dilated network to remove the rain. SE blocks are used to assign different alpha-values to various rain streak layers according to their properties.
To the best of our knowledge, this is the first paper to consider the correlations between different stages of rain removal. By incorporating RNN architecture with three kinds of recurrent units, the useful information for rain removal in previous stages can be incorporated to guide the deraining in later stages. Our network is suitable for recovering rain images with complex rain streaks, especially in heavy rain.
Our deep network achieves superior performance compared with the state-of-the-art methods on various datasets.
2 Related Works
During the past decade, many methods have been proposed to separate rain streaks and background scene from rain images. We briefly review these related methods as follows.
2.0.1 Video Based Methods
As video based methods can leverage temporal information by analyzing the differences between adjacent frames, it is relatively easy to remove rain from videos [14, 3]. Garg and Nayar [15, 16, 2] propose an appearance model to describe rain streaks based on photometric properties and temporal dynamics. Meanwhile, Zhang et al. exploit temporal and chromatic properties of rain in videos. Bossu et al. detect rain based on a histogram of orientations of rain streaks. Tripathi et al. provide a review of video based deraining methods proposed in recent years.
2.0.2 Single Image Based Methods
Compared with video deraining, single image deraining is much more challenging, since no temporal information is available. For this task, traditional methods, including dictionary learning, Gaussian mixture models (GMMs), and low-rank representation, have been widely applied. Based on dictionary learning, Kang et al. decompose the high frequency parts of rain images into rain and non-rain components. Wang et al. define a 3-layer hierarchical scheme. Luo et al. propose a discriminative sparse coding framework based on image patches. Gu et al. integrate analysis sparse representation (ASR) and synthesis sparse representation (SSR) to solve a variety of image decomposition problems. A GMM has also been used as a prior to decompose a rain image into a background layer and a rain streak layer. Chang et al. leverage the low-rank property of rain streaks to separate the two layers. Zhu et al. combine three different kinds of image priors.
Recently, several deep learning based deraining methods have achieved promising performance. Fu et al. [25, 8] first introduce deep learning to the deraining problem. They decompose rain images into low- and high-frequency parts, and then map the high-frequency part to the rain streak layer using a deep residual network. Yang et al. [26, 9] design a deep recurrent dilated network to jointly detect and remove rain streaks. Zhang et al. use a generative adversarial network (GAN) to prevent degeneration of the background image when it is extracted from the rain image, and utilize a perceptual loss to further ensure better visual quality. Li et al. design a novel multi-stage convolutional neural network consisting of several parallel sub-networks, each of which is made aware of a different scale of rain streaks.
3 Rain Models
A commonly used rain model decomposes the observed rain image O into the linear combination of the rain-free background scene B and the rain streak layer R:

O = B + R. (1)

By removing the rain streak layer R from the observed image O, we can obtain the rain-free scene B.
Based on the rain model in Eq. (1), many rain removal algorithms assume that rain streaks are sparse and have similar characteristics in falling direction and shape. However, in reality, raindrops in the air have various appearances and occur at different distances from the camera, which leads to an irregular distribution of rain streaks. In this case, a single rain streak layer is not enough to simulate such a complex situation well.
To reduce the complexity, we regard rain streaks with similar shape or depth as one layer. We can then divide the captured rainy scene into the combination of several rain streak layers and an unpolluted background. Based on this, the rain model can be reformulated as:

O = B + \sum_{i=1}^{n} R_i, (2)

where R_i represents the i-th rain streak layer, which consists of one kind of rain streak, and n is the total number of different rain streak layers.
In reality, things might be even worse, especially in heavy rain. The accumulation of multiple rain streaks in the air may cause attenuation and scattering, which further increases the diversity of rain streak brightness. To a camera or the eye, the scattering causes haze or fog effects, which further pollutes the observed image O. For camera imaging, due to the limited number of pixels, faraway rain streaks cannot occupy full pixels; when mixed with other objects, the image becomes blurry. To handle these issues, we further take the global atmospheric light into consideration and assign different alpha-values to the rain streak layers according to their intensities and transparencies. We generalize the rain model to:

O = \alpha (B + \sum_{i=1}^{n} \alpha_i R_i) + (1 - \alpha) A, (3)

where A is the global atmospheric light, \alpha is the scene transmission, and \alpha_i indicates the brightness of a rain streak layer or a haze layer.
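As a toy illustration (not the paper's code), the generalized model in Eq. (3) can be written directly in NumPy. All array sizes, the number of layers, and the alpha-values below are arbitrary assumptions chosen only to make the composition concrete:

```python
import numpy as np

# Synthesize a rainy image with O = alpha * (B + sum_i alpha_i * R_i) + (1 - alpha) * A.
# Names follow the definitions above; sizes and values are toy choices.
rng = np.random.default_rng(0)

H, W = 4, 4
B = rng.uniform(0.0, 1.0, (H, W))        # rain-free background scene
n = 3                                     # number of rain streak layers
R = rng.uniform(0.0, 0.2, (n, H, W))      # one layer per kind of streak
alpha_i = np.array([0.9, 0.6, 0.3])       # per-layer brightness values
alpha = 0.8                               # scene transmission
A = 1.0                                   # global atmospheric light

# tensordot sums alpha_i[i] * R[i] over the layer axis
O = alpha * (B + np.tensordot(alpha_i, R, axes=1)) + (1.0 - alpha) * A
assert O.shape == (H, W)
```

Setting alpha = 1 recovers the simpler additive model of Eq. (2).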
4 Deraining Method
Instead of using decomposition methods with handcrafted priors to solve the problem in Eq. (3), we intend to learn a function that directly maps the observed rain image O to the rain streak layer R, since R is sparser than B and has a simpler texture. We can then subtract R from O to obtain the rain-free scene B. This function can be represented as a deep neural network and learned by optimizing a loss function.
Based on the motivation above, we propose the REcurrent SE Context Aggregation Net (RESCAN) for image deraining. The framework of our network is presented in Fig. 2. We remove rain streaks stage by stage. At each stage, we use a context aggregation network with SE blocks to remove rain streaks. Our network can deal with rain streaks of various directions and shapes, and each feature map in the network corresponds to one kind of rain streak. The dilated convolutions used in our network give it a large receptive field and help it acquire more contextual information. By using SE blocks, we can assign different alpha-values to the feature maps according to their interdependencies in each convolution layer. As we remove the rain in multiple stages, useful information from previous stages can guide the learning in later stages, so we incorporate an RNN architecture with memory units to make full use of it.
In the following, we first describe the baseline model, and then define the recurrent structure, which lifts the model’s capacity by iteratively decomposing rain streaks with different characteristics.
4.1 SE Context Aggregation Net
The base model of RESCAN is a forward network without recurrence. We implement it by extending the Context Aggregation Net (CAN) [11, 12] with Squeeze-and-Excitation (SE) blocks, and name it the SE Context Aggregation Net (SCAN).
Here we provide an illustration and further specialization of SCAN. We schematically illustrate SCAN, a fully convolutional network, in Fig. 1. Since a large receptive field is very helpful for acquiring contextual information, dilation is adopted in our network. For the middle layers, the dilation increases exponentially from 1, which leads to exponential growth in the receptive field of every element. As we treat the first layer as an encoder that transforms an image into feature maps, and the last two layers as a decoder that maps back, we do not apply dilation in the first and last two layers. All layers before the last use the same convolution kernel size; the last layer uses a convolution that recovers the RGB channels of a color image, or the gray channel of a grayscale image. Every convolution operation except the last one is followed by a nonlinear operation. The detailed architecture of SCAN is summarized in Table 1. Generally, the receptive field of elements in the output image grows exponentially with the depth of SCAN.
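The effect of exponentially increasing dilations can be checked with a short receptive-field calculation. The sketch below assumes stride-1 convolutions with a 3x3 kernel and a hypothetical dilation schedule (the paper's exact depth is not reproduced here); it compares an exponential schedule against constant dilation at the same depth:

```python
def receptive_field(dilations, kernel=3):
    """Receptive field of a stack of stride-1 conv layers.

    Each layer with dilation d enlarges the receptive field by (kernel - 1) * d.
    """
    rf = 1
    for d in dilations:
        rf += (kernel - 1) * d
    return rf

# Hypothetical schedules: undilated encoder/decoder around an
# exponentially dilated body, vs. constant dilation throughout.
exp_dilations = [1, 1, 2, 4, 8, 16, 1, 1]
plain_dilations = [1] * 8

print(receptive_field(exp_dilations))    # exponential growth in the body
print(receptive_field(plain_dilations))  # only linear growth with depth
```

The same depth thus yields a much larger receptive field under the exponential schedule, which is the property exploited by SCAN.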
For the feature maps, we regard each channel as the embedding of one rain streak layer R_i. Recall that in Eq. (3), we assign different alpha-values to different rain streak layers R_i. Instead of setting a fixed alpha-value for each rain layer, we update the alpha-values for the embeddings of rain streak layers in each network layer. Although the convolution operation implicitly introduces weights for every channel, these implicit weights are not specialized for every image. To explicitly learn a weight in each network layer for each image, we extend each basic convolution layer with a Squeeze-and-Excitation (SE) block, which computes a normalized alpha-value for every channel of each item. By multiplying the feature maps by the alpha-values learned by the SE block, the features computed by convolution are re-weighted explicitly.
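The SE re-weighting described above can be sketched in a few lines of NumPy. This is an illustrative implementation of the standard squeeze-and-excitation operation (global pooling, a two-layer bottleneck, and a sigmoid gate), not the paper's trained block; the weight shapes and reduction ratio are assumptions:

```python
import numpy as np

def se_reweight(x, w1, w2):
    """Squeeze-and-Excitation re-weighting of feature maps.

    x  : (C, H, W) feature maps.
    w1 : (C_red, C) weights of the reduction FC layer.
    w2 : (C, C_red) weights of the expansion FC layer.
    Returns x with each channel scaled by a learned alpha in (0, 1).
    """
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # FC + ReLU (channel reduction)
    alpha = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # FC + sigmoid -> per-channel alpha
    return x * alpha[:, None, None]                # explicit channel re-weighting
```

Because the alphas are computed from the input itself, the weighting is specialized for every image, unlike the implicit channel weights inside a convolution.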
Another difference from previous networks is that SCAN has no batch normalization (BN) layers. BN is widely used in training deep neural networks, as it reduces the internal covariate shift of feature maps. By applying BN, each scalar feature is normalized to zero mean and unit variance; the features become independent of each other and share the same distribution. However, in Eq. (2), rain streaks in different layers have different distributions in direction, color and shape, which is also true for each scalar feature of different rain streak layers. Therefore, BN contradicts the characteristics of our proposed rain model, and we remove it from our model. Experimental results in Section 5 show that this simple modification substantially improves the performance. Furthermore, since BN layers keep a normalized copy of the feature maps on the GPU, removing them greatly reduces GPU memory demand during training. Consequently, we can build a larger model with more capacity, or increase the mini-batch size to stabilize the training process.
4.2 Recurrent SE Context Aggregation Net
As there are many different rain streak layers and they overlap with each other, it is not easy to remove all rain streaks in one stage. We therefore incorporate a recurrent structure to decompose the rain removal into multiple stages. This process can be formulated as:

R_s = G(O_{s-1}), O_s = O_{s-1} - R_s, O_0 = O, (5)

where N is the number of stages, R_s is the output of the s-th stage, and O_s is the intermediate rain-free image after the s-th stage.
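The stage-wise decomposition above amounts to a simple loop. The sketch below is a minimal illustration with a stand-in callable `predict_rain` in place of the per-stage network (a hypothetical name, not the paper's API):

```python
import numpy as np

def derain_in_stages(O, predict_rain, num_stages):
    """Iteratively strip rain: R_s = f(O_{s-1}), O_s = O_{s-1} - R_s.

    predict_rain is a stand-in for the per-stage network; any callable
    mapping an image to an estimated rain streak layer works here.
    Returns the final intermediate image and the per-stage streaks.
    """
    O_s = O.copy()
    streaks = []
    for _ in range(num_stages):
        R_s = predict_rain(O_s)   # estimate the remaining rain
        O_s = O_s - R_s           # subtract it from the current image
        streaks.append(R_s)
    return O_s, streaks

# Toy run: pure "rain" image, predictor that removes half the residual each stage.
O = np.full((2, 2), 1.0)
final, streaks = derain_in_stages(O, lambda img: 0.5 * img, num_stages=3)
```

By construction, the sum of the per-stage streaks plus the final image reconstructs the input, mirroring Eq. (5).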
The above deraining model has been used in [9, 13]. However, the recurrent structure in these methods [9, 13] can only be regarded as a simple cascade of the same network: they use the output image of the last stage as the input of the current stage and do not consider feature connections among stages. As these stages work together to remove the rain, the input images of different stages can be regarded as a temporal sequence of rain images with decreasing levels of rain streak interference. It is more meaningful to investigate recurrent connections between the features of different stages rather than only using the recurrent structure. We therefore incorporate recurrent neural networks (RNNs) with memory units to better exploit the information from previous stages and guide feature learning in later stages.
In our framework, Eq. (5) can be further reformulated as:

R_{i,s}, h_s = G(O_{s-1}, h_{s-1}), R_s = \sum_{i=1}^{n} \alpha_i R_{i,s}, O_s = O_{s-1} - R_s,

where R_{i,s} is the decomposed i-th rain streak layer of the s-th stage and h_{s-1} is the hidden state of the (s-1)-th stage. Consistent with the rain model in Eq. (3), R_s is computed by summing the R_{i,s} with different alpha-values.
For the deraining task, we further explore three recurrent unit variants: ConvRNN, ConvGRU, and ConvLSTM. Due to space limitations, we only present the details of ConvGRU in the following. For the other two kinds of recurrent units, please refer to the supplementary material on our webpage.
The Gated Recurrent Unit (GRU) is a very popular recurrent unit in sequential models, and its convolutional version, ConvGRU, is adopted in our model. Denote x_s^l as the feature map of the l-th layer in the s-th stage; it is computed from x_{s-1}^l (the feature map in the same layer of the previous stage) and x_s^{l-1} (the feature map in the previous layer of the same stage):

z_s^l = \sigma(W_z^l \circledast x_s^{l-1} + U_z^l \circledast x_{s-1}^l),
r_s^l = \sigma(W_r^l \circledast x_s^{l-1} + U_r^l \circledast x_{s-1}^l),
n_s^l = \tanh(W_n^l \circledast x_s^{l-1} + U_n^l \circledast (r_s^l \odot x_{s-1}^l)),
x_s^l = (1 - z_s^l) \odot x_{s-1}^l + z_s^l \odot n_s^l,

where \sigma is the sigmoid function and \odot denotes element-wise multiplication. W is the dilated convolutional kernel, and U is an ordinary (non-dilated) kernel. Compared with a plain convolutional unit, ConvRNN, ConvGRU and ConvLSTM units have roughly twice, three times and four times as many parameters, respectively.
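The ConvGRU update above can be illustrated with a small NumPy sketch. To stay self-contained, the convolutions are reduced to 1x1 channel mixing (an assumption standing in for the dilated and ordinary kernels W and U); the gating logic itself follows the standard GRU equations:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def conv_gru_step(x_in, h_prev, W, U, b):
    """One ConvGRU update with 1x1 convolutions (channel mixing only).

    x_in, h_prev : (C, H, W) input features / previous-stage state.
    W, U : dicts with keys 'z', 'r', 'n' of (C, C) kernels for the
           input and recurrent paths; b : dict of (C, 1, 1) biases.
    """
    mix = lambda K, t: np.einsum('oc,chw->ohw', K, t)   # 1x1 conv over channels
    z = sigmoid(mix(W['z'], x_in) + mix(U['z'], h_prev) + b['z'])      # update gate
    r = sigmoid(mix(W['r'], x_in) + mix(U['r'], h_prev) + b['r'])      # reset gate
    n = np.tanh(mix(W['n'], x_in) + mix(U['n'], r * h_prev) + b['n'])  # candidate state
    return (1.0 - z) * h_prev + z * n                                  # gated blend
```

The update gate z interpolates between the previous-stage state and the candidate, which is how information from earlier deraining stages flows into later ones.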
4.3 Recurrent Frameworks
We further examine two frameworks to infer the final output. Both use the observed image O and the previous states as input for the s-th stage, but they output different images. In the following, we describe the additive prediction and the full prediction in detail, together with their corresponding loss functions.
4.3.1 Additive Prediction
The additive prediction is widely used in image processing. In each stage, the network only predicts the residual between the previous predictions and the ground truth. It takes the previous feature maps and predictions as inputs, which can be formulated as:

R_s = G(O, h_{s-1}), O_s = O - \sum_{j=1}^{s} R_j,

where h_{s-1} represents the previous states as in Eq. (12). For this framework, the loss function is chosen as:

L(\Theta) = \sum_{s=1}^{N} \| \sum_{j=1}^{s} R_j - R \|^2,

where \Theta represents the network's parameters.
4.3.2 Full Prediction
The full prediction means that in each stage, we predict the whole rain streak layer \bar{R}_s. This approach can be formulated as:

\bar{R}_s = G(O, h_{s-1}), O_s = O - \bar{R}_s,

where \bar{R}_s represents the predicted full rain streaks in the s-th stage, which equals \bar{R}_{s-1} plus the remaining rain streaks. The corresponding loss function is:

L(\Theta) = \sum_{s=1}^{N} \| \bar{R}_s - R \|^2.
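The difference between the two frameworks can be made concrete with a small sketch of the two objectives. This assumes a plain MSE penalty at every stage against the ground-truth streaks (an assumption; the paper summarizes the losses only in words here):

```python
import numpy as np

def additive_loss(residuals, R_true):
    """Additive prediction: stage s is penalized on the accumulated
    residuals R_1 + ... + R_s against the ground-truth streaks."""
    acc = np.zeros_like(R_true)
    loss = 0.0
    for R_s in residuals:
        acc = acc + R_s                      # running sum of residual predictions
        loss += np.mean((acc - R_true) ** 2)
    return loss

def full_loss(full_preds, R_true):
    """Full prediction: each stage predicts the whole streak layer
    and is penalized directly against the ground truth."""
    return sum(np.mean((R_s - R_true) ** 2) for R_s in full_preds)
```

Under additive prediction a later stage only needs to supply what earlier stages missed, whereas under full prediction every stage is pushed toward the complete streak layer.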
5 Experiments
In this section, we present details of the experimental settings and the quality measures used to evaluate the proposed SCAN and RESCAN models. We compare the performance of our methods with state-of-the-art methods on both synthetic and real-world datasets.
5.1 Experiment Settings
5.1.1 Synthetic Dataset
Since it is hard to obtain a large dataset of clean/rainy image pairs from real-world data, we first use synthesized rainy datasets to train the network. Zhang et al. synthesize 800 rain images (Rain800) from randomly selected outdoor images, and split them into a testing set of 100 images and a training set of 700 images. Yang et al. collect and synthesize 3 datasets: Rain12, Rain100L and Rain100H. We select the most difficult one, Rain100H, to test our model. It is synthesized with a combination of five streak directions, which makes it hard to remove all rain streaks effectively. Rain100H consists of synthetic image pairs, part of which are selected as the testing set.
5.1.2 Real-world Dataset
5.1.3 Training Settings
In the training process, we randomly generate fixed-size patch pairs from every training image pair. The entire network is trained on an Nvidia 1080Ti GPU using PyTorch, with a fixed batch size and a fixed SCAN depth that determines the receptive field size. For the nonlinear operation, we use leaky ReLU. For optimization, the ADAM algorithm is adopted; the learning rate starts from a fixed value and is reduced in steps at fixed iteration counts during training.
5.1.4 Quality Measures
To evaluate the performance on synthetic image pairs, we adopt two commonly used metrics: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). Since there are no ground-truth rain-free images for real-world images, the performance on the real-world dataset can only be evaluated visually. We compare our proposed approach with five state-of-the-art methods: image decomposition (ID), discriminative sparse coding (DSC), layer priors (LP), DetailsNet, and joint rain detection and removal (JORDER).
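PSNR is simple enough to state directly. The sketch below implements the standard definition, 10 log10(MAX^2 / MSE), for images normalized to [0, max_val]; it is a generic reference implementation, not the paper's evaluation code:

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates a derained result closer to the ground truth; SSIM complements it by comparing local structure rather than pixel-wise error.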
5.2 Results on Synthetic Data
Table 2 shows the results of different methods on the Rain800 and Rain100H datasets. Our RESCAN considerably outperforms the other methods in terms of both PSNR and SSIM on both datasets. It is also worth noting that our non-recurrent network SCAN even outperforms JORDER and DetailsNet, and is slightly superior to JORDER-R, the recurrent version of JORDER. This shows the high capacity of SCAN's shallow structure. Moreover, by using an RNN to gradually recover the full rain streak layer, RESCAN further improves the performance.
To visually demonstrate the improvements obtained by the proposed methods, we present results on several difficult sample images in Fig. 3. We deliberately select difficult samples to show that our method outperforms the others especially under complex conditions, which it is designed to handle. As shown in Fig. 3, the state-of-the-art methods cannot remove all rain streaks and may blur the image, while our method removes the majority of rain streaks while maintaining details of the background scene.
5.3 Results on Real-world Dataset
To test the practicability of deraining methods, we also evaluate the performance on real-world rainy test images. The predictions of all compared methods on four real-world sample rain images are shown in Fig. 4. As observed, LP cannot remove rain streaks efficiently, and DetailsNet tends to add artifacts to its derained outputs. The proposed method removes most rain streaks while maintaining much of the background texture. To further validate the performance on the real-world dataset, we also conduct a user study; for details, please refer to the supplementary material.
5.4 Analysis on SCAN
To show that SCAN is the best choice as a base model for the deraining task, we conduct experiments on two datasets to compare SCAN with related network architectures, including Plain (dilation = 1 for all convolutions), the ResNet used in DetailsNet and the Encoder-Decoder used in ID-CGAN. All networks share the same depth and width, so their numbers of parameters and computations are of the same order of magnitude. More specifically, we keep the first and last layers of all networks identical, so they differ only in the middle layers. The results are shown in Table 3. SCAN and CAN achieve the best performance on both datasets, which can be attributed to the fact that their receptive fields increase exponentially with depth, while the receptive fields of the other methods grow only linearly with depth.
Moreover, the comparison between SCAN and CAN indicates that the SE block contributes a lot to the base model, as it can explicitly learn an alpha-value for every independent rain streak layer. We also examine the SCAN model with BN. The results clearly verify that removing BN is a good choice for the deraining task, as rain streak layers are independent of each other.
5.5 Analysis on RESCAN
As we describe three recurrent units and two recurrent frameworks in Section 4.2 and Section 4.3, we conduct experiments to compare these settings: ConvRNN, ConvLSTM and ConvGRU, each combined with Additive Prediction (Add) or Full Prediction (Full). To compare with the recurrent framework in [9, 13], we also implement the setting in Eq. (5) (Iter), in which previous states are not retained. The results are reported in Table 4.
It is obvious that Iter cannot compete with any of the RNN structures and is no better than SCAN, as it discards the information of previous stages. Moreover, ConvGRU and ConvLSTM outperform ConvRNN, as they maintain more parameters and require more computation. However, it is difficult to pick the better unit between ConvLSTM and ConvGRU, as they perform similarly in our experiments. For the recurrent frameworks, the results indicate that Full Prediction is better.
6 Conclusion
In this paper, we propose the recurrent squeeze-and-excitation based context aggregation network for single image deraining. We divide rain removal into multiple stages; in each stage, a context aggregation network is adopted to remove rain streaks. We also modify CAN to better match the rain removal task, including the exponentially increasing dilations and the removal of BN layers. To better characterize the intensities of different rain streak layers, we adopt the squeeze-and-excitation block to assign different alpha-values according to their properties. Moreover, an RNN is incorporated to better utilize the useful deraining information from previous stages and guide the learning in later stages. We also test the performance of different network architectures and recurrent units. Extensive experiments on both synthetic and real-world datasets show that our proposed method outperforms state-of-the-art approaches under all evaluation metrics.
Acknowledgment. Zhouchen Lin is supported by National Basic Research Program of China (973 Program) (Grant no. 2015CB352502), National Natural Science Foundation (NSF) of China (Grant nos. 61625301 and 61731018), Qualcomm, and Microsoft Research Asia. Hong Liu is supported by National Natural Science Foundation of China (Grant nos. U1613209 and 61673030). Hongbin Zha is supported by Beijing Municipal Natural Science Foundation (Grant no. 4152006).
-  Zhang, X., Li, H., Qi, Y., Leow, W.K., Ng, T.K.: Rain removal in video by combining temporal and chromatic properties. In: IEEE ICME. (2006) 461–464
-  Garg, K., Nayar, S.K.: Vision and rain. International Journal of Computer Vision 75(1) (2007) 3–27
-  Santhaseelan, V., Asari, V.K.: Utilizing local phase information to remove rain from video. International Journal of Computer Vision 112(1) (2015) 71–89
-  Tripathi, A.K., Mukhopadhyay, S.: Removal of rain from videos: a review. Signal, Image and Video Processing 8(8) (2014) 1421–1430
-  Luo, Y., Xu, Y., Ji, H.: Removing rain from a single image via discriminative sparse coding. In: IEEE ICCV. (2015) 3397–3405
-  Chang, Y., Yan, L., Zhong, S.: Transformed low-rank model for line pattern noise removal. In: IEEE ICCV. (2017) 1726–1734
-  Li, Y., Tan, R.T., Guo, X., Lu, J., Brown, M.S.: Rain streak removal using layer priors. In: IEEE CVPR. (2016) 2736–2744
-  Fu, X., Huang, J., Zeng, D., Huang, Y., Ding, X., Paisley, J.: Removing rain from single images via a deep detail network. In: IEEE CVPR. (2017) 1715–1723
-  Yang, W., Tan, R.T., Feng, J., Liu, J., Guo, Z., Yan, S.: Deep joint rain detection and removal from a single image. In: IEEE CVPR. (2017) 1357–1366
-  Huang, D.A., Kang, L.W., Yang, M.C., Lin, C.W., Wang, Y.C.F.: Context-aware single image rain removal. In: IEEE ICME. (2012) 164–169
-  Yu, F., Koltun, V.: Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122 (2015)
-  Chen, Q., Xu, J., Koltun, V.: Fast image processing with fully-convolutional networks. In: IEEE ICCV. (2017) 2516–2525
-  Li, R., Cheong, L.F., Tan, R.T.: Single image deraining using scale-aware multi-stage recurrent network. arXiv preprint arXiv:1712.06830 (2017)
-  Barnum, P.C., Narasimhan, S., Kanade, T.: Analysis of rain and snow in frequency space. International Journal of Computer Vision 86(2-3) (2010) 256
-  Garg, K., Nayar, S.K.: Detection and removal of rain from videos. In: IEEE CVPR. Volume 1. (2004) 528–535
-  Garg, K., Nayar, S.K.: When does a camera see rain? In: IEEE ICCV. Volume 2. (2005) 1067–1074
-  Bossu, J., Hautière, N., Tarel, J.P.: Rain or snow detection in image sequences through use of a histogram of orientation of streaks. International Journal of Computer Vision 93(3) (2011) 348–367
-  Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online dictionary learning for sparse coding. In: ICML. (2009) 689–696
-  Reynolds, D.A., Quatieri, T.F., Dunn, R.B.: Speaker verification using adapted gaussian mixture models. Digital Signal Processing 10(1-3) (2000) 19–41
-  Liu, G., Lin, Z., Yan, S., Sun, J., Yu, Y., Ma, Y.: Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(1) (2013) 171–184
-  Kang, L.W., Lin, C.W., Fu, Y.H.: Automatic single-image-based rain streaks removal via image decomposition. IEEE Transactions on Image Processing 21(4) (2012) 1742–1755
-  Wang, Y., Liu, S., Chen, C., Zeng, B.: A hierarchical approach for rain or snow removing in a single color image. IEEE Transactions on Image Processing 26(8) (2017) 3936–3950
-  Gu, S., Meng, D., Zuo, W., Zhang, L.: Joint convolutional analysis and synthesis sparse representation for single image layer separation. In: IEEE ICCV. (2017) 1717–1725
-  Zhu, L., Fu, C.W., Lischinski, D., Heng, P.A.: Joint bi-layer optimization for single-image rain streak removal. In: IEEE CVPR. (2017) 2526–2534
-  Fu, X., Huang, J., Ding, X., Liao, Y., Paisley, J.: Clearing the skies: A deep network architecture for single-image rain removal. IEEE Transactions on Image Processing 26(6) (2017) 2944–2956
-  Yang, W., Tan, R.T., Feng, J., Liu, J., Guo, Z., Yan, S.: Joint rain detection and removal via iterative region dependent multi-task learning. CoRR abs/1609.07769 (2016)
-  Zhang, H., Sindagi, V., Patel, V.M.: Image de-raining using a conditional generative adversarial network. arXiv preprint arXiv:1701.05957 (2017)
-  Kaushal, H., Jain, V., Kar, S.: Free-space optical channel models. In: Free Space Optical Communication. Springer (2017) 41–89
-  Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. arXiv preprint arXiv:1709.01507 (2017)
-  Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. (2015) 448–456
-  Mandic, D.P., Chambers, J.A., et al.: Recurrent neural networks for prediction: learning algorithms, architectures and stability. Wiley Online Library (2001)
-  Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., Bengio, Y.: Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078 (2014)
-  Zaremba, W., Sutskever, I., Vinyals, O.: Recurrent neural network regularization. arXiv preprint arXiv:1409.2329 (2014)
-  Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: ICML. Volume 30. (2013) 3
-  Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
-  Huynh-Thu, Q., Ghanbari, M.: Scope of validity of psnr in image/video quality assessment. Electronics Letters 44(13) (2008) 800–801
-  Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13(4) (2004) 600–612