Image De-raining Using a Conditional Generative Adversarial Network

01/21/2017 · He Zhang, et al. · Rutgers University

Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions, rendering them useless for further use and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image does not degrade the performance of a given computer vision algorithm such as detection or classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which considers quantitative, visual and discriminative performance in the objective function. Experiments on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.

I Introduction

It has been widely acknowledged that unpredictable impairments such as illumination, noise and severe weather conditions (i.e., rain, snow and fog) adversely affect the performance of many computer vision algorithms such as detection, classification and tracking. This is primarily because these algorithms are trained using images captured under well-controlled conditions. For instance, it can be observed from Figure 1(c) that the presence of heavy rain greatly impairs the visual quality of the image, rendering face detection and verification algorithms ineffective under such degradations. A possible way to address this issue is to include images captured under unconstrained conditions in the training process of these algorithms. However, it may not be practical to collect such images for all classes in the training set, especially in a large-scale setting. In addition, in this age of ubiquitous smartphone usage, images captured by smartphone cameras under difficult weather conditions undergo degradations that drastically reduce their visual quality, making them unsuitable for sharing and further use. In order to improve the overall quality of such degraded images for better visual appeal and to ensure enhanced performance of vision algorithms, it becomes essential to automatically remove the undesirable artifacts arising from the difficult weather conditions discussed above. In this paper, we investigate conditional generative adversarial networks (GANs) to address this issue, where a pre-trained discriminator network is used as a guide to synthesize images free from weather-based degradations. Specifically, we propose a single image de-raining/de-snowing algorithm using a conditional GAN framework for visually enhancing images that have undergone degradations due to rain and/or snow.

(a)
(b)
(c)
(d)
Fig. 1: Sample results of the proposed ID-CGAN method for single image de-raining. (a)&(c) Input rainy images. (b)&(d) De-rained results.

One can model the observed rainy image as the superposition of two images: one corresponding to the rain streaks and the other corresponding to the clear background image (see Figure 2). Hence, the input rainy image x can be expressed as

$$\mathbf{x} = \mathbf{b} + \mathbf{r}, \qquad (1)$$

where b represents the clear background image and r represents the rain streaks. As a result, similar to image de-noising and image separation [1, 2, 3, 4], image de-raining can be viewed as the problem of separating two components from a rainy image.
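As a rough illustration of the additive model in (1), the following sketch synthesizes a rainy image by adding a randomly drawn streak layer to a clean background. This is only a toy example for intuition; the streak parameters and drawing procedure are our own assumptions and not the Photoshop-based synthesis pipeline used for the datasets in Section IV.

```python
import numpy as np

def add_synthetic_rain(background, streak_density=0.002, streak_length=15, seed=0):
    """Toy additive rain model x = b + r (Eq. 1).

    background: float array in [0, 1], shape (H, W) or (H, W, 3).
    Returns the rainy image x and the rain-streak layer r.
    """
    rng = np.random.default_rng(seed)
    h, w = background.shape[:2]
    r = np.zeros((h, w), dtype=np.float32)
    n_streaks = int(streak_density * h * w)
    ys = rng.integers(0, h - streak_length, n_streaks)
    xs = rng.integers(0, w, n_streaks)
    for y0, x0 in zip(ys, xs):
        for t in range(streak_length):             # short, slightly slanted streak
            y, x = y0 + t, min(x0 + t // 3, w - 1)
            r[y, x] = rng.uniform(0.4, 0.8)
    if background.ndim == 3:
        r = r[..., None]                            # broadcast over color channels
    x = np.clip(background + r, 0.0, 1.0)           # x = b + r
    return x, r
```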

(a)
(b)
(c)
Fig. 2: Rain streak removal from a single image. A rainy image (a) can be viewed as the superposition of a clean background image (b) and a rain streak image (c).
Fig. 3: An overview of the proposed ID-CGAN method for single image de-raining. The network consists of two sub-networks: the generator G and the discriminator D.

In the case of video-based de-raining, a common strategy to solve (1) is to use additional temporal information, as in the methods proposed in [5, 6, 7]. However, this strategy is not applicable for single image de-raining. In such cases, researchers have used appropriate prior information, such as the sparsity prior [8, 9], the Gaussian Mixture Model (GMM) prior [10] and the patch-rank prior [11], to make the de-raining problem better regularized. Most recently, due to their strong ability to learn end-to-end mappings, Convolutional Neural Networks (CNNs) have been successfully applied to the single image de-raining problem [12, 13]. By learning a non-linear mapping between an input rainy image and its corresponding ground truth using a CNN structure, CNN-based methods are able to achieve superior visual performance.

Even though these existing methods have been successful, we note that they do not incorporate additional, task-relevant information into the optimization. Hence, to design a visually appealing de-raining algorithm, the following criteria should be considered in the optimization framework:

  1. The objective function should include the criterion that the performance of vision algorithms such as detection and classification should not be affected by the presence of rain streaks. The inclusion of this discriminative information ensures that the reconstructed image is indistinguishable from its original counterpart.

  2. Rather than concentrating only on the characterization of rain streaks, visual quality should also be considered in the optimization function. By doing this, we can ensure that the de-rained image looks visually appealing without losing important details.

  3. Some of the existing methods adopt additional image processing techniques to enhance the results [12, 8]. Instead, it would be better to use a single structure to deal with the problem without any additional processing.

In this work, we incorporate these criteria by proposing a new conditional GAN-based framework called Image De-raining Conditional Generative Adversarial Network (ID-CGAN) to address the single image de-raining problem. Whereas existing approaches to solving (1) use additional prior information to constrain the solution, we instead propose to use a discriminator model as a guide to optimize the de-raining algorithm. Inspired by the recent success of GANs for pixel-level vision tasks such as image generation [14, 15], image inpainting [16] and image super-resolution [17], our network consists of two models: a generator model (G) and a discriminator model (D). The generator model acts as a mapping function that translates an input rainy image to a de-rained image such that it fools the discriminator model, which is trained to distinguish rainy images from images without rain. However, traditional GANs [14] are not stable to train and may introduce artifacts in the output image, making it visually unpleasant and artificial. To address this issue, we define a new refined perceptual loss that serves as an additional loss function and aids the proposed network in generating visually pleasing outputs. Sample results of the proposed ID-CGAN algorithm are shown in Figure 1. In summary, this paper makes the following contributions:

  1. A conditional GAN-based optimization framework is presented to address the challenging single image de-raining problem without the use of any additional post-processing.

  2. A refined generator sub-network that is specially designed for the single image de-raining task is presented.

  3. A new perceptual loss function is defined to be used in the optimization task to ensure better visual appeal of the end results.

  4. Extensive experiments are conducted on publicly available and synthesized datasets. Detailed qualitative and quantitative comparisons with existing state-of-the-art methods are presented. (Datasets and experimental implementation details are available at http://www.rci.rutgers.edu/~vmp93/index_ImageDeRaining.html.)

This paper is organized as follows. A brief background on de-raining, GANs and perceptual loss is given in Section II. The details of the proposed ID-CGAN method are given in Section III. Experimental results on both synthetic and real images are presented in Section IV. Finally, Section V concludes the paper with a brief summary and discussion.

II Background

In this section, we briefly review the literature for existing single image de-raining methods, conditional GANs and perceptual loss.

II-A Single image de-raining

As discussed in Section I, single image de-raining is an extremely challenging task due to its ill-posed nature and the unavailability of temporal information, which could otherwise be used as an additional constraint. Hence, in order to generate optimal solutions to this problem, different kinds of prior information are enforced in the optimization function. The sparse coding-based clustering method [8] was among the first to tackle the single image de-raining problem; the authors proposed to solve it in an image decomposition framework. They first separated the input image into low-frequency and high-frequency images using a bilateral filter. The high-frequency image is further decomposed into rain and non-rain components based on the assumption that learned dictionary atoms can sparsely represent the clear background image and the rain-streak image separately. An important assumption made in this approach is that rain streaks usually have similar edge orientations, which may result in non-rain components being removed as rain. Also, the method's effectiveness depends on the performance of the bilateral filter and on the clustering of basis vectors for generating the sparse representation. Similar to the above approach, Luo et al. in [9] propose a discriminative sparse coding-based method that incorporates a mutual exclusivity property into the optimization framework. Though the authors present significant improvements over previous methods, their method is ineffective in removing large rain streaks due to the assumption that rain streaks are high-frequency components. In addition, due to the same assumption, their method generates artifacts around the rain-streak components in the resulting images.

In another approach, Chen et al. proposed a low-rank representation-based method [11] that uses patch-rank as a prior to characterize unpredictable rain patterns. They use a low-rank model to capture correlated rain streaks. Observing that dictionary- and low-rank-based methods tend to leave too many rain pixels in the output image, Li et al. in [10] used the image decomposition framework to propose patch-based priors for the background and rain images. These priors are based on GMMs, which can accommodate multiple orientations and scales of rain streaks. These methods [11, 10] are based on the assumption that rain streaks have similar patterns and orientations. Due to this assumption, they tend to capture other global repetitive patterns such as brick and texture, which results in the removal of certain non-rain components from the background image. To address this issue, Zhang et al. recently proposed a convolutional coding-based method [18] that uses a set of learned convolutional low-rank filters to capture the rain pixels. Most recently, due to their immense success in learning non-linear functions, several CNN-based methods have also been proposed to directly learn an end-to-end mapping between the input and its corresponding ground truth for de-raining [12, 13, 19]. Table I summarizes the comparison of our proposed ID-CGAN with other single image de-raining methods.

TABLE I: Comparison of ID-CGAN with existing single image de-raining methods (SPM [8], PRM [11], DSC [9], CNN [12], GMM [10], CCR [18]) along the following properties: no additional pre- (or post-) processing, end-to-end mapping, discriminative performance considered in the optimization, visual performance considered in the optimization, not patch-based, and time efficiency. Compared to the existing methods, ID-CGAN has several desirable properties: 1. No additional image processing. 2. The discriminative factor is included in the optimization. 3. Visual performance is considered in the optimization.

II-B Generative adversarial networks

Generative Adversarial Networks (GANs) were proposed by Goodfellow et al. in [20] to synthesize realistic images by effectively learning the distribution of the training images. The authors adopted a game-theoretic min-max optimization framework to simultaneously train two models: a generative model G and a discriminative model D. The goal of GAN training is for G to produce samples from the training distribution such that the synthesized samples are indistinguishable from the actual distribution by the discriminator D. Unlike other generative models such as Generative Stochastic Networks [21], GANs do not require a Markov chain for sampling and can be trained using standard gradient descent methods [20]. Initially, the success of GANs was limited as they were known to be unstable to train, often resulting in artifacts in the synthesized images. Radford et al. in [14] proposed Deep Convolutional GANs (DCGANs) to address the issue of instability by placing a set of constraints on the network topology. Another limiting issue in GANs is that there is no control over the modes of the data synthesized by the generator in the case of these unconditioned generative models. Mirza et al. [22] incorporated additional conditional information into the model, which resulted in more effective learning of the generator. The use of conditioning variables to augment side information not only increased stability in learning but also improved the descriptive power of the generator [23]. Recently, researchers have explored various aspects of GANs such as training improvements [24] and the use of task-specific cost functions [25]. Also, an alternative viewpoint of the discriminator function is explored by Zhao et al. [26], where they deviate from the traditional probabilistic interpretation of the discriminator model.

The success of GANs in synthesizing realistic images has led researchers to explore the GAN framework for numerous applications such as style transfer [27], image inpainting [28], text-to-image translation [29], image-to-image translation [30], texture synthesis [31] and generating outdoor scenes from attributes [23]. Isola et al. proposed a general-purpose solution for image-to-image translation using conditional adversarial networks. Apart from learning a mapping function, they argue that the network also learns a loss function, eliminating the need to specify or hand-design a task-specific loss function. Karacan et al. in [23] proposed a deep GAN conditioned on semantic layout and scene attributes to synthesize realistic outdoor scene images under different conditions. Recently, Jetchev et al. [31] proposed spatial GANs for texture synthesis. Deviating from traditional GANs, their input noise distribution constitutes a whole spatial tensor instead of a vector, enabling architectures more suitable for texture synthesis.

II-C Perceptual loss function

Loss functions form an important and integral part of the learning process, especially in CNN-based reconstruction tasks. Several works [32, 33, 34, 35, 36, 37, 38] have explored different loss functions and their combinations for effective learning in tasks such as super-resolution, semantic segmentation, depth estimation, feature inversion and style transfer. Initial work on CNN-based image translation or restoration optimized over the pixel-wise L2-norm (Euclidean loss) or L1-norm between the predicted and ground truth images [33, 34]. Since these losses operate at the pixel level, their ability to capture high-level perceptual/contextual details is limited and they tend to produce blurred results [17]. Hence, many authors argue, and demonstrate through their results, that it is better to optimize a perceptual loss function, where the aim is to minimize the perceptual difference between the reconstructed image and the ground truth image [39]. In a different approach, the conditional GAN framework can also be considered an attempt to explore a structured loss function, where a generator network is trained to minimize the discriminator's ability to correctly classify between the synthesized image and the corresponding ground truth image. Conditional GAN frameworks augmented with perceptual and L2 loss functions have been used to produce state-of-the-art results for various reconstruction tasks such as image super-resolution and style transfer [17, 15].

III Proposed Method

Instead of solving (1) in a decomposition framework, we aim to directly learn a mapping from an input rainy image to a de-rained (background) image by constructing a conditional GAN-based deep network called ID-CGAN. The proposed network is composed of three important parts (generator, discriminator and the refined perceptual loss function) that serve distinct purposes. Similar to traditional GANs [15, 20], we have two sub-networks: a generator sub-network G and a discriminator sub-network D. The generator sub-network G is a symmetric deep CNN with appropriate skip connections, as shown in the top part of Figure 3. Its primary goal is to synthesize a de-rained image from an image that is degraded by rain (the input rainy image). The discriminator sub-network D, shown in the bottom part of Figure 3, serves to distinguish the 'fake' de-rained image synthesized by the generator from the corresponding ground truth 'real' image. It can also be viewed as a guide for the generator G. Since GANs are known to be unstable to train, which results in artifacts in the output image synthesized by G, we define a refined perceptual loss function to address this issue. Additionally, this new refined loss function ensures that the generated (de-rained) images are visually appealing. In the following sub-sections, we discuss these parts in detail, starting with the GAN objective function, followed by the generator/discriminator sub-networks and the refined perceptual loss.

III-A GAN objective function

In order to learn a good generator G that fools the learned discriminator D, and to make the discriminator D good enough to distinguish the synthesized de-rained image from the real ground truth, the proposed method alternately updates G and D following the structure proposed in [20, 15]. Given an input rainy image x and a random noise vector z, the conditional GAN aims to learn a mapping function that generates the output image G(x, z) by solving the following optimization problem:

$$\min_{G}\max_{D}\;\mathbb{E}_{\mathbf{x},\mathbf{b}\sim p_{\text{data}}(\mathbf{x},\mathbf{b})}\big[\log D(\mathbf{x},\mathbf{b})\big] + \mathbb{E}_{\mathbf{x}\sim p_{\text{data}}(\mathbf{x})}\big[\log\big(1-D(\mathbf{x},G(\mathbf{x},\mathbf{z}))\big)\big]. \qquad (2)$$
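In implementation terms, the min-max game in (2) is typically realized as two alternating binary cross-entropy objectives. The sketch below (PyTorch, with our own function names; the paper's released code used the Lua Torch framework) shows the two sides of the game with an unconditional discriminator that scores single images. Conditioning D on the rainy input, e.g. by channel concatenation, is a common variant that is not shown here.

```python
import torch
import torch.nn.functional as F

def discriminator_step_loss(D, clean, fake):
    """Discriminator side of Eq. (2): ground-truth images -> 1, generated -> 0."""
    real_score = D(clean)
    fake_score = D(fake.detach())              # do not backprop into the generator
    return (F.binary_cross_entropy(real_score, torch.ones_like(real_score)) +
            F.binary_cross_entropy(fake_score, torch.zeros_like(fake_score)))

def generator_adversarial_loss(D, fake):
    """Generator side of Eq. (2): push D to score generated images as real.
    This is the adversarial term that reappears as L_A in Eq. (6)."""
    fake_score = D(fake)
    return F.binary_cross_entropy(fake_score, torch.ones_like(fake_score))
```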

III-B Generator with symmetric structure

As the goal of single image de-raining is to generate a pixel-level de-rained image, the generator should remove as many rain streaks as possible without losing any detail of the background image. The key therefore lies in designing a good structure for generating the de-rained image.

Existing methods for solving (1), such as sparse coding-based methods [8, 40, 3, 4], neural network-based methods [41] and CNN-based methods [1], have all adopted a symmetric structure. For example, sparse coding-based methods use learned or pre-defined synthesis dictionaries to decode the input noisy image into a sparse coefficient map. Then another set of analysis dictionaries is used to transform the coefficients to the desired clear output. Usually, the input rainy image is transferred to a specific domain for effective separation of the background image and the undesired component (rain streaks). After separation, the background image (in the new domain) has to be transferred back to the original domain, which requires a symmetric process. Therefore, we also adopt a symmetric structure to form our generator sub-network. Similar to traditional low-level vision CNN frameworks, the generator directly learns an end-to-end mapping from the input rainy image to its corresponding ground truth.

The proposed generator with a symmetric structure is shown in the top part of Figure 3. A set of convolutional layers (along with batch normalization and PReLU activation) are stacked at the front, acting as a learned feature (or semantic attribute) extractor. Then, three shrinking layers are stacked in the middle to improve computational efficiency; these shrinking layers can also be regarded as performing a linear combination within the learned features [42]. They are followed by a stack of deconvolutional layers (also referred to as transposed convolutional layers), each with batch normalization and ReLU activation. Note that the deconvolutional layers are a mirrored version of the forward convolutional layers. For all layers, we use a stride of 1 and pad appropriate zeros to maintain the dimension of each feature map to be the same as that of the input. To make the network efficient to train and achieve better convergence, we add symmetric skip connections to the proposed generator sub-network, similar to [1]. The generator network is as follows:

CBP(K)-CBP(K)-CBP(K)-CBP(K)-CBP(K/2)-CBP(1)-DBR(K/2)-DBR(K)-DBR(K)-DBR(K)-DBR(K)-DBR(3)-Tanh

where CBP(K) denotes a K-channel convolutional layer followed by batch normalization and PReLU activation, and DBR(K) denotes a K-channel deconvolutional layer followed by batch normalization and ReLU activation. Symmetric skip connections are added, one for every two layers, as shown in Figure 3.
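A minimal PyTorch sketch of this symmetric generator is given below. The channel width K = 64 and the exact placement of the additive skip connections are assumptions on our part; the layer sequence and the 3 × 3, stride-1, zero-padded convolutions follow the description above.

```python
import torch
import torch.nn as nn

def CBP(cin, cout):
    # Conv -> BatchNorm -> PReLU, 3x3 kernel, stride 1, zero-padding 1
    return nn.Sequential(nn.Conv2d(cin, cout, 3, 1, 1),
                         nn.BatchNorm2d(cout), nn.PReLU())

def DBR(cin, cout):
    # Transposed conv ("deconv") -> BatchNorm -> ReLU, 3x3, stride 1, padding 1
    return nn.Sequential(nn.ConvTranspose2d(cin, cout, 3, 1, 1),
                         nn.BatchNorm2d(cout), nn.ReLU())

class Generator(nn.Module):
    """Symmetric generator sketch:
    CBP(K)x4 - CBP(K/2) - CBP(1) - DBR(K/2) - DBR(K)x4 - DBR(3) - Tanh."""
    def __init__(self, K=64):
        super().__init__()
        self.e1, self.e2, self.e3, self.e4 = CBP(3, K), CBP(K, K), CBP(K, K), CBP(K, K)
        self.e5, self.e6 = CBP(K, K // 2), CBP(K // 2, 1)        # shrinking layers
        self.d1, self.d2 = DBR(1, K // 2), DBR(K // 2, K)
        self.d3, self.d4, self.d5 = DBR(K, K), DBR(K, K), DBR(K, K)
        # Final layer: 3-channel deconvolution followed by Tanh (BN/ReLU omitted here).
        self.d6 = nn.Sequential(nn.ConvTranspose2d(K, 3, 3, 1, 1), nn.Tanh())

    def forward(self, x):
        f1 = self.e1(x); f2 = self.e2(f1); f3 = self.e3(f2)
        f4 = self.e4(f3); f5 = self.e5(f4); f6 = self.e6(f5)
        # Additive skip connections between mirrored layers, one every two layers
        # (the exact skip placement used in the paper is approximated here).
        g = self.d2(self.d1(f6)) + f4
        g = self.d4(self.d3(g)) + f2
        return self.d6(self.d5(g))
```

Because every layer preserves the spatial resolution, the mirrored feature maps can simply be added element-wise.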

III-C Discriminator

From the point of view of the GAN framework, the goal of de-raining an input rainy image is not only to make the de-rained result visually appealing and quantitatively comparable to the ground truth, but also to ensure that the de-rained result is indistinguishable from the ground truth image. Therefore, we include a learned discriminator sub-network D to classify whether each input image is real or fake. Following the structure proposed in [14], we use a convolutional layer with batch normalization and PReLU activation as the basic unit throughout the discriminator network. After the features are computed by a set of these Conv-BN-PReLU units, a sigmoid function is stacked at the end to map the output to a probability score normalized to [0, 1]. The proposed discriminator sub-network D is shown in the bottom part of Figure 3. The structure of the discriminator sub-network is as follows:

CB(K)-CBP(2K)-CBP(4K)-CBP(8K)-C(1)-Sigmoid

where CB(K) denotes a K-channel convolutional layer followed by batch normalization, and C(K) denotes a plain K-channel convolutional layer.
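A corresponding PyTorch sketch of the discriminator is shown below. The base channel width K and the reduction of the final sigmoid map to a single score per image are our assumptions; the kernel sizes and strides follow the parameters listed in Section IV-B.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Discriminator sketch: CB(K)-CBP(2K)-CBP(4K)-CBP(8K)-C(1)-Sigmoid.
    First three conv layers: 4x4 kernels, stride 2; last two: 4x4, stride 1."""
    def __init__(self, K=48):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, K, 4, 2, 1), nn.BatchNorm2d(K),                      # CB(K)
            nn.Conv2d(K, 2 * K, 4, 2, 1), nn.BatchNorm2d(2 * K), nn.PReLU(),  # CBP(2K)
            nn.Conv2d(2 * K, 4 * K, 4, 2, 1), nn.BatchNorm2d(4 * K), nn.PReLU(),
            nn.Conv2d(4 * K, 8 * K, 4, 1, 1), nn.BatchNorm2d(8 * K), nn.PReLU(),
            nn.Conv2d(8 * K, 1, 4, 1, 1),                                     # C(1)
            nn.Sigmoid(),
        )

    def forward(self, x):
        score_map = self.net(x)                  # per-region probabilities
        return score_map.mean(dim=(1, 2, 3))     # one score per image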

(a)
(b)
(c)
Fig. 4: Illustration of improvements obtained by using perceptual loss function. (a) Input image (b) Output without perceptual loss (artifacts can be observed) (c) Output with perceptual loss.

III-D Refined perceptual loss

As discussed earlier, GANs are known to be unstable to train and may produce noisy or incomprehensible results through the guided generator. A probable reason is that a new input may not come from the same distribution as the training samples. As illustrated in Figure 4, many artifacts are introduced by the normal GAN structure, which greatly degrades the visual quality of the output image. A possible solution is to introduce a perceptual loss into the network. Recently, loss functions measured on differences between high-level feature representations, such as losses computed on certain layers of a CNN [43], have demonstrated much better visual performance than the per-pixel loss used in traditional CNNs. However, in many cases they fail to preserve color and texture information [43], and they do not simultaneously achieve good quantitative performance. To ensure that the results have good visual and quantitative scores along with good discriminatory performance, we propose a new refined loss function. Specifically, we combine the pixel-to-pixel Euclidean loss, the perceptual loss [43] and the adversarial loss with appropriate weights to form our new refined loss function. The new loss function is defined as follows:

$$L = L_E + \lambda_a L_A + \lambda_p L_P, \qquad (3)$$

where L_A represents the adversarial loss (the loss from the discriminator D), L_P is the perceptual loss and L_E is a normal per-pixel loss function such as the Euclidean loss. Here, λ_p and λ_a are pre-defined weights for the perceptual loss and the adversarial loss, respectively. If we set both λ_a and λ_p to 0, the network reduces to a normal CNN configuration, which aims to minimize only the Euclidean loss between the output image and the ground truth. If λ_p is set to 0, the network reduces to a normal conditional GAN. If λ_a is set to 0, the network reduces to the structure proposed in [43].

The three loss functions L_E, L_P and L_A are defined as follows. Given an image pair {x, y_b} with C channels, width W and height H (i.e., C × W × H), where x is the input image and y_b is the corresponding ground truth, the per-pixel Euclidean loss is defined as:

$$L_E = \frac{1}{CWH}\sum_{c=1}^{C}\sum_{w=1}^{W}\sum_{h=1}^{H}\big\|\phi_G(\mathbf{x})^{c,w,h} - \mathbf{y}_b^{c,w,h}\big\|_2^2, \qquad (4)$$

where $\phi_G$ is the learned generator network that produces the de-rained output. Suppose the outputs of a certain high-level layer are of size $C_i \times W_i \times H_i$. The perceptual loss is then defined as

$$L_P = \frac{1}{C_i W_i H_i}\sum_{c=1}^{C_i}\sum_{w=1}^{W_i}\sum_{h=1}^{H_i}\big\|V(\phi_G(\mathbf{x}))^{c,w,h} - V(\mathbf{y}_b)^{c,w,h}\big\|_2^2, \qquad (5)$$

where $V$ represents a non-linear CNN transformation. Similar to the idea proposed in [43], we aim to minimize the distance between high-level features. In our method, we compute the feature loss at layer relu2_2 of the VGG-16 model [44] (https://github.com/ruimashita/caffe-train/blob/master/vgg.trainval.prototxt).

Given a set of N de-rained images generated from the generator, the entropy loss from the discriminator used to guide the generator is defined as:

$$L_A = -\frac{1}{N}\sum_{i=1}^{N}\log\big(\phi_D(\phi_G(\mathbf{x}_i))\big). \qquad (6)$$
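Putting (3)-(6) together, a hedged PyTorch sketch of the refined loss is shown below. A frozen VGG-16 relu2_2 feature extractor implements the perceptual term, mean-squared error implements the per-pixel Euclidean term, and the adversarial term is the entropy loss from the discriminator. The weights lambda_a and lambda_p here are placeholders, not the values used in the paper, and in practice the images should be normalized as VGG expects before computing the perceptual term.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class VGGFeatures(torch.nn.Module):
    """Frozen VGG-16 features up to relu2_2 for the perceptual term (Eq. 5).
    Requires torchvision >= 0.13 for the weights enum."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features[:9]
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()

    def forward(self, x):
        return self.vgg(x)

def refined_loss(D, vgg_features, fake, clean, lambda_a=1e-2, lambda_p=1.0):
    """Refined loss of Eq. (3): L = L_E + lambda_a * L_A + lambda_p * L_P.
    The weight values are placeholders, not those reported in the paper."""
    l_e = F.mse_loss(fake, clean)                                 # Eq. (4)
    l_p = F.mse_loss(vgg_features(fake), vgg_features(clean))     # Eq. (5)
    score = D(fake)
    l_a = F.binary_cross_entropy(score, torch.ones_like(score))   # Eq. (6)
    return l_e + lambda_a * l_a + lambda_p * l_p
```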

IV Experiments and Results

In this section, we present details of the experiments and quality measures used to evaluate the proposed ID-CGAN method. We also discuss the dataset and training details followed by comparison of the proposed method against a set of baseline methods and recent state-of-the-art approaches.

IV-A Dataset, training and evaluation details

IV-A1 Synthetic dataset

Due to the lack of large datasets for training and evaluating single image de-raining, we synthesized a new set of training and testing samples for our experiments. The training set consists of a total of 700 images, where 500 images are randomly chosen from the first 800 images in the UCID dataset [45] and 200 images are randomly chosen from the BSD-500 training set [46]. The test set consists of a total of 100 images, where 50 images are randomly chosen from the last 500 images in the UCID dataset and 50 images are randomly chosen from the test set of the BSD-500 dataset [46]. After the train and test sets are created, we add rain streaks to these images by following the guidelines mentioned in [12], using Photoshop (http://www.photoshopessentials.com/photo-effects/rain/). It is ensured that rain pixels of different intensities and orientations are added to generate a diverse training and test set. Note that the images with rain form the set of observed images and the corresponding clean images form the set of ground truth images. All the training and test samples are resized to 256 × 256.
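For completeness, a simple paired-data loader in PyTorch might look like the following sketch. The directory layout, file extension and the [-1, 1] normalization (to match the Tanh output of the generator) are our assumptions, not details of the released dataset.

```python
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class RainPairDataset(Dataset):
    """Loads (rainy, clean) image pairs resized to 256 x 256.
    Assumes two parallel folders with identically named .jpg files."""
    def __init__(self, rainy_dir, clean_dir):
        self.rainy = sorted(Path(rainy_dir).glob("*.jpg"))
        self.clean = sorted(Path(clean_dir).glob("*.jpg"))
        self.tf = transforms.Compose([
            transforms.Resize((256, 256)),
            transforms.ToTensor(),                       # scales to [0, 1]
            transforms.Normalize([0.5] * 3, [0.5] * 3),  # maps to [-1, 1]
        ])

    def __len__(self):
        return len(self.rainy)

    def __getitem__(self, i):
        x = self.tf(Image.open(self.rainy[i]).convert("RGB"))   # rainy input
        y = self.tf(Image.open(self.clean[i]).convert("RGB"))   # ground truth
        return x, y
```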

IV-A2 Real-world rainy images dataset

In order to demonstrate the effectiveness of the proposed method on real-world data, we created a dataset of 50 rainy images downloaded from the Internet. While creating this dataset, we took all possible care to ensure that the images collected were diverse in terms of content as well as intensity and orientation of the rain pixels. A few sample images from this dataset are shown in Figure 5. This dataset is used for evaluation (test) purpose only.

Fig. 5: Six sample images from the real rainy/snowy image dataset.
SPM [8] PRM [11] DSC [47] CNN [12] GMM [10] CCR [18] ID-CGAN
PSNR (dB) 18.88 20.46 18.56 19.12 22.27 20.56 22.73
SSIM 0.5832 0.7297 0.5996 0.6013 0.7413 0.7332 0.8133
UQI 0.4149 0.5668 0.4804 0.4706 0.5751 0.5582 0.6449
VIF 0.2197 0.3441 0.3325 0.3307 0.4042 0.3607 0.4148
TABLE II: Quantitative comparison of de-raining methods using four different criteria.

IV-A3 Quality measures

The following measures are used to evaluate the performance of different methods: Peak Signal to Noise Ratio (PSNR), Structural Similarity Index (SSIM) [48], Universal Quality Index (UQI) [49] and Visual Information Fidelity (VIF) [50]. Similar to previous methods [10], all of these quantitative measures are calculated using the luminance channel. Since we do not have ground truth reference images for the real dataset, the performance of the proposed and other methods on the real dataset is evaluated visually.
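As an illustration of how the luminance-channel evaluation can be computed, the sketch below uses scikit-image for PSNR and SSIM; UQI and VIF are not included in scikit-image and would need separate implementations. The helper names are ours.

```python
from skimage.color import rgb2ycbcr
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def luminance(img_rgb):
    """Y (luminance) channel of an RGB image with values in [0, 1]."""
    return rgb2ycbcr(img_rgb)[..., 0] / 255.0

def evaluate_pair(derained_rgb, ground_truth_rgb):
    """PSNR (dB) and SSIM computed on the luminance channel only."""
    y_hat, y_ref = luminance(derained_rgb), luminance(ground_truth_rgb)
    psnr = peak_signal_noise_ratio(y_ref, y_hat, data_range=1.0)
    ssim = structural_similarity(y_ref, y_hat, data_range=1.0)
    return psnr, ssim
```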

IV-B Model details and parameters

The entire network is trained on an Nvidia Titan-X GPU using the Torch framework [51]. We use a batch size of 7 and 100k training iterations. The Adam algorithm [52] is used for optimization with a fixed learning rate, and the weights λ_a and λ_p in (3) are kept fixed during training. All the convolutional and deconvolutional layers in the generator are composed of kernels of size 3 × 3 with a stride of 1 and zero-padding of 1. The first three convolutional layers of the discriminator D are composed of kernels of size 4 × 4 with a stride of 2 and zero-padding of 1. The last two layers of D are composed of kernels of size 4 × 4 with a stride of 1 and zero-padding of 1.
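A condensed training loop reflecting these settings is sketched below, reusing the discriminator_step_loss and refined_loss helpers from the earlier sketches. The learning rate and the loss weights inside refined_loss are placeholders; the batch size of 7 and the 100k iterations follow the text.

```python
import torch
from torch.utils.data import DataLoader

def train(G, D, dataset, vgg_features, iterations=100_000, batch_size=7,
          lr=2e-4, device="cuda"):
    """Alternating G/D updates; lr and the loss weights are assumed values."""
    G, D, vgg_features = G.to(device), D.to(device), vgg_features.to(device)
    opt_g = torch.optim.Adam(G.parameters(), lr=lr)
    opt_d = torch.optim.Adam(D.parameters(), lr=lr)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True, drop_last=True)
    step = 0
    while step < iterations:
        for rainy, clean in loader:
            rainy, clean = rainy.to(device), clean.to(device)

            # Discriminator update: ground truth -> real, generated -> fake.
            opt_d.zero_grad()
            d_loss = discriminator_step_loss(D, clean, G(rainy))
            d_loss.backward()
            opt_d.step()

            # Generator update: Euclidean + perceptual + adversarial (Eq. 3).
            opt_g.zero_grad()
            g_loss = refined_loss(D, vgg_features, G(rainy), clean)
            g_loss.backward()
            opt_g.step()

            step += 1
            if step >= iterations:
                break
```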

(a)
(b)
(c)
(d)
(e)
Fig. 6: Comparison of rain-streak removal using the proposed method and three baseline configurations. (a) Input image (b) GEN (c) CGAN (d) CGAN-P (e) ID-CGAN.

IV-C Comparison with baseline configurations

We compare the performance of our method with that of the following three baseline configurations:

  • GEN: The generator sub-network is trained using only the per-pixel Euclidean loss, obtained by setting λ_a and λ_p to zero in (3). This amounts to a traditional CNN architecture with Euclidean loss.

  • CGAN: The conditional GAN structure is trained using the per-pixel Euclidean loss and the adversarial loss, obtained by setting λ_p to zero in (3).

  • CGAN-P: The conditional GAN is trained using the perceptual loss and the adversarial loss, obtained by removing the per-pixel Euclidean loss term in (3).

All three configurations, along with ID-CGAN, are learned using training images from the synthetic training dataset. Quantitative results using the measures discussed earlier on test images from the synthetic dataset are shown in Table III. Sample results of the proposed method compared with the baseline configurations on test images from the real dataset are shown in Figure 6. It can be noted that the introduction of the adversarial loss improves the visual performance over the traditional CNN architecture but introduces artifacts (Figure 6(c)). The introduction of the perceptual loss in the conditional GAN framework tackles the artifacts better and enhances performance, but it does not completely remove them, resulting in reduced quantitative performance (especially PSNR, compared with GEN). Finally, the use of the perceptual loss combined with the Euclidean loss in the conditional GAN framework (ID-CGAN) achieves significantly better results than the baseline configurations. Comparing Figures 6(c) and 6(e), it can be observed that ID-CGAN removes the majority of the artifacts introduced by CGAN.

Fig. 7: Comparison of rain-streak removal using different methods and the proposed method on sample images from the synthetic dataset (panels: Input, Ground Truth, SPM [8], PRM [11], DSC [9], CNN [12], GMM [10], CCR [18], ID-CGAN).
GEN CGAN CGAN-P ID-CGAN
PSNR (dB) 22.45 22.05 22.37 22.73
SSIM 0.7292 0.7567 0.8053 0.8133
UQI 0.5280 0.5368 0.6335 0.6449
VIF 0.3042 0.3634 0.4052 0.4148
TABLE III: Quantitative results compared with three baseline configurations.
Fig. 8: Rain-streak removal results on two real images (panels: Input, SPM [8], PRM [11], DSC [9], CNN [12], GMM [10], CCR [18], ID-CGAN).

IV-D Comparison with state-of-the-art methods

We compare the performance of the proposed ID-CGAN method with the following recent state-of-the-art methods for single image de-raining:

  • SPM: Sparse dictionary-based method [8]

  • DSC: Discriminative sparse coding-based method [9]

  • PRM: Patch-rank prior based method [11]

  • GMM: GMM prior-based method [10]

  • CCR: Convolutional coding-based method [18]

  • CNN: CNN-based method [12]

IV-D1 Results on synthetic dataset

In the first set of experiments, we compare the quantitative and qualitative performance of different methods on the test images from the synthetic dataset. As the ground truth is available for these test images, we can calculate quantitative measures such as PSNR, SSIM, UQI and VIF. The results are shown in Table II. It can be clearly observed that the proposed ID-CGAN method achieves superior quantitative performance on all the measures.

To visually demonstrate the improvements obtained by the proposed method on the synthetic dataset, results on two difficult sample images are presented in Figure 7. Note that we deliberately select difficult images to show that our method performs well even under challenging conditions. While SPM [8] is able to remove the rain streaks, it produces blurred results that are not visually appealing. The other compared methods either reduce the intensity of the rain or remove the streaks only in parts; they fail to completely remove the rain streaks. In contrast, the proposed method successfully removes the majority of the rain streaks while maintaining the details of the de-rained images. Interestingly, in addition to removing rain streaks, the proposed method is also able to de-haze the image simultaneously. Unlike previous methods [10, 19] that use additional post-processing or a special cascading structure to remove rain and haze together, the proposed method is able to automatically de-rain and de-haze simultaneously within one framework.

IV-D2 Evaluation on real rainy images

We also evaluate the performance of the proposed method and recent state-of-the-art methods on real-world rainy test images. The de-rained results for all the methods on two sample input rainy images are shown in Figure 8. For better visual comparison, we show zoomed versions of two specific regions of interest below the de-rained results. From these regions of interest, we can clearly observe that SPM [8] and PRM [11] tend to produce blurred results and DSC [9] tends to add artifacts to the de-rained images. Even though the other three methods, GMM [10], CNN [12] and CCR [18], achieve good visual performance, rain drops are still visible in the zoomed regions of interest. In comparison, the proposed method removes most of the rain drops while maintaining the details of the background image.

IV-D3 Evaluation on real snowy images

It is well-acknowledged that snow streaks share much similarity with rain streaks, as snow is essentially the frozen form of rain. Therefore, it is also meaningful to explore how de-raining methods perform on the task of removing snow. In this set of experiments, we use the same ID-CGAN model trained for the de-raining task described above, i.e., the model learned from our synthesized rainy dataset. We evaluate our method on a set of snowy images, and results on two sample snowy images are shown in Figure 10. For better visual comparison, we show zoomed versions of two specific regions of interest below the de-snowed results. It can be clearly observed from these regions of interest that the proposed ID-CGAN achieves superior results without blurring the background details.

IV-D4 Drawbacks of the proposed method

Though the proposed method achieves good quantitative and visual performance on most of the test images, it struggles with white, round rain streaks, as shown in Figure 9. The proposed method tends to enhance such white round particles and can introduce additional artifacts into the de-rained result. We believe this is due to the following reasons. First, even though we try to create a dataset that incorporates different kinds of rain components, the size and diversity of the training set are still not large enough; hence, the network is unable to handle the white round particles that are rarely seen during training. Second, the high-level features of the CNN inherently capture the white round particles, so the perceptual loss may enhance them automatically. Note that similar results are also observed in Figure 9 of [43].

(a)
(b)

(c)
(d)
Fig. 9: Example of failure cases using the proposed method (a)&(c) Input images. (b)&(d) De-rained results. It can be observed that though the proposed method is able to remove rain streaks, a few white round particles still remain in the output images.

V Conclusion

In this paper, we proposed a conditional GAN-based algorithm for the removal of rain streaks from a single image. In comparison to existing approaches, which attempt to solve the de-raining problem in an image decomposition framework using prior information, we investigated the use of conditional GANs for synthesizing a de-rained image from a given input rainy image. For improved training stability and to reduce the artifacts introduced by GANs in the output images, we proposed the use of a new refined loss function in the GAN optimization framework. Detailed experiments and comparisons were performed on synthetic and real-world images to demonstrate that the proposed ID-CGAN method significantly outperforms many recent state-of-the-art methods. Additionally, the proposed ID-CGAN method was compared against baseline configurations to illustrate the performance gains obtained by introducing the refined perceptual loss into the conditional GAN framework.

In spite of the superior performance achieved by the proposed method, it still suffers from a few drawbacks; for example, it fails to remove white, round rain particles. In the future, we aim to build upon the conditional GAN framework to overcome these drawbacks and investigate the possibility of using similar structures for solving related problems.

Fig. 10: Comparison of snow-streak removal using different methods and the proposed method on two sample images from the real dataset (panels: Input, SPM [8], PRM [11], DSC [9], CNN [12], GMM [10], CCR [18], ID-CGAN).

Acknowledgment

This work was supported by an ARO grant W911NF-16-1-0126.

References