Image De-raining Using a Conditional Generative Adversarial Network
Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions, thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect the performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional Generative Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance in the objective function. Experiments on synthetic and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.
It has been widely acknowledged that unpredictable impairments such as illumination, noise and severe weather conditions (i.e. rain, snow and fog) adversely affect the performance of many computer vision algorithms such as detection, classification and tracking. This is primarily due to the fact that these algorithms are trained using images that are captured under well-controlled conditions. For instance, it can be observed from Figure 1(c) that the presence of heavy rain greatly impairs the visual quality of the image, thus rendering face detection and verification algorithms ineffective under such degradations. A possible method to address this issue is to include images captured under unconstrained conditions in the training process of these algorithms. However, it may not be practical to collect such images for all classes in the training set, especially in a large scale setting. In addition, in this age of ubiquitous smartphone usage, images captured by smartphone cameras under difficult weather conditions undergo degradations that drastically affect their visual quality, making them useless for sharing and usage. In order to improve the overall quality of such degraded images for better visual appeal and to ensure enhanced performance of vision algorithms, it becomes essential to automatically remove undesirable artifacts arising due to the difficult weather conditions discussed above. In this paper, we investigate conditional generative adversarial networks (GANs) to address this issue, where a pre-trained discriminator network is used as a guide to synthesize images free from weather-based degradations. Specifically, we propose a single image based de-raining/de-snowing algorithm using a conditional GAN framework for visually enhancing images that have undergone degradations due to rain and/or snow.
One can model the observed rainy image as the superposition of two images - one corresponding to rain streaks and the other corresponding to the clear background image (see Figure 2). Hence, the input rainy image $y$ can be expressed as

$$y = x + r, \qquad (1)$$

where $x$ represents the clear background image and $r$ represents the rain streaks. As a result, similar to image de-noising and image separation [1, 2, 3, 4], image de-raining can be viewed as the problem of separating two components from a rainy image.
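As a toy illustration of this superposition model, a rainy image can be synthesized by adding a streak layer to a clean image and clipping to the valid intensity range. The following NumPy sketch is illustrative only; the `add_rain` helper and the toy arrays are ours, not part of the paper:

```python
import numpy as np

def add_rain(background, streaks):
    """Superpose a rain-streak layer on a clean background image,
    following the additive model of eq. (1), and clip to [0, 1]."""
    return np.clip(background + streaks, 0.0, 1.0)

# toy 4x4 grayscale example: flat background plus one bright streak
x = np.full((4, 4), 0.5)   # clear background image
r = np.zeros((4, 4))
r[1, :] = 0.4              # a single horizontal "streak"
y = add_rain(x, r)         # observed rainy image
```

De-raining is then the inverse problem: recovering `x` given only `y`, which is ill-posed because many `(x, r)` pairs produce the same `y`.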
In the case of video-based de-raining, a common strategy to solve (1) is to use additional temporal information, such as the methods proposed in [5, 6, 7]. However, this strategy is not applicable for single image de-raining. In such cases, researchers have used appropriate prior information such as the sparsity prior [8, 9], the Gaussian Mixture Model (GMM) prior and the patch-rank prior to make the de-raining problem more regularized. Most recently, due to their strong ability to learn end-to-end mappings, Convolutional Neural Networks (CNNs) have been successfully applied to the single image de-raining problem [12, 13]. By learning a non-linear mapping between the input rainy image and its corresponding ground truth, CNN-based methods are able to achieve superior visual performance.
Even though these existing methods have been successful, we note that they do not incorporate additional information into the optimization. Hence, to design a visually appealing de-raining algorithm, the following information should be considered in the optimization framework:

The objective function should include the criterion that the performance of vision algorithms such as detection and classification should not be affected by the presence of rain streaks. The inclusion of this discriminative information ensures that the reconstructed image is indistinguishable from its original counterpart.

Rather than concentrating only on the characterization of rain streaks, visual quality should also be considered in the optimization function. By doing this, we can ensure that the de-rained image looks visually appealing without losing important details.
In this work, we incorporate these criteria by proposing a new conditional GAN-based framework called Image De-raining Conditional Generative Adversarial Network (ID-CGAN) to address the single image de-raining problem. Similar to existing approaches that solve (1) using additional prior information as constraints, we instead propose to use a discriminator model as a guide to optimize the de-raining algorithm. Inspired by the recent success of GANs for pixel-level vision tasks such as image generation [14, 15, 16] and image super-resolution, our network consists of two models: a generator model (G) and a discriminator model (D). The generator model acts as a mapping function that translates an input rainy image to a de-rained image such that it fools the discriminator model, which is trained to distinguish rainy images from images without rain. However, traditional GANs are not stable to train and may introduce artifacts in the output image, making it visually unpleasant and artificial. To address this issue, we define a new refined perceptual loss to serve as an additional loss function which aids the proposed network in generating visually pleasing outputs. Sample results of the proposed ID-CGAN algorithm are shown in Figure 1. In summary, this paper makes the following contributions:
A conditional GAN-based optimization framework is presented to address the challenging single image de-raining problem without the use of any additional post-processing.
A refined generator sub-network that is specially designed for the single image de-raining task is presented.
A new perceptual loss function is defined to be used in the optimization task to ensure better visual appeal of the end results.
Extensive experiments are conducted on publicly available and synthesized datasets. Detailed qualitative and quantitative comparisons with existing state-of-the-art methods are presented. (Datasets and experimental implementation are publicly available.)
This paper is organized as follows. A brief background on de-raining, GANs and perceptual loss is given in Section II. The details of the proposed ID-CGAN method are given in Section III. Experimental results on both synthetic and real images are presented in Section IV. Finally, Section V concludes the paper with a brief summary and discussion.
In this section, we briefly review the literature for existing single image de-raining methods, conditional GANs and perceptual loss.
As discussed in Section I, single image de-raining is an extremely challenging task due to its ill-posed nature and unavailability of temporal information which could have been used as additional constraints. Hence, in order to generate optimal solutions to this problem, different kinds of prior information are enforced into the optimization function. Sparse coding-based clustering method 
is among the first to tackle the single image de-raining problem, where the authors proposed to solve it in an image decomposition framework. They first separated the input image into low frequency and high frequency components using a bilateral filter. The high frequency image is further decomposed into rain and non-rain components based on the assumption that learned dictionary atoms can sparsely represent the clear background image and the rain-streak image separately. An important assumption made in this approach is that rain streaks usually have similar edge orientations, which may result in the removal of non-rain components as rain. Also, the method's effectiveness depends on the performance of the bilateral filter and the clustering of basis vectors for generating the sparse representation. Similar to the above approach, Luo et al. propose a discriminative sparse coding based method that incorporates the mutual exclusivity property into the optimization framework. Though the authors report significant improvements over previous methods, their method is ineffective in removing large rain streaks due to the assumption that rain streaks are high frequency components. In addition, due to the same assumption, their method generates artifacts around the rain-streak components in the resulting images.
In another approach, Chen et al. proposed a low-rank representation-based method that uses patch-rank as a prior to characterize unpredictable rain patterns. They use a low-rank model to capture correlated rain streaks. Observing that dictionary and low-rank based methods tend to leave too many rain pixels in the output image, Li et al. used the image decomposition framework to propose patch-based priors for the background and rain images. These priors are based on GMMs, which can accommodate multiple orientations and scales of rain streaks. These methods [11, 10] are based on the assumption that rain streaks have similar patterns and orientations. Due to this assumption, they tend to capture other global repetitive patterns such as brick and texture, which results in the removal of certain non-rain components from the background image. To address this issue, Zhang et al. recently proposed a convolutional coding-based method that uses a set of learned convolutional low-rank filters to capture the rain pixels. Most recently, due to their immense success in learning non-linear functions, several CNN-based methods have also been proposed to directly learn an end-to-end mapping between the input and its corresponding ground truth for de-raining [12, 13, 19]. Table I summarizes the comparison of our proposed ID-CGAN to other single image de-raining methods.
Generative Adversarial Networks were proposed by Goodfellow et al. to synthesize realistic images by effectively learning the distribution of training images. The authors adopted a game theoretic min-max optimization framework to simultaneously train two models: a generative model and a discriminative model. The goal of a GAN is to train the generator to produce samples from the training distribution such that the synthesized samples are indistinguishable from the actual distribution by the discriminator. Unlike other generative models such as Generative Stochastic Networks, GANs do not require a Markov chain for sampling and can be trained using standard gradient descent methods. Initially, the success of GANs was limited as they were known to be unstable to train, often resulting in artifacts in the synthesized images. Radford et al. proposed Deep Convolutional GANs (DCGANs) to address the issue of instability by including a set of constraints on their topology. Another limiting issue in GANs is that there is no control over the modes of data being synthesized by the generator in the case of these unconditioned generative models. Mirza et al. incorporated additional conditional information in the model, which resulted in effective learning of the generator. The use of conditioning variables for augmenting side information not only increased stability in learning but also improved the descriptive power of the generator. Recently, researchers have explored various aspects of GANs such as training improvements and the use of task specific cost functions. Also, an alternative viewpoint on the discriminator function is explored by Zhao et al., where they deviate from the traditional probabilistic interpretation of the discriminator model.
The success of GANs in synthesizing realistic images has led researchers to explore the GAN framework for numerous applications such as style transfer, image inpainting, text-to-image translation, image-to-image translation, texture synthesis and generating outdoor scenes from attributes. Isola et al. proposed a general purpose solution for image-to-image translation using conditional adversarial networks. Apart from learning a mapping function, they argue that the network also learns a loss function, eliminating the need for specifying or hand-designing a task-specific loss function. Karacan et al. proposed a deep GAN conditioned on semantic layout and scene attributes to synthesize realistic outdoor scene images under different conditions. Recently, Jetchev et al.
proposed spatial GANs for texture synthesis. Deviating from traditional GANs, their input noise distribution constitutes a whole spatial tensor instead of a vector, thus enabling them to create architectures more suitable for texture synthesis.
Researchers have explored different loss functions and their combinations for effective learning in tasks such as super-resolution, semantic segmentation, depth estimation, feature inversion and style transfer. Initial work on CNN-based image translation or restoration optimized over a pixel-wise L2-norm (Euclidean loss) or L1-norm between the predicted and ground truth images [33, 34]. Since these losses operate at the pixel level, their ability to capture high-level perceptual/contextual details is limited and they tend to produce blurred results. Hence, many authors argue and demonstrate through their results that it is better to optimize a perceptual loss function, where the aim is to minimize the perceptual difference between the reconstructed image and the ground truth image. In a different approach, the conditional GAN framework can also be considered an attempt to explore a structured loss function, where a generator network is trained to minimize the discriminator's ability to correctly classify between the synthesized image and the corresponding ground truth image. Researchers have attempted to solve various reconstruction tasks such as image super-resolution and style transfer, where the conditional GAN framework augmented with perceptual and L2 loss functions has been used to produce state-of-the-art results [17, 15].
Instead of solving (1) in a decomposition framework to address the single image de-raining problem, we aim to directly learn a mapping from an input rainy image to a de-rained (background) image by constructing a conditional GAN-based deep network called ID-CGAN. The proposed network is composed of three important parts (generator, discriminator and perceptual loss function) that serve distinct purposes. Similar to traditional GANs [15, 20], we have two sub-networks: a generator sub-network G and a discriminator sub-network D. The generator sub-network G is a symmetric deep CNN with appropriate skip connections, as shown in the top part of Figure 3. Its primary goal is to synthesize a de-rained image from an image that is degraded by rain (the input rainy image). The discriminator sub-network D, as shown in the bottom part of Figure 3, serves to distinguish the 'fake' de-rained image synthesized by the generator from the corresponding ground truth 'real' image. It can also be viewed as guidance for the generator G. Since GANs are known to be unstable to train, which results in artifacts in the output image synthesized by G, we define a refined perceptual loss function to address this issue. Additionally, this new refined loss function ensures that the generated (de-rained) images are visually appealing. In the following sub-sections, we discuss these important parts in detail, starting with the GAN objective function followed by the generator/discriminator sub-networks and the refined perceptual loss.
In order to learn a good generator G so as to fool the learned discriminator D, and to make the discriminator D good enough to distinguish the synthesized de-rained image from the real ground truth, the proposed method alternately updates G and D following the structure proposed in [20, 15]. Given an input rainy image x and a random noise vector z, the conditional GAN aims to learn a mapping function that generates an output image y by solving the following optimization problem:

$$\min_G \max_D \; \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]. \qquad (2)$$
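For concreteness, the empirical value of this min-max objective for a batch of discriminator scores can be computed as below; D is updated to ascend this quantity while G is updated to descend its second term. This is an illustrative NumPy sketch, and the function name and score inputs are ours, not the paper's:

```python
import numpy as np

def cgan_objective(d_real, d_fake):
    """Empirical conditional-GAN objective for one batch:
    E[log D(x, y)] + E[log(1 - D(x, G(x, z)))].
    d_real: D's scores on (rainy input, ground truth) pairs,
    d_fake: D's scores on (rainy input, generated) pairs, in (0, 1)."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# a near-perfect discriminator attains a value close to the maximum, 0
ideal = cgan_objective(np.array([0.999999]), np.array([1e-6]))
# a maximally confused discriminator scores everything at 0.5
confused = cgan_objective(np.array([0.5, 0.5]), np.array([0.5, 0.5]))
```

At the equilibrium sought by the alternating updates, G produces de-rained images that D cannot distinguish from ground truth, pinning D's scores near 0.5.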
As the goal of single image de-raining is to generate a pixel-level de-rained image, the generator should be able to remove rain streaks as much as possible without losing any detail of the background image. Hence, the key lies in designing a good structure to generate the de-rained image.
Existing methods for solving (1), such as sparse coding-based methods [8, 40, 3, 4], neural network-based methods and CNN-based methods, have all adopted a symmetric structure. For example, sparse coding-based methods use learned or pre-defined synthesis dictionaries to decode the input noisy image into a sparse coefficient map. Then another set of analysis dictionaries is used to transfer the coefficients to the desired clear output. Usually, the input rainy image is transferred to a specific domain for effective separation of the background image and the undesired component (rain streaks). After separation, the background image (in the new domain) has to be transferred back to the original domain, which requires the use of a symmetric process. Therefore, we also adopt a symmetric structure to form our generator sub-network. Similar to traditional low-level vision CNN frameworks, the generator directly learns an end-to-end mapping from the input rainy image to its corresponding ground truth.
The proposed generator with a symmetric structure is shown in the top part of Figure 3. A set of convolutional layers (along with batch normalization and PReLU activation) is stacked at the front, acting as a learned feature extractor or semantic attribute extractor. Then, three shrinking layers are stacked in the middle for better computational efficiency. These three shrinking layers can also be regarded as performing linear combinations within the learned features. These are followed by a stack of deconvolutional layers (a deconvolutional layer is also called a transposed convolutional layer), along with batch normalization and ReLU activation. Note that the deconvolutional layers are a mirrored version of the forward convolutional layers. For all layers, we use a stride of 1 and pad appropriate zeros to maintain the dimension of each feature map to be the same as that of the input. To make the network efficient in training and achieve better convergence, we add symmetric skip connections to the proposed generator sub-network, similar to existing approaches. The generator network is as follows:
where each convolutional block is a set of convolutional layers followed by batch normalization and PReLU activation, and each deconvolutional block is a set of deconvolutional layers followed by batch normalization and ReLU activation. Skip connections are added between every two blocks, as shown in Figure 3.
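The symmetric encoder/decoder structure with skip connections can be sketched as follows. This is a simplified stand-in using placeholder block functions; the exact pairing of skips (every two blocks) is our reading of Figure 3, and the helper names are ours:

```python
def symmetric_generator(x, conv_blocks, deconv_blocks):
    """Run a mirrored encoder/decoder: cache the output of every
    second convolutional block and add it (a symmetric skip
    connection) to the input of the matching deconvolutional block."""
    skips = []
    h = x
    for i, conv in enumerate(conv_blocks):
        h = conv(h)
        if i % 2 == 1:            # cache every second feature map
            skips.append(h)
    for i, deconv in enumerate(deconv_blocks):
        if i % 2 == 0 and skips:  # symmetric skip connection
            h = h + skips.pop()
        h = deconv(h)
    return h

# toy run with scalar "feature maps" and trivial blocks
out = symmetric_generator(
    0,
    conv_blocks=[lambda a: a + 1] * 4,  # encoder outputs: 1, 2, 3, 4
    deconv_blocks=[lambda a: a] * 4,    # decoder: identity blocks
)
```

The skips forward early feature maps directly to the decoder, which eases gradient flow during training and helps preserve background detail that would otherwise be lost through the bottleneck.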
From the point of view of GAN framework, the goal of de-raining an input rainy image is not only to make the de-rained result visually appealing and quantitatively comparable to the ground truth, but also to ensure that the de-rained result is indistinguishable from ground truth image. Therefore, we include a learned discriminator sub-network to classify if each input image is real or fake. Following the structure that was proposed in 
, we use convolutional layers with batch normalization and PReLU activation as the basis throughout the discriminator network. Once the learned features are computed from a set of these Conv-BN-PReLU blocks, a sigmoid function is stacked at the end to map the output to a probability score normalized to [0, 1]. The proposed discriminator sub-network is shown in the bottom part of Figure 3. The structure of the discriminator sub-network is as follows:

where the intermediate blocks are sets of convolutional layers followed by batch normalization, and the final block is a set of convolutional layers.
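The activation functions used in the discriminator can be sketched numerically: PReLU after each Conv-BN block, and a sigmoid at the end to map the final response to a probability. The slope value 0.25 below is an illustrative default, not the paper's trained setting:

```python
import numpy as np

def prelu(z, slope=0.25):
    """Parametric ReLU: identity for positive inputs, a small
    (normally learned) slope for negative ones."""
    return np.where(z > 0, z, slope * z)

def sigmoid(z):
    """Map the final feature response to a probability in [0, 1],
    i.e. the score that the input image is a real, rain-free image."""
    return 1.0 / (1.0 + np.exp(-z))
```

The sigmoid output is what the adversarial loss operates on: the generator is trained to push this score toward 1 for its synthesized de-rained images.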
As discussed earlier, GANs are known to be unstable to train and may produce noisy or incomprehensible results via the guided generator. A probable reason is that the new input may not come from the same distribution as the training samples. As illustrated in Figure 4, it can be clearly observed that many artifacts are introduced by the normal GAN structure, which greatly affects the visual quality of the output image. A possible solution is to introduce a perceptual loss into the network. Recently, loss functions measured on the difference of high-level feature representations, such as a loss measured on certain layers of a CNN, have demonstrated much better visual performance than the per-pixel loss used in traditional CNNs. However, in many cases such a loss fails to preserve color and texture information. Also, it does not simultaneously achieve good quantitative performance. To ensure that the results have good visual and quantitative scores along with good discriminatory performance, we propose a new refined loss function. Specifically, we combine the pixel-to-pixel Euclidean loss, the perceptual loss and the adversarial loss together with appropriate weights:

$$L = L_E + \lambda_a L_A + \lambda_p L_P, \qquad (3)$$

where $L_A$ represents the adversarial loss (the loss from the discriminator D), $L_P$ is the perceptual loss and $L_E$ is a normal per-pixel loss function such as the Euclidean loss. Here, $\lambda_p$ and $\lambda_a$ are pre-defined weights for the perceptual loss and the adversarial loss, respectively. If we set both $\lambda_p$ and $\lambda_a$ to 0, the network reduces to a normal CNN configuration, which aims to minimize only the Euclidean loss between the output image and the ground truth. If $\lambda_p$ is set to 0, the network reduces to a normal conditional GAN. If $\lambda_a$ is set to 0, the network reduces to the perceptual-loss structure proposed in prior work.
The three loss functions $L_E$, $L_P$ and $L_A$ are defined as follows. Given an image pair $\{x, y_b\}$ with $C$ channels, width $W$ and height $H$ (i.e. $C \times W \times H$), where $x$ is the input image and $y_b$ is the corresponding ground truth, the per-pixel Euclidean loss is defined as:

$$L_E = \frac{1}{CWH} \sum_{c=1}^{C} \sum_{w=1}^{W} \sum_{h=1}^{H} \left\| \phi_G(x)^{c,w,h} - y_b^{c,w,h} \right\|_2^2,$$

where $\phi_G$ is the learned network for generating the de-rained output. Suppose the outputs of a certain high-level layer are of size $C_i \times W_i \times H_i$. Similarly, the perceptual loss is defined as

$$L_P = \frac{1}{C_i W_i H_i} \sum_{c=1}^{C_i} \sum_{w=1}^{W_i} \sum_{h=1}^{H_i} \left\| V(\phi_G(x))^{c,w,h} - V(y_b)^{c,w,h} \right\|_2^2,$$

where $V$ represents a non-linear CNN transformation. Similar to ideas proposed in prior work, we aim to minimize the distance between high-level features. In our method, we compute the feature loss at layer relu2_2 of the VGG-16 model (https://github.com/ruimashita/caffe-train/blob/master/vgg.trainval.prototxt).

Given a set of $N$ de-rained images generated from the generator, the entropy loss from the discriminator to guide the generator is defined as:

$$L_A = -\frac{1}{N} \sum_{i=1}^{N} \log\left( \phi_D(\phi_G(x_i)) \right).$$
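Putting the three terms together, the refined loss can be sketched as follows. This is a NumPy sketch operating on arrays standing in for an image, its high-level feature maps, and a single discriminator score; the λ values shown are placeholders, not the paper's trained settings:

```python
import numpy as np

def refined_loss(pred, gt, feat_pred, feat_gt, d_score,
                 lambda_p=1.0, lambda_a=0.01):
    """L = L_E + lambda_a * L_A + lambda_p * L_P, following eq. (3).
    pred/gt:           generated and ground-truth images, shape (C, W, H)
    feat_pred/feat_gt: high-level (e.g. VGG relu2_2) feature maps
    d_score:           discriminator's score on the generated image"""
    l_e = np.mean((pred - gt) ** 2)            # per-pixel Euclidean loss
    l_p = np.mean((feat_pred - feat_gt) ** 2)  # perceptual loss
    l_a = -np.log(d_score)                     # adversarial (entropy) loss
    return l_e + lambda_a * l_a + lambda_p * l_p
```

Note that `np.mean` over all elements equals the sum divided by $CWH$ (respectively $C_i W_i H_i$), matching the normalizations in the definitions above; a fully fooled discriminator (`d_score` = 1) contributes zero adversarial loss.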
In this section, we present details of the experiments and quality measures used to evaluate the proposed ID-CGAN method. We also discuss the dataset and training details followed by comparison of the proposed method against a set of baseline methods and recent state-of-the-art approaches.
Due to the lack of availability of large datasets for training and evaluation of single image de-raining, we synthesized a new set of training and testing samples for our experiments. The training set consists of a total of 700 images, where 500 images are randomly chosen from the first 800 images in the UCID dataset and 200 images are randomly chosen from the BSD-500 training set. The test set consists of a total of 100 images, where 50 images are randomly chosen from the last 500 images in the UCID dataset and 50 images are randomly chosen from the BSD-500 test set. After the train and test sets are created, we add rain streaks to these images using Photoshop (http://www.photoshopessentials.com/photo-effects/rain/), following established guidelines. It is ensured that rain pixels of different intensities and orientations are added to generate a diverse training and test set. Note that the images with rain form the set of observed images and the corresponding clean images form the set of ground truth images. All the training and test samples are resized to 256×256.
In order to demonstrate the effectiveness of the proposed method on real-world data, we created a dataset of 50 rainy images downloaded from the Internet. While creating this dataset, we took all possible care to ensure that the images collected were diverse in terms of content as well as intensity and orientation of the rain pixels. A few sample images from this dataset are shown in Figure 5. This dataset is used for evaluation (test) purpose only.
The following measures are used to evaluate the performance of different methods: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), Universal Quality Index (UQI) and Visual Information Fidelity (VIF). Similar to previous methods, all of these quantitative measures are calculated on the luminance channel. Since ground truth reference images are not available for the real dataset, the performance of the proposed and other methods on the real dataset is evaluated visually.
The entire network is trained on an Nvidia Titan-X GPU using the Torch framework. We use a batch size of 7 and 100k training iterations. The Adam algorithm is used for optimization, and the weights $\lambda_a$ and $\lambda_p$ are fixed during training. All the convolutional and deconvolutional layers in the generator are composed of kernels of size 3×3 with a stride of 1 and zero-padding of 1. The convolutional layers in the first three layers of the discriminator are composed of kernels of size 4×4 with a stride of 2 and zero-padding of 1. The last two layers of the discriminator are composed of kernels of size 4×4 with a stride of 1 and zero-padding of 1.
We compare the performance of our method with that of the following three baseline configurations:
GEN: The generator sub-network is trained using the per-pixel Euclidean loss by setting $\lambda_a$ and $\lambda_p$ to zero in (3). This amounts to a traditional CNN architecture with Euclidean loss.
CGAN: The conditional GAN structure is trained using the per-pixel Euclidean loss by setting $\lambda_p$ to zero in (3).
CGAN-P: The conditional GAN is trained using the perceptual loss by setting the Euclidean loss term to zero in (3).
All three configurations along with ID-CGAN are learned using training images from the synthetic training dataset. Quantitative results using the measures discussed earlier on test images from the synthetic dataset are shown in Table III. Sample results of the proposed method compared with the baseline configurations on test images from the real dataset are shown in Figure 6. It can be noted that the introduction of the adversarial loss improves the visual performance over the traditional CNN architecture but introduces artifacts (Figure 6(c)). The introduction of the perceptual loss in the conditional GAN framework tackles the artifacts better and enhances the performance, but it is not successful in completely removing these artifacts, resulting in reduced quantitative performance (especially PSNR compared with GEN). Finally, the use of the perceptual loss combined with the Euclidean loss in the conditional GAN framework (ID-CGAN) achieves significantly better results than the baseline configurations. It can be observed from Figure 6 that ID-CGAN removes the majority of the artifacts introduced by CGAN.
We compare the performance of the proposed ID-CGAN method with the following recent state-of-the-art methods for single image de-raining:
In the first set of experiments, we compare the quantitative and qualitative performance of different methods on the test images from the synthetic dataset. As the ground truth is available for these test images, we can calculate quantitative measures such as PSNR, SSIM, UQI and VIF. Results are shown in Table II. It can be clearly observed that the proposed ID-CGAN method achieves superior quantitative performance on all measures.
To visually demonstrate the improvements obtained by the proposed method on the synthetic dataset, results on two difficult sample images are presented in Figure 7. Note that we selectively sample difficult images to show that our method also performs well in difficult conditions. While SPM is able to remove the rain streaks, it produces blurred results which are not visually appealing. The other compared methods are able to either reduce the intensity of rain or remove the streaks in parts; however, they fail to completely remove the rain streaks. In contrast, the proposed method successfully removes the majority of the rain streaks while maintaining the details of the de-rained images. Surprisingly, in addition to the removal of rain streaks, the proposed method is also able to de-haze the image. Unlike previous methods [10, 19] that use additional post-processing or a special cascading structure to remove rain and haze together, the proposed method is able to automatically de-rain and de-haze simultaneously within one framework.
We also evaluated the performance of the proposed method and recent state-of-the-art methods on real-world rainy test images. The de-rained results for all the methods on two sample input rainy images are shown in Figure 8. For better visual comparison, we show zoomed versions of the two specific regions-of-interest below the de-rained results. By looking at these regions-of-interest, we can clearly observe that SPM  and PRM  tend to produce blurred results and DSC  tends to add artifacts on the de-rained images. Even though the other three methods GMM , CNN  and CCR  are able to achieve good visual performance, rain drops are still visible in the zoomed regions-of-interest. In comparison, the proposed method is able to remove most of the rain drops while maintaining the details of the background image.
It has been well acknowledged that snow streaks share much similarity with rain streaks, as snow is essentially the frozen form of rain. Therefore, it is also meaningful to explore how de-raining methods work for the task of removing snow. In this part of the experiments, we use the same ID-CGAN model trained for the de-raining task described above, where the model is learned from our synthesized rainy dataset. We evaluated our method on a set of snowy images, and the results on two sample snowy images are shown in Figure 10. For better visual comparison, we show zoomed versions of two specific regions-of-interest below the de-snowed results. It can be clearly observed from these regions-of-interest that the proposed ID-CGAN is able to achieve superior results without blurring the background details.
Though the proposed method achieves good quantitative and visual performance on most of the test images, it struggles with white round rain streaks, as shown in Figure 9. The proposed method inherently enhances white round particles and still introduces additional artifacts into the de-rained result. We believe this is due to the following reasons. Firstly, even though we attempt to create a dataset that incorporates different kinds of rain components, the size and diversity of the training set is still not large enough. Hence, the network is unable to handle the white round particles that are rarely seen during training. Secondly, the high-level features of the CNN inherently capture the white round particles, so the perceptual loss may enhance them automatically. Note that similar results are also observed in Figure 9 of prior work.
In this paper, we proposed a conditional GAN-based algorithm for the removal of rain streaks from a single image. In comparison to existing approaches that attempt to solve the de-raining problem in an image decomposition framework using prior information, we investigated the use of conditional GANs for synthesizing a de-rained image from a given input rainy image. For improved training stability and to reduce the artifacts introduced by GANs in the output images, we proposed the use of a new refined loss function in the GAN optimization framework. Detailed experiments and comparisons on synthetic and real-world images demonstrate that the proposed ID-CGAN method significantly outperforms many recent state-of-the-art methods. Additionally, the proposed ID-CGAN method is compared against baseline configurations to illustrate the performance gains obtained by introducing the refined perceptual loss into the conditional GAN framework.
In spite of the superior performance achieved by the proposed method, it still suffers from a few drawbacks; for example, it fails to remove white round rain particles. In the future, we aim to build upon the conditional GAN framework to overcome these drawbacks and investigate the possibility of using similar structures for solving related problems.
This work was supported by an ARO grant W911NF-16-1-0126.
Computer Vision and Pattern Recognition, 2004. CVPR 2004. Proceedings of the 2004 IEEE Computer Society Conference on, vol. 1. IEEE, pp. I–528.
S. Zheng, S. Jayasumana, B. Romera-Paredes, V. Vineet, Z. Su, D. Du, C. Huang, and P. H. Torr, "Conditional random fields as recurrent neural networks," in Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1529–1537.
R. Collobert, K. Kavukcuoglu, and C. Farabet, "Torch7: A matlab-like environment for machine learning," in BigLearn, NIPS Workshop, 2011.