Perceptual Adversarial Networks for Image-to-Image Transformation

06/28/2017 ∙ by Chaoyue Wang, et al. ∙ University of Technology Sydney ∙ The University of Sydney

In this paper, we propose principled Perceptual Adversarial Networks (PAN) for image-to-image transformation tasks. Unlike existing application-specific algorithms, PAN provides a generic framework for learning the mapping between paired images (Fig. 1), such as mapping a rainy image to its de-rained counterpart, object edges to a photo, or semantic labels to a scene image. The proposed PAN consists of two feed-forward convolutional neural networks (CNNs): the image transformation network T and the discriminative network D. By combining the generative adversarial loss and the proposed perceptual adversarial loss, these two networks can be trained alternately to solve image-to-image transformation tasks. In particular, the hidden layers and output of the discriminative network D are updated to continually and automatically discover the discrepancy between the transformed image and the corresponding ground truth, while the image transformation network T is trained to minimize the discrepancy explored by the discriminative network D. Through this adversarial training process, the image transformation network T continually narrows the gap between transformed images and ground-truth images. Experiments on several image-to-image transformation tasks (e.g., image de-raining and image inpainting) show that the proposed PAN outperforms many related state-of-the-art methods.


1 Introduction

Image-to-image transformation tasks aim to transform an input image into a desired output image, and they arise in a range of applications in image processing, computer graphics, and computer vision. Examples include generating high-quality images from degraded (e.g., simplified, corrupted, or low-resolution) inputs, and transforming a color input image into its semantic or geometric representations. Further examples include, but are not limited to, image de-noising [15], image in-painting [3], image super-resolution [31], image colorization [29], and image segmentation [22].

In recent years, convolutional neural networks (CNNs) have been trained in a supervised manner for various image-to-image transformation tasks [17, 34, 10, 48]. They encode the input image into a hidden representation, which is then decoded into the output image. By penalizing the discrepancy between the output image and the ground-truth image, optimal CNNs can be trained to discover the mapping from the input image to the transformed image of interest. These CNNs are developed with distinct motivations and differ in the design of the loss function.

One of the most straightforward approaches is to evaluate output images pixel-wise [10, 7, 38], e.g., using the L2 (or L1) norm to calculate the distance between the output and ground-truth images in pixel space. Although this can generate reasonable images for many image-to-image transformation tasks, the outputs often exhibit noticeable defects, such as blurred results (lack of high-frequency information) and artifacts (lack of perceptual information).
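For concreteness, the per-pixel losses discussed above can be written in a few lines of PyTorch. This is an illustrative sketch, not code from the referenced works; `output` and `target` are assumed to be image tensors on the same scale.

```python
import torch.nn.functional as F

def per_pixel_loss(output, target, norm="l1"):
    """Pixel-space discrepancy between a transformed image and its ground truth.

    `output` and `target` are (N, C, H, W) tensors on the same scale
    (e.g. [-1, 1]); `norm` selects the L1 or L2 penalty discussed above.
    """
    if norm == "l1":
        return F.l1_loss(output, target)
    return F.mse_loss(output, target)
```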

Besides per-pixel losses, more advanced loss functions have been introduced to measure the discrepancy between the output and ground-truth images from other perspectives. A large body of work builds on (conditional) generative adversarial networks (GANs, cGANs) [19, 30]. GANs (or cGANs) perform an adversarial process that alternates between identifying and faking, and the generative adversarial losses are formulated to evaluate the discrepancy between the generated distribution and the real-world distribution. Experimental results show that generative adversarial losses are beneficial for generating more 'realistic' images. Some works [34, 20] further integrate per-pixel losses and generative adversarial losses into a joint loss function for training image-to-image transformation models, resulting in sharper results.

Figure 1: Image-to-image transformation tasks. Many tasks in image processing, computer graphics, and computer vision can be regarded as image-to-image transformation tasks, where a model is designed to transform an input image into the required output image. We propose principled Perceptual Adversarial Networks (PAN) to solve image-to-image transformation between paired images. For each pair of images shown, the left one is the input image, and the right one is the result produced by the proposed PAN.

In parallel, some recent works [21, 11, 5] proposed perceptual losses as a novel kind of measurement for evaluating the discrepancy between the output and ground-truth images. Usually, a well-trained image classification network (e.g., VGG-16 [39]) is employed to extract high-level features (e.g., content or texture) of both the output images and the ground-truth images. By penalizing the discrepancy between the extracted high-level features, these models are trained to transform the input image into an output that has the same high-level features as the corresponding ground truth. Besides image style transfer, perceptual losses have been used in image-to-image transformation tasks to suppress artifacts that occur in the output images [21, 47].
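As an illustration of this family of fixed-network perceptual losses (not the exact configuration of [21] or [47]), the following sketch penalizes the L1 distance between VGG-16 features of the output and the ground truth. It assumes a recent torchvision, ImageNet-normalized inputs, and the relu3_3 cut as one common, assumed layer choice.

```python
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """Fixed-network perceptual loss in the spirit of [21, 47]: penalize the
    distance between high-level VGG-16 features of the output and the ground
    truth. The relu3_3 cut (features[:16]) is one common choice, assumed here."""

    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:16].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the classification network stays fixed

    def forward(self, output, target):
        # Inputs are expected to be ImageNet-normalized (N, 3, H, W) tensors.
        return nn.functional.l1_loss(self.features(output), self.features(target))
```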

The aforementioned losses evaluate the discrepancy between the output image and its ground truth from different perspectives, and therefore each has its own advantages. Yet a single loss (e.g., a per-pixel, perceptual, or generative adversarial loss) can hardly discover all of the discrepancy between the output and ground-truth images, which usually leads to defects in the output images. To enhance performance, some works have attempted to integrate several losses, i.e., to evaluate the discrepancy from multiple perspectives. Among them, by integrating the per-pixel, perceptual, and generative adversarial losses, Zhang et al. [47] and Ledig et al. [25] achieved state-of-the-art performance on single-image de-raining and single-image super-resolution, respectively. These observations raise two questions: Have existing methods penalized all possible discrepancy between the output image and its ground truth? If not, can we find a novel loss that evaluates the discrepancy from more perspectives than existing methods and further enhances the performance of image-to-image transformation tasks?

In this paper, we propose the Perceptual Adversarial Networks (PAN) for image-to-image transformation tasks. PAN is inspired by the GANs framework and consists of an image transformation network T and a discriminative network D. Both the generative adversarial loss and the principled perceptual adversarial loss are employed to train PAN. First, as in GANs, we introduce the generative adversarial loss into our framework and use it as a statistical measurement to evaluate the distribution of output images. Then, we use the representations on the hidden layers of the discriminative network D as dynamic perceptual measurements (i.e., the perceptual adversarial loss) to penalize the discrepancy between the output and ground-truth images. Specifically, using the representations on the hidden layers of the network D, the network T is trained to generate an output image that has the same high-level features as the corresponding ground truth. At the same time, if the discrepancy measured in the current high-dimensional space becomes small enough, the hidden layers of D are updated to find new high-dimensional spaces that stress the discrepancy still existing between the output and ground-truth images. Unlike the aforementioned perceptual losses, which compute representations on the hidden layers of well-trained CNNs, our proposed perceptual adversarial loss performs an adversarial training process between the image transformation network and the discriminative network, and has the capability to continually and automatically discover the discrepancy that has not yet been minimized. Therefore, the perceptual adversarial loss provides a strategy to penalize the discrepancy between the output and ground-truth images from as many perspectives as possible.

In summary, our paper makes the following contributions:

  • We propose a principled perceptual adversarial loss, which utilizes the hidden layers of the discriminative network to evaluate the discrepancy between the output and ground-truth images through an adversarial training process.

  • By combining the perceptual adversarial loss and the generative adversarial loss, we present PAN as a framework for solving image-to-image transformation tasks.

  • We evaluate the performance of PAN on several image-to-image transformation tasks (Fig. 1). Experiments demonstrate that the proposed PAN is highly capable of solving image-to-image transformation tasks.

The rest of the paper is organized as follows: after a brief summary of related work in Section 2, we illustrate the proposed PAN together with its training losses in Section 3. We then present the experimental validation of the whole method in Section 4. Finally, we conclude this work and discuss some future directions in Section 5.

2 Related Work

In this section, we first introduce some representative image-to-image transformation works based on feed-forward CNNs. Then, related works based on GANs and perceptual losses are briefly summarized.

2.1 Image-to-image transformation with feed-forward CNNs

Deep learning methods have received a great deal of attention in recent years, and a variety of feed-forward CNNs have been proposed for image-to-image transformation tasks. These feed-forward CNNs can be easily trained using the back-propagation algorithm [36], and at test time the transformed image is generated by simply forward-passing the input image through the well-trained CNN.

An individual per-pixel loss, or a per-pixel loss accompanied by other losses, is employed in a number of image-to-image transformation works. Image super-resolution tasks estimate a high-resolution image from its low-resolution counterpart [10, 21, 25]. Image de-raining (or de-snowing) methods attempt to remove the rain (or snow) streaks caused by uncontrollable weather conditions [17, 13, 45, 47]. Given a damaged image, image inpainting aims to re-fill the missing part of the input image [34, 26]. Image semantic segmentation methods produce dense scene labels based on a single input image [16, 32, 27, 12]. Given an input image of an object, some feed-forward CNNs have been trained to synthesize an image of the same object under a new viewpoint [44, 40]. Further image-to-image transformation tasks based on feed-forward CNNs include, but are not limited to, image colorization [7] and depth estimation [14, 12].

2.2 GANs-based works

Generative adversarial networks (GANs) [19] provide an important approach for learning a generative model that produces samples from the real-world data distribution. GANs consist of a generative network and a discriminative network. By playing a minimax game between these two networks, GANs are trained to generate more and more 'realistic' samples. Owing to their strong performance in learning real-world distributions, a large number of GANs-based works have emerged. Some of these works aim to train a better generative model, such as InfoGAN [6], WGAN [1], and the energy-based GAN [49]. Other works employ GANs in their models to improve the performance of classical tasks, for example, PGN [28] for video prediction, SRGAN [25] for super-resolution, ID-CGAN [47] for image de-raining, iGAN [50] for interactive applications, IAN [4] for photo modification, and the Context-Encoder [34] for image in-painting. Most recently, Isola et al. [20] proposed pix2pix-cGAN to perform several image-to-image transformation tasks (named image-to-image translation in their work), such as translating semantic labels into street scenes, object edges into pictures, and aerial photos into maps.

2.3 Perceptual loss

In recent years, theoretical analysis and experimental results have suggested that the high-level features extracted from a well-trained image classification network can capture perceptual information from real-world images [18, 21]. Specifically, the hidden representations extracted by the well-trained image classifier can be regarded as the semantic content of the input image, and the image style distribution can be captured by the Gram matrix of the hidden representations. Therefore, high-level features extracted from the hidden layers of a well-trained CNN have been introduced to optimize image generation models. Dosovitskiy and Brox [11] used Euclidean distances in the high-level feature space of deep neural networks as deep perceptual similarity metrics to improve their image generation performance. Johnson et al. [21], Bruna et al. [5], and Ledig et al. [25] used features extracted from a well-trained VGG network to improve the performance of single-image super-resolution. In addition, high-level features have been applied in image style transfer [18, 21], image de-raining [47], and image view synthesis [33].

Figure 2: PAN framework. The PAN consists of an image transformation network T and a discriminative network D. The image transformation network T is trained to synthesize transformed images given the input images. It is composed of a stack of Convolution-BatchNorm-LeakyReLU encoding layers and Deconvolution-BatchNorm-ReLU decoding layers, with skip connections between mirrored layers. The discriminative network D is also a CNN consisting of Convolution-BatchNorm-LeakyReLU layers. The hidden layers of the network D are utilized to evaluate the perceptual adversarial loss, and the output of the network D is used to distinguish transformed images from real-world images.

3 Methods

In this section, we introduce the proposed Perceptual Adversarial Networks (PAN) for image-to-image transformation tasks. First, we explain the generative adversarial loss and the perceptual adversarial loss. Then, we present the whole framework of the proposed PAN. Finally, we describe the details of the training procedure and the network architectures.

3.1 Generative adversarial loss

We begin with the generative adversarial loss of the original GANs. A generative network G is trained to map samples from a noise distribution p_z(z) to the real-world data distribution p_data(x) by playing a minimax game with a discriminative network D. In the training procedure, the discriminative network D aims to distinguish the real samples x from the generated samples G(z). On the contrary, the generative network G tries to confuse the discriminative network D by generating more and more 'realistic' samples. This minimax game can be formulated as

$$\min_G \max_D \; \mathbb{E}_{x\sim p_{\mathrm{data}}(x)}[\log D(x)] + \mathbb{E}_{z\sim p_z(z)}[\log(1 - D(G(z)))] \quad (1)$$

Nowadays, GANs-based models have shown a strong capability in learning generative models, especially for image generation [1, 28, 6]. We therefore adapt the GANs learning strategy to solve image-to-image transformation tasks as well. As shown in Fig. 2, the image transformation network T is used to generate the transformed image T(x) given the input image x. Meanwhile, each input image x has a corresponding ground-truth image y. We suppose that all ground-truth images y obey the distribution p_real(y), and the transformed images T(x) are encouraged to have the same distribution as the ground-truth images y. Based on the generative adversarial learning strategy, a discriminative network D is additionally introduced, and the generative adversarial loss can be written as

$$\mathcal{L}_{\mathrm{adv}}(T, D) = \mathbb{E}_{y\sim p_{\mathrm{real}}(y)}[\log D(y)] + \mathbb{E}_{x}[\log(1 - D(T(x)))] \quad (2)$$

The generative adversarial loss acts as a statistical measurement to penalize the discrepancy between the distributions of the transformed images and the ground-truth images.
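Written as a sketch in PyTorch (not the authors' released code), the two statistical terms of Eq. (2) can be computed from the discriminator's sigmoid outputs D(y) and D(T(x)) as follows. The non-saturating form of the T term is used here, a common practical substitute for minimizing log(1 − D(T(x))) directly.

```python
import torch
import torch.nn.functional as F

def adversarial_losses(d_real, d_fake):
    """Statistical terms of Eq. (2) in cross-entropy form.

    `d_real` = D(y) and `d_fake` = D(T(x)) are sigmoid probabilities from the
    discriminative network. Returns (loss_D, loss_T): D is trained to assign
    1 to ground-truth images and 0 to transformed images, while T is trained
    to make D assign 1 to its outputs (non-saturating form)."""
    ones, zeros = torch.ones_like(d_real), torch.zeros_like(d_fake)
    loss_d = F.binary_cross_entropy(d_real, ones) + F.binary_cross_entropy(d_fake, zeros)
    loss_t = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    return loss_d, loss_t
```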

3.2 Perceptual adversarial loss

Different from the original GANs, which randomly generate samples from the data distribution, our goal is to infer the transformed image corresponding to a given input image. This is thus a further step beyond GANs: exploring the mapping from the input image to its ground truth.

As mentioned in Sections 1 and 2, per-pixel losses and perceptual losses are two widely used losses in existing works for pushing generated images towards the ground truth. Per-pixel losses penalize the discrepancy in pixel space, but often produce blurry results [34, 48]. Perceptual losses explore the discrepancy between high-dimensional representations of images extracted from a well-trained CNN, e.g., the VGG net trained on the ImageNet dataset [39]. Although the hidden layers of well-trained CNNs have been experimentally validated to map images from pixel space to high-level feature spaces, how to extract effective features for image-to-image transformation tasks through these hidden layers is still an open problem. In image-to-image transformation tasks, it is reasonable to assume that the output images should be consistent with the ground-truth images under any measurement (i.e., in any high-dimensional space). Here we employ the trainable hidden layers of the discriminative network D to measure the perceptual adversarial loss between transformed images and ground-truth images. Formally, given paired data (x, y) and a positive margin m, the perceptual adversarial losses for the networks T and D can be written as

$$\mathcal{L}_{P,T}(T, D) = \sum_{i} \lambda_i P_i\big(T(x), y\big) \quad (3)$$
$$\mathcal{L}_{P,D}(T, D) = \Big[m - \sum_{i} \lambda_i P_i\big(T(x), y\big)\Big]^{+} \quad (4)$$

where [a]^+ = max(0, a). The function P_i measures the discrepancy between the high-level features extracted by the i-th hidden layer of the discriminative network D, and the λ_i are hyper-parameters balancing the influence of different hidden layers. In our experiments, the L1 norm is employed to calculate the discrepancy of the high-dimensional representations on the hidden layers, i.e.,

$$P_i\big(T(x), y\big) = \big\lVert H_i(y) - H_i(T(x)) \big\rVert_1 \quad (5)$$

where H_i(·) denotes the image representation on the i-th hidden layer of the discriminative network D.

By minimizing the perceptual adversarial loss with respect to the parameters of T, we encourage the network T to generate an image T(x) that has high-level features similar to those of its ground truth y. If the total discrepancy between the currently learned representations of the transformed image and the ground-truth image falls below the positive margin m, the loss L_{P,D} will update the discriminative network D to find new high-dimensional spaces that expose the discrepancy still remaining between the transformed images and the corresponding ground truth. Therefore, based on the perceptual adversarial loss, the discrepancy between the transformed and ground-truth images can be constantly explored and exploited.
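A minimal sketch of Eqs. (3)–(5) is given below, assuming the discriminative network exposes the activations of the selected hidden layers; the weights `lambdas` and the margin `margin` are placeholders, since their values are not reproduced here.

```python
import torch
import torch.nn.functional as F

def perceptual_adversarial_losses(feats_fake, feats_real, lambdas, margin):
    """Sketch of Eqs. (3)-(5).

    `feats_fake` / `feats_real`: activations of the selected hidden layers of
    the discriminative network D for T(x) and y (e.g. the 1st, 4th, 6th and
    8th layers of Table 2). `lambdas`: per-layer weights; `margin`: m.

    Returns (loss for T, loss for D): T minimizes the weighted L1 feature
    distance, while D minimizes the hinged term [m - distance]^+, which pushes
    it towards feature spaces where a discrepancy still remains."""
    distance = sum(l * F.l1_loss(f, r)
                   for l, f, r in zip(lambdas, feats_fake, feats_real))
    return distance, torch.clamp(margin - distance, min=0.0)
```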

3.3 The perceptual adversarial networks

Based on the aforementioned generative adversarial loss and perceptual adversarial loss, we develop the PAN framework, which consists of an image transformation network T and a discriminative network D. These two networks play a minimax game by optimizing different loss functions. Given a pair of data (x, y), the loss function L_T of the image transformation network and the loss function L_D of the discriminative network are formally defined as

$$\mathcal{L}_T = \log\big(1 - D(T(x))\big) + \sum_i \lambda_i P_i\big(T(x), y\big) \quad (6)$$
$$\mathcal{L}_D = -\log D(y) - \log\big(1 - D(T(x))\big) + \Big[m - \sum_i \lambda_i P_i\big(T(x), y\big)\Big]^{+} \quad (7)$$

Minimizing L_T with respect to the parameters of T is consistent with maximizing the second and the third terms of L_D. When the weighted sum of feature discrepancies exceeds the margin m, the third term of L_D has zero gradient because of the positive margin.

In general, given paired images (x, y), the discriminative network D aims to distinguish transformed images from ground-truth images from both the statistical (the first and second terms of L_D) and dynamic perceptual (the third term of L_D) perspectives. On the other hand, the image transformation network T is trained to generate increasingly better images by reducing the discrepancy between the output and ground-truth images.
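Combining the two ingredients, the total objectives in the spirit of Eqs. (6)–(7) can be sketched by reusing the hypothetical helpers `adversarial_losses` and `perceptual_adversarial_losses` from the sketches above.

```python
def pan_losses(d_real, d_fake, feats_real, feats_fake, lambdas, margin):
    """Total PAN objectives in the spirit of Eqs. (6)-(7).

    Reuses the hypothetical helpers sketched above: the adversarial terms
    from `adversarial_losses` and the perceptual adversarial terms from
    `perceptual_adversarial_losses`."""
    adv_d, adv_t = adversarial_losses(d_real, d_fake)
    perc_t, perc_d = perceptual_adversarial_losses(feats_fake, feats_real,
                                                   lambdas, margin)
    loss_t = adv_t + perc_t   # image transformation network T
    loss_d = adv_d + perc_d   # discriminative network D
    return loss_t, loss_d
```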

3.4 Network architectures

Fig. 2 illustrates the framework of the proposed PAN, which is composed of two CNNs, i.e., the image transformation network T and the discriminative network D.

                        Image transformation network
Input: Image

[layer 1]      Conv. (3, 3,  64), stride=2; LReLU;
[layer 2]      Conv. (3, 3, 128), stride=2; Batchnorm;
[layer 3]      LReLU; Conv. (3, 3, 256), stride=2; Batchnorm;
[layer 4]      LReLU; Conv. (3, 3, 512), stride=2; Batchnorm;
[layer 5]      LReLU; Conv. (3, 3, 512), stride=2; Batchnorm;
[layer 6]      LReLU; Conv. (3, 3, 512), stride=2; Batchnorm; LReLU;
[layer 7]      DeConv. (4, 4, 512), stride=2; Batchnorm;
                Concatenate Layer(Layer 7, Layer 5); ReLU;
[layer 8]      DeConv. (4, 4, 256), stride=2; Batchnorm;
                Concatenate Layer(Layer 8, Layer 4); ReLU;
[layer 9]      DeConv. (4, 4, 128), stride=2; Batchnorm;
                Concatenate Layer(Layer 9, Layer 3); ReLU;
[layer 10]     DeConv. (4, 4,  64), stride=2; Batchnorm;
                Concatenate Layer(Layer 10, Layer 2); ReLU;
[layer 11]     DeConv. (4, 4,  64), stride=2; Batchnorm; ReLU;
[layer 12]     DeConv. (4, 4, 3), stride=2; Tanh;
Output: Transformed image
Table 1: The architecture of the image transformation network.

3.4.1 Image transformation network

The image transformation network T is proposed to generate the transformed image given the input image. Following the network architectures in [35, 20], the network T first encodes the input image into a high-dimensional representation using a stack of Convolution-BatchNorm-LeakyReLU layers, and then the output image is decoded by the remaining Deconvolution-BatchNorm-ReLU layers. Note that the output layer of the network T does not use batchnorm and replaces the ReLU with a Tanh activation. Moreover, skip connections are used to connect mirrored layers in the encoder and decoder stacks. More details of the transformation network are listed in Table 1. Unless stated otherwise, the same architecture of the network T is used for all experiments in this paper.
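A condensed PyTorch sketch of this encoder-decoder pattern is given below. It follows the Convolution-BatchNorm-LeakyReLU / Deconvolution-BatchNorm-ReLU structure, the skip connections, and the Tanh output of Table 1, but with fewer layers and illustrative channel widths rather than the exact configuration.

```python
import torch
import torch.nn as nn

class TransformNetSketch(nn.Module):
    """Condensed sketch of the encoder-decoder of Table 1 (fewer layers and
    illustrative channel widths): Convolution-BatchNorm-LeakyReLU encoders,
    Deconvolution-BatchNorm-ReLU decoders, skip connections between mirrored
    layers, and a Tanh output layer without batchnorm."""

    def __init__(self, in_ch=3, out_ch=3, width=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, width, 3, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(width, width * 2, 3, 2, 1),
                                  nn.BatchNorm2d(width * 2), nn.LeakyReLU(0.2))
        self.enc3 = nn.Sequential(nn.Conv2d(width * 2, width * 4, 3, 2, 1),
                                  nn.BatchNorm2d(width * 4), nn.LeakyReLU(0.2))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(width * 4, width * 2, 4, 2, 1),
                                  nn.BatchNorm2d(width * 2), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(width * 4, width, 4, 2, 1),
                                  nn.BatchNorm2d(width), nn.ReLU())
        # Output layer: no batchnorm, Tanh activation (as noted in the text).
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(width * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))   # skip connection to the mirrored layer
        return self.dec1(torch.cat([d2, e1], dim=1))
```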

3.4.2 Discriminative network

In the proposed PAN framework, the discriminative network D is introduced to compute the discrepancy between the transformed images and the ground-truth images. Specifically, given an input image, the discriminative network D extracts high-level features using a series of Convolution-BatchNorm-LeakyReLU layers. The 1st, 4th, 6th, and 8th layers are utilized to measure the perceptual adversarial loss between a transformed image and its corresponding ground truth. Finally, the last convolution layer is flattened and fed into a single sigmoid output. The output of the discriminative network D estimates the probability that the input image comes from the real-world dataset rather than from the image transformation network T. The same discriminative network is used for all tasks in this paper, and details of the network are shown in Table 2.

                       Discriminative network
Input: Image
[layer 1]    Conv. (3, 3, 64), stride=1; LReLU;
              (Perceptual adversarial loss: P_1)
[layer 2]    Conv. (3, 3, 128), stride=2; Batchnorm; LReLU;
[layer 3]    Conv. (3, 3, 128), stride=1; Batchnorm; LReLU;
[layer 4]    Conv. (3, 3, 256), stride=2; Batchnorm; LReLU;
              (Perceptual adversarial loss: P_2)
[layer 5]    Conv. (3, 3, 256), stride=1; Batchnorm; LReLU;
[layer 6]    Conv. (3, 3, 512), stride=2; Batchnorm; LReLU;
              (Perceptual adversarial loss: P_3)
[layer 7]    Conv. (3, 3, 512), stride=1; Batchnorm; LReLU;
[layer 8]    Conv. (3, 3, 512), stride=2; Batchnorm; LReLU;
              (Perceptual adversarial loss: P_4)
[layer 9]    Conv. (3, 3, 8), stride=2; LReLU;
[layer 10]   Fully connected (1); Sigmoid;
Output: Real or Fake (Probability)
Table 2: The architecture of the discriminative network.
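The discriminative network can be sketched in the same spirit: a stack of Convolution-BatchNorm-LeakyReLU blocks whose intermediate activations are returned for the perceptual adversarial loss, plus a sigmoid real/fake head. The channel counts and the pooling head below are illustrative simplifications of Table 2, not its exact configuration.

```python
import torch.nn as nn

class DiscriminatorSketch(nn.Module):
    """Condensed sketch of Table 2: Convolution-BatchNorm-LeakyReLU blocks whose
    hidden activations feed the perceptual adversarial loss, plus a sigmoid
    real/fake head. Channel counts and the pooling head are illustrative."""

    def __init__(self, in_ch=3, width=64):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, width, 3, 1, 1), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(width, width * 2, 3, 2, 1),
                          nn.BatchNorm2d(width * 2), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(width * 2, width * 4, 3, 2, 1),
                          nn.BatchNorm2d(width * 4), nn.LeakyReLU(0.2)),
            nn.Sequential(nn.Conv2d(width * 4, width * 8, 3, 2, 1),
                          nn.BatchNorm2d(width * 8), nn.LeakyReLU(0.2)),
        ])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width * 8, 1), nn.Sigmoid())

    def forward(self, x):
        feats = []                  # hidden features used by the perceptual adversarial loss
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return self.head(x), feats  # (real/fake probability, hidden features)
```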

4 Experiments

In this section, we evaluate the performance of the proposed PAN on several image-to-image transformation tasks, which are popular in fields of image processing (e.g., image de-raining), computer vision (e.g., semantic segmentation) and computer graphics (e.g., image generation).

4.1 Experimental setup

For fair comparisons, we adopted the same settings as existing works and reported experimental results using several evaluation metrics. The tasks and data settings include:

  • Single image de-raining, on the dataset provided by ID-CGAN [47].

  • Image Inpainting, on a subset of ILSVRC’12 (same as context-encoder [34]).

  • Semantic labels ↔ images, on the Cityscapes dataset [8] (same as pix2pix [20]).

  • Edges → images, on the dataset created by pix2pix [20]. The original data is from [50] and [46], and the HED edge detector [43] was used to extract edges.

  • Aerial photos → maps, on the dataset from pix2pix [20].

Furthermore, all experiments were run on Nvidia Titan X GPUs using Theano [2]. According to the generative and perceptual adversarial losses, we alternately updated the image transformation network T and the discriminative network D. Specifically, the Adam solver [23] with a learning rate of 0.0002 and a first momentum of 0.5 was used for network training. After each update of the discriminative network D, the image transformation network T was updated three times. The hyper-parameters λ_1, λ_2, λ_3, λ_4 and the margin m, together with a batch size of 4, were fixed across all tasks. Since the dataset sizes vary considerably across tasks, the number of training epochs was set differently for each task. Overall, the number of training iterations was around 100k.
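The alternating updates described above can be sketched as follows. Here `transform_net`, `disc_net`, `loader`, `pan_losses`, `lambdas`, and `margin` are the hypothetical objects and helpers introduced in the earlier sketches; only the optimizer settings (Adam, learning rate 0.0002, first momentum 0.5) and the 1:3 update ratio come from the text.

```python
import torch

# Hypothetical objects from the earlier sketches: transform_net (T), disc_net (D),
# pan_losses, lambdas, margin; `loader` yields (input, ground_truth) batches.
opt_t = torch.optim.Adam(transform_net.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc_net.parameters(), lr=2e-4, betas=(0.5, 0.999))

for x, y in loader:                            # batch size 4 in the paper's setting
    # --- one update of the discriminative network D ---
    fake = transform_net(x).detach()
    p_real, f_real = disc_net(y)
    p_fake, f_fake = disc_net(fake)
    _, loss_d = pan_losses(p_real, p_fake, f_real, f_fake, lambdas, margin)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- three updates of the image transformation network T ---
    for _ in range(3):
        p_real, f_real = disc_net(y)
        p_fake, f_fake = disc_net(transform_net(x))
        loss_t, _ = pan_losses(p_real, p_fake, f_real, f_fake, lambdas, margin)
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()
```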

4.2 Evaluation metrics

To illustrate the performance on image-to-image transformation tasks, we used both qualitative and quantitative metrics to evaluate the transformed images. For the qualitative experiments, we directly show the input and transformed images. For quantitative evaluation over the test sets, we report the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index (SSIM) [42], the Universal Quality Index (UQI) [41], and the Visual Information Fidelity (VIF) [37].
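Of these metrics, PSNR is simple enough to state directly; a minimal reference implementation for images scaled to [0, 1] is sketched below (SSIM, UQI, and VIF require dedicated implementations).

```python
import torch

def psnr(output, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, max_val]."""
    mse = torch.mean((output - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```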

4.3 Analysis of the loss functions

Method      PSNR (dB)   SSIM     UQI      VIF
L2          22.77       0.7959   0.6261   0.3570
L2+cGAN     22.19       0.8083   0.6278   0.3640
ID-CGAN     22.91       0.8198   0.6473   0.3885
PAN         23.35       0.8303   0.6644   0.4050
Table 3: De-raining
(a) Input
(b) L2
(c) L2+cGAN
(d) ID-CGAN
(e) PAN
Figure 3: Comparison of snow-streak removal using different loss functions. Given the same input image (leftmost), each column shows results trained with a different loss. The loss function of ID-CGAN [47] combines the per-pixel loss (using the L2 norm), the cGAN loss, and the perceptual loss, i.e., L2+cGAN+perceptual. For better visual comparison, zoomed versions of specific regions of interest are shown below the test images.
(a) Input
(b) P_1
(c) P_2
(d) P_3
(e) P_4
(f) Ground-Truth
Figure 4: Transforming semantic labels to cityscapes images using the perceptual adversarial loss. Within the perceptual adversarial loss, a different hidden layer is utilized for each experiment. For better visual comparison, zoomed versions of specific regions of interest are shown below the test images. For higher layers, the transformed images look sharper, but less color information is preserved.

As discussed in Section 1, the design of the loss function largely influences the performance of image-to-image transformation. The per-pixel loss (using the L2 norm) is widely used in various image-to-image transformation works [40, 9, 24]. Joint losses integrating the per-pixel loss and the conditional generative adversarial loss have been proposed to synthesize more 'realistic' transformed images [34, 20]. Most recently, by introducing the perceptual loss, i.e., penalizing the discrepancy between high-level features extracted by well-trained CNNs, the performance of some image-to-image transformation tasks has been further enhanced [25, 21, 47]. Different from these existing methods, the proposed PAN loss integrates the generative adversarial loss and the proposed perceptual adversarial loss to train image-to-image transformation networks. Here, we compare the proposed PAN loss with the losses deployed in existing methods. For a fair comparison, we adopted the same image transformation network and data settings as ID-CGAN [47], and used the aforementioned loss functions to perform the image de-raining (de-snowing) task. The quantitative results over the synthetic test set are shown in Table 3, and the qualitative results on real-world images are shown in Fig. 3. From both comparisons, we find that using only the per-pixel loss (L2 norm) achieves the worst result, as the transformed images still contain many snow streaks (Fig. 3). Introducing the cGAN loss indeed improves the de-snowing performance, but artifacts can be observed (Fig. 3) and the PSNR drops (Table 3). Combining the per-pixel, cGAN, and perceptual losses (the latter using the pre-trained VGG-16 net [39]), i.e., using the loss function of ID-CGAN [47], further refines the quality of the transformed images in both visual inspection and quantitative measurements. However, from Fig. 3, we observe that the transformed images exhibit some color distortion compared to the input images. Finally, the proposed PAN loss not only removes most streaks without color distortion, but also achieves the best performance on the quantitative measurements.

Furthermore, in the proposed PAN, we selected four hidden layers of the discriminative network D to calculate the perceptual adversarial loss. We next analyze the properties of these hidden layers. Specifically, we trained four configurations of the PAN to perform the task of transforming semantic labels to cityscapes images. For each configuration, we set one hyper-parameter λ_i to one and the others to zero, i.e., only one hidden layer is used to evaluate the perceptual adversarial loss in each configuration. As shown in Fig. 4, the lower layers (e.g., P_1, P_2) pay more attention to patch-to-patch transformation and color transformation, but the transformed images are blurry and lack fine details. On the other hand, the higher layers (e.g., P_3, P_4) capture more high-frequency information, but lose color information. Therefore, by integrating the different properties of these hidden layers, the proposed PAN can be expected to achieve better performance; the final results of this task are shown in Fig. 7 and Table 5.

(a) Input
(b) CE
(c) PAN
(d) Ground-truth
Figure 5: Comparison of image in-painting results of the Context-Encoder and the proposed PAN. Given the input image with its central region missing (leftmost), the in-painted images and the ground truth are listed on its right.
Method            PSNR (dB)   SSIM     UQI      VIF
Context-Encoder   21.74       0.8242   0.7828   0.5818
PAN               21.85       0.8307   0.7956   0.6104
Table 4: In-painting

4.4 Comparing with existing works

(a) Input
(b) ID-CGAN
(c) PAN
Figure 6: Comparison of rain-streak removal using ID-CGAN and the proposed PAN on real-world rainy images. For better visual comparison, zoomed versions of specific regions of interest are shown below the test images.
(a) Input
(b) pix2pix-cGAN
(c) PAN
(d) Ground-truth
Figure 7: Comparison of transforming semantic labels to cityscapes images using pix2pix-cGAN and the proposed PAN. Given the semantic labels (leftmost), the transformed cityscapes images and the ground truth are listed on the right.

In this subsection, we compared the performance of the proposed PAN with some recent state-of-the-art algorithms for image-to-image transformation tasks.

(a) Input
(b) pix2pix-cGAN
(c) PAN
(d) Input
(e) pix2pix-cGAN
(f) PAN
Figure 8: Comparison of transforming object edges to corresponding images using pix2pix-cGAN and the proposed PAN. Given the edges (leftmost), the generated images of shoes and handbags are listed on the right.

4.4.1 Context-encoder

The Context-Encoder (CE) [34] trains a CNN for the single-image inpainting task, which generates the corrupted contents of an image based on its surroundings. Given corrupted images as input, image inpainting aims to generate the inpainted images, and thus it can be formulated as an image-to-image transformation task. The per-pixel loss (using the L2 norm) and the generative adversarial loss were combined in the Context-Encoder to explore the relationship between the input surroundings and the missing central region.

To compare with the Context-Encoder, we applied the proposed PAN to the same image inpainting task, which attempts to inpaint images with missing central regions on one of the most challenging datasets, ImageNet. As described in Section 4.1, 100k images were randomly selected from the ILSVRC'12 dataset to train both the Context-Encoder and PAN, and 50k images from the ILSVRC'12 validation set were used for testing. Moreover, since the image inpainting model is asked to generate only the missing region of the input image, not the whole image, we directly adopt the architecture of the image transformation network from [34].

In Fig. 5, we report some inpainted results on the test set. For each input image, the missing part is a mixture of foreground objects and background. The goal is to predict and repaint the missing part by understanding the uncorrupted surroundings. From the inpainted results, we find that the proposed PAN performs better at understanding the surroundings and inpainting the missing part with semantic content, whereas the Context-Encoder tends to use the nearest region (usually the background) to inpaint the missing part. In addition, although neither the Context-Encoder nor our PAN gives perfect results, PAN attempts to synthesize more details in the missing parts. Last but not least, Table 4 reports the quantitative results calculated over all 50k test images, which also demonstrate that the proposed PAN achieves better performance at understanding the context of the input images and synthesizing the corresponding missing parts.

4.4.2 ID-CGAN

The single-image de-raining task aims to remove rain streaks from a given rainy image. Considering the unpredictable weather conditions, single-image de-raining (de-snowing) is another challenging image-to-image transformation task. Most recently, the Image De-raining Conditional Generative Adversarial Network (ID-CGAN) was proposed to tackle the image de-raining problem. By combining the per-pixel (L2 norm), conditional generative adversarial, and perceptual (VGG-16) losses, ID-CGAN achieved state-of-the-art performance on single-image de-raining.

We solved image de-raining with the proposed PAN using the same settings as ID-CGAN. Since there is a lack of large-scale datasets consisting of paired rainy and de-rained images, we resort to the synthetic training set [47] of 700 images. Zhang et al. [47] provide 100 synthetic images and 50 real-world rainy images for testing. Since the ground truth is available for the synthetic test images, we calculated and reported the quantitative results in Table 3. Moreover, we tested both ID-CGAN and PAN on real-world rainy images, and the results are shown in Fig. 6. For better visual comparison, we zoomed in on specific regions of interest below the test images.

From Fig. 6, we find that both ID-CGAN and PAN achieve good performance on single-image de-raining. However, observing the zoomed regions, PAN removes more rain streaks with less color distortion. Additionally, as shown in Table 3, for synthetic test images, the de-rained results of PAN are much closer to the corresponding ground truth than those of ID-CGAN. Why can the proposed PAN achieve better results under such uncontrollable weather conditions? One possible reason is that ID-CGAN utilizes a well-trained CNN classifier to extract high-level features of the output and ground-truth images and penalizes the discrepancy between them (i.e., the perceptual loss). The high-level features extracted by a well-trained classifier usually focus on content information and may fail to capture other image information, such as color. In contrast, the proposed PAN uses the perceptual adversarial loss, which attempts to penalize the discrepancy from as many perspectives as possible. This different training strategy may help the model learn a better mapping from the input to the output images, resulting in better performance.

Semantic labels → Cityscapes images
Method          PSNR (dB)   SSIM     UQI       VIF
pix2pix-cGAN    15.74       0.4275   0.07315   0.05208
PAN             16.06       0.4820   0.1116    0.06581

Edges → Shoes
Method          PSNR (dB)   SSIM     UQI       VIF
pix2pix-cGAN    20.07       0.7504   0.2724    0.2268
PAN             19.51       0.7816   0.3442    0.2393

Edges → Handbags
Method          PSNR (dB)   SSIM     UQI       VIF
pix2pix-cGAN    16.50       0.6307   0.3978    0.1723
PAN             15.90       0.6570   0.4042    0.1841

Cityscapes images → Semantic labels
Method          PSNR (dB)   SSIM     UQI       VIF
pix2pix-cGAN    19.46       0.7270   0.1555    0.1180
PAN             20.67       0.7725   0.1732    0.1638

Aerial photos → Maps
Method          PSNR (dB)   SSIM     UQI       VIF
pix2pix-cGAN    26.10       0.6465   0.09125   0.02913
PAN             28.32       0.7520   0.3372    0.1617
Table 5: Comparison with pix2pix-cGAN
(a) Input
(b) pix2pix-cGAN
(c) PAN
(d) Ground-truth
Figure 9: Comparison of other tasks using pix2pix-cGAN and the proposed PAN. In the first row, semantic labels are generated from real-world cityscapes images. The second row shows the generated maps given aerial photos as input.

4.4.3 Pix2pix-cGAN

Isola et al. [20] utilized cGANs as a general-purpose solution to image-to-image translation (transformation) tasks. In their work, the per-pixel loss (using the L1 norm) and a patch-based cGAN loss are employed to solve a series of image-to-image transformation tasks, such as translating object edges to photos, semantic labels to scene images, and gray images to color images. The image-to-image transformation tasks performed by pix2pix-cGAN can also be solved by the proposed PAN. Here, we implemented some of them and compared against pix2pix-cGAN.

First, we translated semantic labels to cityscapes images. Unlike image segmentation problems, this inverse translation is an ill-posed problem, and the image transformation network has to learn prior knowledge from the training data. As shown in Fig. 7, given semantic labels as input, we list the cityscapes images transformed by pix2pix-cGAN and PAN together with the corresponding ground truth on the right. From the comparison, we find that the proposed PAN captures more details with less deformation, which makes the synthesized images look more realistic. Moreover, the quantitative comparison in Table 5 also indicates that PAN achieves much better performance.

Generating real-world objects from their edges is another kind of image-to-image transformation task. Based on the dataset provided by [20], we trained PAN to translate edges to object photos and compared its performance with that of pix2pix-cGAN. Given edges as input, Fig. 8 presents shoes and handbags synthesized by pix2pix-cGAN and PAN, and the quantitative results over the test set are shown in Table 5. Observing the generated object photos, both pix2pix-cGAN and PAN achieve promising performance, and it is hard to tell which one is better. The quantitative results are also very close: PAN performs slightly worse than pix2pix-cGAN on the PSNR measurement, yet better on the other quantitative measurements.

In addition, we compared PAN with pix2pix-cGAN on the tasks of generating semantic labels from cityscapes photos and generating maps from aerial photos. Example images generated by PAN and pix2pix-cGAN and the corresponding quantitative results are shown in Fig. 9 and Table 5, respectively. To perform these two tasks, the image-to-image transformation models must capture the semantic information of the input image and synthesize the corresponding transformed image. Since pix2pix-cGAN employs per-pixel and generative adversarial losses to train its model, it may struggle to capture perceptual information from the input image, which leads to poorer results, especially when transforming aerial photos to maps. In contrast, the proposed PAN still achieves promising performance on these tasks. These experiments show that the proposed PAN can effectively extract perceptual information from the input image.

5 Conclusion

In this paper, we proposed the Perceptual Adversarial Networks (PAN) for image-to-image transformation tasks. As a generic framework for learning the mapping between paired images, PAN combines the generative adversarial loss and the proposed perceptual adversarial loss into a novel training loss function. With this loss function, a discriminative network D is trained to continually and automatically explore the discrepancy between the transformed images and the corresponding ground-truth images, while an image transformation network T is trained to narrow the discrepancy explored by the discriminative network D. Through the adversarial training process, these two networks are updated alternately. Experimental results on several image-to-image transformation tasks demonstrate that the proposed PAN framework is effective and promising for practical image-to-image transformation applications.

References