ClsGAN: Selective Attribute Editing Based On Classification Adversarial Network

10/25/2019, by Liu Ying, et al.

Attribute editing has shown remarkable progress through the incorporation of encoder-decoder structures and generative adversarial networks. However, challenges remain in the quality and attribute transformation of the generated images. The encoder-decoder structure leads to blurred images, and its skip-connections weaken the attribute-transfer ability. To address these limitations, we propose a classification adversarial model (ClsGAN) that balances attribute transfer against generating photo-realistic images. Considering that transferred images are affected by the source attributes when skip-connections are used, we introduce an upper convolution residual network (Tr-resnet) to selectively extract information from the source image and the target label. In particular, we apply an attribute classification adversarial network that learns the defects of attribute-transferred images so as to guide the generator. Finally, to meet the requirement of multimodality and improve reconstruction, we build two encoders, a content network and a style network, and approximate the attribute label between the source label and the output of the style network. Experiments on the CelebA dataset show that the generated images are superior to existing state-of-the-art models in image quality and transfer accuracy. Experiments on the WikiArt and seasonal datasets demonstrate that ClsGAN can effectively implement style transfer.


1 Introduction

Attribute editing aims to change one or more attributes of an image (e.g., hair color, gender, age, style) while preserving the remaining attributes. The key to attribute transfer is to keep the generated images both high quality and accurate with respect to the target attributes. The introduction of the generative adversarial network (GAN) [4] has greatly promoted the development of attribute editing. The results range from local changes, e.g., changing hair color, adding accessories (glasses, hats) and altering facial expressions, to overall changes such as converting gender, age and style.

For attribute transfer, [19, 11, 21] take images and labels as inputs of the generator and discriminator, and the loss function only trains the discriminator to verify the authenticity of the images. However, images contain many attributes, some of which are correlated with each other, i.e., there are complex relationships among them. It is therefore not enough to implement attribute transfer only through the input label. As a solution, [3, 6, 17, 27, 15, 20] put forward an attribute classification constraint that trains attribute classification independently, which greatly enhances the performance of attribute transformation. However, these models still suffer from low transfer accuracy for some attribute conversions.

The encoder-decoder structure [7] is also introduced in some models to conveniently realize attribute conversion, but the bottleneck layer of this structure results in poor image quality (e.g., blurring or artifacts). To address this issue, [6, 17] apply skip-connections to the encoder-decoder architecture to improve image quality. However, skip-connections lead to a trade-off between image quality and accuracy [17], i.e., they generate high-quality images at the cost of low attribute accuracy.

We find that the shortcomings of skip-connections are mainly due to both the source attributes and the image details being transmitted into the decoder. Although skip-connections benefit image quality, they also strengthen the attribute information of the source image and weaken the target attribute information. To address this issue, and inspired by the residual network [28], we introduce the upper convolution residual network (Tr-resnet) to enhance the target attribute information in the decoder. Tr-resnet selectively acquires source image and target label information by combining the features of a given encoding layer with the inputs and outputs of the decoding layer.

Looking more closely at GAN [4], we find that the reason both original and generated images are fed to the discriminator, rather than original images alone, is to let the discriminator learn the defects of the generated images during discriminator training; the discriminator then pushes the generator in a specific direction. Recently proposed models [3, 6, 17, 27, 15] mostly take only the original images as the discriminator input when training the attribute classifier. Nevertheless, these methods do not take the generated images into account to improve generator performance. ACGAN [20] also takes generated images as an auxiliary source when training the classifier, but the attribute class of a generated image is treated as its true category, which weakens classifier training because of the poor quality of the generated images. In this paper, an attribute classification adversarial network is proposed to enhance classification accuracy. In the classifier training stage, both the original and the generated images are fed into the classifier, and we specify that the attributes of the generated images are indistinguishable, setting their attribute values to none.

Meanwhile, drawing on the practice of [27, 15], ClsGAN feeds images into two encoders to disentangle the attributes from the unchanged content. Moreover, we push the encoded attribute toward the reference label to keep the labels continuous. As shown in Figure 1, ClsGAN generates high-quality images with high attribute accuracy. In conclusion, our contributions are as follows:

1. We propose ClsGAN, which significantly improves image resolution and attribute classification accuracy. To improve image quality and accuracy, we introduce the upper convolution residual network, which combines up-sampling with the skip-connection technique and avoids the drawbacks of a plain skip-connection.

2. Inspired by the idea of GAN, we introduce an attribute classification adversarial network, which applies the classifier to guide the generator in specific directions and greatly improves the accuracy of image attributes. At the same time, to keep the attribute labels continuous, we approximate the style-encoder output to the target label, and ClsGAN designs two encoders to disentangle image attributes from content.

3. We present quantitative and qualitative experimental results on face attribute editing to demonstrate the superiority of ClsGAN over baseline models. Art-style and season transformations are also used to test the ClsGAN model.

Figure 2: The structure of ClsGAN, which mainly includes the generator (a) and the discriminator (c). The generator is composed of two encoders and a decoder, which consist of a series of convolution layers and upper convolution residual layers (b), respectively. The discriminator is composed of a classifier and an adversarial network whose parameters are shared.

2 Related works

Generative adversarial network GAN [4] consists of two parts: a generator and a discriminator. The generator produces images as photo-realistic as possible to make them difficult for the discriminator to distinguish, while the discriminator tries to distinguish the generated images from the original images. To maintain stability during training, DCGAN [22] applies convolutional neural networks (CNNs) and batch normalization in the model. [1, 5] propose the Wasserstein-1 distance and a gradient penalty to enhance training stability and avoid mode collapse. CGAN [19] takes the reference label as an input of both the generator and the discriminator to produce specific images that are consistent with the label. GAN has received significant attention since it was proposed and has been applied to many areas of computer vision, e.g., image generation [4, 22, 1, 5, 2], image style transfer [19, 8, 31, 9], image super-resolution [13], and facial attribute transfer [14, 29, 11, 21, 26, 3, 6, 17, 27, 15, 16, 30, 25].

Encoder-decoder structure Hinton and Zemel [7] propose the auto-encoder network, which includes an encoder that produces a higher-level vector with semantic characteristics and a decoder that restores the original image. Based on this structure, Kingma et al. [10] propose the variational auto-encoder, which makes the latent representation of an image follow a specific normal distribution; images belonging to the source dataset are then generated by feeding samples from this distribution to the decoder. To achieve attribute editing, VAE/GAN [12] combines VAE [10] with GAN [4] to modify the latent representations of images via reconstruction and adversarial losses. [6, 17, 24] employ skip-connections, or variants of them, in the encoder-decoder structure to render photo-realistic images.

Image-image transfer Image-image transfer aims to translate between two images that differ in one or more aspects, e.g., style or attributes, while remaining unchanged otherwise. Pix2pix [8] is proposed to realize mutual transfer between paired data. Nevertheless, it is difficult to collect paired data with different attributes for the same identity. Zhu et al. [31] propose CycleGAN, which utilizes a cycle consistency loss to preserve the key information of images so as to steer mutual transformation between unpaired data. However, the aforementioned models can only realize mutual transformation between two domains. As the number of domains increases, the number of required models grows rapidly, which is not universal and leads to overfitting and poor generalization.

Face attribute editing is an image transformation operation across multiple domains. [3, 6, 17, 27, 15, 16, 30, 25, 20] introduce an attribute classification constraint in the discriminator so that the model implements attribute classification autonomously. StarGAN [3] takes both labels and images as input to control transfer toward images with specific attributes. AttGAN [6] adds skip-connections to the encoder-decoder structure, which improves image quality. [17, 25] both take difference attribute labels as input. STGAN [17] improves the skip-connection with a selective transfer unit (STU) that automatically selects information between reference images and the target label. RelGAN [25] presents a matching-aware discriminator and an interpolation discriminator to guarantee attribute transfer and interpolation quality. AME-GAN [27] separates the input images into an image attribute part and an image background part on manifolds, then enforces the attribute latent variables toward Gaussian distributions and the background latent variables toward uniform distributions, respectively, to make the attribute transfer procedure controllable. AGUIT [15] utilizes a novel semi-supervised learning process and decomposes the image representation into a domain-invariant content code and a domain-specific style code to jointly handle multi-modal and multi-domain unpaired image-to-image translation. In [16], a multi-path consistency loss is introduced to evaluate the differences between direct and indirect translation and regularize training. UGAN [30] employs a source classifier in the discriminator to determine whether the translated image still holds features of the source domain, so as to remove irrelevant source features. In ACGAN [20], transferred images are also used as an auxiliary source for classifier training. In this paper, we propose ClsGAN, which applies Tr-resnet and an attribute classification adversarial network to improve image quality and attribute accuracy.

3 Proposed Method

This section presents the ClsGAN model for arbitrary attribute editing. First, we build the generator of ClsGAN by introducing the upper convolution residual network and attribute continuity processing. Then, ClsGAN puts forward an adversarial network for attribute classification in the discriminator to enhance attribute accuracy. Finally, the network structure and objective of ClsGAN are presented.

3.1 Upper convolution residual network(Tr-resnet)

STGAN [17] shows that the skip-connections in AttGAN [6] benefit image quality at the cost of attribute classification accuracy, and therefore replaces the skip-connection with a selective transfer unit (STU). However, STU requires more parameters than AttGAN, and its procedure is relatively complex. In this paper we propose the upper convolution residual network, which has the same effect as STU with a simpler procedure and fewer parameters. The Tr-resnet structure is shown in Figure 2(b).

In deep learning, vanishing gradients become more pronounced as the network deepens; residual networks [28] were proposed to alleviate this problem. Inspired by this idea, we introduce the upper convolution residual network in the decoder to avoid losing the original image and target label information. Moreover, for the selective use of source image and target attribute information, we apply weights to the source image information from the encoder and to the input and output of the current decoder layer. The specific operations are as follows:

(1)
(2)

where $f^{dec}_l$ denotes the decoder feature of the $l$-th layer and $T(\cdot)$ denotes the transposed-convolution operation; $f^{enc}_l$ denotes the encoder feature of the $l$-th layer, and $o_l$ denotes the output of the $l$-th decoder layer. In formula (2), at the layer connected to the encoder, the model takes a weighted sum of the encoder's 2nd-layer feature map and the input and output of the 4th decoder layer as the output of the 4th decoder layer; at the other layers, the decoder output is the weighted sum of the input and output of the $l$-th layer. The weights are initialized according to the number of feature maps in the corresponding encoder or decoder layer.
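
For illustration, the following is a minimal PyTorch sketch of how such an upper convolution residual block could look: a transposed convolution doubles the resolution, and learnable scalar weights mix its output with the block input and, optionally, an encoder feature map. The layer shapes, the 1x1 projections, and the weight initialization are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class TrResBlock(nn.Module):
    """Upsampling block that mixes the transposed-convolution output with its
    own input (and, optionally, an encoder feature map) via learnable scalar
    weights. This is an illustrative sketch, not the paper's exact layer."""

    def __init__(self, in_ch, out_ch, use_skip=False, skip_ch=0):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Residual path: upsample the block input and match channels with a 1x1 conv.
        self.res = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )
        self.use_skip = use_skip
        if use_skip:
            # Project the encoder feature (assumed to match the upsampled spatial size).
            self.skip_proj = nn.Conv2d(skip_ch, out_ch, kernel_size=1)
        # Learnable scalar weights for the weighted sum (initial values are a guess).
        n_terms = 3 if use_skip else 2
        self.w = nn.Parameter(torch.full((n_terms,), 1.0 / n_terms))

    def forward(self, x, enc_feat=None):
        out = self.w[0] * self.up(x) + self.w[1] * self.res(x)
        if self.use_skip and enc_feat is not None:
            out = out + self.w[2] * self.skip_proj(enc_feat)
        return out
```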

Figure 3: Face transfer results on the CelebA dataset for different models.
Figure 4: Interpolation results for facial attributes on the CelebA dataset using our model. The values shown are the label values for the attribute.

3.2 Attribute consistency processing

StarGAN [3] and STGAN [17] generate transferred images with binary attribute values (0 or 1), but such images are single-valued and discontinuous with respect to the attribute. AttGAN [6] employs a style controller on top of the base model to realize multi-modality for a specific attribute. To control the attribute value continuously, in this paper the attribute value obtained by the attribute encoder is approximated to the real attribute value. The optimization formula is as follows:

(3)

where $l_s$ denotes the reference label of the source image, $E_{att}$ denotes the attribute encoder in the generator, and the distance between them defines the approximation loss.
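
A plausible form of this approximation loss, assuming an L1 penalty (the exact norm is not specified above), is:

$$\mathcal{L}_{app} \;=\; \mathbb{E}_{x}\big[\,\|E_{att}(x) - l_s\|_1\,\big].$$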

3.3 Attribute classified adversarial network

GAN [4] updates the generator according to the deficiencies of the generated images, which the discriminator learns. Based on this idea, we apply the adversarial method to attribute classification. [3, 6, 17] take real images as the classifier input and then use the optimized classifier to improve the generator. However, it is difficult for such a classifier to discover the differences between the generated and source images.

In our model, the classifier is designed as an adversarial network. The source image and the generated image are fed into the classifier to optimize it simultaneously, and the classifier then trains the generator according to the defects in the generated image. When training the classifier on source images, the category is required to be separable, its value is defined as 1 (true), and the attribute values must be correct, so the classifier optimizes the full set of attribute evaluation values for source images. In contrast, for generated images the classifier only assumes that they are inseparable, with value 0 (false), so the remaining attribute values need not be considered. The detailed operation is shown in Figure 2. Meanwhile, to maintain the stability of the model, ClsGAN adds a penalty term to the classification loss; the concrete operation is shown in the loss function. We define the loss functions for training the classifier and the generator with respect to the classification adversarial network as follows:

(4)
(5)
(6)
(7)

where the two objectives denote the attribute loss functions used when training the classifier (C) and the generator, respectively. The first terms represent classification losses on the source and transferred images, and the penalty term is a gradient penalty for C computed on samples obtained by linear interpolation between the original and generated images. $x$ and $\tilde{x}$ are the source and transferred images. The label vectors are (n+1)-dimensional: the first dimension determines whether the attributes are separable or not, and the remaining n dimensions represent the values of the image's attributes. $\log(\cdot)$ and $\sigma(\cdot)$ denote the logarithm and sigmoid functions, respectively, and the $i$-th entry of the target or evaluated label represents the $i$-th attribute value.
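
To make the mechanism concrete, the following PyTorch-style sketch shows one way such a classification adversarial loss could be computed, assuming the classifier's first output dimension is the "separable" (real/fake) score and the remaining n outputs are attribute logits. The gradient penalty term mentioned above is omitted, and the exact weighting may differ from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def classifier_loss(cls_real, cls_fake, real_labels):
    """cls_real, cls_fake: (B, n+1) classifier outputs for source / generated images.
    real_labels: (B, n) ground-truth attribute values of the source images."""
    # First dimension: source attributes are "separable" (1), generated ones are not (0).
    sep_loss = F.binary_cross_entropy_with_logits(
        cls_real[:, 0], torch.ones_like(cls_real[:, 0])
    ) + F.binary_cross_entropy_with_logits(
        cls_fake[:, 0], torch.zeros_like(cls_fake[:, 0])
    )
    # Remaining n dimensions: supervise attribute values only on source images;
    # for generated images these entries are treated as "none" and ignored.
    attr_loss = F.binary_cross_entropy_with_logits(cls_real[:, 1:], real_labels)
    return sep_loss + attr_loss

def generator_cls_loss(cls_fake, target_labels):
    """The generator is pushed to make its outputs both 'separable' and
    consistent with the target attribute labels."""
    sep_loss = F.binary_cross_entropy_with_logits(
        cls_fake[:, 0], torch.ones_like(cls_fake[:, 0])
    )
    attr_loss = F.binary_cross_entropy_with_logits(cls_fake[:, 1:], target_labels)
    return sep_loss + attr_loss
```

In such a setup, classifier_loss would be minimized when updating the classifier and generator_cls_loss when updating the generator.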

3.4 Network structure

Figure 2 shows the overall network structure, in which the generator includes two encoders and a decoder. The encoder part consists of two convolutional neural networks, which operate on the image content and the image attributes, respectively. The content encoder $E_c$ extracts high-level semantic features of the image content from the source image and produces a content feature map. The attribute encoder $E_{att}$ extracts an attribute feature from the original image and produces a vector with the same dimension as the reference label.

The decoder concatenates the content features from $E_c$ with the attribute feature from $E_{att}$ (or the reference label), which is expanded to the same spatial size as the content feature, to construct a whole feature tensor. The decoder then takes this tensor as input to generate reconstructed images or images with the specified attributes. For the selective use of attribute and original image information, the decoder, which consists of a series of up-sampling convolutional layers, applies the upper convolution residual structure. The specific structure is shown in Figure 2(b).

The discriminator consists of a series of convolution layers and shares parameters with the classifier, which has the same structure as the discriminator except for the last layer. The source image and the generated image are used as the input of the discriminator and the classifier. Assuming the image attribute dimension is n, the output dimension of the classifier is n+1: the first dimension distinguishes whether the attributes are separable or not, and the remaining n dimensions correspond to the n attributes of the image. Following the loss-function design used in object detection [23], the classification vector of a generated image only contributes its first dimension to the loss, and the other dimensions are treated as none during the classifier training stage. The specific method is shown in Figure 2(c).
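
As a rough sketch of this shared-parameter design (the channel counts, kernel sizes, and PatchGAN-style adversarial head are illustrative assumptions, not the authors' exact architecture), the discriminator and classifier can share one convolutional trunk with two output heads:

```python
import torch.nn as nn

class DiscriminatorClassifier(nn.Module):
    """Shared convolutional trunk with two heads: an adversarial score and an
    (n+1)-dimensional classification output. Layer sizes are illustrative."""

    def __init__(self, n_attrs, img_channels=3, base_ch=64, n_layers=5):
        super().__init__()
        layers, ch = [], img_channels
        for i in range(n_layers):
            out_ch = base_ch * (2 ** i)
            layers += [nn.Conv2d(ch, out_ch, 4, stride=2, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
            ch = out_ch
        self.trunk = nn.Sequential(*layers)
        self.adv_head = nn.Conv2d(ch, 1, kernel_size=3, padding=1)    # adversarial score map
        self.cls_head = nn.Conv2d(ch, n_attrs + 1, kernel_size=4)     # (n+1)-dim classification

    def forward(self, x):
        h = self.trunk(x)
        adv = self.adv_head(h)
        # For a 128x128 input and 5 stride-2 layers, the 4x4 kernel collapses
        # the spatial size to 1x1, so flattening yields a (B, n+1) vector.
        cls = self.cls_head(h).flatten(1)
        return adv, cls
```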

3.5 Loss function

Adversarial loss To maintain stability, we follow the loss functions defined by WGAN [1] and WGAN-GP [5], and define the loss functions of the generator and discriminator during training as follows:

(8)
(9)

where the interpolated sample $\hat{x}$ is obtained by linear sampling between the original and generated images, and the attribute difference vector is the difference between the target attribute vector and the original attribute vector. The generator G is composed of the content encoder $E_c$, the attribute encoder $E_{att}$, and the decoder $G_{dec}$; D denotes the discriminator. The specific relationship for G is as follows:

(10)
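
For reference, the WGAN-GP objectives cited above typically take the following form; the notation here is generic ($x$ for the source image, $\tilde{x}$ for the transferred image, $\hat{x}$ for the interpolated sample, $l_t - l_s$ for the attribute difference vector) and may not match the paper's exact symbols:

$$\mathcal{L}^{adv}_{D} \;=\; \mathbb{E}\big[D(\tilde{x})\big] \;-\; \mathbb{E}\big[D(x)\big] \;+\; \lambda_{gp}\,\mathbb{E}\Big[\big(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\big)^2\Big],$$
$$\mathcal{L}^{adv}_{G} \;=\; -\,\mathbb{E}\big[D(\tilde{x})\big], \qquad \tilde{x} \;=\; G(x,\, l_t - l_s) \;=\; G_{dec}\big(E_c(x),\, l_t - l_s\big).$$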

Reconstruction loss StarGAN reconstructs the original images by means of a cycle consistency loss, which is less direct and accumulates generation errors over the cycle. In contrast, ClsGAN uses the attribute encoder to encode the input image attributes directly, and then feeds the attributes and content features into the decoder to reconstruct the image. The reconstruction loss function is as follows:

(11)

where the chosen norm is used to suppress blurring of the reconstructed images and to maintain sharpness.
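
Assuming the L1 norm (the usual choice for suppressing blurring) and the encoders and decoder defined above, the reconstruction loss can be written as:

$$\mathcal{L}_{rec} \;=\; \mathbb{E}_{x}\big[\,\|x - G_{dec}\big(E_c(x),\, E_{att}(x)\big)\|_1\,\big].$$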

Objective Considering formulas (7) and (9), the overall loss function for training the discriminator D and the classifier C can be expressed as:

(12)

The target function of the generator is:

(13)

where the classification adversarial losses of the classifier and generator are those introduced in Section 3.3, the attribute approximation loss is the one defined in Section 3.2, and the remaining coefficients are model trade-off parameters.
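
Putting the pieces together, a plausible combined form of (12) and (13) is shown below; the lambda symbols are generic trade-off weights, not the authors' exact notation:

$$\mathcal{L}_{D,C} \;=\; \mathcal{L}^{adv}_{D} \;+\; \lambda_{1}\,\mathcal{L}^{cls}_{C}, \qquad \mathcal{L}_{G} \;=\; \mathcal{L}^{adv}_{G} \;+\; \lambda_{1}\,\mathcal{L}^{cls}_{G} \;+\; \lambda_{2}\,\mathcal{L}_{rec} \;+\; \lambda_{3}\,\mathcal{L}_{app}.$$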

Figure 5: Face reconstruction results on the CelebA dataset for different models.

4 Experiments

We use the Adam optimizer to train the model. The learning rate is fixed for the initial epochs and then linearly decayed, and the trade-off parameters are kept fixed across all experiments. All experiments are performed in a PyTorch environment, with training on a single GeForce RTX 2080 GPU. Source code can be found at https://github.com/liuying/Cls-GAN.git.

4.1 Dataset

CelebA [18] is used in this paper for training and testing facial attribute transformation. CelebA is a large-scale face dataset containing celebrity face images annotated with facial attributes. The last portion of the dataset is used as the test set, and the remaining images are used as the training set. We center-crop the initial images and then resize them for training and testing.

Thirteen attributes are selected for attribute transfer in this paper: "Bald", "Bangs", "Black Hair", "Blond Hair", "Brown Hair", "Bushy Eyebrows", "Eyeglasses", "Gender", "Mouth Open", "Mustache", "No Beard", "Pale Skin", and "Age". These attributes cover the most prominent facial attributes.

4.2 Image Quality Assessment

The FID metric is used to evaluate image quality. The ClsGAN, StarGAN, AttGAN and STGAN models all use the ClsGAN test set to generate transformed images, and the attribute images corresponding to each source image in the test set are randomly sampled to form the evaluation set. At the same time, SSIM is adopted to evaluate the similarity between reconstructed and original images. The comparison results are shown in Table 1, from which it can be seen that our method is superior to the other methods in image quality, indicating that the upper convolution residual network indeed helps improve the quality of the generated images. Compared with STGAN, our reconstruction score (SSIM) improves from 0.92 to 0.97.
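
As an illustration of how the reconstruction-similarity score can be computed (a generic sketch, not the authors' evaluation script), SSIM between a source image and its reconstruction can be obtained with scikit-image:

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim  # requires scikit-image >= 0.19

def reconstruction_ssim(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Both inputs are HxWx3 uint8 arrays of the same shape."""
    return ssim(original, reconstructed, channel_axis=2, data_range=255)
```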

We also show the transfer and reconstruction results generated by these four methods. From Figure 1, the generated images are almost indistinguishable from the original images in terms of image quality. From Figure 5, ClsGAN restores the background color and skin color more faithfully than AttGAN and STGAN.

Method      StarGAN   AttGAN   STGAN     Ours
FID / SSIM  7.9/0.56  7.1/0.8  6.1/0.92  5.9/0.97
Table 1: FID (F, lower is better) and SSIM (S, higher is better) used to evaluate visual and reconstruction quality.
Figure 6: Attribute transfer accuracy of StarGAN, AttGAN, STGAN and ClsGAN.

4.3 Image attribute resolution evaluation

To evaluate classification accuracy, we use the CelebA training set to train an attribute classifier and use the same test set as the ClsGAN model; the average accuracy on the test set is 93%. The three comparison models mentioned above are compared with our model in terms of attribute accuracy. For single-attribute transfer, it can be seen from Figure 3 that the results of our model show a clear improvement in the overall attribute level.

To further compare the transformation ability of each model, we present the conversion accuracy of the 13 attributes on the test set for the four models as a bar chart. From Figure 6, the 'Bangs' attribute transfer accuracy is improved by 10 percentage points compared with STGAN, and the 'Mouth Open' attribute is improved by 6 percentage points compared with AttGAN.

To show the effect of the attribute approximation method, Figure 4 shows transferred images with attribute label values of 0, 0.2, 0.4, 0.6, 0.8, 1, 1.2, 1.4 and 1.6. We test not only attribute values between 0 and 1 but also values greater than 1, and the performance remains good, indicating that our model achieves attribute continuity. The transformation remains smooth even for attributes that modify large portions of the source image (such as bangs).

Figure 7: The transfer images from photograph to artistic style
Figure 8: The transfer images between seasons

4.4 Seasons and artistic styles transfer

To further demonstrate the transformation ability, we also use the model to realize mutual transformation between different seasons and between different artistic styles. The seasonal images, covering spring, summer, fall and winter, come from the Unsplash website. The artistic-style images mainly come from the WikiArt website, covering the Monet, Cezanne, Van Gogh and ukiyo-e styles, and ClsGAN realizes mutual transformation between these four styles and photographs. The photographs are downloaded from Flickr using landscape labels.

As can be seen from Figures 7 and 8, the model realizes season and style transformation with high image quality, indicating that it learns both the features that differ between attributes and the features that remain unchanged.

5 Conclusions

ClsGAN addresses the trade-off between image attribute accuracy and image quality caused by skip-connections by introducing the upper convolution residual network (Tr-resnet), which provides a method for producing images with high quality and accurate attributes. To improve the attribute accuracy of the generated images, we propose the classification adversarial network, inspired by the generative adversarial network. At the same time, to meet the requirement of multimodality, we approximate the reference label with the attribute feature vector generated by the style encoder. Experiments demonstrate the effectiveness of ClsGAN in face attribute editing and in style and season conversion.

References

  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In International conference on machine learning, pp. 214–223. Cited by: §2, §3.5.
  • [2] A. Brock, J. Donahue, and K. Simonyan (2018) Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096. Cited by: §2.
  • [3] Y. Choi, M. Choi, M. Kim, J. Ha, S. Kim, and J. Choo (2018) Stargan: unified generative adversarial networks for multi-domain image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8789–8797. Cited by: §1, §1, §2, §2, §3.2, §3.3.
  • [4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680. Cited by: §1, §1, §2, §2, §3.3.
  • [5] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville (2017) Improved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767–5777. Cited by: §2, §3.5.
  • [6] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen (2019) Attgan: facial attribute editing by only changing what you want. IEEE Transactions on Image Processing. Cited by: §1, §1, §1, §2, §2, §2, §3.1, §3.2, §3.3.
  • [7] G. E. Hinton and R. S. Zemel (1994) Autoencoders, minimum description length and helmholtz free energy. In Advances in neural information processing systems, pp. 3–10. Cited by: §1, §2.
  • [8] P. Isola, J. Zhu, T. Zhou, and A. A. Efros (2017) Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125–1134. Cited by: §2, §2.
  • [9] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §2.
  • [10] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114. Cited by: §2.
  • [11] G. Lample, N. Zeghidour, N. Usunier, A. Bordes, L. Denoyer, et al. (2017) Fader networks: manipulating images by sliding attributes. In Advances in Neural Information Processing Systems, pp. 5967–5976. Cited by: §1, §2.
  • [12] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther (2015) Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300. Cited by: §2.
  • [13] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. (2017) Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681–4690. Cited by: §2.
  • [14] M. Li, W. Zuo, and D. Zhang (2016) Deep identity-aware transfer of facial attributes. arXiv preprint arXiv:1610.05586. Cited by: §2.
  • [15] X. Li, J. Hu, S. Zhang, X. Hong, Q. Ye, C. Wu, and R. Ji (2019) Attribute guided unpaired image-to-image translation with semi-supervised learning. arXiv preprint arXiv:1904.12428. Cited by: §1, §1, §1, §2, §2.
  • [16] J. Lin, Y. Xia, Y. Wang, T. Qin, and Z. Chen (2019) Image-to-image translation with multi-path consistency regularization. arXiv preprint arXiv:1905.12498. Cited by: §2, §2.
  • [17] M. Liu, Y. Ding, M. Xia, X. Liu, E. Ding, W. Zuo, and S. Wen (2019) STGAN: a unified selective transfer network for arbitrary image attribute editing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3673–3682. Cited by: §1, §1, §1, §2, §2, §2, §3.1, §3.1, §3.2, §3.3.
  • [18] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pp. 3730–3738. Cited by: §4.1.
  • [19] M. Mirza and S. Osindero (2014) Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784. Cited by: §1, §2.
  • [20] A. Odena, C. Olah, and J. Shlens (2017) Conditional image synthesis with auxiliary classifier gans. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2642–2651. Cited by: §1, §1, §2.
  • [21] G. Perarnau, J. Van De Weijer, B. Raducanu, and J. M. Álvarez (2016) Invertible conditional gans for image editing. arXiv preprint arXiv:1611.06355. Cited by: §1, §2.
  • [22] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §2.
  • [23] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi (2016) You only look once: unified, real-time object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779–788. Cited by: §3.4.
  • [24] O. Ronneberger, P. Fischer, and T. Brox (2015) U-net: convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, pp. 234–241. Cited by: §2.
  • [25] P. Wu, Y. Lin, C. Chang, E. Y. Chang, and S. Liao (2019) RelGAN: multi-domain image-to-image translation via relative attributes. arXiv preprint arXiv:1908.07269. Cited by: §2, §2.
  • [26] T. Xiao, J. Hong, and J. Ma (2017) DNA-gan: learning disentangled representations from multi-attribute images. arXiv preprint arXiv:1711.05415. Cited by: §2.
  • [27] D. Xie, M. Yang, C. Deng, W. Liu, and D. Tao (2019) Fully-featured attribute transfer. arXiv preprint arXiv:1902.06258. Cited by: §1, §1, §1, §2, §2.
  • [28] S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He (2017) Aggregated residual transformations for deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492–1500. Cited by: §1.
  • [29] S. Zhou, T. Xiao, Y. Yang, D. Feng, Q. He, and W. He (2017) Genegan: learning object transfiguration and attribute subspace from unpaired data. arXiv preprint arXiv:1705.04932. Cited by: §2.
  • [30] D. Zhu, S. Liu, W. Jiang, C. Gao, T. Wu, and G. Guo (2019) UGAN: untraceable gan for multi-domain face translation. arXiv preprint arXiv:1907.11418. Cited by: §2, §2.
  • [31] J. Zhu, T. Park, P. Isola, and A. A. Efros (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pp. 2223–2232. Cited by: §2, §2.