Training on Art Composition Attributes to Influence CycleGAN Art Generation

12/19/2018 ∙ Holly Grimm

I consider how to influence CycleGAN, an image-to-image translation network, using additional constraints from a neural network trained on art composition attributes. I show how I trained the Art Composition Attributes Network (ACAN) by incorporating domain knowledge based on the rules of art evaluation, and I present the result of applying each art composition attribute to apple2orange image translation.

1 Introduction

The standard adversarial and cycle-consistency losses of CycleGAN [1] were augmented with additional loss terms from a convolutional neural network trained on art composition attributes. During CycleGAN training, the user specifies a target value for each art composition attribute. For instance, if a target contrast value of 10 is specified, the generator should output images with more contrast than if the target contrast value is 1.
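
A minimal sketch of how these targets can enter the generator objective is shown below. The use of Python/PyTorch, the attribute key names, the 1-10 numeric scale applied to every numeric attribute, and the weighting term lambda_acan are illustrative assumptions, not details taken from the paper.

```python
import torch

# Hypothetical attribute targets chosen by the user before CycleGAN training.
# Key names and values are placeholders; only contrast's 1-10 scale is
# described in the text above.
attribute_targets = {
    "texture": 5.0,
    "shape": 5.0,
    "size": 5.0,
    "color": 5.0,
    "contrast": 10.0,   # ask the generator for high-contrast output
    "repetition": 5.0,
}


def generator_objective(adv_loss, cycle_loss, identity_loss, acan_losses,
                        lambda_acan=1.0):
    # Standard CycleGAN generator losses plus the summed per-attribute ACAN
    # losses; lambda_acan is an assumed balancing weight.
    return adv_loss + cycle_loss + identity_loss + lambda_acan * sum(acan_losses.values())
```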

1.1 Art Composition Attributes

Eight art composition attributes were selected: variety of texture, variety of shape, variety of size, variety of color, contrast, repetition, primary color, and color harmony. Five hundred images from the WikiArt dataset [2] were labeled with these attributes. Figures 1, 2, 3 and 4 are examples of low and high values for variety of texture and contrast.

Figure 1: low texture
Figure 2: high texture
Figure 3: low contrast
Figure 4: high contrast

2 ACAN

Training consisted of fine-tuning a ResNet50 [3] pretrained on the ImageNet dataset. ResNet50 is a fifty-layer deep residual network with sixteen residual blocks. Global Average Pooling (GAP) is applied to the ReLU output of each of the sixteen residual blocks, known as rectified convolution maps [4]. The sixteen GAP outputs are concatenated and L2-normalized to form a merge layer, from which eight outputs are produced, one for each attribute.
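
The following is a minimal sketch of that architecture, assuming PyTorch and torchvision's pretrained ResNet50 rather than the author's original implementation; the split into six scalar heads plus two categorical heads, and the sizes of the primary-color and color-harmony heads, are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class ACAN(nn.Module):
    """Sketch of the Art Composition Attributes Network described in Section 2."""

    def __init__(self, num_primary_colors=12, num_harmonies=6):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.stem = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool
        )
        # The sixteen residual (Bottleneck) blocks of ResNet50: 3 + 4 + 6 + 3.
        self.blocks = nn.ModuleList(
            list(backbone.layer1) + list(backbone.layer2)
            + list(backbone.layer3) + list(backbone.layer4)
        )
        # Total width of the concatenated GAP vectors (256/512/1024/2048 channels).
        feat_dim = 3 * 256 + 4 * 512 + 6 * 1024 + 3 * 2048
        # Assumption: six scalar-regression heads plus two categorical heads;
        # head sizes are placeholders, not values from the paper.
        numeric = ["texture", "shape", "size", "color", "contrast", "repetition"]
        self.heads = nn.ModuleDict({name: nn.Linear(feat_dim, 1) for name in numeric})
        self.heads["primary_color"] = nn.Linear(feat_dim, num_primary_colors)
        self.heads["color_harmony"] = nn.Linear(feat_dim, num_harmonies)

    def forward(self, x):
        x = self.stem(x)
        pooled = []
        for block in self.blocks:
            x = block(x)                                            # rectified convolution map
            pooled.append(F.adaptive_avg_pool2d(x, 1).flatten(1))   # GAP per block
        # Concatenate the sixteen GAP vectors and L2-normalize: the merge layer.
        merged = F.normalize(torch.cat(pooled, dim=1), p=2, dim=1)
        # One output per attribute.
        return {name: head(merged) for name, head in self.heads.items()}
```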

3 CycleGAN and ACAN

In addition to the standard CycleGAN losses (adversarial, cycle-consistency, and identity), the ACAN losses are a series of eight losses generated by passing the translated image through the ACAN along with eight target attribute values. The differences between these target values and the values predicted by the network are the attribute losses.
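
A sketch of how these attribute losses might be computed is given below; the choice of mean-squared error for the numeric attributes and cross-entropy for the categorical ones is an assumption, as the paper does not state the exact loss functions, and the function name is hypothetical.

```python
import torch
import torch.nn.functional as F


def acan_attribute_losses(acan, translated_images, targets):
    # `acan` is the attribute network sketched in Section 2; `targets` maps each
    # of the eight attribute names to the user-specified value.
    preds = acan(translated_images)          # dict: attribute name -> prediction
    losses = {}
    for name, target in targets.items():
        pred = preds[name]
        if pred.shape[-1] == 1:              # numeric attribute (e.g. contrast, 1-10)
            t = pred.new_full(pred.shape, float(target))
            losses[name] = F.mse_loss(pred, t)
        else:                                # categorical attribute (class index)
            t = torch.full((pred.shape[0],), int(target),
                           dtype=torch.long, device=pred.device)
            losses[name] = F.cross_entropy(pred, t)
    return losses
```

These per-attribute losses are then summed into the generator objective alongside the standard CycleGAN terms, as in the earlier sketch.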

4 Results

Below is a sampling of results from training CycleGAN with ACAN on the apple2orange dataset. Even with a small training set of 500 labeled images, the ACAN is able to learn the attributes and steer the generator toward apples exhibiting the eight art composition attributes.

Figure 5: original apple image
Figure 6: color harmony analogous
Figure 7: color harmony complementary
Figure 8: color harmony monochromatic
Figure 9: low contrast
Figure 10: high contrast
Figure 11: low color
Figure 12: high color
Figure 13: low texture
Figure 14: high texture
Figure 15: low repetition
Figure 16: high repetition
Figure 17: original
Figure 18: translated
Figure 19: reconstructed

The most surprising result of this project is the painterly effect that the ACAN was able to inject into the CycleGAN-generated images, as seen in Figures 18 and 19.

Acknowledgments

This project was initially developed as part of the 2018 OpenAI Scholars program. I would like to thank my mentor, Christy Dennison from OpenAI, for her helpful comments along with support from Larissa Schiavo, Joshua Achiam, Jack Clark, and Greg Brockman from OpenAI.

References

[1] Zhu, J. & Park, T. & Isola, P. & Efros, A.A. (2017) Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv:1703.10593.

[2] Nichol, K. (2016) Kaggle dataset: Painter by numbers.
https://www.kaggle.com/c/painter-by-numbers.

[3] He, K. & Zhang, X. & Ren, S. & Sun, J. (2015) Deep Residual Learning for Image Recognition. arXiv:1512.03385.

[4] Malu, G. & Bapi, R.S. & Indurkhya, B. (2017) Learning Photography Aesthetics with Deep CNNs. arXiv:1707.03981.