The standard adversarial and cycle-consistency losses of a CycleGAN [1] were augmented with additional loss terms from a convolutional neural network trained on art composition attributes. During CycleGAN training, the user specifies a value for each of the art composition attributes. For instance, if a target contrast value of 10 is specified, the generator should output images with more contrast than if the target contrast value were 1.
1.1 Art Composition Attributes
Eight art composition attributes were selected: variety of texture, variety of shape, variety of size, variety of color, contrast, repetition, primary color, and color harmony. Five hundred images from the WikiArt dataset [2] were labeled with these attributes. Figures 1, 2, 3 and 4 show examples of low and high values for variety of texture and contrast.
Training consisted of fine-tuning a ResNet50 [3] pretrained on the ImageNet dataset. ResNet50 is a fifty-layer deep residual network containing sixteen residual blocks. Global Average Pooling (GAP) is applied to the ReLU output of each of the sixteen residual blocks, called rectified convolution maps [4]. The sixteen GAP outputs are concatenated and L2-normalized to form a merge layer. From the merge layer, there are eight outputs, one for each attribute.
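The pooling-and-merge step above can be sketched in NumPy (the ResNet50 backbone itself is omitted; the toy feature maps and their channel widths are stand-ins for the sixteen rectified convolution maps):

```python
import numpy as np

def merge_layer(feature_maps):
    """Build the ACAN merge layer from a list of rectified convolution
    maps, each shaped (channels, height, width): global average pooling
    per map, concatenation, then L2 normalization."""
    pooled = [fm.mean(axis=(1, 2)) for fm in feature_maps]  # GAP per block
    merged = np.concatenate(pooled)                         # concat 16 vectors
    return merged / np.linalg.norm(merged)                  # L2 normalization

# Toy stand-ins for the 16 ResNet50 block activations
# (channel widths 256x3, 512x4, 1024x6, 2048x3).
rng = np.random.default_rng(0)
widths = [256] * 3 + [512] * 4 + [1024] * 6 + [2048] * 3
maps = [np.abs(rng.normal(size=(c, 7, 7))) for c in widths]

vec = merge_layer(maps)  # unit-length merge-layer vector
```

The eight attribute outputs would then be linear heads applied to `vec`.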
3 CycleGAN and ACAN
In addition to the standard CycleGAN losses (adversarial, cycle-consistency, and identity), the ACAN losses are a series of eight losses generated when the translated image is passed through the ACAN along with the eight target values. The differences between these target values and the values output by the network are the attribute losses.
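As a minimal sketch of this step, the eight attribute losses can be computed as a per-attribute distance between the user-specified targets and the ACAN's predictions. The squared-error distance and the example target values here are assumptions for illustration, not the paper's exact formulation (categorical attributes such as primary color may well use a different loss):

```python
import numpy as np

def attribute_losses(acan_outputs, targets):
    """Per-attribute losses: squared difference between the eight
    user-specified target values and the ACAN's eight predictions.
    (Squared error is an assumed distance, for illustration only.)"""
    acan_outputs = np.asarray(acan_outputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    return (acan_outputs - targets) ** 2

# Hypothetical user-chosen targets for the eight attributes,
# e.g. a high target contrast of 10.
targets = [5.0, 5.0, 5.0, 5.0, 10.0, 5.0, 3.0, 2.0]
preds = [4.0, 5.5, 5.0, 6.0, 7.0, 5.0, 3.0, 2.5]  # ACAN outputs

losses = attribute_losses(preds, targets)
total_acan_loss = losses.sum()  # added to the CycleGAN generator loss
```

During training, `total_acan_loss` would be weighted and summed with the adversarial, cycle-consistency, and identity losses to form the full generator objective.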
Below is a sampling of results from running CycleGAN training with the ACAN on the apple2orange dataset. Even with a small training set of 500 images, the ACAN is able to learn and generate apples exhibiting the eight art composition attributes.
This project was initially developed as part of the 2018 OpenAI Scholars program. I would like to thank my mentor, Christy Dennison from OpenAI, for her helpful comments along with support from Larissa Schiavo, Joshua Achiam, Jack Clark, and Greg Brockman from OpenAI.
[1] Zhu, J., Park, T., Isola, P. & Efros, A.A. (2017) Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. arXiv:1703.10593.
[2] Nichol, K. (2016) Kaggle dataset: Painter by Numbers.
[3] He, K., Zhang, X., Ren, S. & Sun, J. (2015) Deep Residual Learning for Image Recognition. arXiv:1512.03385.
[4] Malu, G., Bapi, R.S. & Indurkhya, B. (2017) Learning Photography Aesthetics with Deep CNNs. arXiv:1707.03981.