Modeling and Forecasting Art Movements with CGANs

06/21/2019 · Edoardo Lisi, et al.

Conditional Generative Adversarial Networks (CGANs) are a recent and popular method for generating samples from a probability distribution conditioned on latent information. The latent information often comes in the form of a discrete label from a small set. We propose a novel method for training CGANs which allows us to condition on a sequence of continuous latent distributions f^(1), ..., f^(K). This training allows CGANs to generate samples from a sequence of distributions. We apply our method to paintings from a sequence of artistic movements, where each movement is considered to be its own distribution. Exploiting the temporal aspect of the data, a vector autoregressive (VAR) model is fitted to the means of the latent distributions that we learn, and used for one-step-ahead forecasting, to predict the latent distribution of a future art movement f^(K+1). Realisations from this distribution can be used by the CGAN to generate "future" paintings. In experiments, this novel methodology generates accurate predictions of the evolution of art. The training set consists of a large dataset of past paintings. While there is no agreement on exactly what current art period we find ourselves in, we test on plausible candidate sets of present art, and show that the mean distance to our predictions is small.


1 Introduction

Periodisation in art history is the process of characterizing and understanding art movements and their evolution over time. Each movement and period may last from years to decades, and encompass diverse styles. It is “an instrument in ordering the historical objects as a continuous system in time and space” [Schapiro, 1970], and it has been the topic of much debate among art historians [Kaufmann, 2010]. In this paper, we leverage the success of data generative models such as Generative Adversarial Networks (GANs) [Goodfellow et al., 2014] to learn the distinct features of widely agreed upon art movements, tracing and predicting their evolution over time.

Unlike previous work in machine learning, in which a clustering method is validated by showing that it recovers known categories, we take existing categories as given, and propose new methods to more deeply interrogate and engage with historiographical debates in art history about the validity of these categories. Time labels are critical to our modelling approach, following what one art historian called “a basic datum and axis of reference” in periodisation: “the irreversible order of single works located in time and space”. We take this claim to its logical conclusion, asking our method to forecast into the future. As the dataset we use covers agreed upon movements from the 15th to the 20th century, the future is really our present in the 21st century. We are thus able to evaluate one hypothesis about what movement we find ourselves in at present, namely Post-Minimalism, by comparing the “future” art we generate with our method to Post-Minimalist art (which was not part of our training set) and other recent movements.

We consider the following setting: each observed image $x_i$ has a cluster label $c_i \in \{1, \dots, K\}$ and resides in an image space $\mathcal{X}$, where we assume that the distribution of the images is a mixture of $K$ unknown distributions $p^{(1)}, \dots, p^{(K)}$. For each observed image we have $x_i \sim p^{(c_i)}$. We assume that, given data from the sequence of time-ordered distributions $p^{(1)}, \dots, p^{(K)}$, it is possible to approximate the next distribution, $p^{(K+1)}$. For example, each $x_i$ could be a single painting in a dataset of art. Further, each painting can be associated with one of $K$ art movements such as Impressionism, Cubism or Surrealism. In this example, $p^{(K+1)}$ represents an art movement of the future.

In this work we are interested in generating images from the next distribution $p^{(K+1)}$. However, modelling directly in the image space $\mathcal{X}$ is complicated. Therefore, we assume that there is an associated lower-dimensional latent space $\mathcal{Z}$, such that each image distribution $p^{(k)}$ is associated with a latent distribution $f^{(k)}$ in $\mathcal{Z}$, and every observed image $x_i$ is associated with a vector $z_i$ in the latent space, which we refer to as a code. We choose the latent space such that its lower dimension makes it comparatively easier to model: for example, an image $x_i$ consisting of many thousands of pixel values can be represented by a code $z_i$ of much smaller dimension. Thus, we consider the image-code-cluster tuples $(x_i, z_i, c_i)$.

Our contribution is as follows: we use a novel approach to Conditional Generative Adversarial Networks (CGANs, [Mirza and Osindero, 2014]) that conditions on continuous codes, which are in turn modelled with Vector Autoregression (VAR, [Sims, 1980]). The general steps of the method are:

  1. For each image $x_i$, learn a code $z_i$;

  2. Train a CGAN on the pairs $(x_i, z_i)$ to learn the conditional distribution of $x$ given $z$;

  3. Model the latent category distributions $f^{(1)}, \dots, f^{(K)}$;

  4. Predict $f^{(K+1)}$ and draw new latent samples $z^* \sim \hat{f}^{(K+1)}$;

  5. Sample new images from the CGAN of step 2, conditioning on $z^*$.
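To make these steps concrete, the following is a minimal sketch in Python/NumPy. The encode and generate functions are hypothetical placeholders standing in for the trained autoencoder (step 1) and the trained conditional generator (steps 2 and 5), and a plain linear trend stands in for the sparse VAR of Section 2.4; this illustrates the workflow only, not the implementation used in the paper.

import numpy as np

# Hypothetical placeholders for the trained autoencoder (step 1) and the trained
# conditional generator (steps 2 and 5); real models would replace these.
encode = lambda images: images.reshape(len(images), -1)[:, :16]
generate = lambda codes: np.tanh(codes @ np.random.randn(16, 64))

def forecast_next_movement(images, labels, n_samples=9):
    # Steps 3-5: per-movement latent Gaussians, one-step-ahead mean forecast, sampling.
    codes = encode(images)
    K = labels.max() + 1
    means = np.stack([codes[labels == k].mean(axis=0) for k in range(K)])
    covs = np.stack([np.cov(codes[labels == k], rowvar=False) for k in range(K)])
    # A simple linear trend over the movement index stands in for the sparse VAR.
    slope = np.polyfit(np.arange(K), means, deg=1)[0]
    mu_next = means[-1] + slope
    cov_next = covs.mean(axis=0)
    z_new = np.random.multivariate_normal(mu_next, cov_next, size=n_samples)
    return generate(z_new)

# Toy usage with random arrays standing in for paintings and movement labels.
fake_images = np.random.rand(200, 8, 8)
fake_labels = np.repeat(np.arange(5), 40)
print(forecast_next_movement(fake_images, fake_labels).shape)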

Typically, the aim of GANs is to generate realisations from an unknown distribution with density $p$ based on observations $x_1, \dots, x_n$. Existing approaches for training GANs are mostly focused on learning a single underlying distribution of training data. However, this work is concerned with handling a sequence of densities $p^{(1)}, \dots, p^{(K)}$. As mentioned earlier, our objective is to generate images from $p^{(K+1)}$ using trend information we learn from data from the previous distributions. To do this, a VAR model is used for the sequence of latent distributions $f^{(1)}, \dots, f^{(K)}$.

CGANs generate new samples from the conditional distribution of the data given a latent variable $z$. The majority of current CGAN literature (e.g. [Isola et al., 2017, Gauthier, 2014]) considers the latent variable as a discrete label from a small set or as another image. In this work, however, the conditioning variable $z$ is a continuous random variable. Although conditioning on discrete labels is a simple and effective way to generate images from an individual category without needing to train a separate GAN for each, discrete labels do not provide a means to generate images from an unseen category. We show that conditioning on a continuous space can indeed solve this issue.

Figure 1: Diagram illustrating our method. The latent space $\mathcal{Z}$ is chosen to be lower-dimensional than the image space $\mathcal{X}$. Moving from $\mathcal{X}$ to $\mathcal{Z}$ does not necessarily need to be done via an autoencoder, as noted in Section 2.3.

Our CGAN is trained on samples from the $K$ observed categories. Based on this trained CGAN, new “future” samples from category $K+1$ are obtained by conditioning on draws from $\hat{f}^{(K+1)}$. In other words, we use the CGAN to generate images based upon the prediction given by the VAR model in the latent space, i.e. to generate new images from $p^{(K+1)}$. In this paper, the latent representations are obtained via an autoencoder; see Section 2.3 later.

It is important to point out that the method does not aim to model a sequence of individual images, but a sequence of distributions of images. Recalling the art example: an individual painting in the Impressionism category is not part of a sequence with e.g. another individual painting in the Post-Impressionism category. It is the two categories themselves that have to be modelled as a sequence.

The novel contribution of this paper can be summarised as generating images from a distribution $p^{(K+1)}$ for which no observations are available, by exploiting the sequential nature of the distributions via a latent representation. This is achieved by combining existing methodologies in a novel fashion, while also exploring the seldom-used concept of a CGAN that conditions on continuous variables. We assess the performance of our method using widely agreed upon art movements from the WikiArt dataset (https://www.wikiart.org/) to train a model which can generate art from a predicted movement; comparisons with the real art movements that follow the training set show that the prediction is close to ground truth.

To summarise, the overall objectives considered in this paper are:

  • Derive a latent representation $z_i$ for each training sample $x_i$.

  • Find a model for the $K$ categories in this latent space.

  • Predict the “future”, i.e. category $K+1$, in the latent space.

  • Generate new images that have latent representations corresponding to the predicted $(K+1)$-th category.

There exist other methods that have used GANs and/or autoencoders for predicting new art movements. For instance, [Vo and Soh, 2018] proposes a collaborative variational autoencoder [Li and She, 2017] that is trained to project existing art pieces into a latent space, then to generate new art pieces from imaginary art representations. However, this work generates new art subject to an auxiliary input vector to the model and does not capture sequential information across different movements.

The idea of using GANs to generate new art movements has been explored by [Elgammal et al., 2017] via Creative Adversarial Networks. These networks were designed to generate images that are hard to categorize into existing movements. Unlike our work, there is no modelling of the sequential nature of movements.

Modelling the sequential nature of a dataset is not limited to images/paintings: for instance, the history of music can also be interpreted as a succession of genres. Using GANs for music has been explored by [Mogren, 2016], but again modelling the sequential nature of genres has not been explored.

Figure 2: Top two rows: a sample of images generated from our estimated “future”. Third and fourth rows, respectively: real paintings from Post-Minimalism and New Casualism; these two movements succeed the most recent art in the training set, and are thus used to evaluate our generated “future” in Section 3.3.

2 Methodology

We now describe the general method used to model a sequence of latent structures of images and use this model to make future predictions. The full procedure is outlined in Algorithm 1. The remaining subsections are devoted to discussing the main steps of this algorithm in detail.

2.1 Generative adversarial networks

A GAN is comprised of two artificial neural networks: a generator $G$ and a discriminator $D$. The two components are pitted against each other in a two-player game: given a sample of real images, the generator $G$ produces random “fake” images that are supposed to look like the real sample, while $D$ tries to determine whether these generated images are fake or real. An important point is that only $D$ has access to the sample of real images; $G$ will initially output noise, which will improve as $D$ sends feedback. At the same time, $D$ will train to become better and better at judging real from fake, until an equilibrium is reached, such that the distribution implicitly defined by the generator corresponds to the underlying distribution of the training data; see [Goodfellow et al., 2014] for more details. In practice, the training procedure does not guarantee convergence. A good training procedure, however, can bring the distribution of the generator very close to its theoretical optimum.

CGANs [Mirza and Osindero, 2014] are an extension of GANs where the generator produces samples by conditioning on extra information. The data that we wish to condition on is fed to both the generator and discriminator. The conditioning information can be a label, an image or any other form of data. For instance, [Mirza and Osindero, 2014] generated specific digits that imitate the MNIST dataset by conditioning on a one-hot label of the desired digit.

More technically: a generator, in the GAN framework, learns a mapping $G: \epsilon \mapsto x$, where $\epsilon$ is random noise and $x$ is a sample. A conditional generator, on the other hand, learns a mapping $G: (\epsilon, z) \mapsto x$, where $z$ is the information to be conditioned on. The pair $(x, z)$ is input to the discriminator as well, so that it learns to estimate the probability of observing $x$ given a particular $z$. The objective function of the CGAN is similar to the standard GAN’s: at the optimum, the conditional distribution of the generator converges to the underlying conditional distribution of $x$ given $z$ [Chrysos et al., 2018].
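For reference, the standard conditional GAN objective of [Mirza and Osindero, 2014], written with $\epsilon$ for the noise and $z$ for the conditioning code as above (this is the generic formulation, not a new objective introduced here), is

\min_G \max_D \; \mathbb{E}_{(x, z) \sim p_{\mathrm{data}}}\left[\log D(x \mid z)\right] + \mathbb{E}_{\epsilon \sim p_\epsilon,\, z \sim f}\left[\log\left(1 - D(G(\epsilon \mid z) \mid z)\right)\right].

In this paper the code $z$ entering the second term is drawn from the continuous mixture $f$ of equation (1), as described in Section 2.2.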

In our setting, a CGAN is trained on a dataset of images where every image $x_i$ is associated with a latent vector $z_i$. The latent vectors are considered realisations of a mixture distribution with density

f(z) = \sum_{k=1}^{K} \pi_k f^{(k)}(z),    (1)

where $\pi_1, \dots, \pi_K$ are mixture weights. Each density $f^{(k)}$ corresponds to an artistic movement, and $f^{(1)}, \dots, f^{(K)}$ is considered to be a sequence of densities. We again stress that $z$ is assumed to be a continuous random variable. See Section 2.2 for details of how to train CGANs with a continuous latent space.

The conditional generator is trained to imitate images from the conditional density of $x$ given $z$. After being trained, the generator can be used to sample new images. This can be achieved by sampling $z$ from the latent space $\mathcal{Z}$. Note that we are capable of sampling from areas of $\mathcal{Z}$ where little data was observed during training. Then the generator is forced to condition on “new” information, thus producing images with novel features.

2.2 Continuous CGAN: training details

Usually, CGANs condition on a discrete label [Mirza and Osindero, 2014] and are straightforward to train: training sets for this task contain many images for each label category. Then training $G$ and $D$ on generated images is a two-step task: (a) pick a label at random and generate an image given this label, then (b) update the model parameters based on the image-label pair.

When training a continuous CGAN, however, each $x_i$ in the training set is associated with a unique $z_i$. Picking an existing $z_i$ to generate a new image is an unsatisfactory solution: if done during training, $G$ would learn to generate exact copies of the original $x_i$ associated with $z_i$. We would also lose the flexibility of being able to use the whole continuous latent space, instead of selecting individual points in it.

As mentioned in Section 2.1, the latent vectors are considered realisations of a mixture distribution with components $f^{(1)}, \dots, f^{(K)}$ and weights $\pi_1, \dots, \pi_K$. We propose the novel idea of approximating the latent distribution as a mixture of multivariate normals, and of using this approximation to sample new codes $z$ during and after training. We compute the sample means $\hat{\mu}^{(k)}$ and covariances $\hat{\Sigma}^{(k)}$ of the codes in each category $k$. Then each density component $f^{(k)}$ is approximated as $\mathcal{N}(\hat{\mu}^{(k)}, \hat{\Sigma}^{(k)})$. The weights are estimated as $\hat{\pi}_k = n_k / n$, the proportion of training images in category $k$.

Generating a new code $z$ for the purpose of training, or for producing images from a trained model, is then done as follows: (a) pick a category $k$ with probability $\hat{\pi}_k$, (b) draw a random $z \sim \mathcal{N}(\hat{\mu}^{(k)}, \hat{\Sigma}^{(k)})$, and (c) use the generator with its current parameters to produce an image conditioned on $z$.
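A minimal sketch of this sampling scheme, assuming the per-category latent codes are already available as NumPy arrays (the array sizes, dimensions and seed below are illustrative, not taken from the paper):

import numpy as np

rng = np.random.default_rng(0)

# Illustrative codes: 5 categories of 2-D latent vectors.
codes_by_category = [rng.normal(loc=k, scale=0.5, size=(100, 2)) for k in range(5)]

# Per-category sample means, covariances and mixture weights (Section 2.2).
means = [c.mean(axis=0) for c in codes_by_category]
covs = [np.cov(c, rowvar=False) for c in codes_by_category]
n_k = np.array([len(c) for c in codes_by_category])
weights = n_k / n_k.sum()

def sample_code():
    # (a) pick a category with probability pi_k, (b) draw z from N(mu_k, Sigma_k).
    k = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(means[k], covs[k])

z = sample_code()  # (c) this z would then be fed to the conditional generator
print(z)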

2.3 Obtaining the latent codes via autoencoders

Figure 3: Visualization of the latent space. The two dimensions of $\mathcal{Z}$ that best correlate with the movement index were chosen and plotted on the x and y axes. Three movements are highlighted: Early Renaissance (blue, top-left), Impressionism (red, centre), and Minimalism (green, bottom-right). The clustering and temporal ordering are clearly visible.

So far we have assumed that each image $x_i$ is associated with a latent vector $z_i$. In principle, these latent representations of the images can be obtained with any method. Some reasonable properties of the method are as follows:

  • If images $x_i$ and $x_j$ are similar, then their associated latent vectors $z_i$ and $z_j$ should be close. Here the concept of closeness or “similarity” is not restricted to the simple pixel-wise norm $\|x_i - x_j\|$, but is instead a broader concept of similarity between the features of the images. For instance, two images containing boats should be close in the latent space even if the boat is in a different position in each image.

  • Sampling from the latent space $\mathcal{Z}$ needs to be straightforward.

Autoencoders are an easy and flexible choice that satisfies the two points above. For this reason, we choose to use autoencoders in this work. However, we stress that any method with the properties described in the list above can be used to obtain the latent codes.

[Johnson et al., 2016] made use of a perceptual loss function between two images to fulfill the tasks of style transfer and super-resolution. The method, which builds on earlier work by [Gatys et al., 2016], is based on comparing high-level features of the images instead of comparing the images themselves. The high-level features are extracted via an auxiliary pre-trained network, e.g. a VGG classifier [Simonyan and Zisserman, 2015]. The same concept can be applied to autoencoders, and the resulting latent space satisfies the above point about preservation of image similarity. We use this perceptual loss specifically for art data: the details are in Section 3.1.

Note that the latent space is learned without knowledge of the categories $c_i$. It is assumed that, when moving from $f^{(1)}$ to $f^{(K)}$, the distributions are somewhat ordered in the latent space. This is, however, not guaranteed. The assumption can easily be tested, as is done in Section 3.2.

2.4 Predicting the future latent distribution

We make the assumption that the image distributions $p^{(1)}, \dots, p^{(K)}$ have a non-trivial relationship, and that they can be interpreted as being a “sequence of distributions”. Furthermore, we assume that this sequential relationship is preserved when we map the distributions to $f^{(1)}, \dots, f^{(K)}$ in $\mathcal{Z}$ using the autoencoder. The key part of our method is that we assume the latent space and latent distributions to be simple enough that we can predict $f^{(K+1)}$, which is completely unobserved. Then we aim to use the same conditional generator trained as described in Section 2.1 to sample from $p^{(K+1)}$, which is equally unobserved. In our setting, the sequence of densities $f^{(1)}, \dots, f^{(K)}$ represents, in the case of the WikiArt dataset, a latent sequence of artistic movements.

The underlying distribution of $z$ is unknown. Suppose we have realisations from each of the distributions $f^{(1)}, \dots, f^{(K)}$ (see Section 2.3); then we model this sequence of latent distributions as follows. We assume that each $f^{(k)}$ follows a normal distribution $\mathcal{N}(\mu^{(k)}, \Sigma^{(k)})$. Denote $\hat{\mu}^{(k)}$, an estimator of $\mu^{(k)}$, as the sample mean of the codes in category $k$. Then the mean is modelled using the following vector autoregression (VAR) model with a linear trend term:

\hat{\mu}^{(k)} = \nu + \delta k + \sum_{j=1}^{p} A_j \hat{\mu}^{(k-j)} + \varepsilon^{(k)},    (2)

where the vectors $\nu$, $\delta$ and the matrices $A_1, \dots, A_p$ are parameters that need to be estimated, and $\varepsilon^{(k)}$ is a noise term. Estimation is performed using a sparse specification (e.g. via LASSO) in the high-dimensional case.

Once the parameters are estimated, we can predict $\hat{\mu}^{(K+1)}$, the latent mean of the unobserved future distribution.

The covariance of $f^{(K+1)}$ is estimated by $\hat{\Sigma}^{(K+1)} = \frac{1}{K} \sum_{k=1}^{K} \hat{\Sigma}^{(k)}$. For the WikiArt dataset we observed little change in the empirical covariance structure of the $f^{(k)}$, and therefore elected to use an average of the observed covariances.

The future latent distribution $f^{(K+1)}$ is therefore approximated as $\mathcal{N}(\hat{\mu}^{(K+1)}, \hat{\Sigma}^{(K+1)})$.
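To illustrate the forecasting step, here is a minimal NumPy sketch that fits a VAR(1) with a linear trend by ordinary least squares and forecasts one step ahead. The latent means are simulated for illustration, and plain OLS stands in for the sparse, LASSO-penalised estimation used in the paper.

import numpy as np

rng = np.random.default_rng(1)

# Illustrative per-movement latent means: K = 20 movements, latent dimension 4,
# drifting along a rough linear trend (a stand-in for the WikiArt means).
K, d = 20, 4
mu = np.cumsum(rng.normal(0.2, 0.1, size=(K, d)), axis=0)

# VAR(1) with linear trend, fitted by ordinary least squares:
#   mu[k] ~ nu + delta * k + A @ mu[k-1]
# (the paper uses a sparse, LASSO-penalised fit; plain OLS is used here for brevity).
k_idx = np.arange(1, K)
X = np.column_stack([np.ones(K - 1), k_idx, mu[:-1]])     # shape (K-1, 2+d)
B, *_ = np.linalg.lstsq(X, mu[1:], rcond=None)            # shape (2+d, d)

# One-step-ahead forecast of the mean of the unobserved movement K+1.
x_next = np.concatenate([[1.0, K], mu[-1]])
mu_next = x_next @ B

# Average of the per-movement covariances (illustrative values here).
cov_next = np.mean([np.eye(d) * 0.1 for _ in range(K)], axis=0)

# "Future" latent codes that would be fed to the trained conditional generator.
z_future = rng.multivariate_normal(mu_next, cov_next, size=16)
print(mu_next.round(2), z_future.shape)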

The entire method described in Section 2 is outlined in Algorithm 1.

1 Train a CGAN with generator $G$ and discriminator $D$ on real and fake pairs $(x, z)$;
2 Estimate $\hat{\mu}^{(1)}, \dots, \hat{\mu}^{(K)}$, the sample means of the $K$ categories of latent codes;
3 Fit a VAR model on $\hat{\mu}^{(1)}, \dots, \hat{\mu}^{(K)}$ and predict $\hat{\mu}^{(K+1)}$;
4 Draw “future” codes $z^* \sim \mathcal{N}(\hat{\mu}^{(K+1)}, \hat{\Sigma}^{(K+1)})$, where $\hat{\Sigma}^{(K+1)} = \frac{1}{K}\sum_{k=1}^{K}\hat{\Sigma}^{(k)}$;
5 Generate new images by sampling from the conditional generator given $z^*$;
Algorithm 1: Predicting $p^{(K+1)}$ using CGANs

2.5 Theoretical notes on the procedure

The autoencoder, or any alternative method that satisfies the properties laid out in Section 2.3, maps each image $x_i$ to a low-dimensional latent vector $z_i$. This mapping implicitly defines a distribution in the latent space, and our assumption is that each distribution of images $p^{(k)}$ is mapped to a distribution $f^{(k)}$ in the latent space.

The conditional generator produces samples from a distribution $p^{g}(x \mid z)$, where the latent code $z$ can come from any of the latent distributions $f^{(k)}$, $k = 1, \dots, K$. Note the superscript “$g$” in $p^{g}$, indicating that the distribution implicitly defined by the generator does not necessarily equal the theoretical training optimum (as mentioned in Section 2.1). Nevertheless, we will proceed under the assumption that a good training procedure results in a conditional generator close to the theoretical equilibrium. The conditional generator, just like the autoencoder, does not know which movement $x$ and $z$ belong to.

Recall that the overall distribution of all latent codes was modelled as a mixture of the movement-wise distributions $f^{(k)}$ in equation (1). Our method is based on the premise that, while the conditional GAN is trained on the whole space of the $K$ movements, new samples can be generated from an individual movement $k$ by conditioning on a random variable $z$ drawn from $f^{(k)}$. That is, if we draw $z \sim f^{(k)}$, the conditional generator will produce a sample whose empirical distribution is close to $p^{(k)}$. This is motivated by marginalizing $z$ out of $p^{g}(x \mid z)$:

p^{g,(k)}(x) = \int_{\mathcal{Z}} p^{g}(x \mid z) \, dF^{(k)}(z),    (3)

where $p^{g,(k)}$ denotes the distribution of generated samples when $z \sim f^{(k)}$, $F^{(k)}$ is the cumulative distribution function associated with $f^{(k)}$, and $p^{g}$ is the distribution implicitly defined by the generator. The properties described in this subsection, together with all of Section 2, are summarised in Figure 1.

3 Results

The performance of our method presented in Section 2 is demonstrated on the WikiArt dataset (https://www.wikiart.org/), where each category represents an art movement. All experiments are implemented with TensorFlow [Abadi et al., 2015] via Keras, and run on an NVIDIA GeForce GTX 1050. All code is available at https://github.com/cganart/gan_art_2019.

After the setting is introduced in Section 3.1, the structure of the resulting latent spaces is discussed in Section 3.2. Finally, Section 3.3 describes the prediction and generation of future art from $\hat{f}^{(K+1)}$.

3.1 WikiArt results

The dataset considered is the publicly available WikiArt dataset, which contains images categorized into various movements, types (e.g. portrait or landscape), artists, and sometimes years. We use the central square of each image, re-sized to a fixed pixel resolution. Note that a small number of raw images could not be reshaped into our desired format, slightly reducing the total sample size.

Additionally, note that all images considered are paintings; images that are tagged as “sketch and study”, “illustration”, “design” or “interior” were excluded. The remaining images can then be categorised into 20 notable and well-defined artistic movements from Western art history (see Table 1).

Movement Year n
Early Renaissance 1440 1194
High Renaissance 1510 1005
Mannerism 1560 1204
Baroque 1660 3883
Rococo 1740 2108
Neoclassicism 1800 1473
Romanticism 1825 7073
Realism 1860 8680
Impressionism 1885 8929
Post-Impressionism 1900 5110
Fauvism 1905 680
Expressionism 1910 6232
Cubism 1910 1567
Surrealism 1930 3705
Abstract Expressionism 1945 1919
Tachisme / Art Informel 1955 1664
Lyrical Abstraction 1960 652
Hard Edge Painting 1965 362
Op Art 1965 480
Minimalism 1970 446
Table 1: Summary of the WikiArt dataset. “Year” is the approximate median year of the art movement, and “n” is the number of images.

In order to apply Algorithm 1, each image $x_i$ in the dataset needs to be associated with a latent vector $z_i$. As described in Section 2.3, a non-variational autoencoder with perceptual loss is utilized. Note again that the category labels $c_i$ associated with each image are not revealed to the autoencoder when training it. Two autoencoders are separately trained with a content loss and a style loss, which are now defined:

Content loss:

\ell_{\mathrm{content}}^{j}(x, \hat{x}) = \frac{1}{C_j H_j W_j} \left\| \phi_j(x) - \phi_j(\hat{x}) \right\|_2^2,

where $\phi_j(\cdot)$ is the $j$-th convolutional activation of a trained auxiliary classifier, while $C_j$, $H_j$ and $W_j$ are the dimensions of the output of that same layer. [Johnson et al., 2016] and [Gatys et al., 2016] explain how this loss function is minimized when two images share extracted features that represent the overall shapes and structures of objects and backgrounds; they also discuss how the choice of $j$ influences the result. Each image $x_i$ is thus associated with a content latent vector $z_i^{\mathrm{content}}$.

Style loss:

\ell_{\mathrm{style}}^{j}(x, \hat{x}) = \left\| G_j(x) - G_j(\hat{x}) \right\|_F^2,

where $G_j(\cdot)$ is the Gram matrix of layer $j$ of the same auxiliary classifier used for the content loss. This loss function is used to measure the similarity between images that share the same repeated textures and colours, which we collectively call style. Each image $x_i$ is thus associated with a style latent vector $z_i^{\mathrm{style}}$.

The auxiliary classifier is obtained by training a simplified version of the VGG16 network [Simonyan and Zisserman, 2015] on the tinyImageNet dataset (https://tiny-imagenet.herokuapp.com/). Once each image has its content and style latent vectors, these are concatenated to obtain $z_i = (z_i^{\mathrm{content}}, z_i^{\mathrm{style}})$.
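The two perceptual losses can be sketched as follows. For illustration this uses the standard ImageNet-pretrained VGG16 shipped with Keras as the auxiliary network and an arbitrarily chosen layer, whereas the paper trains its own simplified VGG16 on tinyImageNet.

import tensorflow as tf

# Illustrative auxiliary network: ImageNet-pretrained VGG16 from Keras, truncated
# at an arbitrarily chosen layer (the paper trains a simplified VGG16 instead).
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
feat = tf.keras.Model(vgg.input, vgg.get_layer("block3_conv3").output)

def content_loss(x, x_hat):
    # Mean squared difference of the layer activations of the two images.
    return tf.reduce_mean(tf.square(feat(x) - feat(x_hat)))

def gram_matrix(a):
    # Channel-by-channel Gram matrix of an activation map, normalised by spatial size.
    g = tf.linalg.einsum("bijc,bijd->bcd", a, a)
    n = tf.cast(tf.shape(a)[1] * tf.shape(a)[2], tf.float32)
    return g / n

def style_loss(x, x_hat):
    # Squared distance between Gram matrices: similarity of textures and colours.
    return tf.reduce_mean(tf.square(gram_matrix(feat(x)) - gram_matrix(feat(x_hat))))

# Toy usage on random tensors standing in for an image and its reconstruction.
x = tf.random.uniform((2, 224, 224, 3))
x_hat = tf.random.uniform((2, 224, 224, 3))
print(float(content_loss(x, x_hat)), float(style_loss(x, x_hat)))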

Figure 4: Evolution of artistic movements as generated by our method. The last column, labeled “future”, is the prediction for movement $K+1$, generated by conditioning on codes $z \sim \mathcal{N}(\hat{\mu}^{(K+1)}, \hat{\Sigma}^{(K+1)})$. Note that there is no relationship between images in the same row but different column.

Finally, the CGAN is trained by conditioning on the continuous latent space, as described in Algorithm 1. Figure 4 contains examples of generated images from various artistic movements. Some qualitative comments can be made (quantitative evaluations are in Sections 3.2 and 3.3):

  • There is very good between-movement variation and within-movement variation. It is hard to find two generated images that are similar to each other.

  • One of the main reasons that guided the use of a perceptual autoencoder was the fact that movements vary not only in style (e.g. color, texture) but also in content (e.g. portrait or landscape). From this point of view our method is a success. Each movement appears to have its own set of colors and textures. Additionally, movements that were overwhelmingly portraits in the training set (e.g. Baroque) result in generated images that mostly mimic the general structure of human figures. Similarly, movements with a lot of landscapes (e.g. Impressionism) result in generated images that are also mostly landscapes; the latter tend to be of very good quality.

  • More abstract movements (e.g. Lyrical Abstraction) result in very colorful generated images with little to no structure, as is to be expected. Interesting behaviors can be observed: Op Art paintings, for instance, are generally very geometric and often reminiscent of chessboards, and the generator’s effort to reproduce this can be clearly observed (Figure 4). The same can be said of Minimalist art, where many paintings are monochromatic canvases; the generator does a fairly good job at reproducing this as well.

A drawback of using the WikiArt dataset is that the relatively small number of movements ($K = 20$) forces the use of a very sparse version of VAR [Kilian and Lütkepohl, 2017]. As a result, the predicted future mean is almost entirely determined by the linear trend component of the VAR model, $\nu + \delta k$; the autoregressive component is largely non-influential, as the parameter matrices $A_j$ are shrunk towards zero by the sparse formulation.

3.2 Latent space analysis

Section 2.3 mentioned that it is not guaranteed that the $K$ categories will actually be ordered in the latent space, although it is expected. We implement a simple heuristic to test this in the WikiArt case: suppose that $c = (1, \dots, K)^\top$ is the vector of movement indices and that $M$ is a $K \times d$ matrix with the movement means $\hat{\mu}^{(1)}, \dots, \hat{\mu}^{(K)}$ as rows ($d$ is the dimension of the latent space). Then we can fit a simple linear regression $c = \alpha + M\beta + \varepsilon$, where the intercept $\alpha$ and the coefficient vector $\beta$ are parameters. We do this for various types of latent vectors obtained with different loss functions: pixel-wise cross-entropy, style-only, content-only, the sum of the latter two (“joint”), and a concatenation of style-only and content-only.

Table 2 displays the $R^2$ values for each type of latent vector, which can be directly compared as the matrix $M$ always has the same size $K \times d$. The mean of the absolute correlations between pairs of the dimensions of each latent space is also presented in Table 2. This is a simple measure of how the various dimensions of the latent vectors are correlated with each other.

Standard Style Content Joint Concat.
$R^2$
Mean abs. cor.
Table 2: Performance when regressing the movement label on the movement means of various types of latent spaces: $R^2$ of the regression (first row) and mean absolute correlation between latent dimensions (second row).
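For concreteness, the two quantities reported in Table 2 can be computed as in the following sketch, where simulated movement means stand in for the real $\hat{\mu}^{(k)}$ (the scikit-learn usage and array sizes are illustrative only):

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Simulated movement means: K = 20 movements in a d = 8 dimensional latent space,
# constructed so that they drift with the movement index (stand-in for real means).
K, d = 20, 8
M = rng.normal(size=(K, d)) + np.outer(np.arange(K), rng.normal(size=d)) * 0.3
c = np.arange(1, K + 1)                       # movement indices 1..K

# R^2 of regressing the movement index on the movement means (first row of Table 2).
r2 = LinearRegression().fit(M, c).score(M, c)

# Mean absolute correlation between pairs of latent dimensions (second row of Table 2).
corr = np.corrcoef(M, rowvar=False)
mean_abs_cor = np.abs(corr[np.triu_indices(d, k=1)]).mean()

print(round(r2, 3), round(mean_abs_cor, 3))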

The results suggest using a perceptual loss instead of a pixel-wise loss: the results for the latter columns (the different types of perceptual losses) are much better than the “standard” latent space obtained via pixel-wise cross-entropy. Further, the results suggest using two separate autoencoders for style and content and then concatenating the resulting latent vectors: the last column has the highest $R^2$ of all methods, while also having a between-dimensions correlation that is lower than using a sum of style loss and content loss. Overall, this is an impressive result: recall that the autoencoders do not have access to the movement labels $c_i$. Despite this, the latent vectors are able to predict those same movement labels quite accurately. This result confirms that there is indeed a natural ordering of the art movements (which corresponds to their temporal order), and that this natural ordering is reflected in the latent space. This can also be seen in the means of the clusters of latent vectors in Figure 3.

Figure 5 displays a heatmap of distances between pairs of movements in the latent space. Most notably, the matrix exhibits a block-diagonal structure. This means that (a) movements that are chronologically close are also close in the latent space and (b) there tends to be an alternation between series of movements being similar to each other and points where a new movement breaks from the past more significantly. Figure 5 also shows the position of predicted and real “future” (or current) movements relative to the movements in the training set. More detail can be found in Section 3.3.

Figure 5: Matrix of Euclidean distances between the means of individual movements in the latent space. The movements are ordered chronologically. The last two columns/rows represent true Post-Minimalist paintings as well as our prediction. Note the block-diagonal tendencies.

3.3 Future prediction

Once the CGAN is fully trained on the $K$ training-set categories, autoregression methods are used to generate from the unobserved category $K+1$ (the future). As described in Section 2.4, we use a simple linear trend plus sparse VAR on the means of the categories in the latent space. This results in the predicted mean $\hat{\mu}^{(K+1)}$, while the predicted covariance $\hat{\Sigma}^{(K+1)}$ is simply the mean of the training covariances. Then we sample new latent vectors from $\mathcal{N}(\hat{\mu}^{(K+1)}, \hat{\Sigma}^{(K+1)})$ and feed them to the trained conditional generator together with the random noise vector. The result is generated images that condition on an area of the latent space which is not covered by any of the existing movements. Instead, this latent area is placed in a “natural” position after the sequence of successive movements. A collection of generated “future” images can be found in Figure 4.

As summarized in Table 1, the WikiArt dataset only contains large, well-defined art movements up to the 1970s, the most recent one being Minimalism. The same dataset, however, also contains smaller movements that were developed after Minimalism. In particular, Post-Minimalism and New Casualism can be considered successors of the latest of the training movements, but they contain too few images to be considered for training the CGAN. They can, however, be used to compare our “future” predictions with what actually came after the last movement in the training set. We use the same autoencoder to map each image in Post-Minimalism and New Casualism to the latent space. Then, after generating images from the predicted movement $K+1$, we compute the Euclidean distance between the means, and the MMD distance [Gretton et al., 2012], to the real small movements in the latent space. The results are summarised in Table 3 and are included in the distance matrix in Figure 5.
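The two comparison metrics can be sketched as follows; mmd2_rbf below is a simple biased estimator of the squared MMD of [Gretton et al., 2012] with a Gaussian kernel, and the kernel bandwidth and simulated latent codes are illustrative only.

import numpy as np

def mmd2_rbf(X, Y, sigma=1.0):
    # Biased estimate of the squared MMD between samples X and Y, Gaussian kernel.
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(3)
gen_future = rng.normal(0.0, 1.0, size=(200, 8))   # latent codes of generated "future" images
post_min = rng.normal(0.2, 1.0, size=(150, 8))     # latent codes of real Post-Minimalist images

print(np.linalg.norm(gen_future.mean(0) - post_min.mean(0)))  # Euclidean distance of means
print(mmd2_rbf(gen_future, post_min))                         # squared MMD estimate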

The results indicate a success: according to all metrics, the distance between the generated future and the real movements is small when compared to other between-movement distances shown in Figure 5. In particular, the generated images are closer to Post-Minimalism and New Casualism than they are to the last training movement, i.e. Minimalism. This indicates that our prediction of the future of art is not a mere copy of the most recent observed movement, but rather a jump in the right direction towards the true evolution of new artistic movements.

Post-Minimalism New Casualism
Euclid. (prediction, $K+1$)
Euclid. (Minimalism, $K$)
MMD (prediction, $K+1$)
MMD (Minimalism, $K$)
Table 3: Two types of distances between the real recent movements (columns) and either the future prediction (movement $K+1$) or the last movement in the training set (Minimalism, movement $K$).

4 Discussion

In this paper, we introduced a novel machine learning method to bring new insights to the problem of periodization in art history. Our method is able to model art movements using a simple low-dimensional latent structure and generate new images using CGANs. By reducing the problem of generating realistic images from a complicated, high-dimensional image space to that of generating from low-dimensional Gaussian distributions, we are able to perform statistical analysis, including one-step-ahead forecasting, of periods in art history by modelling the low-dimensional space with a vector autoregressive model. The images we produced resemble real art, including real art from held-out “future” movements.

References

  • Abadi et al. [2015] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
  • Chrysos et al. [2018] Grigorios G Chrysos, Jean Kossaifi, and Stefanos Zafeiriou. Robust conditional generative adversarial networks. arXiv:1805.08657, 2018.
  • Elgammal et al. [2017] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. Can: Creative adversarial networks generating “art” by learning about styles and deviating from style norms. arXiv:1706.07068, 2017.
  • Gatys et al. [2016] Leon A Gatys, Alexander S Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2414–2423, 2016.
  • Gauthier [2014] Jon Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014.
  • Ginosar et al. [2015] Shiry Ginosar, Kate Rakelly, Sarah Sachs, Brian Yin, and Alexei A Efros. A century of portraits: A visual historical record of american high school yearbooks. Extreme Imaging Workshop, ICCV, pages 1–7, 2015.
  • Goodfellow et al. [2014] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27:2672–2680, 2014.
  • Gretton et al. [2012] Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Scholkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.
  • Isola et al. [2017] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. Proceedings of The IEEE Conference on Computer Vision and Pattern Recognition, pages 5967–5976, 2017.
  • Johnson et al. [2016] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision, pages 694–711, 2016.
  • Kaufmann [2010] T Kaufmann. Periodization and its discontents. Journal of Art Historiography, 2(2), 2010.
  • Kilian and Lütkepohl [2017] Lutz Kilian and Helmut Lütkepohl. Structural Vector Autoregressive Analysis. Cambridge University Press, Cambridge, 2017.
  • Li and She [2017] Xiaopeng Li and James She. Collaborative variational autoencoder for recommender systems. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 305–314. ACM, 2017.
  • Mirza and Osindero [2014] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
  • Mogren [2016] Olof Mogren. C-RNN-GAN: Continuous recurrent neural networks with adversarial training. Constructive Machine Learning Workshop at NIPS, 2016.
  • Schapiro [1970] Meyer Schapiro. Criteria of periodization in the history of european art. New Literary History, 1(2):113–125, 1970. ISSN 00286087, 1080661X. URL http://www.jstor.org/stable/468623.
  • Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. Proceedings of the International Conference on Learning Representations, 2015.
  • Sims [1980] Christopher A Sims. Macroeconomics and reality. Econometrica: Journal of the Econometric Society, pages 1–48, 1980.
  • Vo and Soh [2018] Thanh Vinh Vo and Harold Soh. Generation meets recommendation: Proposing novel items for groups of users. In Proceedings of the 12th ACM Conference on Recommender Systems, RecSys ’18, pages 145–153. ACM, 2018.

5 Appendix

5.1 Yearbook results

The yearbook dataset introduced by [Ginosar et al., 2015] contains photographs of the faces of male and female students from US high schools. Each photo is labeled by the year it was taken, with the oldest images dating from the early 1900s and the latest from the 2010s. Pictures after a cut-off year are kept out of the training set, since they are used as ground truth when comparing to our prediction of the future.

The model summarized in Algorithm 1 is applied to this dataset. However, unlike the WikiArt example, a standard autoencoder is used to learn the latent space; the autoencoder is trained on the male images and used to predict the latent codes of the female images, and vice versa. A conditional GAN is then trained on the pairs of images and latent codes.

Although smaller than WikiArt, this dataset has the advantage of having well-defined “year” labels, as opposed to an ordinal succession of artistic movements. The number of years covered, more than a century, also provides benefits when fitting the VAR model in the latent space.

A collection of generated images is presented in Figure 6. A few qualitative comments can be made:

  • As the years progress, various changes can be noticed. Most prominently we observe the evolution of hairstyles and makeup, the diversification of race, and the increasing prevalence of smiles.

  • The model is able to capture the fact that images in older years are more uniform (e.g. same hairstyles and expressions) while more recent periods show more variety.

Figure 6: Evolution of yearbook faces over various decades. The last column contains a prediction from the model trained on all previous decades. The prediction is a mix of images from some of the predicted future distributions. Note that there is no relationship between images in the same row but different column.