1 Introduction
Generative Adversarial Networks (GANs) are currently one of the most popular lines of research in machine learning. Research on GANs mainly revolves around (a) how to achieve faster and/or better convergence (e.g., by studying different loss functions
(Nowozin et al., 2016; Arjovsky and Bottou, 2017; Mao et al., 2017) or regularization schemes (Odena et al., 2018; Miyato et al., 2018; Gulrajani et al., 2017)) and (b) how to design generators that can effectively model complicated high-dimensional distributions (e.g., by progressively training large networks (Karras et al., 2018) or by utilizing deep ResNet-type networks as generators (Brock et al., 2019)). Nevertheless, as stated in the recent in-depth comparison of many different GAN training schemes (Lucic et al., 2018), the improvements may arise from a higher computational budget and more tuning rather than from fundamental algorithmic changes.

Motivated by the aforementioned empirical finding, we take an orthogonal approach and investigate a direction that has not been explored in the literature before. We propose to model a vector-valued generator function
by a high-order multivariate polynomial of the input and to efficiently learn its parameter tensors by means of tensor decomposition. Concretely, we apply a hierarchical shared tensor decomposition to the parameter tensors, which is specifically tailored to capture interactions of latent variables across different levels of approximation. Moreover, this specific decomposition allows us to implement the polynomial approximation as a hierarchical structure (e.g., a neural network decoder) which can be used as a generator in a GAN setting. The proposed polynomial-based generator GAN (PolyGAN) provides an intuitive way of generating samples with an increasing level of detail. This is pictorially shown in Fig.
1. The result of the proposed GAN using a fourth-order polynomial approximator is shown in Fig. 1 (a), while Fig. 1 (b) shows the corresponding generation when removing the fourth-order term from the generator.

The multivariate polynomial approximation is preferable to a classic compositional neural network with non-linear activations mainly for the following two reasons:

The non-linear activation functions pose a difficulty in theoretical analysis, e.g., of convergence. Several methods, e.g., (Saxe et al., 2014; Hardt and Ma, 2017; Laurent and Brecht, 2018; Lampinen and Ganguli, 2019), focus only on linear models (with respect to the weights) in order to rigorously analyze the neural network dynamics, the residual design principle, local extrema, and generalization error, respectively. As illustrated in (Arora et al., 2019), element-wise non-linearities pose a challenge to proving convergence and constitute a “major difficulty” in an adversarial learning setting (Ji and Liang, 2018).

On the other hand, our polynomial-based analysis relies on strong evidence: the Stone–Weierstrass theorem (Stone, 1948) states that every continuous function defined on a closed interval can be uniformly approximated as closely as desired by a polynomial function. Moreover, the current practice in deep learning is to approximate smooth, continuous functions (as required by backpropagation).
We demonstrate that the proposed generator is agnostic to the GAN training scheme through extensive experimentation with three widely used GAN architectures, i.e., DCGAN (Radford et al., 2015), SNGAN (Miyato et al., 2018), and SAGAN (Zhang et al., 2019). Furthermore, we present an experimental finding that has not been reported before: we remove the activation functions of the generator (keeping only the typical one in the last layer) and show that we can still generate compelling images. We will release the code at http://anonymous.
2 Method
In this section, a novel approach for approximating generators in GANs is introduced. The notation is summarized in Section 2.1, with the derivation of the polynomial approximation and its factorization following in Section 2.2.
2.1 Preliminaries and notation
Matrices (vectors) are denoted by uppercase (lowercase) boldface letters. Tensors are denoted by calligraphic letters. The order of a tensor is the number of indices needed to address its elements; consequently, each element of an th-order tensor is addressed by indices.
The mode-n unfolding of a tensor maps the tensor to a matrix, with each tensor element mapped to a corresponding matrix element.
The mode-n vector product of a tensor with a vector reduces the order of the tensor by one and is defined element-wise as
(1) 
Furthermore, we denote .
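Since the inline math of this definition did not survive extraction, here is a small NumPy sketch of the mode-n vector product (the helper name `mode_vec_product` is illustrative, not from the paper): contracting an order-N tensor with a vector along one mode yields a tensor of order N-1.

```python
import numpy as np

def mode_vec_product(T, v, mode):
    """Contract tensor T with vector v along the given mode.

    The result has one index fewer than T, i.e., the order drops by one.
    (Illustrative helper; the paper defines this operation element-wise.)
    """
    return np.tensordot(T, v, axes=([mode], [0]))

# A third-order (2 x 3 x 4) tensor contracted along mode 0 gives a 3 x 4 matrix.
T = np.arange(24, dtype=float).reshape(2, 3, 4)
v = np.array([1.0, 2.0])
M = mode_vec_product(T, v, mode=0)
```

For mode 0 this is simply the weighted sum of the frontal slices, here `T[0] + 2*T[1]`.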
The Khatri-Rao product (i.e., the column-wise Kronecker product) of two matrices with the same number of columns yields a matrix with that same number of columns and a number of rows equal to the product of the two row counts. The Hadamard product of two matrices of the same dimensions is their element-wise product.
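Both products can be sketched in a few lines of NumPy (the helper `khatri_rao` is an illustrative implementation, not code from the paper):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (K x R) -> (I*K) x R."""
    I, R = A.shape
    K, R2 = B.shape
    assert R == R2, "factors must have the same number of columns"
    # Kronecker product of each pair of matching columns, stacked side by side.
    return np.einsum('ir,kr->ikr', A, B).reshape(I * K, R)

rng = np.random.default_rng(0)
A, B = rng.random((3, 4)), rng.random((5, 4))
KR = khatri_rao(A, B)       # shape (15, 4): each column is kron(A[:, r], B[:, r])
H = A * rng.random((3, 4))  # Hadamard product: plain element-wise multiplication
```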
The CP decomposition (Kolda and Bader, 2009; Sidiropoulos et al., 2017) factorizes a tensor into a sum of rank-one component tensors. An th-order tensor has rank one when it can be written as the outer product of vectors, where denotes the vector outer product. Consequently, the CP decomposition of an th-order tensor of a given rank is written as:
(2) 
where the factor matrices collect the vectors from the rankone components. By considering the mode unfolding of , the CP decomposition can be written in matrix form as (Kolda and Bader, 2009):
(3) 
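The matricized form in (3) can be checked numerically. The sketch below (factor names A, B, C are illustrative) builds a rank-2 third-order CP tensor and verifies that its mode-1 unfolding equals the factor matrix times the transposed Khatri-Rao product of the remaining factors, under the Kolda–Bader unfolding convention:

```python
import numpy as np

def khatri_rao(A, B):
    I, R = A.shape
    K, _ = B.shape
    return np.einsum('ir,kr->ikr', A, B).reshape(I * K, R)

rng = np.random.default_rng(0)
I, J, K, R = 3, 4, 5, 2
A, B, C = rng.random((I, R)), rng.random((J, R)), rng.random((K, R))

# Rank-R CP tensor: sum of R outer products of the factor columns.
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Mode-1 unfolding (Kolda & Bader convention: first index becomes the row
# index; of the remaining indices, the earlier one varies fastest).
X1 = X.transpose(0, 2, 1).reshape(I, J * K)

# Matricized CP identity: X_(1) = A (C ⊙ B)^T
X1_cp = A @ khatri_rao(C, B).T
```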
2.2 Highorder polynomial generators
GANs typically consist of two deep networks, namely a generator and a discriminator. The generator is a decoder (i.e., it acts as a function approximator of the sampler of the target distribution) that receives a random noise vector as input and outputs a sample. The discriminator receives both generated and real samples as input and tries to differentiate between the fake and the real ones. During training, the generator and the discriminator compete against each other until they reach an “equilibrium” (Goodfellow et al., 2014). In practice, both are modeled as deep neural networks involving compositions of linear and non-linear operators (Radford et al., 2015).
In this paper, we focus on the generator. Instead of modeling the generator as a composition of linear and non-linear functions, we assume that each generated pixel may be expanded as a high-order polynomial of the input noise (with a polynomial of sufficiently high order, we can approximate any smooth function (Stone, 1948)). That is,
(4) 
where the scalars and the set of coefficient tensors are the parameters of the polynomial expansion associated with each output of the generator (i.e., pixel). Clearly, for the first-order term the weights form a vector, for the second-order term they form a matrix, and for higher orders of approximation the weights are higher-order tensors.
By stacking the parameters for all pixels, we define the parameter tensors of the expansion. Consequently, the vector-valued generator function is expressed as:
(5) 
Intuitively, the aforementioned functional form is an expansion that captures high-order interactions between the elements of the noise vector. Furthermore, it is worth noting that (5) resembles the functional form of a truncated Maclaurin expansion of a vector-valued function. In the case of a Maclaurin expansion, the coefficient tensors represent the partial derivatives of a known function. However, in our case the generator function is unknown and hence all the parameters need to be estimated from training samples.
The number of unknown parameters in (5) grows exponentially with the order of the approximation. Consequently, the model in equation 5 is prone to overfitting and its training is computationally demanding.
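To make this cost concrete, the sketch below evaluates the expansion in (5) by contracting every coefficient tensor with the noise vector, and counts the coefficient parameters (the names `beta`, `Ws`, and the shapes are hypothetical choices for illustration):

```python
import numpy as np

def poly_generator(z, beta, Ws):
    """G(z) = beta + sum over orders n of the order-n coefficient tensor
    contracted with z along each of its n trailing modes.

    Ws[n-1] has shape (o, d, ..., d) with n copies of d; o is the output
    (pixel) dimension and d = len(z). Names are illustrative only.
    """
    out = beta.copy()
    for W in Ws:
        T = W
        while T.ndim > 1:                 # mode product with z, once per mode
            T = np.tensordot(T, z, axes=([T.ndim - 1], [0]))
        out += T
    return out

d, o = 4, 3
rng = np.random.default_rng(1)
beta = rng.random(o)
Ws = [rng.random((o,) + (d,) * n) for n in (1, 2, 3)]   # orders 1, 2, 3
z = rng.random(d)
g = poly_generator(z, beta, Ws)

# o * (d + d^2 + d^3) parameters for the coefficient tensors alone,
# i.e., exponential growth in the approximation order.
n_params = sum(W.size for W in Ws)
```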
A natural approach to reduce the number of parameters is to assume that the weights have redundancy and hence that the weight tensors are of low rank. To this end, several low-rank tensor decompositions can be employed (Kolda and Bader, 2009; Sidiropoulos et al., 2017). Let the parameter tensors admit a CP decomposition (Kolda and Bader, 2009) of multilinear rank. Then, (5) is expressed as
(6) 
which has significantly fewer parameters than (5), especially for low ranks. However, a different set of factor matrices is required for each level of approximation in equation 6, and hence the correlation of pixels at different levels of approximation is not taken into account.
To alleviate this and to further reduce the number of parameters, we propose the following factorization:
(7) 
with for . This hierarchical decomposition, which admits parameters, can be implemented with a neural network structure as a GAN decoder.
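A minimal sketch of how such a shared hierarchical factorization can be realized as a network: each layer multiplies (Hadamard product) a linear transform of the noise into the running hidden representation, raising the polynomial order in the noise by one per layer. The recursion, the factor names `Us`, `C`, `beta`, and the skip connection below are illustrative assumptions; the exact factor structure of equation 7 is not reproduced here.

```python
import numpy as np

def injection_generator(z, Us, C, beta):
    """Hierarchical Hadamard-injection decoder (illustrative sketch).

    x_1 = U_1^T z;  x_n = (U_n^T z) * x_{n-1} + x_{n-1};  G(z) = C x_N + beta.
    Every layer re-uses the same input z, so N layers realize an order-N
    polynomial in z while storing only N small factor matrices.
    """
    x = Us[0].T @ z
    for U in Us[1:]:
        x = (U.T @ z) * x + x   # Hadamard injection of the noise, plus skip
    return C @ x + beta

d, k, o = 8, 16, 32             # noise dim, hidden rank, output dim (hypothetical)
rng = np.random.default_rng(2)
Us = [rng.random((d, k)) for _ in range(3)]
C, beta = rng.random((o, k)), rng.random(o)
g = injection_generator(rng.random(d), Us, C, beta)
```

The parameter count is linear in the number of layers, in contrast to the exponential growth of the unfactorized expansion.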
The main injection building block: third-order approximation
To illustrate the proposed approach, we consider a third-order function approximation:
(8) 
The following lemmas are used to transform (9) into a network structure; their proofs are deferred to the appendix.
Lemma 1.
For the sets of matrices and , it holds that
(10) 
Lemma 2.
Let
(11) 
It holds that
(12) 
(14) 
The last equation can be implemented in a hierarchical manner with a three-layer neural network, as shown in Fig. 2.
3 Related work
Since the literature on GANs is vast, we refer the interested reader to a recent survey on the topic (Creswell et al., 2018). In what follows, we focus only on the most closely related works to ours.
The authors of (Berthelot et al., 2017) use skip connections to concatenate the noise in deeper layers of the generator. The recent BigGAN (Brock et al., 2019) performs a hierarchical composition through skip connections from the noise to multiple resolutions of the generator. In their implementation, they split the noise into one chunk per resolution and concatenate each chunk at the respective resolution.
Despite the propagation of the noise to successive layers, the aforementioned works differ substantially from ours. We introduce a well-motivated and mathematically elaborate method to achieve a more precise approximation in a power-series-like fashion. In contrast to the previously mentioned works, we also do not concatenate the noise with the feature representations; rather, we multiply the noise with the feature representations, which we mathematically justify.
The work most closely related to ours is the recently proposed StyleGAN (Karras et al., 2019), which is an improvement over the Progressive Growing of GANs (ProGAN) (Karras et al., 2018). Like ProGAN, StyleGAN is a highly engineered network that achieves compelling results on synthesized 2D images. To explain the improvements of StyleGAN over ProGAN, the authors adopt arguments from the style-transfer literature (Huang and Belongie, 2017). Nevertheless, style transfer uses features from images for conditional image translation, which is very different from unsupervised sample (image) generation. We believe that these improvements can be better explained in light of our proposed polynomial function approximation. That is, as we show in Fig. 1, the injection layers allow building a hierarchical decomposition with an increasing level of detail (rather than different styles).
In addition, the improvements in StyleGAN (Karras et al., 2019) are demonstrated with a well-tuned model, while in this paper we show that, without any complicated engineering, the injection can be applied to several generators (or any other type of decoder) and consistently improves the performance.
4 Experiments
In this section, we describe the experimental setup as well as the obtained quantitative and qualitative results. In Section 4.1, we implement exactly the theoretical analysis of Section 2 and thus experimentally solidify our methodology. In Section 4.2, we extend the framework of the previous section to real-world images. Finally, in Section 4.3, we utilize more challenging datasets and state-of-the-art network structures and establish that our framework is architecture-agnostic and consistently outperforms the baselines. An ablation study as well as further quantitative and qualitative results are deferred to the appendix.
We note that throughout our experiments we minimally modify the generators of the implemented architectures to derive our approach, as visualized in Fig. 2. Additionally, we implement the most closely related alternative to our framework: instead of using the Hadamard operator as in Fig. 2, we concatenate the noise with the feature representations at the respective layer/block. The latter approach is frequently used in the literature (Berthelot et al., 2017; Brock et al., 2019) and is referred to as “Concat” in the paper. The numbers of trainable parameters of the generators of the compared methods are reported in Table 3. Our method incurs only a minimal increase in parameters, while the concatenation increases the number of parameters substantially.
4.1 Synthetic
In this synthetic experiment we assess the polynomial-based generator on a sinusoidal function in a bounded domain. Only linear blocks, i.e., no activation functions, are used in the generator. That is, all the element-wise non-linearities (such as ReLUs) are removed. The distribution we want to match is a sinusoidal signal: the input to the generator is a noise sample and the output is a point on the signal. The generator architecture consists of fully connected (FC) layers. In Fig. 3, samples from each compared method are visualized.
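As a hedged illustration (the exact signal, domain, and layer sizes are elided above; the choices below, including the target (z, sin z) on [0, 2π], are assumptions), the synthetic target and an activation-free FC generator could be set up as:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_target(n, lo=0.0, hi=2 * np.pi):
    """Hypothetical sinusoidal target: points (z, sin z) on a bounded domain."""
    z = rng.uniform(lo, hi, size=n)
    return np.stack([z, np.sin(z)], axis=1)

def linear_generator(z, weights):
    """FC generator with *no* activation functions between the blocks.

    With plain matrix products this collapses to a single linear map, which is
    why the Hadamard injections of the proposed method are needed for such a
    generator to model a curved distribution like the sinusoid.
    """
    x = z
    for W in weights:
        x = x @ W
    return x

X = sample_target(128)                     # (128, 2) training points
Ws = [rng.standard_normal((2, 16)), rng.standard_normal((16, 2))]
fake = linear_generator(rng.standard_normal((128, 2)), Ws)
```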
4.2 Generators with Linear Blocks
We extend the linear generation described in Section 4.1 to real-world images. Since real-world data are more intricate, we capitalize on the expressivity of the recent resnet-based SNGAN (Miyato et al., 2018). In our case, we remove all intermediate activations and retain only the activation in the last layer (for normalization purposes). For every resnet-based generator, we consider each resnet block as one term of the expansion in equation 5.
In Fig. 5 we illustrate conditional generation on MNIST (LeCun et al., 1998); in Fig. 4 we perform unsupervised generation on MNIST. As can be seen, there is a significant visual difference: the “Orig” and “Concat” methods suffer from severe mode collapse and fail to retain fine details. The corresponding experiment on conditional CIFAR10 (Krizhevsky et al., 2014) is deferred to the supplementary material.
4.3 Extensive experiments on image generation
We use three different popular GAN architectures: DCGAN (Radford et al., 2015), SNGAN (Miyato et al., 2018), and SAGAN (Zhang et al., 2019). The original implementations of these GANs serve as our baselines; we then minimally modify each baseline (without adapting any hyper-parameters, for a fair comparison) to obtain our proposed approach. Algorithms 1 and 2 succinctly present the key differences of our approach compared to the traditional one (shown for SNGAN; the other architectures are analogous).
To reduce the variance often observed during GAN training (Lucic et al., 2018; Odena et al., 2018), each reported score is averaged over multiple runs with different seeds. The metrics we use are the Inception Score (IS) (Salimans et al., 2016) and the Frechet Inception Distance (FID) (Heusel et al., 2017).

4.3.1 Unsupervised image generation
In this experiment, we study the image generation problem without any labels or class information for the images. The architectures of DCGAN and resnetbased SNGAN are used for image generation in CIFAR10 (Krizhevsky et al., 2014), a widely used dataset to benchmark generative methods. CIFAR10 includes images of resolution. We use images for training and the rest for testing. Table 1 summarizes the results of the IS/FID scores of the compared methods. In all of the experiments, PolyGAN outperforms the compared methods.
4.3.2 Conditional image generation
Frequently, class information is available, which we can utilize, e.g., in the form of conditional batch normalization or class embeddings, to synthesize images conditioned on a class. We train two networks: SNGAN (Miyato et al., 2018) on CIFAR10 (Krizhevsky et al., 2014) and SAGAN (Zhang et al., 2019) on Imagenet (Russakovsky et al., 2015). SAGAN uses self-attention blocks (Wang et al., 2018) to improve the resnet-based generator. Imagenet is a large-scale dataset that includes over one million training samples plus validation images. We reshape the images to a fixed resolution.

Despite our best efforts to show that our method is both architecture- and database-agnostic, recent methods are run for hundreds of thousands or even millions of iterations until “convergence”. For SAGAN, the authors report that multiple GPUs must be utilized for weeks per training run to reach the final reported Inception Score. We report the metrics for networks run with a reduced batch size (four times smaller than the original) so that training fits on a single 16GB NVIDIA V100 GPU. Following current practice in ML, due to the lack of computational budget (Hoogeboom et al., 2019), we run SAGAN for a limited number of iterations (see Fig. 3 of the original paper for the IS during training); given the batch size difference, our training corresponds to roughly a fraction of the steps of the authors’ reported results. Each such experiment takes on the order of days to train. The FID/IS scores of our approach compared against the baseline method can be found in Table 2. In both cases, our proposed method yields a higher Inception Score and a lower FID.


5 Conclusion
In this work, we study data generation as a hierarchical regression task and introduce a decomposition that yields a function approximation in the form of a high-order polynomial. We implement the decomposition on three different popular GAN architectures and demonstrate that, with a minimal increase in network parameters, the new decomposition outperforms the original architectures and recently proposed variants by a significant margin. We additionally show that our decomposition can be used to synthesize images without any activation functions in the generator, i.e., by utilizing only linear blocks.
6 Acknowledgements
We would like to thank Takeru Miyato for the advice on implementing the baselines of our experiments. We are thankful to Nvidia for the hardware donation and Amazon web services for the cloud credits. The work of GC and SM was partially funded by an Imperial College DTA. The work of Stefanos Zafeiriou was partially funded by the EPSRC Fellowship DEFORM: Large Scale Shape Analysis of Deformable Models of Humans (EP/S010203/1) and a Google Faculty Award.
References
 Towards principled methods for training generative adversarial networks. In International Conference on Learning Representations (ICLR), Cited by: §1.
 A convergence analysis of gradient descent for deep linear neural networks. In International Conference on Learning Representations (ICLR), Cited by: 1st item.
 BEGAN: boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717. Cited by: §3, §4.
 Large scale GAN training for high fidelity natural image synthesis. In International Conference on Learning Representations (ICLR), Cited by: §C.3, §C.3, §C.3, §1, §3, §4.
 Generative adversarial networks: an overview. IEEE Signal Processing Magazine 35 (1), pp. 53–65. Cited by: §C.2, §3.
 Generative adversarial nets. In Advances in neural information processing systems (NIPS), Cited by: §2.2.
 Improved training of wasserstein gans. In Advances in neural information processing systems (NIPS), pp. 5767–5777. Cited by: §1.
 Identity matters in deep learning. In International Conference on Learning Representations (ICLR), Cited by: 1st item.
 GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Advances in neural information processing systems (NIPS), pp. 6626–6637. Cited by: §C.2, §4.3.
 Emerging convolutions for generative normalizing flows. In International Conference on Machine Learning (ICML), Cited by: §4.3.2.

 Arbitrary style transfer in real-time with adaptive instance normalization. In IEEE Proceedings of International Conference on Computer Vision (ICCV), pp. 1501–1510. Cited by: §3.
 Minimax estimation of neural net distance. In Advances in neural information processing systems (NIPS), pp. 3845–3854. Cited by: 1st item.
 Progressive growing of GANs for improved quality, stability, and variation. In International Conference on Learning Representations (ICLR), Cited by: §1, §3.

 A style-based generator architecture for generative adversarial networks. In IEEE Proceedings of International Conference on Computer Vision and Pattern Recognition (CVPR), Cited by: §3, §3.
 Tensor decompositions and applications. SIAM Review 51 (3), pp. 455–500. Cited by: §2.1, §2.1, §2.2.
 The CIFAR10 dataset. Online: http://www.cs.toronto.edu/~kriz/cifar.html. Cited by: §4.2, §4.3.1, §4.3.2, Table 1.

 An analytic theory of generalization dynamics and transfer learning in deep linear networks. In International Conference on Learning Representations (ICLR), Cited by: 1st item.
 Deep linear networks with arbitrary loss: all local minima are global. In International Conference on Machine Learning (ICML), Cited by: 1st item.
 Gradientbased learning applied to document recognition. Proceedings of the IEEE 86 (11), pp. 2278–2324. Cited by: §4.2.
 Are GANs created equal? A large-scale study. In Advances in neural information processing systems (NIPS), pp. 700–709. Cited by: §C.2, §1, §4.3.
 Least squares generative adversarial networks. In IEEE Proceedings of International Conference on Computer Vision (ICCV), pp. 2813–2821. Cited by: §1.
 Spectral normalization for generative adversarial networks. In International Conference on Learning Representations (ICLR), Cited by: 2nd item, §C.2, §1, §1, §4.2, §4.3.2, §4.3, Table 1.
 f-GAN: training generative neural samplers using variational divergence minimization. In Advances in neural information processing systems (NIPS), pp. 271–279. Cited by: §1.
 Is generator conditioning causally related to gan performance?. In International Conference on Machine Learning (ICML), Cited by: §1, §4.3.
 Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: 1st item, §1, §2.2, §4.3, Table 1.
 Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115 (3), pp. 211–252. Cited by: 3rd item, §4.3.2.
 Improved techniques for training GANs. In Advances in neural information processing systems (NIPS), pp. 2234–2242. Cited by: §C.2, §4.3.
 Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In International Conference on Learning Representations (ICLR), Cited by: 1st item.
 Tensor decomposition for signal processing and machine learning. IEEE Transactions on Signal Processing 65 (13), pp. 3551–3582. Cited by: §2.1, §2.1, §2.2.
 The generalized weierstrass approximation theorem. Mathematics Magazine 21 (5), pp. 237–254. Cited by: 2nd item, footnote 1.
 Going deeper with convolutions. In IEEE Proceedings of International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. Cited by: §C.2, §C.2.
 A note on the evaluation of generative models. In International Conference on Learning Representations (ICLR), Cited by: §C.2.
 Non-local neural networks. In IEEE Proceedings of International Conference on Computer Vision and Pattern Recognition (CVPR), pp. 7794–7803. Cited by: 3rd item, §4.3.2.
 Self-attention generative adversarial networks. In International Conference on Machine Learning (ICML), Cited by: 3rd item, §C.3, §1, §4.3.2, §4.3.
Appendix A Introduction
This is the supplementary material accompanying the submission “PolyGAN: High-Order Polynomial Generators”. The following sections are organized as follows: Appendix B contains the theoretical study and proofs, and Appendix C contains additional experimental details and results.
Appendix B Theoretical study
Additional notation: The Khatri-Rao product of a set of matrices is denoted with a compact product notation.
In this section, we will prove the following identity connecting the sets of matrices and :
(15) 
To demonstrate the idea, we first prove the special case with two matrices.
Lemma 4.
It holds that
(16) 
Proof.
Initially, both sides of the equation have dimensions of , i.e. they match. The element of the matrix product of is
(17) 
Then the element of the right hand side (rhs) of equation 16 is:
(18) 
From the definition of KhatriRao, it is straightforward to obtain the element with of as . Similarly, the element of is .
The respective element of the left hand side (lhs) of the equation is:
(19) 
In the last equation, we replace the sum in () with the equivalent sums in . ∎
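The matrix symbols in Lemma 4 are elided above, but the dimension bookkeeping in the proof matches the standard mixed-product identity (A ⊙ B)^T (C ⊙ D) = (A^T C) ∗ (B^T D), where ∗ is the Hadamard product. A numeric check, with illustrative matrix names:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (K x R) -> (I*K) x R."""
    I, R = A.shape
    K, _ = B.shape
    return np.einsum('ir,kr->ikr', A, B).reshape(I * K, R)

rng = np.random.default_rng(4)
I, K, R, S = 3, 5, 4, 6
A, C = rng.random((I, R)), rng.random((I, S))
B, D = rng.random((K, R)), rng.random((K, S))

# (A ⊙ B)^T (C ⊙ D) == (A^T C) * (B^T D)   (both sides are R x S)
lhs = khatri_rao(A, B).T @ khatri_rao(C, D)
rhs = (A.T @ C) * (B.T @ D)
```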
In a similar manner, we generalize the identity to the case of terms below.
Lemma 5.
It holds that
(20) 
Proof.
In the right hand side (rhs), we have Hadamard products of the matrix multiplications . Each multiplication results in a matrix of dimensions. Thus, the rhs is a matrix of dimensions.
The lhs is a matrix multiplication of two KhatriRao products. The first KhatriRao product has dimensions , while the second . Altogether, the lhs has dimensions.
Similarly to the previous Lemma, the element of the rhs is:
(21) 
To proceed with the lhs, it is straightforward to derive that
(22) 
where and is a recursive function of the .
However, the recursive definition of is summed in the multiplication and we obtain:
(23) 
∎
Lemma 6.
Let
(24) 
It holds that
(25) 
Proof.
(26) 
Appendix C Experiments
In Section C.1, further information on the baseline architectures is provided; in Section C.2, the implementation details are described; and in Section C.3, an ablation study is performed.
C.1 Baseline details
Three recent methods are used for the experimental validation of our proposed generators. The architectures employed as baselines are:

DCGAN (Radford et al., 2015), as implemented in https://github.com/pytorch/examples/tree/master/dcgan. This is a widely used baseline.

SNGAN (Miyato et al., 2018), as implemented in https://github.com/pfnet-research/sngan_projection. SNGAN is a recent, strongly performing GAN that introduced spectral normalization in the discriminator.

SAGAN (Zhang et al., 2019), as implemented in https://github.com/voletiv/self-attention-GAN-pytorch. This is a recent network architecture that utilizes the notion of self-attention (Wang et al., 2018) in a GAN setting, achieving impressive results on Imagenet (Russakovsky et al., 2015).
For all the aforementioned architectures, the default hyper-parameters are left unchanged. The same codebases are used for reporting the results of both the baseline and our method to avoid any discrepancies, e.g., different frameworks resulting in unfair comparisons. The source code will be released to enable the reproduction of our results.
C.2 Implementation details
We found experimentally that in deeper networks, instead of injecting the input noise directly, fully connected layer(s) may be needed to transform it before injection. These fully connected layers are henceforth referred to as global transformations of the noise. Similarly, the fully connected layers applied before each injection are referred to as local transformations.
The implementation details for each network are the following:

DCGAN: We use a global transformation followed by a ReLU non-linearity. We also use local transformations followed by ReLU non-linearities. The remaining details are the same as in the baseline model.

SNGAN: Similarly to DCGAN, we use a global transformation with a ReLU non-linearity. Each local transformation consists only of a fully connected layer.
We have not tried to further optimize the non-linearities or the layers in the local transformations. Additional engineering might yield superior results depending on the task; however, our goal is to assess the performance without additional overhead or engineering.
In the resnet-based generators, e.g., SNGAN or SAGAN, we perform the injection after each block; see the algorithms in the main paper.
In the conditional SNGAN/SAGAN, the injection is performed only on the noise, i.e., no class information is injected.
Evaluation metrics: The popular Inception Score (IS) (Salimans et al., 2016) and Frechet Inception Distance (FID) (Heusel et al., 2017) are used for the quantitative evaluation. Both scores use feature representations from a pretrained classifier. Despite their shortcomings, IS and FID are widely used (Lucic et al., 2018; Creswell et al., 2018), since alternative metrics fail for generative models (Theis et al., 2016).

The Inception Score is defined as
(30) 
where the expectation is over generated samples and the KL divergence is computed between the conditional label distribution (obtained in practice from the Inception network (Szegedy et al., 2015)) and the marginal label distribution, which is approximated over the generated samples. Following the literature (Miyato et al., 2018), we compute the Inception Score over multiple batches of generated samples per run (with several splits for each run).
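With the inline math elided above, the score can be sketched directly from its definition (a minimal illustration; in practice the class probabilities come from the pretrained Inception network):

```python
import numpy as np

def inception_score(p_yx, eps=1e-12):
    """IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ).

    p_yx: (N, num_classes) matrix of per-sample class probabilities from a
    pretrained classifier. The marginal p(y) is approximated by the mean
    over the generated samples.
    """
    p_y = p_yx.mean(axis=0, keepdims=True)
    kl = np.sum(p_yx * (np.log(p_yx + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(np.mean(kl)))

# Confident *and* diverse predictions maximize the score (here: 4 classes,
# so the maximum attainable value is 4); uniform predictions give 1.
sharp = np.eye(4)
blurred = np.full((8, 4), 0.25)
```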
The Frechet Inception Distance (FID) utilizes feature representations from a pretrained network (Szegedy et al., 2015) and assumes that the distributions of these representations are Gaussian. Denoting the representations of real images as and the generated (fake) as , FID is:
(31) 
In the experiments, we use to compute the mean and covariance of the real images and synthesized samples for .
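A hedged sketch of the FID computation, specialized to diagonal covariances so that the matrix square root reduces to element-wise operations (an illustrative simplification; the real metric fits full covariance matrices to Inception features and uses a matrix square root of their product):

```python
import numpy as np

def fid_gaussian_diag(mu_r, var_r, mu_f, var_f):
    """Frechet distance between two Gaussians with *diagonal* covariances:

        ||mu_r - mu_f||^2 + Tr(S_r + S_f - 2 (S_r S_f)^{1/2}).

    mu_r/var_r are the mean and per-dimension variance of real-image
    features, mu_f/var_f those of generated-image features. Lower is better.
    """
    mean_term = np.sum((mu_r - mu_f) ** 2)
    cov_term = np.sum(var_r + var_f - 2.0 * np.sqrt(var_r * var_f))
    return float(mean_term + cov_term)

mu_r, var_r = np.zeros(3), np.ones(3)
mu_f, var_f = np.ones(3), np.ones(3)
score = fid_gaussian_diag(mu_r, var_r, mu_f, var_f)  # only the mean term here
```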
For both scores, the original TensorFlow Inception network weights are used; the routines of tensorflow.contrib.gan.eval are called for the metric evaluation.
C.3 Ablation study
In this section we conduct an ablation study to further assess our method. The experiments are based on SNGAN, since most recent methods use a similar generator (Zhang et al., 2019; Brock et al., 2019). Unless explicitly mentioned otherwise, the experiments in this section are on SNGAN trained on CIFAR10 for unsupervised image generation.
In the first experiment we evaluate the effect of the additional fully connected (FC) layer on z. Namely, we report two alternatives: (i) a linear global transformation (‘Ours-linear-global’) and (ii) a global transformation with a ReLU non-linearity (‘Ours-ReLU-global’).
SNGAN on CIFAR10  

Model  IS  FID 
Ours-linear-global  
Ours-ReLU-global 
Based on the previous experiment, we insert the same global transformation into the original SNGAN, i.e., we insert a fully connected layer with a ReLU activation at the input of the generator. The original model is denoted ‘Original’, while the alternative with the added global transformation is denoted ‘Original-ReLU-global’.
SNGAN on CIFAR10  

Model  IS  FID 
Original  
Original-ReLU-global  
Ours-ReLU-global 
The recent BigGAN of (Brock et al., 2019) uses a block-wise injection. However, in contrast to our method, in BigGAN the original noise vector is split into different non-overlapping parts that are then injected. That is, if the noise is injected into two blocks, the noise is split into three chunks: one used at the input, one at the first injection, and one at the second.
We scrutinize this splitting against our method; we split the noise into vectors of equal size for the injections. The injection with splitting is denoted ‘Inject-split’ below. As the experimental results demonstrate, the naive splitting deteriorates the scores on this task. However, we have not tried optimizing the dimensionality of each chunk or the conditional setting (used in (Brock et al., 2019)).
SNGAN on CIFAR10  

Model  IS  FID 
Original  
Inject-split  
PolyGAN 