Universality Theorems for Generative Models

Valentin Khrulkov et al. (Skoltech), 05/27/2019

Despite the fact that generative models are extremely successful in practice, the theory underlying this phenomenon is only starting to catch up. In this work we address the question of the universality of generative models: is it true that neural networks can approximate any data manifold arbitrarily well? We provide a positive answer to this question and show that under mild assumptions on the activation function one can always find a feedforward neural network that maps the latent space onto a set located within a specified Hausdorff distance from the desired data manifold. We also prove similar theorems for multiclass generative models and for cycle generative models, which are trained to map samples from one manifold to another and vice versa.


1 Introduction

Generative models such as Generative Adversarial Networks (GANs) are widely used for tasks such as image synthesis, semi-supervised learning, and domain adaptation (Brock et al., 2018; Radford et al., 2015; Zhang et al., 2017; Isola et al., 2017). Such generative models are trained to perform a mapping from a latent space of small dimension to some specified data manifold, typically represented by a dataset of natural images. Despite their success and excellent performance, the theory behind such models is not yet well understood. A recent survey of open questions about generative models (Odena, 2019) presents, among others, the following question: what sorts of distributions can GANs model? In particular, what does it even mean for a GAN to model a distribution?

To answer these questions we adopt the following geometric approach, which is very amenable to precise mathematical analysis. Under the Manifold Hypothesis (Goodfellow et al., 2016), data comes from a certain data manifold. The goal of a generator network is then to reproduce this data manifold as closely as possible by mapping the latent space into the ambient space of the data manifold. This intuitive understanding can be written in a more concrete manner as follows. Suppose that we are given a latent space $\mathcal{Z}$, a feedforward neural network $G_\theta$ as a generator, and some target data manifold $\mathcal{M}$. In order for the manifold to be generated by $G_\theta$ we require that the image of $\mathcal{Z}$ under $G_\theta$ is sufficiently close to $\mathcal{M}$, more specifically that the Hausdorff distance between $G_\theta(\mathcal{Z})$ and $\mathcal{M}$ is less than a given parameter $\varepsilon$. The Hausdorff distance is a well-defined metric on the space of all compact subsets of Euclidean space and hence is equal to zero if and only if $G_\theta(\mathcal{Z}) = \mathcal{M}$, i.e., in the case of precise replication of the data manifold. Thus, the question at hand can be formulated as follows: is it possible to approximate, in the sense of the Hausdorff distance, an arbitrary compact (connected) manifold using standard feedforward neural networks? By combining techniques from Riemannian geometry with well-known properties of neural networks we provide a positive answer to this question. We also show that the condition of $\mathcal{M}$ being smooth is not necessary and the results are also valid for merely topological manifolds.

We further extend the discussed geometric approach to the theoretical analysis of many practical situations, for instance, to the case of data manifolds which consist of multiple disjoint manifolds and correspond to multiclass datasets, and to cycle generative models (Zhu et al., 2017; Isola et al., 2017), which for two manifolds learn an approximately invertible mapping from one manifold to another. For the latter case we prove a somewhat surprising result: for any given pair of data manifolds of the same dimension, one can always train a pair of neural networks which are approximately inverses of one another and map the first manifold almost onto the second one, and vice versa. In this work, we ignore specifics of the training algorithm (for instance, which loss function is used) and merely focus on understanding the generative capabilities of neural networks.

2 Related work

A large body of work is devoted to analyzing the universality of neural networks. Classical works on universality (Cybenko, 1989; Hornik, 1991; Haykin, 1994; Hassoun et al., 1995) prove that neural networks with one hidden layer are universal approximators and can approximate arbitrary continuous functions on compact sets. Similar results also hold for deep wide networks with ReLU nonlinearities (Lu et al., 2017), convolutional neural networks (Cohen and Shashua, 2016) and recurrent neural networks (Khrulkov et al., 2019).

GANs have mostly been studied from the point of view of convergence properties (Feizi et al., 2017; Balduzzi et al., 2018; Lucic et al., 2018). Several works focus on the relation between geometric properties of datasets and the behavior of GANs. In order to analyze which characteristics of datasets lead to better convergence, synthetic datasets were studied in (Lucic et al., 2018). The case of a disconnected data manifold (similar in spirit to our analysis in Section 5) was analyzed in (Khayatkhoei et al., 2018). A metric for analyzing the quality of GANs based on comparing geometric properties of the original and generated datasets was proposed in (Khrulkov and Oseledets, 2018).

3 Notation and assumptions

We will denote the $m$-cube $[0,1]^m$ by $I^m$. We will often use an approximation of a continuous function by a neural network; in that case, the "network version" of the function will be indicated by a subscript $\theta$ or $\phi$ denoting a collection of trainable parameters, e.g., $G_\theta$ or $F_\phi$.

In this work, we deal with data manifolds. We assume that all these manifolds are smooth, orientable, compact and connected unless stated otherwise. We also assume that all the manifolds are embedded into a Euclidean space $\mathbb{R}^D$ and inherit the Riemannian metric tensor from it. By smooth we will mean infinitely differentiable manifolds (functions), i.e., of class $C^\infty$; all the results, however, stay true if we consider class $C^k$ for some finite $k$. As a norm of a function $f$ defined on some compact set $X$ we will use the sup-norm $\|f\| = \max_{x \in X} \|f(x)\|$, and for vectors we use the Euclidean norm.

We will often make use of a natural geometric measure on a manifold, which can be constructed by integrating the volume form associated with the Riemannian metric tensor over the corresponding set.

4 Background

Let us first present some background material necessary for understanding the proofs. We will freely use the term manifold in the precise mathematical sense. Due to limited space, we do not provide the definition and refer the reader to thorough introductions such as (Lee, 2013; Sakai, 1996).

The first important construction used in the proofs is the exponential map.

4.1 Exponential map

Let $M$ be a Riemannian manifold endowed with a metric tensor $g$. Recall that geodesics are locally length-minimizing curves, defined as solutions of a certain second-order differential equation. An important property of geodesics is that the length of the velocity vector is preserved along the curve, i.e., for a geodesic $\gamma(t)$ we have

$\|\dot{\gamma}(t)\|_g = \mathrm{const}. \quad (1)$

The exponential map is defined in the following manner. Let $p \in M$ and $v \in T_p M$, and suppose that there exists a geodesic $\gamma: [0, 1] \to M$ with $\gamma(0) = p$ and $\dot{\gamma}(0) = v$. Then the point $\gamma(1)$ is denoted by $\exp_p(v)$ and called the exponential of the tangent vector $v$. The geodesic $\gamma$ can then be written as $\gamma(t) = \exp_p(tv)$. While a priori the exponential map is defined only if $\|v\|$ is small enough, for a certain class of manifolds it is globally defined. Namely, if a manifold is geodesically complete, then $\exp_p(v)$ is defined for all $p \in M$ and $v \in T_p M$. Our proof is based on the following classical result.

Theorem 4.1 (Hopf-Rinow).

Let $M$ be a connected Riemannian manifold. Then the following statements are equivalent.

  • The closed and bounded subsets of $M$ are compact;

  • $M$ is a complete metric space;

  • $M$ is geodesically complete.

Furthermore, any of the above implies that any two points $p$ and $q$ in $M$ can be connected by a minimal (length-minimizing) geodesic.

In particular, this implies that any compact connected manifold is geodesically complete.
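For instance, on the unit sphere $S^m \subset \mathbb{R}^{m+1}$ with the induced metric the exponential map has a closed form: for $p \in S^m$ and a nonzero tangent vector $v \in T_p S^m$ (so that $\langle p, v \rangle = 0$),

$\exp_p(v) = \cos(\|v\|)\,p + \sin(\|v\|)\,\frac{v}{\|v\|},$

so geodesics are great circles, and the geodesic $t \mapsto \exp_p(tv)$, $t \in [0, 1]$, has length exactly $\|v\|$, in agreement with Eq. 1; since the diameter of $S^m$ is $\pi$, the ball of radius $\pi$ in $T_p S^m$ already covers the whole sphere under $\exp_p$.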

4.2 Hausdorff distance

The Hausdorff distance between two sets $X, Y \subset \mathbb{R}^D$ is defined as follows:

$d_H(X, Y) = \max\{\sup_{x \in X} d(x, Y),\ \sup_{y \in Y} d(y, X)\}, \quad (2)$

where

$d(x, Y) = \inf_{y \in Y} \|x - y\|. \quad (3)$

It is well-known that the set of all compact subsets of $\mathbb{R}^D$ endowed with the Hausdorff distance becomes a complete metric space (Henrikson, 1999).
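To make Eqs. 2 and 3 concrete, here is a minimal numerical sketch (ours, not from the paper) that estimates the Hausdorff distance between two finite point clouds; the function name and the sample sets are illustrative.

```python
import numpy as np

def hausdorff_distance(X, Y):
    """Hausdorff distance between finite point clouds X (n, D) and Y (k, D),
    following Eqs. (2)-(3): the maximum of the two directed distances."""
    dists = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)  # pairwise distances, (n, k)
    d_xy = dists.min(axis=1).max()  # sup_{x in X} d(x, Y)
    d_yx = dists.min(axis=0).max()  # sup_{y in Y} d(y, X)
    return max(d_xy, d_yx)

t = np.linspace(0.0, 2 * np.pi, 500)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)          # samples from the unit circle
ellipse = np.stack([1.2 * np.cos(t), np.sin(t)], axis=1)   # samples from a stretched copy
print(hausdorff_distance(circle, ellipse))                 # approximately 0.2
```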

4.3 Universal Approximation Property of Neural Networks

In this paper we heavily rely on the following classical results on neural networks (Cybenko, 1989; Hornik, 1991).

Theorem 4.2 (Universal Approximation Theorem).

Let $\sigma$ be a nonconstant, bounded and continuous function. Then for any continuous function $f: [0,1]^m \to \mathbb{R}^D$ and any $\varepsilon > 0$ there exists a fully connected neural network $f_\theta$ with the activation function $\sigma$ and one hidden layer, such that $\|f - f_\theta\| < \varepsilon$.

In our analysis we restrict ourselves to the case of neural networks of the form considered in Theorem 4.2. However, all the results hold for any other learnable parametric family of maps that is dense in the space of continuous functions.
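As a small illustration of Theorem 4.2 (a sketch under our own choices of width, optimizer and target function, not the constructive proof), the following snippet fits a one-hidden-layer network with a sigmoid activation to a continuous function on $[0, 1]$ and reports an estimate of the sup-norm error.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
f = lambda x: torch.sin(4 * torch.pi * x)  # target continuous function on [0, 1]

# One hidden layer; sigmoid is nonconstant, bounded and continuous, as Theorem 4.2 requires.
net = nn.Sequential(nn.Linear(1, 256), nn.Sigmoid(), nn.Linear(256, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

x = torch.rand(2048, 1)
for step in range(2000):
    opt.zero_grad()
    ((net(x) - f(x)) ** 2).mean().backward()
    opt.step()

grid = torch.linspace(0, 1, 10_000).unsqueeze(1)  # dense grid for the sup-norm estimate
with torch.no_grad():
    print((net(grid) - f(grid)).abs().max().item())
```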

5 Geometric Universality Theorem

In this section we prove that for an arbitrary manifold it is possible to construct a neural network mapping the cube $I^m$ approximately onto this manifold. Our analysis is based on the following lemma. In fact, this is a particular case of a much stronger theorem valid even for topological manifolds (without smooth structure), for which we provide a discussion and a reference further in the text. We, however, believe that this particular case is instructive and provides an intuition on what the generative mappings may look like.

Lemma 5.1.

Let $\mathcal{M}$ be a compact connected $m$-dimensional manifold. Then there exists a smooth map $G: I^m \to \mathcal{M}$ such that $G(I^m) = \mathcal{M}$.

Proof.

We will construct this map explicitly. Choose an arbitrary point $p \in \mathcal{M}$, and consider the exponential map $\exp_p: T_p\mathcal{M} \to \mathcal{M}$. Since $\mathcal{M}$ is compact and connected, it is geodesically complete and the Hopf-Rinow theorem applies. Thus, this map is defined on all of $T_p\mathcal{M} \cong \mathbb{R}^m$ and is surjective.

We now need to show that we can choose a compact subset of $T_p\mathcal{M}$ such that the restriction of $\exp_p$ to this subset is still surjective. To do this observe that since $\mathcal{M}$ is compact it has finite diameter, namely $d(x, y) \le r$ for all $x, y \in \mathcal{M}$ and some finite constant $r$. Here $d(\cdot, \cdot)$ is the Riemannian distance, defined as the arc length of a minimizing geodesic. From Eq. 1 it instantly follows that for the (Euclidean) ball $B_r = \{v \in T_p\mathcal{M} : \|v\| \le r\}$ we have $\exp_p(B_r) = \mathcal{M}$. Indeed, since any point $q \in \mathcal{M}$ is within distance $r$ from $p$, there exists a minimal geodesic connecting these points with length bounded by $r$. But for any vector $v$, from Eq. 1 we obtain that the length of the corresponding geodesic connecting $p$ and $\exp_p(v)$ is exactly $\|v\|$, which proves the claim. The statement of the lemma then follows after selecting an arbitrary cube containing $B_r$ and an appropriate rescaling. ∎

Figure 1: Visualization of the construction in the proof of Theorem 5.1. The latent space $I^m$ is mapped onto the manifold $\mathcal{M}$ via the function $G$. This mapping is then approximated by a neural network $G_\theta$, which in turn maps $I^m$ onto the compact set $\mathcal{M}_\theta = G_\theta(I^m)$. If $G_\theta$ is sufficiently close to $G$, then so are $\mathcal{M}_\theta$ and $\mathcal{M}$.
Theorem 5.1 (Geometric Universality of Generative Models).

Let $\mathcal{M} \subset \mathbb{R}^D$ be a compact connected $m$-dimensional manifold. For every nonconstant, bounded, continuous activation function $\sigma$ and every $\varepsilon > 0$ there exists a fully connected neural network $G_\theta$ with the activation function $\sigma$, such that $d_H(\mathcal{M}_\theta, \mathcal{M}) < \varepsilon$. Here $\mathcal{M}_\theta = G_\theta(I^m)$.

Proof.

Choose an arbitrary $G$ as in Lemma 5.1. By the standard universal approximation theorem for neural networks we can find a neural network $G_\theta$ such that $\|G - G_\theta\| < \varepsilon$. The statement of the theorem then follows from the definition of the Hausdorff distance. Indeed, by surjectivity of $G$, every point $x = G(z) \in \mathcal{M}$ is within distance $\varepsilon$ from the point $G_\theta(z) \in \mathcal{M}_\theta$, and thus $d(x, \mathcal{M}_\theta) < \varepsilon$ as in Eq. 3; conversely, every point $G_\theta(z) \in \mathcal{M}_\theta$ is within distance $\varepsilon$ from $G(z) \in \mathcal{M}$. See Fig. 1 for an illustration of the proof. ∎
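The construction of Theorem 5.1 can be played out numerically in the simplest case $\mathcal{M} = S^1$; the sketch below (ours, with arbitrary width and training hyperparameters) approximates a surjective map $G: [0, 1] \to S^1$ with a one-hidden-layer network $G_\theta$ and then estimates $d_H(G_\theta([0, 1]), S^1)$ on samples.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Surjective smooth map G: [0, 1] -> S^1, playing the role of G from Lemma 5.1.
G = lambda t: torch.cat([torch.cos(2 * torch.pi * t), torch.sin(2 * torch.pi * t)], dim=1)

G_theta = nn.Sequential(nn.Linear(1, 512), nn.Sigmoid(), nn.Linear(512, 2))
opt = torch.optim.Adam(G_theta.parameters(), lr=1e-2)

t = torch.rand(4096, 1)
for step in range(3000):
    opt.zero_grad()
    ((G_theta(t) - G(t)) ** 2).mean().backward()
    opt.step()

with torch.no_grad():
    grid = torch.linspace(0, 1, 2000).unsqueeze(1)
    generated, target = G_theta(grid), G(grid)
    d = torch.cdist(generated, target)  # pairwise distances between the two sample sets
    d_H = torch.maximum(d.min(dim=1).values.max(), d.min(dim=0).values.max())
    print(d_H.item())  # small whenever the sup-norm error ||G - G_theta|| is small
```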

Previously we have noted that our Lemma 5.1 is a particular case of a much stronger result (Brown, 1962). Namely, it can be stated as follows.

Lemma 5.2 (Brown’s mapping theorem).

Let $\mathcal{M}$ be a compact connected $m$-dimensional topological manifold. Then there exists a continuous map $G: I^m \to \mathcal{M}$ such that $G(I^m) = \mathcal{M}$.

Based on this lemma, Theorem 5.1 can be generalized to the case of topological data manifolds.

Corollary 5.1 (Geometric Universality for Topological Manifolds).

Theorem 5.1 holds true for $\mathcal{M}$ being an arbitrary compact connected topological manifold.

Multiclass case

The previous theorem considers only the case of a single data manifold. However, in practice a single dataset commonly contains samples from multiple data manifolds (e.g., MNIST digits, ImageNet classes). Since we can assume that these manifolds do not intersect, it is impossible to map a connected latent space surjectively onto this disconnected joint data manifold. To counteract this effect we can allow small pieces of the latent space to map into thin "tunnels" connecting these manifolds. This can be made precise by the following statement.

Theorem 5.2 (Geometric Universality for Multiclass Manifolds).

Let $\mathcal{M} = \bigcup_{i=1}^{K} \mathcal{M}_i$ be a "multiclass" data manifold, with each $\mathcal{M}_i$ being a compact connected $m$-dimensional topological manifold. Then for every $\varepsilon > 0$ and every nonconstant, bounded, continuous activation function $\sigma$ there exists a fully connected neural network $G_\theta$ with the activation function $\sigma$ such that the following property holds.

  • There exists a collection of disjoint compact subsets $Z_1, \dots, Z_K$ of $I^m$ such that

    $d_H(G_\theta(Z_i), \mathcal{M}_i) < \varepsilon \quad \text{for all } i = 1, \dots, K. \quad (4)$
Proof.

Similar to the proof of Theorem 5.1, we will apply the universal approximation theorem to a certain function constructed with the help of Lemma 5.2. To construct such a function let us select the sets $Z_i$ in the following way. We divide the interval $[0, 1]$ uniformly into $2K - 1$ intervals $[a_j, a_{j+1}]$, with the length of each interval being $\frac{1}{2K-1}$ and $a_j = \frac{j}{2K-1}$. We propose to use the following $Z_i$, satisfying the conditions of the theorem:

$Z_i = [a_{2i-2}, a_{2i-1}] \times [0, 1]^{m-1}, \quad i = 1, \dots, K. \quad (5)$

The intuition is very simple: we chop the cube along the first axis into smaller boxes, and remove some space between them. On each of the chunks we can now apply Lemma 5.2 for the corresponding manifold $\mathcal{M}_i$, obtaining a collection of maps $G^{(i)}: Z_i \to \mathcal{M}_i$. To construct a global continuous map $G: I^m \to \mathbb{R}^D$ we can now simply linearly interpolate each of the maps $G^{(i)}$ from the right boundary of one box to the left boundary of the neighboring one. By applying the universal approximation theorem to this function $G$, we finalize the proof. ∎
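For intuition, the chopping construction from the proof can be written down explicitly in the simplest setting $m = 1$, $K = 2$; the sketch below (our illustration, with two circles standing in for the classes) maps the chunks $[0, 1/3]$ and $[2/3, 1]$ onto two disjoint circles and linearly interpolates over the gap, producing the thin "tunnel".

```python
import numpy as np

def circle(t, center_x):
    """A circle of radius 1 centered at (center_x, 0), parameterized by t in [0, 1]."""
    return np.stack([center_x + np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=-1)

def G(z):
    """Continuous map [0, 1] -> R^2 in the spirit of Eq. (5) with K = 2:
    Z_1 = [0, 1/3] covers the first circle, Z_2 = [2/3, 1] the second,
    and the middle third is a linear 'tunnel' between the two."""
    z = np.atleast_1d(z)
    out = np.empty((len(z), 2))
    left, right = z <= 1 / 3, z >= 2 / 3
    middle = ~left & ~right
    out[left] = circle(3 * z[left], -2.0)
    out[right] = circle(3 * (z[right] - 2 / 3), +2.0)
    a = circle(np.array([1.0]), -2.0)[0]   # endpoint of the first chunk's image
    b = circle(np.array([0.0]), +2.0)[0]   # starting point of the second chunk's image
    s = 3 * (z[middle] - 1 / 3)
    out[middle] = (1 - s)[:, None] * a + s[:, None] * b
    return out

points = G(np.linspace(0, 1, 2000))  # both circles plus a thin connecting segment
```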

6 Invariance property of deep expanding networks

Our previous results state that it is possible to approximate any given manifold up to arbitrary accuracy. However, the neural networks used in the proofs are shallow (they have one hidden layer) and are not practical. In this section, we study what the set $G_\theta(\mathcal{Z})$ looks like for more practical networks consisting of a series of fully connected and convolutional layers. We will show a somewhat surprising result that under certain mild conditions such networks cannot significantly transform the latent space; more precisely, the generated set will be diffeomorphic to the open unit cube $(0, 1)^m$. In fact, our results will be more general and will demonstrate that this property holds for arbitrary latent spaces, that is, if $z$ is sampled from some manifold $\mathcal{Z}$, then $G_\theta(\mathcal{Z})$ will be diffeomorphic to $\mathcal{Z}$.

6.1 Reminder on embeddings

Recall the following definition.

Definition 6.1 (Smooth embedding).

Let $M$ and $N$ be smooth manifolds and $f: M \to N$ be a smooth map. Then $f$ is called an embedding if the following conditions hold.

  • The derivative of $f$ is everywhere injective;

  • $f$ is an injective, continuous and open map (i.e., it maps open sets to open sets of its image).

The main property of a smooth embedding is the following (Lee, 2013).

Proposition 6.1.

The domain of an embedding is diffeomorphic to its image.

We will show that certain neural networks commonly used for generative models are in fact smooth embeddings, and thus their image is diffeomorphic to the domain (latent space). We analyze the two most commonly used layers in such models: fully connected and convolutional layers (both standard and transposed). For the sake of simplicity we assume that convolutions are circularly padded, i.e., the input is treated as a two-dimensional torus; in this case, when the offset calls for a pixel that is off the left end of the image, the layer "wraps around" to take it from the opposite end. We consider arbitrary stride, in order to allow a layer to increase the spatial size of a feature tensor, as is commonly done.

Let us fix the nonlinearity $\sigma$ to be an arbitrary smooth monotone function without saddle points (i.e., $\sigma'(x) \neq 0$ for all $x$). Then the following two lemmas hold. Let us first assume that the latent space is the Euclidean space $\mathbb{R}^m$ (or, equivalently, an open unit cube $(0, 1)^m$).

Lemma 6.1.

Let $f_\theta(x) = \sigma(Wx + b)$ with $\theta = (W, b)$, $W \in \mathbb{R}^{n \times m}$, $b \in \mathbb{R}^n$, be a fully connected layer. If $n \ge m$ then $f_\theta$ is a smooth embedding for all $\theta$ except for a set of measure zero. We will call such a layer an expanding fully connected layer.

Proof.

Indeed, for $W$ of full rank such a map is injective. It is open (onto its image) as a composition of an injective linear map and of $\sigma$, which is open since it is a continuous monotone function. Then for all matrices $W$ of full rank (which form a set of full measure in the space of matrices of size $n \times m$) the derivative is injective by a simple application of the chain rule and the fact that $\sigma'(x) \neq 0$. ∎
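The genericity claim of Lemma 6.1 is easy to check numerically; the sketch below (ours) draws a random expanding fully connected layer with a tanh nonlinearity (smooth, monotone, with nonvanishing derivative) and verifies that its Jacobian at a random point has full column rank.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 32                       # expanding: output dimension n >= input dimension m
W = rng.standard_normal((n, m))    # a generic W has full column rank (a measure-one event)
b = rng.standard_normal(n)
x = rng.standard_normal(m)

# Chain rule: J = diag(sigma'(Wx + b)) @ W with sigma = tanh, sigma'(u) = 1 - tanh(u)^2 > 0,
# so rank(J) = rank(W) = m for generic W.
J = np.diag(1.0 - np.tanh(W @ x + b) ** 2) @ W
print(np.linalg.matrix_rank(W), np.linalg.matrix_rank(J))  # both print m = 8
```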

Let us now deal with the convolutional layers.

Lemma 6.2.

Let $X$ be a 3rd-order tensor representing a feature tensor of spatial size $h \times w$ with $c$ channels. Suppose that $f_\theta$ is a standard convolutional or transposed convolutional layer with an arbitrary stride, parameterized by a kernel $\theta$, such that $f_\theta(X)$ is a feature tensor of spatial size $h' \times w'$ with $c'$ channels. If $h'w'c' \ge hwc$ then $f_\theta$ is a smooth embedding for all $\theta$ except for a set of measure zero. We will call such a layer an expanding convolutional layer.

Proof.

The only non-trivial part of the proof is showing injectivity of the underlying linear map for all $\theta$ but a set of measure zero. Note that if $h'w'c' \ge hwc$ then the matrix representing the linear map performing the operation is vertical (it has at least as many rows as columns), hence it is sufficient to show that generically it is of full rank. In the case of the transposed convolution, we can transpose this matrix and analyze the corresponding convolutional layer.

Stride one

Let us start with the most important case of stride one, in which case $h' = h$ and $w' = w$. Denote the matrix of the linear map underlying $f_\theta$ by $A(\theta)$, that is, $\mathrm{vec}(\mathrm{conv}_\theta(X)) = A(\theta)\,\mathrm{vec}(X)$, where $\mathrm{vec}$ denotes the vectorization operator and $\mathrm{conv}_\theta$ is the convolution before the nonlinearity. We need to show that for all $\theta$ but a set of measure zero this matrix is of full rank.

To prove the lemma we use the following simple argument coming from algebraic geometry. The condition of the matrix $A(\theta)$ not being of full rank is algebraic (i.e., is given by polynomial equations) in the space of parameters $\theta$. Indeed, the operation of constructing $A(\theta)$ based on $\theta$ is linear with respect to $\theta$, and the condition of not being of full rank in the space of all matrices is specified by a set of polynomial equations (namely, determinants of all maximal square submatrices should be zero). Thus, we have shown that the set $S = \{\theta : A(\theta) \text{ is not of full rank}\}$ is algebraic; and by the well-known property of algebraic sets there are two options: either $S$ coincides with the entire parameter space or $\mu(S) = 0$ (with $\mu$ being the standard Lebesgue measure). To show that the former does not hold, we provide a concrete example of a weight $\theta$ not in $S$. Namely, consider the following $\theta$:

$\theta_{ij\alpha\beta} = \delta_{i0}\,\delta_{j0}\,\delta_{\alpha\beta}, \quad (6)$

where $i, j$ index the spatial positions of the kernel, $\alpha$ the input channels and $\beta$ the output channels. Here $\delta$ denotes the Kronecker delta symbol: $\delta_{\alpha\beta} = 1$ if $\alpha = \beta$ and $\delta_{\alpha\beta} = 0$ otherwise.

We observe that the corresponding matrix has a particularly simple structure: up to a permutation of rows it is the identity matrix stacked on top of zero rows, which trivially is of full (column) rank.

Arbitrary stride

The same argument as before applies. Notice that selecting a bigger stride corresponds to selecting specific rows from the matrix obtained for stride one. By using the same weight tensor as in the case of stride one, we find that the obtained matrix contains $hwc$ distinct rows of the identity matrix, possibly followed by zero rows, and thus also has full rank. ∎
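The delta-kernel argument around Eq. 6 can also be checked directly; the sketch below (ours, for circular padding and stride one, with small illustrative sizes) assembles the matrix $A(\theta)$ of a channel-expanding convolution and confirms full column rank both for the delta kernel and for a random kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
h = w = 5                  # spatial size (circular padding, stride one, so it is preserved)
c, c_out = 2, 3            # channel-expanding: c_out >= c
k = 3                      # kernel size

def conv_matrix(theta):
    """Matrix A(theta) of the linear map vec(X) -> vec(conv_theta(X)) for a circular,
    stride-one convolution; theta has shape (k, k, c, c_out)."""
    A = np.zeros((h * w * c_out, h * w * c))
    for beta in range(c_out):
        for y in range(h):
            for x in range(w):
                row = beta * h * w + y * w + x
                for alpha in range(c):
                    for i in range(k):
                        for j in range(k):
                            yy, xx = (y + i) % h, (x + j) % w  # circular ("wrap around") padding
                            A[row, alpha * h * w + yy * w + xx] += theta[i, j, alpha, beta]
    return A

# Delta kernel of Eq. (6): output channel beta simply copies input channel alpha = beta.
delta = np.zeros((k, k, c, c_out))
for alpha in range(c):
    delta[0, 0, alpha, alpha] = 1.0

for theta in (delta, rng.standard_normal((k, k, c, c_out))):
    print(np.linalg.matrix_rank(conv_matrix(theta)))  # h * w * c = 50 in both cases
```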

After these preliminary results, we are ready to extend them to the case of an arbitrary latent space. Namely, suppose that $z$ is sampled from an arbitrary manifold $\mathcal{Z} \subset \mathbb{R}^m$. We use the following simple lemma.

Lemma 6.3.

Let $f: \mathbb{R}^m \to \mathbb{R}^n$ be an arbitrary smooth embedding. Let $\mathcal{Z} \subset \mathbb{R}^m$ be a smooth embedded submanifold. Then $f|_{\mathcal{Z}}$ is also a smooth embedding.

Proof.

The proof follows from the definition. Indeed, for every point $z \in \mathcal{Z}$ we have $T_z\mathcal{Z} \subset T_z\mathbb{R}^m$, and the restriction of the derivative of $f$ to this subspace is also injective. Note that $f|_{\mathcal{Z}}$ is also an injective and open map. ∎

By combining Proposition 6.1 with Lemmas 6.1, 6.2 and 6.3 we obtain the following result.

Theorem 6.1.

Let $G_\theta$ be an arbitrary neural network consisting of expanding fully connected layers and expanding convolutional layers, and let $\mathcal{Z} \subset \mathbb{R}^m$ be a smooth embedded manifold serving as the latent space. Denote $\mathcal{M}_\theta = G_\theta(\mathcal{Z})$. Then for all parameters $\theta$ but a set of measure zero the following properties hold:

  • $\mathcal{M}_\theta$ is a smooth embedded manifold;

  • $\mathcal{M}_\theta$ is diffeomorphic to $\mathcal{Z}$.

Proof.

The theorem follows from Proposition 6.1, Lemmas 6.1, 6.2 and 6.3, and the fact that a composition of embeddings is also an embedding. ∎

For many datasets used in practice, it seems very unlikely that the data comes from manifolds with very simple topological properties, as even basic visual patterns may possess quite non-trivial topological structure (Ghrist, 2008). Thus, at first sight, Theorem 6.1 seems to suggest that using only expanding architectures it is impossible to approximate an arbitrary data manifold with the latent space being $\mathbb{R}^m$ (or an open unit cube). Such models are, however, extremely successful in practice. While we do not provide a precise theorem for this case, based on the discussion in Section 7 we hypothesize that it may be possible to approximate an arbitrary compact data manifold using expanding networks up to a subset of arbitrarily small measure, and thus the limitations imposed by Theorem 6.1 are negligible in practice.

7 Cycle generative models

Another popular class of models, used for instance for unsupervised image-to-image translation (Zhu et al., 2017; Isola et al., 2017), learns a mapping from one data manifold to another along with its inverse. We specify this task as follows. Given two data manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$, the goal is to train two neural networks $F_\theta$ and $H_\phi$ such that $F_\theta$ is a diffeomorphism of $\mathcal{M}_1$ and $\mathcal{M}_2$, with $H_\phi$ being the inverse of $F_\theta$.

First of all, let us note that we do not expect such $F_\theta$ and $H_\phi$ to exist for two general manifolds, since two manifolds with different topological properties cannot be diffeomorphic. However, based on Theorem 6.1 we expect that the desired properties may hold approximately. Let us start with lemmas ensuring the existence of functions $F$ and $H$ which map $\mathcal{M}_1$ approximately onto $\mathcal{M}_2$ and $\mathcal{M}_2$ approximately onto $\mathcal{M}_1$, correspondingly. In this section, we again consider only the case of smooth data manifolds.

We first recall the following result (Sakai, 1996), which is proved in a manner very similar to Lemma 5.1.

Lemma 7.1.

Every compact connected $m$-dimensional manifold $\mathcal{M}$ contains an open dense subset diffeomorphic to the open unit ball in $\mathbb{R}^m$. Moreover, the complement of this subset has measure zero in $\mathcal{M}$.

We use this result to obtain the following lemma.

Lemma 7.2.

For every $\varepsilon > 0$ there exist compact subsets $X_1 \subset \mathcal{M}_1$ and $X_2 \subset \mathcal{M}_2$ such that $\mu(\mathcal{M}_1 \setminus X_1) < \varepsilon$ and $\mu(\mathcal{M}_2 \setminus X_2) < \varepsilon$, and $X_1$ is diffeomorphic to $X_2$.

Proof.

For each of the manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$ select the open dense set of full measure as in Lemma 7.1. Each of these subsets is diffeomorphic to the open unit ball in $\mathbb{R}^m$ via maps $\psi_1$ and $\psi_2$. In order to construct $X_1$ and $X_2$ it is sufficient to take preimages under $\psi_1$ and $\psi_2$, correspondingly, of a sufficiently large closed ball $\bar{B}_r$ with $r < 1$ (as $r \to 1$ we have $\mu(\mathcal{M}_1 \setminus \psi_1^{-1}(\bar{B}_r)) \to 0$ and $\mu(\mathcal{M}_2 \setminus \psi_2^{-1}(\bar{B}_r)) \to 0$). ∎

We are now ready to provide our main result on cycle generative models.

Theorem 7.1 (Geometric Universality for Cycle Models).

Fix any two compact connected manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$ of the same dimension and a nonconstant, bounded, continuous nonlinearity $\sigma$. Then for every $\varepsilon > 0$ and $\delta > 0$ there exist compact subsets $X_1 \subset \mathcal{M}_1$ and $X_2 \subset \mathcal{M}_2$ and a pair of feedforward neural networks $F_\theta$ and $H_\phi$ with the activation function $\sigma$ satisfying the following conditions:

  • $\mu(\mathcal{M}_1 \setminus X_1) < \varepsilon$ and $\mu(\mathcal{M}_2 \setminus X_2) < \varepsilon$;

  • $d_H(F_\theta(X_1), X_2) < \delta$ and $d_H(H_\phi(X_2), X_1) < \delta$;

  • $\|H_\phi(F_\theta(x)) - x\| < C\delta$ for all $x \in X_1$ and $\|F_\theta(H_\phi(y)) - y\| < C\delta$ for all $y \in X_2$, with a constant $C$ depending only on the manifolds $\mathcal{M}_1$ and $\mathcal{M}_2$.

Proof.

Let us start by selecting the subsets $X_1$ and $X_2$ and a diffeomorphism $F: X_1 \to X_2$ along with its inverse $H = F^{-1}$ as specified by Lemma 7.2. For simplicity let us also assume that $\mathcal{M}_1$ and $\mathcal{M}_2$ are contained in the unit cube $[0, 1]^D$. By means of the Whitney extension theorem (Whitney, 1934) we can smoothly extend $F$ and $H$ to the entire cube $[0, 1]^D$, and apply the universal approximation theorem (Hornik, 1991), thus obtaining two feedforward neural networks $F_\theta$ and $H_\phi$ such that

$\|F - F_\theta\| < \delta, \quad (7)$

and

$\|H - H_\phi\| < \delta, \quad (8)$

with all the functions defined on the unit cube $[0, 1]^D$. This proves the first two points of the theorem. To show the last property we find that for every $x \in X_1$ the following estimate holds:

$\|H_\phi(F_\theta(x)) - x\| \le \|H_\phi(F_\theta(x)) - H(F_\theta(x))\| + \|H(F_\theta(x)) - H(F(x))\| \le \delta + C_1\,\|F_\theta(x) - F(x)\| \le (1 + C_1)\,\delta, \quad (9)$

where we have used the fact that $\|H(y) - H(y')\| \le C_1\|y - y'\|$ for $y, y' \in [0, 1]^D$ (with $C_1$ the Lipschitz constant of the extended $H$), together with properties (7) and (8). The second part of the claim is proved similarly. ∎

The neural networks $F_\theta$ and $H_\phi$ constructed in the proof translate data sampled from $\mathcal{M}_1$ to data coming approximately from $\mathcal{M}_2$, and the existence of such networks for arbitrary manifolds may partially explain the huge empirical success of cycle models. Even though the theorem is valid for an arbitrary pair of manifolds, we hypothesize that for datasets containing visually similar images such a map may be much easier to model than for two arbitrary manifolds without such a connection.
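To get a feel for the kind of pair $(F_\theta, H_\phi)$ whose existence Theorem 7.1 guarantees, here is a toy sketch (ours; it uses a shared parameterization of the two manifolds as a supervised surrogate for the diffeomorphism of Lemma 7.2, not the adversarial training of CycleGAN). It fits two small networks between a circle and an ellipse and then checks the cycle-consistency error $\|H_\phi(F_\theta(x)) - x\|$.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two diffeomorphic data manifolds in R^2 sharing a parameter t in [0, 1].
circle = lambda t: torch.cat([torch.cos(2 * torch.pi * t), torch.sin(2 * torch.pi * t)], dim=1)
ellipse = lambda t: torch.cat([2 * torch.cos(2 * torch.pi * t), 0.5 * torch.sin(2 * torch.pi * t)], dim=1)

def mlp():
    return nn.Sequential(nn.Linear(2, 128), nn.Sigmoid(), nn.Linear(128, 2))

F_theta, H_phi = mlp(), mlp()  # approximate F: M_1 -> M_2 and its approximate inverse H
opt = torch.optim.Adam(list(F_theta.parameters()) + list(H_phi.parameters()), lr=1e-2)

for step in range(3000):
    t = torch.rand(1024, 1)
    x, y = circle(t), ellipse(t)
    loss = ((F_theta(x) - y) ** 2).mean() + ((H_phi(y) - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x = circle(torch.linspace(0, 1, 2000).unsqueeze(1))
    # Small cycle error, cf. the third point of Theorem 7.1.
    print((H_phi(F_theta(x)) - x).norm(dim=1).max().item())
```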

8 Conclusion and future work

In this work we have attempted to partially explain the huge empirical success of generative models. Our results show only the existence of neural networks approximating arbitrary manifolds, and do not specify how one can estimate the size of the network required for a given manifold. We hypothesize, however, that there might exist a connection between certain geometric properties of a manifold (curvature, various topological properties) and the width/depth of the neural network required. One interesting direction of research left for future work is analyzing this relation for datasets popular in computer vision, such as MNIST or CelebA, or for toy datasets sampled from simple low-dimensional manifolds (tori, circles), where one can easily vary the topological properties.

References

  • Balduzzi et al. (2018) David Balduzzi, Sebastien Racaniere, James Martens, Jakob Foerster, Karl Tuyls, and Thore Graepel. The mechanics of n-player differentiable games. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 354–363. PMLR, 2018.
  • Brock et al. (2018) Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018.
  • Brown (1962) Morton Brown. A mapping theorem for untriangulated manifolds. In Topology of 3-Manifolds and Related Topics, pages 92–94. Prentice-Hall, 1962.
  • Cohen and Shashua (2016) Nadav Cohen and Amnon Shashua. Convolutional rectifier networks as generalized tensor decompositions. In International Conference on Machine Learning, pages 955–963, 2016.
  • Cybenko (1989) George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems, 2(4):303–314, 1989.
  • Feizi et al. (2017) Soheil Feizi, Farzan Farnia, Tony Ginart, and David Tse. Understanding gans: the lqg setting. arXiv preprint arXiv:1710.10793, 2017.
  • Ghrist (2008) Robert Ghrist. Barcodes: the persistent topology of data. Bulletin of the American Mathematical Society, 45(1):61–75, 2008.
  • Goodfellow et al. (2016) Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press, 2016.
  • Hassoun et al. (1995) Mohamad H Hassoun et al. Fundamentals of artificial neural networks. MIT press, 1995.
  • Haykin (1994) Simon Haykin. Neural networks: a comprehensive foundation. Prentice Hall PTR, 1994.
  • Henrikson (1999) Jeff Henrikson. Completeness and total boundedness of the hausdorff metric. MIT Undergraduate Journal of Mathematics, 1:69–80, 1999.
  • Hornik (1991) Kurt Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
  • Isola et al. (2017) Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1125–1134, 2017.
  • Khayatkhoei et al. (2018) Mahyar Khayatkhoei, Maneesh K Singh, and Ahmed Elgammal. Disconnected manifold learning for generative adversarial networks. In Advances in Neural Information Processing Systems, pages 7343–7353, 2018.
  • Khrulkov and Oseledets (2018) Valentin Khrulkov and Ivan Oseledets. Geometry score: A method for comparing generative adversarial networks. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2621–2629. PMLR, 2018.
  • Khrulkov et al. (2019) Valentin Khrulkov, Oleksii Hrinchuk, and Ivan Oseledets. Generalized tensor models for recurrent neural networks. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=r1gNni0qtm.
  • Lee (2013) John M Lee. Smooth manifolds. Springer, 2013.
  • Lu et al. (2017) Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang. The expressive power of neural networks: A view from the width. In Advances in Neural Information Processing Systems, pages 6231–6239, 2017.
  • Lucic et al. (2018) Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. Are gans created equal? a large-scale study. In Advances in neural information processing systems, pages 700–709, 2018.
  • Odena (2019) Augustus Odena. Open questions about generative adversarial networks. Distill, 2019. doi: 10.23915/distill.00018. https://distill.pub/2019/gan-open-problems.
  • Radford et al. (2015) Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
  • Sakai (1996) Takashi Sakai. Riemannian geometry, volume 149. American Mathematical Soc., 1996.
  • Whitney (1934) Hassler Whitney. Analytic extensions of differentiable functions defined in closed sets. Transactions of the American Mathematical Society, 36(1):63–89, 1934.
  • Zhang et al. (2017) Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris N Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5907–5915, 2017.
  • Zhu et al. (2017) Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017.