Gaussian Auto-Encoder

11/12/2018, by Jarek Duda et al.

Evaluating the distance between a sample distribution and a desired one, usually Gaussian, is a difficult task required to train generative Auto-Encoders. After the original Variational Auto-Encoder (VAE) based on KL divergence, superiority was claimed for distances based on the Wasserstein metric (WAE, SWAE) and for the L_2 distance of the KDE Gaussian-smoothened sample over all 1D projections (CWAE). This article also derives formulas for the L_2 distance of the KDE Gaussian-smoothened sample, but this time directly using multivariate Gaussians, additionally optimizing a position-dependent covariance matrix with a mean-field approximation, for application in a purely Gaussian Auto-Encoder (GAE).


I Introduction

Generative AutoEncoders require the probability distribution in the latent space to be close to a chosen (prior) distribution, usually the multivariate Gaussian N(0, I) in a d-dimensional latent space. The original Variational AutoEncoders (VAE) [1] use a nondeterministic encoder, drawing the latent value from a separate Gaussian distribution for each input, optimized to minimize the Kullback-Leibler distance/divergence separately for each input. Such randomness means additional distortion: these Gaussians overlap, so distinct inputs can lead to the same outputs. Moreover, such separate treatment lacks a tendency toward uniform coverage of the latent space by the sample, which requires some repulsion between encoded points.

These problems were later repaired by the philosophy introduced in the WAE article [2]. As in a standard AutoEncoder, it uses a deterministic encoder minimizing the reconstruction cost: the distortion of the encoding-decoding process, i.e. some average over the training set of the distance between an input and its encoded-then-decoded reconstruction, preferably alongside evaluation by a trained discriminator (GAN), exploiting the fact that not all distortions are equally unwanted. Additionally, the minimized criterion also contains a regularizer: some distance between the distribution of the obtained ensemble in the latent space and the Gaussian distribution we would like to reach. Assume the number of considered points is n, which can be the entire sample size or the size of a used random subset.

Figure 1: Empirical distribution functions (estimated CDFs) of sorted squared radii (left column) and of (halved) squared pairwise distances (right column) for n points in R^d. For independent vectors from the multivariate Gaussian distribution N(0, I), both should be close to the CDF of the chi-squared distribution with d degrees of freedom. Top row: plots for 10 independent experiments using a random sample from N(0, I). 2nd and 3rd rows: plots of 10 independent experiments of gradient descent minimization (starting from a random sample from a uniform distribution) of the WAE-MMD (1) or CWAE (2) regularizer; the obtained distribution is essentially narrower or wider than for the Gaussian. Bottom row: the attraction toward the desired CDFs for radii and distances discussed here, reaching nearly perfect agreement (hence also of their densities), also in the further tests presented in Fig. 2. Such an optimization step in a generative AutoEncoder should be combined with optimization of the encoding-decoding distortion and a discriminator of decoded vectors.

These two complex criteria (reconstruction cost and regularizer) are usually evaluated while optimized jointly; however, proper evaluation should start with separating them, due to their complexity, dependence on the data sample, and freedom of choice, e.g. of the regularization rate. Hence we focus here on finding a proper regularizer: one whose optimization indeed approaches the desired (e.g. Gaussian) distribution. This turns out to be quite difficult, as we can see in Fig. 1: it is often not satisfied because the optimized criteria are somewhat arbitrary rather than what is really required. This article repairs that by designing regularizers that directly attract toward the desired distribution, to be combined with optimization of the reconstruction cost.

The original WAE chooses to optimize an approximation of the Wasserstein metric, also known as the earth mover's distance. The minimized regularizer in WAE-MMD is:

(1)

where the second sample is drawn randomly from the prior N(0, I), anew in every optimization step. Regarding the choice of the kernel, the article briefly mentions the Gaussian (RBF) kernel, then arbitrarily chooses the inverse multiquadratic kernel instead. In the next section we will analytically derive a formula similar to the former choice; Fig. 1 shows results of minimization using the latter choice, as in the article.
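
For illustration, here is a minimal Mathematica sketch of the general structure of such a kernel-MMD regularizer between the encoded sample and a fresh Gaussian sample; the specific kernel, constants and sizes are assumptions for illustration, not necessarily those of [2].

n = 100; d = 8;                                     (* example sizes *)
x = RandomReal[{-1, 1}, {n, d}];                    (* stand-in for encoded points *)
y = RandomVariate[NormalDistribution[], {n, d}];    (* random sample from the prior *)
k[u_, v_] := 1/(1 + Total[(u - v)^2]);              (* example inverse-multiquadratic-like kernel *)
mmd = Sum[If[i != j, k[x[[i]], x[[j]]] + k[y[[i]], y[[j]]], 0],
      {i, n}, {j, n}]/(n (n - 1)) -
   2 Sum[k[x[[i]], y[[j]]], {i, n}, {j, n}]/n^2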

The later sliced SWAE [3] uses a different approximation of the Wasserstein metric: it works with randomly chosen 1D projections, again with a randomly chosen sample and an arbitrarily chosen transportation cost.

Soon afterward, a non-random analytical formula was finally proposed in CWAE [4], using the L_2 distance for KDE (kernel density estimation) Gaussian-smoothened 1D projections and averaging over all projection directions. Its regularizer no longer requires a random sample, yielding a formula similar to (1) but with one index fewer:

(2)

for a heuristic choice of the smoothing bandwidth. This formula uses an asymptotic approximation claimed in the article to be practically indistinguishable for the tested dimensions. Formulas directly using multivariate Gaussians instead (without projection) are derived in Section II of this article for general covariance matrices.

As the above regularizers contain arbitrary choices, randomness and approximations, we should verify whether minimization of such a regularizer alone indeed leads to a Gaussian distribution; combined with optimization of the reconstruction cost it will be even more difficult. It was tested using the so-called Mardia tests [5] for the 3rd and 4th moments: that the multivariate skewness is close to 0 and the multivariate kurtosis E‖x‖⁴ is close to d(d+2).

However, these are only two moments, still leaving huge freedom for disagreement with the desired continuous distribution, starting with the second moment E‖x‖² = d, which generally does not need to be satisfied due to the additional constraints (encoding-decoding distortion and evaluation by a discriminator). Focusing on moments, we could directly optimize for their agreement, e.g. using gradient descent. However, on one hand it would potentially need an infinite number of moments for perfect agreement; on the other, choosing weights for the separate moments seems a difficult problem.
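
For reference, these two moment statistics can be checked directly on a sample. Below is a small Mathematica sketch computing sample versions (average of cubed scalar products over pairs and average of ‖x_i‖⁴), assuming Mardia-type statistics for N(0, I); the sizes are example values.

n = 300; d = 8;                                    (* example sizes *)
x = RandomVariate[NormalDistribution[], {n, d}];   (* sample to be tested *)
skew = Sum[(x[[i]].x[[j]])^3, {i, n}, {j, n}]/n^2; (* should be near 0 *)
kurt = Mean[Table[Total[x[[i]]^2]^2, {i, n}]];     (* should be near d(d+2) *)
{skew, kurt, d (d + 2)}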

A much more accurate approach can be found in the Kolmogorov-Smirnov test: some distance between the desired CDF (cumulative distribution function) and the empirical distribution function. For the Gaussian distribution we would mostly expect agreement of two distributions: of squared radii ‖x_i‖² and of halved squared pairwise distances ‖x_i − x_j‖²/2. Both should follow the chi-squared distribution with d degrees of freedom, which turned out not to be true for the WAE-MMD and CWAE regularizers, as we can see in Fig. 1: they lead to an essentially narrower or wider distribution.

Having such an accurate criterion, we can directly optimize it: agreement of the CDF with the empirical distribution for chosen properties, especially radii and distances for the Gaussian distribution. This will be described in Section III and also leads to agreement for tests of other properties, like random projections, scalar products and distances between normalized vectors, presented in Fig. 2. An alternative approach is optimizing the distribution of all coordinates simultaneously in this way.
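
Such a diagnostic is inexpensive to compute. Below is a minimal Mathematica sketch (with example sizes, which are assumptions for illustration) comparing the sorted squared radii and halved squared pairwise distances of a sample against chi-squared quantiles, in the spirit of Fig. 1.

n = 200; d = 8;                                    (* example sizes *)
x = RandomVariate[NormalDistribution[], {n, d}];   (* sample to be tested *)
rt = Sort[Table[Total[x[[i]]^2], {i, n}]];                   (* squared radii *)
dt = Sort[Flatten[Table[Total[(x[[i]] - x[[j]])^2]/2, {i, 2, n}, {j, i - 1}]]];
qr = Table[InverseCDF[ChiSquareDistribution[d], (i - 0.5)/n], {i, n}];
qd = Table[InverseCDF[ChiSquareDistribution[d], (k - 0.5)/Length[dt]], {k, Length[dt]}];
ListLinePlot[{Transpose[{rt, qr}], Transpose[{dt, qd}]}]  (* near-diagonal = agreement *)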

Therefore, combining or interleaving it with minimization of the reconstruction cost of the AutoEncoder (instead of optimizing some arbitrary criterion), we can get direct attraction to the Gaussian distribution for the latent variable. We can analogously use this approach to attract a different chosen distribution, by selecting its crucial 1D properties and directly attracting their proper CDFs. For example, enforcing a uniform distribution on a hypercube or torus would allow for data compression without additional statistical analysis and entropy coding. Additionally, to optimize for unavoidable quantization, we can enforce increased density near codewords this way.

Section II presents the approach of the first version of this article, giving some connection between formulas (1) and (2), which can also have other applications like optimization of Gaussian mixture models (GMMs). Section III contains the main proposed approach.

II Gaussian mixture distance

In this section analytic formulas are derived for the L_2 distance between multivariate Gaussian-smoothened samples, also using general covariance matrices. The derived formulas are similar to (1) and (2) and can be useful in low dimensions, e.g. to optimize GMMs. However, high-dimensional Gaussians should rather be imagined as thin shells instead of balls, which will be resolved in the next section by directly ensuring agreement of the CDFs for radii and distances.

II-A Integral of a product of multivariate Gaussians

The density of the multivariate d-dimensional Gaussian distribution N(μ, Σ), with center μ and covariance matrix Σ (real, symmetric, positive-definite), is:

\rho_{\mu,\Sigma}(x) = \frac{1}{\sqrt{(2\pi)^d \det\Sigma}}\, \exp\!\left(-\tfrac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right)     (3)

We first need to calculate the formula for the integral of a product of two such densities: of covariance matrices Σ₁ and Σ₂, shifted by a vector δ. Due to translational invariance, we can choose the centers of these Gaussians as 0 and δ:

\int_{\mathbb{R}^d} \rho_{0,\Sigma_1}(x)\,\rho_{\delta,\Sigma_2}(x)\,dx = \frac{1}{(2\pi)^d\sqrt{\det\Sigma_1\det\Sigma_2}} \int \exp\!\left(-\tfrac{1}{2}\big(x^T\Sigma_1^{-1}x + (x-\delta)^T\Sigma_2^{-1}(x-\delta)\big)\right) dx     (4)

Transforming the expression in the exponent we get:

x^T(\Sigma_1^{-1}+\Sigma_2^{-1})\,x - 2\, x^T\Sigma_2^{-1}\delta + \delta^T\Sigma_2^{-1}\delta

Denoting A = \Sigma_1^{-1}+\Sigma_2^{-1} and b = \Sigma_2^{-1}\delta we get:

x^T A\, x - 2\, x^T b + \delta^T\Sigma_2^{-1}\delta = (x - A^{-1}b)^T A\,(x - A^{-1}b) - b^T A^{-1} b + \delta^T\Sigma_2^{-1}\delta

Now, knowing that \int \exp\!\left(-\tfrac{1}{2}(x-m)^T A\,(x-m)\right) dx = \sqrt{(2\pi)^d/\det A}, we can remove the integral from (4):

\int \rho_{0,\Sigma_1}\,\rho_{\delta,\Sigma_2}\, dx = \frac{1}{\sqrt{(2\pi)^d \det\Sigma_1\det\Sigma_2\det A}}\, \exp\!\left(-\tfrac{1}{2}\big(\delta^T\Sigma_2^{-1}\delta - b^T A^{-1} b\big)\right)

Substituting \Sigma_2^{-1} - \Sigma_2^{-1}A^{-1}\Sigma_2^{-1} = (\Sigma_1+\Sigma_2)^{-1} and \det\Sigma_1\det\Sigma_2\det A = \det(\Sigma_1+\Sigma_2):

Observe that, as required, it does not change when switching Σ₁ and Σ₂. We can now get the final formula:

\int_{\mathbb{R}^d} \rho_{0,\Sigma_1}(x)\,\rho_{\delta,\Sigma_2}(x)\,dx = \rho_{0,\Sigma_1+\Sigma_2}(\delta) = \frac{\exp\!\left(-\tfrac{1}{2}\,\delta^T(\Sigma_1+\Sigma_2)^{-1}\delta\right)}{\sqrt{(2\pi)^d \det(\Sigma_1+\Sigma_2)}}     (5)

Let us also find its special case for spherically symmetric Gaussians, Σ₁ = σ₁² I and Σ₂ = σ₂² I, shifted by δ = r·v for any length-1 vector v:

\int \rho_{\mu,\sigma_1^2 I}(x)\, \rho_{\mu + r v,\,\sigma_2^2 I}(x)\, dx = \frac{1}{\big(2\pi(\sigma_1^2+\sigma_2^2)\big)^{d/2}}\, \exp\!\left(\frac{-r^2}{2(\sigma_1^2+\sigma_2^2)}\right)     (6)

We could also analogously find the formula for an integral of three or more Gaussians. We can also use general powers of Gaussians, e.g. to calculate L_p norms, for example using (\rho_{\mu,\Sigma})^p \propto \rho_{\mu,\Sigma/p}.
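
The closed form (5)-(6) can be sanity-checked numerically. Here is a small Mathematica sketch for the spherically symmetric case (6) in d = 2; the parameter values are arbitrary examples.

d = 2; s1 = 0.7; s2 = 1.3; delta = {0.5, -1.0};     (* example parameters *)
rho[x_, mu_, s_] := PDF[MultinormalDistribution[mu, s^2 IdentityMatrix[d]], x];
numeric = NIntegrate[rho[{x1, x2}, {0, 0}, s1] rho[{x1, x2}, delta, s2],
    {x1, -10, 10}, {x2, -10, 10}];
closed = Exp[-Total[delta^2]/(2 (s1^2 + s2^2))]/(2 Pi (s1^2 + s2^2))^(d/2);
{numeric, closed}                   (* the two values should agree *)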

II-B L_2 distance between two smoothened samples

Having two samples X = (x_1, …, x_n) and Y = (y_1, …, y_m) in R^d, we would like to KDE-smoothen them using multivariate Gaussians, then define their distance as the L_2 norm of the difference between such smoothened samples.

For full generality, let us start by assuming that each point has a separately chosen covariance matrix for its Gaussian: we have some Σ_1, …, Σ_n and \tilde Σ_1, …, \tilde Σ_m matrices. Such a Gaussian mixture can use any positive weights summing to 1; for simplicity we can assume they are equal: 1/n and 1/m respectively.

Now the squared L_2 distance between these samples, depending on the choice of covariance matrices, is

\left\| \frac{1}{n}\sum_i \rho_{x_i,\Sigma_i} - \frac{1}{m}\sum_j \rho_{y_j,\tilde\Sigma_j} \right\|_2^2 = \frac{1}{n^2}\sum_{i,j} \int \rho_{x_i,\Sigma_i}\rho_{x_j,\Sigma_j} + \frac{1}{m^2}\sum_{i,j} \int \rho_{y_i,\tilde\Sigma_i}\rho_{y_j,\tilde\Sigma_j} - \frac{2}{nm}\sum_{i,j} \int \rho_{x_i,\Sigma_i}\rho_{y_j,\tilde\Sigma_j}

where each integral is given by formula (5).

As we have freedom in choosing the covariance matrices, we can use the above formula to optimize this choice: the used distance can be the result of e.g. its iterative minimization. The initial choice can be found with the mean-field approximation discussed later.

This formula can be used for example for optimizing a GMM (Gaussian Mixture Model), e.g. to associate fixed Gaussians to the points of a sample and find a close covering with a smaller number of Gaussians. It allows directly optimizing centers and covariance matrices, parameterized e.g. as the symmetric Σ^{-1} (the so-called precision matrix) for efficient calculation, or in a factorized form ensuring positive definiteness.

Let us also find a more practical formula for the basic choice of all covariance matrices being σ² I:

\left\| \frac{1}{n}\sum_i \rho_{x_i,\sigma^2 I} - \frac{1}{m}\sum_j \rho_{y_j,\sigma^2 I} \right\|_2^2 = \frac{1}{(4\pi\sigma^2)^{d/2}} \left( \frac{1}{n^2}\sum_{i,j} e^{-\|x_i-x_j\|^2/(4\sigma^2)} + \frac{1}{m^2}\sum_{i,j} e^{-\|y_i-y_j\|^2/(4\sigma^2)} - \frac{2}{nm}\sum_{i,j} e^{-\|x_i-y_j\|^2/(4\sigma^2)} \right)     (7)

The fixed factor (4πσ²)^{-d/2} can be removed while applying this formula, as it becomes extreme in high dimensions.
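
For concreteness, here is a direct Mathematica sketch of (7) for two samples with a common bandwidth; the sizes and σ are example values, and the samples are stand-ins.

n = 100; m = 120; d = 4; sigma = 0.5;              (* example sizes and bandwidth *)
x = RandomVariate[NormalDistribution[], {n, d}];
y = RandomVariate[NormalDistribution[], {m, d}];
ker[u_, v_] := Exp[-Total[(u - v)^2]/(4 sigma^2)];
dist2 = (Sum[ker[x[[i]], x[[j]]], {i, n}, {j, n}]/n^2 +
     Sum[ker[y[[i]], y[[j]]], {i, m}, {j, m}]/m^2 -
     2 Sum[ker[x[[i]], y[[j]]], {i, n}, {j, m}]/(n m))/(4 Pi sigma^2)^(d/2)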

This formula turns out quite similar to WAE (1) with an exponential kernel, using the second sample as a random one from the chosen distribution. In the next subsection we will directly use a single Gaussian instead, getting a formula similar to the final one of CWAE (2). The latter uses another, heavy-tailed kernel: an inverse square root in place of the exponential. The similarity with CWAE comes from a similar origin: both use the L_2 distance between Gaussian-smoothened samples. However, CWAE calculates this distance for projections onto 1D subspaces, averaging over all such directions, thus optimizing similarity of 1D projections. In contrast, here we directly want closeness of the multivariate distributions, as in the original generative AutoEncoder motivation.

It might also be worth exploring different types of tails, corresponding to repulsion inside both sets and attraction between them, as if they were charged with various types of Coulomb-like interaction.

II-C L_2 distance between smoothened sample and N(0, I)

For generative AutoEncoders we are more interested in calculating the distance from the single Gaussian distribution ρ_{0,I}, instead of representing it with a random sample like in WAE. Let us now use ρ_{0,I} in place of the second smoothened sample from the previous subsection:

\left\| \frac{1}{n}\sum_i \rho_{x_i,\Sigma_i} - \rho_{0,I} \right\|_2^2 = \frac{1}{n^2}\sum_{i,j} \int \rho_{x_i,\Sigma_i}\rho_{x_j,\Sigma_j} - \frac{2}{n}\sum_i \int \rho_{x_i,\Sigma_i}\,\rho_{0,I} + \int (\rho_{0,I})^2     (8)

Using the simplest, spherically symmetric Σ_i = σ_i² I, for example with a constant σ_i = σ, we get:

\frac{1}{n^2 (4\pi\sigma^2)^{d/2}} \sum_{i,j} e^{-\|x_i-x_j\|^2/(4\sigma^2)} - \frac{2}{n\,(2\pi(1+\sigma^2))^{d/2}} \sum_i e^{-\|x_i\|^2/(2(1+\sigma^2))} + \frac{1}{(4\pi)^{d/2}}     (9)

For large d it requires using σ of order 1; the choice of σ allows manipulating the relative weight of the two above sums. For the simplest choice σ = 1, formula (9) becomes inexpensive:

\frac{1}{(4\pi)^{d/2}} \left( \frac{1}{n^2}\sum_{i,j} e^{-\|x_i-x_j\|^2/4} - \frac{2}{n}\sum_i e^{-\|x_i\|^2/4} + 1 \right)     (10)
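
A direct Mathematica sketch of (10), i.e. the σ = 1 case with the constant factor included; n, d are example values and x is a stand-in for the encoded sample.

n = 100; d = 4;                                    (* example sizes *)
x = RandomVariate[NormalDistribution[], {n, d}];   (* stand-in for encoded points *)
dist2 = (Sum[Exp[-Total[(x[[i]] - x[[j]])^2]/4], {i, n}, {j, n}]/n^2 -
     2 Sum[Exp[-Total[x[[i]]^2]/4], {i, n}]/n + 1)/(4 Pi)^(d/2)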

II-D Mean-field approximation for optimizing σ

Choosing σ is generally a difficult question, but we can use a kind of mean-field approximation to individually choose covariance matrices depending on position. Specifically, focusing on a given point x_i, we can assume that the remaining points come approximately from the desired density ρ_{0,I}. This way e.g. the L_2 distance becomes:

(11)

For a fixed dimension d, we would like to choose the σ minimizing (11), depending on the radius r = ‖x_i‖. Numerically, its approximate behavior turns out to be

(12)

which can be used as σ_i in distance (9).

This mean-field approximation can also be used to choose an optimized position-dependent general covariance matrix Σ(x). Due to symmetry, it should have only two different eigenvalues: one in the radial direction x/‖x‖, and one in its orthogonal complement.

II-E High dimensional situation

The above calculations might be useful in low-dimensional situations, but in practice we often need to work with large d. As x ~ N(0, I) can be seen as d independent coordinates from N(0, 1), ‖x‖² is from the chi-squared distribution with d degrees of freedom, which asymptotically (large d) is approximately N(d, 2d), making the exponents e.g. in (10) impractically small. The kernel got a heavier tail in CWAE (2) thanks to 1D projections (and also in WAE (1), but without a deeper explanation).

Hence, a high-dimensional Gaussian distribution should rather be imagined as a thin spherical shell of radius ≈ √d, which is far from the ball-like low-dimensional intuition about Gaussian mixtures; the above distance should rather be imagined as a distance between spheres, not exactly what we are interested in.

III Attracting to a chosen CDF

The previously discussed approaches tried to guess a metric, hoping that its minimization will lead to the Gaussian distribution. Instead, we can focus on features of this distribution and try to optimize them directly. We could use moments for this purpose, but they provide only a very rough description.

In contrast, a perfect description of a continuous 1D distribution is given by its CDF, which, as in the Kolmogorov-Smirnov test, can be estimated by the empirical distribution function, obtained by just sorting the values. The most important 1D properties of the multivariate Gaussian, which the other discussed methods also focus on, are radii and pairwise distances; the provided algorithm directly attracts toward their agreement. Other properties to optimize can analogously be added (or chosen from scratch). However, it turns out that optimizing radii and distances alone also leads to agreement of other properties, as we can see in the tests in Fig. 2.

Figure 2: Additional tests for the discussed attraction of Gaussian CDFs for radii and distances (right column), compared with a random i.i.d. sample (left column) from the distribution we would like to achieve; each plot contains 10 independent experiments. Top row: test of projections on random directions. Middle row: test of scalar products. Bottom row: test of uniform distribution of angles, as distances between normalized vectors.

III-A Algorithm

This subsection contains the main approach of this article: directly optimizing agreement of the empirical distribution of the sample (obtained by just sorting the values) with the proper CDF. The Mathematica implementation used is given in the Appendix.

The discussed version attracts to CDFs of the multivariate Gaussian distribution for two (squared) properties: squared radii ‖x_i‖² and halved squared pairwise distances ‖x_i − x_j‖²/2, both of which should ideally follow the chi-squared distribution with d degrees of freedom. This general approach can be naturally modified for agreement of other properties and their chosen CDFs.

Algorithm:

We first need to put into tables the desired CDF arguments, here quantiles of the chi-squared distribution with d degrees of freedom, for radii and distances: c_i = CDF^{-1}_{χ²(d)}((i − 0.5)/n) for i = 1, …, n and c'_k = CDF^{-1}_{χ²(d)}((k − 0.5)/n_p) for k = 1, …, n_p, where n_p = n(n−1)/2 is the number of pairs.

Then a gradient descent step for optimizing the empirical distribution of a set of n points x_1, …, x_n in R^d is:

  1. Calculate all squared radii and halved squared distances: r_i = ‖x_i‖² and d_{ij} = ‖x_i − x_j‖²/2.

  2. Sort both and find their orders (bijections): p(i), the rank of r_i among the radii, and p'(ij), the rank of d_{ij} among the distances.

  3. Assuming the minimized final distance is the L¹-type cost, which corresponds to the area of the difference between the desired CDFs and the empirical distributions for radii and distances:

    D = \frac{1}{n}\sum_i \big| r_i - c_{p(i)} \big| + \frac{1}{n_p}\sum_{i<j} \big| d_{ij} - c'_{p'(ij)} \big|     (13)

    its (sub)gradient on the i-th vector, with the relative weighting of the two terms as in the Appendix implementation, is:

    g_i = \frac{1}{n}\, x_i\, \mathrm{sign}\!\big(r_i - c_{p(i)}\big) + \frac{2}{n_p}\sum_{j\neq i} (x_i - x_j)\, \mathrm{sign}\!\big(d_{ij} - c'_{p'(ij)}\big)     (14)
  4. Gradient descent: x_i ← x_i − α g_i, where the step size α can be chosen depending on (e.g. proportional to) the distance D.

Each such 1)-4) iteration takes our points closer to agreement with the perfect CDFs of the multivariate Gaussian. In an AutoEncoder it should be combined or interleaved with steps reducing the distortion of the encoding-decoding process (preferably also with evaluation by a discriminator); the regularizer rate should start large and be gradually reduced during training.

It is tempting to approximate the CDF of χ²(d) with just a step function at its mean d (especially in high dimensions), as it would allow removing the above sorting and just optimizing both squared norms toward the constant value d. Sorting gives more tolerance for deviations from this constant, especially for extreme values, exactly as in the real Gaussian distribution.
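
A minimal Mathematica sketch of this step-function simplification, attracting every squared radius toward the constant d (no sorting); n, d, alpha and the initial x are example assumptions.

n = 100; d = 8; alpha = 0.1;         (* example values *)
x = RandomReal[{-1, 1}, {n, d}];     (* example initial sample *)
g = Table[x[[i]] Sign[Total[x[[i]]^2] - d]/n, {i, n}];  (* attract ||x_i||^2 to d *)
x -= alpha g;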

III-B Some comments and expansions

The proportion of weights for radii and distances was chosen arbitrarily, which might be worth exploring, especially when adding CDFs of more properties to be attracted.

The Kolmogorov-Smirnov test uses the L^∞ norm instead, but optimizing it would lead to gradient descent shifting only single extreme points. The L¹ norm above allows optimizing all points at a time and has a natural interpretation as the area between the two plots. It might also be worth exploring other norms like L², which can be obtained by just replacing the sign above with the difference itself.
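
For illustration, here is the radii part of such an L² variant in Mathematica: the sign factor of (14) is replaced by the difference itself; the sizes and initial sample are example values.

n = 100; d = 8; alpha = 0.1;                      (* example values *)
x = RandomReal[{-1, 1}, {n, d}];                  (* example initial sample *)
c = Table[InverseCDF[ChiSquareDistribution[d], (i - 0.5)/n], {i, n}];
rt = Table[Total[x[[i]]^2], {i, n}];              (* squared radii *)
ps = Table[0, {i, n}]; or = Ordering[rt];
Do[ps[[or[[i]]]] = i, {i, n}];                    (* ranks of radii *)
g = Table[x[[i]] (rt[[i]] - c[[ps[[i]]]])/n, {i, n}];   (* L2: difference, no Sign *)
x -= alpha g;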

The above attraction only ensures approaching the desired CDF for radii and pairwise distances, which turns out sufficient when optimizing the regularizer alone, also for some other properties, as we can see in Fig. 2. Approaching it might turn out more difficult when adding other optimization criteria like the reconstruction cost, in which case it might be worth considering adding CDF attraction also for other properties, like scalar products or projections. If there is a problem with an analytical formula for a CDF, it can be approximated by just sampling from the desired distribution and using the empirical distribution.

This general approach can also be used to attract different chosen distributions in the latent space, which requires choosing the essential properties whose CDFs we would like to attract, then replacing the sum above with the chosen one. For example, to attract a GMM-like distribution, we can choose agreement of the CDFs (not necessarily shell-like as for the Gaussian) of distances from a few chosen points, like the GMM centers, with CDFs found e.g. as empirical distributions of a random sample.

An alternative approach, e.g. for the Gaussian, is optimizing the CDF simultaneously for all coordinates. Assume the coordinates have independent distributions and CDF_j is the desired CDF of the j-th coordinate: e.g. the normal CDF (error function) for a Gaussian, or the identity on [0,1] for a uniform distribution on the hypercube or torus. For x_i from the data sample, denote by p_{ij} its position when sorting according to the j-th coordinate. Hence

\bar{x}_{ij} = \mathrm{CDF}_j^{-1}\!\left(\frac{p_{ij} - 1/2}{n}\right)     (15)

is its perfect position according to the desired distribution, and we can use e.g. a step of x toward \bar{x} as the regularization step.
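
A sketch of this coordinate-wise variant in Mathematica for the Gaussian target, following (15): sort each coordinate, read its target quantile, and shift toward it; the sizes, the step alpha and the initial sample are example assumptions.

n = 200; d = 8; alpha = 0.1;              (* example values *)
x = RandomReal[{-1, 1}, {n, d}];          (* example initial sample *)
target = Table[0., {n}, {d}];
Do[ord = Ordering[x[[All, j]]];           (* sort the j-th coordinate *)
  Do[target[[ord[[i]], j]] =
     InverseCDF[NormalDistribution[], (i - 0.5)/n], {i, n}], {j, d}];
x += alpha (target - x);                  (* shift toward perfect positions *)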

III-C Data compression application

For data compression applications, especially image/video, we would like to learn from a dataset how typical objects (e.g. textures) look, and try to encode within their space, which is essentially smaller than the space of e.g. all bitmaps. This is usually realized by encoding crucial features, like Fourier or wavelet coefficients in classical methods. Machine learning techniques can optimize it further, customizing it based on the training dataset.

Additionally, images usually have patterns repeating at various scales. To exploit this multi-scale nature, a pyramidal decomposition was proposed [6] in analogy to the wavelet transform: encode a given block simultaneously in multiple scales, differing by a down-sampler. Optimizing the distortion of the encoding-decoding process (including quantization), and additionally the evaluation by a discriminator, we get a kind of multi-scale AE-GAN, with additional encoding of the quantized features: the values of the latent variables.

In a standard AutoEncoder these values of latent variables usually have a very complex distribution, making their statistical analysis (and entropy coding) difficult and often suboptimal, which translates into an inferior compression ratio. Beside simplification (cost), we should get better compression if we enforce some simple probability distribution in the latent space by adding a regularizer to the optimized criteria. For example, if a multivariate Gaussian is enforced, each coordinate should be approximately a 1D Gaussian, which can be encoded by splitting the possible values into ranges (bins), using an entropy coder to store the bin number, and then directly storing some number of the remaining most significant bits if needed [7]. An alternative approach is vector quantization, for example separately encoding the radius, and the direction from the uniform distribution on the unit sphere, e.g. using a pyramid vector quantizer [8, 9].

We can also use the discussed CDF-attraction approach to enforce a different distribution. For example, the mentioned uniform distribution on a hypercube would allow avoiding the entropy coder: we could just directly store a chosen number of the most significant bits of each coordinate. It could be done by analogously attracting to the uniform distribution for all coordinates; however, this needs some special behavior (e.g. repulsion, projection, or rescaling) near the boundaries, so as not to exceed them. We could repair it by using a torus instead: gluing the pairs of faces (at 0 and 1) for all dimensions, by just taking each coordinate, originally a real number, modulo 1.

For data compression applications we also need to include quantization of the latent space, to practically represent and encode these continuous values: as the closest point (codeword) from a chosen finite subset (codebook). Directly including such quantization during AutoEncoder training would give a derivative equal to zero. The simplest way to resolve this problem is just ignoring quantization during training. A more sophisticated solution is adding a quantization error, e.g. the distance to the nearest codeword, to the optimized criteria, getting an "egg-carton"-like potential with minima at the codewords. The attraction to a chosen CDF discussed here allows including such behavior inside this chosen CDF. For example, for the uniform distribution on [0,1] and taking the most significant bits of each coordinate, instead of the identity CDF we can use one with increased density near the codewords, getting additional attraction to the closest codeword: each coordinate of a point is shifted toward its nearest codeword.
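
As an illustration only (the specific density below is a hypothetical choice, not taken from the article), here is a CDF on [0,1] whose density is increased near K = 2^m bin centers; it could be used in place of the identity CDF in the coordinate-wise attraction (15).

K = 8; a = 0.5;                           (* example: m = 3 bits, modulation depth a *)
cdf[t_] := t - a Sin[2 Pi K t]/(2 Pi K);  (* density 1 - a Cos[2 Pi K t], peaks at bin centers *)
invcdf[q_] := t /. FindRoot[cdf[t] == q, {t, q, 0, 1}];   (* numerical inverse for (15) *)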

IV Conclusions and further work

The basic conclusion of this article is that, instead of using heuristic approximate regularizers, at a similar computational cost we can directly optimize toward the desired probability distribution, e.g. of radii and distances for the multivariate Gaussian. Combining or interleaving such an optimization step with standard AutoEncoder optimization (of encoding-decoding distortion and evaluation by a discriminator), we can ensure that the final distribution of the latent variable is nearly indistinguishable from a random sample from the desired probability distribution.

Beside testing the proposed approach with AutoEncoders, suggested further work starts with expanding the evaluation of other methods from testing just two moments to the much more accurate verification of agreement of empirical distributions with the desired CDFs, as in Fig. 1.

As discussed, the above approach leaves some freedom which might be worth exploring, e.g. the weights between CDFs for different properties, the set of these properties, and the norm for evaluating the distance between the CDF and the empirical distribution.

Finally, this attracting-CDF approach is much more general: it can be used to approach practically any chosen probability distribution, which allows using e.g. a chosen clustering in the latent space with a GMM-like prior distribution, or even a distribution with some chosen nontrivial topology, e.g. for some circular morphing, or a torus latent space to simplify and optimize storage of its values in data compression, also with an "egg-carton"-like density to optimize for quantization.

Appendix

The Mathematica implementation used for attracting the chosen radii and distances CDFs:

n = 100; d = 8;                  (* example sizes: n points in R^d *)
alpha = 0.1;                     (* example gradient descent step size *)
x = RandomReal[{-1, 1}, {n, d}]; (* example initial sample to be attracted *)
np = n*(n - 1)/2;                       (* number of pairs *)
(* calculating CDF tables and auxiliary tables: *)
invcdf[q_] := InverseCDF[ChiSquareDistribution[d], q];
c = Table[invcdf[(i - 0.5)/n], {i, n}];
cp = Table[invcdf[(k - 0.5)/np], {k, np}];
dt = Table[0., {i, np}];               (* distances *)
ps = Table[0, {i, n}];        (* positions in order *)
psp = Table[0, {i, np}];

(* single optimization step for x table *)
g = Table[0., {i, n}, {j, d}];    (* gradient table *)
rt = Table[Total[x[[i]]^2], {i, n}];       (* radii *)
or = Ordering[rt]; Do[ps[[or[[i]]]] = i, {i, n}];
Do[g[[i]] += x[[i]]*Sign[rt[[i]] - c[[ps[[i]]]]]/n
   , {i, n}];        (* radii gradient contribution *)
k = 0;
Do[dt[[++k]] = Total[(x[[i]] - x[[j]])^2]/2, {i, 2, n}
   , {j, i - 1}];            (* calculate distances *)
orp = Ordering[dt]; Do[psp[[orp[[i]]]] = i, {i, np}];
k = 0;
Do[k++;           (* distance gradient contribution *)
  ch = 2 (x[[i]] - x[[j]])*Sign[dt[[k]] - cp[[psp[[k]]]]]/np;
  g[[i]] += ch; g[[j]] -= ch, {i, 2, n}, {j, i - 1}];
x -= alpha*g;              (* gradient descent step *)

References