Improving Variational Auto-Encoders using convex combination linear Inverse Autoregressive Flow

06/07/2017 ∙ by Jakub M. Tomczak, et al. ∙ University of Amsterdam

In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow. The idea is to enrich a linear Inverse Autoregressive Flow by introducing multiple lower-triangular matrices with ones on the diagonal and combining them using a convex combination. In experimental studies on MNIST and Histopathology data we show that the proposed approach outperforms other volume-preserving flows and is competitive with the current state-of-the-art linear normalizing flow.


1 Variational Auto-Encoders and Normalizing Flows

Let $\mathbf{x}$ be a vector of observable variables, $\mathbf{z}$ a vector of stochastic latent variables, and let $p(\mathbf{x}, \mathbf{z})$ be a parametric model of the joint distribution. Given data $\mathbf{X} = \{\mathbf{x}_1, \ldots, \mathbf{x}_N\}$ we typically aim at maximizing the marginal log-likelihood, $\ln p(\mathbf{X}) = \sum_{i=1}^{N} \ln p(\mathbf{x}_i)$, with respect to the parameters. However, when the model is parameterized by a neural network (NN), the optimization can be difficult due to the intractability of the marginal likelihood. A possible manner of overcoming this issue is to apply variational inference and optimize the following lower bound:

$$\ln p(\mathbf{x}) \geq \mathbb{E}_{q_{\phi}(\mathbf{z}|\mathbf{x})}\big[\ln p_{\theta}(\mathbf{x}|\mathbf{z})\big] - \mathrm{KL}\big(q_{\phi}(\mathbf{z}|\mathbf{x}) \,\|\, p(\mathbf{z})\big), \tag{1}$$

where $q_{\phi}(\mathbf{z}|\mathbf{x})$ is the inference model (the encoder), $p_{\theta}(\mathbf{x}|\mathbf{z})$ is called the decoder and $p(\mathbf{z})$ is the prior. There are various ways of optimizing this lower bound, but for continuous $\mathbf{z}$ this can be done efficiently through a re-parameterization of $q_{\phi}(\mathbf{z}|\mathbf{x})$ [KW:13], [RMW:14], which yields the variational auto-encoder architecture (VAE).

Typically, a diagonal covariance matrix of the encoder is assumed, i.e., $q_{\phi}(\mathbf{z}|\mathbf{x}) = \mathcal{N}\big(\mathbf{z} \,|\, \boldsymbol{\mu}(\mathbf{x}), \mathrm{diag}(\boldsymbol{\sigma}^{2}(\mathbf{x}))\big)$, where $\boldsymbol{\mu}(\mathbf{x})$ and $\boldsymbol{\sigma}^{2}(\mathbf{x})$ are parameterized by the NN. However, this assumption can be insufficient and not flexible enough to match the true posterior.
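For concreteness, here is a minimal NumPy sketch of the re-parameterization of such a diagonal Gaussian encoder; the names `mu` and `log_var` (standing for the encoder's outputs) are illustrative and not taken from the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Draw z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps, eps ~ N(0, I).

    The noise is drawn independently of mu and log_var, so in an autodiff
    framework gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Example: a 2-dimensional latent for a single data point.
z0 = reparameterize(np.array([0.0, 1.0]), np.array([-1.0, 0.5]))
```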

A manner of enriching the variational posterior is to apply a normalizing flow [TT:13], [TV:10]. A (finite) normalizing flow is a powerful framework for building flexible posterior distributions: we start with an initial random variable with a simple distribution for generating $\mathbf{z}^{(0)}$ and then apply a series of invertible transformations $\mathbf{f}^{(t)}$, for $t = 1, \ldots, T$. As a result, the last iteration gives a random variable $\mathbf{z}^{(T)}$ with a more flexible distribution. Once we choose transformations $\mathbf{f}^{(t)}$ for which the Jacobian-determinant can be computed, we aim at optimizing the following lower bound [RM:15]:

$$\mathcal{L}(\mathbf{x}) = \mathbb{E}_{q(\mathbf{z}^{(0)}|\mathbf{x})}\Big[\ln p(\mathbf{x}|\mathbf{z}^{(T)}) + \ln p(\mathbf{z}^{(T)}) - \ln q(\mathbf{z}^{(0)}|\mathbf{x}) + \sum_{t=1}^{T} \ln \Big|\det \frac{\partial \mathbf{f}^{(t)}}{\partial \mathbf{z}^{(t-1)}}\Big|\Big]. \tag{2}$$

The way the Jacobian-determinant is handled determines whether we deal with general normalizing flows or volume-preserving flows. General normalizing flows aim at formulating transformations for which the Jacobian-determinant is relatively easy to compute. On the contrary, volume-preserving flows design the series of transformations such that the Jacobian-determinant equals 1, while still allowing flexible posterior distributions to be obtained.
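To make the role of the Jacobian-determinant concrete, the following NumPy sketch (hypothetical names, not the authors' code) computes a single-sample Monte Carlo estimate of the bound in Eq. (2); for a volume-preserving flow every log-Jacobian-determinant term is zero and the sum disappears.

```python
import numpy as np

def flow_lower_bound_sample(log_p_x_given_zT, log_p_zT, log_q_z0_given_x,
                            log_abs_det_jacobians):
    """Single-sample Monte Carlo estimate of the flow lower bound (Eq. 2).

    log_abs_det_jacobians: ln|det d f^(t) / d z^(t-1)| for t = 1..T.
    For a volume-preserving flow each term equals 0, so the bound keeps the
    simple form of Eq. (1) while the posterior becomes more flexible."""
    return (log_p_x_given_zT + log_p_zT - log_q_z0_given_x
            + float(np.sum(log_abs_det_jacobians)))
```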

In this paper, we propose a new volume-preserving flow and show that it performs similarly to the linear general normalizing flow.

2 New Volume-Preserving Flow

In general, we can obtain a more flexible variational posterior if we model a full-covariance matrix using a linear transformation, namely, $\mathbf{z}^{(1)} = \mathbf{L}(\mathbf{x})\, \mathbf{z}^{(0)}$. However, in order to take advantage of the volume-preserving flow, the Jacobian-determinant of $\mathbf{L}(\mathbf{x})$ must be 1. This can be accomplished in different ways, e.g., $\mathbf{L}(\mathbf{x})$ is an orthogonal matrix or it is a lower-triangular matrix with ones on the diagonal. The former idea was employed by the Householder flow (HF) [TW:16] and the latter one by the linear Inverse Autoregressive Flow (LinIAF) [KSJCSW:16]. In both cases, the encoder outputs an additional set of variables that are further used to calculate $\mathbf{L}(\mathbf{x})$. In the case of the LinIAF, the lower-triangular matrix with ones on the diagonal is given by the NN explicitly.
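As an illustration of the LinIAF step under these definitions, the following NumPy sketch (the name `raw` for the encoder's extra outputs is hypothetical) assembles a lower-triangular matrix with ones on the diagonal and applies it to $\mathbf{z}^{(0)}$.

```python
import numpy as np

def linear_iaf_step(z0, raw):
    """Form L(x), lower-triangular with ones on the diagonal, from the
    encoder's extra outputs `raw` (length M*(M-1)/2), and return L(x) z0."""
    m = z0.shape[0]
    L = np.eye(m)
    L[np.tril_indices(m, k=-1)] = raw   # fill the strictly-lower part
    return L @ z0                       # det L = 1, hence volume-preserving

# Example with a 3-dimensional latent: raw has 3 free entries.
z1 = linear_iaf_step(np.array([0.5, -1.0, 2.0]), np.array([0.1, -0.3, 0.7]))
```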

However, in the LinIAF a single matrix may not be able to fully represent the variations in the data. In order to alleviate this issue we propose to consider $K$ such matrices, $\mathbf{L}_1(\mathbf{x}), \ldots, \mathbf{L}_K(\mathbf{x})$. Further, to obtain a volume-preserving flow, we propose to use a convex combination of these matrices, $\sum_{k=1}^{K} y_k(\mathbf{x})\, \mathbf{L}_k(\mathbf{x})$, where $\mathbf{y}(\mathbf{x}) = [y_1(\mathbf{x}), \ldots, y_K(\mathbf{x})]^{\top}$ is calculated using the softmax function, namely, $\mathbf{y}(\mathbf{x}) = \mathrm{softmax}(\mathrm{NN}(\mathbf{x}))$, where $\mathrm{NN}(\mathbf{x})$ is the neural network used in the encoder.

Finally, we obtain the following linear transformation with the convex combination of the lower-triangular matrices with ones on the diagonal:

$$\mathbf{z}^{(1)} = \Big(\sum_{k=1}^{K} y_k(\mathbf{x})\, \mathbf{L}_k(\mathbf{x})\Big)\, \mathbf{z}^{(0)}. \tag{3}$$

The convex combination of lower-triangular matrices with ones on the diagonal is again a lower-triangular matrix with ones on the diagonal, thus $\det\big(\sum_{k=1}^{K} y_k(\mathbf{x})\, \mathbf{L}_k(\mathbf{x})\big) = 1$. This yields the volume-preserving flow we refer to as convex combination linear IAF (ccLinIAF).
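A minimal NumPy sketch of the ccLinIAF step in Eq. (3) follows; the names are illustrative, and each $\mathbf{L}_k$ could be produced, e.g., by the LinIAF sketch above.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

def cc_lin_iaf_step(z0, Ls, logits):
    """Eq. (3): z1 = (sum_k y_k(x) L_k(x)) z0 with y(x) = softmax(NN(x)).

    Each L_k is lower-triangular with ones on the diagonal; their convex
    combination is too, so its determinant is 1 and the flow is
    volume-preserving."""
    y = softmax(logits)
    L = sum(w * Lk for w, Lk in zip(y, Ls))
    return L @ z0

# Example: K = 2 matrices for a 2-dimensional latent.
L1 = np.array([[1.0, 0.0], [0.3, 1.0]])
L2 = np.array([[1.0, 0.0], [-0.8, 1.0]])
z1 = cc_lin_iaf_step(np.array([1.0, -2.0]), [L1, L2], np.array([0.2, -0.1]))
```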

3 Experiments

Datasets

In the experiments we use two datasets: the MNIST dataset [MNIST] (we used the static binarized version as in [LM:11]) and the Histopathology dataset [TW:16]. The first dataset contains images of handwritten digits (50,000 training images, 10,000 validation images and 10,000 test images) and the second one contains gray-scale image patches of histopathology scans (6,800 training images, 2,000 validation images and 2,000 test images). For both datasets we used a separate validation set for hyper-parameter tuning.

Set-up

In both experiments we trained the VAE with stochastic hidden units, and the encoder and the decoder were parameterized with two-layered neural networks and the gating activation function [DG:15], [DFAG:16], [OKEVGK:16], [TW:16]. The number of combined matrices was determined using the validation set; taking more than 5 matrices resulted in no performance improvement. For training we utilized ADAM [KB:14] with mini-batches and one sample for estimating the expected value. The learning rate was set according to the validation set. Training was run for a fixed maximum number of epochs and early-stopping with a look-ahead (measured in epochs) was applied. We used warm-up [BVVDJB:15], [SRMSW:16] during the initial epochs of training. We initialized the weights according to [GB:10].
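As a side note, the gating activation mentioned above multiplies a linear branch by a sigmoid gate; a minimal NumPy sketch (hypothetical weight names, not the authors' exact layer) is given below.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gated_layer(x, W, b, V, c):
    """Gating activation: h = (W x + b) * sigmoid(V x + c).
    The sigmoid branch acts as a learned gate on the linear branch."""
    return (W @ x + b) * sigmoid(V @ x + c)

# Example: a layer mapping 4 inputs to 3 hidden units.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
h = gated_layer(x, rng.standard_normal((3, 4)), np.zeros(3),
                rng.standard_normal((3, 4)), np.zeros(3))
```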

We compared our approach to the linear normalizing flow (VAE+NF) [RM:15] and to finite volume-preserving flows: NICE (VAE+NICE) [DKB:14], HVI (VAE+HVI) [SKW:15], HF (VAE+HF) [TW:16] and the linear IAF (VAE+LinIAF) [KSJCSW:16] on the MNIST data, and to VAE+HF on the Histopathology data. The methods were compared in terms of the lower bound of the marginal log-likelihood measured on the test set.


Method
VAE
VAE+NF (T=10)
VAE+NF (T=80)
VAE+NICE (T=10)
VAE+NICE (T=80)
VAE+HVI (T=1)
VAE+HVI (T=8)
VAE+HF (T=1)
VAE+HF (T=10)
VAE+LinIAF
VAE+ccLinIAF (K=5)
Table 1: Comparison of the lower bound of the marginal log-likelihood measured in nats on the MNIST test set. Lower value is better. Some results are taken from [RM:15], [DKB:14], [SKW:15].

Method
VAE
VAE+HF (T=1)
VAE+HF (T=10)
VAE+HF (T=20)
VAE+LinIAF
VAE+ccLinIAF (K=5)
Table 2: Comparison of the lower bound of the marginal log-likelihood measured in nats on the Histopathology test set. Higher value is better. The experiment was repeated multiple times. The results for VAE+HF are taken from [TW:16].

Discussion

The results presented in Tables 1 and 2 for the MNIST and Histopathology data, respectively, reveal that the proposed flow outperforms the other volume-preserving flows and performs similarly to the linear normalizing flow with a large number of transformations. The advantage of using several matrices instead of one is especially apparent on the Histopathology data, where the VAE+ccLinIAF performed noticeably better than the VAE+LinIAF. Hence, the convex combination of lower-triangular matrices with ones on the diagonal seems to allow the model to better reflect the data at a small additional computational cost.

Implementation

The code for the proposed approach can be found at: https://github.com/jmtomczak/vae_vpflows.

Acknowledgments

The research conducted by Jakub M. Tomczak was funded by the European Commission within the Marie Skłodowska-Curie Individual Fellowship (Grant No. 702666, “Deep learning and Bayesian inference for medical imaging”).

References