Generative Model with Dynamic Linear Flow

05/08/2019 ∙ by Huadong Liao, et al.

Flow-based generative models are a family of exact log-likelihood models with tractable sampling and latent-variable inference, which makes them conceptually attractive for modeling complex distributions. However, flow-based models lag behind state-of-the-art autoregressive models in density estimation. Autoregressive models, which also belong to the family of likelihood-based methods, in turn suffer from limited parallelizability. In this paper, we propose Dynamic Linear Flow (DLF), a new family of invertible transformations with a partially autoregressive structure. Our method benefits from both the efficient computation of flow-based methods and the high density estimation performance of autoregressive methods. We demonstrate that the proposed DLF yields state-of-the-art performance on ImageNet 32×32 and 64×64 among all flow-based methods, and is competitive with the best autoregressive model. Additionally, our model converges 10 times faster than Glow (Kingma and Dhariwal, 2018). The code is available at https://github.com/naturomics/DLF.


1 Introduction

The increasing amount of data, paired with exponential progress in hardware capabilities and relentless efforts toward better methods, has tremendously advanced deep learning in fields such as image classification (Krizhevsky et al., 2012; He et al., 2016; Huang et al., 2017) and machine translation (Vaswani et al., 2017; Devlin et al., 2018; Radford et al., 2019). However, most applications have been limited to settings where large amounts of supervision are available, as labeling data remains a labor-intensive and cost-inefficient exercise. Meanwhile, unlabeled data is generally easier to acquire, but its direct utilization remains a central challenge. Deep generative models, an emerging and popular branch of machine learning, aim to address these challenges by modeling the high-dimensional distribution of data without supervision.

In recent years, the field of generative modeling has advanced significantly, especially in the development and application of generative adversarial networks (GANs) (Goodfellow et al., 2014) and likelihood-based methods (Graves, 2013; Kingma and Welling, 2013; Dinh et al., 2014; Oord et al., 2016b). Likelihood-based generative methods can be further divided into three categories: variational autoencoders (Kingma and Welling, 2013), autoregressive models (Oord et al., 2016b; Salimans et al., 2017; Chen et al., 2017; Menick and Kalchbrenner, 2018), and flow-based generative methods (Dinh et al., 2014, 2016; Kingma and Dhariwal, 2018). Variational autoencoders offer promising parallelizability of training and synthesis; however, they can be technically challenging to optimize because they maximize only a lower bound on the marginal likelihood of the data. Autoregressive models and flow-based generative models both estimate the exact likelihood of the data. However, autoregressive models suffer from limited parallelizability of synthesis or training, and a lot of effort has been made to overcome this drawback (Oord et al., 2017). In contrast, flow-based generative models are efficient for training and synthesis, but generally yield compromised performance on density estimation benchmarks in comparison with autoregressive models.

In this paper, we focus on exact likelihood-based methods. In Section 2, we first review autoregressive methods and flow-based methods. Inspired by their common properties, in Section 3 we then propose a new family of invertible transformations with a partially autoregressive structure, and we show that autoregressive models and flow-based generative models are two extreme forms of our proposed method. In Section 5, our empirical results show that the proposed method achieves state-of-the-art density estimation performance on the ImageNet dataset among flow-based methods and converges significantly faster than Glow (Kingma and Dhariwal, 2018). Though our method has a partially autoregressive structure, we show that synthesizing a high-resolution image (i.e., a 256×256 image) on modern hardware takes less than one second, which is comparable to most flow-based methods.

2 Background

2.1 Flow-based Models

In most flow-based models (Kingma and Dhariwal, 2018; Dinh et al., 2014, 2016), the high-dimensional random variable $x$ with complex and unknown true distribution $p^*(x)$ is generally modeled through a latent variable $z$: $z = f_\theta(x)$, where $f_\theta$ can be any bijective function with parameters $\theta$ and is typically composed of a series of transformations $f_\theta = f_K \circ \cdots \circ f_2 \circ f_1$. The latent variable $z$ has a tractable density $p_\theta(z)$, such as a standard Gaussian distribution. With the change of variables formula, we then obtain the marginal log-likelihood of a datapoint $x$ and take it as the optimization objective of learning $\theta$:

(1)   $\log p_\theta(x) = \log p_\theta(z) + \sum_{i=1}^{K} \log \left| \det \left( \partial h_i / \partial h_{i-1} \right) \right|$

where $h_i = f_i(h_{i-1})$ is the hidden output of the sequence of transformations, with $h_0 = x$ and $h_K = z$.

However, the above formula requires computing the Jacobian determinant of each intermediate transformation, which is generally intractable and therefore a limitation of the method. In practice, to overcome this issue, the transformation function is designed so that its Jacobian matrix is triangular or diagonal; the log-determinant is then simply the sum of the log-diagonal entries:

(2)   $\log \left| \det \left( \partial h_i / \partial h_{i-1} \right) \right| = \sum_{j} \log \left| \mathrm{diag}\!\left( \partial h_i / \partial h_{i-1} \right)_j \right|$
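To make Eqs. (1) and (2) concrete, the following minimal NumPy sketch (not taken from the released code) evaluates the log-likelihood of a datapoint under a single element-wise affine step, where the Jacobian is diagonal and the log-determinant reduces to a sum of logs; the scale and bias values are arbitrary placeholders.

```python
import numpy as np

def standard_normal_logpdf(z):
    # log N(z; 0, I), summed over dimensions: the tractable prior log p(z)
    return np.sum(-0.5 * (z ** 2 + np.log(2 * np.pi)))

def flow_logpx(x, s, t):
    # One affine step z = s * x + t with diagonal Jacobian diag(s):
    # log p(x) = log p(z) + sum_j log|s_j|   (Eqs. 1 and 2 with a single step)
    z = s * x + t
    logdet = np.sum(np.log(np.abs(s)))
    return standard_normal_logpdf(z) + logdet

x = np.array([0.3, -1.2, 0.7])
s = np.array([1.5, 0.8, 2.0])   # placeholder scales (must be nonzero)
t = np.array([0.1, 0.0, -0.4])  # placeholder biases
print(flow_logpx(x, s, t))
```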

In the remainder of this section, we review invertible and tractable transformations from previous studies, categorized into fully autoregressive and non-autoregressive structures. We then discuss their respective advantages and disadvantages in terms of computational parallelizability and density estimation performance.

2.2 Autoregressive and Inverse Autoregressive Transformations

Papamakarios et al. (2017) and Kingma et al. (2016) introduced the autoregressive (AR) transformation and the inverse autoregressive (IAR) transformation, respectively. Both model a similar invertible and tractable transformation from a high-dimensional variable $x$ to $y$:

(3)   $y_i = s_i \, x_i + t_i$

where $x_i$ and $y_i$ are the $i$-th elements of $x$ and $y$, respectively. The difference between AR and IAR is that $s_i$ and $t_i$ are driven by different inputs: $(s_i, t_i) = f(x_{1:i-1})$ in the autoregressive transformation and $(s_i, t_i) = f(y_{1:i-1})$ in the inverse autoregressive transformation. Here $f$ is an arbitrarily complex function, usually a neural network. The vectorized transformation and its inverse for (inverse) autoregressive transformations can be written as:

(4)   $y = s \odot x + t$
(5)   $x = (y - t) / s$

where $\odot$ is the Hadamard (element-wise) product, and the addition, division and subtraction are also element-wise operations.

In previous works, AR and IAR have been successfully applied to image generation (Kingma et al., 2016) and speech synthesis (Oord et al., 2016a). However, as $s_i$ and $t_i$ depend on previous elements of the input $x$ or the output $y$, these transformations are inherently sequential in at least one pass, training (IAR) or synthesis (AR), making them difficult to parallelize on modern hardware (Oord et al., 2017).
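As an illustration of Eqs. (3)–(5), here is a minimal NumPy sketch with a toy stand-in for the network $f$ (any function of the prefix works); it shows why the AR inverse, and symmetrically the IAR forward pass, must be computed element by element.

```python
import numpy as np

def f(prefix):
    # Toy stand-in for the autoregressive network: maps the prefix
    # x_{1:i-1} (or y_{1:i-1}) to a scale and bias for element i.
    s = 1.0 + 0.1 * np.sum(prefix)      # keeps the scale away from zero here
    t = 0.5 * np.sum(prefix)
    return s, t

def ar_forward(x):
    # AR transformation (Eqs. 3-4): (s_i, t_i) depend on x_{1:i-1}, so given x
    # all pairs could be computed in parallel; the loop is only for clarity.
    y = np.empty_like(x)
    for i in range(len(x)):
        s, t = f(x[:i])
        y[i] = s * x[i] + t
    return y

def ar_inverse(y):
    # Inverse (Eq. 5): recovering x_i needs x_{1:i-1}, hence inherently sequential.
    x = np.empty_like(y)
    for i in range(len(y)):
        s, t = f(x[:i])
        x[i] = (y[i] - t) / s
    return x

x = np.array([0.2, -0.7, 1.1, 0.4])
assert np.allclose(ar_inverse(ar_forward(x)), x)
```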

2.3 Non-autoregressive Transformations

Non-autoregressive transformations are designed to be parallelizable in both forward and backward pass, with tractable Jacobian determinants and inverses. Here, we describe a number of them:

Actnorm (Kingma and Dhariwal, 2018) is a non-autoregressive transformation proposed to alleviate training problems encountered in deep models. It is in fact a special case of the (inverse) autoregressive transformation in which the scale $s$ and bias $t$ are treated as regular trainable parameters, i.e., independent of the input data:

(6)   $y = s \odot x + t$

It is worth mentioning that $s$ and $t$ are shared across the spatial dimensions of $x$ when the input is a 2D image, as described in Kingma and Dhariwal (2018).
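For comparison, a minimal sketch of actnorm (Eq. 6) on an $h \times w \times c$ tensor with per-channel scale and bias shared over spatial positions; Glow's data-dependent initialization is omitted here.

```python
import numpy as np

def actnorm_forward(x, s, t):
    # x: (h, w, c); s, t: (c,) trainable, shared over spatial positions.
    y = x * s + t
    h, w, _ = x.shape
    logdet = h * w * np.sum(np.log(np.abs(s)))  # diagonal Jacobian
    return y, logdet

def actnorm_inverse(y, s, t):
    return (y - t) / s

x = np.random.randn(4, 4, 3)
s, t = np.ones(3) * 1.3, np.zeros(3)
y, ld = actnorm_forward(x, s, t)
assert np.allclose(actnorm_inverse(y, s, t), x)
```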

Affine/additive coupling layers (Dinh et al., 2014, 2016) split the high-dimensional input $x$ into two parts $(x_1, x_2)$ and apply a different transformation to each part to obtain the output $y = (y_1, y_2)$. The first part is transformed with the identity function and thus remains unchanged, while the second part is mapped to a new distribution with an affine transformation:

(7)   $y_1 = x_1, \qquad y_2 = s \odot x_2 + t$

with $(s, t) = f(x_1)$. As with AR and IAR, $f$ is an arbitrarily complex function, typically a neural network. Note that this transformation can also be rewritten in the same form as the (inverse) autoregressive transformations and actnorm: $y = s' \odot x + t'$, where $s' = (\mathbf{1}, s)$ and $t' = (\mathbf{0}, t)$.
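A minimal NumPy sketch of the affine coupling layer (Eq. 7), again with a toy stand-in for the coupling network $f$; note that the inverse never needs to invert $f$ itself, only to re-evaluate it on the unchanged first half.

```python
import numpy as np

def nn(x1, out_dim):
    # Toy stand-in for the coupling network f; any function of x1 works
    # because the inverse only re-evaluates it, never inverts it.
    s = 1.0 + 0.1 * np.tanh(np.sum(x1)) * np.ones(out_dim)
    t = 0.2 * np.sum(x1) * np.ones(out_dim)
    return s, t

def coupling_forward(x):
    # Eq. (7): the first half passes through unchanged, the second half is
    # transformed with (s, t) computed from the first half.
    d = len(x) // 2
    x1, x2 = x[:d], x[d:]
    s, t = nn(x1, len(x2))
    y = np.concatenate([x1, s * x2 + t])
    logdet = np.sum(np.log(np.abs(s)))
    return y, logdet

def coupling_inverse(y):
    d = len(y) // 2
    y1, y2 = y[:d], y[d:]
    s, t = nn(y1, len(y2))          # y1 == x1, so (s, t) are recoverable
    return np.concatenate([y1, (y2 - t) / s])

x = np.random.randn(6)
y, _ = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)
```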

These non-autoregressive transformations have the advantage of parallelization and are therefore usually faster than transformations with an autoregressive structure. However, previous results have shown that they generally perform considerably worse on density estimation benchmarks (Ho et al., 2019).

3 Method

In this section, we introduce a new family of transformations that combines the computational efficiency of non-autoregressive transformations with the high density estimation performance of (inverse) autoregressive transformations.

There are two key observations about the methods discussed in Section 2. First, all of them share a consistent linear form:

(8)   $y = \mathrm{diag}(s)\, x + t = s \odot x + t$

Here $\mathrm{diag}(s)$ is a diagonal matrix with $s$ as its diagonal elements, so the transformation is invertible and its inverse is simply Eq. (5). The invertibility makes it possible to use the same transformation as a building block of both the encoder and the decoder in generative models.

The second key observation is how the weights $s$ and $t$ of these linear transformations depend on the data: they are chosen so that the determinant of the Jacobian matrix remains computationally efficient or tractable, usually by making the Jacobian triangular (AR, IAR and affine coupling layers) or diagonal (actnorm). Therefore, the log-determinant is simply the sum of the logarithms of the diagonal terms, $\sum_j \log |s_j|$. The methods differ in how they model the relationship between the weights and the data under this "easy determinant of the Jacobian" constraint.

Figure 1: Dynamic Linear Flow with a multi-scale architecture. At each scale, the input is passed through a squeezing operation to trade spatial size for number of channels, followed by flow steps of invertible 1×1 convolution and dynamic linear transformation. The output is split into two halves, one fed to the next series of flow steps and the other kept as part of the final latent variable. The condition is optional and guides the dynamic linear transformation as prior knowledge.

3.1 Dynamic Linear Transformation with Triangular Jacobian

Let us now consider a high-dimensional variable $x \in \mathbb{R}^D$. Splitting it into $K$ parts along its dimensions, we obtain $x = (x_1, x_2, \dots, x_K)$ with $x_i \in \mathbb{R}^{D/K}$. Then we introduce a tractable and bijective function as follows:

(9)   $y_i = s_i \odot h(x_i) + t_i$

with $(s_i, t_i) = f_{\theta_i}(x_{i-1})$. The variables $s_i$ and $t_i$ have the same dimension as $x_i$ and are modeled by an arbitrarily complex function $f_{\theta_i}$ (usually a neural network) that takes the previous part of the data as input. $h$ is tractable and bijective with inverse $h^{-1}$; a simple choice for $h$ is the identity function $h(x) = x$. If $K = 2$ and the first part is left unchanged then, combined with $h(x) = x$, our method reduces to the affine coupling layer, see Eq. (7). For the purpose of consistency, in this paper we choose $h(x) = x$ and also transform the first part: $s_1$ and $t_1$ are modeled by $f_{\theta_1}(x_0)$ with an input $x_0$ that is any constant, e.g. $x_0 = \mathbf{0}$, and are therefore effectively trainable parameters. Therefore, Eq. (9) and its inverse can be rewritten as:

(10)   $y_i = s_i \odot x_i + t_i$
(11)   $x_i = (y_i - t_i) / s_i$

where $(s_i, t_i) = f_{\theta_i}(x_{i-1})$ and the initial condition $x_0$ is a constant.

The Jacobian of the above transformation is triangular with $s = (s_1, \dots, s_K)$ as its diagonal elements and thus has a simple log-determinant term:

(12)   $\log \left| \det \left( \partial y / \partial x \right) \right| = \sum_{i=1}^{K} \sum_{j} \log |s_{i,j}|$

Note that our proposed transformation can also be rewritten in the following linear form:

(13)   $y = s \odot x + t$

where $s = (s_1, \dots, s_K)$ and $t = (t_1, \dots, t_K)$ are data-dependent; we therefore call our method dynamic linear transformation. Since $s$ and $t$ change for different inputs, the dynamic linear transformation can be seen as an extreme form of a piecewise linear function, with each input point learning its own weights for the affine transformation.

Figure 2: Negative log-likelihood on the CIFAR-10 test set during training. Increasing $K$ leads to no performance gain but slower convergence.

In applications, an important concern for the dynamic linear transformation is the recursive dependency in the reverse pass, introduced by the fact that each pair $(s_i, t_i)$ depends on the previous partition $x_{i-1}$. We argue that this issue can be addressed for two reasons: (1) the recursive dependency is at the level of partitions and reaches back only one step, so it is computationally more efficient than an element-level autoregressive structure, which depends on all earlier elements; and (2) the smaller $K$ is, the shorter the dependency chain. In Section 5, we show that increasing $K$ is not helpful and results in worse NLL scores (Fig. 2), and our state-of-the-art results are achieved with $K = 2$, with a computational speed similar to non-autoregressive methods.
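The following minimal NumPy sketch of Eqs. (10)–(12) uses a toy stand-in for the partition-wise networks $f_{\theta_i}$; it illustrates that the inverse recursion runs over the $K$ partitions rather than over individual elements.

```python
import numpy as np

def f_theta(i, prev):
    # Toy stand-in for the partition-wise network f_theta_i; for i = 0,
    # `prev` is the constant initial condition, so (s_1, t_1) are
    # effectively trainable parameters.
    s = 1.0 + 0.1 * np.tanh(prev) + 0.05 * i
    t = 0.3 * prev
    return s, t

def dlf_forward(x_parts):
    # Eq. (10): y_i = s_i * x_i + t_i with (s_i, t_i) = f_theta_i(x_{i-1}).
    y_parts, logdet = [], 0.0
    prev = np.zeros_like(x_parts[0])          # constant x_0
    for i, xi in enumerate(x_parts):
        s, t = f_theta(i, prev)
        y_parts.append(s * xi + t)
        logdet += np.sum(np.log(np.abs(s)))   # Eq. (12)
        prev = xi
    return y_parts, logdet

def dlf_inverse(y_parts):
    # Eq. (11): the recursion runs over the K partitions only; each step
    # depends just on the previously recovered partition x_{i-1}.
    x_parts = []
    prev = np.zeros_like(y_parts[0])
    for i, yi in enumerate(y_parts):
        s, t = f_theta(i, prev)
        xi = (yi - t) / s
        x_parts.append(xi)
        prev = xi
    return x_parts

K = 2
x = [np.random.randn(4) for _ in range(K)]
y, ld = dlf_forward(x)
assert all(np.allclose(a, b) for a, b in zip(dlf_inverse(y), x))
```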

Similar to the AR and IAR transformations, we also introduce a variant of the dynamic linear transformation. Letting $(s_i, t_i)$ take the transformed output $y_{i-1}$ as input instead of $x_{i-1}$, we have:

(14)   $y_i = s_i \odot x_i + t_i$
(15)   $x_i = (y_i - t_i) / s_i$

with $(s_i, t_i) = f_{\theta_i}(y_{i-1})$ and the initial condition $y_0$ a constant. We call this variant the inverse dynamic linear transformation; it has the same log-determinant as Eq. (12).
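A corresponding sketch of the inverse variant (Eqs. 14–15), under the same toy $f_{\theta_i}$ as above: the forward pass becomes sequential over partitions, while the inverse can be computed for all partitions in parallel because every $(s_i, t_i)$ depends only on the known output.

```python
import numpy as np

def f_theta(i, prev):
    # Same toy stand-in as above for the partition-wise network.
    return 1.0 + 0.1 * np.tanh(prev) + 0.05 * i, 0.3 * prev

def idlf_forward(x_parts):
    # Eq. (14): (s_i, t_i) = f_theta_i(y_{i-1}); sequential over partitions.
    y_parts, prev = [], np.zeros_like(x_parts[0])
    for i, xi in enumerate(x_parts):
        s, t = f_theta(i, prev)
        yi = s * xi + t
        y_parts.append(yi)
        prev = yi
    return y_parts

def idlf_inverse(y_parts):
    # Eq. (15): every (s_i, t_i) depends only on known y partitions, so all
    # partitions can be inverted in parallel.
    prevs = [np.zeros_like(y_parts[0])] + list(y_parts[:-1])
    return [(yi - f_theta(i, p)[1]) / f_theta(i, p)[0]
            for i, (yi, p) in enumerate(zip(y_parts, prevs))]

x = [np.random.randn(4) for _ in range(2)]
assert all(np.allclose(a, b) for a, b in zip(idlf_inverse(idlf_forward(x)), x))
```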

3.2 Conditional Dynamic Linear Transformation

In many sample generation scenarios, it is a common requirement to control the generating process with prior knowledge, e.g. generating an image given class label information. We introduce the conditional dynamic linear transformation to meet this requirement. Given a condition $c$, the conditional dynamic linear transformation can be described as:

(16)   $y_i = s_i \odot x_i + t_i, \qquad (s_i, t_i) = f_{\theta_i}(x_{i-1}, c)$

The transformation parameters $s_i$ and $t_i$ take $c$ as an additional input. Accordingly, when inverting the transformation, we can recompute $s_i$ and $t_i$ from the same $c$ and the transformed output.

For the inverse dynamic linear transformation variant, the conditional form is

(17)   $y_i = s_i \odot x_i + t_i, \qquad (s_i, t_i) = f_{\theta_i}(y_{i-1}, c)$
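As a sketch of the conditional form (Eqs. 16–17), the toy network below simply takes the condition $c$ as an extra input; since the same $c$ is available when inverting, $(s_i, t_i)$ can be recomputed exactly. The network and label encoding are illustrative, not the paper's implementation.

```python
import numpy as np

def f_cond(prev, c):
    # Toy conditional network: the condition c (e.g. a one-hot label)
    # enters as an extra input alongside the previous partition.
    h = np.sum(prev) + np.dot(np.arange(len(c), dtype=float), c)
    return (1.0 + 0.1 * np.tanh(h)) * np.ones_like(prev), 0.3 * h * np.ones_like(prev)

def cond_dlf_forward(x_parts, c):
    y, prev = [], np.zeros_like(x_parts[0])
    for xi in x_parts:
        s, t = f_cond(prev, c)
        y.append(s * xi + t)
        prev = xi
    return y

def cond_dlf_inverse(y_parts, c):
    # The same condition c is supplied again, so (s, t) are recomputed
    # exactly and the transformation stays invertible.
    x, prev = [], np.zeros_like(y_parts[0])
    for yi in y_parts:
        s, t = f_cond(prev, c)
        xi = (yi - t) / s
        x.append(xi)
        prev = xi
    return x

c = np.eye(10)[3]                              # hypothetical class label "3"
x = [np.random.randn(4) for _ in range(2)]
assert all(np.allclose(a, b)
           for a, b in zip(cond_dlf_inverse(cond_dlf_forward(x, c), c), x))
```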

4 Dynamic Linear Flow

In high-dimensional problems (e.g. generating images of faces), a single layer of dynamic linear transformation is fairly limited. To increase the capacity of the model, in this section we describe Dynamic Linear Flow (DLF), a flow-based model that uses the (inverse) dynamic linear transformation as a building block. Following the previous works NICE (Dinh et al., 2014), RealNVP (Dinh et al., 2016) and Glow, DLF stacks blocks consisting of an invertible 1×1 convolution and a (inverse) dynamic linear transformation, combined in a multi-scale architecture (Fig. 1). Since the dynamic linear transformation and its inverse variant are similar, Fig. 1 only illustrates the structure of DLF with the dynamic linear transformation; the corresponding variant is obtained by replacing that layer with the inverse dynamic linear transformation. A comparison of their density estimation performance is included in Section 5.

4.1 Multi-scale Architecture

For 2D image inputs, following RealNVP and Glow, we use a squeezing operation that reduces each spatial dimension by a factor of 2 and moves the displaced values into channels, transforming an $h \times w \times c$ input into an $h/2 \times w/2 \times 4c$ tensor. After the squeezing operation, $D$ steps of flow, each consisting of an invertible 1×1 convolution and a dynamic linear transformation, are applied in sequence. Half of the dimensions of the output are then factored out as part of the final latent variable, while the other half is passed to the next scale; the factored-out halves from all scales are concatenated to obtain the final transformed output. The above operations are applied iteratively $L$ times.
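A minimal NumPy sketch of the squeezing operation described above (trading each 2×2 spatial block for 4 channels); the exact channel ordering may differ from the released implementation.

```python
import numpy as np

def squeeze(x):
    # x: (h, w, c) -> (h//2, w//2, 4c), trading spatial size for channels.
    h, w, c = x.shape
    x = x.reshape(h // 2, 2, w // 2, 2, c)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(h // 2, w // 2, 4 * c)

def unsqueeze(y):
    # Exact inverse of squeeze.
    h, w, c4 = y.shape
    c = c4 // 4
    y = y.reshape(h, w, 2, 2, c).transpose(0, 2, 1, 3, 4)
    return y.reshape(h * 2, w * 2, c)

x = np.random.randn(8, 8, 3)
assert np.allclose(unsqueeze(squeeze(x)), x)
```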

4.2 Invertible 1×1 Convolution

To ensure that each dimension can influence every other dimension during the transformation, we apply an invertible 1×1 convolution layer (Kingma and Dhariwal, 2018) before each layer of dynamic linear transformation. The invertible 1×1 convolution is essentially a standard convolution with an equal number of input and output channels:

(18)   $\forall i, j: \quad y_{i,j} = W x_{i,j}$

where $W$ is the kernel with shape $c \times c$, and $i$ and $j$ index the spatial dimensions of the 2D variables $x$ and $y$.
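A minimal NumPy sketch of the invertible 1×1 convolution (Eq. 18): every spatial position is multiplied by the same $c \times c$ matrix $W$, so the log-determinant is $h \cdot w \cdot \log|\det W|$, as in Glow.

```python
import numpy as np

def invertible_1x1_conv(x, W):
    # Eq. (18): each spatial position (i, j) is multiplied by the same
    # c x c matrix W, i.e. a 1x1 convolution over channels.
    h, w, c = x.shape
    y = x.reshape(-1, c) @ W.T
    logdet = h * w * np.log(np.abs(np.linalg.det(W)))
    return y.reshape(h, w, c), logdet

def invertible_1x1_conv_inverse(y, W):
    h, w, c = y.shape
    return (y.reshape(-1, c) @ np.linalg.inv(W).T).reshape(h, w, c)

c = 4
W = np.linalg.qr(np.random.randn(c, c))[0]   # random rotation: invertible
x = np.random.randn(8, 8, c)
y, ld = invertible_1x1_conv(x, W)
assert np.allclose(invertible_1x1_conv_inverse(y, W), x)
```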

Dataset               Partitions (K)   Channels (C)   Levels (L)   Parameters
CIFAR-10              2                512            3            44.6M
CIFAR-10              4                308            3            45.5M
CIFAR-10              6                246            3            45.7M
MNIST                 2                128            2            1.8M
ImageNet 32×32        2                512            3            44.6M
ImageNet 64×64        2                384            4            50.7M
CelebA HQ 256×256     2                128            6            57.4M

Table 1: Hyperparameters and number of trainable parameters for our experiments in Section 5.

5 Experiments

We evaluate the proposed DLF model on standard image modeling benchmarks, including CIFAR-10 (Krizhevsky and Hinton, 2009) and ImageNet (Russakovsky et al., 2015). We first investigate the impact of the number of partitions $K$ and compare the variants of the dynamic linear transformation. With the optimal hyperparameters, we then compare log-likelihoods with previous generative models of the autoregressive and non-autoregressive families. Lastly, we assess the conditional DLF with class label information and the qualitative behavior of DLF on high-resolution datasets.

In all our experiments, we follow an implementation of the neural network $f_{\theta_i}$ similar to Glow, using three convolutional layers with a different activation function in the last layer. More specifically, the first two convolutional layers have $C$ channels with ReLU activation functions and 3×3 and 1×1 filters, respectively. To control the number of model parameters, $C$ varies with the number of partitions and the dataset (Table 1). The last convolution is 3×3 and has twice as many channels as the partition $x_i$; its output is split equally into two parts along the channel dimension to obtain $s_i$ and $t_i$. For training stability, the final scale $s_i$ is rescaled by learnable scale variables before being applied. For the conditional DLF, we introduce the condition $c$ in the last layer through a weight matrix applied to the conditioning data; in cases where $c$ encodes spatial information, the matrix product is replaced by a convolution operation. The parameters of the neural network are separate for each partition $i$. The depth $D$ is always set to 32. See Table 1 and Appendix A for details of the optimization.
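For concreteness, here is a hypothetical PyTorch-style sketch of the coupling network $f_{\theta_i}$ described above; the zero initialization of the last layer and the sigmoid-based rescaling of the scale are assumptions borrowed from Glow-style implementations, not details confirmed by the paper, and the released code may differ.

```python
import torch
import torch.nn as nn

class CouplingNet(nn.Module):
    # Sketch of the described f_theta_i: two hidden conv layers (3x3 then 1x1,
    # C channels, ReLU) and a final 3x3 conv with twice the partition's
    # channels, whose output is split into scale and bias along channels.
    def __init__(self, in_channels, partition_channels, C=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, C, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(C, C, kernel_size=1), nn.ReLU(),
            nn.Conv2d(C, 2 * partition_channels, kernel_size=3, padding=1),
        )
        # Zero-init the last layer so the flow starts near the identity
        # (a common trick in Glow-style models; assumed here).
        nn.init.zeros_(self.net[-1].weight)
        nn.init.zeros_(self.net[-1].bias)

    def forward(self, x_prev):
        s_raw, t = self.net(x_prev).chunk(2, dim=1)
        s = torch.sigmoid(s_raw + 2.0)   # hypothetical stabilization of the scale
        return s, t

s, t = CouplingNet(6, 6, C=128)(torch.randn(1, 6, 16, 16))
```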

Family               Model                                        CIFAR-10   ImageNet 32×32   ImageNet 64×64
Non-autoregressive   RealNVP (Dinh et al., 2016)                  3.49       4.28             3.98
                     Glow (Kingma and Dhariwal, 2018)             3.35       4.09             3.81
                     Flow++ (Ho et al., 2019)                     3.08       3.86             3.69
                     DLF (ours)                                   3.44       3.85             3.57
Autoregressive       Multiscale PixelCNN (Reed et al., 2017)      -          3.95             3.70
                     PixelRNN (Oord et al., 2016b)                3.00       3.86             3.63
                     Gated PixelCNN (van den Oord et al., 2016)   3.03       3.83             3.57
                     PixelSNAIL (Chen et al., 2017)               2.85       3.80             3.52
                     SPN (Menick and Kalchbrenner, 2018)          -          3.79             3.52

Table 2: Comparison of density estimation performance (bits/dim, lower is better). Results are obtained on 8-bit datasets.

5.1 Effect of Partitions and Model Variants

Choosing a large $K$ increases the recursive complexity of the model. Therefore, a small $K$ is preferred provided performance does not degrade. We tested $K = 2$, $4$ and $6$ partitions on CIFAR-10. The number of model parameters was kept approximately equal to 45M (the same size as Glow) by adjusting the number of channels $C$, see Table 1. The results are summarized in Fig. 2. As we can see, increasing $K$ is unnecessary and has a negative effect on model performance, leading to worse NLL scores and slower convergence. We also replaced the layers of dynamic linear transformation with the inverse variant for $K = 2$, which did not produce a significant performance difference. Therefore, we choose $K = 2$ and do not evaluate DLF with the inverse dynamic linear transformation in the following experiments.

Note that for $K = 2$, both the non-inverse and inverse variants start overfitting after 20 epochs. After 50 epochs, the NLL averaged over an epoch of the training set reaches 3.30 and keeps decreasing, while the validation NLL increases from 3.51 to 3.55. As mentioned in Section 3, the dynamic linear transformation is an extreme form of a piecewise linear function, learning the weights of an affine transformation for each input. This indicates that the more powerful the transformation is, the more training data our method requires to cover the distribution of the whole dataset. Therefore, to avoid overfitting, apart from reducing the capacity of the dynamic linear transformation, another approach is to increase the size of the training dataset. We discuss this in greater detail in the following sections.

Figure 3: Comparison of unconditional and conditional DLF on MNIST with class label information. (a) Unconditional samples; (b) class-conditional samples. Temperature 0.7.

5.2 Density Estimation

To compare with previous likelihood-based models, we perform density estimation on the natural image datasets CIFAR-10 and ImageNet. In particular, we use the 32×32 and 64×64 downsampled versions of ImageNet (Oord et al., 2016b). For all datasets, we follow the same preprocessing as in Kingma and Dhariwal (2018).

On CIFAR-10, as discussed earlier, the DLF model with the same size as Glow overfits. A possible reason is the simplicity and small size of CIFAR-10. We tested this assumption by training a model of the same size on the relatively complex ImageNet 32×32 dataset. As shown in Table 2, compared to Glow the improvement is a significant 0.24 bits/dim, and we did not observe overfitting on ImageNet 32×32. This encouraged us to apply transfer learning to CIFAR-10, initializing its parameters with the model trained on ImageNet 32×32. We found this approach helpful, obtaining 3.51 bits/dim without transfer learning and 3.44 bits/dim with transfer learning. On ImageNet 64×64, the DLF model reaches 3.57 bits/dim, while the model is relatively small with 50.7M parameters compared to the 112.3M parameters of Glow on the same dataset.

In summary, the DLF model achieves state-of-the-art density modeling results on ImageNet 32×32 and 64×64 among all non-autoregressive models, and it is comparable to most autoregressive models. It is worth mentioning that all results are obtained within 50 epochs. To our knowledge, this is more than 10 times more efficient than Glow and Flow++ (Ho et al., 2019), which generally require thousands of epochs to converge.

5.3 Conditional DLF

For the conditional DLF, we experimented on MNIST (LeCun et al., 1998) and CIFAR-10 with the class label as prior. The hyperparameters can be found in Table 1 (for CIFAR-10, only $K = 2$ was tested). For the conditional version, during training we represent the class label as a 10-dimensional, one-hot encoded vector $c$ and feed it to each layer of dynamic linear transformation; in the unconditional version, the class label is not given. Once converged, we synthesize samples by randomly drawing latent variables from a standard Gaussian distribution and, for the conditional DLF, giving the one-hot encoded label to all layers of dynamic linear transformation. As shown in Fig. 3, the class-conditional samples (sampled after 150 epochs) are controlled by the corresponding label, and their quality is better than that of the unconditional samples (sampled after 200 epochs). This result indicates that DLF correctly learns to control the distribution with the class label prior. See the appendix for samples from CIFAR-10.

Figure 4: Random samples from ImageNet 64×64 (left, temperature 1.0) and CelebA-HQ 256×256 (right, temperature 0.6), both at 8 bits.

5.4 Samples and Interpolation

We present random samples generated from the trained DLF model on ImageNet 64×64 and CelebA-HQ 256×256 (Karras et al., 2017) in Fig. 4, both at 8 bits. For the CelebA-HQ 256×256 dataset, our model has 57.4M parameters, a fraction of Glow's, and is trained for only 400 epochs. Note that our model has not fully converged on CelebA-HQ 256×256 due to limited computational resources.

Figure 5: Linear interpolation in latent space between two real images. The images were never seen by the model during training.

In Fig. 5, we take pairs of real images from the CelebA-HQ 256×256 test set, encode them to obtain their latent representations, and linearly interpolate between the latents to decode samples. As we can see, the image manifold changes smoothly.

During sampling, generating a 256×256 image at batch size 1 takes about 315 ms on a single 1080 Ti GPU and 1078 ms on a single i7-6700K CPU. We believe this sampling speed can be further improved by using the inverse dynamic linear transformation, as it has no recursive structure in the reverse computation.

6 Conclusion

We propose a new family of invertible and tractable transformations, coined the dynamic linear transformation. Building the DLF model from blocks of dynamic linear transformations, we achieve state-of-the-art performance in terms of log-likelihood on the ImageNet 32×32 and 64×64 benchmarks. We also show that our flow-based model can efficiently synthesize high-resolution images.

Flow-based methods optimize the exact log-likelihood directly, which makes them stable and easy to train. With the development of more powerful invertible transformations, we believe flow-based methods will show potential comparable to GANs and give rise to a variety of applications.

References

Appendix A Optimization details

We use the Adam optimizer (Kingma and Ba, 2014) with the default $\beta_1$ and $\beta_2$. The batch size is 256 for MNIST, 32 for all experiments on CIFAR-10 and ImageNet 32×32, 24 for ImageNet 64×64, and 8 for CelebA-HQ 256×256. In practice, the weights of the invertible 1×1 convolution can become non-invertible and interrupt training (especially on the CelebA dataset). We found this is caused by the weights of the convolution growing and eventually exploding during training. Therefore, for the CelebA dataset, we apply L2 regularization to the weights of the invertible 1×1 convolution. During sampling, we use the method proposed in Parmar et al. (2018) to reduce the temperature, which often results in higher-quality images.

Appendix B Extra Samples

We present extra samples in Figs. 6 and 7.

Figure 6: ImageNet 64×64 (left) and 32×32 (right) samples with temperature 1.0.
Figure 7: Unconditional (left) and class-conditional (right) samples from CIFAR-10 with temperature 1.0.