Multimodal Generative Models for Scalable Weakly-Supervised Learning

02/14/2018 ∙ by Mike Wu, et al. ∙ Stanford University

Multiple modalities often co-occur when describing natural phenomena. Learning a joint representation of these modalities should yield deeper and more useful representations. Previous work has proposed generative models to handle multi-modal input. However, these models either do not learn a joint distribution or require complex additional computations to handle missing data. Here, we introduce a multimodal variational autoencoder that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multi-modal inference problem. Notably, our model shares parameters to efficiently learn under any combination of missing modalities, thereby enabling weakly-supervised learning. We apply our method to four datasets and show that we match state-of-the-art performance using many fewer parameters. In each case our approach yields strong weakly-supervised results. We then consider a case study of learning image transformations---edge detection, colorization, facial landmark segmentation, etc.---as a set of modalities. We find appealing results across this range of tasks.


1 Introduction

Learning from diverse modalities has the potential to yield more generalizable representations. For instance, the visual appearance and tactile impression of an object converge on a more invariant abstract characterization yildirim2014perception. Similarly, an image and a natural language caption can capture complementary but converging information about a scene vinyals2015show; xu2015show. While fully-supervised deep learning approaches can learn to bridge modalities, generative approaches promise to capture the joint distribution across modalities and flexibly support missing data. Indeed, multimodal data is expensive and sparse, leading to a weakly supervised setting in which only a small set of examples has all modalities observed, while a larger dataset is available with one (or a subset of) modalities.

We propose a novel multimodal variational autoencoder (MVAE) to learn a joint distribution under weak supervision. The VAE kingma2013auto jointly trains a generative model, from latent variables to observations, with an inference network from observations to latents. Moving to multiple modalities and missing data, we would naively need an inference network for each combination of modalities, resulting in an exponential explosion in the number of trainable parameters. Assuming conditional independence among the modalities, we show that the correct inference network is a product of experts hinton2006training, a structure which reduces the number of required inference networks to one per modality. While the individual inference networks can be trained from single-modality data, the generative model requires joint observations. Thus we propose a sub-sampled training paradigm in which fully-observed examples are treated as both fully and partially observed (for each gradient update). Altogether, this provides a novel and useful solution to the multi-modal inference problem.

We report experiments to measure the quality of the MVAE, comparing with previous models. We train on MNIST lecun1998gradient, binarized MNIST larochelle2011neural, MultiMNIST eslami2016attend; sabour2017dynamic, FashionMNIST xiao2017fashion, and CelebA liu2015faceattributes. Several of these datasets have complex modalities—character sequences, RGB images—requiring large inference networks with RNNs and CNNs. We show that the MVAE is able to support these heavy encoders, matching state-of-the-art performance.

We then apply the MVAE to problems with more than two modalities. First, we revisit CelebA, this time fitting the model with each of the 18 attributes as an individual modality. Doing so, we find better performance from sharing of statistical strength. We further explore this question by choosing a handful of image transformations commonly studied in computer vision—colorization, edge detection, segmentation, etc.—and synthesizing a dataset by applying them to CelebA. We show that the MVAE can jointly learn these transformations by modeling them as modalities.

Finally, we investigate how the MVAE performs under incomplete supervision by reducing the number of multi-modal examples. We find that the MVAE is able to capture a good joint representation when only a small percentage of examples are multi-modal. To show real world applicability, we then investigate weak supervision on machine translation where each language is a modality.

2 Methods

A variational autoencoder (VAE) kingma2013auto is a latent variable generative model of the form $p_\theta(x, z) = p(z)\,p_\theta(x|z)$, where $p(z)$ is a prior, usually a spherical Gaussian. The decoder, $p_\theta(x|z)$, consists of a deep neural net with parameters $\theta$, composed with a simple likelihood (e.g. Bernoulli or Gaussian). The goal of training is to maximize the marginal likelihood of the data (the "evidence"); however, since this is intractable, the evidence lower bound (ELBO) is optimized instead. The ELBO is defined via an inference network, $q_\phi(z|x)$, which serves as a tractable importance distribution:

$\text{ELBO}(x) \triangleq \mathbb{E}_{q_\phi(z|x)}\big[\lambda \log p_\theta(x|z)\big] - \beta\,\mathrm{KL}\big[q_\phi(z|x)\,\|\,p(z)\big] \qquad (1)$

where $\mathrm{KL}[q\,\|\,p]$ is the Kullback-Leibler divergence between distributions $q$ and $p$; $\beta$ higgins2016beta and $\lambda$ are weights balancing the terms in the ELBO. In practice, $\lambda = 1$ and $\beta$ is slowly annealed to 1 bowman2015generating to form a valid lower bound on the evidence. The ELBO is usually optimized (as we will do here) via stochastic gradient descent, using the reparameterization trick to estimate the gradient kingma2013auto.
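To make the weighted objective concrete, a minimal PyTorch-style sketch of Eqn. 1 is shown below. This is our illustration, not the released implementation; an `encoder` that returns the Gaussian parameters of $q_\phi(z|x)$ and a `decoder.log_prob` that returns $\log p_\theta(x|z)$ are assumed interfaces.

    import torch

    def elbo(x, encoder, decoder, lam=1.0, beta=1.0):
        # Weighted single-modality ELBO: E_q[lam * log p(x|z)] - beta * KL(q(z|x) || p(z)).
        mu, logvar = encoder(x)                       # parameters of the Gaussian q(z|x)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterization trick
        recon_logp = decoder.log_prob(x, z)           # log p(x|z), summed over data dimensions
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        return lam * recon_logp - beta * kl           # maximize this (minimize its negative)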

Figure 1: (a) Graphical model of the MVAE. Gray circles represent observed variables. (b) MVAE architecture with $N$ modalities. $q(z|x_i)$ denotes the $i$-th inference network; $\mu_i, \sigma_i$ denote the $i$-th variational parameters; $\mu_0, \sigma_0$ denote the prior parameters. The product-of-experts (PoE) combines all variational parameters in a principled and efficient manner. (c) If a modality is missing during training, we drop the respective inference network. Thus, the parameters of each $q(z|x_i)$ are shared across different combinations of missing inputs.

In the multimodal setting we assume the $N$ modalities, $x_1, \ldots, x_N$, are conditionally independent given the common latent variable, $z$ (see Fig. 1a). That is, we assume a generative model of the form $p_\theta(x_1, \ldots, x_N, z) = p(z)\,p_\theta(x_1|z)\cdots p_\theta(x_N|z)$. With this factorization, we can ignore unobserved modalities when evaluating the marginal likelihood. If we write a data point as the collection of modalities present, $X = \{x_i : \text{modality } i \text{ observed}\}$, then the ELBO becomes:

$\text{ELBO}(X) \triangleq \mathbb{E}_{q_\phi(z|X)}\Big[\sum_{x_i \in X} \lambda_i \log p_\theta(x_i|z)\Big] - \beta\,\mathrm{KL}\big[q_\phi(z|X)\,\|\,p(z)\big] \qquad (2)$

2.1 Approximating The Joint Posterior

The first obstacle to training the MVAE is specifying the inference networks $q_\phi(z|X)$ for each subset of modalities $X$. Previous work (e.g. suzuki2016joint; vedantam2017generative) has assumed that the relationship between the joint and single-modality inference networks is unpredictable (and therefore separate training is required). However, the optimal inference network would be the true posterior $p(z|X)$. The conditional independence assumptions in the generative model imply a relation among the joint and single-modality posteriors:

$p(z|x_1, \ldots, x_N) \;\propto\; \frac{\prod_{i=1}^{N} p(z|x_i)}{\prod_{i=1}^{N-1} p(z)} \qquad (3)$

That is, the joint posterior is a product of the individual posteriors, with an additional quotient by the prior. If we assume that the true posterior for each individual factor is properly contained in the family of its variational counterpart (without this assumption, the best approximation to a product of factors may not be the product of the best approximations for each factor; but the product of the variational factors is still a tractable family of approximations), i.e. each $p(z|x_i)$ is well approximated by $q(z|x_i)$, then Eqn. 3 suggests that the correct $q(z|x_1, \ldots, x_N)$ is a product and quotient of experts, $\prod_{i} q(z|x_i) \big/ \prod_{i=1}^{N-1} p(z)$, which we call MVAE-Q.

Alternatively, if we approximate each $p(z|x_i)$ with $\tilde{q}(z|x_i)\,p(z)$, where $\tilde{q}(z|x_i)$ is the underlying inference network, we can avoid the quotient term:

$p(z|x_1, \ldots, x_N) \;\approx\; \frac{\prod_{i=1}^{N}\big[\tilde{q}(z|x_i)\,p(z)\big]}{\prod_{i=1}^{N-1} p(z)} \;=\; p(z)\prod_{i=1}^{N}\tilde{q}(z|x_i) \qquad (4)$

In other words, we can use a product of experts (PoE), including a "prior expert" $p(z)$, as the approximating distribution for the joint posterior (Figure 1b). This representation is simpler and, as we describe below, numerically more stable. The derivation is easily extended to any subset of modalities $X$, yielding $q(z|X) \propto p(z)\prod_{x_i \in X}\tilde{q}(z|x_i)$ (Figure 1c). We refer to this version as MVAE.

The product and quotient distributions required above are not in general solvable in closed form. However, when $p(z)$ and the $\tilde{q}(z|x_i)$ are Gaussian there is a simple analytical solution: a product of Gaussian experts is itself Gaussian cao2014generalized with mean $\mu = \big(\sum_i T_i\big)^{-1}\big(\sum_i T_i \mu_i\big)$ and covariance $V = \big(\sum_i T_i\big)^{-1}$, where $\mu_i$, $V_i$ are the parameters of the $i$-th Gaussian expert and $T_i = V_i^{-1}$ is the inverse of the covariance. Similarly, given two Gaussian experts, $p_1(z) = \mathcal{N}(\mu_1, V_1)$ and $p_2(z) = \mathcal{N}(\mu_2, V_2)$, we can show that the quotient (QoE), $p_1(z)/p_2(z)$, is also Gaussian with mean $\mu = (T_1 - T_2)^{-1}(T_1\mu_1 - T_2\mu_2)$ and covariance $V = (T_1 - T_2)^{-1}$, where $T_i = V_i^{-1}$. However, this distribution is well-defined only if $T_1 > T_2$ element-wise—a simple constraint that can be hard to deal with in practice. A full derivation for PoE and QoE can be found in the supplement.

Thus we can compute all multi-modal inference networks required for the MVAE efficiently in terms of the uni-modal components $\tilde{q}(z|x_i)$; the additional quotient needed by the MVAE-Q variant is also easily calculated but requires an added constraint on the variances.
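In code, this combination rule is only a few lines. The sketch below is ours (not the paper's released implementation); it works with diagonal Gaussians parameterized by mean and log-variance, with the prior expert $\mathcal{N}(0, I)$ passed in as one of the expert rows.

    import torch

    def product_of_experts(mu, logvar, eps=1e-8):
        # mu, logvar: (num_experts, batch, latent_dim); include the prior expert N(0, I) as one row.
        var = torch.exp(logvar) + eps
        T = 1.0 / var                                     # precision of each expert
        joint_var = 1.0 / torch.sum(T, dim=0)             # V = (sum_i T_i)^{-1}
        joint_mu = joint_var * torch.sum(mu * T, dim=0)   # mu = V * sum_i T_i mu_i
        return joint_mu, torch.log(joint_var)

To handle a missing modality, one simply omits its row of `mu` and `logvar` before calling the function; the prior-expert row is always kept.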

2.2 Sub-sampled Training Paradigm

On the face of it, we can now train the MVAE by simply optimizing the evidence lower bound given in Eqn. 2. However, a product of Gaussians does not uniquely specify its component Gaussians. Hence, given a complete dataset with no missing modalities, optimizing Eqn. 2 has an unfortunate consequence: we never train the individual inference networks (or small sub-networks) and thus do not know how to use them if presented with missing data at test time. Conversely, if we treat every observation as independent observations of each modality, we can adequately train the uni-modal inference networks $\tilde{q}(z|x_i)$, but will fail to capture the relationship between modalities in the generative model.

We propose instead a simple training scheme that combines these extremes, including ELBO terms for both whole and partial observations. For instance, with $N$ modalities, a complete example $\{x_1, \ldots, x_N\}$ can be split into partial examples: $\{x_1\}$, $\{x_2, x_4\}$, $\{x_3, x_5, x_6\}$, and so on. If we were to train using all subsets, it would require evaluating an ELBO term for each of the $2^N - 1$ non-empty subsets, which is computationally intractable. To reduce the cost, we sub-sample which ELBO terms to optimize for every gradient step. Specifically, we choose (1) the ELBO using the product of all $N$ Gaussians, (2) all $N$ ELBO terms using a single modality, and (3) $k$ ELBO terms using $k$ randomly chosen subsets of modalities, $X_1, \ldots, X_k$. For each minibatch, we thus evaluate a random subset of the ELBO terms; in expectation, we approximate the full objective. The sub-sampled objective can be written as:

$\text{ELBO}(x_1, \ldots, x_N) \;+\; \sum_{i=1}^{N}\text{ELBO}(x_i) \;+\; \sum_{j=1}^{k}\text{ELBO}(X_j) \qquad (5)$

We explore the effect of $k$ in Sec. 5. A pleasant side-effect of this training scheme is that it generalizes to weakly-supervised learning: given an example with missing data, we simply evaluate the ELBO terms of Eqn. 5 over the modalities that are present, ignoring those that are missing.
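For concreteness, one way to realize the sub-sampling of Eqn. 5 is sketched below. The helper `multimodal_elbo`, which would build the PoE posterior over an arbitrary subset of modalities and score the corresponding decoders, is a placeholder of our own, not the authors' API.

    import random

    def sampled_elbo_subsets(observed, k):
        # observed: list of modality names present in this example (possibly a strict subset of all N).
        subsets = [tuple(observed)]                        # (1) the joint term over all observed modalities
        subsets += [(m,) for m in observed]                # (2) one term per single modality
        for _ in range(k):                                 # (3) k randomly chosen subsets
            size = random.randint(1, len(observed))
            subsets.append(tuple(random.sample(observed, size)))
        return subsets

    def training_loss(example, k, multimodal_elbo):
        # Sum the sampled ELBO terms; modalities missing from `example` are simply never referenced.
        return sum(multimodal_elbo(subset) for subset in sampled_elbo_subsets(list(example), k))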

3 Related Work

Given two modalities, $x_1$ and $x_2$, many variants of VAEs kingma2013auto; kingma2014semi have been used to train generative models of the form $p_\theta(x_1|x_2, z)$, including conditional VAEs (CVAE) sohn2015learning and conditional multi-modal autoencoders (CMMA) pandey2017variational. Similar work has explored using hidden features from a VAE trained on images to generate captions, even in the weakly supervised setting pu2016variational. Critically, these models are not bi-directional. We are more interested in models where we can condition interchangeably. For example, BiVCCA wang2016deep trains two VAEs together with interacting inference networks to facilitate two-way reconstruction. However, it does not attempt to directly model the joint distribution, something we find empirically to improve a model's ability to learn the data distribution.

Several recent models have tried to capture the joint distribution explicitly. suzuki2016joint introduced the joint multi-modal VAE (JMVAE), which learns $p(x_1, x_2)$ using a joint inference network, $q(z|x_1, x_2)$. To handle missing data at test time, the JMVAE collectively trains $q(z|x_1, x_2)$ with two other inference networks, $q(z|x_1)$ and $q(z|x_2)$. The authors use an ELBO objective with two additional divergence terms to minimize the distance between the uni-modal and the multi-modal importance distributions. Unfortunately, the JMVAE trains a new inference network for each multi-modal subset, which we have argued in Sec. 2 to be intractable in the general setting.

Most recently, vedantam2017generative introduce another objective for the bi-modal VAE, which they call the triplet ELBO. Like the MVAE, their model's joint inference network combines variational distributions using a product-of-experts rule. Unlike the MVAE, the authors report a two-stage training process: using complete data, fit the joint inference network $q(z|x_1, x_2)$ and the decoders; then, freezing these, fit the uni-modal inference networks $q(z|x_1)$ and $q(z|x_2)$ to handle missing data at test time. Crucially, because training is separated, the model has to fit a new inference network in stage two for every combination of missing modalities. While this paradigm is sufficient for two modalities, it does not generalize to the truly multi-modal case. To the best of our knowledge, the MVAE is the first deep generative model to explore more than two modalities efficiently. Moreover, the single-stage training of the MVAE makes it uniquely applicable to weakly-supervised learning.

Our proposed technique resembles established work in several ways. For example, the PoE structure is reminiscent of a restricted Boltzmann machine (RBM), another latent variable model that has been applied to multi-modal learning ngiam2011multimodal; srivastava2012multimodal. Like our inference networks, the RBM decomposes the posterior into a product of independent components. The benefit the MVAE offers over an RBM is a simpler training algorithm via gradient descent, rather than contrastive divergence, yielding faster models that can handle more data. Our sub-sampling technique is somewhat similar to denoising vincent2008extracting; ngiam2011multimodal, where a subset of inputs is "partially destructed" to encourage robust representations in autoencoders. In our case, we can think of "robustness" as capturing the true marginal distributions.

4 Experiments

As in previous literature, we transform uni-modal datasets into multi-modal problems by treating labels as a second modality. We compare existing models (VAE, BiVCCA, JMVAE) to the MVAE and show that we equal state-of-the-art performance on four image datasets: MNIST, FashionMNIST, MultiMNIST, and CelebA. For each dataset, we keep the network architectures consistent across models, varying only the objective and training procedure. Unless otherwise noted, given images $x_1$ and labels $x_2$, we set the image weight $\lambda_1 = 1$ and use a larger weight $\lambda_2$ for the label modality. We find that upweighting the reconstruction error for the low-dimensional modalities is important for learning a good joint distribution.

Model BinaryMNIST MNIST FashionMNIST MultiMNIST CelebA
VAE 730240 730240 3409536 1316936 4070472
CVAE 735360 735360 3414656 — 4079688
BiVCCA 1063680 1063680 3742976 1841936 4447504
JMVAE 2061184 2061184 7682432 4075064 9052504
MVAE-Q 1063680 1063680 3742976 1841936 4447504
MVAE 1063680 1063680 3742976 1841936 4447504
JMVAE19 — — — — 3.6259e12
MVAE19 — — — — 10857048
Table 1: Number of inference network parameters. For a single dataset, each generative model uses the same inference network architecture(s) for each modality; thus, the differences in parameter counts are due solely to how the inference networks interact in each model. We note that the MVAE has the same number of parameters as BiVCCA. JMVAE19 and MVAE19 show the number of parameters on CelebA when each of the 18 attributes is treated as its own modality (19 modalities in total).

Our version of MultiMNIST contains between 0 and 4 digits composed together on a 50x50 canvas. Unlike eslami2016attend, the digits are fixed in location. We generate the second modality by concatenating digits from top-left to bottom-right to form a string. As in the literature, we use an RNN encoder and decoder bowman2015generating. Furthermore, we explore two versions of learning in CelebA: one where we treat the 18 attributes as a single modality, and one where we treat each attribute as its own modality for a total of 19. We denote the latter as MVAE19. In this scenario, to approximate the full objective, we set $k = 1$ for a total of 21 ELBO terms (as in Eqn. 5). For complete details, including training hyperparameters and encoder/decoder architecture specifications, refer to the supplement.

5 Evaluation

In the bi-modal setting, with $x_1$ denoting the image and $x_2$ denoting the label, we measure the test marginal log-likelihood $\log p(x_1)$ and the test joint log-likelihood $\log p(x_1, x_2)$ via importance sampling. In doing so, we have a choice of which inference network to use as the importance distribution; for example, using $q(z|x_1)$, we estimate $\log p(x_1) \approx \log \frac{1}{K}\sum_{k=1}^{K} \frac{p(x_1|z^{(k)})\,p(z^{(k)})}{q(z^{(k)}|x_1)}$ with $z^{(k)} \sim q(z|x_1)$. We also compute the test conditional log-likelihood $\log p(x_1|x_2)$ as a measure of classification performance, as done in suzuki2016joint. To estimate $\log p(x_1|x_2)$ we use 1000 samples in CelebA and 5000 samples in all other datasets. These log-likelihoods measure the ability of the model to capture the data distribution and its conditionals. Higher-scoring models are better able to generate proper samples and convert between modalities, which is exactly what we find desirable in a generative model.
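These marginal estimates are ordinary importance-sampling estimates computed in log-space; a sketch (ours, reusing the assumed `encoder`/`decoder.log_prob` interfaces from earlier) is shown below.

    import math
    import torch

    def log_marginal_estimate(x, encoder, decoder, num_samples=1000):
        # Importance-sampling estimate of log p(x) with proposal q(z|x).
        mu, logvar = encoder(x)
        std = torch.exp(0.5 * logvar)
        log_w = []
        for _ in range(num_samples):
            z = mu + std * torch.randn_like(std)
            log_qz = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
            log_pz = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
            log_px_given_z = decoder.log_prob(x, z)
            log_w.append(log_px_given_z + log_pz - log_qz)
        log_w = torch.stack(log_w)                         # (num_samples, batch)
        return torch.logsumexp(log_w, dim=0) - math.log(num_samples)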

Quality of the Inference Network

In all VAE-family models, the inference network functions as an importance distribution for approximating the intractable posterior. A better importance distribution, which more accurately approximates the posterior, results in importance weights with lower variance. Thus, we estimate the variance of the (log) importance weights as a measure of inference network quality (see Table 3).

Model BinaryMNIST MNIST FashionMNIST MultiMNIST CelebA
Estimated $\log p(x_1)$:
VAE -86.313 -91.126 -232.758 -152.835 -6237.120
BiVCCA -87.354 -92.089 -233.634 -202.490 -7263.536
JMVAE -86.305 -90.697 -232.630 -152.787 -6237.967
MVAE-Q -91.665 -96.028 -236.081 -166.580 -6290.085
MVAE -86.026 -90.619 -232.535 -152.761 -6236.923
MVAE19 — — — — -6236.109
Estimated $\log p(x_1, x_2)$:
JMVAE -86.371 -90.769 -232.948 -153.101 -6242.187
MVAE-Q -92.259 -96.641 -236.827 -173.615 -6294.861
MVAE -86.255 -90.859 -233.007 -153.469 -6242.034
MVAE19 — — — — -6239.944
Estimated $\log p(x_1|x_2)$:
CVAE -83.448 -87.773 -229.667 — -6228.771
JMVAE -83.985 -88.696 -230.396 -145.977 -6231.468
MVAE-Q -90.024 -94.347 -234.514 -163.302 -6311.487
MVAE -83.970 -88.569 -230.695 -147.027 -6234.955
MVAE19 — — — — -6233.340
Table 2: Estimates (using $q(z|x_1)$ as the importance distribution) of marginal, joint, and conditional log-likelihoods on the average test example. The MVAE and JMVAE are roughly equivalent in data log-likelihood but, as Table 1 shows, the MVAE uses far fewer parameters. The CVAE is often better at capturing $\log p(x_1|x_2)$ but does not learn a joint distribution.
Figure 2: Image samples using the MVAE. Panels (a, c, e, g) show 64 images per dataset obtained by sampling $z \sim p(z)$ and generating via $p(x_1|z)$. Panels (b, d, f, h) show conditional image samples obtained by sampling $z \sim q(z|x_2)$ for a fixed label $x_2$ in each dataset and generating via $p(x_1|z)$.

Fig. 2 shows image samples and conditional image samples for each dataset using the image generative model. We find the samples to be of good quality, and the conditional samples to be largely correctly matched to the target label. Table 2 shows test log-likelihoods for each model and dataset (these results use $q(z|x_1)$ as the importance distribution; see the supplement for similar results using $q(z|x_1, x_2)$). Because importance sampling with either inference network yields an unbiased estimator of the marginal likelihood, we expect the log-likelihoods to agree asymptotically.

We see that the MVAE performs on par with the state of the art (JMVAE) while using far fewer parameters (see Table 1). When considering only $\log p(x_1)$ (i.e. the likelihood of the image modality alone), the MVAE also performs best, slightly beating even the image-only VAE, indicating that solving the harder multi-modal problem does not sacrifice uni-modal model capacity and perhaps even helps. On CelebA, MVAE19 (which treats each attribute as an independent modality) outperforms the MVAE (which treats the attribute vector as a single modality). This suggests that the PoE approach generalizes to a larger number of modalities, and that joint training shares statistical strength. Moreover, we show in the supplement that MVAE19 is robust to randomly dropping modalities.

Table 3 shows the variances of the log importance weights. The MVAE always produces lower variance than other methods that capture the joint distribution, and often lower than conditional or single-modality models. Furthermore, MVAE19 consistently produces lower variance than the MVAE on CelebA. Overall, this suggests that the PoE approach used by the MVAE yields better inference networks.

Model BinaryMNIST MNIST FashionMNIST MultiMNIST CelebA
Variance of Marginal Log Importance Weights:
VAE 22.264 26.904 25.795 54.554 56.291
BiVCCA 55.846 93.885 33.930 185.709 429.045
JMVAE 39.427 37.479 53.697 84.186 331.865
MVAE-Q 34.300 37.463 34.285 69.099 100.072
MVAE 22.181 25.640 20.309 26.917 73.923
MVAE19 — — — — 71.640
Variance of Joint Log Importance Weights:
JMVAE 41.003 40.126 56.640 91.850 334.887
MVAE-Q 34.615 38.190 34.908 64.556 101.238
MVAE 23.343 27.570 20.587 27.989 76.938
MVAE19 — — — — 72.030
Variance of Conditional Log Importance Weights:
CVAE 21.203 22.486 12.748 — 56.852
JMVAE 23.877 26.695 26.658 37.726 81.190
MVAE-Q 34.719 38.090 34.978 44.269 101.223
MVAE 19.478 25.899 18.443 16.822 73.885
MVAE19 — — — — 71.824
Table 3: Average variance of the log importance weights for the marginal, joint, and conditional probabilities, using $q(z|x_1)$ as the importance distribution. 1000 importance samples were used to approximate each variance. The lower the variance, the better the inference network.
Effect of number of ELBO terms

In the MVAE training paradigm, the hyperparameter $k$ controls the number of sampled ELBO terms used to approximate the intractable objective. To investigate its importance, we vary $k$ from 0 to 50 and, for each value, train an MVAE19 on CelebA. We find that increasing $k$ has little effect on data log-likelihood but reduces the variance of the importance distribution defined by the inference networks. In practice, we choose a small $k$ as a tradeoff between computation and a better importance distribution. See the supplement for more details.

Figure 3: Effects of supervision level on (a) Dynamic MNIST, (b) FashionMNIST, and (c) MultiMNIST. We plot the level of supervision as the log number of paired examples shown to each model. For MNIST and FashionMNIST, we predict the target class; for MultiMNIST, we predict the correct string representing the digits. We compare against a suite of baselines composed of models from the relevant literature and commonly used classifiers. The MVAE consistently beats all baselines in the middle region, where there is enough paired data to fit a deep model but not enough to train a fully supervised network; in the fully-supervised regime, the MVAE is competitive with feedforward deep networks. See the supplement for the underlying accuracies.

5.1 Weakly Supervised Learning

For each dataset, we simulate incomplete supervision by randomly reserving a fraction of the dataset as paired multi-modal examples. The remaining data are split into two sets, one with only the first modality and one with only the second, which are shuffled to destroy any pairing. We examine the effect of supervision on the predictive task of inferring $x_2$ from $x_1$, e.g. predicting the correct digit label from an image. For the MVAE, the total number of examples shown to the model is always fixed; only the proportion of complete bi-modal examples is varied. We compare the performance of the MVAE against a suite of baseline models: (1) a supervised neural network using the same architectures (with the stochastic layer removed) as the MVAE; (2) logistic regression on raw pixels; (3) an autoencoder trained on the full set of images, followed by logistic regression on the subset of paired examples; similarly for (4) VAEs and (5) RBMs, where the internal latent state is used as input to the logistic regression; and finally (6) the JMVAE (with its α hyperparameter set as suggested in suzuki2016joint), trained on the subset of paired examples. Fig. 3 shows performance as we vary the level of supervision. For MultiMNIST, $x_2$ is a string (e.g. "6 8 1 2") representing the numbers in the image; here we only include the JMVAE as a baseline since it is not straightforward to output raw strings in a supervised manner.

We find that the MVAE surpasses all the baselines in a middle region where there are enough paired examples to sufficiently train the deep networks but not enough to learn a purely supervised network. This is especially pronounced in FashionMNIST, where the MVAE equals a fully supervised network even with two orders of magnitude fewer paired examples (see Fig. 3). Intuitively, these results suggest that the MVAE can effectively learn the joint distribution by bootstrapping from a larger set of uni-modal data. A second observation is that the MVAE almost always performs better than the JMVAE. This discrepancy is likely due to directly optimizing the marginal distributions rather than minimizing the distance between several variational posteriors. We noticed empirically that for the JMVAE, using samples from the uni-modal inference network gave much better accuracy than samples from the joint inference network.

(a) Edge Detection and Facial Landscapes
(b) Colorization
(c) Fill in the Blank
(d) Removing Watermarks
Figure 4: Learning Computer Vision Transformations: (a) 4 ground truth images randomly chosen from CelebA along with reconstructed images, edges, and facial landscape masks; (b) reconstructed color images; (c) image completion via reconstruction; (d) reconstructed images with the watermark removed. See supplement for a larger version with more samples.

6 Case study: Computer Vision Applications

We use the MVAE to learn image transformations (and their inverses) as conditional distributions. In particular, we focus on colorization, edge detection, facial landmark segmentation, image completion, and watermark removal. The original image is itself a modality, for a total of six.

To build the dataset, we apply ground-truth transformations to CelebA. For colorization, we transform RGB colors to grayscale. For image completion, half of the image is replaced with black pixels. For watermark removal, we overlay a generic watermark. To extract edges, we use the Canny detector canny1987computational from Scikit-Image van2014scikit. To compute facial landmark masks, we use dlib king2009dlib and OpenCV bradski2000opencv.

We fit an MVAE with 250 latent dimensions. We train with Adam using a batch size of 50, annealing β for 20 out of 100 epochs. Fig. 4 shows samples showcasing the different learned transformations. In Fig. 4a we encode the original image with the learned encoder, then decode the transformed image with the learned generative model. We see reasonable reconstructions, and good facial landmark and edge extraction. In Figs. 4b, 4c, and 4d we go in the opposite direction, encoding a transformed image and then sampling from the generative model to reconstruct the original. The results are again quite good: reconstructed half-images agree on gaze direction and hair color, colorizations are reasonable, and all trace of the watermark is removed (though the reconstructed images still suffer from the same blurriness that VAEs do zhao2017towards).
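Concretely, each panel of Fig. 4 corresponds to conditioning the PoE posterior on a single observed modality and decoding a different one. The sketch below illustrates this using the hypothetical `product_of_experts` helper and per-modality encoder/decoder interfaces assumed earlier; it is not the authors' exact API.

    import torch

    def cross_modal_generate(x_src, encoder_src, decoder_tgt, product_of_experts):
        # Condition the PoE posterior on one observed modality and decode another,
        # e.g. x_src = watermarked image, output = reconstruction without the watermark.
        mu, logvar = encoder_src(x_src)
        prior_mu, prior_logvar = torch.zeros_like(mu), torch.zeros_like(logvar)   # prior expert N(0, I)
        joint_mu, joint_logvar = product_of_experts(
            torch.stack([prior_mu, mu]), torch.stack([prior_logvar, logvar]))
        z = joint_mu + torch.exp(0.5 * joint_logvar) * torch.randn_like(joint_mu)
        return decoder_tgt(z)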

7 Case study: Machine Translation

Num. Aligned Examples (%) Test log-likelihood
133 (0.1%)
665 (0.5%)
1330 (1%)
6650 (5%)
13300 (10%)
133000 (100%)
Table 4: Weakly supervised translation. Log likelihoods on a test set, averaged over 3 runs. Notably, we find good performance with a small fraction of paired examples.

As a second case study we explore machine translation with weak supervision, that is, where only a small subset of the data consists of translated sentence pairs. Many of the popular translation models vaswani2017attention are fully supervised, with millions of parameters, and trained on datasets with tens of millions of paired examples. Yet aligning text across languages is very costly, requiring input from expert human translators. Even the unsupervised machine translation literature relies on large bilingual dictionaries, strong pre-trained language models, or synthetic datasets lample2017unsupervised; artetxe2017unsupervised; ravi2011deciphering. These factors make weak supervision particularly intriguing.

We use the English-Vietnamese dataset (133K sentence pairs) from IWSLT 2015 and treat English (en) and Vietnamese (vi) as two modalities. We train the MVAE with 100 latent dimensions for 100 epochs. We use the RNN architectures from bowman2015generating with a maximum sequence length of 70 tokens. As in bowman2015generating, word dropout and KL annealing are crucial to prevent latent collapse.

Type Sentence
en (source) this was one of the highest points in my life.
vi (MVAE) Đó là một gian tôi vời của cuộc đời tôi.
en (Google) It was a great time of my life.
en (source) the project's also made a big difference in the lives of the people .
vi (MVAE) tôi án này được ra một Điều lớn lao cuộc sống của chúng người sống chữa hưởng .
en (Google) this project is a great thing for the lives of people who live and thrive .
vi (source) trước tiên , tại sao chúng lại có ấn tượng xấu như vậy ?
en (MVAE) first of all, you do not a good job ?
en (Google) First, why are they so bad?
vi (source) Ông ngoại của tôi là một người thật đáng <unk> phục vào thời ấy .
en (MVAE) grandfather is the best experience of me family .
en (Google) My grandfather was a worthy person at the time .
Table 5: Examples of (1) translating English to Vietnamese by sampling from $p(x_{\text{vi}}|z)$ with $z \sim q(z|x_{\text{en}})$, and (2) the inverse direction. We use Google Translate (Google) for the ground-truth (back-)translation.

With only 1% of examples aligned, the MVAE is able to describe test data almost as well as with a fully supervised dataset (Table 4). With 5% aligned examples, the model reaches its maximum performance. Table 5 shows examples of translating forwards and backwards between English and Vietnamese; see the supplement for more examples. We find that many of the translations are not entirely faithful but capture a close interpretation of the true meaning. While these results are not competitive with state-of-the-art translation, they are remarkable given the very weak supervision. Future work should investigate combining the MVAE with modern translation architectures (e.g. transformers, attention).

8 Conclusion

We introduced a multi-modal variational autoencoder with a new training paradigm that learns a joint distribution and is robust to missing data. By optimizing the ELBO with multi-modal and uni-modal examples, we fully utilize the product-of-experts structure to share inference network parameters in a fashion that scales to an arbitrary number of modalities. We find that the MVAE matches the state-of-the-art on four bi-modal datasets, and shows promise on two real world datasets.

Acknowledgments

MW is supported by NSF GRFP and the Google Cloud Platform Education grant. NDG is supported under DARPA PPAML through the U.S. AFRL under Cooperative Agreement FA8750-14-2-0006. We thank Robert X. D. Hawkins and Ben Peloquin for helpful discussions.

References

Appendix A Dataset Descriptions

MNIST/BinaryMNIST

We use the MNIST hand-written digits dataset [16] with 50,000 examples for training, 10,000 for validation, and 10,000 for testing. We also train on a binarized version to align with previous work [15]. As in [27], we use the Adam optimizer [11] with a learning rate of 1e-3, a minibatch size of 100, 64 latent dimensions, and train for 500 epochs. We anneal β from 0 to 1 linearly over the first 200 epochs. For the encoders and decoders, we use MLPs with 2 hidden layers of 512 nodes each. We model the image modality with a Bernoulli likelihood and the label modality with a multinomial likelihood.
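As an illustration of the stated architecture, a sketch of such an encoder/decoder pair for the image modality is given below; the layer sizes follow the description above, while details such as the choice of ReLU activations are our assumptions.

    import torch.nn as nn

    class ImageEncoder(nn.Module):
        # MLP mapping a flattened 28x28 image to the parameters of q(z|x_1).
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(784, 512), nn.ReLU(),
                                     nn.Linear(512, 512), nn.ReLU())
            self.mu = nn.Linear(512, latent_dim)
            self.logvar = nn.Linear(512, latent_dim)

        def forward(self, x):
            h = self.net(x.view(x.size(0), -1))
            return self.mu(h), self.logvar(h)

    class ImageDecoder(nn.Module):
        # MLP mapping z to Bernoulli logits for p(x_1|z).
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                     nn.Linear(512, 512), nn.ReLU(),
                                     nn.Linear(512, 784))

        def forward(self, z):
            return self.net(z)   # pair with a Bernoulli likelihood, e.g. BCEWithLogitsLoss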

FashionMNIST

This is an MNIST-like fashion dataset containing 28x28 grayscale images of clothing from 10 classes—skirts, shoes, t-shirts, etc. [34]. We use hyperparameters identical to those for MNIST, but employ a miniature DCGAN [22] for the image encoder and decoder.

MultiMNIST

This is a variant of MNIST where between 0 and 4 digits are composed together on a 50x50 canvas. Unlike [7], the digits are fixed in location. We generate the text modality by concatenating the digit classes from top-left to bottom-right. We use 100 latent dimensions, with the remaining hyperparameters as in MNIST. For the image encoder and decoder, we retool the DCGAN architecture from [22]. For the text encoder, we use a bidirectional GRU with 200 hidden units. For the text decoder, we first define a vocabulary with ten digits, a start token, and a stop token. Provided a start token, we feed it through a 2-layer GRU, linear layers, and a softmax; we sample a new character and repeat until generating a stop token. We note that previous work has not explored RNN-VAE inference networks in multi-modal learning, which we show to work well with the MVAE.

CelebA

The CelebFaces and Attributes (CelebA) dataset [36] contains over 200k images of celebrities. Each image is tagged with 40 attributes, e.g. wears glasses or has bangs. We use the aligned and cropped version with 18 visually distinctive attributes selected as done in [20]. Images are rescaled to 64x64. For the first experiment, we treat the images as one modality and the attributes as a second modality, where a single inference network predicts all 18 attributes. We also explore a variation of the MVAE, called MVAE19, where we treat each attribute as its own modality for a total of 19. To approximate the full objective, we set $k = 1$ for a total of 21 ELBO terms. We use Adam with a minibatch size of 100 and anneal the KL term for the first 20 of 100 epochs. We again use DCGAN for the image networks. For the attribute encoder and decoder, we use an MLP with 2 hidden layers of size 512. For MVAE19, we have 18 such encoders and decoders.

Appendix B Product of a Finite Number of Gaussians

In this section, we provide the derivation for the parameters of a product of Gaussian experts (PoE). Derivation is summarized from [bromiley2003products].

Lemma B.1.

Given a finite number $N$ of multi-dimensional Gaussian distributions $p_i(z)$ with mean $\mu_i$ and covariance $V_i$, $i = 1, \ldots, N$, the product $\prod_{i=1}^{N} p_i(z)$ is (proportional to) a Gaussian with mean $\mu = \big(\sum_i T_i\big)^{-1}\big(\sum_i T_i \mu_i\big)$ and covariance $V = \big(\sum_i T_i\big)^{-1}$, where $T_i = V_i^{-1}$.

Proof.

We write the probability density of a Gaussian distribution in canonical form as $p(z) = \exp\big(\zeta + \eta^\top z - \tfrac{1}{2} z^\top \Lambda z\big)$, where $\zeta$ is a normalizing constant, $\Lambda = V^{-1}$, and $\eta = V^{-1}\mu$. The product of $N$ Gaussian densities is then $\prod_{i} p_i(z) = \exp\big(\sum_i \zeta_i + \big(\sum_i \eta_i\big)^\top z - \tfrac{1}{2} z^\top \big(\sum_i \Lambda_i\big) z\big)$, which itself has the form of a Gaussian density with $\eta = \sum_i \eta_i$ and $\Lambda = \sum_i \Lambda_i$. Converting back from canonical form, we see that the product Gaussian has covariance $V = \big(\sum_i \Lambda_i\big)^{-1} = \big(\sum_i T_i\big)^{-1}$ and mean $\mu = V\eta = \big(\sum_i T_i\big)^{-1}\big(\sum_i T_i \mu_i\big)$. ∎

Appendix C Quotient of Two Gaussians

Similarly, we may derive the form of a quotient of two Gaussian distributions (QoE).

Lemma C.1.

Given two multi-dimensional Gaussian distributions $p_1(z)$ and $p_2(z)$ with means $\mu_1$ and $\mu_2$ and covariances $V_1$ and $V_2$ respectively, the quotient $p_1(z)/p_2(z)$ is (proportional to) a Gaussian with mean $\mu = (T_1 - T_2)^{-1}(T_1\mu_1 - T_2\mu_2)$ and covariance $V = (T_1 - T_2)^{-1}$, where $T_i = V_i^{-1}$.

Proof.

We again write the probability density of a Gaussian in canonical form as $p(z) = \exp\big(\zeta + \eta^\top z - \tfrac{1}{2} z^\top \Lambda z\big)$. The quotient of two Gaussians $p_1$ and $p_2$ is then $p_1(z)/p_2(z) = \exp\big((\zeta_1 - \zeta_2) + (\eta_1 - \eta_2)^\top z - \tfrac{1}{2} z^\top (\Lambda_1 - \Lambda_2) z\big)$. This defines a new Gaussian with $\eta = \eta_1 - \eta_2$ and $\Lambda = \Lambda_1 - \Lambda_2$. Letting $T_1 = V_1^{-1}$ and $T_2 = V_2^{-1}$, we see that $V = (T_1 - T_2)^{-1}$ and $\mu = (T_1 - T_2)^{-1}(T_1\mu_1 - T_2\mu_2)$. ∎

The QoE derivation shows that the constraint $T_1 > T_2$ (element-wise) must hold for the resulting Gaussian to be well-defined. In our experiments, the numerator is a product of the Gaussian experts $q(z|x_i)$ and the denominator is a product of prior Gaussians (see Eqn. 3 in the main paper). Given $N$ modalities, we can decompose the precisions as $T_1 = \sum_{i=1}^{N} T_{q(z|x_i)}$ and $T_2 = (N-1)\,T_{p(z)}$, where the prior is a unit Gaussian with variance 1. Thus, the constraint can be rewritten as $\sum_{i=1}^{N} T_{q(z|x_i)} > N - 1$, which is satisfied, for example, whenever every expert has variance at most 1. One benefit of using the regularized importance distribution (the PoE with a prior expert) is that it removes the need for this constraint. To fit the MVAE-Q, which has no universal prior expert, we add an additional nonlinearity to each inference network so that the predicted variance is passed through a rescaled sigmoid, keeping it within the range required by the constraint.
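As a quick numerical sanity check of these formulas (a self-contained NumPy sketch of ours, not part of the paper), multiplying two Gaussian densities with the PoE rule and then dividing by one of them with the QoE rule recovers the other.

    import numpy as np

    def poe(mus, variances):
        # Parameters of the (renormalized) product of diagonal Gaussian experts.
        T = 1.0 / np.asarray(variances)
        var = 1.0 / T.sum(axis=0)
        mu = var * (np.asarray(mus) * T).sum(axis=0)
        return mu, var

    def qoe(mu1, var1, mu2, var2):
        # Parameters of the (renormalized) quotient N(mu1, var1) / N(mu2, var2); requires var1 < var2 element-wise.
        T1, T2 = 1.0 / np.asarray(var1), 1.0 / np.asarray(var2)
        var = 1.0 / (T1 - T2)
        mu = var * (T1 * np.asarray(mu1) - T2 * np.asarray(mu2))
        return mu, var

    # Product of N(1, 0.5) and N(-1, 0.5), then quotient by N(1, 0.5), recovers N(-1, 0.5).
    mu_p, var_p = poe([np.array([1.0]), np.array([-1.0])], [np.array([0.5]), np.array([0.5])])
    mu_q, var_q = qoe(mu_p, var_p, np.array([1.0]), np.array([0.5]))
    print(mu_p, var_p)   # [0.] [0.25]
    print(mu_q, var_q)   # [-1.] [0.5]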

Appendix D Additional Results using the Joint Inference Network

In the main paper, we reported marginal probabilities using $q(z|x_1)$ and showed that the MVAE is state-of-the-art. Here we compute the same marginal probabilities using $q(z|x_1, x_2)$. Because importance sampling with either induced distribution yields an unbiased estimate, using a large number of samples should result in very similar log-likelihoods. Indeed, we find that the results do not differ much from the main paper: the MVAE is still state-of-the-art.

Model BinaryMNIST MNIST FashionMNIST MultiMNIST CelebA
Estimated $\log p(x_1)$:
JMVAE -86.234 -90.962 -232.401 -153.026 -6234.542
MVAE -86.051 -90.616 -232.539 -152.826 -6237.104
MVAE19 — — — — -6236.113
Estimated $\log p(x_1, x_2)$:
JMVAE -86.304 -91.031 -232.700 -153.320 -6238.280
MVAE -86.278 -90.851 -233.007 -153.478 -6241.621
MVAE19 — — — — -6239.957
Estimated $\log p(x_1|x_2)$:
JMVAE -83.820 -88.436 -230.651 -145.761 -6235.330
MVAE -83.940 -88.558 -230.699 -147.009 -6235.368
MVAE19 — — — — -6233.330
Table 6: Estimates as in Table 2 of the main paper, but using $q(z|x_1, x_2)$ as the importance distribution (instead of $q(z|x_1)$). Because the VAE and CVAE do not have a multi-modal inference network, they are excluded. Again, the MVAE matches the state of the art.
Model BinaryMNIST MNIST FashionMNIST MultiMNIST CelebA
Variance of Marginal Log Importance Weights:
JMVAE 22.387 24.962 28.443 35.822 80.808
MVAE 21.791 25.741 18.092 16.437 73.871
MVAE19 — — — — 71.546
Variance of Joint Log Importance Weights:
JMVAE 23.309 26.767 29.874 38.298 81.312
MVAE 21.917 26.057 18.263 16.672 74.968
MVAE19 — — — — 71.953
Variance of Conditional Log Importance Weights:
JMVAE 40.646 40.086 56.452 92.683 335.046
MVAE 23.035 27.652 19.934 28.649 77.516
MVAE19 — — — — 71.603
Table 7: Average variance of the log importance weights for the marginal, joint, and conditional distributions using $q(z|x_1, x_2)$ as the importance distribution. Lower variances suggest better inference networks.

Appendix E Model Architectures

Here we specify the design of inference networks and decoders used for each dataset.

Figure 5: MVAE inference network and decoder architectures on MNIST, where $x_1$ specifies an image and $x_2$ specifies a digit label.
Figure 6: MVAE inference network and decoder architectures on FashionMNIST, where $x_1$ specifies an image and $x_2$ specifies a clothing label.
Figure 7: MVAE inference network and decoder architectures on MultiMNIST, where $x_1$ specifies an image and $x_2$ specifies a string of digits.
Figure 8: MVAE inference network and decoder architectures on CelebA, where $x_1$ specifies an image and $x_2$ specifies the 18 attributes.

Appendix F More on Weak Supervision

In the main paper, we showed that we do not need many complete examples to learn a good joint distribution with two modalities. Here, we explore the robustness of our model to missing data with more modalities. Using MVAE19 (19 modalities) on CelebA, we conduct a different weak-supervision experiment: given a complete multi-modal example $\{x_1, \ldots, x_{19}\}$, we randomly keep each modality $x_i$ independently with probability $\rho$. Doing so for all examples in the training set simulates the effect of missing modalities beyond the bi-modal setting. The number of modality observations shown to the model depends on $\rho$; e.g. $\rho = 0.5$ means that on average 1 out of every 2 modalities is dropped. We vary $\rho$, train from scratch, and plot (1) the prediction accuracy per attribute and (2) the various data log-likelihoods. From Figure 9, we conclude that the method is fairly robust to missing data: even when only 10% of the modality observations are kept, we still see accuracy close to the prediction accuracy with full data.

Figure 9: We randomly drop each input modality with probability $1 - \rho$. Panel (a) shows the effect of the keep-probability $\rho$ on the accuracy of sampling the correct attribute given an image. Panels (b) and (c) show changes in the log marginal and log conditional approximations as $\rho$ increases. In all cases, we see close-to-best performance using only 10% of the complete data.
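A sketch of the modality-dropping simulation described above is given below (our illustration; representing an example as a dictionary from modality name to value is an assumption, not the paper's data format).

    import random

    def drop_modalities(example, rho):
        # Simulate missing data: keep each of the 19 modalities independently with probability rho.
        return {name: value for name, value in example.items() if random.random() < rho}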

Appendix G Table of Weak Supervision Results

In the main paper, we showed a series of plots detailing the performance of the MVAE and many baselines on a weak-supervision task. Here we provide tables with the underlying numbers.

Model 0.1% 0.2% 0.5% 1% 2% 5% 10% 50% 100%
AE 0.4143 0.5429 0.6448 0.788 0.8519 0.9124 0.9269 0.9423 0.9369
NN 0.6618 0.6964 0.7971 0.8499 0.8838 0.9235 0.9455 0.9806 0.9857
LOGREG 0.6565 0.7014 0.7907 0.8391 0.8510 0.8713 0.8665 0.9217 0.9255
RBM 0.7152 0.7496 0.8288 0.8614 0.8946 0.917 0.9257 0.9365 0.9379
VAE 0.2547 0.284 0.4026 0.6369 0.8016 0.8717 0.8989 0.9183 0.9311
JMVAE 0.2342 0.2809 0.3386 0.6116 0.7869 0.8638 0.9051 0.9498 0.9572
MVAE 0.2842 0.6254 0.8593 0.8838 0.9394 0.9584 0.9711 0.9678 0.9681
Table 8: Performance of several models on MNIST with a fraction of paired examples. Here we compute the accuracy (out of 1) of predicting the correct digit in each image.
Model 0.1% 0.2% 0.5% 1% 2% 5% 10% 50% 100%
NN 0.6755 0.701 0.7654 0.7944 0.8102 0.8439 0.862 0.8998 0.9318
LOGREG 0.6612 0.7005 0.7624 0.7627 0.7728 0.7802 0.8015 0.8377 0.8412
RBM 0.6708 0.7214 0.7628 0.7690 0.7805 0.7943 0.8021 0.8088 0.8115
VAE 0.5316 0.6502 0.7221 0.7324 0.7576 0.7697 0.7765 0.7914 0.8311
JMVAE 0.5284 0.5737 0.6641 0.6996 0.7437 0.7937 0.8212 0.8514 0.8828
MVAE 0.4548 0.5189 0.7619 0.8619 0.9201 0.9243 0.9239 0.9478 0.947
Table 9: Performance of several models on FashionMNIST with a fraction of paired examples. Here we compute the accuracy of predicting the correct class of attire in each image.
Model 0.1% 0.2% 0.5% 1% 2% 5% 10% 50% 100%
JMVAE 0.0603 0.0603 0.0888 0.1531 0.1699 0.1772 0.4765 0.4962 0.4955
MVAE 0.09363 0.1189 0.1098 0.2287 0.3805 0.4289 0.4999 0.5121 0.5288
Table 10: Performance of several models on MultiMNIST with a fraction of paired examples. Here we compute the average accuracy of predicting each digit correctly (by decomposing the string into individual digits, at most 4 per image).

Appendix H Details on Weak Supervision Baselines

The VAE used the same image encoder as the MVAE. The JMVAE used architectures identical to the MVAE, with its α hyperparameter set as suggested in suzuki2016joint. The RBM has a single layer with 128 hidden nodes and is trained using contrastive divergence. NN uses the image encoder and label/string decoder as in the MVAE, and is thus a fair supervised-learning comparison. For MNIST, we trained each model for 500 epochs; for FashionMNIST and MultiMNIST, we trained each model for 100 epochs. All other hyperparameters were kept constant between models.

Appendix I More of the effects of sampling more ELBO terms

In the main paper, we stated that with higher $k$ (sampling more ELBO terms), we see a steady decrease in variance. This drop in variance can be attributed to two factors: (1) additional uncorrelated randomness from sampling more noise when reparameterizing each ELBO term [4], or (2) additional ELBO terms that better approximate the intractable objective. Fig. 10(c) shows that the variance still drops consistently when a single fixed noise sample is used for computing all ELBO terms, indicating independent contributions of the additional ELBO terms and the additional randomness.

Figure 10: Effect of approximating the MVAE objective with more ELBO terms on (a) the joint log-likelihood and (b) the variance of the log importance weights, over 3 independent runs. Panel (c) computes the same variance but fixes a single reparameterization noise sample for all ELBO terms. Panels (b) and (c) imply that switching from $k = 0$ to $k \geq 1$ greatly reduces the variance of the importance distribution defined by the inference network(s).

Appendix J More on the Computer Vision Transformations

We copy Fig. 4 in the main paper but show more samples and increase the size of each image for visibility. The MVAE is able to learn all 6 transformations jointly under the PoE inference network.

Figure 11: Edge Detection and Facial Landscapes: The top row shows 8 ground truth images randomly chosen from the CelebA dataset. The second to fourth rows respectively plot the reconstructed image, edge, and facial landscape masks using the trained MVAE decoders.
Figure 12: Colorization: The top row shows ground truth grayscale images. The bottom row show reconstructed color images.
Figure 13: Fill in the Blank: The top row shows ground truth CelebA images with half of each image obscured. The bottom row replaces the obscured part with a reconstruction.
Figure 14: Removing Watermarks: The top row shows ground truth CelebA images, each with an added watermark. The bottom row shows the reconstructed image with the watermark removed.

Appendix K More on Machine Translation

We provide more samples of (1) sampling joint (English, Vietnamese) pairs of sentences from the prior $z \sim p(z)$, (2) translating English to Vietnamese by sampling from $p(x_{\text{vi}}|z)$ with $z \sim q(z|x_{\text{en}})$, and (3) translating Vietnamese to English by sampling from $p(x_{\text{en}}|z)$ with $z \sim q(z|x_{\text{vi}})$. Refer to the main text for analysis and explanation.

Type Sentence
en (MVAE sample) it's a problem .
vi (MVAE sample) nó là một công việc .
en (translation of vi) it is a job .
en (MVAE sample) we have an idea .
vi (MVAE sample) chúng tôi có thể làm được .
en (translation of vi) we can do it .
en (MVAE sample) And as you can see , this is a very powerful effect of word of mouth .
vi (MVAE sample) và một trong những điều này đã xảy ra với những người khác , và chúng tôi đã có một số người trong số các bạn đã từng nghe về những điều này .
en (translation of vi) and one of these has happened to other people, and we've had some of you guys already heard about this .
en (MVAE sample) this is a photograph of my life .
vi (MVAE sample) Đây là một bức ảnh .
en (translation of vi) this is a photo .
en (MVAE sample) thank you .
vi (MVAE sample) xin cảm ơn .
en (translation of vi) thank you .
en (MVAE sample) i'm not kidding .
vi (MVAE sample) tôi không nói đùa .
en (translation of vi) i am not joking .
Table 11: A few examples of "paired" reconstructions from a single sample $z \sim p(z)$. Interestingly, many of the pairs are not exact translations of each other but instead capture a close interpretation of the same meaning. The MVAE tended to perform better on shorter sentences.
Type Sentence
en (source) this was one of the highest points in my life.
vi (MVAE) Đó là một gian tôi vời của cuộc đời tôi.
en (Google) It was a great time of my life.
en (source) i am on this stage .
vi (MVAE) tôi đi trên sân khấu .
en (Google) me on stage .
en (source) do you know what love is ?
vi (MVAE) Đó yêu của những ?
en (Google) that's love ?
en (source) today i am 22 .
vi (MVAE) hãy nay tôi sẽ tuổi .
en (Google) I will be old now .
en (source) so i had an idea .
vi (MVAE) tôi thế tôi có có thể vài tưởng tuyệt .
en (Google) I can have some good ideas .
en (source) the project's also made a big difference in the lives of the <unk> .
vi (MVAE) tôi án này được ra một Điều lớn lao cuộc sống của chúng người sống chữa hưởng .
en (Google) this project is a great thing for the lives of people who live and thrive .
Table 12: A few examples of Vietnamese MVAE translations of English sentences sampled from the empirical dataset. We use Google Translate to re-translate the output back to English.
Type Sentence
vi (source) Đó là thời điểm tuyệt vọng nhất trong cuộc đời tôi .
en (MVAE) this is the most bad of the life .
en (Google) it was the most desperate time in my life .
vi (source) cảm ơn .
en (MVAE) thank .
en (Google) thank you .
vi (source) trước tiên , tại sao chúng lại có ấn tượng xấu như vậy ?
en (MVAE) first of all, you do not a good job ?
en (Google) First, why are they so bad?
vi (source) Ông ngoại của tôi là một người thật đáng <unk> phục vào thời ấy .
en (MVAE) grandfather is the best experience of me family .
en (Google) My grandfather was a worthy person at the time .
vi (source) Đứa trẻ này 8 tuổi .
en (MVAE) this is man is 8 years old .
en (Google) this child is 8 years old .
Table 13: A few examples of English MVAE translations of Vietnamese sentences sampled from the empirical dataset. We use Google Translate to translate to English as a ground truth.