Variational methods for Conditional Multimodal Deep Learning

03/06/2016 ∙ by Gaurav Pandey, et al. ∙ ERNET India

In this paper, we address the problem of conditional modality learning, whereby one is interested in generating one modality given the other. While it is straightforward to learn a joint distribution over multiple modalities using a deep multimodal architecture, we observe that such models are not very effective at conditional generation. Hence, we address the problem by learning conditional distributions between the modalities. We use variational methods for maximizing the corresponding conditional log-likelihood. The resultant deep model, which we refer to as a conditional multimodal autoencoder (CMMA), forces the latent representation obtained from a single modality alone to be 'close' to the joint representation obtained from multiple modalities. We use the proposed model to generate faces from attributes. We show that the faces generated from attributes using the proposed model are qualitatively and quantitatively more representative of the attributes from which they were generated than those obtained by other deep generative models. We also propose a secondary task, whereby existing faces are modified by modifying the corresponding attributes. We observe that the modifications in the faces introduced by the proposed model are representative of the corresponding modifications in the attributes.


1 Introduction

The problem of learning from several modalities simultaneously has garnered the attention of many deep learning researchers over the past few years [1]-[2]. This is primarily because of the wide availability of such data, and the numerous real-world applications where multimodal data is used. For instance, speech may be accompanied by text, and the resultant data can be used for training speech-to-text or text-to-speech engines. Even within the same medium, several modalities may exist simultaneously, for instance, the plan and elevation of a 3D object, or multiple translations of a text.

The task of learning from several modalities simultaneously is complicated by the fact that the correlations within a modality are often much stronger than the correlations across modalities. Hence, many multi-modal learning approaches such as [1][3] try to capture the cross-modal correlations at an abstract latent feature level rather than at the visible feature level. The assumption is that the latent features are comparatively less correlated than the visible features, and hence, the latent features from different modalities can be concatenated and a single distribution can be learnt for the concatenated latent features [3].

An alternative approach to capturing the joint distribution is to model the conditional distribution across modalities, as done in [2], whereby the authors make the simplifying assumption that the joint log-likelihood is maximized when the conditional log-likelihood of each modality given the other modality is maximized. While this assumption is untrue in general, the idea of learning conditional distributions to capture the joint distribution has several advantages. In particular, the conditional distributions are often less complex to model, since conditioning on one modality reduces the possibilities for the other modality. Moreover, if the underlying task is to generate one modality given the other, then learning conditional distributions directly addresses this task.

Hence, we address the problem of multimodal learning by capturing the conditional distributions. In particular, we use a variational approximation to the joint log-likelihood for training. In this paper, we restrict ourselves to directed graphical models, whereby a latent representation is sampled conditioned on one modality (referred to as the conditioning modality), and the other modality (referred to as the generated modality) is then sampled from the latent representation. Hence, the model is referred to as a conditional multimodal autoencoder (CMMA).

2 Problem Formulation and Proposed Solution

A formal description of the problem is as follows. We are given a sequence of datapoints $\{(x_i, y_i)\}_{i=1}^{N}$. For a fixed datapoint $(x, y)$, let $x$ be the modality that we wish to generate and $y$ be the modality that we wish to condition on. We assume that $x$ is generated by first sampling a real-valued latent representation $z$ from the distribution $p_\theta(z \mid y)$, and then sampling $x$ from the distribution $p_\theta(x \mid z)$. The graphical representation of the model is given in Figure 1. Furthermore, we assume that the conditional distribution of the latent representation $z$ given $y$ and the distribution of $x$ given $z$ are parametric.

Figure 1: A graphical representation of CMMA
Figure 2: A graphical representation of conditional VAE as well as conditional GAN

Given the above description of the model, our aim is to find the parameters $\theta$ so as to maximize the joint log-likelihood of $x$ and $y$ for the given sequence of datapoints. The log-likelihood comprises a term for $\log p_\theta(x \mid y)$ (referred to as the conditional log-likelihood) and a term for $\log p_\theta(y)$. The computation of $\log p_\theta(x \mid y)$ requires the marginalization of the latent variable $z$ from the joint distribution $p_\theta(x, z \mid y)$:

$$\log p_\theta(x, y) = \log p_\theta(x \mid y) + \log p_\theta(y) \qquad (1)$$
$$\log p_\theta(x \mid y) = \log \int p_\theta(x \mid z)\, p_\theta(z \mid y)\, dz \qquad (2)$$

For most choices of $p_\theta(z \mid y)$ and $p_\theta(x \mid z)$, the evaluation of the conditional log-likelihood is intractable. Hence, we resort to the maximization of a variational lower bound to the conditional log-likelihood. This is achieved by approximating the posterior distribution of $z$ given $x$ and $y$, that is $p_\theta(z \mid x, y)$, by a tractable distribution $q_\phi(z \mid x, y)$. This is explained in more detail in the following section.

2.1 The variational bound

For a given collection of datapoints $\{(x_i, y_i)\}_{i=1}^{N}$, the log-likelihood can be written as

$$\sum_{i=1}^{N} \log p_\theta(x_i, y_i) = \sum_{i=1}^{N} \left[\log p_\theta(x_i \mid y_i) + \log p_\theta(y_i)\right] \qquad (3)$$

Distribution | Parametric form | Representation
$p_\theta(z \mid y)$ | $\mathcal{N}\big(z;\, \mu_1(y),\, \mathrm{diag}(\sigma_1^2(y))\big)$ | $p_{\mu_1,\sigma_1}(z \mid y)$
$p_\theta(x \mid z)$ | $\mathcal{N}\big(x;\, \mu_2(z),\, \mathrm{diag}(\sigma_2^2(z))\big)$ | $p_{\mu_2,\sigma_2}(x \mid z)$
$q_\phi(z \mid x, y)$ | $\mathcal{N}\big(z;\, \mu_3(x, y),\, \mathrm{diag}(\sigma_3^2(x, y))\big)$ | $q_{\mu_3,\sigma_3}(z \mid x, y)$
Table 1: Parametric forms for the distributions used in the paper and their representations demonstrating the explicit dependence on the mean and variance networks $(\mu_1, \sigma_1)$, $(\mu_2, \sigma_2)$ and $(\mu_3, \sigma_3)$.

Let the posterior distribution $p_\theta(z \mid x, y)$ be approximated by a distribution $q_\phi(z \mid x, y)$ whose graphical representation is shown in Figure 2; that is, $q_\phi(z \mid x, y)$ is an approximation to the posterior distribution of the latent variables $z$ given $x$ and $y$. For an individual datapoint, the conditional log-likelihood can be rewritten as

$$\log p_\theta(x \mid y) = \mathrm{KL}\big(q_\phi(z \mid x, y) \,\|\, p_\theta(z \mid x, y)\big) + \mathcal{L}(\theta, \phi; x, y) \qquad (4)$$
$$\mathcal{L}(\theta, \phi; x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x, z \mid y) - \log q_\phi(z \mid x, y)\big] \qquad (5)$$

where $\mathrm{KL}(\cdot \,\|\, \cdot)$ refers to the KL-divergence between the distributions $q_\phi(z \mid x, y)$ and $p_\theta(z \mid x, y)$ and is always non-negative. Note that this choice of decomposition forces the distribution $p_\theta(z \mid y)$ to be 'close' to the true posterior $p_\theta(z \mid x, y)$, thereby encouraging the model to learn features from $y$ alone that are representative of $x$ as well.

The term $\mathcal{L}(\theta, \phi; x, y)$ in equation (5) is referred to as the variational lower bound for the conditional log-likelihood of the datapoint $(x, y)$. It can further be rewritten as

$$\mathcal{L}(\theta, \phi; x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x, y) \,\|\, p_\theta(z \mid y)\big) \qquad (6)$$

From the last equation, we observe that the variational lower bound can be written as the sum of two terms (using the factorization $p_\theta(x, z \mid y) = p_\theta(x \mid z)\, p_\theta(z \mid y)$ implied by the graphical model). The first term is the negative of the reconstruction error of $x$, when reconstructed from the encoding of $x$ and $y$. The second term ensures that the encoding of $y$ is 'close' to the corresponding encoding of $x$ and $y$, where closeness is defined in terms of the KL-divergence between the corresponding distributions.

Adding $\log p_\theta(y)$ to the above bound, we obtain a lower bound to the joint log-likelihood. It has been shown in [4] that, for learning a distribution from samples, it is sufficient to train the transition operator of a Markov chain whose stationary distribution is the distribution that we wish to model. Using this idea, we replace $\log p_\theta(y)$ by $\log p_\theta(y \mid x)$. Note that while the two terms will be quite different, the gradients of the two terms with respect to the parameters are expected to be 'close'.

2.2 The reparametrization

In order to simplify the computation of the variational lower bound, we assume that, conditioned on $y$, the latent representation $z$ is normally distributed with mean $\mu_1(y)$ and a diagonal covariance matrix whose diagonal entries are given by $\sigma_1^2(y)$. Moreover, conditioned on $z$, $x$ is normally distributed with mean $\mu_2(z)$ and a diagonal covariance matrix whose diagonal entries are given by $\sigma_2^2(z)$. In the rest of the paper, we assume $(\mu_1, \sigma_1)$ and $(\mu_2, \sigma_2)$ to be multi-layer perceptrons. Furthermore, we approximate the posterior distribution of $z$ given $x$ and $y$ by a normal distribution with mean $\mu_3(x, y)$ and a diagonal covariance matrix whose diagonal entries are given by $\sigma_3^2(x, y)$, where $\mu_3$ and $\sigma_3$ are again multi-layer perceptrons. In order to make the dependence of the distributions on these means and variances explicit, we represent $p_\theta(z \mid y)$ as $p_{\mu_1,\sigma_1}(z \mid y)$, $p_\theta(x \mid z)$ as $p_{\mu_2,\sigma_2}(x \mid z)$ and $q_\phi(z \mid x, y)$ as $q_{\mu_3,\sigma_3}(z \mid x, y)$. For reference, the parametric forms of the likelihood, prior and posterior distributions and their representations demonstrating the explicit dependence on these networks are given in Table 1.

The above assumptions simplify the calculation of the KL-divergence $\mathrm{KL}\big(q_{\mu_3,\sigma_3}(z \mid x, y) \,\|\, p_{\mu_1,\sigma_1}(z \mid y)\big)$ and of the negative reconstruction error. Let $\mu_{1,j}(y)$ denote the $j^{th}$ component of the function $\mu_1(y)$ (and similarly for the other means and variances), and let the size of the latent representation be $d$. After ignoring the constant terms, the KL-divergence term of the variational lower bound can be written as

$$\mathrm{KL} = \sum_{j=1}^{d} \left[\frac{1}{2}\log\frac{\sigma_{1,j}^2(y)}{\sigma_{3,j}^2(x, y)} + \frac{\sigma_{3,j}^2(x, y) + \big(\mu_{3,j}(x, y) - \mu_{1,j}(y)\big)^2}{2\,\sigma_{1,j}^2(y)}\right] \qquad (7)$$
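As a concrete illustration of (7), the following is a minimal numerical sketch of the KL-divergence between the two diagonal Gaussians $q_{\mu_3,\sigma_3}(z \mid x, y)$ and $p_{\mu_1,\sigma_1}(z \mid y)$; the function and variable names are illustrative and not taken from the paper's implementation, and the additive constant that (7) drops is kept here.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag(exp(logvar_q))) || N(mu_p, diag(exp(logvar_p))) ).

    q plays the role of the posterior q(z | x, y) with parameters (mu_3, sigma_3),
    and p the role of the conditional prior p(z | y) with parameters (mu_1, sigma_1).
    The KL-divergence decomposes into a sum over latent dimensions as in (7).
    """
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q                       # log sigma_1^2 / sigma_3^2
        + (var_q + (mu_q - mu_p) ** 2) / var_p    # (sigma_3^2 + (mu_3 - mu_1)^2) / sigma_1^2
        - 1.0                                     # constant term ignored in (7)
    )

# Toy example with a 3-dimensional latent representation.
mu_q, logvar_q = np.array([0.5, -0.2, 1.0]), np.array([-1.0, -1.0, -0.5])
mu_p, logvar_p = np.zeros(3), np.zeros(3)
print(kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p))
```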

The negative reconstruction error term in the variational lower bound in (6) can be obtained by generating samples from the posterior distribution of $z$ given $x$ and $y$, and then averaging the negative reconstruction error over these samples. For a fixed $z$, the term can be written as

$$\ell(x, z) = \log p_{\mu_2,\sigma_2}(x \mid z) = -\frac{1}{2}\sum_{j}\left[\log \sigma_{2,j}^2(z) + \frac{\big(x_j - \mu_{2,j}(z)\big)^2}{\sigma_{2,j}^2(z)}\right] + \mathrm{const} \qquad (8)$$

The choice of the posterior allows us to sample $z$ as follows:

$$\epsilon \sim \mathcal{N}(0, I) \qquad (9)$$
$$z = \mu_3(x, y) + \sigma_3(x, y) \odot \epsilon \qquad (10)$$

where $\odot$ denotes elementwise multiplication. Hence, the negative reconstruction error can alternatively be rewritten as

$$\mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\Big[\ell\big(x,\; \mu_3(x, y) + \sigma_3(x, y) \odot \epsilon\big)\Big] \qquad (11)$$

where $\ell(x, z)$ is as defined in (8).
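Under the diagonal-Gaussian assumptions of this section, the reparametrized sampling of (9)-(10) and the per-sample term $\ell(x, z)$ of (8) can be sketched as follows; the names are illustrative and this is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu3, logvar3):
    """Reparametrized sample z = mu_3 + sigma_3 * eps with eps ~ N(0, I), as in (9)-(10)."""
    eps = rng.standard_normal(mu3.shape)
    return mu3 + np.exp(0.5 * logvar3) * eps

def recon_term(x, mu2, logvar2):
    """log p(x | z) for a diagonal-Gaussian decoder, up to an additive constant, as in (8)."""
    return -0.5 * np.sum(logvar2 + (x - mu2) ** 2 / np.exp(logvar2))

# With a large enough minibatch, a single eps per training example suffices (see Section 2.3);
# here mu2 and logvar2 would be the decoder outputs evaluated at the sampled z.
```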

In order to train the model using first-order methods, we need to compute the derivative of the variational lower bound with respect to the parameters of the model. Let $\theta_1$, $\theta_2$ and $\phi$ be the parameters of $(\mu_1, \sigma_1)$, $(\mu_2, \sigma_2)$ and $(\mu_3, \sigma_3)$ respectively. Note that the KL-divergence term in (7) depends only on $\theta_1$ and $\phi$. Its derivatives with respect to $\theta_1$ and $\phi$ can be computed via the chain rule.

From (11), the derivative of the negative reconstruction error with respect to the parameters $\theta_2$ and $\phi$ is given by

$$\nabla\, \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\Big[\ell\big(x, \mu_3(x, y) + \sigma_3(x, y) \odot \epsilon\big)\Big] = \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I)}\Big[\nabla\, \ell\big(x, \mu_3(x, y) + \sigma_3(x, y) \odot \epsilon\big)\Big] \qquad (12)$$

where $\nabla$ denotes the gradient with respect to $\theta_2$ and $\phi$. The term inside the expectation can again be evaluated using the chain rule.

Figure 3: A pictorial representation of the implemented model. The KL-divergence between $q_{\mu_3,\sigma_3}(z \mid x, y)$ and $p_{\mu_1,\sigma_1}(z \mid y)$ is computed using (7) and backpropagated to update the parameters $\theta_1$ and $\phi$. Similarly, the negative reconstruction error is computed using equation (8) for the specific sampled $z$, and its gradient is backpropagated to update the parameters $\theta_2$ and $\phi$.

2.3 Implementation details

We use minibatch training to learn the parameters of the model, whereby the gradient of the objective with respect to the model parameters is computed for every minibatch and the corresponding parameters are updated. While the gradient of the KL-divergence can be computed exactly from (7), the gradient of the negative reconstruction error in (11) requires one to sample standard normal random vectors, compute the gradient for each sampled vector, and then take the mean. In practice, when the minibatch size is large enough, it is sufficient to sample one standard normal random vector per training example, and then compute the gradient of the negative reconstruction error with respect to the parameters for this vector. This has also been observed for the case of the variational autoencoder in [5].

A pictorial representation of the implemented model is given in Figure 3. Firstly, $x$ and $y$ are fed to the neural network $(\mu_3, \sigma_3)$ to generate the mean and log-variance of the distribution $q_{\mu_3,\sigma_3}(z \mid x, y)$. Moreover, $y$ is fed to the neural network $(\mu_1, \sigma_1)$ to generate the mean and log-variance of the distribution $p_{\mu_1,\sigma_1}(z \mid y)$. The KL-divergence between $q_{\mu_3,\sigma_3}(z \mid x, y)$ and $p_{\mu_1,\sigma_1}(z \mid y)$ is computed using (7), and its gradient is backpropagated to update the parameters $\theta_1$ and $\phi$. Furthermore, the mean and log-variance of $q_{\mu_3,\sigma_3}(z \mid x, y)$ are used to sample $z$, which is then forwarded to the neural network $(\mu_2, \sigma_2)$ to compute the mean and log-variance of the distribution $p_{\mu_2,\sigma_2}(x \mid z)$. Finally, the negative reconstruction error is computed using equation (8) for the specific $z$, and its gradient is backpropagated to update the parameters $\theta_2$ and $\phi$.
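The forward pass described above (and in Figure 3) can be summarized in the following sketch. The helper mlp, the layer sizes, and the names prior_net, encoder_net and decoder_net are illustrative stand-ins for the MLPs $(\mu_1, \sigma_1)$, $(\mu_3, \sigma_3)$ and $(\mu_2, \sigma_2)$; this is not the architecture or code used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """A tiny random MLP with tanh hidden layers; stands in for the networks in Figure 3."""
    Ws = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
    def forward(v):
        for i, W in enumerate(Ws):
            v = v @ W
            if i < len(Ws) - 1:
                v = np.tanh(v)
        return v
    return forward

d_x, d_y, d_z = 64, 10, 5                      # illustrative dimensions
prior_net   = mlp([d_y, 32, 2 * d_z])          # y      -> (mu_1, logvar_1)
encoder_net = mlp([d_x + d_y, 32, 2 * d_z])    # (x, y) -> (mu_3, logvar_3)
decoder_net = mlp([d_z, 32, 2 * d_x])          # z      -> (mu_2, logvar_2)

def cmma_lower_bound(x, y):
    """Variational lower bound of eq (6): reconstruction term minus KL(q(z|x,y) || p(z|y))."""
    mu1, logvar1 = np.split(prior_net(y), 2)
    mu3, logvar3 = np.split(encoder_net(np.concatenate([x, y])), 2)
    z = mu3 + np.exp(0.5 * logvar3) * rng.standard_normal(d_z)          # eqs (9)-(10)
    mu2, logvar2 = np.split(decoder_net(z), 2)
    recon = -0.5 * np.sum(logvar2 + (x - mu2) ** 2 / np.exp(logvar2))   # eq (8)
    kl = 0.5 * np.sum(logvar1 - logvar3
                      + (np.exp(logvar3) + (mu3 - mu1) ** 2) / np.exp(logvar1)
                      - 1.0)                                            # eq (7)
    return recon - kl

x, y = rng.standard_normal(d_x), rng.standard_normal(d_y)
print(cmma_lower_bound(x, y))
```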

3 Related Works

Over the past few years, several deep generative models have been proposed. They include deep Boltzmann machines (DBM) [6], generative adversarial networks (GAN) [7], variational autoencoders (VAE) [5] and generative stochastic networks (GSN) [4].

DBMs learn a Markov random field with multiple latent layers, and have been effective in modelling MNIST and NORB data. However, the training of DBMs involves a mean-field approximation step for every instance in the training data, and hence, they are computationally expensive. Moreover, there are no tractable extensions of deep Boltzmann machines for handling spatial equivariance.

All the other models mentioned above can be trained using backpropagation or its stochastic variant, and hence can incorporate recent advances in training deep neural networks, such as faster libraries and better optimization methods. In particular, a GAN learns a distribution over the data by forcing the generator to generate samples that are 'indistinguishable' from the training data. This is achieved by learning a discriminator whose task is to distinguish between the generated samples and samples in the training data. The generator is then trained to fool the discriminator. Though this approach is intuitive, it requires a careful selection of hyperparameters. Moreover, given the data, one cannot sample the latent variables from which it was generated, since the posterior is never learnt by the model.

In a VAE, the posterior distribution of the latent variables conditioned on the data is approximated by a normal distribution whose mean and variance are the output of a neural network (distributions other than normal can also be used). This allows approximate estimation of the variational log-likelihood, which can be optimized using stochastic backpropagation [8].

Both GAN and VAE are directed probabilistic models with an edge from the latent layer to the data. Conditional extensions of both these models for incorporating attributes/labels have also been proposed [9][10][11]. The graphical representation of a conditional GAN or conditional VAE is shown in Figure 2. As can be observed, both these models assume the latent layer to be independent of the attributes/labels. This is in stark contrast with our model CMMA, which assumes that the latent layer is sampled conditioned on the attributes.

It is also informative to compare the variational lower bound of conditional log-likelihood for a CVAE with (6). The lower bound for a CVAE is given by

$$\mathcal{L}_{\mathrm{CVAE}}(\theta, \phi; x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid z, y)\big] - \mathrm{KL}\big(q_\phi(z \mid x, y) \,\|\, p_\theta(z)\big) \qquad (13)$$

Note that while the lower bound for the proposed model CMMA contains a KL-divergence term that explicitly forces the latent representation obtained from $y$ to be 'close' to the latent representation obtained from both $x$ and $y$, there is no such term in the lower bound of the CVAE. This proves to be a disadvantage for the CVAE, as is reflected in the experiments section.
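To make this structural difference concrete, here is a sketch of the CVAE bound (13) computed with the same kind of toy networks used for CMMA in the Section 2.3 snippet (mlp, rng, d_x, d_y and d_z are reused from there); the prior over $z$ is a standard normal independent of $y$, and the decoder receives both $z$ and $y$. All names and sizes are illustrative assumptions, not the CVAE implementation used in the experiments.

```python
# CVAE counterparts of the illustrative CMMA networks defined in Section 2.3.
cvae_encoder = mlp([d_x + d_y, 32, 2 * d_z])   # (x, y) -> (mu, logvar) of q(z | x, y)
cvae_decoder = mlp([d_z + d_y, 32, 2 * d_x])   # (z, y) -> (mu, logvar) of p(x | z, y)

def cvae_lower_bound(x, y):
    """CVAE bound of (13): the KL term pulls q(z | x, y) towards the fixed prior N(0, I);
    nothing ties the latent representation to a y-dependent prior, unlike (6) for CMMA."""
    mu, logvar = np.split(cvae_encoder(np.concatenate([x, y])), 2)
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(d_z)
    mu_x, logvar_x = np.split(cvae_decoder(np.concatenate([z, y])), 2)
    recon = -0.5 * np.sum(logvar_x + (x - mu_x) ** 2 / np.exp(logvar_x))
    kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)   # KL(q || N(0, I))
    return recon - kl
```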

(a) The prior distribution over the latent representations for several randomly selected individuals. The width of the circles corresponds to the standard deviation. Note the high overlap between the priors, despite the attributes being different for the individuals.
(b) The contours of the prior distribution for individual 1 (in blue), and the contours of the posterior distributions for individuals 1, 2 and 3 (in red). The high uncertainty in the prior causes the posterior distributions of several individuals to lie within the prior distribution of individual 1.
Figure 4: The prior and posterior distributions of the latent representation when the dimension of the latent layer is fixed to 2.

4 Experiments

We consider the task of learning a conditional distribution over faces given the attributes. For this task, we use the cropped Labelled Faces in the Wild (LFW) dataset [12] (available at http://conradsanderson.id.au/lfwcrop/), which consists of 13,233 faces of 5,749 people, of which 4,069 people have only one image. The images are of size 64x64 and contain 3 channels (red, green and blue). Of the 13,233 faces, 13,143 faces have 73 attributes associated with them, obtained partially using Amazon Mechanical Turk and partially using attribute classifiers [13]. The attributes include 'Male', 'Asian', 'No eye-wear', 'Eyeglasses', 'Moustache', 'Mouth open', 'Big nose', 'Pointy nose', 'Smiling', 'Frowning', 'Big lips' etc. The data also contains attributes for hair, necklace, earrings etc., though these attributes are not visible in the cropped images. We use the first portion of the faces and the corresponding attributes for training the model, the next portion for validation, and keep the remaining faces and their corresponding attributes for testing.

The LFW dataset is very challenging since most people in the dataset have only one image. Moreover, any possible combination of the attributes occurs at most once in the dataset. This forces the model to learn a mapping from attributes to faces, that is shared across all possible combinations of attributes. In contrast, the face dataset used in [14], consists of several subsets of faces where only one attribute changes while others remain unchanged. Hence, one can tune the mapping from attributes to faces, one attribute at a time. This, however, isn’t possible for LFW.

In order to emphasize this factor, we show the prior distribution $p_{\mu_1,\sigma_1}(z \mid y)$ and the posterior distribution $q_{\mu_3,\sigma_3}(z \mid x, y)$ for a 2-dimensional latent representation of a few randomly selected individuals in Figure 4. Note that despite conditioning on the attributes, the prior distributions have high uncertainty, and the prior distributions for several attribute combinations overlap considerably, particularly in lower dimensions. This overlap decreases, however, as the dimension of the latent representation increases. A VAE, on the other hand, assumes a common prior for all the individuals. Hence, one can think of the conditioning in CMMA as tilting the prior of the VAE in the direction of the conditioning modality.

Moreover, the posterior always has much lower variance than the prior. Other than the fact that access to $x$ decreases the uncertainty by a large amount, the reduced variance is also an artifact of variational methods in general. In particular, for the 2-dimensional latent representations, the average standard deviation of the posterior distribution of the latent representations for both CMMA and VAE plateaued after a fixed number of iterations and did not reduce further.

4.1 CMMA architecture

The MLP $(\mu_1, \sigma_1)$ of the CMMA used in this paper (refer Figure 3) encodes the attributes, and is a neural network with a hidden layer, a soft thresholding non-linearity, and two parallel output layers, each comprising 500 units (corresponding to the mean and the log-variance of the latent representation). The MLPs $(\mu_3, \sigma_3)$ and $(\mu_2, \sigma_2)$ are convolution and deconvolution neural networks respectively. The corresponding architectures are given in Figure 5.

Figure 5: The architectures of the MLPs $(\mu_3, \sigma_3)$ and $(\mu_2, \sigma_2)$ of the CMMA used in the experiments.

4.2 Models used for comparison

We compare the quantitative and qualitative performance of CMMA against conditional Generative Adversarial Networks [11][10] (CGAN) and conditional Variational Autoencoders [5] (CVAE). We have tried to ensure that the architecture of the models used for comparison is as close as possible to the architecture of the CMMA used in our experiments. Hence, the generator and discriminator of the CGAN and the encoder and decoder of the CVAE closely mimic the MLPs $(\mu_3, \sigma_3)$ and $(\mu_2, \sigma_2)$ of the CMMA as described in the previous section.

4.3 Training

We coded all the models in Torch [15] and trained each of them on a Tesla K40 GPU. For each model, the training time was approximately one day. The Adagrad optimization algorithm was used [16]. The proposed model CMMA was found to be relatively stable to the selection of the initial learning rate and the variance of the randomly initialized weights in the various layers. For CGAN, we selected the learning rate of the generator and discriminator and the variance of weights by verifying the conditional log-likelihood on the validation set. Only the results from the best hyperparameters have been reported. We found the CGAN model to be quite unstable to the selection of hyperparameters.

(a) Faces generated from the proposed model CMMA.
(b) Faces generated from CVAE [9].
(c) Faces generated from CGAN [10] using the hyperparameters used in [10].
(d) Faces generated from CGAN [10] using the hyperparameters selected by us.
Figure 6: Faces generated from the attributes using various models (best viewed in color). For a fixed model, the rows correspond to 'Female Asian', 'Female Not-Asian', 'Male Asian' and 'Male Not-Asian' in order. The remaining attributes are varied one at a time to generate the columns. In particular, for each model, the columns of faces correspond to i) no change, ii) mouth open, iii) spectacles, iv) bushy eyebrows, v) big nose, vi) pointy nose and vii) thick lips. Note that, for our model CMMA, any change in attributes, such as mouth open, spectacles etc., is clearly reflected in the corresponding face. For the other models, this change is not very evident from the corresponding faces.

4.4 Quantitative results

For the first set of experiments, we compare the conditional log-likelihood of the faces given the attributes on the test set for the three models: CMMA, CGAN and CVAE. A direct evaluation of the conditional log-likelihood is infeasible, and for the size of the latent layer used in our experiments (500), MCMC estimates of the conditional log-likelihood are unreliable.

For the proposed model CMMA, a variational lower bound to the log-likelihood of the test data can be computed as the difference between the negative reconstruction error and KL-divergence (see (5)). The same can also be done for the CVAE model using (13).

Since we cannot obtain the variational lower bound for the other models, we also use a Parzen-window based log-likelihood estimation method for comparing the models. In particular, for a fixed test instance, we condition on the attributes to generate samples from the models. A Gaussian Parzen window is fit to the generated samples, and the log-probability of the face in the test instance is computed under the obtained Gaussian Parzen window. The bandwidth parameter of the Parzen window estimator is obtained via cross-validation on the validation set. The corresponding log-likelihood estimates for the models are given in Table 2.
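As a sketch of the Parzen-window evaluation described above (assuming an isotropic Gaussian kernel with a single bandwidth sigma shared across dimensions), the log-probability of a test face under a window fit to generated samples can be computed as follows; all names are illustrative and this is not the evaluation code used in the paper.

```python
import numpy as np

def parzen_log_likelihood(test_face, generated_samples, sigma):
    """Log-probability of a test face under an isotropic Gaussian Parzen window
    fit to faces generated from the test instance's attributes."""
    diffs = (test_face[None, :] - generated_samples) / sigma      # shape (n_samples, dim)
    log_kernels = -0.5 * np.sum(diffs ** 2, axis=1)               # unnormalized Gaussian log-kernels
    n, d = generated_samples.shape
    log_norm = np.log(n) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    # log of (1/n) * sum_k N(test_face; sample_k, sigma^2 I), computed stably
    return np.logaddexp.reduce(log_kernels) - log_norm

# Toy usage; in the paper, the bandwidth is chosen by cross-validation on the validation set.
rng = np.random.default_rng(0)
samples = rng.standard_normal((100, 8))
face = rng.standard_normal(8)
print(parzen_log_likelihood(face, samples, sigma=0.5))
```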

Model | Conditional log-likelihood (Parzen estimate) | Variational lower bound
CMMA | 9,487 | 17,973
CVAE | 8,714 | 14,356
CGAN | 8,320 | -
Table 2: Parzen-window based estimates of, and variational lower bounds to, the conditional log-likelihood for the test data (higher is better).

In both cases, the proposed model CMMA achieves a better conditional log-likelihood than the other models.

4.5 Qualitative results

While the quantitative results do convey a sense of superiority of the proposed model over the other models used in comparison, it is more convincing to look at the actual samples generated by these models. Hence, we compare the three models CGAN, CVAE and CMMA for the task of generating faces from attributes. We also compare the two models CVAE and CMMA for modifying an existing face by changing the attributes. CGAN cannot be used for modifying faces because of the uni-directional nature of the model; that is, it is not possible to sample the latent layer from an image in a generative adversarial network.

4.5.1 Generating faces from attributes

In our first set of experiments, we generate samples from the attributes using the already trained models. In a CGAN, the images are generated by feeding noise and attributes to the generator. Similarly, in a CVAE, noise and attributes are fed to the MLP that corresponds to $p_\theta(x \mid z, y)$ (see (13)) to sample the images. In order to generate images from attributes in a CMMA, we prune the MLP $(\mu_3, \sigma_3)$ from the CMMA model (refer Figure 3), and connect the MLP $(\mu_1, \sigma_1)$ in its stead, as shown in Figure 7.
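In code, generation from attributes alone amounts to sampling $z$ from the conditional prior and decoding it. The sketch below reuses the illustrative networks (prior_net, decoder_net) and the random generator from the Section 2.3 snippet; it is not the paper's implementation.

```python
import numpy as np  # prior_net, decoder_net and rng are as defined in the Section 2.3 sketch.

def generate_face_from_attributes(y, noise_variance=0.0):
    """Sample z from p(z | y) via the attribute-encoding MLP, then decode it to a face.

    Setting noise_variance to 0 corresponds to taking z at the prior mean,
    which is how sampling bias is removed in Section 4.5.1.
    """
    mu1, logvar1 = np.split(prior_net(y), 2)
    std = np.sqrt(noise_variance) * np.exp(0.5 * logvar1)    # scaled prior standard deviation
    z = mu1 + std * rng.standard_normal(mu1.shape)
    mu2, _ = np.split(decoder_net(z), 2)
    return mu2   # mean of p(x | z): the face generated for attributes y
```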

Figure 7: The model used for generating faces from attributes in CMMA is obtained by removing the MLP $(\mu_3, \sigma_3)$ from the CMMA model (refer Figure 3), and connecting the MLP $(\mu_1, \sigma_1)$ in its stead.
(a) Modifying faces with CMMA.
(b) Modifying faces with CVAE.
Figure 8: Modifying the faces in the training data by modifying the corresponding attributes using CMMA and CVAE respectively (Best viewed in color). The rows in each of the above figures correspond to i) No change, ii) Big Nose, iii) Spectacles, iv) Moustache and v) Big lips. Except for spectacles, any other change in attributes is not reflected in the faces modified by CVAE.
(a)
(b)
Figure 9: Modification of faces by setting the old attribute.

We set/reset the 'Male' and 'Asian' attributes to generate four possible combinations. The faces are then generated by varying the other attributes one at a time. In order to remove any bias from the selection of images, we set the variance parameter of the noise to 0 in CMMA, CVAE and CGAN. The corresponding faces for our model CMMA and for the other models (CVAE [9] and CGAN [10]) are shown in Figure 6. We have also presented results from the implementation of CGAN by the author of [10] (https://github.com/hans/adversarial), since the images sampled from the CGAN trained by us were quite noisy.

The columns of images for each model correspond to the attributes i) no change, ii) mouth open, iii) spectacles, iv) bushy eyebrows, v) big nose, vi) pointy nose and vii) thick lips. As is evident from the first sub-figure in Figure 6, CMMA incorporates any change in attribute, such as 'mouth open' or 'spectacles', in the corresponding face for each of the rows. However, this does not seem to be the case for the other models. We hypothesize that this is because our model explicitly minimizes the KL-divergence between the latent representation of the attributes and the joint representation of the face and attributes.

4.5.2 Varying the attributes in existing faces

In our next set of experiments, we select a face from the training data, and vary the attributes to generate a modified face. For a CMMA, this can be achieved as follows (also refer Figure 3):

  1. Let attr_orig be the original attributes of the face and attr_new be the new attributes that we wish the face to possess.

  2. Pass the selected face and attr_new through the MLP $(\mu_3, \sigma_3)$.

  3. Pass attr_orig and attr_new through the MLP $(\mu_1, \sigma_1)$ and compute the difference.

  4. Add the difference to the output of the MLP $(\mu_3, \sigma_3)$.

  5. Pass the resultant sum through the decoder $(\mu_2, \sigma_2)$.

As in the previous case, we set the variance parameter of the noise to 0. A sketch of this procedure in code is given below.
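The following sketch of the five-step procedure again reuses the illustrative networks (encoder_net, prior_net, decoder_net) from the Section 2.3 snippet and uses only the network means (noise variance set to 0); it is not the paper's implementation.

```python
import numpy as np  # encoder_net, prior_net and decoder_net are as defined in the Section 2.3 sketch.

def modify_face(x, attr_orig, attr_new):
    """Modify an existing face x so that it reflects attr_new instead of attr_orig."""
    # Step 2: encode the selected face together with the new attributes.
    mu3, _ = np.split(encoder_net(np.concatenate([x, attr_new])), 2)
    # Step 3: encode both attribute vectors with the attribute MLP and take the difference.
    mu1_new, _ = np.split(prior_net(attr_new), 2)
    mu1_orig, _ = np.split(prior_net(attr_orig), 2)
    # Step 4: add the difference to the output of the face-and-attribute encoder.
    z = mu3 + (mu1_new - mu1_orig)
    # Step 5: pass the resultant sum through the decoder to obtain the modified face.
    mu2, _ = np.split(decoder_net(z), 2)
    return mu2
```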

Note that we cannot use CGAN for this set of experiments since, given a face, it is not possible to sample the latent layer in a CGAN. Hence, we only present the results corresponding to our model CMMA and to CVAE. The corresponding transformed faces are given in Figure 8. As can be observed, for most of the attributes, our model CMMA is successfully able to transform images by removing moustaches, adding spectacles, making the nose bigger or pointier, etc.

4.5.3 Modifying faces with missing attributes

Next, we select a few faces from the web and evaluate the performance of the model for modifying these faces. In order to modify these faces, one needs to sample the attributes conditioned on the faces. The algorithm for modifying the faces mentioned in the previous section is then applied. The corresponding results are given in Figure 10.

(a) Unchanged image
(b) With a big nose
(c) With moustache
(d) With spectacles and moustache
Figure 10: Modification of images with missing attributes (Best viewed in color).

5 Concluding Remarks

In this paper, we proposed a model for conditional modality generation that forces the latent representation of one modality to be 'close' to the joint representation of multiple modalities. We explored the applicability of the model for generating and modifying images using attributes. Quantitative and qualitative results suggest that our model is more suitable for this task than CGAN [10] and CVAE [9]. The proposed model is general and can be used for other tasks whereby some modalities need to be conditioned on while others need to be generated, for instance, translation of text or transliteration of speech. We wish to explore the applicability of the model to such problems in the future.

References

  • [1] Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., Ng, A.Y.: Multimodal deep learning. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11). (2011) 689–696

  • [2] Sohn, K., Shang, W., Lee, H.: Improved multimodal deep learning with variation of information. In: Advances in Neural Information Processing Systems. (2014) 2141–2149
  • [3] Srivastava, N., Salakhutdinov, R.R.: Multimodal learning with deep Boltzmann machines. In: Advances in Neural Information Processing Systems. (2012) 2222–2230
  • [4] Bengio, Y., Thibodeau-Laufer, E., Alain, G., Yosinski, J.: Deep generative stochastic networks trainable by backprop. In: Proceedings of The 31st International Conference on Machine Learning. (2014) 226–234
  • [5] Kingma, D.P., Welling, M.: Auto-encoding variational Bayes. International Conference on Learning Representations (2014)
  • [6] Salakhutdinov, R., Hinton, G.E.: Deep Boltzmann machines. In: International Conference on Artificial Intelligence and Statistics. (2009) 448–455

  • [7] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in Neural Information Processing Systems. (2014) 2672–2680
  • [8] Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In Jebara, T., Xing, E.P., eds.: Proceedings of the 31st International Conference on Machine Learning (ICML-14), JMLR Workshop and Conference Proceedings (2014) 1278–1286
  • [9] Kingma, D.P., Mohamed, S., Rezende, D.J., Welling, M.: Semi-supervised learning with deep generative models. In: Advances in Neural Information Processing Systems. (2014) 3581–3589
  • [10] Gauthier, J.: Conditional generative adversarial nets for convolutional face generation
  • [11] Mirza, M., Osindero, S.: Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784 (2014)
  • [12] Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: A database for studying face recognition in unconstrained environments. Technical Report 07-49, University of Massachusetts, Amherst (October 2007)
  • [13] Kumar, N., Berg, A.C., Belhumeur, P.N., Nayar, S.K.: Attribute and simile classifiers for face verification. In: International Conference on Computer Vision, IEEE (2009) 365–372

  • [14] Kulkarni, T.D., Whitney, W., Kohli, P., Tenenbaum, J.B.: Deep convolutional inverse graphics network. arXiv preprint arXiv:1503.03167 (2015)
  • [15] Collobert, R., Kavukcuoglu, K., Farabet, C.: Torch7: A matlab-like environment for machine learning. In: BigLearn, NIPS Workshop. Number EPFL-CONF-192376 (2011)
  • [16] Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research 12 (2011) 2121–2159