Variational Autoencoded Regression: High Dimensional Regression of Visual Data on Complex Manifold

08/12/2019 ∙ by YoungJoon Yoo, et al. ∙ Imperial College London ∙ Seoul National University

This paper proposes a new high dimensional regression method by merging Gaussian process regression into a variational autoencoder framework. In contrast to other regression methods, the proposed method focuses on the case where the output responses lie on a complex high dimensional manifold, such as images. Our contributions are summarized as follows: (i) A new regression method estimating high dimensional image responses, a case not handled by existing regression algorithms, is proposed. (ii) The proposed regression method introduces a strategy to learn the latent space as well as the encoder and decoder so that the result of the regressed response in the latent space coincides with the corresponding response in the data space. (iii) The proposed regression is embedded into a generative model, and the whole procedure is developed within the variational autoencoder framework. We demonstrate the robustness and effectiveness of our method through a number of experiments on various visual data regression problems.




1 Introduction

Regression of paired input and output data with an unknown relationship is one of the most crucial challenges in data analysis. In diverse research fields, such as trajectory analysis, robotics, and stock-market prediction [15, 44, 28, 22], target phenomena are interpreted as a form of paired input/output data. In these applications, a regression algorithm is typically used to estimate the unknown response for a given input by using the information obtained from the observed data pairs. Many vision applications can also be expressed as such input/output data pairs. For example, in Fig. 1 (a), the sequence of motion images can be described by input/output paired data, where the input is the relative order and the output response is the corresponding image. The motion capture data and their corresponding images in Fig. 1 (b) are another example: the input data are 3D joint positions, and their responses are the corresponding posture images. If we can model the implicit function underlying the given image data pairs via regression, we can estimate unobserved images that correspond to new input data.

However, applying existing multiple output regression algorithms [3, 2, 1, 39] to these kinds of visual data applications is not straightforward, because visual data are usually represented in high dimensional spaces. In general, high dimensional visual data (such as image sequences) are difficult to analyze with classical probabilistic methods because of their limited modeling capacity [18, 17]. Thus, regression of visual data to estimate visual responses requires a novel approach.

Figure 1: Examples of paired data in vision applications. (a) For the image sequence, the domain can be defined by the space representing relative orders of image sequences. (b) For joint-pose data pairs, the joint vector space can be a possible domain.

In handling high dimensional complex data, recent attempts [12, 27, 38, 24, 36] using deep generative networks, such as the variational autoencoder (VAE) [24], have achieved significant success in reconstructing images. In a VAE, a latent variable is defined to embed the compressed information of the data, and an encoder is trained to map the data space into its corresponding latent space. A decoder is also trained to reconstruct images from sampled points in the latent space. The projection of the input data into the latent space (via the encoder) captures the essential characteristics of the data and allows the regression task to be performed in a much lower dimensional space. The regression in the latent space is done together with the training of the encoder and decoder. However, a naive combination of regression and the VAE is not particularly effective, because the decoder and the latent space are not designed so that the result of the regressed response in latent space and the corresponding response in data space coincide. Therefore, a new method to simultaneously train the latent space and the encoder/decoder is required to achieve coincidence between the regressed latent vector and the reconstructed image.
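To make the encode-sample-decode cycle described above concrete, the following minimal sketch uses toy linear maps in place of deep networks (all weight matrices, dimensions, and names here are illustrative assumptions, not the paper's architecture); a real VAE would train these maps with the variational objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': maps a data vector to latent mean / log-variance."""
    return W_mu @ x, W_logvar @ x

def sample(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I)."""
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def decode(z, W_dec):
    """Toy linear 'decoder': maps a latent vector back to data space."""
    return W_dec @ z

d, k = 6, 2                        # data and latent dimensions (illustrative)
x = rng.standard_normal(d)
W_mu = rng.standard_normal((k, d))
W_logvar = np.zeros((k, d))        # log-variance 0, i.e. unit variance
W_dec = rng.standard_normal((d, k))

mu, log_var = encode(x, W_mu, W_logvar)
x_rec = decode(sample(mu, log_var), W_dec)
```

The essential point for this paper is that any downstream operation (here, regression) acts on the k-dimensional latent vector rather than the d-dimensional data vector.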

In this paper, we solve this problem by combining the VAE [24] and Gaussian process regression [35]. The key idea of this work is to perform regression in the latent space instead of the high-dimensional image space. The proposed algorithm generates a latent space that compresses the information of both the domain and the output image using the VAE, and projects the data pairs into this latent space. Then, regression is conducted on the projected data pairs in latent space, and the decoder is trained so that the regression results in latent space and image space coincide. To the best of our knowledge, this is the first attempt to apply the VAE framework to solve the regression problem. The whole process, including the loss function, is designed as a generative model, and a new mini-batch composition method is presented for training the decoder to satisfy our purpose.

All connection parameters of the encoder/decoder are inferred by the end-to-end stochastic gradient descent method described in [23]. The proposed regression method is validated with two different examples: sports sequences and motion image sequences with skeletons. The first example presents a regression case from a simple domain to a complex codomain, and the second presents a complex-domain-to-complex-codomain case.

2 Related Work

Deep Generative Network: Classical probabilistic generative models [34, 8, 42, 19, 29, 40, 5] have proven successful in understanding diverse unsupervised data, but their descriptive ability is insufficient to fully explain complex data such as images [17]. Recently, as in other work in the vision area [16, 31], deep layered architectures have been successfully applied to this problem with powerful data generation performance. Among these architectures, the generative adversarial network (GAN) [12] and generative moment matching networks (GMMN) directly learn a generator that maps latent space to data space. Meanwhile, variants of the restricted Boltzmann machine (RBM) [18, 17, 37, 38] and probabilistic autoencoders [24, 30, 11] simultaneously learn an encoder, which defines the map from data to latent space, and a generator (decoder). The former methods, and especially variants of GAN [12, 9, 33], are reported to render the edges of generated images more sharply than the latter methods. However, the applicability of these methods is restricted by the difficulty of discovering the relationship between data and latent space. This innate nature makes it difficult to use adversarial networks for designing the regression. Therefore, this paper adopts the variational autoencoder framework [24], which is also more suitable than the RBM family for extension to a regression model.

Figure 2: Overall scheme of the proposed method. For observed data pairs, the proposed autoencoder reconstructs the response, as shown in the top right. For an unobserved response to a newly given input, the latent code cannot be obtained through the encoder, because the response itself is unknown. Thus, to estimate the unobserved response, we obtain its latent code by regression from the observed pairs and the new input, and estimate the response from that latent code.

Variational Autoencoder: Since Kingma et al. [24] first published the variational autoencoder (VAE), numerous applications [32, 43] have been presented to solve various problems. Yan et al. [43] proposed the conditional VAE in order to generate images conditioned on attributes given in the form of sentences. Furthermore, recent work [14, 25, 13] has demonstrated that a sequence in latent space can be mapped back to a sequence of data. Hence these methods embedded dynamic models, such as recurrent neural networks and the Kalman filter [25, 21], into the VAE framework. These algorithms [14, 25, 13] successfully show the ability of dynamic models in a latent space to capture the temporal changes of relatively simple objects in images. In this paper, we apply the VAE to the regression task on a relatively complex manifold.

Regression: Regression of paired data is theoretically well established, and analytic solutions for infinite-dimensional basis functions [4, 35] have been derived over the last century. In non-parametric cases, the Gaussian process [35, 26] provides a general solution by expanding Bayesian linear regression with kernel metrics and a Gaussian process prior. Using this method, we can estimate the output for an unobserved target as a Gaussian posterior composed from the given data pairs and the new input. However, applying these algorithms to high-dimensional output data is difficult, because the kernel metric has limited capacity to express complicated high dimensional data. Variants of multiple output regression algorithms [1, 2, 3, 39] have been proposed to deal with multi-dimensional output responses. Still, these algorithms focus on handling relatively low dimensional output responses and cannot sufficiently describe complicated data such as images. In this paper, we construct a regressed latent space by using a variational autoencoder to handle complex data.
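For reference, the GP posterior that this machinery rests on can be sketched in a few lines for a scalar output with a squared-exponential kernel (the variable names, length scale, and noise level below are illustrative, not the paper's settings):

```python
import numpy as np

def se_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel k(a, b) = exp(-(a - b)^2 / (2 l^2))."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-3):
    """Posterior mean and (diagonal) variance of a zero-mean GP at x_test."""
    K = se_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = se_kernel(x_train, x_test)            # cross-covariance train/test
    K_ss = se_kernel(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s.T @ alpha
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
mean, var = gp_posterior(x, y, np.array([0.5]))
```

The proposed method applies exactly this kind of posterior, but with each output dimension being a coordinate of the latent vector rather than the raw image.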

3 Proposed Method

3.1 Overall Scheme

Given the target data pairs, our goal is to find the unknown response for a new input. In this paper, the response is defined as an image, and the corresponding input is defined according to the application, as in Fig. 1.

As shown in Fig. 2, for the observed data pairs, the encoder/decoder produces a reconstruction of each observed image. For the observed data, the encoding network produces the mean and variance for one part of the latent vector, i.e., it compresses the image into a latent variable with Gaussian mean and variance. The remaining part of the latent vector is modeled by a second output representing its mean and variance, so the Gaussian distribution of the full latent vector is described by both pairs of statistics. For an unobserved image corresponding to a newly given input, the proposed method produces an estimator of the response.

Using a latent vector sampled from this distribution, the decoding network reconstructs the output response. Note that if the networks are well trained by the scheme in Section 3.4, the reconstruction should be similar to the observed image. However, for an unobserved response to a given input, the latent vector cannot be obtained from the encoder, because we have no information about the response. To estimate it, we obtain the latent vector by regression from the observed pairs and the new input. For this regression, a latent vector is sampled for each observed response, and the unobserved latent vector is then estimated using the Gaussian process (GP) regression described in Section 3.2, whose kernel parameter can be produced by an additional encoder. In this paper, for computational simplicity, we combine this kernel encoder with the main encoder network and extend its outputs accordingly.

After the latent vector is estimated, the response is reconstructed from it by the decoding network. Note that the decoder should reconstruct the response not only from latent vectors sampled by the encoder, but also from latent vectors obtained as regression results. The whole procedure is designed as a generative framework with a joint distribution over the data and latent variables, and hence can be derived by the VAE algorithm.

3.2 Variational Autoencoded Regression

The proposed scheme (depicted in Fig. 2) is derived from the directed graphical model in Fig. 3. The diagram in Fig. 3 (a) represents the generative model describing a typical reconstruction problem, and the diagram in Fig. 3 (b) is the variational model, which not only approximates the generative model in Fig. 3 (a) but also performs the regression for the estimation of an unobserved response by utilizing an information variable related to it. The joint distribution can be expressed by the likelihood function and the prior distribution, where the generative parameter refers to the set of all parameters related to the generation of the response from the latent variable. In our method, the prior distribution of the latent variable is defined as a zero mean Gaussian distribution, as in typical variants of the VAE [43, 24]. Also, the likelihood function depicts the decoding process in the proposed scheme. Below, it is shown that the generative parameter is realized by the parameters of the decoding network.

Once the joint distribution is defined, the posterior can in principle be derived from Bayes' theorem, but the calculation is intractable. Therefore, a variational distribution is introduced to approximate the true posterior. Unlike the generative model, the variational distribution additionally conditions on the input in order to sample the latent vector resulting from GP regression for the unknown response. It represents the overall encoding procedure generating the latent variable from the input data pair, and correspondingly the variational parameter is realized by the encoder parameters described in Section 3.1. Importantly, the variational distribution should be able to explain both cases: 1) an observed image, and 2) an unobserved image, which requires regression as mentioned previously. For the first case, the variational distribution is given by the encoder; for the latter case, it is given by the GP regression procedure estimating the latent vector for the input.

In order to estimate the generative and variational parameters that minimize the distance between the true posterior and the variational distribution, we minimize the Kullback-Leibler divergence between them. Following the derivation in [6, 24], the minimization procedure is converted to the loss in (1), which combines a KL term summed over the latent codes with reconstruction terms summed over the output responses to be regressed. In (1), the reconstructed response is produced from the latent vector by the decoding network, as depicted in Fig. 2. The variational and generative parameters are realized by the connection parameters of the encoding network with regression and of the decoding network, respectively (see Section 3.3). To minimize the loss in (1), we propose a method for mini-batch learning (see Section 3.4). The Adam optimizer [23] is used for stochastic gradient descent training.
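Numerically, the minimized objective combines an analytic Gaussian KL term with two squared-error terms (reconstruction and regression estimation, interpreted in Section 3.3). A minimal sketch, with stand-in arrays in place of the actual network outputs, could look like:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Analytic KL( N(mu, diag(exp(log_var))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def vae_regression_loss(x_rec, x, x_reg, mu, log_var):
    """KL term + reconstruction error + regression-estimation error.

    x_rec : decoder output for an encoder-sampled latent
    x_reg : decoder output for a latent obtained by GP regression
    """
    kl = kl_to_standard_normal(mu, log_var)
    recon = np.sum((x_rec - x) ** 2)
    regress = np.sum((x_reg - x) ** 2)
    return kl + recon + regress

x = np.ones(4)
loss = vae_regression_loss(x, x, x, np.zeros(2), np.zeros(2))
```

With a perfect reconstruction and a posterior equal to the standard-normal prior, all three terms vanish, which is a useful sanity check when implementing the objective.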

Figure 3: The directed graphical model of the proposed method. (a) Generative model for the data and the latent variable. (b) Variational distribution to approximate the posterior of the generative model. The response is not observed for the newly given input.

3.3 Model Description

For the encoding part, we define the variational distribution that maps a data pair into the latent space. For both observed and unobserved images, it is defined as a Gaussian distribution, as in (2), which enables us to analytically solve the KL-divergence term in (1) following [24].

The variational parameter consists of the Gaussian mean function and the variance function, which are produced in different ways depending on the input data. When the input is an observed data pair, the encoder yields the mean and a diagonal covariance matrix. When the input is a new domain point, the mean and variance are determined by GP regression from the observed latent vectors, the inputs, and the kernel parameters, where Z refers to the matrix of stacked latent vectors and the identity matrix has the dimension of the latent space. The corresponding Gram matrices are built from the kernel evaluated on the observed and new inputs.

For the kernel, we use a simplified version of the SE kernel [35]. Eventually, the variational parameter is realized by the weight matrices of the encoder network. In summary, the distribution in (2) is given by the encoder outputs for observed pairs and by the GP posterior for new inputs.

Figure 4: Training strategy of the proposed method. The mini-batch is generated from the sampled training data sequences.

For the decoding procedure, we define the likelihood function as a Gaussian distribution with a mean given by the decoder and a fixed variance. Since the prior of the latent variable is a zero mean Gaussian with identity covariance, the decoder weights represent the generative model parameters. Correspondingly, the second and third terms in (1) are interpreted as follows: since the negative log-likelihood reduces to a squared distance in our algorithm, the second term represents the reconstruction error for the given data pairs, and the third term denotes the estimation error for responses obtained via regression from the given input data and the observed data.

3.4 Training

To train the parameters of the proposed model, a sufficient amount of training data is required. In our algorithm, a number of different training sequences are used, as shown in Fig. 4. These training data pairs share similar semantics with the target (test) data pairs. If the target data pair is a golf swing sequence, the training data pairs will be different golf swing sequences obtained in different situations. Once the parameters are trained on a training dataset consisting of diverse golf swings, the proposed method can complete the target image sequence via regression from an incomplete test sequence of a golf swing. After training the model with the mini-batch scheme below, we fine-tune the parameters with the observed data pairs of the target regression.

Mini-Batch Training: The work in [7] reports that the composition of the mini-batch is critical when using variants of stochastic gradient descent [23, 10] to train the parameters. To generate a batch, in this paper, a subset of the training sequences is randomly selected. For each selected training sequence, we randomly pick a number of data pairs. For the earlier data pairs among them, we obtain the latent vectors from the encoder function, which trains the encoder parameters. For the remaining data pairs, we obtain the latents by regression (Section 3.3) from the encoded latents, the inputs, and the kernel parameters; their responses are assumed to be unknown in the encoding process. This data set is used to train the decoder network to reconstruct the proper responses not only for latents encoded from observed data pairs, but also for latents obtained from the regression. The corresponding loss between the estimated and the actual responses is the third term in (1). We note that this loss term can be calculated because the held-out responses can serve as ground truth for the regression. After constructing the batch, the stochastic gradient [23] for the batch is calculated to train all parameters.
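The split described above, between pairs whose latents come from the encoder and pairs whose latents are obtained by regression, can be sketched as follows (the sequence counts and split sizes are illustrative hyper-parameters, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_minibatch(sequences, n_seq=2, n_pairs=6, n_regressed=2):
    """Sample n_seq sequences; within each, pick n_pairs data-pair indices
    and split them: the first ones are encoded directly, the last
    n_regressed get their latents via GP regression (responses held out)."""
    batch = []
    chosen = rng.choice(len(sequences), size=n_seq, replace=False)
    for s in chosen:
        idx = rng.choice(len(sequences[s]), size=n_pairs, replace=False)
        batch.append({
            "seq": int(s),
            "encoded": idx[:-n_regressed],    # latents from the encoder
            "regressed": idx[-n_regressed:],  # latents from GP regression
        })
    return batch

# Toy corpus: 5 sequences of 20 data pairs each (indices stand in for pairs).
sequences = [list(range(20)) for _ in range(5)]
batch = make_minibatch(sequences)
```

Because the "regressed" pairs actually have known responses, their held-out images provide the ground truth for the third loss term, exactly as described above.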

Figure 5: Batch generation for fine-tuning. The batch is composed of observed data pairs (red) and sampled data pairs in training dataset.
Figure 6: Qualitative results of regression on the sports dataset (best viewed in color). Row (a) for each sport shows the proposed regression results. Row (b) shows the regression results of R-VAE. Row (c) shows the results of MOGP [1]. The results on the right show samples of reconstruction results for observed images.
Figure 7: Experimental results comparing the proposed method with the NN method. (a) Proposed method. (b) NN on the latent space learned by the VAE.

Parameter Fine-Tuning: After training the parameters using batches from the training dataset, we further fine-tune them with the observed data pairs, in the same spirit as previous regression techniques [35, 26]. Note that the regression part is not trained here, because ground truth is not available for the test dataset. For the fine-tuning process, mini-batches are composed of the observed test data pairs and randomly selected data sequences from the training set, as in Fig. 5. When the total number of observed test data pairs is smaller than the batch size, we increase the number of samples by allowing repetition. The parameters are then fine-tuned over several iterations. The detailed implementation is described in the supplementary material.
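The batch composition for fine-tuning, including sampling with repetition when few test pairs are observed, can be sketched as follows (the index ranges and batch split below are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)

def finetune_batch(observed_idx, train_idx, n_obs, n_train):
    """Compose a fine-tuning batch from observed test pairs (sampled with
    repetition if fewer than n_obs are available) plus randomly drawn
    training pairs, as in Fig. 5."""
    obs = rng.choice(observed_idx, size=n_obs,
                     replace=len(observed_idx) < n_obs)
    tr = rng.choice(train_idx, size=n_train, replace=False)
    return np.concatenate([obs, tr])

# Only 3 observed test pairs, so they are repeated to fill their quota.
batch = finetune_batch(np.arange(3), np.arange(100, 200),
                       n_obs=8, n_train=8)
```

Mixing in training pairs keeps the decoder from overfitting the handful of observed test images during fine-tuning, which is the design intent suggested by Fig. 5.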

4 Experiments

In the experiments, we evaluated the regression capability of the proposed method on two applications composed of image data: (1) a problem with a simple temporal domain and a complicated codomain, and (2) a problem with a complicated domain and codomain. For the first application, we used sports data sequences obtained from YouTube. Human pose reconstruction for a given skeleton was tested as the second application.

4.1 Sports Data Sequences

Evaluation Scenario: In this scenario, we created datasets for three sports: baseball swing, golf swing, and weightlifting. The dataset includes baseball swing, golf swing, and weightlifting sequences from YouTube. Images are included for each action sequence, and their relative orders are given. The domain is the space of relative orders, and a point in it is assigned to each image according to its relative order in the entire sequence. For testing, the golf and baseball swings were trained with randomly selected sequences and tested with those that remained; the weightlifting scenario was trained analogously. We executed the regression for each test sequence with a subset of observed images from each sequence, and compared the results with multiple-output GP regression (MOGP) [1] and GP regression combined with a vanilla VAE [24] (called R-VAE from here on). For R-VAE, we conducted the fine-tuning process in the same way as for the proposed method. For MOGP, we trained the kernel with two-thirds of the images in the given sequences.

Qualitative Analysis: Fig. 6 shows a qualitative comparison of image generation results. The sequences in Fig. 6 show samples uniformly picked among the regressed responses from evenly divided points of the domain. As seen in (a), the proposed method generated the most accurate responses compared to the other methods. R-VAE also succeeded in capturing the coarse characteristics of the background and the motions of the actions. However, the generated images in (b) suffer from a large amount of noise; for some images it is difficult to recognize the motion (circled in red). There are also instances in which the order of the image was not matched (circled in blue), and instances in which the background of the image was not matched (circled in green). The images in the box show samples of reconstruction results for given image pairs. Both the proposed method and R-VAE successfully reconstructed the images, but their regression performance differs greatly. As seen in (c), MOGP was not successful in describing the motion changes in the images: every regression converged to the average of the training images.

Figure 8: Analysis of the effect of fine-tuning. (a), (b): the regression result of the proposed method is shown in the first row and that of R-VAE in the second row. (c): the images in the box denote samples of reconstruction results for observed images.

We also conducted experiments comparing the proposed method with a Nearest Neighbor (NN) baseline. We investigated the reconstruction results obtained by applying NN in the latent space after VAE learning. However, the latent space encoded by the vanilla VAE is not well suited to performing regression with NN (see Fig. 7). This is because the encoding of the background region plays a dominant role in NN compared to the motion region. This problem is clearly seen in the bottom-right sequence of Fig. 7: although the background (green and sky) region is relatively similar to the observation, the swinging human regions are not correctly regressed. This shows that the proposed regression in the latent space performs well, achieving the expected regression results in the image space. The encoder and decoder are trained to link the regression results in the latent space directly to the regression results in the image space, which is not trivial, as shown in Fig. 7 (b). This is the second contribution of the proposed method (Abstract, (ii)).

Fig. 8 shows the effect of the fine-tuning process. The first and second columns show the results with and without fine-tuning; the result of the proposed method is shown in the first row and that of R-VAE in the second row. Before fine-tuning, both methods generated noisy outputs, but the proposed method captured the broad characteristics of the background as well as the changes of the motions. In R-VAE, the background information was less accurate than in the proposed method (circled in red). After the fine-tuning process, both methods accurately reconstructed the given image pairs, as in (c). Nevertheless, the regression performance of the two methods varied significantly, as in (b).

Figure 9: Results from latent samples at different deviations from the mean.
Structural Similarity Index Measure [41] results (with / without background)
Sports     Proposed        R-VAE [24]      MOGP [1]        NN
Baseball   0.610 / 0.607   0.492 / 0.489   0.803 / 0.247   0.215 / 0.210
Golf       0.752 / 0.707   0.578 / 0.543   0.845 / 0.114   0.244 / 0.213
Snatch     0.377 / 0.369   0.207 / 0.205   0.626 / 0.019   0.206 / 0.198
Table 1: SSIM for the results with / without background.

Fig. 9 shows the image generation results for different standard deviations. As with the original GP regression, the proposed method estimates the output responses in the form of a mean and variance, because the latent vector for reconstructing the image is sampled from a Gaussian distribution, as in (7). As seen in (a), the proposed algorithm captured the core semantics of the motion in each image despite the deviation change. In R-VAE, the regression results were plausible when the sampled latent was close to the mean, but the motion in the image regressed to a totally different action when adding large amounts of noise (up to 1.0). From this result, we can see that R-VAE also has the ability to align the images in the latent space according to their order, as reported in previous works [24, 13]. However, the results also show that the learned variance of R-VAE does not represent the motion semantics required for regression well, which is essential for realizing GP regression in the image space.

SSIM results for different standard deviations
Sports     Method       +0.5     +1.0     +1.5
Baseball   proposed     0.6453   0.5980   0.5307
           R-VAE [24]   0.4993   0.4402   0.3825
Golf       proposed     0.7203   0.4839   0.4422
           R-VAE [24]   0.5642   0.4026   0.2417
Snatch     proposed     0.4042   0.3656   0.3629
           R-VAE [24]   0.2700   0.1645   0.0770
Table 2: SSIM for images generated at different standard deviations.
Figure 10: Human pose estimation results from the joint vectors (best viewed in color). Row (a) shows the C-VAE (1) results. Row (b) shows the C-VAE (2) results. Row (c) shows the results of the proposed method. Row (d) shows the ground truth.

Quantitative Analysis: The quantitative performance was measured using the Structural Similarity Index Measure (SSIM) [41], which captures the structural similarity between two images. We estimated the images in the test set by using their domain information only, and compared the similarity between the ground truth images and the regression results. Table 1 shows the performance measures for the generated regression images. For the three different sports sequences, the proposed method generated images more similar to the ground truth (GT) than R-VAE did. Interestingly, the results of MOGP [1], which converged to the average of the images, were measured as most similar among the tested methods when the background was included. This is because the background of the average image is almost the same as the background of the GT when the background of the GT is fixed. When we measured the similarity without the background region, MOGP was not successful and the proposed algorithm achieved the highest performance. Also, consistent with the result shown in Fig. 7, the NN method performed poorly quantitatively. Table 2 and Fig. 9 show the performance when changing the standard deviation. We confirmed that the proposed method generated more plausible outputs than R-VAE in all cases.
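As a reference for the metric, SSIM can be written directly from its definition. The sketch below computes a single-window (global) variant; the paper presumably uses the standard locally windowed SSIM of [41], so this is a simplified stand-in with the usual constants:

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM between two equal-shape grayscale images.

    Uses the standard stabilizing constants c1 = (0.01 L)^2 and
    c2 = (0.03 L)^2 for dynamic range L. A faithful SSIM would average
    this quantity over local sliding windows.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()   # cross-covariance
    return ((2 * mx * my + c1) * (2 * cxy + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
score = ssim_global(img, img)   # identical images give SSIM = 1
```

The measure factors luminance, contrast, and structure, which is why a blurry average image over a fixed background can still score highly, matching the MOGP observation above.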

4.2 Human Pose Reconstruction

Evaluation Scenario: For this experiment, we used the Human 3.6 Million (H3.6m) [20] dataset to generate proper human appearances given the joint positions. The dataset provides joint positions, so the input data lie in the joint-position space. The dataset includes diverse actions, and each action is repeatedly performed by different actors. Our goal is to estimate the proper image of a new skeleton by utilizing the observed pairs of joint positions and images. In the experiment, we used the 'greeting' and 'posing' scenarios of the H3.6m dataset. The scenario for each actor was captured from different viewpoints, resulting in multiple human pose sequences per actor. We trained the model with the motions of different actors using sequences from each actor. Then, we picked the observations from the remaining four sequences and conducted the regression. The joint vectors for the regression were selected from the sequences from which the observations were selected; joint vectors from other actors were also tested. For comparison, we used the recent conditional VAE (C-VAE) [43] method, which generates an image according to a given attribute coupled with a sampled latent code. In this experiment, the joint vector was used as the attribute.

SSIM results for human pose generation
Actors   Proposed (A)   C-VAE (A)   Proposed (B)   C-VAE (B)
#1       0.7402         0.4849      0.5227         0.4059
#2       0.6743         0.4265      0.4775         0.3580
#3       0.7295         0.5094      0.5013         0.4268
#4       0.7671         0.4954      0.5224         0.4198
Table 3: Similarity measures for generated human pose images.

Qualitative Analysis: Fig. 10 (A) shows the pose generation results of the proposed algorithm and C-VAE. For C-VAE (1), we used a randomly sampled latent code as in [43]. For C-VAE (2), the latent code was given by the proposed regression block in Fig. 2. As shown in (c), the images regressed by the proposed method successfully describe the overall motion of each human pose. Also, note that the background of each image was correctly generated according to the viewpoint of the observed data pairs. The images generated by C-VAE contain a large amount of noise, but they captured the rough silhouettes of the actors. This result is notable because C-VAE usually deals with cases in which the attribute is discrete. The result from C-VAE (2) was clearer than the result from C-VAE (1), but the difference was not significant. The results in Fig. 10 (B) show the output responses when the joint vectors of other actors were given. The images in the blue box are the ground truth poses, and the images in the red box are the regression results of the proposed method. This result shows that the proposed method generates poses that resemble those of the input joint vectors while preserving the appearance of the given data pairs via regression. Specifically, when the given pair involves a man wearing white clothes, the generated image shows a man wearing the same clothes in a pose similar to the GT image. C-VAE (2) was not successful in generating a corresponding pose for a given joint vector from other actors.

Quantitative Analysis: Table 3 shows the similarities between the generated images and the ground truth images. The first two columns, denoted by (A), represent the quantitative results of experiment (A) in Fig. 10. In this experiment, the proposed method achieved a higher score than C-VAE (2). For experiment (B), we compared the similarity between the regressed image and the original images for the joint vector (green box in Fig. 10). There, our method also achieved a higher score than C-VAE (2).

In this experiment, the input data lay in a high dimensional space and the target joint vectors were selected without considering temporal information. Despite the complicated and non-sequential input domain, the proposed regression method achieved reasonable output responses describing the semantics given in the input and the identity information contained in the observed pairs. This means that the proposed method is applicable not only to temporal input but also to more complex, non-sequential input.

5 Conclusion

In this paper, we have proposed a novel regression method for high dimensional visual output. To tackle this challenge, the proposed regression method is designed so that the result of the regressed response in a latent space and the corresponding response in the data space coincide. Through qualitative and quantitative analysis, it has been verified that our method properly estimates the regressed image responses and offers an approximation of the complicated input-output relationship. This paper marks meaningful progress in the regression field, in that our work introduces a way to combine a deep layered architecture with regression in a probabilistic framework.

6 Acknowledgment

This work was partly supported by the ICT R&D program of MSIP/IITP (No.B0101-15-0552, Development of Predictive Visual Intelligence Technology), the SNU-Samsung Smart Campus Research Center at Seoul National University, EU FP7 project WYSIWYD under Grant 612139 and the BK 21 Plus Project. We thank the NVIDIA Corporation for their GPU donation.


References

  • [1] M. A. Alvarez and N. D. Lawrence (2011) Computationally efficient convolved multiple output gaussian processes. Journal of Machine Learning Research 12 (May), pp. 1459–1500.
  • [2] M. A. Alvarez, D. Luengo, M. K. Titsias, and N. D. Lawrence (2010) Efficient multioutput gaussian processes through variational inducing kernels. In AISTATS, Vol. 9, pp. 25–32.
  • [3] M. Alvarez and N. D. Lawrence (2009) Sparse convolved gaussian processes for multi-output regression. In Advances in Neural Information Processing Systems, pp. 57–64.
  • [4] Y. Anzai (2012) Pattern Recognition and Machine Learning. Elsevier.
  • [5] L. E. Baum and T. Petrie (1966) Statistical inference for probabilistic functions of finite state markov chains. The Annals of Mathematical Statistics 37 (6), pp. 1554–1563.
  • [6] M. J. Beal (2003) Variational algorithms for approximate bayesian inference. Ph.D. Thesis, University of London.
  • [7] Y. Bengio, J. Louradour, R. Collobert, and J. Weston (2009) Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48.
  • [8] D. M. Blei, A. Y. Ng, and M. I. Jordan (2003) Latent dirichlet allocation. Journal of Machine Learning Research 3 (Jan), pp. 993–1022.
  • [9] E. L. Denton, S. Chintala, R. Fergus, et al. (2015) Deep generative image models using a laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pp. 1486–1494.
  • [10] J. Duchi, E. Hazan, and Y. Singer (2011) Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12 (Jul), pp. 2121–2159.
  • [11] M. Germain, K. Gregor, I. Murray, and H. Larochelle (2015) MADE: masked autoencoder for distribution estimation. In International Conference on Machine Learning, pp. 881–889.
  • [12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
  • [13] R. Goroshin, M. F. Mathieu, and Y. LeCun (2015) Learning to linearize under uncertainty. In Advances in Neural Information Processing Systems, pp. 1234–1242.
  • [14] K. Gregor, I. Danihelka, A. Graves, D. Rezende, and D. Wierstra (2015) DRAW: a recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1462–1471.
  • [15] H. He and W. Siu (2011) Single image super-resolution using gaussian process regression. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 449–456.
  • [16] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778.
  • [17] G. E. Hinton, S. Osindero, and Y. Teh (2006) A fast learning algorithm for deep belief nets. Neural Computation 18 (7), pp. 1527–1554.
  • [18] G. E. Hinton (2002) Training products of experts by minimizing contrastive divergence. Neural Computation 14 (8), pp. 1771–1800.
  • [19] P. W. Holland and S. Leinhardt (1981) An exponential family of probability distributions for directed graphs. Journal of the American Statistical Association 76 (373), pp. 33–50.
  • [20] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu (2014) Human3.6M: large scale datasets and predictive methods for 3d human sensing in natural environments. IEEE Transactions on Pattern Analysis and Machine Intelligence 36 (7), pp. 1325–1339.
  • [21] R. E. Kalman (1960) A new approach to linear filtering and prediction problems. Journal of Basic Engineering 82 (1), pp. 35–45.
  • [22] T. Kimoto, K. Asakawa, M. Yoda, and M. Takeoka (1990) Stock market prediction system with modular neural networks. In 1990 IJCNN International Joint Conference on Neural Networks, pp. 1–6.
  • [23] D. Kingma and J. Ba (2014) Adam: a method for stochastic optimization. In Proceedings of the 3rd International Conference on Learning Representations.
  • [24] D. P. Kingma and M. Welling (2013) Auto-encoding variational bayes. In Proceedings of the 2nd International Conference on Learning Representations.
  • [25] R. G. Krishnan, U. Shalit, and D. Sontag (2015) Deep kalman filters. arXiv preprint arXiv:1511.05121.
  • [26] N. D. Lawrence (2004) Gaussian process latent variable models for visualisation of high dimensional data. Advances in Neural Information Processing Systems 16 (3), pp. 329–336.
  • [27] Y. Li, K. Swersky, and R. Zemel (2015) Generative moment matching networks. In International Conference on Machine Learning, pp. 1718–1727.
  • [28] A. W. Lo and A. C. MacKinlay (1988) Stock market prices do not follow random walks: evidence from a simple specification test. Review of Financial Studies 1 (1), pp. 41–66.
  • [29] S. N. MacEachern and P. Müller (1998) Estimating mixture of dirichlet process models. Journal of Computational and Graphical Statistics 7 (2), pp. 223–238.
  • [30] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow (2015) Adversarial autoencoders. arXiv preprint arXiv:1511.05644.
  • [31] H. Nam and B. Han (2016) Learning multi-domain convolutional neural networks for visual tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4293–4302.
  • [32] Y. Pu, Z. Gan, R. Henao, X. Yuan, C. Li, A. Stevens, and L. Carin (2016) Variational autoencoder for deep learning of images, labels and captions. In Advances in Neural Information Processing Systems, pp. 2352–2360.
  • [33] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • [34] C. E. Rasmussen (1999) The infinite gaussian mixture model. In NIPS, Vol. 12, pp. 554–560.
  • [35] C. E. Rasmussen (2006) Gaussian Processes for Machine Learning. MIT Press.
  • [36] D. J. Rezende, S. Mohamed, and D. Wierstra (2014) Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pp. 1278–1286.
  • [37] R. Salakhutdinov and G. E. Hinton (2009) Deep boltzmann machines. In AISTATS, Vol. 1, pp. 3.
  • [38] R. Salakhutdinov (2009) Learning deep generative models. Ph.D. Thesis, University of Toronto.
  • [39] K. Swersky, J. Snoek, and R. P. Adams (2013) Multi-task bayesian optimization. In Advances in Neural Information Processing Systems, pp. 2004–2012.
  • [40] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei (2012) Hierarchical dirichlet processes. Journal of the American Statistical Association.
  • [41] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli (2004) Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing 13 (4), pp. 600–612.
  • [42] B. Wei (1998) Exponential Family Nonlinear Models. Vol. 130, Springer Verlag.
  • [43] X. Yan, J. Yang, K. Sohn, and H. Lee (2015) Attribute2Image: conditional image generation from visual attributes. arXiv preprint arXiv:1512.00570.
  • [44] H. Yang, L. Chan, and I. King (2002) Support vector machine regression for volatile stock market prediction. In International Conference on Intelligent Data Engineering and Automated Learning, pp. 391–396.

Appendix A Implementation Detail

In the experiments, the encoder, the decoder, and the mapping function of the kernel for GP regression in Figure 2 of the paper are defined as multi-layered perceptrons. The encoder is designed with five convolution layers and one fully connected layer, and all inputs are resized to three-channel square images. The fully connected layer returns the parameters of the latent Gaussian distribution: the former half of its entries is used as the mean, and the latter half is defined as the variance. The mapping function for the GP kernel shares this output and produces one further entry, as in Figure 2 of the paper; the overall dimension of the final fully connected layer is therefore the latent dimension for the mean, the same dimension for the variance, and one additional dimension. For the decoding function, convolution layers with upsampling are used to reconstruct the image; the convolution layers have 256, 128, 64, 32, 16, and 3 channels.
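The split of the final fully connected output into mean and variance entries, together with the reparameterized sampling standard in the VAE framework of [24], can be sketched as follows (the names `split_and_sample`, `mu`, `log_var` and the dimension d = 8 are illustrative, not the paper's exact layer sizes):

```python
import numpy as np

def split_and_sample(fc_out, d, rng):
    """Split a 2d-dimensional FC output into mean / log-variance and
    draw a latent sample via the reparameterization trick."""
    mu, log_var = fc_out[:d], fc_out[d:2 * d]
    eps = rng.standard_normal(d)          # noise sampled outside the deterministic path
    z = mu + np.exp(0.5 * log_var) * eps  # z ~ N(mu, diag(exp(log_var)))
    return z, mu, log_var

rng = np.random.default_rng(0)
z, mu, log_var = split_and_sample(np.zeros(2 * 8), d=8, rng=rng)
```

Parameterizing the variance through its logarithm keeps the standard deviation positive without any constraint on the network output.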

Appendix B Additional Results

To check the validity of the regression procedure conducted in a latent space, we compared the latent vector obtained by regression with the latent vector obtained by reconstruction for an input data pair. For the reconstruction, we used both the joint vector and the corresponding image in the H3.6m dataset [20]. For the regression, only the joint vector was given and the projected point was estimated by the proposed regression method. Since the latent vectors were obtained from the same input data, in ideal conditions the vectors should converge to the same location. Figure 11 shows the qualitative results for the regression and the reconstruction for the same input data pair, where it can be seen that both responses converged to the ground truth image. The graph in Figure 13 indicates the KL-divergence between the two latent vectors.

Figure 13: KL divergence between the latent distributions for regression and reconstruction, from the same joint vectors.
Figure 14: Negative log likelihood ratio for the regressed and reconstructed visual responses, from the proposed method and C-VAE (2).

Since the latent vector in the paper is defined by a Gaussian distribution, we used the KL-divergence as the distance measure. As seen in the graph, the KL-divergence obtained by the proposed method gradually decreased. This demonstrates that the vectors obtained in the two cases converged to the same location, as expected. When tested with C-VAE (2), the divergence did not converge.
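For diagonal Gaussian latent distributions, the KL-divergence used as the distance measure has a closed form; a small sketch (function and argument names are illustrative) is:

```python
import numpy as np

def kl_diag_gaussians(mu0, var0, mu1, var1):
    """Closed-form KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) )."""
    return 0.5 * np.sum(
        np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0
    )

mu = np.array([0.0, 1.0])
var = np.array([1.0, 2.0])
print(kl_diag_gaussians(mu, var, mu, var))  # 0.0 for identical distributions
```

The divergence is zero exactly when the regressed and reconstructed latent distributions coincide, which is the convergence behavior reported in Figure 13.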

As shown in Figure 14, we also measured the change in the negative log likelihood (NLL) for both the regression and the reconstruction. For the proposed method, we confirmed that the NLL ratios for both cases converged. For C-VAE (2), the NLL ratio converged only when both the joint vector and the images were given; it did not converge when the regression was applied.
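Under an isotropic Gaussian decoder, assumed here purely for illustration, the NLL of a visual response reduces to a scaled reconstruction error plus a constant:

```python
import numpy as np

def gaussian_nll(x, x_hat, sigma=1.0):
    """NLL of image x under an isotropic Gaussian decoder N(x_hat, sigma^2 I).
    (The unit-variance Gaussian likelihood is an assumption for illustration.)"""
    d = x.size
    return (0.5 * np.sum((x - x_hat) ** 2) / sigma ** 2
            + 0.5 * d * np.log(2 * np.pi * sigma ** 2))

x = np.linspace(0.0, 1.0, 100)
print(gaussian_nll(x, x))        # perfect reconstruction: constant term only
print(gaussian_nll(x, x + 0.1))  # worse reconstruction gives a larger NLL
```

Comparing such NLL values for the regressed and reconstructed responses is one way to track whether the two converge, as plotted in Figure 14.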

Figure 15, Figure 16 and Figure 17 show additional generation results for sports sequences. The figures present the regression results of the proposed method and of R-VAE in our work, supplementing Figure 7 of the submitted version. We confirmed that the proposed method achieved superior regression performance across diverse action sequences compared to R-VAE.

In addition to the image sequence regression and human pose reconstruction examples, we additionally performed human joint estimation experiments, as shown in Fig. 12. Although our method was not originally designed for human joint estimation, it could successfully estimate human joints by performing regression on (image, joint) data pairs and finding the corresponding joints when a new image is given. This example further shows that the proposed method is applicable to practical applications composed of various data pairs.

Figure 15: Qualitative results on regression from the baseball swing dataset. The first row in each action represents the proposed method, and the second row shows the result from R-VAE.
Figure 16: Qualitative results on regression from the golf swing dataset. The first row in each action represents the proposed method, and the second row shows the result from R-VAE.
Figure 17: Qualitative results on regression from the weightlifting dataset. The first row in each action represents the proposed method, and the second row shows the result from R-VAE.