Neural-Kernelized Conditional Density Estimation

06/05/2018
by Hiroaki Sasaki, et al.
University of Helsinki

Conditional density estimation is a general framework for solving various problems in machine learning. Among existing methods, non-parametric and/or kernel-based methods are often difficult to use on large datasets, while methods based on neural networks usually make restrictive parametric assumptions on the probability densities. Here, we propose a novel method for estimating the conditional density based on score matching. In contrast to existing methods, we employ scalable neural networks, but do not make explicit parametric assumptions on densities. The key challenge in applying score matching to neural networks is computation of the first- and second-order derivatives of a model for the log-density. We tackle this challenge by developing a new neural-kernelized approach, which can be applied on large datasets with stochastic gradient descent, while the reproducing kernels allow for easy computation of the derivatives needed in score matching. We show that the neural-kernelized function approximator has universal approximation capability and that our method is consistent in conditional density estimation. We numerically demonstrate that our method is useful in high-dimensional conditional density estimation, and compares favourably with existing methods. Finally, we prove that the proposed method has interesting connections to two probabilistically principled frameworks of representation learning: Nonlinear sufficient dimension reduction and nonlinear independent component analysis.


1 Introduction

Given observations of two random variables x and y, estimating the conditional probability density function of y given x is a general framework for solving various problems in machine learning. In particular, when the conditional density is multimodal, the conventional prediction by the conditional mean of y given x is rather useless, and conditional density estimation becomes very useful (Carreira-Perpiñán, 2000; Einbeck and Tutz, 2006; Chen et al., 2016). Also, in regression problems where the noise is heavy-tailed or asymmetric, the conditional mean is not necessarily a suitable quantity for prediction, and again, one may want to estimate the conditional density (Yao et al., 2012; Feng et al., 2017).

However, the importance of conditional density estimation is not limited to supervised learning. A well-known example is structure estimation in graphical models (Bishop, 2006). Furthermore, conditional density estimation seems to be closely related to representation learning (or feature learning), which has recently received a great deal of attention due to the success of deep neural networks (Bengio et al., 2013). For example, sufficient dimension reduction (SDR) is a rigorous framework for supervised dimension reduction based on the conditional density, where the goal is to estimate a lower-dimensional representation of x without losing information for predicting y (Li, 1991; Cook, 1998). In addition, nonlinear independent component analysis (ICA) provides a principled way for representation learning in unsupervised learning, which aims at recovering latent nonlinear sources (or a representation) that generated the input data x. Hyvärinen and Morioka (2016) recently proposed a nonlinear ICA method with a rigorous proof of recovering the sources (i.e., identifiability), where the conditional probability density of the data given time-segment labels is estimated via logistic regression. Thus, we see that conditional density estimation is closely related to representation learning as well. In this paper, we restrict ourselves to the case where x and y are both continuous-valued.

In a nonparametric setting, a number of methods for conditional density estimation have been proposed. Fan et al. (1996) estimate conditional densities based on a local polynomial expansion. However, it is known that the local polynomial expansion does not scale well with the dimensionality of the data. Sugiyama et al. (2010) proposed another method, least-squares conditional density estimation (LSCDE), which directly fits a kernel model to the true conditional density (a density ratio) under the squared loss. Another method with reproducing kernels has been proposed recently based on the Fisher divergence, estimating conditional densities up to their partition functions (Arbel and Gretton, 2018). A drawback is that, since these methods rely on reproducing kernels, it is not easy to apply them to large datasets. The Nyström approximation (Rudi et al., 2015), where only a subset of data samples is used as the centers of kernel functions, is a remedy that makes kernel methods more scalable, but it may not allow us to accurately estimate highly complex functions (Bengio et al., 2006), especially with high-dimensional data.

To accurately estimate conditional densities, we need models which have high expressive power, yet are scalable to large amounts of data, such as deep neural networks. However, since such a model is often very complex, it may be almost impossible to compute the partition function of the conditional density, which is the fundamental challenge in learning deep neural networks (Goodfellow et al., 2016, Chapter 18). Presumably for this reason, some previous work has combined neural networks with a parametric density model: The mixture density network (MDN) (Bishop, 1994) employs a mixture model for the conditional density, and each parameter in the mixture density model is estimated as a feedforward neural network. Another approach is based on variational methods (Tang and Salakhutdinov, 2013; Sohn et al., 2015), which also make parametric assumptions on the densities. Although these methods may work rather well in some applications, the parametric assumption on the densities seems too restrictive to us.

In this paper, we propose a novel method for estimating a conditional density function up to the partition function. In stark contrast to previous work, we use neural networks but do not make explicit parametric assumptions on the probability densities. Our approach is to employ the Fisher divergence, and more precisely its computation by score matching (Cox, 1985; Hyvärinen, 2005; Sasaki et al., 2014; Sriperumbudur et al., 2017), for conditional density estimation. However, the significant challenge is that score matching requires us to compute the first- and second-order derivatives of the model with respect to each coordinate of y, and it is thus difficult to apply to learning neural networks (Martens et al., 2012). Here, we tackle this challenge by combining the benefits of the neural network approach with those of reproducing kernel Hilbert spaces (RKHS). Basically, we develop a neural-kernelized approach in which the dependence of the conditional density on y is modelled in an RKHS and the dependence on x by a neural network. This means the derivatives only need to be computed in the RKHS, which is computationally easy, while the neural network part still allows for powerful deep representation learning in high-dimensional spaces and with big datasets. Since y usually has a much lower dimension than x, as seen especially in supervised learning problems, our method is efficient in many applications of conditional density estimation.
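To make this point concrete, consider a minimal sketch, assuming a Gaussian kernel with an illustrative bandwidth parameter sigma: the first- and second-order derivatives with respect to y of a kernel expansion are available in closed form, so no second-order automatic differentiation through a network operating on x is needed for the score-matching terms.

```python
import numpy as np

def gauss_kernel_derivs(y, centers, sigma):
    """Gaussian kernel k(y, u) = exp(-||y - u||^2 / (2 sigma^2)) and its first
    and second partial derivatives with respect to each coordinate of y.

    y:       (d_y,) query point
    centers: (L, d_y) kernel centers
    Returns k of shape (L,), dk/dy_m of shape (L, d_y), d2k/dy_m^2 of shape (L, d_y).
    """
    diff = y[None, :] - centers                                   # (L, d_y)
    k = np.exp(-np.sum(diff ** 2, axis=1) / (2 * sigma ** 2))     # (L,)
    dk = -diff / sigma ** 2 * k[:, None]                          # (L, d_y)
    d2k = (diff ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * k[:, None]
    return k, dk, d2k
```

Any model whose y-dependence is a linear combination of such kernels inherits these closed-form derivatives; only the x-dependent coefficients come from the neural network.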

We prove that our function approximator following the neural-kernelized approach has the universal approximation capability under certain conditions, and thus can potentially approximate a wide range of continuous functions of x and y. We further prove that our method for conditional density estimation is consistent. In comparisons on artificial and benchmark data based on the conditional likelihood, our method is demonstrated to be useful in high-dimensional conditional density estimation and compares favourably with competing methods. Finally, we explore very interesting connections of our method to representation learning. It turns out that, under some conditions, our method for conditional density estimation automatically provides novel methods for two different principled frameworks related to representation learning: nonlinear SDR and nonlinear ICA. More specifically, the neural network in our model provides a useful lower-dimensional representation as in SDR, while it recovers the latent independent sources as in ICA.

2 Neural-Kernelized Conditional Density Estimator

We start by formulating the problem of conditional density estimation. Then, we propose a simple function approximation framework, and validate it using the well-known concept of universal approximation capability. Next, a statistical divergence is proposed based on score matching. Finally, a practical learning algorithm is developed by combining neural networks with reproducing kernels, together with the score matching divergence.

Problem Formulation:

Suppose that we are given n pairs of data samples drawn from the joint distribution with density p(x, y):

(1)    \{(x_i, y_i)\}_{i=1}^{n} \sim p(x, y),

where x_i and y_i denote the i-th observations of x and y, respectively, and d_x and d_y denote the dimensions of x and y, respectively. For example, in supervised learning, y would be the output data. We will also consider the case of unsupervised learning, where, following Hyvärinen and Morioka (2016), y could be a time index (more on this below). Our primary interest is to estimate the logarithmic conditional density of y given x up to the partition function from these samples:

(2)    \log p(y \mid x) = \log \bar{p}(y \mid x) - \log Z(x),

where \bar{p}(y \mid x) is an unnormalized conditional density and Z(x) = \int \bar{p}(y \mid x)\,\mathrm{d}y is the partition function.

Function Approximation Framework:

In this paper, we propose a function approximation framework where we estimate \log \bar{p}(y \mid x) by a model r(x, y) of the following form:

(3)    r(x, y) = \boldsymbol{\psi}(y)^{\top}\boldsymbol{\phi}(x) = \sum_{j=1}^{J} \psi_j(y)\,\phi_j(x),

where \boldsymbol{\psi}(y) = (\psi_1(y), \ldots, \psi_J(y))^{\top} and \boldsymbol{\phi}(x) = (\phi_1(x), \ldots, \phi_J(x))^{\top} are vector-valued functions.

The model (3) in our framework may seem to be of a restricted form, and unable to express general continuous functions of x and y. However, we next prove that the model can actually approximate a wide range of continuous functions. To this end, we employ the well-known concept of universal approximation capability: [Universal approximation capability] A set F of functions on a domain has the universal approximation capability to the continuous functions on that domain if F is a dense subset of them, i.e., for every continuous function f and every ε > 0, there always exists g ∈ F such that the maximum absolute difference between f and g over the domain is smaller than ε.

Let us express two sets of functions on the domains of y and x by Ψ and Φ, respectively, with ψ_j ∈ Ψ and φ_j ∈ Φ for all j. Eq. (3) indicates that \log \bar{p}(y \mid x) is estimated as an element of the following set of functions:

F := \Bigl\{ (x, y) \mapsto \sum_{j=1}^{J} \psi_j(y)\,\phi_j(x) \;:\; \psi_j \in \Psi,\ \phi_j \in \Phi \Bigr\}.

F does not necessarily have the universal approximation capability to the continuous functions on the joint domain in general. Thus, it is important to understand under what conditions F has the universal approximation capability. The following proposition states sufficient conditions. Assume that (i) the domains of x and y are both compact, (ii) every function in Ψ and Φ is continuous, and (iii) Ψ and Φ have the universal approximation capability to the continuous functions on their respective domains. Then, F has the universal approximation capability to the continuous functions on the joint domain, i.e., F is a dense subset of them. We omit the proof because it is essentially the same as that of Waegeman et al. (2012, Theorem II.1).

Proposition 2 implies that the model (3) can approximate a wide range of continuous functions of x and y when both Ψ and Φ have the universal approximation capability. Thus, we should use function approximators for ψ_j and φ_j such that both of them have the universal approximation capability.

It is well known that the set of feedforward neural networks has the universal approximation capability (Hornik, 1991). Another example of a function set with the universal approximation capability is a reproducing kernel Hilbert space (RKHS) with a universal kernel, such as the Gaussian kernel (Micchelli et al., 2006). Motivated by these facts and Proposition 2, we employ both neural networks and RKHSs in the learning algorithm below.

Statistical Divergence based on Score Matching:

To estimate \log \bar{p}(y \mid x), we employ score matching (Hyvärinen, 2005). It is based on fitting the model r(x, y) to the log-conditional density under the Fisher divergence (Cox, 1985; Sasaki et al., 2014; Sriperumbudur et al., 2017), defined as

J(r) := \frac{1}{2} \iint p(x, y)\, \bigl\| \nabla_y \log p(y \mid x) - \nabla_y r(x, y) \bigr\|^2 \,\mathrm{d}y\,\mathrm{d}x,

where \nabla_y := (\partial/\partial y_1, \ldots, \partial/\partial y_{d_y})^{\top}. A useful property is that the minimizer of J coincides with \log p(y \mid x) up to the partition function, which is stated by the following proposition: Assume that p(y \mid x) > 0 for all y and x. Let us express the minimizer of J with respect to r by r^*. Then, r^* equals \log p(y \mid x) up to the partition function, i.e., r^* satisfies the following equation:

(4)    r^*(x, y) = \log p(y \mid x) + c(x),

for some function c that does not depend on y. The proof can be seen in the supplementary material.

Proposition 2 implies that minimising J asymptotically yields a consistent estimator. However, in practice, we need to estimate J from finite samples. Following score matching (Hyvärinen, 2005) and further developments by Arbel and Gretton (2018), we can derive an easily computable version of J under some mild assumptions as follows:

J(r) = \iint p(x, y) \sum_{m=1}^{d_y} \Bigl[ \frac{\partial^2 r(x, y)}{\partial y_m^2} + \frac{1}{2} \Bigl( \frac{\partial r(x, y)}{\partial y_m} \Bigr)^{2} \Bigr] \mathrm{d}y\,\mathrm{d}x + C,

where C is a constant that does not depend on r; here we used the relation \nabla_y \log p(y \mid x) = \nabla_y p(y \mid x) / p(y \mid x) and applied integration by parts, under the mild assumption that p(y \mid x)\,\partial r(x, y)/\partial y_m vanishes on the boundary of the domain of y for all x and m. After substituting the model (3) into J, the empirical version of J can be obtained as

\hat{J} = \frac{1}{n} \sum_{i=1}^{n} \sum_{m=1}^{d_y} \Bigl[ \frac{\partial^2 r(x_i, y_i)}{\partial y_m^2} + \frac{1}{2} \Bigl( \frac{\partial r(x_i, y_i)}{\partial y_m} \Bigr)^{2} \Bigr].

Thus, an estimator can be obtained by minimising \hat{J} with respect to φ_j and ψ_j for all j.
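As a minimal illustration of the empirical objective, the following sketch computes it given arrays of the first- and second-order partial derivatives of the model with respect to y at the data points; the array layout is an assumption for concreteness.

```python
import numpy as np

def empirical_score_matching_loss(dr_dy, d2r_dy2):
    """Empirical score-matching objective.

    dr_dy:   (n, d_y) array of dr/dy_m evaluated at each (x_i, y_i)
    d2r_dy2: (n, d_y) array of d^2 r / dy_m^2 evaluated at each (x_i, y_i)
    Returns the sample average of sum_m [ d2r/dy_m^2 + 0.5 * (dr/dy_m)^2 ].
    """
    return float(np.mean(np.sum(d2r_dy2 + 0.5 * dr_dy ** 2, axis=1)))
```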

Learning Algorithm with Neural Networks and Reproducing Kernels:

Motivated by Proposition 2, we model φ by a feedforward neural network (fNN) with parameter vector θ. On the other hand, modelling ψ by fNNs is problematic because \hat{J} includes the partial derivatives of the model with respect to y, and computation of the (particularly second-order) derivatives is highly complicated when learning fNNs (Martens et al., 2012). Alternatively, following Proposition 2, we estimate each ψ_j as an element of an RKHS with a universal kernel, which has the universal approximation capability (Micchelli et al., 2006). This approach greatly simplifies the derivative computation in \hat{J} because of the representer theorem: Inspired by the representer theorem for derivatives (Zhou, 2008),¹ we employ the following linear-in-parameter model for ψ_j:

(5)    \psi_j(y) = \sum_{l=1}^{L} \alpha_{jl}\, k(y, u_l),

where k and α_{jl} denote the kernel function and coefficients, respectively. To decrease the computational cost, we follow the Nyström approximation (Rudi et al., 2015) and use {u_l}_{l=1}^{L}, obtained by randomly choosing L points from {y_i}_{i=1}^{n}, as the centers of the kernel function.

By substituting the fNN for φ and the linear-in-parameter model (5) into \hat{J}, the optimal parameters are obtained as

(\hat{\theta}, \hat{\alpha}) = \operatorname{argmin}_{\theta, \alpha} \hat{J}(\theta, \alpha).

Setting \hat{\psi}_j(y) = \sum_{l} \hat{\alpha}_{jl}\, k(y, u_l) and letting \hat{\phi} be the fNN with parameters \hat{\theta}, the estimator is finally given by

\log \hat{\bar{p}}(y \mid x) = \sum_{j} \hat{\psi}_j(y)\, \hat{\phi}_j(x).

We call this method the neural-kernelized conditional density estimator (NKC).

¹ If we exactly follow the representer theorem in Zhou (2008), ψ_j should also contain terms involving derivatives of the kernel function centered at all data points, with corresponding additional coefficients. To decrease the computational cost, we simplify the model by setting those additional coefficients to zero and using only a randomly chosen subset of the data as the centers of the kernel function. The same simplification under the Fisher divergence has previously been performed and theoretically shown not to degrade performance under some conditions (Sutherland et al., 2018).
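Putting the pieces together, the following self-contained sketch trains such a model by minibatch stochastic gradient descent on the score-matching objective, using the closed-form Gaussian-kernel derivatives in y; the architecture, bandwidth, learning rate, and toy data below are illustrative assumptions rather than the settings used in the experiments.

```python
import torch
import torch.nn as nn

class NKCSketch(nn.Module):
    """Minimal sketch of a neural-kernelized model r(x, y) = phi(x)^T psi(y),
    where psi_j(y) = sum_l A[j, l] * k(y, u_l) for a Gaussian kernel k."""

    def __init__(self, d_x, n_feat, centers, sigma):
        super().__init__()
        self.phi = nn.Sequential(                      # feedforward network phi(x)
            nn.Linear(d_x, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_feat),
        )
        self.A = nn.Parameter(0.01 * torch.randn(n_feat, centers.shape[0]))
        self.register_buffer("U", centers)             # (L, d_y) kernel centers
        self.sigma = sigma

    def kernel_derivs(self, Y):
        """Gaussian kernel values and their analytic y-derivatives."""
        diff = Y[:, None, :] - self.U[None, :, :]      # (B, L, d_y)
        K = torch.exp(-(diff ** 2).sum(-1) / (2 * self.sigma ** 2))        # (B, L)
        dK = -diff / self.sigma ** 2 * K[..., None]                        # (B, L, d_y)
        d2K = (diff ** 2 / self.sigma ** 4 - 1.0 / self.sigma ** 2) * K[..., None]
        return K, dK, d2K

    def score_matching_loss(self, X, Y):
        """mean_i sum_m [ d^2 r / dy_m^2 + 0.5 * (d r / dy_m)^2 ]."""
        coef = self.phi(X) @ self.A                    # (B, L): phi(x)^T A
        _, dK, d2K = self.kernel_derivs(Y)
        dr = torch.einsum("bl,blm->bm", coef, dK)      # (B, d_y)
        d2r = torch.einsum("bl,blm->bm", coef, d2K)    # (B, d_y)
        return (d2r + 0.5 * dr ** 2).sum(dim=1).mean()

# Toy usage on synthetic data (shapes only; not the experiments of Section 3).
torch.manual_seed(0)
n, d_x, d_y, L = 2000, 20, 1, 100
X = torch.randn(n, d_x)
Y = torch.sin(X[:, :1]) + 0.1 * torch.randn(n, d_y)    # arbitrary toy conditional target
U = Y[torch.randperm(n)[:L]].clone()                   # Nystrom-style random centers
model = NKCSketch(d_x, n_feat=10, centers=U, sigma=0.5)
opt = torch.optim.RMSprop(model.parameters(), lr=1e-3)
for epoch in range(5):
    for i in range(0, n, 256):                         # minibatch SGD
        loss = model.score_matching_loss(X[i:i + 256], Y[i:i + 256])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Backpropagation only differentiates the scalar loss with respect to the network weights and the coefficient matrix A; the first- and second-order derivatives in y never have to be pushed through the network, which is the computational point of the neural-kernelized split.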

3 Simulations

This section investigates the practical performance of NKC, and compares it with existing methods.

Algorithms, settings, and evaluation:

We report the results on the following three methods:

  • NKC (Proposed method): φ was modelled by a feedforward neural network with three layers, where the number of units in the final layer, corresponding to J in (3), took either of two values (we performed preliminary experiments with other values as well, but the results were often worse). The activation functions were all ReLU. For ψ, we employed the Gaussian kernel and fixed the number of kernel centers, L, in advance. To perform stochastic gradient descent, RMSprop (Hinton et al., 2012) was used with minibatches. When learning φ and ψ, model selection was also performed for the learning rate and the width parameter of the Gaussian kernel with an early-stopping technique: the datasets were first divided into training (80%) and validation (20%) sets, and φ and ψ were learned using the training set for a fixed number of epochs. Finally, the best epoch was chosen as the final result using the validation set in terms of the objective. Other details are given in the supplementary material.

  • Conditional variational autoencoder (CVAE) (Sohn et al., 2015): This is a conditional density estimator based on variational methods, shown for comparison. In CVAE, three conditional densities related to the latent variables, called the recognition, prior, and generation networks, were modelled by parametric Gaussian densities: the means and the logarithms of the standard deviations in the Gaussian densities of the recognition and generation networks were expressed by neural networks, while the prior network was modelled by a normal density independent of x, as in Kingma et al. (2014) and Sohn et al. (2015). The neural networks had two layers of hidden units, with ReLU activations in the middle layer and no activation function in the final layer. The dimensionality of the latent variables was held fixed. The same optimisation procedure as in NKC was applied. It should be noted that the number of parameters in CVAE was slightly, but not substantially, larger than in NKC (the exact numbers of parameters are given in the supplementary material).

  • LSCDE (Sugiyama et al., 2010; code available at http://www.ms.k.u-tokyo.ac.jp/software.html#LSCDE): This is a least-squares estimation method for conditional densities based on the Gaussian kernel, shown for comparison. In the default setting, LSCDE uses only a small number of centers in the Gaussian kernel (Sugiyama et al., 2010), but to apply LSCDE to a large amount of high-dimensional data, we used a larger number of centers, randomly chosen from the whole dataset. The bandwidth of the Gaussian kernel, as well as the regularisation parameters, were determined by five-fold cross-validation.

The performance of each method was evaluated by the log-likelihood on test data. To compute the log-likelihood for NKC, we estimated the partition function Z(x) by importance sampling, while the log-likelihood for CVAE was estimated as in Sohn et al. (2015, Eq. (6)).
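As an illustration of this evaluation step, the following sketch estimates log Z(x), the logarithm of the integral of exp(r(x, y)) over y, by importance sampling; the Gaussian proposal and the sample size are assumptions (reasonable when the data are standardised), and the normalised test log-likelihood is then r(x, y) minus this estimate.

```python
import numpy as np

def log_partition_importance(r_fn, x, d_y, n_samples=5000, proposal_std=2.0, rng=None):
    """Estimate log Z(x), the log of the integral of exp(r(x, y)) over y,
    by importance sampling with a zero-mean Gaussian proposal.

    r_fn: callable mapping (x, Y) with Y of shape (n_samples, d_y)
          to the unnormalized log-density r(x, y) of shape (n_samples,).
    """
    rng = np.random.default_rng() if rng is None else rng
    Y = rng.normal(scale=proposal_std, size=(n_samples, d_y))
    log_q = (-0.5 * np.sum(Y ** 2, axis=1) / proposal_std ** 2
             - 0.5 * d_y * np.log(2 * np.pi * proposal_std ** 2))   # log proposal density
    log_w = r_fn(x, Y) - log_q                                      # log importance weights
    m = np.max(log_w)                                               # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))
```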

Multimodal Density Estimation on Artificial Data:

Conditional density estimation is particularly useful, compared to simple conditional mean estimation, when conditional densities are multimodal. Therefore, we first numerically demonstrate the performance of NKC on multimodal density estimation. To this end, we first generated data samples of x from the normal density, and then data samples of y were drawn from a mixture of three Gaussians whose parameters depended on x. The dimension d_x took one of several increasing values. The dependence of the mixture on x was modelled by a three-layer neural network with leaky ReLU activations, whose parameter values were randomly determined in each run. The total number of samples was fixed.
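A minimal sketch of a data-generating process of this kind is given below; the mixture weights, noise scale, network width, and output dimension are illustrative assumptions rather than the exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(a, slope=0.1):
    return np.where(a > 0, a, slope * a)

def generate_multimodal_data(n, d_x, n_components=3, hidden=32, noise_std=0.2):
    """x ~ N(0, I); y | x ~ equal-weight mixture of three Gaussians whose means
    are nonlinear (random three-layer leaky-ReLU network) functions of x."""
    X = rng.normal(size=(n, d_x))
    # Random three-layer network mapping x to the three component means.
    shapes = [(d_x, hidden), (hidden, hidden), (hidden, n_components)]
    W1, W2, W3 = (rng.normal(size=s) / np.sqrt(s[0]) for s in shapes)
    means = leaky_relu(leaky_relu(X @ W1) @ W2) @ W3             # (n, n_components)
    comp = rng.integers(n_components, size=n)                    # mixture component labels
    Y = means[np.arange(n), comp] + noise_std * rng.normal(size=n)
    return X, Y[:, None]                                         # scalar y as (n, 1)

X, Y = generate_multimodal_data(n=5000, d_x=20)
```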

The results are presented in the left panel of Table 1. As d_x increases, the performance of LSCDE gets worse, as would be expected from a method based on Gaussian kernels. On the other hand, NKC and CVAE work well even for the largest d_x. For the larger values of d_x, NKC performs better than CVAE. Thus, NKC should be a useful method for multimodal density estimation with high-dimensional data. Regarding computation time, LSCDE was the fastest because its optimal solution is computed analytically. On the other hand, NKC and CVAE took comparable time when no model selection was used.

Artificial data (rows: increasing d_x)
  d_x    NKC      NKC      CVAE     LSCDE
   ·    -0.75    -0.75    -0.79    -0.96
   ·    -0.76    -0.76    -0.77    -1.25
   ·    -0.81    -0.82    -0.89    -1.28
   ·    -0.81    -0.82    -0.93    -1.69

Benchmark datasets
             NKC      NKC      CVAE     LSCDE
Ailerons    -0.52    -0.52    -0.50    -1.00
Friedman    -0.34    -0.34    -0.41    -1.01
Kinematics  -0.62    -0.63    -0.65    -0.96
Pumadyn     -1.53    -1.58    -0.97    -1.59

Table 1: Simulation on artificial data (left) and experiments on benchmark datasets (right). The mean values of the test log-likelihood over runs; a larger value indicates a better result. The best and comparable methods, judged by the Wilcoxon signed-rank test at the 5% significance level, are shown in boldface. The two NKC columns denote NKC with the two values of J, respectively.

Benchmark Dataset:

Second, we evaluate the performance of NKC on benchmark datasets. We used the publicly available datasets (http://www.dcc.fc.up.pt/~ltorgo/Regression/DataSets.html) Ailerons, Friedman, Kinematics, and Pumadyn. Before performing conditional density estimation, we first standardised the datasets by subtracting the mean of each variable and dividing by its standard deviation. Then, in each dataset, 10% of the data samples were used for testing and the remaining samples for training.

The results are summarised in the right panel of Table 1, showing that NKC compares favourably with CVAE, and clearly beats LSCDE.

4 Insights on Representation Learning

Next we provide interesting insights on how NKC is related to representation learning. In fact, we show that NKC has interesting connections to two frameworks called sufficient dimension reduction (SDR) and independent component analysis (ICA). More specifically, we prove that the internal representation φ(x) in NKC can provide a useful lower-dimensional representation of x for supervised learning if the dimension of the internal representation is restricted. On the other hand, if the data come from a nonlinear ICA model and the dimension is not reduced, the internal representation corresponds to an estimate of the nonlinear independent components in unsupervised learning.

4.1 Nonlinear Sufficient Dimension Reduction

Problem Formulation in Sufficient Dimension Reduction:

Sufficient dimension reduction (SDR) is a rigorous framework for supervised dimension reduction (Li, 1991; Cook, 1998). The goal of SDR is to estimate a lower-dimensional representation z of x satisfying the following condition:

(6)    y \perp\!\!\!\perp x \mid z,

where \perp\!\!\!\perp denotes statistical independence, z is a function of x, and the dimension of z is smaller than d_x. The SDR condition (6) intuitively means that z carries the same amount of information for predicting y as x does.

Connection to SDR:

Here, we show a connection of NKC to nonlinear SDR. In fact, the following theorem, which is proven in the supplementary material, implies that φ(x) in NKC is a useful lower-dimensional representation in the sense of SDR: Suppose that the output dimension of φ is smaller than d_x, there exists a lower-dimensional representation such that the SDR condition (6) is satisfied, the domains of x and y are compact, φ and ψ are both continuous, and in the limit of infinite data, \nabla_y \log p(y \mid x) is universally approximated as

(7)    \nabla_y \log p(y \mid x) = \sum_{j=1}^{J} \nabla_y \psi_j(y)\, \phi_j(x).

Then, φ(x) satisfies the SDR condition.

The propositions above imply that the model approximates \nabla_y \log p(y \mid x) well as the number of data samples increases. Thus, the universal approximation assumption (7) would be realistic. Interestingly, Theorem 4.1 shows that NKC estimates the log-conditional density while implicitly performing nonlinear dimensionality reduction. This would imply that if the data have a lower-dimensional structure (manifold), the log-conditional density can be accurately estimated with a small output dimension of φ, i.e., with few parameters. To the best of our knowledge, this work is the first attempt to explicitly connect neural networks to nonlinear SDR.

Relation with Existing Nonlinear SDR Methods:

Kernel sliced inverse regression (KSIR) is a nonlinear extension of the classical linear SDR method called sliced inverse regression (SIR) (Li, 1991) via the kernel trick (Wu, 2008; Yeh et al., 2009; Wu et al., 2007). As in kernel PCA (Schölkopf and Smola, 2002), the input data are mapped to a high-dimensional feature space, and then SIR is performed in that space. As in SIR, KSIR makes the assumption that the probability density function of the input is elliptically symmetric (Wu, 2008, p. 594), while NKC makes no explicit assumptions on densities. Other nonlinear SDR methods have been proposed by Lee et al. (2013). A drawback of these methods, as well as of KSIR, is that they require computing the eigenvectors of an n-by-n matrix, where n is the number of data samples. Therefore, it is not straightforward to apply them to a large amount of data. In contrast, a possible advantage of NKC is its applicability to large datasets.

4.2 Nonlinear Independent Component Analysis

Problem Formulation and Background in Independent Component Analysis:

Independent component analysis (ICA) is a successful framework in unsupervised learning. ICA assumes that the data are generated as

(8)    x = f(s),

where f denotes a smooth and invertible nonlinear function and s is a vector of source components. In classical ICA, the source components are assumed to be statistically independent of each other. The problem is then to estimate f^{-1} or s from observations of x only. When f is linear, i.e., x = As with an invertible square matrix A, the identifiability conditions are well understood (Comon, 1994). On the other hand, for general nonlinear functions f, the problem is considerably more difficult: there exist infinitely many solutions, and therefore the problem is ill-posed without any additional information (Hyvärinen and Pajunen, 1999). Recently, some identifiability conditions were found by Hyvärinen and Morioka (2016, 2017). The key idea is to use temporal information as additional information. Below, we show novel identifiability conditions by connecting NKC with nonlinear ICA.

Connection to Nonlinear ICA:

Here, suppose that y contains some additional variables which are not of primary interest, but which are dependent on x, thus providing extra information that helps identifiability. For example, following Hyvärinen and Morioka (2016), y could be the time index for time series data. We show next that φ(x) in NKC recovers the source vector up to some indeterminacies, with a new identifiability proof. To this end, we make the following assumptions:

  1. Given y, the source components are statistically independent of each other: p(s | y) = \prod_i p(s_i | y).

  2. The conditional density of each s_i given y belongs to the exponential family, with Z_i(y) the partition function and q a nonlinear scalar function (see the sketch after this list for one common form of this assumption).

  3. The generative model (8) is invertible, i.e., s = g(x), where g = f^{-1}.

  4. In the limit of infinite data, \nabla_y \log p(y \mid x) is universally approximated as in (7).

  5. There exist points in the domain of y such that a certain square matrix, constructed from the derivatives of the model evaluated at these points, is invertible.
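For concreteness, assumptions 1 and 2 can be written in a form such as the following; this is a hedged reconstruction of the notation, and the exact parameterisation (e.g., the number of sufficient statistics per source) may differ.

```latex
% Assumption 1: conditional independence of the sources given y
p(s \mid y) = \prod_{i} p(s_i \mid y).
% Assumption 2: a conditionally exponential-family model for each source,
p(s_i \mid y) = \frac{1}{Z_i(y)} \exp\bigl( \lambda_i(y)\, q(s_i) \bigr),
% where Z_i(y) is the partition function, q is a nonlinear scalar function,
% and \lambda_i(y) modulates how the distribution of s_i depends on y.
```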

Then, the following theorem elucidates the relationship between NKC and nonlinear ICA: Suppose that assumptions (I1)-(I5) hold. Then, in the limit of infinite data, φ(x) in (7) equals the nonlinearly transformed sources q(g(x)) up to an invertible linear transformation, i.e.,

\phi(x) = A\, q(g(x)) + b,

where A is an invertible matrix, b is a vector, and q(g(x)) = (q(g_1(x)), \ldots, q(g_{d_x}(x)))^{\top}. The proof can be seen in the supplementary material.

Relation with Existing Nonlinear ICA Methods:

Theorem 4.2 states that the sources are identified up to a linear transformation, as in the previous proof for the nonlinear ICA method called time-contrastive learning (TCL) (Hyvärinen and Morioka, 2016). Thus, in practice, after estimating φ, we might apply a linear ICA method to φ(x) to get closer to the independent components. For that purpose, we should further assume that the sources are marginally independent, while the theorem assumed only their conditional independence. The assumption of an exponential family model for the sources is also very similar to TCL. The fundamental difference of the present theorem compared to TCL is that here we have a continuous-valued additional variable y, while TCL is based on a discrete variable (a time-segmentation label). Thus, the current proof is related to the TCL theory but is not a simple modification of it.
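As an illustration of that post-processing step, one can run an off-the-shelf linear ICA method, e.g., scikit-learn's FastICA, on the learned features; the stand-in feature matrix below is an assumption for concreteness.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Stand-in for the learned features phi(x_i); in practice this would be the
# (n, J) output of the trained network phi on the data.
Phi = np.random.default_rng(0).normal(size=(1000, 10))

# Apply a linear ICA method to undo the remaining invertible linear
# transformation left by Theorem 4.2 (valid if the sources are also
# marginally independent, as discussed above).
ica = FastICA(n_components=Phi.shape[1], whiten="unit-variance", random_state=0)
sources_hat = ica.fit_transform(Phi)   # (n, J) estimated independent components
```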

5 Conclusion

We proposed a novel method based on score matching to estimate conditional densities in a general setting where no particular (parametric) assumptions on the densities need to be made. We developed a novel neural-kernelized approach, which combines neural networks with reproducing kernels in order to apply score matching. In particular, using neural networks allows for powerful function approximation which is scalable to large datasets, while using kernels avoids the complications related to the derivatives needed in score matching. We proved that the model combining neural networks with reproducing kernels has the universal approximation capability, and that the ensuing estimation method is consistent. The practical performance of the proposed method was investigated on both artificial and benchmark datasets; it was useful in high-dimensional conditional density estimation and compared favourably with existing methods. Finally, we showed that the proposed method has interesting connections to two rigorous probabilistic frameworks of representation learning, nonlinear sufficient dimension reduction and nonlinear independent component analysis, thus opening new avenues for the theoretical analysis of representation learning.

References

  • Arbel and Gretton (2018) M. Arbel and A. Gretton. Kernel conditional exponential family. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, volume 84, pages 1337–1346. PMLR, 2018.
  • Bengio et al. (2006) Y. Bengio, O. Delalleau, and N. L. Roux. The curse of highly variable functions for local kernel machines. In Advances in neural information processing systems (NIPS), pages 107–114, 2006.
  • Bengio et al. (2013) Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
  • Bishop (1994) C. Bishop. Mixture density networks. Neural Computing Research Group Report, 1994.
  • Bishop (2006) C. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  • Carreira-Perpiñán (2000) M. Carreira-Perpiñán. Reconstruction of sequential data with probabilistic models and continuity constraints. In Advances in neural information processing systems, pages 414–420, 2000.
  • Chen et al. (2016) Y.-C. Chen, C. Genovese, R. Tibshirani, and L. Wasserman. Nonparametric modal regression. The Annals of Statistics, 44(2):489–514, 2016.
  • Comon (1994) P. Comon. Independent component analysis, a new concept? Signal Processing, 36(3):287–314, 1994.
  • Cook (1998) R. D. Cook. Regression Graphics: Ideas for Studying Regressions Through Graphics. John Wiley & Sons, 1998.
  • Cox (1985) D. D. Cox. A penalty method for nonparametric estimation of the logarithmic derivative of a density function. Annals of the Institute of Statistical Mathematics, 37(1):271–288, 1985.
  • Einbeck and Tutz (2006) J. Einbeck and G. Tutz. Modelling beyond regression functions: an application of multimodal regression to speed–flow data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 55(4):461–475, 2006.
  • Fan et al. (1996) J. Fan, Q. Yao, and H. Tong. Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika, 83(1):189–206, 1996.
  • Feng et al. (2017) Y. Feng, J. Fan, and J. A. Suykens. A statistical learning approach to modal regression. arXiv:1702.05960, 2017.
  • Goodfellow et al. (2016) I. Goodfellow, Y. Bengio, and A. Courville. Deep learning. MIT press, 2016.
  • Hinton et al. (2012) G. Hinton, N. Srivastava, and K. Swersky. Lecture 6d - a separate, adaptive learning rate for each connection. Slides of lecture neural networks for machine learning, 2012.
  • Hornik (1991) K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural networks, 4(2):251–257, 1991.
  • Hyvärinen (2005) A. Hyvärinen. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6:695–709, 2005.
  • Hyvärinen and Morioka (2016) A. Hyvärinen and H. Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ICA. In Advances in Neural Information Processing Systems, pages 3765–3773, 2016.
  • Hyvärinen and Morioka (2017) A. Hyvärinen and H. Morioka. Nonlinear ICA of temporally dependent stationary sources. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, volume 54, pages 460–469. PMLR, 2017.
  • Hyvärinen and Pajunen (1999) A. Hyvärinen and P. Pajunen. Nonlinear independent component analysis: Existence and uniqueness results. Neural Networks, 12(3):429–439, 1999.
  • Kingma et al. (2014) D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems (NIPS), pages 3581–3589, 2014.
  • Lee et al. (2013) K.-Y. Lee, B. Li, and F. Chiaromonte. A general theory for nonlinear sufficient dimension reduction: Formulation and estimation. Annals of Statistics, 41(1):221–249, 2013.
  • Li (1991) K. Li. Sliced inverse regression for dimension reduction. Journal of the American Statistical Association, 86(414):316–327, 1991.
  • Martens et al. (2012) J. Martens, I. Sutskever, and K. Swersky. Estimating the Hessian by back-propagating curvature. In Proceedings of the 29th International Coference on International Conference on Machine Learning (ICML), pages 1783–1790. Omnipress, 2012.
  • Micchelli et al. (2006) C. A. Micchelli, Y. Xu, and H. Zhang. Universal kernels. Journal of Machine Learning Research, 7:2651–2667, 2006.
  • Rudi et al. (2015) A. Rudi, R. Camoriano, and L. Rosasco. Less is more: Nyström computational regularization. In Advances in Neural Information Processing Systems, pages 1657–1665, 2015.
  • Sasaki et al. (2014) H. Sasaki, A. Hyvärinen, and M. Sugiyama. Clustering via mode seeking by direct estimation of the gradient of a log-density. In Machine Learning and Knowledge Discovery in Databases Part III- European Conference, ECML/PKDD 2014, volume 8726, pages 19–34, 2014.
  • Schölkopf and Smola (2002) B. Schölkopf and A. J. Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2002.
  • Sohn et al. (2015) K. Sohn, H. Lee, and X. Yan. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), pages 3483–3491, 2015.
  • Sriperumbudur et al. (2017) B. Sriperumbudur, K. Fukumizu, A. Gretton, A. Hyvärinen, and R. Kumar. Density estimation in infinite dimensional exponential families. Journal of Machine Learning Research, 18(57):1–59, 2017.
  • Strang (1991) G. Strang. Calculus. Wellesley-Cambridge Press, 1991.
  • Sugiyama et al. (2010) M. Sugiyama, I. Takeuchi, T. Suzuki, T. Kanamori, H. Hachiya, and D. Okanohara. Least-squares conditional density estimation. IEICE Transactions on Information and Systems, 93(3):583–594, 2010.
  • Sutherland et al. (2018) D. Sutherland, H. Strathmann, M. Arbel, and A. Gretton. Efficient and principled score estimation with Nyström kernel exponential families. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), volume 84, pages 652–660. PMLR, 2018.
  • Tang and Salakhutdinov (2013) Y. Tang and R. R. Salakhutdinov. Learning stochastic feedforward neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 530–538, 2013.
  • Waegeman et al. (2012) W. Waegeman, T. Pahikkala, A. Airola, T. Salakoski, M. Stock, and B. De Baets. A kernel-based framework for learning graded relations from data. IEEE Transactions on Fuzzy Systems, 20(6):1090–1101, 2012.
  • Wu (2008) H.-M. Wu. Kernel sliced inverse regression with applications to classification. Journal of Computational and Graphical Statistics, 17(3):590–610, 2008.
  • Wu et al. (2007) Q. Wu, F. Liang, and S. Mukherjee. Regularized sliced inverse regression for kernel models. Technical report, Duke University, 2007.
  • Yao et al. (2012) W. Yao, B. G. Lindsay, and R. Li. Local modal regression. Journal of nonparametric statistics, 24(3):647–663, 2012.
  • Yeh et al. (2009) Y.-R. Yeh, S.-Y. Huang, and Y.-J. Lee. Nonlinear dimension reduction with kernel sliced inverse regression. IEEE Transactions on Knowledge and Data Engineering, 21(11):1590–1603, 2009.
  • Zhou (2008) D. Zhou. Derivative reproducing properties for kernel methods in learning theory. Journal of Computational and Applied Mathematics, 220(1-2):456–463, 2008.

Appendix A Proof of Proposition 2

Proof.

The minimizer should satisfy the following optimality condition for the Gâteaux derivative of J along an arbitrary direction:

(9)

The left-hand side of (9) shows that

Since the direction is arbitrary, substituting the equation above into the optimality condition (9) yields

(10)

Furthermore, the positivity assumption on p(y|x) ensures that this is the unique minimizer. Thus, the proof is completed. ∎

Appendix B Simulation Details

Regarding the simulations on artificial data, in model selection, the candidate values of the learning rate and of the Gaussian width parameter were fixed in advance. Regularisation with decay was applied only to the coefficients in the kernel part, while no regularisation was used in the neural network part. For the benchmark datasets, the candidate learning rates were the same, but the width parameter was selected from a set of candidate factors multiplied by the median of the pairwise distances between the y samples with respect to i and j. In addition, no regularisation was used for any parameters on the benchmark datasets.
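A small sketch of the median heuristic referred to above (the exact pairwise-distance convention is an assumption):

```python
import numpy as np

def median_heuristic(Y):
    """Median of the pairwise Euclidean distances ||y_i - y_j|| over i < j,
    a common default scale for the Gaussian kernel width."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    i, j = np.triu_indices(len(Y), k=1)
    return float(np.median(np.sqrt(sq_dists[i, j])))
```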

The following table shows the number of parameters of NKC and CVAE in simulations on artificial data (Section 3).

NKC NKC CVAE

Appendix C Proof of Theorem 4.1

Proof.

We first use the standard path integral formula (Strang, 1991): for a vector field of the form v(y) = \nabla_y F(y) and a differentiable curve c(t), t ∈ [0, 1], connecting a fixed point c(0) = y_0 to an arbitrary point c(1) = y,

\int_0^1 \langle v(c(t)), c'(t) \rangle \,\mathrm{d}t = F(y) - F(y_0),

where \langle \cdot, \cdot \rangle denotes the inner product. Then, applying the path integral formula to both sides of the universal approximation assumption (7) yields

\log p(y \mid x) - \log p(y_0 \mid x) = \sum_j \bigl[ \psi_j(y) - \psi_j(y_0) \bigr] \phi_j(x),

where the terms evaluated at y_0 are independent of y because y_0 is a fixed point. This equation gives

(11)    p(y \mid x) = \exp\Bigl( \sum_j \psi_j(y)\,\phi_j(x) + c(x) \Bigr), \quad c(x) := \log p(y_0 \mid x) - \sum_j \psi_j(y_0)\,\phi_j(x).

Integrating both sides over y yields

1 = \exp(c(x)) \int \exp\Bigl( \sum_j \psi_j(y)\,\phi_j(x) \Bigr) \mathrm{d}y.

By the definition of the conditional density,

p(y \mid x) = \frac{\exp\bigl( \sum_j \psi_j(y)\,\phi_j(x) \bigr)}{\int \exp\bigl( \sum_j \psi_j(y')\,\phi_j(x) \bigr) \mathrm{d}y'},

which depends on x only through φ(x). Thus, φ(x) satisfies the SDR condition (6), and the proof is completed. ∎

Appendix D Proof of Theorem 4.2

Proof.

Bayes' theorem shows that

\log p(y \mid x) = \log p(x \mid y) + \log p(y) - \log p(x).

Taking the gradient with respect to y above, the universal approximation assumption (I4) gives

(12)

Interestingly, the gradient with respect to y deletes \log p(x) in (12). On the other hand, the exponential family assumption (I2), combined with the invertibility assumption (I3), yields another form of \log p(x \mid y), involving the sources s = g(x) and the Jacobian of g. By taking the gradient with respect to y, we have

(13)

A notable point is that the Jacobian term disappeared after taking the gradient, since it does not depend on y. Equating (12) with (13) yields

(14)

By evaluating (14) at the points in assumption (I5) and combining the resulting equations, we obtain a linear relation between φ(x) and q(g(x)). Inverting the matrix in assumption (I5) on both sides completes the proof. ∎