1 Introduction
In many areas of science, we strive to answer questions that are fundamentally causal in nature. For example, in medicine one is often interested in the genetic drivers of diseases, while in commerce one might want to identify the motives behind customers’ purchasing behaviour. Furthermore, it is of the utmost importance to thoroughly understand the underlying causal structure of the data-generating process if we are to predict, with reasonable accuracy, the consequences of interventions or answer counterfactual questions about what would have happened had we acted differently. While most machine learning methods excel at prediction tasks by successfully inferring statistical dependencies, there are still many open questions when it comes to uncovering the causal dependencies between the variables driving the underlying data-generating process. Given the growing interest in using data to guide decisions in areas where interventional and counterfactual questions abound, causal discovery methods have attracted considerable research interest
(Hoyer et al., 2009; Zhang and Hyvärinen, 2009; Lopez-Paz et al., 2015; Mooij et al., 2016). While causal inference is preferably performed on data coming from randomized controlled experiments, this kind of data is often not available due to a combination of ethical, technical and financial considerations. These real-world limitations have motivated research into inferring causal relationships from purely observational data. One group of methods (Spirtes et al., 2000; Sun et al., 2007b) attempts to recover the causal structure by analyzing conditional independencies present in the data, but does not provide a definitive answer for the underlying causal structure and is not robust to the choice of conditional independence testing methodology. Another group of methods (Hoyer et al., 2009; Zhang and Hyvärinen, 2009; Mooij et al., 2009) postulates that there is some inherent asymmetry between cause and effect and proposes different asymmetry measures that form the basis for causal discovery. While these methods provide a definitive answer to the question of causal structure, they typically assume a particular functional form for the interaction between the variables and a particular noise structure, which limits their applicability. We intend our contribution to be a step towards a method that can deal with highly complex data-generating processes, provides a definitive answer for the causal structure relying only on observational data, and whose inference can easily be extended without the need to develop novel, specifically tailored algorithms for each new model class.
In this work, we develop a fully nonparametric causal inference method to automatically discover causal relationships from purely observational data. In particular, our proposed method does not require any a priori assumptions on the functional form of the interaction between the variables or the noise structure. Furthermore, we propose a novel interpretation of the notion of asymmetry between cause and effect (Daniusis et al., 2012). Before we introduce our proposed interpretation, we motivate it with the following example. Consider a pair of random variables $(X, Y)$ where we take the correct causal direction to be $X \rightarrow Y$. Figure 1 visualizes the conditional distributions $P(Y \mid X = x)$ and $P(X \mid Y = y)$ for different values of $x$ and $y$, respectively.
Note that the conditional distributions in the anti-causal direction exhibit a larger structural variability across different values of the conditioning variable than the conditional distributions in the causal direction. It is important to note here that structural variability does not only refer to variability in the scale and location parameters, but should be understood more broadly as variability in the “parametric” form, e.g. differences in the number of modes and in higher-order moments. If one thinks of conditional distributions as programs generating $Y$ from $X$ and vice versa, we see that in the causal direction the structure of the program remains unchanged although different input arguments are provided. In the anti-causal direction, we see that the program requires structural modification across different values of the input in order to account for the differing behaviour of the conditional densities. Motivated by the above observation, we propose a novel interpretation of the notion of asymmetry between cause and effect in terms of the shortest description length, i.e. Kolmogorov complexity (Grünwald and Vitányi, 2008), of the data-generating process. Whereas previous work (Lemeire and Dirkx, 2006; Janzing and Schölkopf, 2010; Daniusis et al., 2012; Budhathoki and Vreeken, 2017)
quantifies the asymmetry in terms of the Kolmogorov complexity of the factorization of the joint distribution, we propose to interpret the asymmetry based on the Kolmogorov complexity of the conditional distribution. Specifically, we propose that this asymmetry is realized by
the Kolmogorov complexity of the mechanism in the causal direction being independent of the input value of the cause. On the other hand, in the anti-causal direction, there will be a dependence between the shortest description length of the mechanism and the particular input value of the effect. This (in)dependence can be measured by looking at the variability of the Kolmogorov complexities of the mechanism for particular values of the input. Unfortunately, as computing the Kolmogorov complexity is an intractable problem, we resort to conditional distributions as approximations of the corresponding programs. Thus, we can infer the causal direction by comparing the description length variability of conditional distributions across different values of the conditioning variable, with the causal direction being the less variable one. For measuring this variability, we use the framework of reproducing kernel Hilbert spaces (RKHS). This allows us to represent conditional distributions in a compact, yet expressive way and to efficiently capture their many nuanced aspects, thus enabling more accurate causal inference. In particular, by way of the kernel trick, we can efficiently compute the variability of infinite-dimensional objects using finite-dimensional quantities that can be easily estimated from data. Using the RKHS framework also makes our method readily applicable in situations when trying to infer the causal direction between pairs of random variables taking values in structured or non-Euclidean domains on which a kernel can be defined. Finally, we propose three decision rules for causal inference based on the description length variability of sets of conditional distributions.
The main contributions of this paper are:
- an interpretation of the notion of asymmetry between cause and effect in terms of the independence of the description length of the mechanism from the value of the cause,
- an approximation to the intractable description length in terms of conditional distributions,
- a flexible asymmetry measure based on RKHS embeddings of conditional distributions,
- a fully nonparametric method for causal inference that does not impose any a priori assumptions on the functional relationship between the variables or the noise structure.
The rest of the paper is organized as follows. In Section 2, we review related work, while in Section 3 we introduce and discuss our causal inference methodology. In Section 4, we present experimental results on synthetic and real-world datasets. We discuss extensions to the case of more than two variables in Section 5. In Section 6, we discuss future research directions and conclude.
2 Related Work
Most approaches to causal inference from purely observational data can be grouped into three categories. The first category, so-called constraint-based methods, assumes that the true causal structure can be represented with a directed acyclic graph (DAG) and then tries to infer the true causal DAG by analyzing the conditional independencies present in the observational data distribution. Under some technical assumptions on the relationship between the causal DAG and the data distribution (Pearl, 2000), these methods can determine the DAG up to its Markov equivalence class (all DAGs that encode the same set of conditional independence relations constitute a Markov equivalence class), which usually contains structurally very diverse DAGs with many unoriented edges, thus not allowing for a definitive answer to the question of causal structure. An example of this approach is the PC algorithm (Spirtes et al., 2000), which builds a graph skeleton by successively removing unnecessary connections between the variables and then orienting the remaining edges where possible. Other examples of this approach rely on kernel-based conditional independence criteria, e.g. (Sun et al., 2007b; Zhang et al., 2011). Although mathematically well-founded, the performance of these methods is highly dependent on the utilized conditional independence methodology, whose performance usually depends on the amount of available data. Furthermore, these methods are not robust, as small errors in building the graph skeleton (e.g. a missing independence relation) can lead to significant errors in the inferred Markov equivalence class. As conditional independence tests require at least three variables, they are not applicable in the two-variable case.
A second class of models, so-called score-based methods, searches the space of all DAGs of a certain size by scoring their fit to the observed data using a predefined score function. An example of this approach is Greedy Equivalence Search (Chickering, 2002), which combines greedy search with the Bayesian information criterion. As the search space grows super-exponentially with the number of variables, these methods quickly become computationally intractable. An answer to this shortcoming is provided by hybrid methods, which use constraint-based approaches to decrease the search space so that it can be effectively explored with score-based methods, e.g. (Tsamardinos et al., 2006). DAGs have also been represented using generative neural networks and scored according to how well the generated data matches the observed data, e.g.
(Goudet et al., 2017). A major shortcoming of this hybrid methodology is that there exists no principled way of choosing problem-specific combinations of scoring functions and search strategies, which is a significant problem as different search strategies in combination with different scoring rules can potentially lead to very different results. The third category of methods assumes that there exists some inherent asymmetry between cause and effect. One line of research, often referred to as functional causal models or structural equation models, assumes a particular functional form for the causal interactions between the variables and a particular noise structure. In these models, each variable is a deterministic function of its causes and some independent noise, with all noise variables assumed to be jointly independent. Examples of this methodology assume linearity and additive non-Gaussian noise (Shimizu et al., 2006), nonlinear additive noise (Hoyer et al., 2009; Mooij et al., 2009) and invertible interactions between the covariates and the noise (Zhang and Hyvärinen, 2009). In order to perform causal discovery in these models, the special structural assumptions placed on the interaction between the covariates and on the noise are of crucial importance, thus limiting their applicability. A second strand of research interprets the asymmetry between cause and effect through an information-theoretic lens by examining the complexity of the factorization of the joint distribution (Lemeire and Dirkx, 2006). (Janzing and Schölkopf, 2010) argue that if $X$ causes $Y$, then the factorization in the causal direction, i.e. $P(X)P(Y \mid X)$, should have a shorter description in terms of the Kolmogorov complexity than the factorization in the anti-causal direction, i.e. $P(Y)P(X \mid Y)$. In (Daniusis et al., 2012), instead of computing the intractable Kolmogorov complexity, the correlation between the input distribution and the conditional distribution is measured, whereas (Budhathoki and Vreeken, 2017) use the minimum description length principle. The approach of (Sun et al., 2007a) measures the complexity of conditional distributions by RKHS seminorms computed on the logarithms of their densities.
Lastly, causal discovery has also been framed as a learning problem. Examples of this approach are (Lopez-Paz et al., 2015; Fonollosa, 2016). RCC (Lopez-Paz et al., 2015)
constructs feature representations of the data based on RKHS embeddings of the joint and marginal distributions and uses these within a random forest classifier. In
(Fonollosa, 2016), the feature representation of the data includes quantities describing the joint, marginal and conditional distributions. In particular, the conditional distributions are represented with conditional entropy, mutual information and a quantification of their variability in terms of the spread of the entropy, variance and skewness for different values of the conditioning variable.
On the other hand, we propose a causal inference methodology based on a novel interpretation of the asymmetry between cause and effect and derive three decision rules, with one of these decision rules based on classifying feature representations of the data. In particular, we consider feature representations based only on conditional distributions, which we argue to be more discriminative for inferring the causal direction.
3 Kernel Conditional Deviance for Causal Inference
We first briefly review some basics of RKHS theory that constitute the building blocks of our approach. For a detailed discussion, see (Schölkopf and Smola, 2001).
3.1 Background
Let $\mathcal{X}$ and $\mathcal{Y}$ be measurable spaces with $\mathcal{B}_{\mathcal{X}}$ and $\mathcal{B}_{\mathcal{Y}}$ the associated Borel $\sigma$-algebras. Denote by $\mathcal{H}_{\mathcal{X}}$ and $\mathcal{H}_{\mathcal{Y}}$ the RKHSs of functions defined on $\mathcal{X}$ and $\mathcal{Y}$, respectively, and by $k$ and $l$ their corresponding kernels. Given a probability distribution $P$ on $\mathcal{X}$, the mean embedding $\mu_P$ (we use $\mu_P$ and $\mu_X$ interchangeably when this does not lead to confusion) (Schölkopf and Smola, 2001) is a representation of $P$ in $\mathcal{H}_{\mathcal{X}}$ given by

$$\mu_P = \mathbb{E}_{X \sim P}\left[ k(X, \cdot) \right]$$

with $\mu_P \in \mathcal{H}_{\mathcal{X}}$. This can be unbiasedly estimated by

$$\hat{\mu}_P = \frac{1}{n} \sum_{i=1}^{n} k(x_i, \cdot)$$

with $\{x_i\}_{i=1}^{n} \sim P$. Furthermore, if $k$ is a characteristic kernel (Schölkopf and Smola, 2001), then this representation yields a metric on probability measures, i.e. $\|\mu_P - \mu_Q\|_{\mathcal{H}_{\mathcal{X}}} = 0$ if and only if $P = Q$. The radial basis function (RBF) kernel with bandwidth $\sigma$, given by

$$k(x, x') = \exp\left( -\frac{\|x - x'\|^2}{2\sigma^2} \right),$$

is an example of a characteristic kernel. A conditional distribution $P(Y \mid X = x)$ can be encoded using the conditional mean embedding $\mu_{Y \mid X = x}$ (Schölkopf and Smola, 2001), which is an element of $\mathcal{H}_{\mathcal{Y}}$ that satisfies

$$\mathbb{E}[g(Y) \mid X = x] = \langle g, \mu_{Y \mid X = x} \rangle_{\mathcal{H}_{\mathcal{Y}}} \quad \text{for all } g \in \mathcal{H}_{\mathcal{Y}}.$$

Using the equivalence between conditional mean embeddings and vector-valued regressors (Lever et al., 2012), we can estimate $\mu_{Y \mid X = x}$ from a sample $\{(x_i, y_i)\}_{i=1}^{n}$ as

$$\hat{\mu}_{Y \mid X = x} = \sum_{i=1}^{n} \alpha_i(x) \, l(y_i, \cdot), \qquad \alpha(x) = (K + n\lambda I)^{-1} k_x, \qquad (1)$$

with $K_{ij} = k(x_i, x_j)$, $(k_x)_i = k(x_i, x)$, regularization parameter $\lambda > 0$ and identity matrix $I \in \mathbb{R}^{n \times n}$.
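To make the estimator concrete, the following minimal sketch computes the weights $\alpha(x)$ in (1) under the assumption of RBF kernels; the helper names (rbf_kernel, cme_weights) and all hyperparameter values are illustrative, not taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def cme_weights(x_train, x_query, sigma, lam):
    """Weights alpha(x) of the empirical conditional mean embedding in (1):
    mu_hat_{Y|X=x} = sum_i alpha_i(x) l(y_i, .), alpha(x) = (K + n*lam*I)^{-1} k_x."""
    n = x_train.shape[0]
    K = rbf_kernel(x_train, x_train, sigma)      # Gram matrix K_ij = k(x_i, x_j)
    k_x = rbf_kernel(x_train, x_query, sigma)    # (k_x)_i = k(x_i, x), one column per query
    return np.linalg.solve(K + n * lam * np.eye(n), k_x)
```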
3.2 Method
For simplicity, we restrict our attention to the two-variable problem of causal discovery, i.e. distinguishing between cause and effect. Possible extensions to the multivariable setting are discussed in Section 5. Following the usual approach in the literature, we derive our method under the assumption of causal sufficiency of the data. In particular, we ignore the potential existence of confounders, i.e. all causal conclusions should be understood with respect to the set of observed variables. Nevertheless, in Section 4, we see that our method also performs well in settings where the noise has positive mean, which can be interpreted as accounting for potential confounders.
Given observations $\{(x_i, y_i)\}_{i=1}^{n}$ of a pair of random variables $(X, Y)$, our goal is to infer the causal direction, i.e. decide whether $X$ causes $Y$ (i.e. $X \rightarrow Y$) or $Y$ causes $X$ (i.e. $Y \rightarrow X$). To this end, we develop a fully nonparametric causal discovery method that relies only on observational data. In particular, our method does not a priori postulate a particular functional model for the interactions between the variables or a particular noise structure. Our approach, Kernel Conditional Deviance for Causal Inference (KCDC), is based on the assumption that there exists an asymmetry between cause and effect that is inherent in the data-generating process. While there are many interpretations of how this asymmetry might be realized, two of the more prominent ideas phrase it in terms of the independence of cause and mechanism (Daniusis et al., 2012) and in terms of the complexity of the factorization of the joint distribution (Lemeire and Dirkx, 2006; Janzing and Schölkopf, 2010).
Motivated by these two ideas, we propose a novel interpretation of the notion of asymmetry between cause and effect. First, we take an information-theoretic approach to reasoning about the complexity of distributions, similar to (Lemeire and Dirkx, 2006; Janzing and Schölkopf, 2010). In particular, we reason about it in terms of algorithmic complexity, i.e. Kolmogorov complexity (Grünwald and Vitányi, 2008), which is the description length of the shortest program that implements the sampling process of the distribution. For a distribution $P(X)$, the Kolmogorov complexity can be written as

$$K_{\alpha}(P(X)) = \min_{p} \left\{ |p| : \left| U(p, \alpha) - P(X) \right| \le \alpha \right\}$$

with $\alpha$ a precision parameter and $U(p, \alpha)$ the output of applying program $p$ on a universal Turing machine at precision $\alpha$. Analogously, for a conditional distribution $P(Y \mid X)$, the Kolmogorov complexity can be written as

$$K_{\alpha}(P(Y \mid X)) = \min_{p} \left\{ |p| : \left| U(p, x, \alpha) - P(Y \mid X = x) \right| \le \alpha \ \text{for all realizations } x \text{ of } X \right\},$$

where the program additionally takes a realization $x$ of the conditioning variable as input.
Assuming $X \rightarrow Y$, the asymmetry notion specified in terms of factorization complexity can be expressed as

$$K(P(X)) + K(P(Y \mid X)) \le K(P(Y)) + K(P(X \mid Y)),$$

which holds up to an additive constant (Stegle et al., 2010). Further, the independence of cause and mechanism can be interpreted as algorithmic independence (Janzing and Schölkopf, 2010), i.e. knowing the distribution of the cause $P(X)$ does not enable a shorter description of the mechanism $P(Y \mid X)$.
Based on this, we argue that not only does knowing the distribution of the cause not enable a shorter description of the mechanism, but also that knowing any particular value of the cause does not provide any information that can be used to construct a shorter description of the mechanism. To formalize this, we introduce the notation $K(P(Y \mid X = x))$ for the Kolmogorov complexity of the conditional distribution when the conditioning variable takes on the value $x$. From our argument above, we see that in the causal direction the Kolmogorov complexity of $P(Y \mid X)$ is independent of the particular value $x$ of the cause $X$, i.e.

$$K(P(Y \mid X = x)) = K(P(Y \mid X = x')) \quad \text{for all } x, x'.$$

On the other hand, this will not hold in the anti-causal direction, as the input and mechanism are not algorithmically independent in that direction, i.e.

$$K(P(X \mid Y = y)) \ne K(P(X \mid Y = y')) \quad \text{for some } y, y'.$$

This motivates our interpretation of the notion of asymmetry between cause and effect, which is summarized as follows.
Postulate (Minimal description length independence). If $X \rightarrow Y$, the minimal description length of the mechanism mapping $X$ to $Y$ is independent of the value of $X$, whereas the minimal description length of the mechanism mapping $Y$ to $X$ is dependent on the value of $Y$.
Building on this, we can infer the causal direction by comparing how much the description length of the minimal description length program implementing the mechanism varies across different values of its input arguments. In particular, in the causal direction, we expect to see less variability than in the anti-causal direction. As computing the Kolmogorov complexity is an intractable problem, we use the RKHS norm of embeddings of the corresponding conditional distributions as a proxy for it. Thus, we recast causal inference in terms of comparing the variability in RKHS norm of embeddings of sets of conditional distributions indexed by values of the conditioning variable. In order to perform causal inference, we use the framework of reproducing kernel Hilbert spaces. This allows us to construct highly expressive, yet compact approximations of the potentially highly complex programs and to circumvent the challenges of density estimation when trying to represent conditional distributions. Furthermore, using the RKHS framework allows us to efficiently capture the many nuanced aspects of distributions, thus enabling more accurate causal inference. For example, using nonlinear kernels allows us to capture more comprehensive distributional properties, including higher-order moments. Furthermore, using the RKHS framework makes our method readily applicable also in situations when trying to infer the causal direction between two random vectors (treated as single variables) or pairs of other types of random variables taking values in structured or non-Euclidean domains on which a kernel can be defined. Examples of such types of data include discrete data, genetic data, phylogenetic trees, strings, graphs and other structured data (Gärtner et al., 2002).
We represent conditional distributions in the RKHS using conditional mean embeddings (Schölkopf and Smola, 2001). In particular, given observations $\{(x_i, y_i)\}_{i=1}^{n}$ of a pair of random variables $(X, Y)$, we construct the embeddings of the two sets of conditional distributions, $\{\mu_{Y \mid X = x_i}\}_{i=1}^{n}$ and $\{\mu_{X \mid Y = y_i}\}_{i=1}^{n}$, using (1). Furthermore, if we choose a characteristic kernel (Schölkopf and Smola, 2001), the conditional mean embeddings of two distinct distributions will not overlap. For example, we can choose the RBF kernel, which is characteristic and embeds the distributions into the Hilbert space of infinitely differentiable functions. Next, we compute the variability in RKHS norm of a set of conditional mean embeddings as the deviance of the RKHS norms of that set. Thus, using the KCDC measure

$$S_{X \rightarrow Y} = \frac{1}{n} \sum_{i=1}^{n} \left( \|\mu_{Y \mid X = x_i}\|_{\mathcal{H}_{\mathcal{Y}}} - \frac{1}{n} \sum_{j=1}^{n} \|\mu_{Y \mid X = x_j}\|_{\mathcal{H}_{\mathcal{Y}}} \right)^2 \qquad (2)$$

we compute the deviance in RKHS norm of the set $\{\mu_{Y \mid X = x_i}\}_{i=1}^{n}$. Analogously, for $Y \rightarrow X$, the KCDC measure can be calculated as

$$S_{Y \rightarrow X} = \frac{1}{n} \sum_{i=1}^{n} \left( \|\mu_{X \mid Y = y_i}\|_{\mathcal{H}_{\mathcal{X}}} - \frac{1}{n} \sum_{j=1}^{n} \|\mu_{X \mid Y = y_j}\|_{\mathcal{H}_{\mathcal{X}}} \right)^2 \qquad (3)$$
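Continuing the sketch above, the KCDC measure in (2) can be computed from the Gram matrices alone, since $\|\hat{\mu}_{Y \mid X = x_i}\|^2 = \alpha(x_i)^\top L \, \alpha(x_i)$ with $L_{ij} = l(y_i, y_j)$. The kernel choices and hyperparameters below are illustrative assumptions; rbf_kernel is the helper from the earlier snippet.

```python
def kcdc_score(x, y, sigma_x, sigma_y, lam):
    """Sketch of the KCDC measure S_{X->Y} in (2): the deviance (variance)
    of the RKHS norms of the embeddings mu_hat_{Y|X=x_i}."""
    n = x.shape[0]
    K = rbf_kernel(x, x, sigma_x)                        # kernel on the conditioning variable
    L = rbf_kernel(y, y, sigma_y)                        # kernel on the response
    A = np.linalg.solve(K + n * lam * np.eye(n), K)      # column i holds alpha(x_i)
    norms = np.sqrt(np.einsum('ji,jk,ki->i', A, L, A))   # ||mu_hat_{Y|X=x_i}||_{H_Y}
    return norms.var()                                   # empirical deviance of the norms
```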
Based on our proposed interpretation of the notion of asymmetry between cause and effect, we can now determine the causal direction between $X$ and $Y$. For this purpose, we propose three different decision rules. First, we can determine the causal direction by directly comparing the KCDC measures for the two directions, i.e. infer $X \rightarrow Y$ if $S_{X \rightarrow Y} < S_{Y \rightarrow X}$ and $Y \rightarrow X$ if $S_{Y \rightarrow X} < S_{X \rightarrow Y}$, but leave the causal direction undetermined if $|S_{X \rightarrow Y} - S_{Y \rightarrow X}| < \epsilon$ with some fixed decision threshold $\epsilon > 0$. The case of undetermined direction accounts for situations where the KCDC measures are too close in value to determine the causal direction. This situation might come about due to numerical errors or non-identifiability. Furthermore, we can also derive a confidence measure for the inferred causal direction as the absolute relative difference between the two KCDC measures.
Second, we can determine the causal direction based on majority voting of an ensemble constructed using different model hyperparameters, i.e. we infer $X \rightarrow Y$ if $S_{X \rightarrow Y}(\theta) < S_{Y \rightarrow X}(\theta)$ for the majority of hyperparameter settings $\theta$ in the ensemble, where the dependence on the model hyperparameters has been made explicit. Third, the KCDC measures can also be used for constructing feature representations of the data, which can then be used within a classification method. In particular, we can infer the causal relationship between $X$ and $Y$ by feeding such feature representations to a classifier that distinguishes $X \rightarrow Y$ from $Y \rightarrow X$. For training the classifier, we generate synthetic data, e.g. as in (Lopez-Paz et al., 2015). The following sketch summarizes our causal inference methodology.
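This is a hedged stand-in for the paper's algorithm boxes: it wires the first two decision rules to the kcdc_score function above. The undecided threshold and the hyperparameter grid are illustrative assumptions, not the authors' exact settings.

```python
from collections import Counter

def kcdc_direction(x, y, sigma=1.0, lam=1e-3, eps=1e-6):
    """Direct-comparison rule: the direction with the smaller KCDC measure
    wins; differences below the threshold eps are left undecided."""
    s_xy = kcdc_score(x, y, sigma, sigma, lam)   # variability of mu_{Y|X=x_i}
    s_yx = kcdc_score(y, x, sigma, sigma, lam)   # variability of mu_{X|Y=y_i}
    if abs(s_xy - s_yx) < eps:
        return "undecided"
    return "X->Y" if s_xy < s_yx else "Y->X"

def kcdc_majority(x, y, sigmas=(0.5, 1.0, 2.0), lams=(1e-4, 1e-3, 1e-2)):
    """Majority-vote rule over a small, purely illustrative hyperparameter grid."""
    votes = Counter(kcdc_direction(x, y, s, l) for s in sigmas for l in lams)
    return votes.most_common(1)[0][0]
```

Swapping the RBF kernels and the grid for the kernels reported in Section 4 (log kernel on the input, rational quadratic on the response) would follow the paper's experimental setup more closely.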
Identifiability. For methods that assume a functional model and determine the causal direction based on the independence between covariates and noise, (Zhang and Hyvärinen, 2009) show that the assumed functional class needs to be constrained in order to ensure the identifiability of the model. Although KCDC is not based on this approach, it still fulfills the above requirement, as the kernel hyperparameters used for computing the KCDC measures are the same in both directions. Given our approach to causal inference, the causal direction will not be identifiable in situations where the description length of conditional distributions in both the causal and anti-causal direction does not vary with the value of the cause and effect, respectively. This happens when, in both directions, the functional form of the mechanism can be described by one family of distributions for all its input arguments. One example of this is linear Gaussian dependence, which is non-identifiable for most other causal discovery methods too. Another example is the case of independent variables, which is usually not considered in the literature but can easily be mitigated with an independence test. Note that using characteristic kernels eliminates any potential non-identifiability that might arise as a consequence of the non-injectivity of the embedding process.
4 Experimental Results
4.1 Synthetic Data
In order to showcase the wide applicability of our proposed approach, we test it extensively on several synthetic datasets spanning a wide range of functional dependencies between cause and effect and different interaction patterns with different kinds of noise. We compare our approach to LiNGAM (Shimizu et al., 2006), IGCI (Daniusis et al., 2012), ANM (Mooij et al., 2016) with Gaussian Process regression and the HSIC test (Schölkopf and Smola, 2001) on the residual, and the post-nonlinear model (PNL) (Zhang and Hyvärinen, 2009) with the HSIC test. In each of the experiments below, we sample a collection of datasets for each setting and test three different noise regimes, showcasing the robustness of our method with respect to different types of noise across different functional dependencies. In particular, the noise is drawn from either a standard normal, a uniform or an exponential distribution. Note that the exponential noise has positive mean, which can be interpreted as accounting for confounders. In all experiments, we apply the decision rule based on direct comparison for KCDC. We tested across different combinations of characteristic kernels (RBF, log and rational quadratic kernels), which yielded fairly consistent performance. In the following tables, we report the results when using the log kernel on the input and the rational quadratic kernel on the response.
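As a rough illustration of this setup, the sketch below draws cause-effect pairs under the three noise regimes. The concrete mechanism, sample size and noise scales are placeholder assumptions, since the paper's exact functional forms (A)-(C) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

NOISES = {"gaussian": lambda n: rng.standard_normal(n),
          "uniform": lambda n: rng.uniform(-1.0, 1.0, n),
          "exponential": lambda n: rng.exponential(1.0, n)}

def sample_pair(mechanism, n=1000, noise="gaussian"):
    """Draw one illustrative cause-effect dataset with x the cause and
    y = mechanism(x, eps); sizes and noise scales are placeholder choices."""
    x = rng.standard_normal(n)
    y = mechanism(x, NOISES[noise](n))
    return x[:, None], y[:, None]

# e.g. a hypothetical additive-noise mechanism standing in for (A)-(C) below
x, y = sample_pair(lambda x, e: x ** 3 + x + e)
```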
Additive Noise. As a first proof of concept, we examine the performance of our method on additive noise models, as such models are the basis of many causal inference methods, e.g. (Hoyer et al., 2009). We test our approach on three functional dependencies, denoted (A), (B) and (C). From the table below, we see that LiNGAM does not perform well, which is to be expected given its assumption of linear dependence. ANM performs very well across all functional and noise settings due to its assumption of additive noise. PNL does not perform well in any setting, which is probably due to overfitting. IGCI performs well for (C) and under exponential noise, while KCDC correctly classifies every dataset in every setting.
(A)  Gaussian  Uniform  Exponential 
LiNGAM  26%  87%  28% 
ANM  100%  100%  100% 
PNL  53%  14%  47% 
IGCI  52%  52%  94% 
KCDC  100%  100%  100% 
(B)  Gaussian  Uniform  Exponential 
LiNGAM  4%  40%  4% 
ANM  94%  97%  79% 
PNL  54%  33%  46% 
IGCI  54%  68%  96% 
KCDC  100%  100%  100% 
(C)  Gaussian  Uniform  Exponential 
LiNGAM  25%  32%  18% 
ANM  98%  100%  97% 
PNL  39%  27%  36% 
IGCI  98%  100%  99% 
KCDC  100%  100%  100% 
Multiplicative Noise. Next, we look at datasets where the noise interacts multiplicatively with the covariates. To test our method in this setting, we generate data according to three functional dependencies, again denoted (A), (B) and (C), in which the noise enters multiplicatively (see the sketch below).
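With the same illustrative helper as before, a multiplicative interaction might be sampled as follows; the mechanism is again a placeholder, not one of the paper's (A)-(C).

```python
# hypothetical multiplicative interaction between covariate and noise
x, y = sample_pair(lambda x, e: np.exp(x) * e, noise="uniform")
```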
In the table below, we see that ANM and LiNGAM do not perform well, which is to be expected given their assumption of additive noise. PNL has somewhat better performance, but does not surpass chance level in half the settings. On the other hand, IGCI performs very well across all settings, while KCDC correctly classifies every dataset in every setting.
(A)  Gaussian  Uniform  Exponential 
LiNGAM  20%  30%  5% 
ANM  0%  0%  1% 
PNL  52%  24%  30% 
IGCI  100%  89%  100% 
KCDC  100%  100%  100% 
(B)  Gaussian  Uniform  Exponential 
LiNGAM  10%  22%  4% 
ANM  8%  30%  12% 
PNL  49%  58%  32% 
IGCI  100%  89%  100% 
KCDC  100%  100%  100% 
(C)  Gaussian  Uniform  Exponential 
LiNGAM  0%  3%  0% 
ANM  5%  1%  0% 
PNL  55%  41%  30% 
IGCI  100%  99%  100% 
KCDC  100%  100%  100% 
More complex noise. Further, we examine exponential and periodic interactions of the covariates with the noise. In particular, we generate synthetic data according to three functional dependencies, denoted (A), (B) and (C), in which the covariates interact with the noise exponentially or periodically (see the sketch below).
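Using the same helper once more, exponential and periodic noise interactions might be sketched as follows (placeholder mechanisms, not the paper's exact forms):

```python
# hypothetical exponential and periodic interactions with the noise
x1, y1 = sample_pair(lambda x, e: np.exp(x * e))
x2, y2 = sample_pair(lambda x, e: np.cos(x * e), noise="uniform")
```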
As can be seen from Table 3, LiNGAM and ANM do not perform very well, which is to be expected as they rely on the assumption of additive noise. PNL, which assumes an invertible interaction between the covariates and noise, performs at or above chance level in almost all cases, with very good performance under periodic noise. The nonparametric approach of IGCI has very good performance across all the functional and noise settings, while KCDC achieves perfect performance in all cases except under Gaussian and uniform noise for (A).
(A)  Gaussian  Uniform  Exponential 
LiNGAM  0%  2%  0% 
ANM  28%  26%  24% 
PNL  55%  50%  48% 
IGCI  100%  85%  100% 
KCDC  98%  92%  100% 
(B)  Gaussian  Uniform  Exponential 
LiNGAM  31%  32%  23% 
ANM  16%  54%  6% 
PNL  56%  50%  72% 
IGCI  88%  72%  97% 
KCDC  100%  100%  100% 
(C)  Gaussian  Uniform  Exponential 
LiNGAM  0%  0%  1% 
ANM  31%  19%  37% 
PNL  95%  92%  92% 
IGCI  97%  98%  98% 
KCDC  100%  100%  100% 
4.2 Tübingen Cause-Effect Pairs
Next, we discuss the performance of our method on real-world data. For this purpose, we test KCDC on the only widely used benchmark dataset, the Tübingen Cause-Effect Pairs (TCEP) (Mooij et al., 2015). This dataset is comprised of real-world cause-effect samples collected across very diverse subject areas, with the true causal direction provided by human experts. Due to the heterogeneous origins of the data pairs, many diverse functional dependencies are expected to be present in TCEP.
In order to show the flexibility and capacity of KCDC when dealing with many diverse functional dependencies simultaneously, we test it using both the direct comparison decision rule and the majority decision rule. We use TCEP version 1.0, which consists of 100 cause-effect pairs. Each pair is assigned a weight in order to account for potential sources of bias, given that different pairs are sometimes selected from the same multivariable dataset. Following the widespread approach in the literature of testing only on scalar-valued pairs, we remove the multivariate pairs 52, 53, 54, 55 and 71 from TCEP in order to ensure a fair comparison to previous work. Note that, contrary to some methods in the literature, this is not necessary for our approach. For the majority approach, we choose the best settings of the kernel hyperparameters as inferred from the synthetic experiments. The direct approach represents the single best performing hyperparameter configuration on TCEP.
From the summary of classification accuracies of KCDC and related methods in Table 4, we see that KCDC is competitive with the state-of-the-art methods even when only one setting of kernel hyperparameters is used, i.e. when the direct comparison decision rule is used. When we combine multiple kernel hyperparameters under the majority vote approach, we see that our method outperforms other methods by a significant margin. Note that the review (Mooij et al., 2016) discusses additive noise models (Hoyer et al., 2009) and information-geometric causal inference (Daniusis et al., 2012); in particular, an extensive experimental evaluation of these methods across a wide range of hyperparameter settings is performed there. In the fourth row of Table 4, we report the most favourable outcome across both types of methods from their large-scale experimental analysis. For testing RCC on TCEP v1.0, we use the code provided in (Lopez-Paz et al., 2015).
4.3 Inferring the Arrow of Time
In addition to the many real-world pairs above, we also test our method at inferring the direction of time on causal time series. Given a time series $(X_t)_t$, the task is to infer whether $X_t \rightarrow X_{t+1}$ or $X_{t+1} \rightarrow X_t$.
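A minimal way to apply the pairwise machinery to this task, assuming the kcdc_direction sketch from Section 3, is to form lagged pairs; the wrapper below is a hypothetical illustration, not the paper's exact protocol.

```python
def arrow_of_time(series, **kw):
    """Apply the direct-comparison rule to lagged pairs (X_t, X_{t+1})."""
    x_t, x_next = series[:-1], series[1:]
    return kcdc_direction(x_t[:, None], x_next[:, None], **kw)
```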
We use a dataset containing quarterly growth rates of the real gross domestic product (GDP) of the UK, Canada and the USA from 1980 to 2011, as in (Bauer et al., 2016). The resulting multivariate time series has length 124 and dimension three. Following the above selection of hyperparameters on the synthetic datasets, we chose a wide range of hyperparameters to test KCDC on. In particular, both on the response and the input we used either a log kernel or an RBF kernel with bandwidth set to a multiple of the median heuristic. Across all of these hyperparameters, KCDC correctly identifies the causal direction, with the confidence measure (the absolute relative difference between the KCDC measures) varying across settings. We compare our approach to methods readily applicable to causal inference on multivariable time series. In particular, LiNGAM does not identify the correct direction. On the other hand, the method developed in (Bauer et al., 2016), which models the data as an autoregressive moving average model with non-Gaussian noise, correctly identifies the causal direction.

5 Extensions to the Multivariable Case
While we present and discuss our method for the case of pairs of variables, it can be extended to the setting of more than two variables. Assuming we have $d$ variables $\{X_1, \dots, X_d\}$ with $d > 2$, we can apply KCDC to every pair of variables $(X_i, X_j)$ with $i \ne j$ while conditioning on all of the remaining variables. This corresponds to inferring the causal relationship between $X_i$ and $X_j$ while accounting for the confounding effect of all the remaining variables.
Another way of dealing with the multivariable setting is to use KCDC in conjunction with, for example, the PC algorithm (Spirtes et al., 2000). In particular, one would first apply the PC algorithm to the data. The resulting DAG skeleton, containing potentially many unoriented edges, can then be processed with KCDC: our method can be applied sequentially to every pair of variables that is connected with an unoriented edge while conditioning on the remaining variables in the DAG, as sketched below.
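A sketch of this hybrid use follows; for brevity it omits the conditioning on the remaining variables that the text prescribes, and skeleton_edges is a hypothetical list of undirected edges produced by a PC implementation.

```python
def orient_with_kcdc(skeleton_edges, data, **kw):
    """Orient the undirected edges of a PC skeleton with the KCDC rule."""
    return {(i, j): kcdc_direction(data[:, [i]], data[:, [j]], **kw)
            for i, j in skeleton_edges}
```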
Yet another approach to the multivariable case is to use KCDC measures as features in a multiclass classification problem over $d$-dimensional distributions. However, as noted in (Lopez-Paz et al., 2015), this approach quickly becomes rather cumbersome, as the number of labels grows super-exponentially in the number of variables due to the rapid increase of the number of DAGs that can be constructed from $d$ variables.
6 Discussion
In this paper, we proposed a fully nonparametric causal inference method that uses purely observational data and does not postulate a priori assumptions on the functional relationship between the variables or the noise structure. As part of this, we developed a novel interpretation of the notion of asymmetry between cause and effect using information-theoretic considerations. In particular, we proposed to reason about this asymmetry in terms of the variability, across different values of the input, of the minimal description length of programs implementing the data-generating process of conditional distributions. As computing the Kolmogorov complexity is not tractable, we used the RKHS framework to construct highly expressive approximations in terms of the RKHS norm of conditional distribution embeddings. In order to quantify the description length variability, we proposed a flexible measure in terms of the within-set deviance of the RKHS norms of conditional mean embeddings. Based on this measure, we presented three decision rules for causal inference based on direct comparison, ensembling and classification, respectively. We extensively tested our proposed method across a wide range of diverse synthetic datasets, showcasing its wide applicability. Furthermore, we tested our method on real-world time series data and the real-world benchmark dataset Tübingen Cause-Effect Pairs, where we outperformed existing state-of-the-art methods by a significant margin.
Although we focused on conditional mean embeddings, there exist other representations of conditional distributions in the RKHS, e.g. conditional embedding operators. The study of these representations and their comparison to KCDC is left for future work. As KCDC was developed under the assumption of causal sufficiency, extending it to explicitly model confounding is another possible avenue for future research.
References
 Bauer et al. [2016] Stefan Bauer, Bernhard Schölkopf, and Jonas Peters. The arrow of time in multivariate time series. In International Conference on Machine Learning, pages 2043–2051, 2016.
 Budhathoki and Vreeken [2017] Kailash Budhathoki and Jilles Vreeken. Causal inference by stochastic complexity. arXiv preprint arXiv:1702.06776, 2017.
 Chickering [2002] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3(Nov):507–554, 2002.
 Daniusis et al. [2012] Povilas Daniusis, Dominik Janzing, Joris Mooij, Jakob Zscheischler, Bastian Steudel, Kun Zhang, and Bernhard Schölkopf. Inferring deterministic causal relations. arXiv preprint arXiv:1203.3475, 2012.
 Fonollosa [2016] José AR Fonollosa. Conditional distribution variability measures for causality detection. arXiv preprint arXiv:1601.06680, 2016.
 Gärtner et al. [2002] Thomas Gärtner, John W Lloyd, and Peter A Flach. Kernels for structured data. In International Conference on Inductive Logic Programming, pages 66–83. Springer, 2002.
 Goudet et al. [2017] Olivier Goudet, Diviyan Kalainathan, Philippe Caillou, David Lopez-Paz, Isabelle Guyon, Michele Sebag, Aris Tritas, and Paola Tubaro. Learning functional causal models with generative neural networks. arXiv preprint arXiv:1709.05321, 2017.
 Grünwald and Vitányi [2008] Peter D Grünwald and Paul MB Vitányi. Algorithmic information theory. Handbook of the Philosophy of Information, pages 281–320, 2008.
 Hoyer et al. [2009] Patrik O Hoyer, Dominik Janzing, Joris M Mooij, Jonas Peters, and Bernhard Schölkopf. Nonlinear causal discovery with additive noise models. In Advances in neural information processing systems, pages 689–696, 2009.
 Janzing and Schölkopf [2010] Dominik Janzing and Bernhard Schölkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168–5194, 2010.
 Lemeire and Dirkx [2006] Jan Lemeire and Erik Dirkx. Causal models as minimal descriptions of multivariate systems, 2006.
 Lever et al. [2012] Guy Lever, Luca Baldassarre, Sam Patterson, Arthur Gretton, Massimiliano Pontil, and Steffen Grünewälder. Conditional mean embeddings as regressors. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1823–1830, 2012.
 Lopez-Paz et al. [2015] David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, and Iliya Tolstikhin. Towards a learning theory of cause-effect inference. In International Conference on Machine Learning, pages 1452–1461, 2015.
 Mooij et al. [2009] Joris Mooij, Dominik Janzing, Jonas Peters, and Bernhard Schölkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In Proceedings of the 26th annual international conference on machine learning, pages 745–752. ACM, 2009.
 Mooij et al. [2015] Joris M. Mooij, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. Cause-effect pairs repository, 2015. http://webdav.tuebingen.mpg.de/causeeffect/.
 Mooij et al. [2016] Joris M Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. Distinguishing cause from effect using observational data: methods and benchmarks. Journal of Machine Learning Research, 17(32):1–102, 2016.
 Pearl [2000] Judea Pearl. Causality: models, reasoning, and inference. Cambridge University Press, Cambridge, UK, 2000.
 Schölkopf and Smola [2001] Bernhard Schölkopf and Alexander J Smola. Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press, 2001.
 Shimizu et al. [2006] Shohei Shimizu, Patrik O Hoyer, Aapo Hyvärinen, and Antti Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(Oct):2003–2030, 2006.
 Spirtes et al. [2000] Peter Spirtes, Clark Glymour, Richard Scheines, et al. Causation, prediction, and search. MIT Press Books, 2000.
 Stegle et al. [2010] Oliver Stegle, Dominik Janzing, Kun Zhang, Joris M Mooij, and Bernhard Schölkopf. Probabilistic latent variable models for distinguishing between cause and effect. In Advances in Neural Information Processing Systems, pages 1687–1695, 2010.
 Sun et al. [2007a] Xiaohai Sun, Dominik Janzing, and Bernhard Schölkopf. Distinguishing between cause and effect via kernelbased complexity measures for conditional distributions. In ESANN, pages 441–446, 2007a.
 Sun et al. [2007b] Xiaohai Sun, Dominik Janzing, Bernhard Schölkopf, and Kenji Fukumizu. A kernel-based causal learning algorithm. In Proceedings of the 24th International Conference on Machine Learning, pages 855–862. ACM, 2007b.
 Tsamardinos et al. [2006] Ioannis Tsamardinos, Laura E Brown, and Constantin F Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31–78, 2006.
 Zhang et al. [2011] Kun Zhang, Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. Kernel-based conditional independence test and application in causal discovery. In Proceedings of the 27th Annual Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
 Zhang and Hyvärinen [2009] Kun Zhang and Aapo Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 647–655. AUAI Press, 2009.