Causal Compression

Aleksander Wieczorek et al. ∙ Universität Basel ∙ 11/01/2016

We propose a new method of discovering causal relationships in temporal data based on the notion of causal compression. To this end, we adopt the Pearlian graph setting and directed information as an information theoretic tool for quantifying causality. We introduce a chain rule for directed information and use it to motivate causal sparsity. We show two applications of the proposed method: causal time series segmentation, which selects time points capturing the incoming and outgoing causal flow between time points belonging to different signals, and causal bipartite graph recovery. We prove that modelling causality in the adopted set-up only requires estimating the copula density of the data distribution and thus does not depend on its marginals. We evaluate the method on time-resolved gene expression data.


1 Introduction

Causality modelling has recently received much attention in the machine learning community, and various approaches to discovering causal relationships in data have been proposed. The idea of an intervention, i.e. forcing a node in a graphical model to a particular value and analysing the resulting distribution, was introduced in (Pearl, 2009) and further developed in (Eberhardt and Scheines, 2007; Hagmayer et al., 2007), as well as adjusted to account for observational data (Hauser and Bühlmann, 2015). The use of structural equation models has been advocated in (Pearl, 2012; Peters, 2012). Expressing Pearl's intervention calculus in terms of information theoretic concepts capturing the difference between interventional and observational distributions has produced a rich literature, e.g. (Raginsky, 2011; Amblard and Michel, 2013; Massey, 1990). As a result, asymmetric information theoretic measures (Massey, 1990) are used for modelling causal relationships in graphical models.

We build on these ideas by employing directed information between time series to quantify the amount of directed information flow. In this setting, we introduce the notion of causal compression, i.e. compression which, by maximising directed information, selects time points carrying the causal flow between the time series. We show that sparsity of compression ensures causal compression, i.e. only the nodes or edges which reflect directed causal connections are selected. We then construct a constrained optimisation problem for finding a causal sparse representation of a time series. We also show that the modelled directed relationships only depend on the copula.

Motivation

We motivate our compression-based approach by a general problem-solving principle formulated by Vapnik in the context of learning theory (Vapnik, 1995): "When solving a problem, try to avoid solving a more general problem as an intermediate step." We interpret it in the following manner: it is not necessary to infer a general structure G from data D if one is only interested in a function f preserving only certain semantics of D (see Figure 1(a)). In our setting, the general structure G, the estimation of which we try to avoid, is the full causal network. We show that the partial semantics defined by f can be obtained by employing causal compression without inferring G. We give two examples of f, presented schematically in Figure 1(b). The first is the causal segmentation of the time points of one time series into those that exhibit outgoing or incoming causal flow to the other time series (orange and green nodes in Figure 1(b), respectively) and those involved in instantaneous information exchange (blue nodes in Figure 1(b)). The second example of f is causal bipartite graph estimation, i.e. computing a mixed bipartite graph between the two time series, where arrows denote causal dependence and undirected edges denote instantaneous information exchange.

Figure 1: Causal compression — motivation. (a) Problem setting. (b) Direct computation of f.

We show how to compute the two functions f for two parallel time series, which might describe two evolving systems, but all the concepts can be extended to more general structures according to (Raginsky, 2011), as long as a global ordering of the random variables is known.

This paper is structured as follows: Section 2 formalises the setting of Pearlian graphs and introduces the formalism of directed information theory, both for directed acyclic graphs and for time series. In Section 3, the proposed causal compression principle is described, its applications to causal segmentation of time series and to directed bipartite graph recovery are characterised, and the copula formulation of causal discovery is presented. Section 4 includes experiments on synthetic as well as gene expression data. Concluding remarks are presented in Section 5.

2 Related work

Causal graphs.

Causal relationships in graphical models are frequently represented with directed acyclic graphs (DAGs). The arrows in the graphs can be imbued with causal interpretation in different ways, e.g. by considering differences in factorisations of the joint probability distribution (Janzing and Scholkopf, 2010) or by introducing the causal Markov condition (Spohn, 1980). We follow the approach to representing causality with DAGs proposed in (Pearl, 2009). It requires that one be able to perform, or think of performing, an intervention on any node or collection of nodes in the graph. An intervention means that the variable intervened upon has its value set externally, while the influence of any other variables in the DAG (most importantly its parents) upon it is suppressed. This process corresponds to measuring the influence of a chosen set of variables on the rest of the system. A Pearlian DAG satisfies two more conditions:

  1. It represents the conditional independence relations of the underlying probability distribution via d-separation, i.e. any pair of sets of variables d-separated by a third set C are conditionally independent given C.

  2. For any node v, its conditional distribution given its parents does not depend on interventions on any other nodes in the DAG.

The latter condition is called modularity and can be shown to imply locality, i.e. that only interventions performed on the parents of a node v affect v. This can be thought of as an intuitive extension of conditional independence representation in graphical models to causality: the absence of an arrow between two nodes implies the absence of a direct causal relationship between them. Thus, Pearlian DAGs, while representing conditional independence relationships in the data, also provide a setting for analysing direct causal relationships.

Causal graphs and directed information.

Let G = (V, E) be a Pearlian graph where the elements of X_V = (X_v)_{v ∈ V} are random variables taking values in 𝒳, and assume their joint distribution to have a density.

Define the Kullback–Leibler divergence between two (discrete or continuous) probability distributions P and Q as D(P ‖ Q) = E_P[log(dP/dQ)], and the conditional Kullback–Leibler divergence as D(P_{Y|X} ‖ Q_{Y|X} | P_X) = E_{P_X}[D(P_{Y|X} ‖ Q_{Y|X})]. The mutual information between X and Y is then defined as I(X; Y) = D(P_{X,Y} ‖ P_X P_Y).

For any disjoint A, B ⊆ V, denote by P(x_B | x̂_A) the interventional distribution of X_B, i.e. the distribution of X_B which results from intervening on X_A by setting its value to x_A as described above. This distribution is contrasted with the observational distribution P(x_B | x_A) of X_B, which is obtained by passively observing the values of X_A.

The idea of defining causality as the difference between the two distributions has recently gained popularity (Massey, 1990; Raginsky, 2011; Amblard and Michel, 2013). It was originally introduced in (Massey, 1990) as causal conditioning of time series and extended to the notion of directed stochastic kernels in (Tatikonda and Mitter, 2009). We follow this approach and define the directed information as:

I(X_A → X_B) := D( P(x_B | x_A) ‖ P(x_B | x̂_A) | P(x_A) ).    (1)

This measure has an intuitive interpretation in the setting of interventions in Pearlian graphs: if its value is small, then the two distributions are similar, thus any common changes of X_A and X_B can be identified without intervening on X_A. Otherwise, performing an intervention on X_A has influenced the distribution of X_B; hence the difference must stem from the connections between A and B in G, which were destroyed while intervening on X_A.

The directed information as defined in Eq. (1) quantifies the difference between the interventional and the observational conditional distribution of X_B given X_A. One might also consider a mixed quantity, i.e. an interventional distribution with conditioning on a set of passive observations. This leads to the definition of conditional directed information for three disjoint sets A, B, C ⊆ V (Raginsky, 2011):

I(X_A → X_B | X_C) := D( P(x_B | x_A, x_C) ‖ P(x_B | x̂_A, x_C) | P(x_A, x_C) ).    (2)

Analogously to directed information, the conditional directed information defined in Eq. (2) can be interpreted as a measure of the causal relationship between X_A and X_B when paths traversing C in the underlying DAG are excluded.

Directed Information Theory for Time Series.

For the sequel, assume X^n = (X_1, …, X_n) and Y^n = (Y_1, …, Y_n) to be random vectors representing time series indexed at the same time points.

For a Pearlian DAG where V consists of two time series X^n, Y^n and E of all possible arrows pointing to the future (i.e. all arrows X_i → X_j, X_i → Y_j, Y_i → X_j, Y_i → Y_j with i < j), the directed information defined in Eq. (1) takes the following form (see (Massey, 1990; Amblard and Michel, 2013)):

I(X^n → Y^n) = ∑_{i=1}^{n} I(X^i; Y_i | Y^{i−1}),    (3)

where X^i = (X_1, …, X_i). As in (Amblard and Michel, 2013), DX^n or DY^n stand for delayed collections of samples of X^n or Y^n, and their first elements should be understood as wild cards not influencing the conditioning. This means that the symbol DX^n denotes an n-dimensional time series (∅, X_1, …, X_{n−1}), where the first element ∅ does not affect conditioning, i.e. P(· | ∅, z) = P(· | z) for any z.

In (Amblard and Michel, 2013) it is also shown that the following decomposition of mutual information into directed informations and an instantaneous coupling term exists:

I(X^n; Y^n) = I(DX^n → Y^n) + I(DY^n → X^n) + ΔI(X^n → Y^n),    (4)

where ΔI(X^n → Y^n) := ∑_{i=1}^{n} I(X_i; Y_i | X^{i−1}, Y^{i−1}) is the instantaneous coupling.

For jointly Gaussian distributed X^n, Y^n, let the partitioning of the joint covariance matrix be denoted as follows:

Σ = ( Σ_X    Σ_XY
      Σ_YX   Σ_Y ).

Recall that for d-dimensional Gaussian distributed random variables, entropy, and hence mutual information, have the following form (for the sake of clarity we will neglect the constant term in the sequel):

h(X) = ½ log( (2πe)^d |Σ_X| ) ≅ ½ log |Σ_X|,    (5)

where Σ_X denotes the covariance matrix of X. Hence, for jointly Gaussian distributed X^n and Y^n we have:

I(X^n → Y^n) = ∑_{i=1}^{n} ½ log( (|Σ_{(X^i, Y^{i−1})}| · |Σ_{Y^i}|) / (|Σ_{Y^{i−1}}| · |Σ_{(X^i, Y^i)}|) ).    (6)

There have been several attempts at discovering causal relationships in time series. Transfer entropy, which is the asymptotic value of the directed information when stationarity of the time series is assumed, has been used to model neurobiological data (Shimono and Beggs, 2015; Vicente et al., 2011). In (Quinn et al., 2015, 2011), directed information is employed to measure causal relationships between nodes in networks of stochastic processes by approximating the probability distribution of the graphical model with causal trees or causal graphs. Unlike those approaches, we do not treat time series (or random processes) as nodes and do not model relationships between such nodes. Rather than focusing on the interdependencies among multiple time series, we propose to model causal relationships between specific time points of the time series. We also do not have to make any assumptions concerning stationarity.

An alternative approach to causal compression would be to devise a series of statistical tests of conditional mutual informations in order to compute the directed information (according to Eq. (3)). In this set-up, however, all subsets of the time series would have to be tested in order to establish the optimal representation. In contrast, the proposed method produces a solution path in which the nodes comprising the compressed representation are added one by one as the sparsity criterion is relaxed.

3 Causal Compression

We propose to combine directed information theory as introduced in Section 2 with sparse optimisation in order to compute the causal compression of time series as motivated in Section 1. In this way, directed relationships in the data can be modelled. To this end, we note that sparse representation ensures causal compression, i.e. for a given value of directed information, choosing the sparsest time series representation is equivalent to excluding the nodes that do not contribute to the direct causal relationships in the Pearlian graph (see Corollary 1). Let X^n and Y^n represent the two time series. The time series Z^n, which is the result of the compression, is a sparse representation of X^n that preserves a given amount of the directed information between X^n and Y^n. This representation is obtained by introducing the causal compression principle, whereby the sparse Z^n maximising the directed information is found. We subsequently (Sections 3.2 and 3.3) present two applications of causal compression. Both applications are ways of circumventing the necessity of estimating the whole causal network, as presented in Figure 1(b). Finally, we show that solutions to both problems only depend on the copula density of the data.

The property of sparsity as a building block of causal compression is formalised as Corollary 1. We first show that the equivalent of the chain rule for mutual information also holds for directed information (Lemma 1), and subsequently apply it to the time series X^n, Y^n.

Lemma 1 (Chain rule for directed information).

For any disjoint sets A, B, C ⊆ V,

I(X_A, X_B → X_C) = I(X_A → X_C) + I(X_B → X_C | X_A).    (7)
Proof.

The proof is similar to that of the chain rule for conditional Kullback–Leibler divergence and follows from the factorisation of the underlying probability distribution (see Eq. (1)), where all expectations are taken with respect to P(x_A, x_B, x_C). ∎

Corollary 1 (Causal compression is equivalent to sparsity).

For B′ ⊆ B,

I(X_B → X_C) = I(X_{B′} → X_C)  ⟺  I(X_{B∖B′} → X_C | X_{B′}) = 0.    (8)

Corollary 1 states that for the same value of directed information between a subset X_{B′} of X_B and X_C, adding more variables to the subset means adding variables which do not exhibit causal (in the sense of Pearlian graphs) relations with X_C other than via the original X_{B′}. Therefore, the optimal causal compression at a given level of directed information is ensured by the sparsity of the compressed representation of X_B, i.e. by selecting as few time points as possible. Note that Corollary 1 can be interpreted in the spirit of Granger causality: the variables in X_B that are not selected by the sparsity requirement do not Granger-cause the effect X_C.

Based on Corollary 1, the idea behind the causal compression principle is to find the compressed representation (here, Z^n) of a set of random variables by maximising the directed information involving Z^n while enforcing its sparsity. The causal compression principle can therefore be implemented in any method that:

  1. admits the use of directed information or cognate information theoretic tools,

  2. allows for incorporation of sparsity.

We now proceed to describe two ways of applying causal compression as depicted in Figure 1(b), by assuming the time series X^n, Y^n to be jointly Gaussian distributed and devising an optimisation problem¹ for finding the optimal sparse representation of X^n. We then relax the Gaussian assumption in Section 3.4. We begin by specifying and solving the optimisation problem which implements the causal compression principle in Section 3.1.

¹ Note that other approaches can be proposed for implementing the causal compression principle, such as adjusting the sparse Gaussian information bottleneck (Rey et al., 2014) for preserving directed information.

3.1 Defining and solving the optimisation problem

According to the conditions specified in Section 3, we define Z^n, the compressed version of X^n, to be a linear noisy projection of X^n, i.e. Z^n = D X^n + ξ with ξ ∼ N(0, I_n). In order to impose sparsity of the projection, we assume D to be diagonal. Thus, the non-zero entries of D define which elements of X^n are chosen for the sparse representation. Note that if the projection were not noisy, the optimisation problem would reduce to binary feature selection akin to the statistical tests approach considered in Section 2 (the directed information between Z^n and Y^n would then only depend on the rank of D). We incorporate directed information by maximising its value according to the decomposition given by Eq. (4). The assumption that (X^n, Y^n) is jointly Gaussian distributed means that (Z^n, Y^n) is jointly Gaussian as well. Thus, the optimisation problem for finding the causal compression of X^n described in Section 3 can be stated in a LASSO fashion as follows:

max_D I(DZ^n → Y^n)  subject to  ‖D‖₁ ≤ κ,  d_ii ≥ 0,    (9)

where D is a diagonal matrix and ‖D‖₁ its ℓ1 norm. Plugging Eq. (6) into Eq. (9) and noting that Σ_Z = D Σ_X D + I, yields:

max_D ∑_{i=2}^{n} ½ log( (|Σ_{(Z^{i−1}, Y^{i−1})}| · |Σ_{Y^i}|) / (|Σ_{Y^{i−1}}| · |Σ_{(Z^{i−1}, Y^i)}|) )  subject to  ‖D‖₁ ≤ κ,  d_ii ≥ 0,    (10)

where the covariance matrices involving Z^n are obtained from those of (X^n, Y^n) via Σ_Z = D Σ_X D + I and Σ_ZY = D Σ_XY.

Greedy optimisation methods such as forward stagewise (Tibshirani, 2014) can now be applied to approximate the optimal solution to (9). The forward stagewise procedure recovers the whole solution path. For handling the non-negativity constraints on the elements of D, we use gradient projection in the spirit of the monotone forward stagewise method (Hastie et al., 2007). This procedure is formalised as Algorithm 1.

The gradient computed in line 3 of Algorithm 1 is a sum of terms of the form ∂/∂d_jj log |Σ_{(Z^{i−1}, Y^i)}| for every time point i and every diagonal entry d_jj. By applying the Sherman–Morrison formula for rank-one updates, the gradient can be computed efficiently in every iteration, assuming that the covariance matrix has been precomputed. The while loop is executed at most ⌈κ/ε⌉ times, where κ is the sparsity parameter and ε the learning rate, which bounds the running time of Algorithm 1.

Input: Sample covariance matrix Σ, learning rate ε, sparsity parameter κ
Output: Diagonal matrix D
1 Initialise D ← 0
2 while ‖D‖₁ < κ do
3       g ← ∇_D I(DZ^n → Y^n)
4       if max_j g_jj ≤ 0 then
5            break
6       j* ← argmax_j g_jj
7       d_{j*j*} ← d_{j*j*} + ε
8 return D
Algorithm 1: Optimisation algorithm for Eq. (10)

3.2 Causal Segmentation

In this section we show how the causal compression principle can be used to classify points in a time series into three classes with respect to another time series: points carrying incoming directed information, outgoing directed information and points instantaneously coupled with corresponding points from the other time series.

The above optimisation problem finds a set S_out ⊆ {1, …, n}, which is a compressed representation of X^n. This compressed representation (i.e. the non-zero entries of D in Eq. (9)) defines the segment of X^n containing all the nodes X_i that carry directed information from X^n to Y^n, i.e. towards nodes Y_j with i < j (orange nodes in Figure 1(b)). Thus all nodes in X^n with possible outgoing causal flow to future nodes in Y^n are selected.

As defined in Eq. (4), the mutual information between the compressed representation Z^n of X^n and Y^n decomposes into three elements: I(Z^n; Y^n) = I(DZ^n → Y^n) + I(DY^n → Z^n) + ΔI(Z^n → Y^n). This means that by substituting the directed information I(DZ^n → Y^n) with I(DY^n → Z^n) in Eq. (9) and solving the resulting optimisation problem, the compressed representation of X^n is forced to contain all nodes from X^n carrying the information flow in the other direction, i.e. from nodes Y_j to nodes X_i with j < i. In this way the subset S_in of X^n, containing all the nodes with possible incoming causal flow from past nodes in Y^n, is selected (see green nodes in Figure 1(b)). Analogously, if the objective is replaced with ΔI(Z^n → Y^n) in Eq. (9), one obtains the set S_inst, i.e. the nodes in X^n which are instantaneously coupled with their counterparts in Y^n (blue nodes in Figure 1(b)).

The above procedure can be summarised as follows: in order to fully describe the causal relationships involving X^n, find the segments in X^n containing nodes with outgoing, incoming or instantaneous causal flow. To this end, compress X^n three times, each time modifying the objective in (9) accordingly (see the sketch after this list):

  • optimise I(DZ^n → Y^n) to select S_out: the segment of X^n with outgoing causal flow to the future of Y^n,

  • optimise I(DY^n → Z^n) to select S_in: the segment of X^n with incoming causal flow from the past of Y^n,

  • optimise ΔI(Z^n → Y^n) to select S_inst: the segment of X^n which is instantaneously coupled to Y^n.

3.3 Causal bipartite graph retrieval

In this section we show how to apply the causal compression principle to estimate the causal bipartite graph between two time series without estimating the whole directed network. This corresponds to the left-hand short-cut in Figure 1(b). Note that this is a different problem from the causal segmentation described in Section 3.2, since it is not sufficient to estimate which points are in the sets S_out and S_in; it also has to be established to which points in the other time series the arrows lead. Note that it is straightforward to infer the causal segmentation given the causal bipartite graph, but not the other way around (see Figure 1(a)).

In order to establish the arrows, one can make use of the decomposition of the directed information between X^n and Y^n (Eq. (3)). It consists of a sum of terms of the form I(X^i; Y_i | Y^{i−1}) for all i, where each such term measures the information flow from the past of X^n to the current Y_i. Therefore, by exchanging the objective in Eq. (9) for the single term I(Z^i; Y_i | Y^{i−1}), one obtains a sparse representation of all time points in X^n that make up the causal flow to Y_i, and thus all arrows that lead to Y_i. If this procedure is repeated for all i ∈ {1, …, n}, all arrows from X^n to Y^n are established. The arrows in the other direction, i.e. from Y^n to X^n, are established by simply exchanging X^n and Y^n and finding the sparse compression of Y^n. The undirected edges representing pairs of instantaneously coupled points can be found as described in Section 3.2, since they always connect time points with the same index i.

As in the case of causal segmentation, the above procedure consists of three steps where the causal compression is performed with different optimisation objectives (terms of Eq. (3)):

  • for each i, optimise I(Z^i; Y_i | Y^{i−1}) to select the points of X^n with outgoing causal flow to Y_i; add the arrows from those points to Y_i to the model,

  • for each i, optimise I(Y^i; X_i | X^{i−1}) to select the points of Y^n with outgoing causal flow to X_i; add the arrows from those points to X_i to the model,

  • optimise the instantaneous coupling terms for both series to select S_inst in X^n and in Y^n; add edges between all pairs (X_i, Y_i) for which both X_i and Y_i were selected.

The optimisation problem is solved as described in Section 3.2, the only difference being the substitution of the full directed information with its element corresponding to the time point i, i.e. I(Z^i; Y_i | Y^{i−1}).

3.4 Copula extension

Directed information, as well as conditional mutual information, can be decomposed into a sum of multiinformations (Liu, 2012): I(X; Y | Z) = M(X, Y, Z) + M(Z) − M(X, Z) − M(Y, Z), where M(X) := D(P_X ‖ ∏_i P_{X_i}) denotes the multiinformation. In (Ma and Sun, 2011) it was shown that for a continuous random vector X, its multiinformation is equal to the negative entropy of its copula density, i.e. M(X) = −h(c_X), where c_X is the copula density of the vector X and h(c_X) is evaluated at the marginally uniform scores U = (F_1(X_1), …, F_d(X_d)).

Theorem 1 (Copula formulation of causal discovery).

For continuous (X^n, Y^n), any causal relationship described with directed information only depends on the entropy of the copula density of (X^n, Y^n).²

² Note that this result reaches beyond the time series setting as long as one expresses directed information as a sum of conditional mutual informations, analogously to Eq. (3), with conditioning on parent sets.

This result can be shown by expressing directed information in terms of multiinformations and using their equivalence to the copula entropy as described above.

From Theorem 1 it follows that the causal compression principle, as described in this section, only depends on the copula density of (X^n, Y^n). This means that for inference we only have to estimate the copula part of the distribution. In particular, for Gaussian distributed data only the correlation matrices have to be identified. The Gaussian assumption can therefore be relaxed to the class of distributions with a Gaussian copula, sometimes called meta-Gaussian distributions.

In practice, to fit a semi-parametric copula model (with non-parametric marginals and a parametric Gaussian copula), one has to estimate the correlation matrices between the dimensions of the model. They depend on the normal scores z_ij = Φ⁻¹( r_ij / (m + 1) ), where r_ij is the rank of the i-th of m observations of dimension j. The normal scores rank correlation coefficient between dimensions j and k can then be defined as ρ̂_jk = ∑_{i=1}^{m} z_ij z_ik / ∑_{i=1}^{m} (Φ⁻¹(i/(m+1)))², which is an efficient estimator studied in (Boudt et al., 2012). The correlation matrix made up of such coefficients for all dimensions is positive definite, and is in practice fed into Algorithm 1 in place of the covariance matrix to perform inference on meta-Gaussian distributions.

4 Experiments

4.1 Synthetic data

We first test our approach on artificial data. To this end we draw samples from a multivariate Gaussian model for the stacked vector (X^n, Y^n). We assume the model to be (X^n, Y^n) = L U, with L a lower triangular matrix and U ∼ N(0, I), i.e. (X^n, Y^n) ∼ N(0, L Lᵀ). We define L so that this corresponds to a low-order Markov model within X^n and within Y^n, with three additional cross-links in one direction and four in the other, as well as two instantaneous coupling terms, as depicted in Figure 3.

Based on the above model, we first perform causal segmentation as described in Section 3.2. We compute the sets S_out, S_in and S_inst by varying the objective in Eq. (9). We then compute the full solution path and choose subsets based on an information score (the slope of the red curve in Figure 3), evaluated for every variable at the point where this variable becomes non-zero. A threshold for the information score is obtained from repeated experiments with uncorrelated data.

Subsequently, we perform the recovery of the causal bipartite graph between X^n and Y^n. We compute the arrow and edge sets according to the procedure described in Section 3.3 and add the corresponding arrows and edges to the bipartite graph. We are able both to perform the causal segmentation and to recover the bipartite graph correctly, as presented in Figure 3.

Figure 2: Time-resolved gene expression data from HCV patients: reconstructed causal graphs for the groups of poor and marked responders. The drawing style and corresponding semantic interpretation are the same as in Figure 3.



Figure 3: Top: Ground truth graph. Middle: solution path, information curve and “information score” defined as the derivative at entry points of new variables (left) and causal segmentation of time series (right). Height of the coloured bars represents the information score of the corresponding coefficient, colour refers to direction: red = “outgoing”, green = “incoming”, blue = instantaneous (i.e. undirected). Bottom: recovered bipartite graph.

4.2 Time-resolved gene expression data

As a demonstration of the causal compression principle we have chosen a human hepatitis C virus (HCV) dataset that contains time-resolved gene expression profiles from patients with chronic HCV genotype 1 infection (Taylor et al., 2007). Gene expression was profiled with an HG-U133A GeneChip at six time points after initiation of treatment with pegylated alpha interferon and ribavirin (at days 0, 1, 2, 7, 14, 28, with "0" indicating pre-treatment conditions). Based on the observed decrease in HCV RNA levels at day 28, patients were labelled as having a "marked" (27 patients) or "poor" (25 patients) response to treatment. The data is available from NCBI/GEO under accession no. GSE7123. For our analysis, we focused on two different genes that are known to have a crucial interacting role in interferon signalling, namely the transcription factor STAT1 and the interferon-induced antiviral gene IFIT3. Note that these transcriptional interactions between genes take place on timescales of the order of hours, which would appear as instantaneous couplings in our dataset with its timescales from days to weeks. We used the same experimental setup as described for synthetic data in Section 4.1, i.e. the causal compression principle for the reconstruction of causal bipartite graphs. The analysis was carried out separately for the "marked" and the "poor" responders, see Figure 2. There are pronounced differences between the two groups: generally, in the marked responders, the interferon therapy destroys most of the normally tight interactions between STAT1 and IFIT3 (complete loss of instantaneous coupling terms), whereas these interactions seem to be largely unaffected in the poor responders. Secondly, both groups show causal pre-treatment/post-treatment interactions, but for the marked responders, the influence of initial IFIT3 on late STAT1 values is much more prominent. This latter observation might be particularly interesting, since pre-treatment expression levels of interferon-induced genes are known to be strong predictors of treatment response (Dill et al., 2011), but the underlying mechanism of this effect is largely unknown.

5 Conclusion

We have proposed a new way of discovering causal relationships in temporal data by employing causal compression. We have introduced the chain rule for directed information and proved that causal compression is equivalent to sparsity. Conditions under which the principle of causal compression can be employed have been identified.

We have demonstrated how to tune the compression procedure for the case of time series distributed with a Gaussian copula. A method of causal time series segmentation with respect to incoming and outgoing causal flow as well as instantaneous coupling was proposed in Section 3.2. Recovery of causal interactions between two time series in the form of a directed bipartite graph was described in Section 3.3. Note that the causal compression principle remains valid for arbitrary Pearlian graphs other than time series and for non-Gaussian data, as long as the directed information can be computed.

The third contribution of the paper is the proposition that directed information can be expressed as a function of the entropy of the copula density only, as stated in Theorem 1. In the Gaussian case this means that one only has to estimate the correlation matrices from the data. Modelling causality in the framework of Pearlian graphs thus only requires knowing the copula structure of the modelled data and is independent of the marginals.

References

  • Amblard and Michel (2013) Pierre-Olivier Amblard and Olivier J. J. Michel. The relation between Granger causality and directed information theory: A review. Entropy, 15(1):113, 2013.
  • Boudt et al. (2012) Kris Boudt, Jonathan Cornelissen, and Christophe Croux. The Gaussian rank correlation estimator: robustness properties. Statistics and Computing, 22(2):471–483, 2012. ISSN 1573-1375. doi: 10.1007/s11222-011-9237-0.
  • Dill et al. (2011) Michael T Dill, Francois HT Duong, Julia E Vogt, Stéphanie Bibert, Pierre-Yves Bochud, Luigi Terracciano, Andreas Papassotiropoulos, Volker Roth, and Markus H Heim. Interferon-induced gene expression is a stronger predictor of treatment response than IL28B genotype in patients with hepatitis C. Gastroenterology, 140(3):1021–1031, 2011.
  • Eberhardt and Scheines (2007) Frederick Eberhardt and Richard Scheines. Interventions and causal inference. Philosophy of Science, 74(5):981–995, 2007.
  • Hagmayer et al. (2007) York Hagmayer, Steven A Sloman, David A Lagnado, and Michael R Waldmann. Causal reasoning through intervention. 2007.
  • Hastie et al. (2007) Trevor Hastie, Jonathan Taylor, Robert Tibshirani, Guenther Walther, et al. Forward stagewise regression and the monotone lasso. Electronic Journal of Statistics, 1:1–29, 2007.
  • Hauser and Bühlmann (2015) Alain Hauser and Peter Bühlmann. Jointly interventional and observational data: estimation of interventional Markov equivalence classes of directed acyclic graphs. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 77(1):291–318, 2015.
  • Janzing and Scholkopf (2010) Dominik Janzing and Bernhard Scholkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168–5194, 2010.
  • Liu (2012) Ying Liu. Directed Information for Complex Network Analysis from Multivariate Time Series. PhD thesis, East Lansing, MI, USA, 2012. AAI3516352.
  • Ma and Sun (2011) Jian Ma and Zengqi Sun. Mutual information is copula entropy. Tsinghua Science & Technology, 16(1):51–54, 2011.
  • Massey (1990) James Massey. Causality, feedback and directed information. In Proc. Int. Symp. on Information Theory and its Applications (ISITA-90), 1990.
  • Pearl (2009) Judea Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2nd edition, 2009. ISBN 9781139643986.
  • Pearl (2012) Judea Pearl. The causal foundations of structural equation modeling. Technical report, DTIC Document, 2012.
  • Peters (2012) Jonas Martin Peters. Restricted structural equation models for causal inference. PhD thesis, University of Heidelberg, 2012.
  • Quinn et al. (2011) Christopher J Quinn, Todd P Coleman, and Negar Kiyavash. Causal dependence tree approximations of joint distributions for multiple random processes. arXiv preprint arXiv:1101.5108, 2011.
  • Quinn et al. (2015) Christopher J Quinn, Negar Kiyavash, and Todd P Coleman. Directed information graphs. IEEE Transactions on Information Theory, 61(12):6887–6909, 2015.
  • Raginsky (2011) Maxim Raginsky. Directed information and pearl’s causal calculus. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pages 958–965. IEEE, 2011.
  • Rey et al. (2014) Melanie Rey, Volker Roth, and Thomas Fuchs. Sparse meta-gaussian information bottleneck. In Tony Jebara and Eric P. Xing, editors, Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 910–918. JMLR Workshop and Conference Proceedings, 2014.
  • Shimono and Beggs (2015) Masanori Shimono and John M Beggs. Functional clusters, hubs, and communities in the cortical microconnectome. Cerebral Cortex, 25(10):3743–3757, 2015.
  • Spohn (1980) Wolfgang Spohn. Stochastic independence, causal independence, and shieldability. Journal of Philosophical logic, 9(1):73–99, 1980.
  • Tatikonda and Mitter (2009) Sekhar Tatikonda and Sanjoy Mitter. The capacity of channels with feedback. IEEE Transactions on Information Theory, 55(1):323–349, 2009.
  • Taylor et al. (2007) Milton W Taylor, Takuma Tsukahara, Leonid Brodsky, Joel Schaley, Corneliu Sanda, Matthew J Stephens, Jeanette N McClintick, Howard J Edenberg, Lang Li, John E Tavis, et al. Changes in gene expression during pegylated interferon and ribavirin therapy of chronic hepatitis C virus distinguish responders from nonresponders to antiviral therapy. Journal of Virology, 81(7):3391–3401, 2007.
  • Tibshirani (2014) Ryan J Tibshirani. A general framework for fast stagewise algorithms. arXiv preprint arXiv:1408.5801, 2014.
  • Vapnik (1995) Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., New York, NY, USA, 1995. ISBN 0-387-94559-8.
  • Vicente et al. (2011) Raul Vicente, Michael Wibral, Michael Lindner, and Gordon Pipa. Transfer entropy–a model-free measure of effective connectivity for the neurosciences. J. Comput. Neurosci., 30(1):45–67, 2011. ISSN 0929-5313. doi: 10.1007/s10827-010-0262-3.