Large-scale directed network inference with multivariate transfer entropy and hierarchical statistical testing

02/18/2019, by Leonardo Novelli et al.

Network inference algorithms are valuable tools for the study of large-scale neuroimaging datasets. Multivariate transfer entropy is well suited for this task, being a model-free measure that captures nonlinear and lagged dependencies between time series to infer a minimal directed network model. Greedy algorithms have been proposed to efficiently deal with high-dimensional datasets while avoiding redundant inferences and capturing synergistic effects. However, multiple statistical comparisons may inflate the false positive rate and are computationally demanding, which has limited the size of previous validation studies. The algorithm we present, as implemented in the IDTxl open-source software, addresses these challenges by employing hierarchical statistical tests to control the family-wise error rate and to allow for efficient parallelisation. The method was validated on synthetic datasets involving random networks of increasing size (up to 100 nodes), for both linear and nonlinear dynamics. The performance increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10000 time samples. Varying the statistical significance threshold showed a more favourable precision-recall trade-off for longer time series. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and MEG experiments.


I Introduction

The increasing availability of large-scale, fine-grained datasets provides an unprecedented opportunity for quantitative studies of complex systems. Nonetheless, a shift towards data-driven modelling of these systems requires efficient algorithms for analysing multivariate time series, which are obtained from observation of the activity of a large number of elements.

In the field of neuroscience, the multivariate time series typically obtained from brain recordings serve to infer minimal (effective) network models which can explain the dynamics of the nodes in a neural system. The motivation for such models can be, for instance, to describe a causal network (Friston, 1994; Ay and Polani, 2008) or to model the directed information flow in the system (Vicente et al., 2011) in order to produce a minimal computationally equivalent network (Lizier and Rubinov, 2012).

Information theory (Shannon, 1948; Cover and Thomas, 2005) is well suited for the latter motivation of inferring networks that describe information flow as it provides model-free measures that can be applied at different scales and to different types of recordings. These measures, including conditional mutual information (Cover and Thomas, 2005) and transfer entropy (Schreiber, 2000), are based purely on probability distributions and are able to identify nonlinear relationships (Paluš et al., 1993). Most importantly, information-theoretic measures allow the interpretation of the results from a distributed computation or information processing perspective, by modelling the information storage, transfer, and modification within the system (Lizier, 2013). Therefore, information theory simultaneously provides the tools for building the network model and the mathematical framework for its interpretation.

The general approach to network model construction can be outlined as follows: for any target process (element) in the system, the inference algorithm selects the minimal set of processes that collectively contribute to the computation of the target’s next state. Every process can be separately studied as a target and the results can be combined into a directed network describing the information flows in the system.

This task presents several challenges:

  • The state space of the possible network models grows faster than exponentially with respect to the size of the network;

  • Information-theoretic estimators suffer from the “curse of dimensionality” for large sets of variables (Roulston, 1999; Paninski, 2003);

  • In a network setting, statistical significance testing requires multiple comparisons. This results in a high false positive rate (type I errors) without adequate family-wise error rate controls (Dickhaus, 2014) or a high false negative rate (type II errors) with naive control procedures;

  • Nonparametric statistical testing based on shuffled surrogate time series is computationally demanding but currently necessary when using general information-theoretic estimators (Lindner et al., 2011; Bossomaier et al., 2016).

Several previous studies (Vlachos and Kugiumtzis, 2010; Faes et al., 2011; Lizier and Rubinov, 2012; Sun et al., 2015) proposed greedy algorithms to tackle the first two challenges outlined above (see a summary by Bossomaier et al. (2016, sec 7.2)). These algorithms mitigate the curse of dimensionality by greedily selecting the random variables that iteratively reduce the uncertainty about the present state of the target. The reduction of uncertainty is rigorously quantified by the information-theoretic measure of conditional mutual information (CMI), which can also be interpreted as a measure of conditional independence (Cover and Thomas, 2005). In particular, these previous studies employed multivariate forms of the transfer entropy (i.e., conditional and collective forms (Lizier et al., 2008, 2010)). In general, such greedy optimisation algorithms provide a locally optimal solution to the NP-hard problem of selecting the most informative set of random variables. An alternative optimisation strategy, also based on conditional independence, employs a preliminary step to prune the set of sources (Runge et al., 2012, 2018). Despite this progress, the computational challenges posed by the estimation of multivariate transfer entropy have severely limited the size of problems investigated in previous validation studies in the general case of nonlinear estimators: the networks and time series analysed by Montalto et al. (2014), Kim et al. (2016), and Runge et al. (2018) were substantially smaller than modern neural recordings, which often provide hundreds of nodes and tens of thousands of samples.

These computational challenges, as well as the multiple testing challenges described above, are addressed here by the implementation of rigorous statistical tests, which represent the main theoretical contribution of this paper. These tests are used to control the family-wise error rate and are compatible with parallel processing, allowing the simultaneous analysis of multiple targets. This is a crucial feature, which enabled an improvement on the previous greedy algorithms: exploiting the parallel computing capabilities of high-performance computing clusters and graphics processing units (GPUs) enabled the analysis of networks at a relevant scale for brain recordings, up to 100 nodes and 10000 samples. Our algorithm has been implemented in the recently released IDTxl Python package (the “Information Dynamics Toolkit xl” (Wollstadt et al., 2019)). The “Information Dynamics Toolkit xl” is an open-source Python package available on GitHub (github.com/pwollstadt/IDTxl); in this paper, we refer to the current release at the time of writing (v1.0).

We validated our method on synthetic datasets involving random structural networks of increasing size (also referred to as ground truth) and different types of dynamics (vector autoregressive processes and coupled logistic maps). In general, effective networks reflect dynamic changes in the regime of the system and need not mirror an underlying structural network. Nonetheless, in the absence of hidden nodes (and under further assumptions, including stationarity and the causal Markov condition), the inferred information network was proven to reflect the underlying structure for a sufficiently large sample size (Sun et al., 2015). Experiments under these conditions provide arguably the most important validation that the algorithm performs as expected, and here we perform the first large-scale empirical validation for non-Gaussian variables. As shown in the Results, the performance of our algorithm increased with the length of the time series, reaching consistently high precision, recall, and specificity (>98% on average) for 10000 time samples. Varying the statistical significance threshold showed a more favourable precision-recall trade-off for longer time series.

II Methods

II.1 Definitions and assumptions

Let us consider a system of discrete-time stochastic processes for which a finite number of samples have been recorded (over time and/or in different replications of the same experiment). In general, let us assume that the stochastic processes are stationary in each experimental time-window and Markovian with finite memory, i.e., the present state of the target does not depend on the past values of the target and the sources beyond a maximum finite lag. Further assumptions will be made for the validation study.

The following quantities are needed for the setup and formal treatment of the algorithm and are visualised in Figure 1 and Figure 2:

  • target process: a process of interest within the system; the choice of the target process is arbitrary and all the processes in the system can separately be studied as targets;

  • source processes: the remaining processes within the system;

  • sample number (or size): the number of samples recorded over time;

  • replication number: the number of replications of the same experiment (e.g., trials);

  • target present state: the random variable (RV) representing the state of the target at time t, whose information contributors will be inferred;

  • candidate target past: an arbitrary finite set of RVs in the past of the target, up to a maximum lag;

  • candidate sources past: an arbitrary finite set of RVs in the past of the sources, up to a maximum lag;

  • selected target past: the subset of RVs within the candidate target past set that maximally reduces the uncertainty about the present state of the target;

  • selected sources past: the subset of RVs within the candidate sources past set that maximally further reduces the uncertainty about the present state of the target, in the context of the selected target past (explained in detail in the following section).

Figure 1: Example of a possible definition of the candidate sets. The bottom row represents the time series of the target process, with the present state highlighted in green and the candidate target past set highlighted in red (up to a maximum lag). The remaining rows represent the time series of the source processes, with the candidate sources past set highlighted in blue (up to a maximum lag). For simplicity, only a single trial of the experiment is represented.

II.2 Inference algorithm

For a given target process, the goal of the algorithm is to infer the minimal set of information contributors to its present state, defined as the selected sources past set, in the context of the relevant information contributors from the candidate target past set, defined as the selected target past set.

The algorithm operates in four steps:

  1. Select variables in the candidate target past set to obtain the selected target past set;

  2. Select variables in the candidate sources past set to obtain the selected sources past set;

  3. Prune the selected sources past variables;

  4. Test the relevant variables collectively for statistical significance.

The operations performed in the four steps are described in detail hereafter; the result is a nonuniform embedding of the target and sources time series, as illustrated in Figure 2. The term embedding refers to the property of the selected set in capturing the underlying state of the process as it relates to the target’s next value, akin to a Takens’ embedding (Takens, 1981) yet with nonuniform delays between selected points (Vlachos and Kugiumtzis, 2010; Faes et al., 2011).

Figure 2: Example of a resulting nonuniform embedding of the time series relevant to the target’s present state. The bottom row represents the time series of the target process, with the present state highlighted in green and the selected target past set highlighted in red (as a subset of the candidate target past set shown in light red). The remaining rows represent the time series of the source processes, with the selected sources past set highlighted in blue (as a subset of the candidate sources past set shown in light blue). The embedding only specifies the relative lags between the variables. For simplicity, only a single trial of the experiment is shown.
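To make the structure of the four steps concrete, the following is a minimal per-target skeleton in Python. It is only an illustration of the control flow described above, not the IDTxl implementation: the helper functions passed as arguments (select_past, select_sources, prune, omnibus_pvalue) are hypothetical stand-ins for the routines detailed in the next subsections.

```python
def infer_single_target(target, candidate_target_past, candidate_sources_past,
                        select_past, select_sources, prune, omnibus_pvalue,
                        alpha=0.05):
    """Sketch of the four inference steps for one target process."""
    # Step 1: nonuniform embedding of the target's own past (selected target past).
    target_past = select_past(target, candidate_target_past)
    # Step 2: greedy selection of source past variables, conditioned on the
    # target past and on the source variables selected so far.
    sources_past = select_sources(target, target_past, candidate_sources_past)
    # Step 3: prune variables that became redundant after later additions.
    sources_past = prune(target, target_past, sources_past)
    # Step 4: omnibus test on the collective transfer entropy of the final set;
    # the whole set is rejected if the test fails.
    p_omnibus = omnibus_pvalue(target, target_past, sources_past)
    if p_omnibus >= alpha:
        sources_past = []
    return sources_past, p_omnibus
```

Because each target is analysed independently, a loop over targets (or a parallel map) yields the full directed network; the omnibus p-values collected here feed the network-level correction discussed in the Statistical tests section.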

II.2.1 Step 1: Select variables in the candidate target past set

The goal of the first step is to find the subset of RVs within the candidate target past set that maximally reduces the uncertainty about the present state of the target while meeting statistical significance requirements. The selected target past set is the set found via optimisation under these criteria.

Finding the globally optimal embedding is an NP-hard problem and requires testing all the subsets of the candidate target past set. Since the number of subsets grows exponentially with the size of the candidate set, this is computationally unfeasible; therefore, a greedy approximation algorithm is employed to find a locally optimal solution in the search space of possible embeddings. This approach tackles the challenge of computational complexity by aiming at identifying a minimal conditioning set; in doing so, it also tackles the curse of dimensionality in the estimation of information-theoretic functionals.

The selected target past set is initialised as an empty set and is iteratively built up via the following algorithm:

  a. For each candidate variable in the candidate target past set, estimate its CMI contribution, i.e., the CMI with the present state of the target, conditional on the variables already selected;

  b. Find the candidate which maximises the CMI contribution (reduction of uncertainty) and perform a statistical significance test against the null hypothesis of conditional independence, i.e., that the new variable does not further reduce the uncertainty in the context of the previously included variables. If significant, add the candidate to the selected target past set and remove it from the candidate set. The maximum statistic is employed to control the family-wise error rate (explained in detail in the Statistical tests section);

  c. Repeat the previous steps until the maximum CMI contribution is not significant or the candidate set is empty.

From a distributed, intrinsic computation perspective, the goal can be interpreted as finding the embedding of the target’s past states that maximises the active information storage, defined as the mutual information between the past and the present of the target (Lizier et al., 2012), to ensure self-prediction optimality as suggested by Wibral et al. (2013), and similar to Garland et al. (2016), but with a nonuniform embedding and additional statistical controls.

The nonuniform embedding of the time series was introduced by Vlachos and Kugiumtzis (2010) and Faes et al. (2011), who used an arbitrary threshold for the conditional mutual information. Lizier and Rubinov (2012) introduced a statistical significance test to select the candidates, which this study builds upon in proposing the maximum statistic. In addition, they embedded the target time series before embedding the sources, i.e., the active information storage is modelled first and the information transfer is then examined in that context, thereby taking a specific modelling perspective on the information processing carried out by the system.
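As a concrete (simplified) illustration of this greedy loop, the sketch below selects past variables of the target using a Gaussian CMI estimator and a surrogate-based maximum statistic test. It is a minimal example under several assumptions: the data are treated as jointly Gaussian, surrogates are built by randomly permuting the candidate in time (a cruder scheme than the replication or block shuffling described in the Statistical tests section), and all names are illustrative rather than taken from IDTxl.

```python
import numpy as np

def gaussian_cmi(x, y, z=None):
    """I(X;Y|Z) for jointly Gaussian variables, from covariance determinants.
    x, y: 1-D arrays of samples; z: 2-D array (n_vars x n_samples) or None.
    Additive constants cancel in the sum and are omitted."""
    def half_logdet(rows):
        cov = np.atleast_2d(np.cov(np.vstack(rows)))
        return 0.5 * np.log(np.linalg.det(cov))
    if z is None:
        return half_logdet([x]) + half_logdet([y]) - half_logdet([x, y])
    return (half_logdet([x, z]) + half_logdet([y, z])
            - half_logdet([x, y, z]) - half_logdet([z]))

def select_target_past(candidates, present, alpha=0.05, n_perm=200, seed=0):
    """Greedy step-1 selection. `candidates` maps a label (e.g. a lag) to a 1-D
    array of lagged target values aligned with `present`."""
    rng = np.random.default_rng(seed)
    remaining, selected = dict(candidates), {}
    while remaining:
        z = np.vstack(list(selected.values())) if selected else None
        # (a) CMI contribution of each candidate, conditioned on the selected set.
        contrib = {k: gaussian_cmi(v, present, z) for k, v in remaining.items()}
        best = max(contrib, key=contrib.get)
        # (b) null distribution of the maximum surrogate CMI over all candidates.
        null = np.array([max(gaussian_cmi(rng.permutation(v), present, z)
                             for v in remaining.values())
                         for _ in range(n_perm)])
        if np.mean(null >= contrib[best]) >= alpha:
            break  # (c) maximum contribution not significant: stop
        selected[best] = remaining.pop(best)
    return selected
```

The same loop structure is reused in step 2, with the conditioning set extended by the already selected target past and the CMI contribution becoming a conditional transfer entropy.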

II.2.2 Step 2: Select variables in the candidate sources past set

The goal of the second step is to find the subset of RVs within the candidate sources past set that maximally further reduces the uncertainty about the present state of the target, in the context of the selected target past found in the first step. The selected sources past set is the set found via optimisation under these criteria.

As for step 1, a greedy approximation algorithm is employed and the statistical significance is tested throughout the selection process. The selected sources past set is initialised as an empty set and is iteratively built up via the following algorithm:

  a. For each candidate variable in the candidate sources past set, estimate its conditional transfer entropy contribution (Verdes, 2005; Lizier et al., 2008, 2010; Vakorin et al., 2009), i.e., the CMI with the present state of the target, conditional on the selected target past and on the source variables already selected. When the set of selected sources is empty, this is simply a pairwise or bivariate transfer entropy (Schreiber, 2000); using the conditional form while building the set prevents candidates carrying only redundant information (due to, e.g., common driver or pathway effects) from being selected, and also captures synergistic interactions with the previously selected variables;

  b. Find the candidate which maximises the conditional transfer entropy contribution (reduction of uncertainty) and perform a statistical significance test against the null hypothesis of conditional independence: if significant, add the candidate to the selected sources past set and remove it from the candidate set. The maximum statistic is employed to control the family-wise error rate;

  c. Repeat the previous steps until the maximum conditional transfer entropy contribution is not significant or the candidate set is empty.

From a distributed computation perspective, the goal can be interpreted as finding the nonuniform embedding of the source processes’ past that maximises the collective transfer entropy to the target (Lizier et al., 2010). As above, the rationale for embedding the past of the sources as a second step is to achieve optimal separation of the storage and transfer contributions (Lizier and Rubinov, 2012).

II.2.3 Step 3: Prune the selected sources past variables

The third step of the algorithm is a pruning procedure performed to ensure that the variables included in the early iterations of the second step still provide a statistically significant information contribution in the context of the final selected sources past set. The pruning step involves the following operations:

  a. For each selected source variable, estimate its CMI contribution, i.e., the CMI with the present state of the target, conditional on the selected target past and on all the other selected source variables;

  b. Find the variable which minimises the CMI contribution and perform a statistical significance test: if not significant, remove the variable from the selected sources past set. The minimum statistic is employed to test for significance against the null hypothesis of conditional independence while controlling the family-wise error rate;

  c. Repeat the previous steps until the minimum CMI contribution is significant or the selected sources past set is empty.

The pruning step was introduced by Lizier and Rubinov (2012); remarkably, Sun et al. (2015) proved that this step is essential for the theoretical convergence of the inferred network to the causal network in the Granger-Wiener framework; they also rigorously laid out the mathematical assumptions needed for such convergence (see Validation tasks section).
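A minimal sketch of this pruning loop is given below. The two callables are hypothetical stand-ins: `cmi_given(name, others)` would return the CMI contribution of a selected variable given the target past and the other remaining variables, and `min_statistic_test(value, n_candidates)` would implement the minimum statistic test described in the Statistical tests section.

```python
def prune_selected(selected, cmi_given, min_statistic_test):
    """Step-3 sketch: iteratively remove selected source variables that are no
    longer significant in the context of the remaining selected set."""
    selected = list(selected)
    while selected:
        # (a) contribution of each variable given all the other remaining variables
        contrib = {name: cmi_given(name, [s for s in selected if s != name])
                   for name in selected}
        # (b) test the weakest contribution with the minimum statistic
        weakest = min(contrib, key=contrib.get)
        if min_statistic_test(contrib[weakest], len(selected)):
            break                       # weakest variable still significant: stop pruning
        selected.remove(weakest)        # (c) otherwise remove it and re-evaluate
    return selected
```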

II.2.4 Step 4: Test relevant variables collectively for statistical significance

The fourth and final step of the algorithm is the computation of the collective transfer entropy from the selected sources past set to the target, and the performance of an omnibus test to ensure statistical significance against the null hypothesis of conditional independence. The resulting omnibus p-value can further be used for correction of the family-wise error rate if the inference is carried out for multiple targets. The selected sources past set is only accepted as a result if all the statistical tests are passed. Importantly, the selected sources past set, inferred in the context of the selected target past, is the final result of the algorithm for a given target process; the order in which the variables were inferred is not relevant.

The statistical tests play a fundamental role in the inference and provide the stopping conditions for the iterations involved in the first and second steps of the algorithm. These stopping conditions are adaptive and change according to the amount of data available (the length of the time series). Given their importance, the statistical tests are described in detail in the following section.

II.3 Statistical tests

The crucial steps in the inference algorithm rely on determining whether the CMI is positive. However, due to the finite sample size, the CMI estimators may produce non-zero estimates in the case of zero CMI, and they may even return negative estimates if the estimator bias is larger than the true CMI (Roulston, 1999; Kraskov et al., 2004). For this reason, statistical tests are required to assess the significance of the CMI estimates against the null hypothesis of no CMI (i.e., conditional independence) (Chávez et al., 2003; Vicente et al., 2011; Lindner et al., 2011; Lizier et al., 2011).

For certain estimators, analytic solutions exist for the finite-sample distribution under this null hypothesis (see Lizier (2014)); in the absence of an analytic solution, the null distributions are computed in a nonparametric way by using surrogate time series (Schreiber and Schmitz, 2000). The surrogates are generated to satisfy the null hypothesis by destroying the temporal relationship between the source and the target while preserving the temporal dependencies within the sources.

Finally, the inference algorithm is based on multiple comparisons and requires an appropriate calibration of the statistical tests to achieve the desired family-wise error rate (i.e., the probability of making one or more false discoveries, or type I errors, when performing multiple hypotheses tests). The maximum statistic and minimum statistic tests employed in this study were specifically conceived to tackle these challenges.

II.3.1 Maximum statistic test

The maximum statistic test is a step-down statistical test (i.e., a test which proceeds from the smallest to the largest p-value; when the first non-significant p-value is found, all the larger p-values are also deemed not significant) used to control the family-wise error rate when selecting the past variables for the target and sources embeddings, which involves multiple comparisons.

Let us first consider the first step of the main algorithm and assume that we have picked the candidate variable (from the candidate target past set) which maximises the CMI contribution. The maximum statistic test described below mirrors this selection process by picking the maximum value among the surrogates. Specifically, let the maximum CMI contribution be the test statistic (i.e., the maximum statistic); the following algorithm is used to test it for statistical significance:

  a. For each candidate variable, generate a surrogate time series and compute the corresponding surrogate CMI value. If multiple replications (e.g., trials) are available, surrogate data is generated by shuffling the replications of the candidate while keeping the order of replications for the remaining variables intact, thereby preserving the temporal order of samples within a replication but destroying any potential relationship between the candidate and the target. When the number of replications is not sufficient to guarantee enough permutations, the samples are shuffled in time, e.g., via block shuffling;

  b. Find and store the maximum surrogate CMI value across the candidates (i.e., the surrogate maximum statistic);

  c. Repeat steps a and b a sufficient number of times (according to the desired critical level for statistical significance) in order to obtain an empirical distribution of surrogate maximum statistic values, which will be used as a null distribution;

  d. Calculate the p-value for the observed maximum statistic as the fraction of surrogate maximum statistic values that are larger than it;

  e. The maximum CMI contribution is deemed significant if the p-value is smaller than the critical level (i.e., the null hypothesis of conditional independence for the candidate variable with the maximum CMI contribution is rejected at that level).

Note that steps a-c can be omitted when the null distributions can be derived analytically, e.g., with Gaussian estimators (Barnett and Bossomaier, 2012). The same test is performed during the selection of the variables in the candidate sources past set (step 2 of the main algorithm, with obvious replacements of the symbols).
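The sketch below illustrates steps a-d under the replication-shuffling scheme: whole trials of the candidate variable are permuted, preserving the temporal order within each trial. The estimator callable `cmi_of_surrogate` is a hypothetical stand-in for the CMI estimate of a surrogate candidate with the target's present state, given the current conditioning set; the array layout (replications x samples) is an assumption made for the example.

```python
import numpy as np

def shuffle_replications(candidate, rng):
    """Surrogate for a candidate of shape (n_replications, n_samples): permute
    whole replications, keeping the samples within each replication in order."""
    return candidate[rng.permutation(candidate.shape[0])]

def max_statistic_pvalue(observed_max, candidates, cmi_of_surrogate,
                         n_perm=200, seed=0):
    """Build the null distribution of the maximum surrogate CMI over all
    candidates (steps a-c) and return the p-value of the observed maximum (step d)."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for s in range(n_perm):
        null[s] = max(cmi_of_surrogate(shuffle_replications(c, rng))
                      for c in candidates)
    return float(np.mean(null >= observed_max))
```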

II.3.2 Family-wise error rate correction

How does the maximum statistic test control the family-wise error rate? Intuitively, one or more statistics will exceed a given threshold if and only if the maximum exceeds it. This relationship can be used to obtain an adjusted threshold from the distribution of the maximum statistic under the null hypothesis, which can be used to control the family-wise error rate both in the weak and strong sense (Nichols and Hayasaka, 2003).

Let us quantify the false positive rate for a single variable when the maximum statistic test at the significance level $\alpha$ is employed. For simplicity, the derivation is performed under the hypothesis that the true information contributors to the target have been selected in the first iterations of the greedy algorithm and removed from the candidate sources past set. Under this hypothesis, the target is conditionally independent of the $S$ remaining candidate variables, given the selected source and target variables. Let $\hat{I}_1, \ldots, \hat{I}_S$ be the corresponding CMI estimates and let $\hat{I}_{\max} = \max_i \hat{I}_i$ be the maximum statistic. As discussed above, the estimates might be positive even under the conditional independence hypothesis, due to finite-sample effects. Since the estimates are independently obtained by shuffling the time series, they are treated as i.i.d. RVs.

Let $\gamma_c$ be the critical threshold corresponding to the given significance level $\alpha$, i.e., $P(\hat{I}_{\max} > \gamma_c) = \alpha$. Then

$1 - \alpha = P(\hat{I}_{\max} \le \gamma_c) = \prod_{i=1}^{S} P(\hat{I}_i \le \gamma_c) = (1 - \alpha_1)^{S}$,   (1)

where $\alpha_1 = P(\hat{I}_i > \gamma_c)$ is the false positive rate for a single variable. Therefore,

$\alpha_1 = 1 - (1 - \alpha)^{1/S}$.   (2)

Interestingly, equation (2) shows that the maximum statistic correction is equivalent to the Dunn-Šidák correction (Šidák, 1967). Performing a Taylor expansion of (2) around $\alpha = 0$ and truncating it to the first order yields:

$\alpha_1 \approx \alpha / S$,   (3)

which coincides with the false positive rate resulting from the Bonferroni correction (Dickhaus, 2014). The details are presented in the Supplementary materials, where it is also shown that the maximum statistic correction is marginally less stringent than the Bonferroni correction.
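As a quick numerical illustration of equations (2) and (3), with illustrative values of $\alpha$ and $S$:

```python
# Per-variable level implied by the maximum statistic test (Dunn-Sidak form)
# versus the Bonferroni level alpha / S, for a few candidate-set sizes S.
alpha = 0.05
for S in (5, 50, 500):
    sidak = 1 - (1 - alpha) ** (1 / S)   # equation (2)
    bonferroni = alpha / S               # equation (3), first-order approximation
    print(f"S={S:4d}  max-statistic (Sidak): {sidak:.6f}  Bonferroni: {bonferroni:.6f}")
# The max-statistic value is always slightly larger, i.e. marginally less stringent.
```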

Let us now study the effect of the maximum statistic test on the family-wise error rate for a single target, $\alpha_T$, while accounting for all the iterations performed during the step-down test (i.e., $\alpha_T$ is the probability that at least one of the selected sources is a false positive). Once the true contributors have been selected, reaching the $k$-th further iteration requires $k$ consecutive spurious rejections, each occurring with probability $\alpha$; hence

$\alpha_T = P\big(\bigcup_{k \ge 1} \{\text{false positive at iteration } k\}\big) \le \sum_{k=1}^{\infty} \alpha^k = \alpha / (1 - \alpha)$.   (4)

Therefore,

$\alpha_T \le \alpha / (1 - \alpha) \approx \alpha$   (5)

for the typical small values of $\alpha$ used in statistical testing (even in the limit of a large number of candidate variables), which shows that $\alpha$ effectively controls the family-wise error rate for a single target.

II.3.3 Minimum statistic test

The minimum statistic test is employed during the third main step of the algorithm (pruning step) to remove the selected variables that have become redundant in the context of the final set of selected sources past variables, while controlling the family-wise error rate. This is necessary because of the multiple comparisons involved in the pruning procedure. The minimum statistic test works identically to the maximum statistic test (replacing “maximum” with “minimum” in the algorithm presented above).

II.3.4 Omnibus test

The collective transfer entropy from all the selected sources past variables to the target is tested for statistical significance against the null hypothesis of zero transfer entropy (this test is referred to as the omnibus test). The null distribution is built using surrogate time series obtained via shuffling, i.e., using a similar procedure to the one described in the Maximum statistic test section above. Testing all the selected sources collectively is in line with the perspective that the goal of the network inference is to find the set of relevant sources for each node.

II.3.5 Combining across multiple targets

When the inference is performed on multiple targets, the omnibus p-values can be employed in further statistical tests to control the family-wise error rate for the overall network (e.g., via FDR-correction (Benjamini and Hochberg, 1995; Dickhaus, 2014), which is implemented in the IDTxl toolbox).

It is important to fully understand the statistical questions and validation procedure implied by this approach. Combining the results across multiple targets by reusing the omnibus test p-values for the FDR correction yields a hierarchical test. The test answers two nested questions: (1) which nodes receive any significant overall information transfer? and, if any, (2) what is the structure of the incoming information transfer to each node? However, the answers are computed in the reverse order, for the following reason: it would be computationally unfeasible to directly compute the collective transfer entropy from all candidate sources to the target right at the beginning of the network inference process. At this point, the candidate source set usually contains a large number of variables, so that the estimation would likely fall prey to the curse of dimensionality. Instead, a conservative approximation of the collective information transfer is obtained by considering only a subset of the potential sources, i.e., those deemed significant by the maximum and minimum statistic tests described in the previous sections. Only if this approximation of the total information transfer is also deemed significant by the omnibus test (as well as by the FDR test at the network level) is the subset of significant sources for that target interpreted post hoc as the local structure of the incoming information transfer. This way, the testing procedure exhibits a hierarchical structure: the omnibus test operates at the higher (global) level concerned with the collective information transfer, whereas the minimum and maximum statistic tests operate at the lower (local) level of individual source-target connections.

Compared to a non-hierarchical analysis with a correction for multiple comparisons across all links (e.g., by network-wide Bonferroni correction or the use of the maximum statistic across all potential links), the above strategy buys both statistical sensitivity (’recall’) and the possibility to trivially parallelise computations across targets. The price to be paid is that a link with a relatively strong information transfer into a node with non-significant overall incoming information transfer may get pruned, while a link with relatively weaker information transfer into a node with significant overall incoming information transfer will prevail. This behaviour clearly differs from a correction for multiple comparisons across all links. Arguably, this difference is irrelevant in many practical cases, although it could become noticeable for networks with high average in-degree and relatively uniform information transfer across the links. The difference can be reduced by setting a conservative critical threshold for the lower-level greedy analysis.
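For illustration, a minimal Benjamini-Hochberg step applied to the omnibus p-values collected across targets might look as follows; this is a sketch of the standard procedure, not the IDTxl implementation, and the p-values in the example are made up.

```python
import numpy as np

def benjamini_hochberg(pvalues, q=0.05):
    """Benjamini-Hochberg FDR procedure: return a boolean mask of rejected nulls.
    Applied here across the omnibus p-values of the analysed targets."""
    p = np.asarray(pvalues, dtype=float)
    order = np.argsort(p)
    m = p.size
    thresholds = q * (np.arange(1, m + 1) / m)       # BH step-up thresholds
    passed = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()              # largest index satisfying the criterion
        rejected[order[:k + 1]] = True
    return rejected

# Example: hypothetical omnibus p-values, one per target node.
omnibus_p = [0.001, 0.20, 0.004, 0.03, 0.60]
print(benjamini_hochberg(omnibus_p, q=0.05))  # targets whose incoming source set is kept
```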

II.4 Validation tasks

For the purpose of the validation study, the additional assumptions of causal sufficiency (the set of observed variables includes all their common causes, or the unobserved common causes have constant values) and the causal Markov condition (a variable is independent of every other past variable conditional on all of its direct causes) were made, such that the inferred network was expected to closely reflect the structural network for a sufficiently large sample size (Sun et al., 2015). Although this is not always the case, experiments under these conditions allow the evaluation of the performance of the algorithm with respect to an expected ground truth. An intuitive definition of these conditions is provided here, while the technical details are discussed at length in Spirtes et al. (1993). Moreover, the intrinsic stochastic nature of the processes makes purely synergistic and purely redundant interactions unlikely (and indeed vanishing for large sample size), thus satisfying the faithfulness condition (Spirtes et al., 1993).

The complete network inference algorithm implemented in the IDTxl toolkit (release v0.4.0) was validated on multiple synthetic datasets, where both the structural connectivity and the dynamics were known. Given the general scope of the toolkit, two dynamical models of broad applicability were chosen: a vector autoregressive process (VAR) and a coupled logistic maps process (CLM); both models are widely used in computational neuroscience (Zalesky et al., 2014; Rubinov et al., 2009; Valdes-Sosa et al., 2011), macroeconomics (Sims, 1980; Lorenz, 1993), and chaos theory (Strogatz, 2015).

The primary goal was to quantify the scaling of the performance with respect to the size of the network and the length of the time series. Sparse directed random Erdős-Rényi networks (Erdős and Rényi, 1959) of increasing size (up to 100 nodes) were generated with a link probability chosen to obtain an expected in-degree of 3 links. Both the VAR and the CLM stochastic processes were repeatedly simulated on each causal network with increasingly longer time series (up to 10000 samples), a single replication (or trial), and random initial conditions. The performance was evaluated in terms of precision, recall, and specificity in the classification of the links. Further simulations were carried out to investigate the influence of the critical alpha level for statistical significance and the performance of different estimators of conditional mutual information.
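A sparse directed Erdős-Rényi network with a given expected in-degree can be generated as in the sketch below, assuming independent links and no self-loops, so that the link probability is the expected in-degree divided by the number of possible sources.

```python
import numpy as np

def random_directed_er(n_nodes, expected_in_degree=3.0, seed=0):
    """Directed Erdos-Renyi adjacency matrix: adj[i, j] = True means i -> j.
    Each possible link (excluding self-loops) is drawn independently with
    probability p = expected_in_degree / (n_nodes - 1)."""
    rng = np.random.default_rng(seed)
    p = expected_in_degree / (n_nodes - 1)
    adj = rng.random((n_nodes, n_nodes)) < p
    np.fill_diagonal(adj, False)
    return adj

adj = random_directed_er(100, expected_in_degree=3.0)
print(adj.sum(axis=0).mean())   # empirical mean in-degree, close to 3
```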

II.4.1 Vector autoregressive process

The specific VAR process used in this study is described by the following discrete-time recurrence relation:

$X_{j,t} = a\, X_{j,t-1} + \sum_{i \in \mathcal{S}_j} b_{ij}\, X_{i,\,t-\ell_{ij}} + \varepsilon_{j,t}$,   (6)

where $\mathcal{S}_j$ denotes the set of causal sources of the target process $X_j$ and a single random lag $\ell_{ij}$ was used for each source. A Gaussian noise term $\varepsilon_{j,t}$ (with fixed mean and standard deviation) was added at each time step; the noise terms added to different variables were uncorrelated. The self-coupling coefficient $a$ was fixed and the cross-coupling coefficients $b_{ij}$ were sampled at random from the unit interval and normalised. This choice of parameters guaranteed that the VAR processes were stable (the resulting spectral radii were smaller than one) and had stationary multivariate Gaussian distributions (Atay and Karabacak, 2006). As such, the Gaussian estimator implemented in IDTxl was employed for transfer entropy measurements in VAR processes. Note that transfer entropy and Granger causality (Granger, 1969) are equivalent for Gaussian variables (Barnett et al., 2009); therefore, using the Gaussian estimator with our algorithm can be viewed as extending Granger causality in the same multivariate/greedy fashion.
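The following is a minimal simulation of a process of the form of Eq. (6) on a given directed adjacency matrix. The self-coupling value, the normalisation of the cross-coupling weights, and the noise level are illustrative choices made here to keep the process stable; the exact parameter values used in the paper are not reproduced.

```python
import numpy as np

def simulate_var(adj, n_samples, self_coupling=0.5, max_lag=5,
                 noise_std=0.1, seed=0):
    """Simulate a VAR process of the form of Eq. (6).
    adj[i, j] = True means a directed link i -> j; one random lag per link."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    lags = rng.integers(1, max_lag + 1, size=(n, n))     # single random lag per link
    b = rng.random((n, n)) * adj                          # cross-coupling weights
    col_sums = b.sum(axis=0)
    nonzero = col_sums > 0
    # Normalise incoming weights so the total coefficient magnitude per equation
    # stays below one (0.5 + 0.4 = 0.9), a sufficient condition for stability.
    b[:, nonzero] *= 0.4 / col_sums[nonzero]
    x = np.zeros((n_samples + max_lag, n))
    x[:max_lag] = rng.normal(size=(max_lag, n))           # random initial conditions
    for t in range(max_lag, n_samples + max_lag):
        for j in range(n):
            drive = self_coupling * x[t - 1, j]
            for i in np.flatnonzero(adj[:, j]):
                drive += b[i, j] * x[t - lags[i, j], i]
            x[t, j] = drive + rng.normal(scale=noise_std)
    return x[max_lag:]
```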

II.4.2 Coupled logistic maps process

The coupled logistic maps process used in this study is described by the following discrete-time recurrence relations:

$u_{j,t} = a\, X_{j,t-1} + \sum_{i \in \mathcal{S}_j} b_{ij}\, X_{i,\,t-\ell_{ij}}$,
$X_{j,t} = \big(4\, u_{j,t}\,(1 - u_{j,t}) + \varepsilon_{j,t}\big) \bmod 1$.   (7)

At each time step $t$, each node computes the weighted input $u_{j,t}$ as a linear combination of its past value and the past of its sources, with the same conditions used for the VAR process on the choice of the random lags $\ell_{ij}$ and coupling coefficients $a$ and $b_{ij}$. The value $X_{j,t}$ is then computed by applying the logistic map activation function $f(u) = 4u(1-u)$ to the weighted input and adding Gaussian noise $\varepsilon_{j,t}$ with the same properties used for the VAR process. Notice that the coefficient $r = 4$ used in the logistic map function corresponds to the fully-developed chaotic regime. The modulo-1 operation ensures that $X_{j,t}$ remains in the unit interval after the addition of noise. The nearest-neighbour estimators were employed for transfer entropy measurements in the analysis of the CLM processes (in particular, Kraskov’s estimator (Kraskov et al., 2004) and its extensions (Frenzel and Pompe, 2007; Vejmelka and Paluš, 2008; Gómez-Herrero et al., 2015)). Nearest-neighbour estimators are model-free and are able to detect nonlinear dependencies in stochastic processes with non-Gaussian stationary distributions; fast CPU and GPU implementations are provided by the IDTxl package.
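A minimal simulation of Eq. (7), mirroring the VAR sketch above, is shown below. Here the self- and cross-coupling weights are normalised to sum to one so that the weighted input stays in the unit interval before the logistic map is applied; as before, the specific parameter values are illustrative rather than those used in the paper.

```python
import numpy as np

def simulate_clm(adj, n_samples, self_coupling=0.5, max_lag=5,
                 noise_std=0.01, seed=0):
    """Simulate coupled logistic maps of the form of Eq. (7).
    adj[i, j] = True means a directed link i -> j; one random lag per link."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    lags = rng.integers(1, max_lag + 1, size=(n, n))
    b = rng.random((n, n)) * adj
    col_sums = b.sum(axis=0)
    nonzero = col_sums > 0
    # Self- and cross-coupling weights sum to one, keeping the weighted input in [0, 1].
    b[:, nonzero] *= (1.0 - self_coupling) / col_sums[nonzero]
    x = rng.random((n_samples + max_lag, n))      # states live in the unit interval
    for t in range(max_lag, n_samples + max_lag):
        for j in range(n):
            u = self_coupling * x[t - 1, j]
            for i in np.flatnonzero(adj[:, j]):
                u += b[i, j] * x[t - lags[i, j], i]
            # Fully chaotic logistic map (r = 4), plus noise, wrapped back into [0, 1).
            x[t, j] = (4.0 * u * (1.0 - u) + rng.normal(scale=noise_std)) % 1.0
    return x[max_lag:]
```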

III Results

III.1 Influence of network size and length of the time series

The aim of the first analysis was to quantify the scaling of the performance with respect to the size of the network and the length of the time series.

The inferred network was built by adding a directed link from a source node to a target node whenever a significant transfer entropy from the source to the target was measured while building the selected sources past set (i.e., whenever at least one past variable of that source was selected). The critical alpha level for statistical significance and the number of surrogates were kept fixed for all experiments unless otherwise stated. The candidate sets for the target as well as the sources were initialised with a maximum lag of five (corresponding to the largest lag values used in the definition of the VAR and CLM processes).

The network inference performance was evaluated in comparison to the known underlying structural network as a binary classification task, using standard statistics based on the number of true positives (TP, i.e., correctly classified existing links), false positives (FP, i.e., absent links falsely classified as existing), true negatives (TN, i.e., correctly classified absent links), and false negatives (FN, i.e., existing links falsely classified as absent). The following standard statistics were employed in the evaluation:

precision = TP / (TP + FP)

recall = TP / (TP + FN)

specificity = TN / (TN + FP)
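A small helper for computing these statistics from boolean adjacency matrices (source x target) might look as follows; excluding the diagonal (self-links) from the counts is an assumption made here for the example.

```python
import numpy as np

def link_classification_metrics(inferred, true):
    """Precision, recall, and specificity of an inferred directed network
    against the ground-truth adjacency matrix (both boolean, source x target)."""
    off_diagonal = ~np.eye(true.shape[0], dtype=bool)   # ignore self-links
    tp = np.sum(inferred & true & off_diagonal)
    fp = np.sum(inferred & ~true & off_diagonal)
    tn = np.sum(~inferred & ~true & off_diagonal)
    fn = np.sum(~inferred & true & off_diagonal)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    specificity = tn / (tn + fp) if tn + fp else 1.0
    return precision, recall, specificity
```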

The plots in Figure 3 summarise the results in terms of precision and recall, while the specificity is additionally plotted in the Supplementary materials. For both types of dynamics, the performance increased with the number of samples and decreased with the size of the network.

For shorter time series, the recall was the most affected performance measure as a function of the network size and sample size, while the precision and the specificity were always close to optimal on average. For longer time series (10000 samples), high performance according to all measures was achieved for both the VAR and CLM processes, regardless of the size of the network. The high precision and specificity are due to the effective control of the false positives, in accordance with the strict statistical significance level (the influence of this level is further discussed in the following sections). The inference algorithm was therefore conservative in the classification of the links.

Figure 3: Precision (top) and recall (bottom) for different network sizes, sample sizes, and dynamics. Left: Vector autoregressive process; Right: Coupled logistic maps. Each subplot shows three curves, corresponding to different time series lengths. The results for simulations from different initial conditions are shown (low-opacity circles) in addition to the mean values (solid circles). All the random networks have an average in-degree of 3.

III.2 Validation of false positive rate

The critical alpha level for statistical significance is a parameter of the algorithm which is designed to control the number of false positives in the network inference. As discussed in the Statistical tests section in the Methods, this level controls the probability that a target is a false positive, i.e., that at least one of its sources is a false positive. This approach is in line with the perspective that the goal of the network inference is to find the set of relevant sources for each node.

A validation study was carried out to verify that the final number of false positives is consistent with the desired level after multiple statistical tests are performed. The false positive rate for a single target was computed after performing the inference on empty networks, where every inferred link is a false positive by definition (i.e., under the complete null hypothesis). The rate was in good accordance with the critical alpha threshold for all network sizes, as shown in Figure 4.

Figure 4: Validation of the false positive rate for a single target on empty networks. The error bars represent the standard deviation over repeated simulations of a vector autoregressive process. The missing points correspond to perfect inference over all the repetitions (i.e., zero false positive rate), achieved with a strict statistical significance threshold. The identity function is plotted as a reference (dashed line).

III.3 Influence of critical level for statistical significance

Given the conservative results obtained for both the VAR and CLM processes (Figure 3), a natural question is to what extent the recall could be improved by increasing the critical alpha level and to what extent the precision would be negatively affected as a side effect.

In order to elucidate this trade-off, the analysis described above (Figure 3) was repeated for increasing values of the critical alpha level, with results shown in Figure 5. For the shortest time series, increasing the alpha level resulted in a higher recall and a lower precision, as expected; on the other hand, for the longest time series (10000 samples), the performance measures were not significantly affected. Interestingly, for the intermediate time series length, increasing the alpha level resulted in a higher recall without negatively affecting the precision.

Figure 5: Influence of the statistical significance threshold on network inference performance. Precision vs. recall for different statistical significance levels (corresponding to different colours). The plots summarise the results for different dynamics (Top: Vector autoregressive process; Bottom: Coupled logistic maps), different time series lengths, and different network sizes (not distinguished). The arrows join the mean population values for the lowest and highest significance levels, illustrating the average trade-off between precision loss and recall gain.

III.4 Inference of coupling lags

So far, the performance evaluation focused on the identification of the correct set of sources for each target node, regardless of the coupling lags. However, since the identification of the correct coupling lags is particularly relevant in neuroscience (see Wibral et al. (2013) and references therein), the performance of the algorithm in identifying the correct coupling lags was additionally investigated.

By construction, a single coupling lag was imposed between each pair of coupled processes (chosen at random between one and five discrete time steps, as described in the Methods). In line with the previous results, 10000 samples were sufficient to achieve nearly optimal performance, regardless of the size of the network and for both the VAR and the CLM processes (Figure 6).

Figure 6: Average absolute error in the coupling delay inference, when a source process is correctly inferred. Results for different dynamics (Left: Vector autoregressive process; Right: Coupled logistic maps), different time series lengths, and different network sizes. The error bars indicate the standard deviation over simulations from different initial conditions.

III.5 Estimators

Given its speed, the Gaussian estimator is often used for large datasets or as a first exploratory step, even when the stationary distribution cannot be assumed to be Gaussian. The availability of the ground truth made it possible to compare the performance of the Gaussian estimator and the nearest-neighbour estimator on the nonlinear CLM process, which does not satisfy the Gaussian assumption. As expected, the performance of the Gaussian estimator was lower than that of the nearest-neighbour estimator for all network sizes (Figure 7).

Figure 7: Gaussian vs. nearest-neighbour estimator on the coupled logistic maps process. The recall is plotted against the network size for a fixed time series length. The results for simulations from different initial conditions are shown (low-opacity circles) in addition to the mean values (solid circles). A fixed statistical significance level was employed; an even larger gap between the performance of the estimators is obtained with other choices of the significance level.

The hierarchical tests introduced in the Methods section allow running the network inference algorithm in parallel on a high-performance computing cluster. Such parallelisation is especially needed when employing the nearest-neighbour estimator. In particular, each target node can be analysed in parallel on a CPU (employing one or more cores) or a GPU, which is made possible by the CPU and GPU estimators provided by the IDTxl package (a summary of the CPU and GPU run-times is provided in the Supplementary materials).

IV Discussion

The algorithm presented in this paper provides robust statistical tests for network inference to control the false positive rate. These tests are compatible with parallel computation on high-performance computing clusters, which enabled the validation study on synthetic sparse networks of increasing size (up to 100 nodes), using different dynamics (linear autoregressive processes and nonlinear coupled logistic maps) and increasingly longer time series (up to 10000 samples). Both the network size and the sample size are one order of magnitude larger than previously demonstrated, showing feasibility for typical EEG and MEG experiments. The results demonstrate that the statistical tests achieve the desired false positive rate and successfully address the multiple-comparison problems inherent in network inference tasks (Figure 4).

The ability to control the false positives while building connectomes is a crucial prerequisite for the application of complex network measures, to the extent that Zalesky et al. (2016) concluded that “specificity is at least twice as important as sensitivity [i.e., recall] when estimating key properties of brain networks, including topological measures of network clustering, network efficiency and network modularity”. The reason is that false positives occur more prevalently between network modules than within them and the spurious inter-modular connections have a dramatic impact on network topology (Zalesky et al., 2016).

The trade-off between precision and recall when relaxing the statistical significance threshold was further investigated (Figure 5). When only the shortest time series were used, the average recall gain was more than five times smaller than the average precision loss. In our opinion, this result is possibly due to the sparsity of the networks used in this study and suggests a conservative choice of the threshold for sparse networks and short time series. The trade-off was reversed for longer time series: for the intermediate time series length, the average recall gain was more than five times larger than the average precision loss. Finally, for the longest time series (10000 samples), high precision and recall were achieved regardless of the statistical significance threshold, for both the vector autoregressive and the coupled logistic maps processes.

For both types of dynamics, the network inference performance increased with the length of the time series and decreased with the size of the network (Figure 3). This is to be expected since larger systems require more statistical tests and hence stricter conditions to control the family-wise error rate (false positives). Specifically, larger networks result in wider null distributions of the maximum statistic (i.e., larger variance), whereas longer time series have the opposite effect. Therefore, for large networks and short time series, controlling the false positives can have a negative impact on the ability to identify the true positives, particularly when the effect size (i.e., the transfer entropy value) is small.

In addition, the superior ability of the nearest-neighbour estimator over the Gaussian estimator in detecting nonlinear dependencies was quantified. There is a critical motivation for this comparison: the general applicability of the nearest-neighbour estimators comes at the price of higher computational complexity and a significantly longer run-time, so that the Gaussian estimator is often used for large datasets (or at least as a first exploratory step), even when the Gaussian hypothesis is not justified. To investigate this scenario, the Gaussian estimator was tested on the nonlinear logistic map processes: while the resulting recall was significantly lower than that of the nearest-neighbour estimator for all network sizes, it was nonetheless able to identify over half of the links for a sufficiently large number of time samples (Figure 7).

The stationarity assumption about the time series corresponds to assuming a single regime of neuronal activity in real brain recordings. If multiple regimes are recorded, which is typical in experimental settings (e.g., sequences of tasks or repeated presentation of stimuli interleaved with resting time windows), different stationary regimes can be studied by performing the analysis within each time window. The networks obtained in different time windows can either be studied separately and compared against each other or collectively interpreted as a single evolving temporal network. To obtain a sufficient amount of observations per window, multiple replications of the experiment under the same conditions are typically carried out. Replications can be assumed to be cyclo-stationary, and estimation techniques exploiting this property have been proposed (Gómez-Herrero et al., 2015; Wollstadt et al., 2014); these estimators are also available in the IDTxl Python package. The convergence to the (unknown) causal network was only proven under the hypotheses of stationarity, causal sufficiency, and the causal Markov condition (Sun et al., 2015). However, conditional independence holds under milder assumptions (Runge, 2018) and the absence of links is valid under general conditions. The conditional independence relationships can, therefore, be used to exclude variables in subsequent intervention-based causal experiments, making network inference methods valuable for exploratory studies.

In fact, the directed network is only one part of the model and provides the scaffold over which the information-theoretic measures are computed. Therefore, even if the structure of a system is known and there is no need for network inference, information theory can still provide nontrivial insights on the distributed computation by modelling the information storage, transfer, and modification within the system (Lizier, 2013). This decomposition of the predictive information into the active information storage and transfer entropy components is one out of many alternatives within the framework proposed by Chicharro and Ledberg (2012). Arguably, this decomposition reflects the segregation-integration dichotomy that has long characterised the interpretation of brain function (Zeki and Shipp, 1988; Sporns, 2010). Information theory has the potential to provide a quantitative definition of these fundamental but still unsettled concepts. In addition, information theory provides a new way of testing fundamental computational theories in neuroscience, e.g., predictive coding (Brodski-Guerniero et al., 2017).

As such, information-theoretic methods should not be seen as opposed to model-based approaches, but complementary to them (Friston et al., 2013). If certain physically motivated parametric models are assumed, the two approaches are equivalent for network inference: maximising the log-likelihood is asymptotically equivalent to maximising the transfer entropy (Barnett and Bossomaier, 2012; Cliff et al., 2018). Moreover, different approaches can be combined, e.g., the recent large-scale application of spectral DCM was made possible by using functional connectivity models to place prior constraints on the parameter space (Razi et al., 2017). Networks inferred using bivariate transfer entropy have also been employed to reduce the model space prior to DCM analysis (Chan et al., 2017).

In conclusion, the continuous evolution and combination of methods show that network inference from time series is an active field of research, with a current trend towards larger validation studies, statistical significance improvements, and reduction of computational complexity. Information-theoretic approaches require efficient tools to employ nearest-neighbour estimators on large datasets of continuous-valued time series, which are ubiquitous in large-scale brain recordings (calcium imaging, EEG, MEG, fMRI). The algorithm presented in this paper is compatible with parallel computation on high-performance computing clusters, which enabled the study of synthetic nonlinear systems of 100 nodes and 10000 samples. Both the network size and the sample size are one order of magnitude larger than previously demonstrated, bringing typical EEG and MEG experiments into scope for future information-theoretic network inference studies.

V Acknowledgements

This research was supported by: Universities Australia/German Academic Exchange Service (DAAD) Australia-Germany Joint Research Cooperation Scheme grant: “Measuring neural information synthesis and its impairment”, Grant/Award Number: PPP Australia Projekt-ID 57216857; and Australian Research Council DECRA grant DE160100630. The authors acknowledge the Sydney Informatics Hub and the University of Sydney’s high-performance computing cluster Artemis for providing the high-performance computing resources that have contributed to the research results reported within this paper. Furthermore, the authors thank Aaron J. Gutknecht for commenting on a draft of this paper.

VI Author Contributions

Leonardo Novelli: Conceptualization; Data curation; Formal analysis; Investigation; Software; Validation; Visualization; Writing — original draft. Patricia Wollstadt: Conceptualization; Software; Writing — review & editing. Pedro Mediano: Software; Writing — review & editing. Michael Wibral: Conceptualization; Funding acquisition; Methodology; Software; Supervision; Writing — review & editing. Joseph T. Lizier: Conceptualization; Funding acquisition; Methodology; Software; Supervision; Writing — review & editing.

References

VII Supplementary materials

VII.1 Relationship with Bonferroni correction

Performing a Taylor expansion of (2) around $\alpha = 0$ yields:

$\alpha_1 = 1 - (1 - \alpha)^{1/S} = \frac{\alpha}{S} + \frac{S-1}{2S^2}\,\alpha^2 + \frac{(S-1)(2S-1)}{6S^3}\,\alpha^3 + \ldots$   (8)

Truncating the Taylor series at the first order yields the approximation

$\alpha_1 \approx \frac{\alpha}{S}$,   (9)

which coincides with the false positive rate resulting from the Bonferroni correction (Dickhaus, 2014).

Moreover, since the summands in (8) are positive at every order, the Taylor series is lower-bounded by any truncated series. In particular, the false positive rate resulting from the Bonferroni correction is a lower bound for $\alpha_1$ (the false positive rate for a single variable resulting from the maximum statistic test), i.e., the maximum statistic correction is less stringent than the Bonferroni correction.

VII.2 Other plots

Figure 8 replots the precision and recall from Figure 3 using different subplots for each sample size, and additionally shows the specificity. Similarly, Figure 9 replots the precision and recall from Figure 5 using different subplots for each sample size.

Figure 8: Precision, recall, and specificity for different network sizes, sample sizes, and dynamics. Left: Vector autoregressive process; Right: Coupled logistic maps. Each row corresponds to a different time series length. The results for simulations from different initial conditions are shown (low-opacity circles) in addition to the mean values (solid circles).
Figure 9: Precision-recall trade-off for different statistical significance levels. The plots show the results for different dynamics (Left: Vector autoregressive process; Right: Coupled logistic maps), different time series lengths (one per row), and different network sizes. The error bars indicate the standard deviation over simulations from different initial conditions.

VII.3 Run-time on CPU and GPU

The algorithm was either run using a single core per target (on different Intel Xeon CPUs with similar clock speeds) or using a whole dedicated GPU per target (NVIDIA V100 SXM2). These computations were performed on the Artemis computing cluster made available by the Sydney Informatics Hub at The University of Sydney. The maximum CPU and GPU run-times for a single target are shown in Table 1, which summarises the results for different time series lengths and different network sizes.

Table 1: Maximum CPU and GPU run-time (in hours) for a single target using the nearest-neighbour estimator and surrogate testing. The columns report the sample size, the network size, the maximum GPU time per node, and the maximum CPU time per node, for the different time series lengths and network sizes used in the validation study.