Nonlinear Structural Vector Autoregressive Models for Inferring Effective Brain Network Connectivity

10/20/2016 · Yanning Shen, et al. · University of Minnesota

Structural equation models (SEMs) and vector autoregressive models (VARMs) are two broad families of approaches that have been shown useful in effective brain connectivity studies. While VARMs postulate that a given region of interest in the brain is directionally connected to another one by virtue of time-lagged influences, SEMs assert that causal dependencies arise due to contemporaneous effects, and may even be adopted when nodal measurements are not necessarily multivariate time series. To unify these complementary perspectives, linear structural vector autoregressive models (SVARMs) that leverage both contemporaneous and time-lagged nodal data have recently been put forth. Albeit simple and tractable, linear SVARMs are quite limited since they are incapable of modeling nonlinear dependencies between neuronal time series. To this end, the overarching goal of the present paper is to considerably broaden the span of linear SVARMs by capturing nonlinearities through kernels, which have recently emerged as a powerful nonlinear modeling framework in canonical machine learning tasks, e.g., regression, classification, and dimensionality reduction. The merits of kernel-based methods are extended here to the task of learning the effective brain connectivity, and an efficient regularized estimator is put forth to leverage the edge sparsity inherent to real-world complex networks. Judicious kernel choice from a preselected dictionary of kernels is also addressed using a data-driven approach. Extensive numerical tests on ECoG data captured through a study on epileptic seizures demonstrate that it is possible to unveil previously unknown causal links between brain regions of interest.


I Introduction

Several contemporary studies in the neurosciences have converged on the well-accepted view that information processing capabilities of the brain are facilitated by the existence of a complex underlying network; see e.g., [36] for a comprehensive review. The general hope is that understanding the behavior of the brain through the lens of network science will reveal important insights, with an enduring impact on applications in both clinical and cognitive neuroscience.

However, brain networks are not directly observable, and must be inferred from observable or measurable neuronal processes. To this end, functional magnetic resonance imaging (fMRI) has emerged as a powerful tool, capable of revealing varying blood oxygenation patterns modulated by brain activity [34]. Other related brain imaging modalities include positron emission tomography (PET), electroencephalography (EEG), and electrocorticography (ECoG), to name just a few. Most state-of-the-art tools for inference of brain connectivity leverage variants of causal and correlational analysis methods, applied to time-series obtained from the imaging modalities [13, 7, 17, 16, 12].

Contemporary brain connectivity analyses fall under two broad categories, namely, functional connectivity which pertains to discovery of non-directional pairwise correlations between regions of interest (ROIs), and effective connectivity which instead focuses on inference of directional (a.k.a., causal) dependencies between them [14]. Granger causality or vector autoregressive models (VARMs) [35], structural equation models (SEMs) [32], and dynamic causal modeling (DCM) [15] constitute widely used approaches for effective connectivity studies. VARMs postulate that connected ROIs exert time-lagged dependencies among one another, while SEMs assume instantaneous causal interactions among them. Interestingly, these points of view are unified through the so-termed structural vector autoregressive model (SVARM) [9], which postulates that the spatio-temporal behavior observed in brain imaging data results from both instantaneous and time-lagged interactions between ROIs. It has been shown that SVARMs lead to markedly more flexibility and explanatory power than VARMs and SEMs treated separately, at the expense of increased model complexity.

The fundamental appeal of the aforementioned effective connectivity approaches stems from their inherent simplicity, since they adopt linear models. However, linearity is an oversimplification motivated chiefly by the need for tractability, and nonlinear models of causal dependence may enable more accurate inference of brain connectivity. In fact, recognizing the limitations of linear models, several variants of nonlinear SEMs have been put forth in a number of recent works; see e.g., [21, 39, 20, 27, 18, 23].

For example, [27] and [28] advocate SEMs in which nonlinear dependencies only exist in the so-termed exogenous or independent variables. Furthermore, [20] puts forth a hierarchical Bayesian nonlinear modeling approach in which unknown random parameters capture the strength and directions of causal links among variables. Several other studies adopt polynomial SEMs, which offer an immediate extension to classical linear SEMs; see e.g., [21, 39, 18, 23]. All these contemporary approaches assume that the network connectivity structure is known a priori, and the developed algorithms only estimate the unknown edge weights. This is a major limitation, since such prior information may not be available in practice, especially when dealing with potentially massive networks such as the brain.

Similarly, several variants of nonlinear VARMs have been shown useful in unveiling links that often remain undiscovered by traditional linear models; see e.g.,[30, 31, 38, 29]. More recently, [29] proposed a kernel-based VARM, with nonlinear dependencies among nodes encoded by unknown functions belonging to a reproducing kernel Hilbert space.

Building upon these prior works, the present paper puts forth a novel additive nonlinear VARM to capture dependencies between observed ROI-based time-series, without explicit knowledge of the edge structure. Similar to [29], kernels are advocated as an encompassing framework for nonlinear learning tasks. Note that SVARMs admit an interesting interpretation as SEMs, with instantaneous terms viewed as endogenous variables, and time-lagged terms as exogenous variables. Since numerical measurement of external brain stimuli is often impractical, or extremely challenging in conventional experiments, adoption of such a fully-fledged SEM (with both endo- and exogenous inputs) is often impossible with traditional imaging modalities.

A key feature of the novel approach is the premise that edges in the unknown network are sparse, that is, each ROI is linked to only a small subset of the ROIs it could potentially connect to in a fully connected graph. This sparse edge connectivity has recently motivated the development of efficient regularized estimators, promoting the inference of sparse network adjacency matrices; see e.g., [3, 1, 25, 29] and references therein. Building on these prior works, this paper develops a sparse-regularized kernel-based nonlinear SVARM to estimate the effective brain connectivity from per-ROI time series. Compared with [29], the novel approach incorporates instantaneous (cf. endogenous for SEMs) variables, is computationally more efficient, and facilitates a data-driven approach for kernel selection.

The rest of this paper is organized as follows. Section II introduces the conventional SVARM, while Section III puts forth its novel nonlinear variant. Section IV advocates a sparsity-promoting regularized least-squares estimator for topology inference from the nonlinear SVARM, while Section V deals with an approach to learn the kernel that 'best' matches the data. Results of extensive numerical tests based on ECoG data from an epilepsy study are presented in Section VI, and pertinent comparisons with linear variants demonstrate the efficacy of the novel approach. Finally, Section VII concludes the paper, and highlights several potential future research directions opened up by this work.

Notation. Bold uppercase (lowercase) letters will denote matrices (column vectors), while the operators $(\cdot)^\top$ and $\mathrm{diag}(\cdot)$ will stand for matrix transposition and diagonal matrices, respectively. The identity matrix will be represented by $\mathbf{I}$, while $\mathbf{0}$ will denote the all-zero matrix; their dimensions will be clear from the context. Finally, the $\ell_p$ and Frobenius norms will be denoted by $\|\cdot\|_p$ and $\|\cdot\|_F$, respectively.

Fig. 1: (left) A simple illustration of a five-node brain network; and (right) a set of five neuronal time series (e.g., ECoG voltage), each associated with a node. Per interval $t$, SVARMs postulate that causal dependencies between the nodal time series may be due to both instantaneous effects (blue links) and/or time-lagged effects (red links). Estimating the values of the unknown coefficients amounts to learning the causal (link) structure of the network.

II Preliminaries on Linear SVARMs

Consider a directed network whose topology is unknown, comprising $N$ nodes, each associated with an observable time series $\{y_{jt}\}_{t=1}^{T}$ measured over $T$ time-slots, for $j = 1, \ldots, N$. Note that $y_{jt}$ denotes the $t$-th sample of the time series measured at node $j$. In the context of the brain, each node could represent a ROI, while the per-ROI time series are obtainable from standard imaging modalities, e.g., EEG or fMRI time courses. The network topology or edge structure will be captured by the weighted graph adjacency matrix $\mathbf{A}$, whose $(i,j)$-th entry $a_{ij}$ is nonzero only if a directed (causal) effect exists from region $i$ to region $j$.

In order to unveil the hidden causal network topology, traditional linear SVARMs postulate that each $y_{jt}$ can be represented as a linear combination of instantaneous measurements at the other nodes $\{y_{it}\}_{i \neq j}$, and their time-lagged versions $\{y_{i(t-\ell)}\}_{\ell=1}^{L}$ [9]. Specifically, $y_{jt}$ admits the following linear instantaneous plus time-lagged model

$$y_{jt} = \sum_{i \neq j} a^{(0)}_{ij} y_{it} + \sum_{\ell=1}^{L} \sum_{i=1}^{N} a^{(\ell)}_{ij} y_{i(t-\ell)} + e_{jt} \qquad (1)$$

with $a^{(\ell)}_{ij}$ capturing the causal influence of region $i$ upon region $j$ over a lag of $\ell$ time points, while $a^{(0)}_{ij}$ encodes the corresponding instantaneous causal relationship between them. The coefficients encode the causal structure of the network; that is, a causal link exists between nodes $i$ and $j$ only if $a^{(0)}_{ij} \neq 0$, or if $a^{(\ell)}_{ij} \neq 0$ for some $\ell \in \{1, \ldots, L\}$. If $a^{(0)}_{ij} = 0$ for all $(i,j)$, then (1) reduces to classical Granger causality [35]. Similarly, setting $a^{(\ell)}_{ij} = 0$ for all $\ell \geq 1$ reduces (1) to a linear SEM with no exogenous inputs [22]. Defining $\mathbf{y}_t := [y_{1t}, \ldots, y_{Nt}]^\top$, $\mathbf{e}_t := [e_{1t}, \ldots, e_{Nt}]^\top$, and the (time-lagged) adjacency matrices $\mathbf{A}^{(\ell)}$ with $(i,j)$-th entries $a^{(\ell)}_{ij}$, one can write (1) in vector form as

$$\mathbf{y}_t = \mathbf{A}^{(0)\top} \mathbf{y}_t + \sum_{\ell=1}^{L} \mathbf{A}^{(\ell)\top} \mathbf{y}_{t-\ell} + \mathbf{e}_t \qquad (2)$$

where $\mathbf{A}^{(0)}$ has zero diagonal entries ($a^{(0)}_{jj} = 0$ for all $j$).
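To make the linear model concrete, the following sketch (with hypothetical sizes and coefficient values, and NumPy as the assumed toolchain) simulates data from an instantaneous-plus-one-lag SVARM and estimates the coefficients by plain per-node least squares. It is illustrative only: ordinary LS with instantaneous regressors ignores the simultaneity issues that dedicated SVARM estimators address.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 5, 400                          # hypothetical network size and samples

# Hypothetical ground truth: A0[j, i] (instantaneous) and A1[j, i] (one lag)
# encode the effect of node i on node j; A0 has a zero diagonal.
A0 = np.zeros((N, N)); A0[0, 1] = 0.4; A0[2, 3] = -0.3
A1 = 0.3 * np.eye(N);  A1[1, 4] = 0.5

# Generate y_t = A0 y_t + A1 y_{t-1} + e_t, i.e. y_t = (I-A0)^{-1}(A1 y_{t-1} + e_t)
Y = np.zeros((N, T))
M = np.linalg.inv(np.eye(N) - A0)
for t in range(1, T):
    Y[:, t] = M @ (A1 @ Y[:, t - 1] + 0.1 * rng.standard_normal(N))

# Per-node least squares: regress y_{jt} on the other nodes' instantaneous
# samples and on every node's one-lag samples.
A0_hat = np.zeros((N, N))
A1_hat = np.zeros((N, N))
for j in range(N):
    others = [i for i in range(N) if i != j]
    X = np.vstack([Y[others, 1:], Y[:, :-1]])      # regressors, shape (2N-1, T-1)
    coef, *_ = np.linalg.lstsq(X.T, Y[j, 1:], rcond=None)
    A0_hat[j, others] = coef[:N - 1]
    A1_hat[j, :] = coef[N - 1:]
```

In practice the estimated entries would be thresholded (or sparsified by regularization, as in Section IV) before reading off the edge structure.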

Given the multivariate time series $\{\mathbf{y}_t\}_{t=1}^{T}$, the goal is to estimate the matrices $\{\mathbf{A}^{(\ell)}\}_{\ell=0}^{L}$, and consequently unveil the hidden network topology. Although the lag order $L$ is generally unknown, one can readily deduce it via standard model order selection tools, e.g., the Bayesian information criterion [10], or Akaike's information criterion [6].

Knowing which entries of $\{\mathbf{A}^{(\ell)}\}$ are nonzero, several approaches have been put forth to estimate their values. Examples are based upon ordinary least-squares [9], and hypothesis tests developed to detect presence or absence of pairwise causal links under prescribed false-alarm rates [35]. Albeit conceptually simple and computationally tractable, the linear SVARM is incapable of capturing nonlinear dependencies inherent to complex networks such as the human brain. To this end, the present paper generalizes the linear SVARM in (1) to a nonlinear kernel-based SVARM.

It is also worth noting that most real-world networks (including the brain) exhibit edge sparsity, the tendency of each node to link with only a few other nodes compared to the maximal set of potential connections per node. This means that per node $j$, only a few coefficients $a^{(\ell)}_{ij}$ are nonzero. In fact, several recent approaches exploiting edge sparsity have been advocated, leading to more efficient topology estimation; see e.g., [3, 1, 29].

III From Linear to Nonlinear SVARMs

To enhance flexibility and accuracy, this section generalizes (1) so that nonlinear causal dependencies can be captured. The most general nonlinear model with both instantaneous and time-lagged structure can be written in multivariate form as $\mathbf{y}_t = \mathbf{f}(\mathbf{y}_t, \mathbf{y}_{t-1}, \ldots, \mathbf{y}_{t-L}) + \mathbf{e}_t$, or, entry-wise as

$$y_{jt} = f_j\big(\mathbf{y}_{-jt}, \mathbf{y}_{t-1}, \ldots, \mathbf{y}_{t-L}\big) + e_{jt} \qquad (3)$$

where $\mathbf{y}_{-jt}$ collects all but the $j$-th nodal observation at time $t$, and $f_j(\cdot)$ denotes a nonlinear function of its multivariate argument. With limited data available, $f_j$ in (3) entails $N(L+1) - 1$ variables. This fact motivates simpler model functions to cope with the emerging 'curse of dimensionality' in estimating $\{f_j\}_{j=1}^{N}$. A simplified form of (3) has been studied in [29] with $L = 1$, and without the instantaneous influences, which have been shown to be of importance in applications such as brain connectivity [32] and gene regulatory networks [8]. Such a model is simpler than (3) because the number of variables of each $f_j$ reduces to $N$. Nevertheless, estimating such an $N$-variate functional model still suffers from the curse of dimensionality, especially as the size of typical networks scales up.

To circumvent this challenge, we further posit that the multivariate function in (3) is separable with respect to each of its variables. Such a simplification of (3) amounts to adopting a generalized additive model (GAM) [19, Ch. 9]. In the present context, the GAM adopted is $f_j(\mathbf{y}_{-jt}, \mathbf{y}_{t-1}, \ldots, \mathbf{y}_{t-L}) = \sum_{i \neq j} f^{(0)}_{ij}(y_{it}) + \sum_{\ell=1}^{L} \sum_{i=1}^{N} f^{(\ell)}_{ij}(y_{i(t-\ell)})$, where the univariate nonlinear functions $f^{(\ell)}_{ij}(\cdot)$ will be specified in the next section. Under this model, the node-$j$ observation at time $t$ results from both instantaneous and multi-lag effects; that is [cf. (1)]

$$y_{jt} = \sum_{i \neq j} f^{(0)}_{ij}(y_{it}) + \sum_{\ell=1}^{L} \sum_{i=1}^{N} f^{(\ell)}_{ij}\big(y_{i(t-\ell)}\big) + e_{jt} \qquad (4)$$

where, similar to (1), the matrices $\mathbf{A}^{(\ell)}$ collect entries $a^{(\ell)}_{ij}$ that are nonzero whenever $f^{(\ell)}_{ij} \neq 0$. As before, a directed edge from node $i$ to node $j$ exists if the corresponding $f^{(\ell)}_{ij} \neq 0$ for any $\ell \in \{0, \ldots, L\}$. Instead of having to estimate an $[N(L+1)-1]$-variate function in (3), or an $N$-variate function as in [29], (4) only requires estimating univariate functions, one per candidate edge and lag. Note that conventional linear SVARMs in (1) constrain the functions in (4) to be linear, a limitation that the ensuing Section IV will address by resorting to a reproducing kernel Hilbert space (RKHS) formulation to model $f^{(\ell)}_{ij}$.
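As a toy illustration of the additive model, the sketch below (all sizes, edge choices, and the tanh nonlinearity are hypothetical) generates data in which each node depends on its own past through a univariate nonlinearity, and one node additionally receives an instantaneous nonlinear effect; updating nodes in topological order sidesteps a fixed-point solve for the instantaneous part:

```python
import numpy as np

rng = np.random.default_rng(5)
N, T = 4, 300
Y = np.zeros((N, T))
f = np.tanh                         # hypothetical univariate edge function

for t in range(1, T):
    # time-lagged additive part: each node depends on its own past sample
    z = 0.5 * f(Y[:, t - 1]) + 0.1 * rng.standard_normal(N)
    # instantaneous additive part: node 0 also receives f(y_{1,t});
    # node 1 is updated first, so no fixed-point computation is needed
    Y[1:, t] = z[1:]
    Y[0, t] = z[0] + f(Y[1, t])
```

Because tanh is bounded, the generated series stays bounded regardless of the noise realization, which keeps such toy simulations stable.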

Problem statement. Given $\{\mathbf{y}_t\}_{t=1}^{T}$, the goal now becomes to estimate the nonlinear functions $\{f^{(\ell)}_{ij}\}$, as well as the adjacency matrices $\{\mathbf{A}^{(\ell)}\}_{\ell=0}^{L}$ in (4).

IV Kernel-based Sparse SVARMs

Suppose that each univariate function $f^{(\ell)}_{ij}$ in (4) belongs to the RKHS

$$\mathcal{H}^{(\ell)}_i := \Big\{ f^{(\ell)}_{ij} \;:\; f^{(\ell)}_{ij}(y) = \sum_{t=1}^{\infty} \beta^{(\ell)}_{ijt} \, k^{(\ell)}_i\big(y, y_{i(t-\ell)}\big) \Big\} \qquad (5)$$

where $k^{(\ell)}_i(y, \psi)$ is a preselected basis (so-termed kernel) function that measures the similarity between $y$ and $\psi$. Different choices of the kernel specify their own basis expansion spaces, and linear functions can be regarded as a special case associated with the linear kernel $k(y, \psi) = y\psi$. An alternative popular kernel is the Gaussian one, given by $k(y, \psi) = \exp[-(y - \psi)^2 / (2\sigma^2)]$. A kernel is reproducing if it satisfies $\langle k(y, \cdot), k(\psi, \cdot) \rangle_{\mathcal{H}} = k(y, \psi)$, which induces the RKHS norm $\|f^{(\ell)}_{ij}\|^2_{\mathcal{H}} = \sum_{t, t'} \beta^{(\ell)}_{ijt} \beta^{(\ell)}_{ijt'} \, k^{(\ell)}_i\big(y_{i(t-\ell)}, y_{i(t'-\ell)}\big)$.
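Both kernels mentioned above are one-liners; the sketch below (with hypothetical sample values) builds the Gram (kernel) matrix of a scalar series under the Gaussian kernel and can be checked for the two properties a valid kernel matrix must have, symmetry and positive semidefiniteness:

```python
import numpy as np

def linear_kernel(y, psi):
    return y * psi                                    # k(y, psi) = y * psi

def gaussian_kernel(y, psi, sigma=0.5):
    return np.exp(-(y - psi) ** 2 / (2 * sigma ** 2))

# Gram matrix of a scalar time series: K[t, s] = k(y_t, y_s)
y = np.array([0.0, 0.5, 1.0, -0.2])
K = gaussian_kernel(y[:, None], y[None, :])
```

Broadcasting the series against itself yields the full Gram matrix without explicit loops, which is the standard NumPy idiom for kernel computations.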

Considering the per-node measurements $\{y_{jt}\}_{t=1}^{T}$, with functions $f^{(\ell)}_{ij} \in \mathcal{H}^{(\ell)}_i$ for $i, j = 1, \ldots, N$ and $\ell = 0, \ldots, L$, the present paper advocates the following regularized least-squares (LS) estimates of the aforementioned functions

$$\{\hat f^{(\ell)}_{ij}\} = \arg\min_{\{f^{(\ell)}_{ij} \in \mathcal{H}^{(\ell)}_i\}} \; \frac{1}{2} \sum_{j=1}^{N} \sum_{t=1}^{T} \Big[ y_{jt} - \sum_{i \neq j} f^{(0)}_{ij}(y_{it}) - \sum_{\ell=1}^{L} \sum_{i=1}^{N} f^{(\ell)}_{ij}\big(y_{i(t-\ell)}\big) \Big]^2 + \lambda \, \Omega\big(\{f^{(\ell)}_{ij}\}\big) \qquad (6)$$

where $\Omega(\cdot)$ denotes a regularizing function, which will be specified later. An important result that will be used in the following is the representer theorem [19, p. 169], according to which the optimal solution for each $f^{(\ell)}_{ij}$ in (6) is given by

$$\hat f^{(\ell)}_{ij}(y) = \sum_{t=1}^{T} \alpha^{(\ell)}_{ijt} \, k^{(\ell)}_i\big(y, y_{i(t-\ell)}\big) \qquad (7)$$

Although the function spaces in (5) include infinite basis expansions, since the given data are finite, namely $T$ samples per node, the optimal solution in (7) entails a finite basis expansion. Substituting (7) into (6), and collecting the expansion coefficients in vectors $\boldsymbol{\alpha}^{(\ell)}_{ij} := [\alpha^{(\ell)}_{ij1}, \ldots, \alpha^{(\ell)}_{ijT}]^\top$, the functional minimization in (6) boils down to optimizing over these vectors. Specifically, (6) can be equivalently written in vector form as

$$\{\hat{\boldsymbol{\alpha}}^{(\ell)}_{ij}\} = \arg\min_{\{\boldsymbol{\alpha}^{(\ell)}_{ij}\}} \; \frac{1}{2} \sum_{j=1}^{N} \Big\| \mathbf{y}_j - \sum_{i \neq j} \mathbf{K}^{(0)}_i \boldsymbol{\alpha}^{(0)}_{ij} - \sum_{\ell=1}^{L} \sum_{i=1}^{N} \mathbf{K}^{(\ell)}_i \boldsymbol{\alpha}^{(\ell)}_{ij} \Big\|_2^2 + \lambda \, \Omega\big(\{\boldsymbol{\alpha}^{(\ell)}_{ij}\}\big) \qquad (8)$$

where $\mathbf{y}_j := [y_{j1}, \ldots, y_{jT}]^\top$, and the $T \times T$ kernel matrices $\mathbf{K}^{(\ell)}_i$ have entries $[\mathbf{K}^{(\ell)}_i]_{t, t'} = k^{(\ell)}_i(y_{i(t-\ell)}, y_{i(t'-\ell)})$. Furthermore, collecting all the nodal observations in $\mathbf{Y} := [\mathbf{y}_1, \ldots, \mathbf{y}_N]$ and stacking the coefficient vectors accordingly, (8) can be compactly written as

(9)

where the block matrix

(10)

exhibits a structure 'modulated' by the entries of the adjacency matrices $\{\mathbf{A}^{(\ell)}\}$. For instance, if $a^{(\ell)}_{ij} = 0$, then the corresponding block of (10) is an all-zero block, irrespective of the values taken by the coefficients $\boldsymbol{\alpha}^{(\ell)}_{ij}$.
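The practical payoff of the representer theorem is that each unknown univariate function reduces to $T$ coefficients. The sketch below fits one such function by kernel ridge regression, as a stand-in for one summand of the additive model (the sine target, bandwidth, and regularization weight are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = rng.uniform(-2, 2, T)                       # "observed" regressor samples
y = np.sin(x) + 0.05 * rng.standard_normal(T)   # noisy nonlinear response

def gauss(a, b, s=0.5):
    return np.exp(-(a - b) ** 2 / (2 * s ** 2))

K = gauss(x[:, None], x[None, :])               # T x T Gram matrix
lam = 1e-2
# Kernel ridge: finite expansion f(z) = sum_t alpha_t k(z, x_t)
alpha = np.linalg.solve(K + lam * np.eye(T), y)

def f_hat(z):
    return gauss(z[:, None], x[None, :]) @ alpha

grid = np.linspace(-1.5, 1.5, 50)
err = np.max(np.abs(f_hat(grid) - np.sin(grid)))
```

Even though the hypothesis space is infinite-dimensional, the fitted function is fully described by the `alpha` vector of length $T$, exactly as (7) asserts.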

Instead of the LS cost used in (6) and (9), alternative loss functions could be employed to promote robustness using the $\epsilon$-insensitive or the $\ell_1$-error norm; see e.g., [19, Ch. 12]. Regarding the regularizing function $\Omega(\cdot)$, typical choices are the sum of RKHS norms $\sum_{\ell, i, j} \|f^{(\ell)}_{ij}\|_{\mathcal{H}}$, or its squared counterpart $\sum_{\ell, i, j} \|f^{(\ell)}_{ij}\|^2_{\mathcal{H}}$. The former is known to promote sparsity of edges, which is prevalent in most networks; see e.g., [36]. In principle, leveraging such prior knowledge naturally leads to more efficient topology estimators, since the matrices $\{\mathbf{A}^{(\ell)}\}$ are promoted to have only a few nonzero entries. The sparse nature of $\{\mathbf{A}^{(\ell)}\}$ manifests itself as block sparsity in the coefficient vectors $\{\boldsymbol{\alpha}^{(\ell)}_{ij}\}$. Specifically, using the sum-of-norms regularizer, one obtains the following estimator of the coefficient vectors

$$\{\hat{\boldsymbol{\alpha}}^{(\ell)}_{ij}\} = \arg\min_{\{\boldsymbol{\alpha}^{(\ell)}_{ij}\}} \; \frac{1}{2} \sum_{j=1}^{N} \Big\| \mathbf{y}_j - \sum_{\ell=0}^{L} \sum_{i} \mathbf{K}^{(\ell)}_i \boldsymbol{\alpha}^{(\ell)}_{ij} \Big\|_2^2 + \lambda \sum_{\ell=0}^{L} \sum_{i, j} \big( \boldsymbol{\alpha}^{(\ell)\top}_{ij} \mathbf{K}^{(\ell)}_i \boldsymbol{\alpha}^{(\ell)}_{ij} \big)^{1/2} \qquad (11)$$

Recognizing that each summand in the regularization term of (11) can be written as $\|f^{(\ell)}_{ij}\|_{\mathcal{H}} = \big(\boldsymbol{\alpha}^{(\ell)\top}_{ij} \mathbf{K}^{(\ell)}_i \boldsymbol{\alpha}^{(\ell)}_{ij}\big)^{1/2}$, which is a weighted $\ell_2$-norm of $\boldsymbol{\alpha}^{(\ell)}_{ij}$, the entire regularizer can henceforth be regarded as a weighted $\ell_{2,1}$-norm of the stacked coefficient vector, which is known to be useful for promoting block sparsity. It is clear that (11) is a convex problem, which admits a globally optimal solution. In fact, the structure of (11) lends itself naturally to efficient iterative proximal optimization methods, e.g., proximal gradient descent iterations [5, Ch. 7], or the alternating direction method of multipliers (ADMM) [37].
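The workhorse inside proximal solvers of such group-sparse problems is block soft-thresholding, the proximal operator of a (possibly weighted) $\ell_2$-norm of a coefficient block; a minimal sketch:

```python
import numpy as np

def prox_group(v, tau):
    """Proximal operator of tau * ||v||_2: shrinks the whole block v
    toward the origin, and zeroes it exactly when ||v||_2 <= tau."""
    nrm = np.linalg.norm(v)
    if nrm <= tau:
        return np.zeros_like(v)
    return (1.0 - tau / nrm) * v
```

Blocks that are zeroed out correspond to absent edges, which is precisely how block sparsity in the coefficient vectors translates into sparsity of the inferred adjacency matrices.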

For a more detailed description of the algorithmic approaches adopted to unveil the hidden topology by solving (11), the reader is referred to Appendix A. All in all, Algorithm 1 summarizes the novel iterative solver of (11) derived based on ADMM iterations. Per iteration, the complexity of ADMM grows only linearly with the network size $N$. A couple of remarks are now in order.

Remark 1

Selecting the squared RKHS norm as the regularizer is known to control model complexity, and thus prevent overfitting [19, Ch. 3]. Collecting the per-node coefficient vectors and kernel matrices into stacked and block-diagonal forms, respectively, and substituting into (9), one obtains

(12)

Problem (12) is convex and can be solved in closed form as

(13)

where the vector, matrix, and block-diagonal quantities in (13) are obtained from their counterparts in (12) after removing the entries, columns, and rows associated with coefficients known a priori to be zero. Using the matrix inversion lemma, the complexity of solving (13) can be kept affordable even as the network grows.
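For the $\ell_2$ (ridge) regularizer of Remark 1, the first-order optimality condition gives the familiar closed form; the sketch below uses a random matrix as a stand-in for the stacked kernel regressors (all sizes hypothetical) and verifies that the objective gradient vanishes at the solution:

```python
import numpy as np

rng = np.random.default_rng(2)
T, P = 100, 30
Kbar = rng.standard_normal((T, P))   # stand-in for stacked kernel matrices
y = rng.standard_normal(T)
lam = 0.5

# Ridge closed form: a = (K'K + lam I)^{-1} K'y
a = np.linalg.solve(Kbar.T @ Kbar + lam * np.eye(P), Kbar.T @ y)

# Gradient of (1/2)||y - K a||^2 + (lam/2)||a||^2 at the solution
grad = Kbar.T @ (Kbar @ a - y) + lam * a
```

Unlike the group-sparse estimator (11), this variant needs no iterations, which is what makes the $\ell_2$ choice attractive when sparsity is not required.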

Remark 2

Relying on an operator kernel (OK), the approach in [29] offers a more general nonlinear VARM (but not SVARM) than the one adopted here. However, [29] accounts for neither instantaneous nor multiple-lag effects. Meanwhile, the estimator in [29] does not scale well as the size of the network increases. Also, the OK-VARM is approximated in [29] using the Jacobian, which adds to the complexity of the algorithm, and may degrade the generality of the model. Finally, the model in [29] is limited in its ability to incorporate the structure of the network (e.g., edge sparsity). In order to incorporate prior information on the model structure, [29] ends up solving a nonconvex problem, which may get trapped in local minima, and the flexibility in choosing kernel functions is also sacrificed. In contrast, our approach entails a natural extension to data-driven kernel selection, which is outlined in the next section.

Algorithm 1: ADMM for network topology identification. (The pseudocode did not survive extraction; in outline, the algorithm proceeds as follows.) Input: the nodal time series, the lag order, the kernel, the regularization parameter, and a threshold. Initialization: form the per-node kernel matrices and initialize the primal, auxiliary, and dual variables. Iterations: until convergence, update the per-node coefficient blocks in parallel via proximal (block soft-thresholding) steps, then update the auxiliary and dual variables. Edge identification: after convergence, declare an edge from node $i$ to node $j$ whenever the norm of the corresponding coefficient block exceeds the threshold.

V Data-driven kernel selection

Choice of the kernel function determines the associated Hilbert space, and it is therefore of significant importance in estimating the nonlinear functions $\{f^{(\ell)}_{ij}\}$. Although Section IV assumed that the kernels are available, this is not the case in general, and this section advocates a data-driven strategy for selecting them. Given a dictionary of reproducing kernels $\{k_p\}_{p=1}^{P}$, it has been shown that any function in their convex hull is also a reproducing kernel. Therefore, the goal of the present section is to select the kernel from this convex hull that best fits the data. For ease of exposition, consider a common kernel for all nodes and lags in (11); the formulation can be readily extended to settings where the kernels differ. Incorporating the kernel as a variable function in (11) yields

(14)

where $\mathcal{H}$ denotes the Hilbert space associated with the sought kernel function. With $\mathcal{H}^p$ denoting the RKHS induced by kernel $k_p$, it has been shown in [4] and [33] that the optimal solution of (14) is expressible in a separable form as

$$\hat f^{(\ell)}_{ij} = \sum_{p=1}^{P} \hat f^{(\ell), p}_{ij} \qquad (15)$$

where each $\hat f^{(\ell), p}_{ij}$ belongs to the RKHS $\mathcal{H}^p$, for $p = 1, \ldots, P$. Substituting (15) into (14), one obtains

(16)

Note that (16) and (11) have similar structure, and their only difference pertains to an extra summation over the candidate kernels. Hence, (16) can be solved efficiently along the lines of the iterative solver of (11) listed under Algorithm 1 [cf. the discussion in Section IV]. Further details of the solution are omitted due to space limitations.
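In the same spirit, the sketch below fits a scalar relationship with a two-kernel dictionary (linear plus Gaussian), one coefficient vector per kernel, so that the data decide how much of each kernel to use. The target, bandwidth, and regularization weight are hypothetical, and a plain ridge penalty stands in for the sparsity-promoting one:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 150
x = rng.uniform(-1, 1, T)
y = 0.8 * x + np.sin(3 * x)          # mixes linear and nonlinear effects

K_lin = x[:, None] * x[None, :]                          # linear kernel
K_rbf = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)    # Gaussian kernel
K = np.hstack([K_lin, K_rbf])        # dictionary: one block per kernel

lam = 1e-3
# One coefficient vector per kernel, estimated jointly (cf. (15)-(16))
a = np.linalg.solve(K.T @ K + lam * np.eye(2 * T), K.T @ y)
mse = np.mean((K @ a - y) ** 2)
```

The fitted function is the sum of one component per dictionary kernel, mirroring the separable form (15).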

VI Numerical tests

This section presents test results on seizure data, captured through experiments conducted in an epilepsy study [26]. Epilepsy refers to a chronic neurological condition characterized by recurrent seizures, globally afflicting millions of people, and often associated with abnormal neuronal activity within the brain. Diagnosis of the condition sometimes involves comparing EEG or ECoG time series obtained from a patient's brain before and after onset of a seizure. Recent studies have shown increasing interest in the analysis of connectivity networks inferred from these neuronal time series, in order to gain more insights into the unknown physiological mechanisms underlying epileptic seizures. In this section, connectivity networks are inferred from the seizure data using the novel approach, and a number of comparative measures are computed from the identified network topologies.

VI-A Seizure data description

Seizure data were obtained for a female subject with a case of intractable epilepsy at the University of California, San Francisco (UCSF) Epilepsy Center; see also [26]. An 8 × 8 subdural electrode grid was implanted into the cortical surface of the subject's brain, and two accompanying electrode strips, each comprising six electrodes (a.k.a., depth electrodes), were implanted deeper into the brain. Over a period of five days, the combined 76-electrode network recorded ECoG time series, consisting of voltage levels measured in a region within close proximity of each electrode.

ECoG epochs containing eight seizures were extracted from the record and analyzed by a specialist. The time series at each electrode were first passed through a bandpass filter, and the so-termed ictal onset of each seizure was identified as follows. A board-certified neurophysiologist identified the initial manifestation of rhythmic high-frequency, low-voltage focal activity, which characterizes the onset of a seizure. Samples of data before and after this seizure onset were then extracted from the ECoG time series. The per-electrode time series were divided into overlapping windows, and the average spectral power over a high-frequency band was computed per window. Finally, power spectra over all electrodes were averaged, and the ictal onset was identified by visual inspection of a dramatic increase in the average power. Two temporal intervals of interest were picked for further analysis, namely, the preictal and ictal intervals. The preictal interval is defined as an interval of fixed duration immediately preceding the seizure onset, while the ictal interval comprises the interval of the same duration immediately afterwards. Further details about data acquisition and pre-processing are provided in [26].
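The windowed-power onset criterion can be mimicked on synthetic data; in the sketch below a high-frequency oscillation switches on halfway through a single synthetic channel (the sampling rate, window sizes, and 25–50 Hz band are assumptions, not the study's actual parameters), and the onset is located at the largest jump in average band power:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 256                               # assumed sampling rate (Hz)
t = np.arange(0, 40, 1 / fs)           # 40 s of one synthetic channel
x = 0.1 * rng.standard_normal(t.size)
x[t >= 20] += np.sin(2 * np.pi * 30 * t[t >= 20])   # "onset" at 20 s

win, hop = 2 * fs, fs                  # 2 s windows, 1 s hop
powers = []
for start in range(0, x.size - win + 1, hop):
    seg = x[start:start + win]
    spec = np.abs(np.fft.rfft(seg)) ** 2
    freqs = np.fft.rfftfreq(win, 1 / fs)
    band = (freqs >= 25) & (freqs <= 50)   # illustrative high-frequency band
    powers.append(spec[band].mean())
powers = np.array(powers)

# The window with the largest jump in average band power marks the onset
onset_win = int(np.argmax(np.diff(powers))) + 1
onset_sec = onset_win * hop / fs
```

In the study this statistic was averaged over all electrodes and inspected visually; the synthetic single-channel version simply makes the mechanics concrete.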

The goal here was to assess whether modeling nonlinearities, and adopting the novel kernel-based approach would yield significant insights pertaining to causal/effective dependencies between brain regions, that linear variants would otherwise fail to capture. Toward this goal, several standard network analysis measures were adopted to characterize the structural properties of the inferred networks.

Fig. 2: Visualizations of 76-electrode networks inferred from ECoG data: (a) linear SVARM with L = 1 on preictal time series; (b) linear SVARM with L = 1 on ictal time series; (c) K-SVARM with L = 1 on preictal time series, using a polynomial kernel; (d) the same K-SVARM on ictal time series; (e) K-SVARM with kernel selection on preictal time series; and finally (f) K-SVARM with kernel selection on ictal time series.

VI-B Inferred networks

Prior to running the developed algorithm, intervals were chosen from the preprocessed ECoG data and divided into successive segments of equal duration. After this segmentation of the time series, directed network topologies were inferred using Algorithm 1 with $L = 1$, based on the individual segments instead of the entire signal, to ensure that the signal is approximately stationary per experiment run. A directed link from electrode $i$ to electrode $j$ was drawn if at least one of the corresponding coefficient estimates turned out to be nonzero.

Fig. 3: Node degrees of networks inferred from ECoG data encoded by circle radii: (a) linear SVARM on preictal data; (b) linear SVARM on ictal data; (c) K-SVARM on preictal time series; and (d) K-SVARM on ictal data.

Networks inferred from the preictal and ictal intervals were compared using the linear SVARM, the kernel-based (K-)SVARM, as well as the K-SVARM with data-driven kernel selection. The lag length was set to $L = 1$ in all cases. For the K-SVARM, a polynomial kernel was selected. Furthermore, the threshold and the remaining parameters in Algorithm 1 were fixed to constant values, while the regularization parameter was selected via cross-validation. For the data-driven kernel selection scheme, two candidate kernels were employed, namely, a linear kernel and a polynomial kernel.

Figure 2 depicts the networks inferred by the different algorithms for both the preictal and ictal intervals of the time series. The figure illustrates results obtained by the linear SVARM, and by the K-SVARM approach with and without kernel selection. Each node in the network represents an electrode and is depicted as a circle, while the node arrangement is kept consistent across the visualizations. A cursory inspection of the visual maps reveals significant variations in connectivity patterns between the ictal and preictal intervals for both models. Specifically, networks inferred via the K-SVARM reveal a global decrease in the number of links emanating from each node, while those inferred via the linear model depict increases and decreases in links connected to different nodes. Interestingly, the K-SVARM with kernel selection recovered most of the edges inferred by both the linear SVARM and the K-SVARM with a polynomial kernel, which implies that both linear and nonlinear interactions may exist in brain networks. Clearly, one is unlikely to gain much insight by visual inspection of the network topologies alone. To further analyze differences between the networks inferred by the models, and to assess the potential benefits of adopting the novel scheme, several network topology metrics are computed and compared in the next subsection.

VI-C Comparison of network metrics

First, in- and out-degrees were computed for nodes in each of the inferred networks. Note that the in-degree of a node counts its number of incoming edges, while the out-degree counts its out-going edges. The total degree per node sums the in- and out-degrees, and is indicative of how well-connected a given node is. Figure 3 depicts the nodes in the network, with their total degrees encoded by the radii of the associated circles. As expected from the previous subsection, Figures 3 (a) and (b) demonstrate that the linear SVARM yields both increases and decreases in the inferred node degrees. On the other hand, the nonlinear SVARM leads to a more spatially consistent observation, with most nodes exhibiting a smaller degree after the onset of a seizure (see Figures 3 (c) and (d)), which may imply that causal dependencies thin out between regions of the brain once a seizure starts.
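These per-node counts come straight from the inferred adjacency matrix; a minimal sketch on a hypothetical 4-node directed graph:

```python
import numpy as np

# Toy directed adjacency matrix: A[i, j] = 1 means an edge from i to j
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]])
N = A.shape[0]

out_deg = A.sum(axis=1)             # edges leaving each node
in_deg = A.sum(axis=0)              # edges entering each node
total_deg = in_deg + out_deg        # total degree per node
density = A.sum() / (N * (N - 1))   # actual / potential directed edges
```

The same row/column sums, applied per segment, produce the degree profiles visualized in Figure 3.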

Fig. 4: Same as in Figure 3 for comparison based on betweenness centralities of inferred graphs.

In order to assess the information-routing abilities of brain regions before and after seizure onset, comparisons of the so-termed betweenness centrality were carried out. The betweenness centrality of a node computes the fraction of shortest paths between all node pairs that traverse the given node, and it is useful for identifying the key information-transmitting hubs in a network; see e.g., [24] for more details. The per-node betweenness centralities for each inferred network are depicted in Figure 4, with node radii similarly encoding the computed values. Little variation between preictal and ictal betweenness centralities is seen for the linear model (Figures 4 (a) and (b)), while variations are slightly more marked for the K-SVARM; see Figures 4 (c) and (d). Evidently, modeling nonlinearities reveals subtle changes in the information-routing capabilities of nodes between the preictal and ictal phases.

Fig. 5: Same as in Figure 3 for comparison based on clustering coefficients of inferred graphs.
Fig. 6: Same as in Figure 3 for comparison based on closeness centrality of inferred graphs.

Clustering coefficients are generally used to quantify network cohesion, the tendency for nodes to form groups or communities. Comparison of such coefficients between the preictal and ictal phases may reveal differences in cohesive behavior after onset of a seizure. In the present paper, a per-node clustering coefficient is adopted, and it computes the fraction of triangles in which a node participates out of all possible triangles to which it could possibly belong [24]. Note that a triangle is defined as a fully connected three-node subgraph. Figure 5 depicts clustering coefficients per electrode obtained during the ictal and preictal phases of the ECoG time series. While both the linear and nonlinear models yield changes in the computed coefficients, most nodes have lower clustering coefficients upon seizure onset in the networks inferred via the K-SVARM.

Finally, Figure 6 depicts the closeness centrality computed per node in the inferred networks. Closeness centrality measures how reachable a node is from all other nodes, and is generally defined as the reciprocal of the sum of geodesic distances of the node from all other nodes in the network; see also [24]. Once again, Figure 6 depicts a more general decrease in closeness centralities after seizure onset in networks inferred by the nonlinear SVARM, as compared to the linear variant. This empirical result indicates a change in reachability between regions of the brain during an epileptic seizure.

Moreover, the performance of the K-SVARM with data-driven kernel selection was also tested. Figure 7 illustrates the per-node degree as well as the closeness centrality of networks inferred from the preictal and ictal phases. Consistent with Figures 3 and 6, Figure 7 again reveals a universal decrease in node degrees as well as closeness centralities at seizure onset.

In addition to the local metrics, a number of global measures were computed over entire inferred networks, and pertinent comparisons were drawn between the two phases; see Table I for a summary of the global measures of the inferred networks. Several global metrics were considered, e.g., network density, global clustering coefficient, network diameter, average number of neighbors, number of self loops, number of isolated nodes and the size of the largest connected component.

Network density refers to the number of actual edges divided by the number of potential edges, while the global clustering coefficient is the fraction of connected triplets that form triangles, adjusted by a factor of three to compensate for double counting. On the other hand, network diameter is the length of the longest geodesic, excluding infinity. Table I shows that networks inferred via the K-SVARM exhibit lower network cohesion after seizure onset, as captured by network density, global clustering coefficient, and average number of neighbors, while the network diameter increases.

These changes provide empirical evidence that the brain network becomes less connected, and that diffusion of information is inhibited after the onset of an epileptic seizure. Interestingly, it also turns out that the number of self-loops decreases significantly during the ictal phase for networks inferred using the K-SVARM. Note that in this experiment, only connections to the immediately preceding time interval are considered, while the contemporaneous adjacency matrix is constrained to have no self-loops. As a consequence, the existence of a self-loop reveals a strong temporal dependence between measurements at the same node. In fact, a drop in the number of self-loops implies weaker temporal dependence between successive ECoG samples, a phenomenon that was not captured by the linear SVARM.
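Under this model, counting self-loops reduces to inspecting the diagonal of the estimated (and thresholded) lagged adjacency matrix; a minimal sketch with purely illustrative toy matrices:

```python
def count_self_loops(A, tol=1e-6):
    """Count nonzero diagonal entries of a lagged adjacency matrix, given
    as a list of rows. Each such entry marks a node whose current sample
    depends on its own past sample."""
    return sum(1 for i, row in enumerate(A) if abs(row[i]) > tol)

# Toy lagged adjacency matrices standing in for preictal/ictal estimates.
A_pre = [[0.9, 0.0, 0.2],
         [0.1, 0.8, 0.0],
         [0.0, 0.3, 0.7]]
A_ict = [[0.0, 0.1, 0.2],
         [0.1, 0.6, 0.0],
         [0.0, 0.3, 0.0]]
print(count_self_loops(A_pre), count_self_loops(A_ict))  # prints "3 1"
```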

Fig. 7: Comparison of node degrees of inferred graphs: (a) K-SVARM with kernel selection on preictal data; (b) K-SVARM with kernel selection on ictal data. Comparison of closeness centrality of inferred graphs: (c) K-SVARM with kernel selection on preictal data; (d) K-SVARM with kernel selection on ictal data.
                           Linear SVARM        K-SVARM
                           Preictal   Ictal    Preictal   Ictal
Network density              0.189    0.095      0.242    0.148
Glob. clustering coeff.      0.716    0.731      0.775    0.624
No. of connect. comp.        1        1          1        2
Network diameter             5        3          3        4
Avg. no. of neighbors       14.18     9.39      18.18    11.11
No. of self-loops           19       30         42       31
Size of largest comp.       76       76         76       67
TABLE I: Comparison of global metrics associated with networks inferred from ECoG seizure data using the linear SVARM and the K-SVARM with a polynomial kernel. Major differences between the computed metrics indicate that one may gain insights from network topologies inferred via models that capture nonlinear dependencies.

Vii Conclusions

This paper put forth a novel nonlinear SVARM framework that leverages kernels to infer effective connectivity networks in the brain. Postulating a generalized additive model with unknown functions to capture the hidden network structure, a novel regularized LS estimator that promotes sparse solutions was advocated. In order to solve the ensuing convex optimization problem, an efficient algorithm that resorts to ADMM iterations was developed, and a data-driven approach was introduced to select the appropriate kernel. Extensive numerical tests were conducted on ECoG seizure data from a study on epilepsy.

In order to assess the utility of the novel approach, several local and global metrics were adopted and computed on networks inferred before and after the onset of a seizure. By observing changes in network behavior that are revealed by standard metrics before and after seizure onset, it is possible to identify key structural differences that may be critical to explaining the mysteries of epileptic seizures. With this in mind, the paper focused on identifying structural differences in the brain network that could not be captured by the simpler linear model. Interestingly, the empirical results support adoption of a nonlinear modeling perspective when analyzing differences in effective brain connectivity for epilepsy patients. Specifically, adopting the novel kernel-based approach revealed more significant differences between the preictal and ictal phases of ECoG time series. For instance, it turned out that some regions exhibited fewer dependencies, reduced reachability, and weakened information-routing capabilities after the onset of a seizure. Since the kernel-based model includes the linear SVARM as a special case, the conducted experiments suggest that one may gain more insight by adopting the nonlinear model, a conclusion that may prove beneficial to studies of epilepsy that leverage network science.

This work paves the way for a number of exciting research directions in the analysis of brain networks. Although it has been assumed that inferred networks are static, overwhelming evidence suggests that topologies of brain networks are dynamic, and may change over rather short time horizons. Future studies will extend this work to facilitate tracking of dynamic brain networks. Furthermore, the novel approach will be empirically tested on a wider range of neurological illnesses and disorders, and pertinent comparisons will be drawn to assess the merits of adopting the advocated nonlinear modeling approach.

Appendix

A Topology Inference via ADMM

Given the data matrices, this section capitalizes on the convexity and additive structure of the terms in (IV) to develop an efficient topology inference algorithm. Proximal optimization approaches have recently been shown useful for convex optimization when the cost function comprises the sum of smooth and nonsmooth terms; see, e.g., [11]. Prominent among these approaches is the alternating direction method of multipliers (ADMM), upon which the novel algorithm is based; see, e.g., [37] for an early application of ADMM to distributed estimation.

For ease of exposition, let the equality constraints temporarily remain implicit. Introducing a change of variables, problem (IV) can be recast as

s.t. (17)

where , , and is the nonsmooth regularizer. Let , and , where is a block diagonal of its matrix arguments. One can then write the augmented Lagrangian of (A) as

(18)

where the matrix of dual variables collects Lagrange multipliers corresponding to the equality constraints in (A), the inner product is taken between matrices of matching dimensions, and the quadratic term is weighted by a prescribed penalty parameter. ADMM boils down to a sequence of alternating minimization iterations over the primal variables, followed by a gradient ascent step over the dual variables; see also [2, 37]. Per iteration, this entails the following provably convergent steps; see, e.g., [37]:

(19a)
(19b)
(19c)
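The three updates (19a)-(19c) follow the standard ADMM pattern: a closed-form primal update, a shrinkage step on the split variable, and dual gradient ascent. The sketch below illustrates that pattern on a generic group-lasso regularized least-squares problem; the function, the problem instance, and all parameter choices are illustrative, not the paper's exact formulation.

```python
import numpy as np

def admm_group_lasso(X, y, lam, groups, rho=1.0, iters=300):
    """Illustrative ADMM for min_w 0.5*||y - Xw||^2 + lam*sum_g ||w_g||_2
    via the splitting w = z: (i) closed-form ridge-type update of w,
    (ii) per-group block shrinkage of z, (iii) dual gradient ascent.
    `groups` is a list of index lists partitioning the coefficients."""
    p = X.shape[1]
    w, z, u = np.zeros(p), np.zeros(p), np.zeros(p)  # u holds scaled duals
    M = np.linalg.inv(X.T @ X + rho * np.eye(p))     # cached factor, cf. (21)
    Xty = X.T @ y
    for _ in range(iters):
        w = M @ (Xty + rho * (z - u))                # primal update, cf. (19a)
        for g in groups:                             # block shrinkage, cf. (19b)
            v = w[g] + u[g]
            nv = np.linalg.norm(v)
            z[g] = max(0.0, 1.0 - lam / (rho * nv)) * v if nv > 0 else 0.0
        u = u + w - z                                # dual ascent, cf. (19c)
    return z

# Noiseless toy problem: the middle (inactive) group should be zeroed out.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
w_true = np.array([1.5, -2.0, 0.0, 0.0, 1.0, 0.5])
y = X @ w_true
z = admm_group_lasso(X, y, lam=2.0, groups=[[0, 1], [2, 3], [4, 5]])
```

The group structure is what ultimately yields edge sparsity: zeroing an entire coefficient block removes the corresponding candidate edge.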

Focusing on the first update, note that (19a) decouples across columns, and admits closed-form, parallelizable solutions. Incorporating the structural constraint, one obtains the following decoupled subproblem per column

(20)

where is constructed by removal of entries indexed by from , with denoting the -th column of . Assuming is invertible, the per-column subproblem (20) admits the following closed-form solution per

(21)

On the other hand, (19b) can be solved per component vector , and a closed-form solution can be obtained via the so-termed block shrinkage operator for each and , namely,

(22)

Upon convergence, the network topology can be recovered by thresholding the estimated coefficients, declaring an edge present from one node to another if any of the corresponding coefficient blocks is nonzero.
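The block shrinkage operator invoked in (22) is the proximal operator of a group-lasso-type penalty; a minimal pure-Python sketch (function name and threshold argument are illustrative):

```python
import math

def block_shrinkage(v, lam):
    """Block (group) soft-thresholding: the proximal operator of the
    group-lasso penalty lam*||v||_2. The whole vector is set exactly to
    zero when its Euclidean norm is at most lam; otherwise it is shrunk
    toward zero by the factor (1 - lam/||v||_2)."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]

# A strong block survives (shrunk by 1 - lam/norm); a weak block is zeroed,
# which is what removes the corresponding candidate edge upon thresholding.
print(block_shrinkage([3.0, 4.0], 1.0))   # norm 5 -> scaled by 0.8
print(block_shrinkage([0.1, 0.1], 1.0))   # norm below lam -> all zeros
```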

References

  • [1] D. Angelosante and G. B. Giannakis, “Sparse graphical modeling of piecewise-stationary time series,” in Proc. Int. Conf. Acoust. Speech Signal Process., Prague, Czech Republic, May 2011, pp. 1960–1963.
  • [2] B. Baingana, G. Mateos, and G. B. Giannakis, “Dynamic structural equation models for tracking topologies of social networks,” in Proc. of Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Saint Martin, Dec. 2013, pp. 292–295.
  • [3] ——, “Proximal-gradient algorithms for tracking cascades over social networks,” IEEE J. Sel. Topics Sig. Proc., vol. 8, no. 4, pp. 563–575, Aug. 2014.
  • [4] J. A. Bazerque and G. B. Giannakis, “Nonparametric basis pursuit via sparse kernel-based learning: A unifying view with advances in blind methods,” IEEE Signal Processing Magazine, vol. 30, no. 4, pp. 112–125, Jul. 2013.
  • [5] D. P. Bertsekas, Nonlinear Programming, 2nd ed.   Athena-Scientific, 1999.
  • [6] H. Bozdogan, “Model selection and Akaike’s information criterion (AIC): The general theory and its analytical extensions,” Psychometrika, vol. 52, no. 3, pp. 345–370, Sep. 1987.
  • [7] C. Büchel and K. J. Friston, “Modulation of connectivity in visual pathways by attention: Cortical interactions evaluated with structural equation modelling and fMRI.” Cerebral Cortex, vol. 7, no. 8, pp. 768–778, Dec. 1997.
  • [8] X. Cai, J. A. Bazerque, and G. B. Giannakis, “Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations,” PLoS Comp. Biol., vol. 9, no. 5, p. e1003068, May 2013.
  • [9] G. Chen, D. R. Glen, Z. S. Saad, J. P. Hamilton, M. E. Thomason, I. H. Gotlib, and R. W. Cox, “Vector autoregression, structural equation modeling, and their synthesis in neuroimaging data analysis,” Computers in Biology and Medicine, vol. 41, no. 12, pp. 1142–1155, Dec. 2011.
  • [10] S. Chen and P. Gopalakrishnan, “Speaker, environment and channel change detection and clustering via the Bayesian information criterion,” in Proc. DARPA Broadcast News Transcription and Understanding Workshop, Virginia, USA, Aug. 1998, pp. 127–132.
  • [11] I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Comm. Pure Appl. Math., vol. 57, no. 11, pp. 1413–1457, Aug. 2004.
  • [12] G. Deshpande, X. Hu, R. Stilla, and K. Sathian, “Effective connectivity during haptic perception: A study using Granger causality analysis of functional magnetic resonance imaging data,” NeuroImage, vol. 40, no. 4, pp. 1807–1814, May 2008.
  • [13] K. J. Friston, C. D. Frith, and R. S. Frackowiak, “Time-dependent changes in effective connectivity measured with PET,” Human Brain Mapping, vol. 1, no. 1, pp. 69–79, Jan. 1993.
  • [14] K. J. Friston, “Functional and effective connectivity in neuroimaging: A synthesis,” Human Brain Map., vol. 2, no. 1-2, pp. 56–78, Jan. 1994.
  • [15] K. J. Friston, L. Harrison, and W. Penny, “Dynamic causal modelling,” NeuroImage, vol. 19, no. 4, pp. 1273–1302, Aug. 2003.
  • [16] D. R. Gitelman, W. D. Penny, J. Ashburner, and K. J. Friston, “Modeling regional and psychophysiologic interactions in fMRI: The importance of hemodynamic deconvolution,” NeuroImage, vol. 19, no. 1, pp. 200–207, May 2003.
  • [17] R. Goebel, A. Roebroeck, D.-S. Kim, and E. Formisano, “Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping,” Magnetic Resonance Imaging, vol. 21, no. 10, pp. 1251–1261, Dec. 2003.
  • [18] J. R. Harring, B. A. Weiss, and J.-C. Hsu, “A comparison of methods for estimating quadratic effects in nonlinear structural equation models.” Psychological Methods, vol. 17, no. 2, pp. 193–214, Jun. 2012.
  • [19] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning.   Springer, 2009.
  • [20] X. Jiang, S. Mahadevan, and A. Urbina, “Bayesian nonlinear structural equation modeling for hierarchical validation of dynamical systems,” Mechanical Systems and Signal Processing, vol. 24, no. 4, pp. 957–975, Apr. 2010.
  • [21] K. G. Jöreskog, F. Yang, G. Marcoulides, and R. Schumacker, “Nonlinear structural equation models: The Kenny-Judd model with interaction effects,” Advanced Structural Equation Modeling: Issues and Techniques, pp. 57–88, Jan. 1996.
  • [22] D. Kaplan, Structural Equation Modeling: Foundations and Extensions.   Sage, 2009.
  • [23] A. Kelava, B. Nagengast, and H. Brandt, “A nonlinear structural equation mixture modeling approach for nonnormally distributed latent predictor variables,” Structural Equation Modeling: A Multidisciplinary Journal, vol. 21, no. 3, pp. 468–481, Jun. 2014.
  • [24] E. D. Kolaczyk, Statistical Analysis of Network Data: Methods and Models.   Springer, 2009.
  • [25] Y. Kopsinis, K. Slavakis, and S. Theodoridis, “Online sparse system identification and signal reconstruction using projections onto weighted balls,” IEEE Trans. Sig. Proc., vol. 59, no. 3, pp. 936–952, Mar. 2011.
  • [26] M. A. Kramer, E. D. Kolaczyk, and H. E. Kirsch, “Emergent network topology at seizure onset in humans,” Epilepsy Research, vol. 79, no. 2, pp. 173–186, May 2008.
  • [27] S.-Y. Lee and X.-Y. Song, “Model comparison of nonlinear structural equation models with fixed covariates,” Psychometrika, vol. 68, no. 1, pp. 27–47, Mar. 2003.
  • [28] S.-Y. Lee, X.-Y. Song, and J. C. Lee, “Maximum likelihood estimation of nonlinear structural equation models with ignorable missing data,” J. of Educ. and Behavioral Stat., vol. 28, no. 2, pp. 111–134, Jun. 2003.
  • [29] N. Lim, F. d’Alché Buc, C. Auliac, and G. Michailidis, “Operator-valued kernel-based vector autoregressive models for network inference,” Machine Learning, vol. 99, no. 3, pp. 489–513, Jun. 2015.
  • [30] D. Marinazzo, M. Pellicoro, and S. Stramaglia, “Kernel-Granger causality and the analysis of dynamical networks,” Physical Review E, vol. 77, no. 5, p. 056215, May 2008.
  • [31] ——, “Kernel method for nonlinear Granger causality,” Physical Review Letters, vol. 100, no. 14, p. 144103, Apr. 2008.
  • [32] A. Mclntosh and F. Gonzalez-Lima, “Structural equation modeling and its application to network analysis in functional brain imaging,” Human Brain Mapping, vol. 2, no. 1-2, pp. 2–22, Oct. 1994.
  • [33] C. A. Micchelli and M. Pontil, “Learning the kernel function via regularization,” Journal of Machine Learning Research, vol. 6, no. Jul, pp. 1099–1125, Jul. 2005.
  • [34] R. Roche and S. Commins, Pioneering Studies in Cognitive Neuroscience.   McGraw-Hill Education, 2009.
  • [35] A. Roebroeck, E. Formisano, and R. Goebel, “Mapping directed influence over the brain using Granger causality and fMRI,” NeuroImage, vol. 25, no. 1, pp. 230–242, Mar. 2005.
  • [36] M. Rubinov and O. Sporns, “Complex network measures of brain connectivity: Uses and interpretations,” NeuroImage, vol. 52, no. 3, pp. 1059–1069, Sep. 2010.
  • [37] I. D. Schizas, A. Ribeiro, and G. B. Giannakis, “Consensus in ad hoc WSNs with noisy links -Part I: Distributed estimation of deterministic signals,” IEEE Trans. Sig. Proc., vol. 56, pp. 350–364, Jan. 2008.
  • [38] X. Sun, “Assessing nonlinear Granger causality from multivariate time series,” in Proc. Eur. Conf. Mach. Learn. Knowl. Disc. Databases, Antwerp, Belgium, Sep. 2008, pp. 440–455.
  • [39] M. Wall and Y. Amemiya, “Estimation for polynomial structural equation models,” J. American Stat. Assoc., vol. 95, no. 451, pp. 929–940, Oct. 2000.