1 Introduction: Brains & connectomes
The term “connectome” was coined by Hagmann (2005) and Sporns et al. (2005), and has come to mean any “brain graph”; “connectomics” means the study of such graphs; and “statistical connectomics” means the statistical analysis of such graphs.
The Human Connectome Project (http://www.humanconnectomeproject.org) “aims to provide an unparalleled compilation of neural data, an interface to graphically navigate this data and the opportunity to achieve never before realized conclusions about the living human brain.” Sporns (2012) provides a recent survey of the quest to discover the human connectome. Glasser et al. (2016) presents the newest results in the long history of efforts to update the Brodmann region atlas (Zilles and Amunts, 2010).
There are various connectomes available, including macro-scale connectomes constructed from structural, functional, and diffusion MRI data. For instance, the Open Connectome Project (http://www.openconnectomeproject.org) makes available connectomes collected via structural magnetic resonance imaging (MRI), functional MRI (fMRI), diffusion tensor imaging (DTI), and diffusion spectrum imaging (DSI). These macro-scale connectomes are used, for example, to investigate connectivity between (sub)cortical regions.
In addition to MRI modalities, there are behavioral connectomes (via optogenetics), activity-based connectomes (via calcium imaging), etc. For example, Ko et al. (2011) and Lee et al. (2016) consider activity-based connectomes, and Vogelstein et al. (2014) investigates a behavioral connectome obtained via optogenetic neuron manipulation.
At the neurons-as-vertices and synapses-as-edges scale, partial connectomes of various organisms (C. elegans, Drosophila, zebrafish, mouse, etc.) are also available. The full connectome of the roundworm C. elegans has been made available at this micro-scale via electron microscopy (White et al., 1986, Varshney et al., 2011). This data continues to be widely studied. Notably, there are two connectomes available for C. elegans – one based on chemical synapses and one based on electrical synapses; Chen et al. (2016) presents a joint graph inference case study of the C. elegans chemical and electrical connectomes.
A holy grail of connectomics is the “connectome code” – a generative model characterizing important aspects of the connectome. (Neural coding characterizes the relationship between the ongoing external environment (stimuli or behaviors) and neural activity. By way of analogy, connectome coding characterizes the relationship between past experience (including genetics) and neural connectivity. Specifically, which properties of connectomes are preserved across individuals of the same species, and which vary as a function of life history? Similarly, which connectome codes are preserved across species, and which are adapted to species’ specific evolutionary niches?) This paper reports the results of a “structure discovery” analysis of an important first-of-its-kind complete neurons-as-vertices and synapses-as-edges electron microscopy connectome. The paper is organized as follows. Section 2 provides a brief description of our data, the larval Drosophila mushroom body connectome described in detail in Eichler et al. (2017). Section 3 presents a thorough spectral clustering investigation of this connectome, demonstrating conclusively that there is one major aspect of the connectome that is insufficiently captured by this approach. Section 4 introduces and develops the principled semiparametric spectral modeling methodology that we use to generate a much more satisfying connectome code for the larval Drosophila mushroom body. Section 5 provides both neuroscientific and methodological discussion, and Section 6 presents our conclusion. Algorithmic details are provided in an appendix.
2 The larval Drosophila mushroom body connectome
HHMI Janelia recently reconstructed the complete wiring diagram of the higher order parallel fiber system for associative learning in the larval Drosophila brain, the mushroom body (MB). Memories are thought to be stored as functional and structural changes in connections between neurons, but the complete circuit architecture of a higher-order learning center involved in memory formation or storage has not been known in any organism until now. This data set provides a real and important example for initial investigation into synapse-level structural connectome modeling.
The MB connectome was obtained via serial section transmission electron microscopy of an entire larval Drosophila nervous system (Ohyama et al., 2015, Schneider-Mizell et al., 2016). This connectome contains the entirety of MB intrinsic neurons called Kenyon cells and all of their pre- and post-synaptic partners (Eichler et al., 2017).
We consider the right hemisphere MB. The connectome consists of four distinct types of neurons – Kenyon Cells (KC), Input Neurons (MBIN), Output Neurons (MBON), Projection Neurons (PN) – with directed connectivity illustrated in Figure 1. (There are 13 isolated neurons, all KC; removing these isolates makes the directed graph one weakly, but not strongly, connected component with n = 213 vertices and 7536 directed edges.) Figure 2 displays the observed MB connectome as an adjacency matrix. Note that, in accordance with Figure 1, Figure 2 shows data (edges) in only eight of the 16 blocks.
3 Spectral clustering
Due to its undeniable four-neuron-type connectivity structure, we might think of the MB connectome, to first order, as an observation from a K-block directed stochastic block model (SBM) on n vertices. (The SBM was introduced in Holland et al. (1983); the directed version in Wang and Wong (1987).) This model is parameterized by (i) a block membership probability vector ρ = [ρ_1, …, ρ_K] such that ρ_k > 0 for all k and Σ_k ρ_k = 1, and (ii) a block connectivity probability matrix B ∈ [0,1]^{K×K} with entries B_{kl} governing the probability of directed edges from vertices in block k to vertices in block l. For this model of the MB connectome we have K = 4.
Theory and methods suggest Gaussian mixture modeling (see, for example, Fraley and Raftery (2002)) composed with adjacency spectral embedding (see, for example, Sussman et al. (2012)), denoted GMM ∘ ASE, for analysis of the (directed) SBM. (For the directed SBM: the ASE CLT (Athreya et al., 2016) suggests (mutatis mutandis) that concatenation of the top d left/right singular vectors from a directed K-block SBM adjacency matrix behaves approximately as a random sample from a mixture of K Gaussians. Tang and Priebe (2016) demonstrates that the choice between adjacency spectral embedding and Laplacian spectral embedding is an empirical modeling issue, as neither dominates the other for subsequent inference, and that K-means is inferior to GMM for spectral clustering.)
Adjacency spectral embedding (ASE) of a directed graph on n vertices (e.g., the MB connectome with n = 213 neurons, depicted as a directed adjacency matrix in Figure 2) employs the singular value decomposition (SVD) to represent the adjacency matrix A via A = UΣV^T and chooses the top d singular values and their associated left- and right-singular vectors to embed the graph as n points in R^{2d} via the concatenation X̂ = [U_d Σ_d^{1/2} | V_d Σ_d^{1/2}]. (The scaled left-singular vectors U_d Σ_d^{1/2} are interpreted as the “out-vector” representation of the digraph, modeling vertices’ propensity to originate directed edges; similarly, the scaled right-singular vectors V_d Σ_d^{1/2} are interpreted as the “in-vectors”.) Gaussian mixture modeling (GMM) then fits a K-component 2d-dimensional Gaussian mixture model to the points given by the rows of X̂. If the graph is an SBM (or indeed more generally) then GMM ∘ ASE provides consistent subsequent inference (Sussman et al., 2012, Fishkind et al., 2013, Tang et al., 2013, Sussman et al., 2014, Lyzinski et al., 2014, 2017, Athreya et al., 2016, Tang and Priebe, 2016).
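The directed ASE just described can be sketched in a few lines of NumPy. This is an illustrative implementation under our own naming (`ase_directed`, and the toy 4-vertex digraph), not the authors' code:

```python
import numpy as np

def ase_directed(A, d):
    """Adjacency spectral embedding of a directed graph:
    return [U_d S_d^{1/2} | V_d S_d^{1/2}], the scaled left-singular
    ("out") vectors concatenated with the scaled right-singular ("in") vectors."""
    U, s, Vt = np.linalg.svd(A)
    scale = np.sqrt(s[:d])
    X_out = U[:, :d] * scale    # propensity to originate directed edges
    X_in = Vt[:d, :].T * scale  # propensity to receive directed edges
    return np.hstack([X_out, X_in])

# toy directed graph on 4 vertices
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [1, 0, 0, 1],
              [1, 1, 0, 0]], dtype=float)
X = ase_directed(A, d=2)
print(X.shape)  # (4, 4): a 2-dim out-vector and 2-dim in-vector per vertex
```

With d equal to the full rank, the outer product of the out- and in-blocks recovers A exactly, mirroring A = UΣVᵀ.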
GMM ∘ ASE applied to the MB connectome yields the clustered embedding depicted via the pairs plot presented in Figure 3, with the associated cluster confusion matrix with respect to true neuron types presented in Table 1. The clusters are clearly coherent with the four true neuron types. (For ease of illustration, Figure 4 presents just the out1 vs. out2 subspace.)
There are two model selection problems inherent in spectral clustering in general, and in obtaining our clustered embedding (Figure 3) in particular: choice of embedding dimension d, and choice of mixture complexity K.
A ubiquitous and principled method for choosing the number of dimensions in eigendecompositions and SVDs (e.g., principal components analysis, factor analysis, spectral embedding, etc.) is to examine the so-called scree plot (the SVD scree plot for our MB connectome is presented in Figure 5) and look for “elbows” or “knees” defining the cut-off between the top (signal) dimensions and the noise dimensions. Identifying a “best” method is, in general, impossible, as the bias-variance tradeoff demonstrates that, for small n, subsequent inference may be optimized by choosing a dimension smaller than the true signal dimension; see Section 3 of Jain et al. (2000) for a clear and concise illustration of this phenomenon. There are a plethora of variations for automating this singular value thresholding (SVT); Section 2.8 of Jackson (2004) provides a comprehensive discussion in the context of principal components, and Chatterjee (2015) provides a theoretically-justified (but perhaps practically suspect, for small n) universal SVT. Using the profile likelihood SVT method of Zhu and Ghodsi (2006) yields a cut-off at three singular values, as depicted in Figure 5. Recall that, as this is a directed graph, we have both left- & right-singular vectors for each vertex; thus the SVT choice of three singular values results in an embedding dimension of six.
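The Zhu–Ghodsi criterion itself is simple enough to sketch: for each candidate cut-off q, model the top-q and remaining singular values as two Gaussian groups sharing a pooled variance, and maximize the profile log-likelihood. This is our paraphrase of the method; the function name and toy scree are ours:

```python
import numpy as np
from scipy.stats import norm

def zhu_ghodsi(svals):
    """Profile-likelihood elbow selection (Zhu & Ghodsi, 2006)."""
    s = np.sort(np.asarray(svals, dtype=float))[::-1]
    p = len(s)
    lls = []
    for q in range(1, p):
        s1, s2 = s[:q], s[q:]
        mu1, mu2 = s1.mean(), s2.mean()
        # pooled (common) variance across the two groups
        var = (((s1 - mu1) ** 2).sum() + ((s2 - mu2) ** 2).sum()) / p
        sd = max(np.sqrt(var), 1e-12)
        lls.append(norm.logpdf(s1, mu1, sd).sum() + norm.logpdf(s2, mu2, sd).sum())
    return int(np.argmax(lls)) + 1

# three dominant singular values, then noise: the elbow is at 3
svals = [10.0, 9.0, 8.5, 1.2, 1.1, 1.0, 0.9, 0.8]
print(zhu_ghodsi(svals))  # → 3
```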
Similarly, a ubiquitous and principled method for choosing the number of clusters in, for example, Gaussian mixture models, is to maximize a fitness criterion penalized by model complexity. Common approaches include the Akaike Information Criterion (AIC) (Akaike, 1974), the Bayesian Information Criterion (BIC) (Schwarz, 1978), Minimum Description Length (MDL) (Rissanen, 1978), etc. Again, identifying a “best” method is, in general, impossible, as the bias-variance tradeoff demonstrates that, for small n, inference performance may be optimized by choosing a number of clusters smaller than the true cluster complexity; see for example Bickel and Doksum (2007) Problem 6.6.8. MCLUST’s BIC (Fraley and Raftery, 2002) applied to our MB connectome embedded via ASE into six dimensions is maximized at six clusters, as depicted in Figure 6, and hence we take the number of clusters to be six. (MCLUST’s most general covariance model – “VVV” = ellipsoidal with varying volume, shape, and orientation – is the winner.)
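By way of illustration, the same BIC-based selection of mixture complexity can be sketched with scikit-learn. Note that scikit-learn defines BIC as a quantity to minimize, whereas MCLUST reports the sign-flipped version to be maximized; the synthetic data below stand in for an embedding:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# synthetic stand-in for an embedding: three well-separated clusters in R^2
X = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2))
               for c in ([0, 0], [3, 0], [0, 3])])

# fit GMMs of increasing complexity and record each BIC
bics = {k: GaussianMixture(n_components=k, covariance_type="full",
                           random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
k_hat = min(bics, key=bics.get)   # scikit-learn's BIC is minimized
print(k_hat)  # 3 for such well-separated clusters
```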
The clusters reported in Table 1 are essentially correct, with just a few misclustered neurons – e.g., cluster #3 contains only MBINs (and most of the MBINs), cluster #5 contains mostly PNs (and most of the PNs), and cluster #6 contains only MBONs (and most of the MBONs) – and, notably, the KCs are distributed across multiple clusters.
While BIC chooses six clusters, it is natural to ask whether the distribution of KC across multiple clusters is an artifact of insufficiently parsimonious model selection. However, choosing four or five clusters not only (substantially) decreases BIC, but in fact leaves KC distributed across multiple clusters while splitting and/or merging other neuron types. In the direction of less parsimony, Figure 6 suggests that any choice from 7 to 11 clusters is competitive, in terms of BIC, with the maximizer. Moreover, any of these choices only slightly decreases BIC, while leaving PN, MBIN, and MBON clustered (mostly) singularly and (mostly) purely and distributing KC across more clusters. Tables 2, 3, and 4 show cluster confusion matrices for other choices of the number of clusters.
We perform a cluster assessment to investigate the (unsupervised) selection of the mixture complexity via BIC and the embedding dimension via SVT in terms of the true neuron types. There are numerous cluster assessment criteria available in the literature; we consider the Adjusted Rand Index (ARI) (Hubert and Arabie, 1985), Normalized Mutual Information (NMI) (Danon et al., 2005), Variation of Information (VI) (Meilă, 2007), and the Jaccard index (Jaccard, 1912). (For all but VI, bigger is better; ergo, we report 1/VI for convenience.) In Table 5 we fix the embedding dimension at six and show that BIC’s choice of six clusters coincides with the best choice of mixture complexity. In Table 6 we find the best clustering in various embedding dimensions and show that SVT’s choice coincides with a fine choice of embedding dimension – choosing 4 dimensions seems approximately as good for the subsequent clustering task, while choosing 2 or 8 dimensions yields degraded performance.
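These assessment criteria are all simple functions of the agreement between two partitions. A hedged sketch on a toy labeling (VI computed from entropies and mutual information, since scikit-learn does not provide it directly; the toy partitions are ours):

```python
import numpy as np
from sklearn.metrics import (adjusted_rand_score, mutual_info_score,
                             normalized_mutual_info_score)

truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]
clust = [0, 0, 1, 1, 1, 1, 2, 2, 2]   # one point misclustered

ari = adjusted_rand_score(truth, clust)
nmi = normalized_mutual_info_score(truth, clust)

def variation_of_information(a, b):
    """VI(a,b) = H(a) + H(b) - 2 I(a;b), in nats; 0 iff the partitions agree."""
    return (mutual_info_score(a, a) + mutual_info_score(b, b)
            - 2 * mutual_info_score(a, b))   # H(a) = I(a;a)

vi = variation_of_information(truth, clust)
print(round(ari, 2), round(nmi, 2), round(vi, 2))
```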
The conclusion of this section is that our spectral clustering of the MB connectome via GMM ∘ ASE, with principled model selection for choosing embedding dimension and mixture complexity, yields meaningful results: a single Gaussian cluster for each of MBIN, MBON, and PN, and multiple clusters for KC. That is, we have one substantial revision to Figure 1’s illustration of the larval Drosophila mushroom body connectome as a directed graph on four neuron types: significant substructure associated with the KC neurons. It is a more satisfactory model of this KC substructure that we pursue in the next section.
4 Semiparametric spectral modeling
The spectral clustering results of the previous section – a single Gaussian cluster for each of MBIN, MBON, and PN, and from at least 3 to as many as 8 Gaussian clusters for KC – hint at the possibility of a continuous, rather than discrete, structure for the KC.
Eichler et al. (2017) describes so-called “claws” associated with each KC neuron, and posits that KCs with only 1 claw are the oldest, followed in decreasing age by multi-claw KCs (from 2 to 6 claws), with finally the youngest KCs being those with 0 claws. Figure 7 and Table 7 use this additional neuronal information to show that the multiple clusters for the KC neurons are capturing neuron age – and in a seemingly coherent geometry.
As the clusters for the KC neurons are capturing neuron age – a continuous vertex attribute – in a seemingly coherent geometry, this section introduces the “latent structure model” (LSM) generalization of the SBM together with the principled semiparametric spectral modeling methodology associated thereto. Specifically, we fit a continuous curve to (the KC subset of) the data in latent space and show that traversal of this curve corresponds monotonically to neuron age.
We digress for a moment, to motivate our approach to the task at hand and to develop, under the impetus of connectome modeling, a new direction for the theory & methods of statistical modeling for random graphs. (“The wealth of your practical experience with sane and interesting problems will give to mathematics a new direction and a new impetus.” – Leopold Kronecker to Hermann von Helmholtz (1888).)
[Table 7 excerpt – KC cluster counts by claw count:
claw: 1 (oldest) | 15 | 4 | — | 0 | 0 | —
claw: 0 (youngest) | 0 | 10 | — | 16 | 2 | —]
An alternative formulation for the directed SBM is as a latent position model (LPM) à la Hoff et al. (2002). Here one considers latent positions X_1, …, X_n for the n vertices, and the graph is generated from the X_i via a link function or kernel (e.g., distance ||X_i − X_j|| or inner product ⟨X_i, X_j⟩). In particular, ASE – an SVD of the adjacency matrix – begs for an inner product kernel (that is, a random dot product graph, or RDPG; see e.g. Sussman et al. (2012)).
Definition 1 (Directed Random Dot Product Graph (RDPG)).
Let d be a positive integer, and let F be a distribution on a set X ⊂ R^d × R^d such that ⟨x^out, y^in⟩ ∈ [0,1] for all x = (x^out, x^in) and y = (y^out, y^in) in X. We say that (A, X) ∼ RDPG(F) is an instance of a directed random dot product graph (RDPG) if X = [X_1 | ⋯ | X_n]^T with X_i = (X_i^out, X_i^in) ∼ iid F, and A is a hollow adjacency matrix satisfying P[A | X] = Π_{i≠j} ⟨X_i^out, X_j^in⟩^{A_ij} (1 − ⟨X_i^out, X_j^in⟩)^{1 − A_ij}.
The SBM as a latent position model says the latent position distribution F is a mixture of K point masses, with the block membership probability vector ρ given by the weights on these point masses and, for the RDPG, the block connectivity probability matrix B generated by their inner products.
Definition 2 (Directed Stochastic Block Model (SBM)).
Let F = Σ_{k=1}^{K} ρ_k δ_{ξ_k}, with ρ_k > 0 and Σ_k ρ_k = 1. We say that an n-vertex graph (A, X) ∼ RDPG(F) is a directed stochastic block model (SBM) with K blocks if the distribution F is a mixture of K point masses, with block membership probability vector ρ in the unit (K−1)-simplex and distinct latent positions given by ξ_1, …, ξ_K ∈ R^d × R^d. The first d elements of each latent position are the out-vectors, denoted ξ_k^out, and the remaining d elements are the in-vectors ξ_k^in. We write G ∼ SBM(n, ρ, B) and we refer to B ∈ [0,1]^{K×K}, with entries B_{kl} = ⟨ξ_k^out, ξ_l^in⟩, as the block connectivity probability matrix for the model.
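Definition 2 translates directly into a sampler. The sketch below draws block labels from ρ and then Bernoulli edges from B, sampling from B directly rather than constructing latent positions with the required inner products; the function name and parameter values are ours:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_directed_sbm(n, rho, B):
    """Draw block labels z ~ rho, then directed edges A_ij ~ Bernoulli(B[z_i, z_j])."""
    z = rng.choice(len(rho), size=n, p=rho)
    P = B[np.ix_(z, z)]
    A = (rng.random((n, n)) < P).astype(int)
    np.fill_diagonal(A, 0)   # hollow: no self-loops
    return A, z

rho = np.array([0.5, 0.5])
B = np.array([[0.6, 0.1],    # asymmetric B: direction matters
              [0.3, 0.6]])
A, z = sample_directed_sbm(500, rho, B)
i0, i1 = np.where(z == 0)[0], np.where(z == 1)[0]
# empirical 0 -> 1 edge rate should be near B[0, 1] = 0.1
print(round(A[np.ix_(i0, i1)].mean(), 2))
```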
The CLT for ASE of an SBM (Athreya et al., 2016) says that the embedded points X̂_i behave approximately as a random sample from a Gaussian mixture model (GMM), with the means and mixing coefficients of the mixture components providing consistent estimates of the ξ_k and ρ. (An analogous CLT for Laplacian spectral embedding is available in Tang and Priebe (2016).) In general, for an RDPG with any distribution F on X such that inner products are in [0,1], the CLT yields approximate normality of the embedded points about the true latent positions.
The above theoretical motivation is illustrated by considering the observed block connectivity probability matrix B̂ for our MB connectome data, computed under the assumption that it really is a 4-block directed SBM on the neuron types.
The SVD of B̂ provides four 4-dimensional in-vectors and four 4-dimensional out-vectors – the true latent positions for an SBM with block connectivity probability matrix B̂. Together with the observed block membership probability vector ρ̂, this yields a synthetic mushroom body directed SBM, dubbed “synthMB”, with latent positions X_i ∼ iid F where F is the weighted mixture of four point masses in R^4 × R^4. These four point masses are depicted in Figure 8 via brown concentric circles. ASE provides estimated latent positions for a large (10000-vertex) graph generated from synthMB; these are depicted in gold in Figure 8 and agree with the truth as large-sample approximation theory demands. ASE provides estimated latent positions for a small graph generated from synthMB with precisely n = 213 vertices in precisely their observed neuron type proportions; these are depicted in grey diamonds in Figure 8 and demonstrate that the large-sample approximation provides some practical guidance but has not entirely kicked in at this n. The colored circles in Figure 8 depict the actual MB connectome embedding of the four neuron types; one can see that the real embedded MBIN, MBON, and PN are behaving approximately as expected with respect to synthMB, but clearly KC requires latent structure more complex than just synthMB’s point mass.
All models are wrong. We have demonstrated that the MB connectome is not a 4-block SBM, and this model’s usefulness has been in allowing us to identify a first-order sense in which the real data deviate from the model. The results of Section 3 – a single cluster for each of MBIN, MBON, and PN, and multiple clusters indicating a geometrically coherent structure for KC – compel us to use, as our new model, a generalization of the SBM allowing KC a curve, rather than just a point, in latent space.
We now endeavor to model the MB connectome as a 4-component latent structure model (LSM), where LSM denotes the “generalized SBM” in which each “block” may be generalized from a point mass latent position distribution to a latent position distribution with support on some curve (with the “block” curves disconnected, as (of course) are the SBM’s point masses). So the LSM does have structure, just not quite so simple as the SBM’s; and the LSM will exhibit clustering, just not quite so simple as the SBM’s. Thanks to the foregoing RDPG discussion, we see that the LSM is easily formulated by considering latent position distributions more general than the SBM’s finite mixture of point masses, but not arbitrary as for the fully general RDPG.
Definition 3 (Directed Latent Structure Model (LSM)).
Let d be a positive integer, and let F be a distribution on a set X ⊂ R^d × R^d such that ⟨x^out, y^in⟩ ∈ [0,1] for all x, y ∈ X. We say that an n-vertex graph (A, X) ∼ RDPG(F) is a directed latent structure model (LSM) with K “structure components” if the support of distribution F is a mixture of K (disjoint) curves, F = Σ_{k=1}^{K} ρ_k F_k, with block membership probability vector ρ in the unit (K−1)-simplex and each F_k supported on a curve C_k, the C_k disjoint. We write G ∼ LSM(n, ρ, {F_k}).
NB: The degree-corrected SBM (Karrer and Newman, 2011) is a special case of the LSM where each curve C_k is a ray.
NB: The “hierarchical stochastic block model” (HSBM), introduced and exploited in Lyzinski et al. (2017), is a similarly “generalized SBM” where each “block” may be generalized from point mass latent position distribution to structured latent position distribution with support given by a hierarchical mixture of points.
So now we investigate our MB connectome as an LSM with latent positions X_i ∼ iid F, where F is no longer a mixture of four point masses with one point mass per neuron type, but instead is three point masses and a continuous curve for the KCs.
The approximate normality provided by the CLT for ASE of an RDPG compels us to consider estimating F via a semiparametric Gaussian mixture model for the X̂_i’s. Let Q be a probability measure on a parameter space Θ ⊂ R^d × S_d, where S_d is the space of d-dimensional covariance matrices, and let {φ(·; θ) : θ ∈ Θ} be a family of normal densities. Then the function given by f_Q(x) = ∫_Θ φ(x; θ) dQ(θ) is a semiparametric GMM. Q is referred to as the mixing distribution of the mixture, drawn from the class of all probability measures on Θ. If Q consists of a finite number of atoms, then f_Q is a finite normal mixture model with means, variances and proportions determined by the locations and weights of the point masses. Lindsay (1983) provides theory for maximum likelihood estimation (MLE) in the semiparametric GMM.
Thus (ignoring covariances for presentation simplicity, so that θ = μ is the component mean vector) we see that the ASE RDPG CLT suggests estimating the probability density function of the embedded MB connectome, under the LSM assumption, as the semiparametric GMM f_Q̂ with mixing distribution Q̂ supported by three points and a continuous curve. Note that in the general case, θ = (μ, Σ) includes both means and covariance matrices. However, we emphasize that it is F (or Q) that is the “connectome code” – the key to the generative model; the covariances (in Q but not in F) that we are “ignoring for presentation simplicity” are in fact nuisance parameters from the perspective of the connectome code. The ASE RDPG CLT provides a large-sample approximation for Q, and provides a mean-covariance constraint so that if we knew the latent position distribution F we would have no extra degrees of freedom (though perhaps a more challenging MLE optimization problem). As it is, we do our fitting in the general case, with simplifying constraints on the covariance structure associated with the mixture components.
The MLE (continuing to ignore covariances) is given by Q̂ = ρ̂_1 δ_{μ̂_1} + ρ̂_2 δ_{μ̂_2} + ρ̂_3 δ_{μ̂_3} + ρ̂_4 Q̂_4, where μ̂_1, μ̂_2, μ̂_3 are given by the means of the Gaussian mixture components for MBIN, MBON, and PN, and the support of Q̂_4 is a one-dimensional curve ĉ. Figure 9 displays the MLE results from an EM optimization for the curve ĉ constrained to be quadratic, as detailed in the Appendix. (Model testing for the curve does yield quadratic: testing the null hypothesis of linear against the alternative of quadratic yields clear rejection, while there is insufficient evidence to favor cubic over quadratic.)
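The Appendix details the constrained EM; as a simpler stand-in (not the authors' algorithm), the following least-squares sketch fits a quadratic curve to a point cloud by alternating projection and polynomial refit, ignoring covariances. All names and the synthetic data are ours:

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_quadratic_curve(X, n_iter=20):
    """Alternate (i) projecting each point onto the current curve
    c(t) = a t^2 + b t + c (fit componentwise) and (ii) refitting
    the coefficients by least squares; returns (coef, t)."""
    Xc = X - X.mean(0)
    t = Xc @ np.linalg.svd(Xc, full_matrices=False)[2][0]  # init: PC1 scores
    grid = np.linspace(t.min(), t.max(), 400)
    for _ in range(n_iter):
        T = np.vander(t, 3)                    # columns [t^2, t, 1]
        coef, *_ = np.linalg.lstsq(T, X, rcond=None)
        C = np.vander(grid, 3) @ coef          # curve evaluated on the grid
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        t = grid[d2.argmin(1)]                 # nearest grid point on the curve
    return coef, t

# noisy points along a planar quadratic
t_true = rng.uniform(-1, 1, 200)
X = np.column_stack([t_true, t_true ** 2]) + rng.normal(0, 0.02, (200, 2))
coef, t_hat = fit_quadratic_curve(X)
# t_hat should order the points consistently with t_true (up to sign)
```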
That is, (continuing to ignore covariances) our structure discovery via GMM ∘ ASE yields a latent position estimate for the MB connectome – a connectome code for the larval Drosophila mushroom body – as a semiparametric Gaussian mixture of three point masses and a continuous parameterized curve ĉ(t); the three Gaussians correspond to three of the four neuron types, and the curve corresponds to the fourth neuron type (KC), with the parameterization capturing neuron age. See Figure 10.
Eichler et al. (2017) suggests distance-to-neuropile – the distance d_i to the MB neuropile from the bundle entry point of each KC neuron i – as a proxy for neuron age, and analyzes this distance in terms of the number of claws for each neuron. See Figure 11. We now demonstrate that the correlation of this distance with the KC neurons’ projection onto the parameterized curve ĉ(t) is highly significant – this semiparametric spectral model captures neuroscientifically important structure in the connectome. To wit, we project each KC neuron’s embedding onto our parameterized ĉ(t) and study the relationship between the projection’s position on the curve, t_i, and the neuron’s age through the distance proxy d_i. See Figures 12 and 13. We find significant correlation of t_i with d_i – Spearman’s ρ, Kendall’s τ, and Pearson’s r are all statistically significant – demonstrating that our semiparametric spectral modeling captures biologically relevant neuronal properties.
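The correlation analysis is straightforward with SciPy. The sketch below uses hypothetical stand-ins for the curve positions t_i and the distance proxy d_i (the real quantities come from the KC embedding):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical stand-ins: curve positions t_i and a noisy age proxy d_i
# that increases with t
t = rng.uniform(0, 1, 100)
d = 2 * t + rng.normal(0, 0.2, 100)

rho_s, p_s = stats.spearmanr(t, d)   # rank correlation
tau, p_k = stats.kendalltau(t, d)    # concordance-based correlation
r, p_p = stats.pearsonr(t, d)        # linear correlation
print(p_s < 0.01 and p_k < 0.01 and p_p < 0.01)  # True for a strong monotone trend
```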
5 Discussion
We briefly discuss a few of the many issues raised by this investigation: first a few points of neuroscientific relevance, and then some technical points regarding our methodology.
5.1 Neuroscientific discussion points
5.1.1 Directed! Weighted?
Inspection of the six-dimensional embedding of our MB connectome (Figure 3) suggests that neither the in-vectors nor the out-vectors alone suffice. Figure 14 left panel demonstrates this quantitatively.
The connectome is a multigraph – there are multiple edges (synapses) between neurons. We have analyzed the unweighted version. ASE is applicable to weighted graphs, and the analogous analysis with the weighted MB connectome yields inferior results – see Figure 14, right panel. NB: It does appear that one might do better using some transformation of the weights; this is an area of current investigation.
5.1.2 Synthetic validation
Figure 15 presents synthetic KC sampling (grey diamonds) vs. true KC embedding (red circles). The left panel shows the synthMB sampling from Section 4 – just the KCs from Figure 8; sampling from this SBM, which models KC with a single latent space Gaussian, demonstrates the need for a more elaborate KC model. The center panel shows the synthetic sampling using the best estimate obtained via GMM ∘ ASE in Section 3; sampling from this SBM, which models KC with three latent space Gaussians, demonstrates superiority over synthMB but still the need for a more elaborate KC model. The right panel shows the synthetic sampling using the best estimate obtained via the methodology developed in Section 4; here we see that sampling from the structured LSM, which models KC with a semiparametric GMM in latent space, reproduces the KC structure remarkably well.
5.1.3 Outlier detection and characterization
Our semiparametric connectome code greatly facilitates the search for and characterization of outliers. For example, there is one outlier readily apparent in Figure 13 – a KC neuron at the bottom left with small t_i, small d_i, and four claws. (This is the larger green dot in Figure 7, perhaps but not quantitatively an outlier without our parameterized semiparametric curve ĉ(t); nothing stands out too dramatically in the four-claw boxplot in Figure 11.) Neuroscientifically, post facto investigation shows that this neuron is clearly an outlier in the group of mature KCs, unusual in that it has many synapses in the calyx but is not yet fully grown out in the lobes, explaining why this neuron might group with the 0-claw KCs having small t_i in Figure 13.
This result serves as both an example of the utility of our theory and methods for subsequent neuroscientific investigations and an empirical validation of our claim that the LSM and the associated estimation methodology capture biologically relevant neuronal properties in the MB connectome.
5.1.4 Hemispheric validation: right vs. left
We have considered the right hemisphere MB. In the absence of replicates – that is, MB connectomes for other larval Drosophila animals, which data are not yet available – we compare and contrast our estimate obtained on the right connectome with data from the left connectome. (Indeed, we developed the theory & methods and the estimate for the right hemisphere MB without ever looking at the left hemisphere MB data, for just this validation purpose.) Figure 16 shows that the right hemisphere MB estimate (right panel – a repeat of Figure 10) not only captures the structure in the right connectome, but also provides compelling evidence (left panel) that the same right connectome estimate captures the structure in the left connectome data as well. (NB: The analogous result holds when we estimate using the left connectome – the left estimate captures the right structure.)
5.2 Methodological discussion points
It is established (Tang and Priebe, 2016) that there is no uniformly best choice between adjacency spectral embedding (ASE) and Laplacian spectral embedding (LSE) for spectral clustering. The extension of ASE to directed graphs is straightforward (as we have seen), while LSE for directed graphs remains an open area of investigation (see, e.g., Section 4.3.2 in the recent survey by Malliaros and Vazirgiannis (2013)).
It is established (Tang and Priebe, 2016) that K-means is inferior to GMM for spectral clustering. That is, however, a limit theorem; it says little about our real MB connectome with n = 213 vertices. Therefore, we did consider replacing GMM with K-means in the Section 3 investigation, everything else remaining the same; the results were that we did still get six clusters, but the clustering solution was evidently much less consistent with the true neuron types, and ARI was significantly degraded (0.42 with K-means vs. 0.63 with GMM).
In practice, for small n, it is empirically useful to augment the diagonal of the adjacency matrix prior to ASE. As a general rule, we use this augmentation. For thoroughness, we did consider GMM ∘ ASE without diagonal augmentation; the results were that we did still get six clusters, but the clustering solution was evidently much less consistent with the true neuron types, and ARI was significantly degraded (0.36 without diagonal augmentation vs. 0.63 with).
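One common form of diagonal augmentation in the spectral embedding literature replaces the zero diagonal with degree/(n−1); we present it only as an illustrative choice, since the specific default used here is not restated in the text:

```python
import numpy as np

def augment_diagonal(A):
    """Replace the zero diagonal with degree/(n-1) before the SVD —
    one common augmentation choice, shown for illustration only."""
    A = np.asarray(A, dtype=float).copy()
    n = A.shape[0]
    np.fill_diagonal(A, A.sum(1) / (n - 1))   # out-degree for a digraph
    return A

A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(np.diag(augment_diagonal(A)))  # every out-degree is 1, so [0.5 0.5 0.5]
```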
6 Conclusion
In a recent PNAS opinion piece (Geman and Geman, 2016), Donald & Stuart Geman describe the ‘usual explanation’ for a perceived lack of ‘fundamental innovation’ in areas such as brain science: that such systems are somehow ‘inherently too complex,’ ‘unsimplifiable,’ ‘not amenable to abstraction.’ The quest for a connectome code that reveals the principles and mechanisms of the connectivity of neural circuits seems to be in keeping with their position that this ‘usual explanation’ may be shortsighted.
Motivated by the results of a spectral clustering investigation of the recently-reconstructed synapse-level larval Drosophila mushroom body structural connectome, which demonstrate conclusively that modeling the Kenyon Cells (KC) demands additional latent space structure, we have developed semiparametric spectral modeling. Exploratory data analysis suggests that the MB connectome can be productively approximated by a 4-component latent structure model (LSM), and the resulting MB connectome code derived via GMM ∘ ASE captures biologically relevant neuronal properties.
Of course, the true connectome code is more elaborate, and cannot be completely encompassed by any simple latent position model – such a model precludes the propensity for transitivity, e.g. – but our semiparametric spectral modeling provides another step along the path. In terms of a (partial) ladder of biological scales – e.g., C. elegans, Drosophila, zebrafish, mouse, primate, and human – this work moves us off the first rung for analysis of a complete neurons-as-vertices and synapses-as-edges connectome.
Next steps include extending this investigation to (currently unavailable) data for (a) new complete synapse-level larval Drosophila MB structural connectomes from different animals, (b) new complete supersets of the synapse-level larval Drosophila MB structural connectome, and (c) new complete synapse-level structural connectomes from different species, such as the adult Drosophila. Furthermore, multi-modal connectome analyses, combining complete synapse-level structural connectomes with other modalities such as behavioral connectomes obtained via optogenetics and activity-based connectomes obtained via calcium imaging, promise additional advances in connectome coding.
Appendix: Constrained Maximum Likelihood Estimation of the Semiparametric Gaussian Mixture Model for Adjacency Spectral Embedding of a Latent Structure Model
The CLT for the ASE of an RDPG (Athreya et al., 2016) gives that the embedded points $\hat{X}_i$ are approximately normal about the true (unobserved) latent positions $X_i$. Of course, this is a large-sample (asymptotic) result; furthermore, the theory does not provide mutual independence of all embedded points, but rather of only a fixed finite subset of the $\hat{X}_i$, as $n \to \infty$. Nevertheless, we proceed assuming independence, with what we call "MLE in the embedding space".
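Schematically (our paraphrase, not the precise statement of the theorem), the CLT says that for each fixed index $i$ there exist orthogonal matrices $W_n$ such that

```latex
\sqrt{n}\,\bigl(W_n \hat{X}_i - X_i\bigr) \;\xrightarrow{d}\; \mathcal{N}\bigl(0,\, \Sigma(X_i)\bigr),
```

where the covariance $\Sigma(\cdot)$ depends on the latent position; see Athreya et al. (2016) for the exact conditions and the form of $\Sigma(\cdot)$. It is this approximate normality that motivates the Gaussian mixture modeling below.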
We model the MB connectome embedding as a semiparametric GMM
$$f(x) = \int_{\Theta} \varphi(x;\theta)\, dF(\theta),$$
where $\{\varphi(\cdot\,;\theta) : \theta \in \Theta\}$ is the family of $d$-dimensional Gaussian densities, with $\theta = (\mu,\Sigma)$, $\mu \in \mathbb{R}^d$, and $\Sigma$ positive semidefinite; $F$ represents a probability measure on the parameter space $\Theta$.
Our exploratory data analysis presented above suggests we consider $F$ to be a four component mixture model, with the first three components being point masses and a final component (for the KC neurons) having continuous support on a one-dimensional curve $C$:
$$F = \pi_1 \delta_{\theta_1} + \pi_2 \delta_{\theta_2} + \pi_3 \delta_{\theta_3} + \pi_4 F_4,$$
where $\sum_{k} \pi_k = 1$ and the support of $F_4$ is a curve $C$ in $\Theta$. (Note that the ASE CLT provides a mean-variance constraint which we do not attempt to take advantage of here.)
For the remainder of this appendix we focus on the final non-trivial component $F_4$, and consider the likelihood
$$L(F) = \prod_{i} \int_{\Theta} \varphi(\hat{X}_i;\theta)\, dF(\theta)$$
and the semiparametric MLE
$$\hat{F} = \arg\max_{F \in \mathcal{F}} L(F)$$
over a space $\mathcal{F}$ of constrained probability measures described below.
This optimization for $\hat{F}$ has been widely studied. Kiefer and Wolfowitz (1956) first established the consistency of the semiparametric MLE under suitable conditions. However, many issues remained unresolved until Laird (1978) and Lindsay (1983). Lindsay (1983) established that a maximizer does indeed exist and provided conditions under which its uniqueness can be established; the maximizer is further revealed to be a discrete distribution with finitely many mass points. Moreover, Lindsay (1983) showed that the number of mass points (which is random, depending upon the sample $\hat{X}_1,\dots,\hat{X}_n$) is bounded above by the number of distinct observations in the sample; that is, $\hat{F}$ has at most $n$ atoms. These results indicate that simply applying the EM algorithm will serve the purpose of estimating $\hat{F}$, because the maximizer is essentially a finite GMM. (Due to the limitations of the EM algorithm, the computational speed of such an estimation can be a challenge. Alternative computational approaches include the Intra Simplex Direction Method (ISDM) of Lesperance and Kalbfleisch (1992), the Generalized ISDM of Susko et al. (1998), and the Constrained Newton Method (CNM) of Wang (2010). For a comprehensive review of semiparametric MLE for mixture models, see Lindsay and Lesperance (1995).)
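Since the maximizer is a discrete measure, a fixed grid of candidate atoms plus the standard EM weight update already sketches the estimator. The following is a minimal illustrative sketch, not the procedure used in this paper: the function name `npmle_weights_em` and the known-variance, fixed-grid, one-dimensional setup are our assumptions for illustration.

```python
import numpy as np

def npmle_weights_em(x, atoms, sigma=1.0, n_iter=500):
    """EM updates for the mixing weights of a nonparametric MLE of a
    mixing distribution, approximated on a fixed grid of candidate
    atoms (component means) with known component sd `sigma`.
    Illustrates Lindsay's discreteness result: most grid weights
    shrink toward zero, leaving a near-discrete fitted measure."""
    # n x m matrix of component densities phi(x_i; atom_j, sigma^2)
    dens = np.exp(-0.5 * ((x[:, None] - atoms[None, :]) / sigma) ** 2) \
           / (np.sqrt(2.0 * np.pi) * sigma)
    w = np.full(atoms.size, 1.0 / atoms.size)  # uniform starting weights
    for _ in range(n_iter):
        post = dens * w                        # unnormalized responsibilities
        post /= post.sum(1, keepdims=True)
        w = post.mean(0)                       # EM update for the weights
    return w
```

Fixing the atoms and updating only the weights is the simplest instance of the finite-GMM observation above; ISDM and CNM instead refine the support adaptively.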
Based on these previous results, and after settling on a value for the number of components $K$, we can directly fit a $K$-component Gaussian mixture to the data. To facilitate such an estimation and reduce the complexity of the optimization, and motivated by the MB connectome exploratory data analysis presented in the main body of this paper, we further assume that $F_4$ is supported on
$$S = \{\, \theta(t) = (\mu(t), \Sigma(t)) : t \in [0,1] \,\},$$
where $\mu(t) = c_0 + c_1 t + c_2 t^2$ with $c_0, c_1, c_2 \in \mathbb{R}^d$, and $\Sigma(t) = (a t + b)\, I_d$, where $I_d$ is the identity matrix. Here $\mu(t)$ represents a quadratic curve in $\mathbb{R}^d$ and $\Sigma(t)$ represents a linear structure for the covariance; thus, $S$ is a curve in $\Theta$. This parametrization significantly simplifies the structure of $\mathcal{F}$. Therefore, after settling on $K$ for the KC neurons based on a BIC analysis analogous to that presented in Figure 6 but accounting for our semiparametric constraints, we fit the following $K$-component GMM to the data:
$$f(x) = \sum_{k=1}^{K} \pi_k\, \varphi\bigl(x;\, \mu(t_k), \Sigma(t_k)\bigr),$$
where the $t_k$ are equally spaced in $[0,1]$, so that the component parameters $\theta(t_k)$ are equally spaced along the curve $S$. Since such a Gaussian mixture restricts its means and variances to satisfy the support constraint $S$, the estimation can be easily performed using the EM algorithm.
We propose the following EM algorithm to estimate such a mixture model.
1. Initialization: given initial values $\pi_k^{(0)}$, $c_0^{(0)}, c_1^{(0)}, c_2^{(0)}$, $a^{(0)}$, $b^{(0)}$.
2. E step: compute the conditional expectation of the class labels,
$$\gamma_{ik} = \frac{\pi_k\, \varphi\bigl(\hat{X}_i;\, \mu(t_k), \Sigma(t_k)\bigr)}{\sum_{j=1}^{K} \pi_j\, \varphi\bigl(\hat{X}_i;\, \mu(t_j), \Sigma(t_j)\bigr)}.$$
3. M step: maximize the complete-data likelihood and obtain updated parameter estimates $\{\pi_k, \mu_k, \Sigma_k\}$ such that $\mu_k = \mu(t_k)$ and $\Sigma_k = \Sigma(t_k)$ for $k = 1, \dots, K$.
4. Use the estimates from Step 3 as the initial values, and repeat Steps 2 and 3 until convergence.
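The steps above can be sketched in code. This is a hedged, simplified illustration rather than the authors' implementation: the function name `constrained_gmm_em` is hypothetical, and we simplify the covariance constraint to a single shared isotropic variance (a linear-in-$t$ variance would add a low-dimensional numerical optimization to the M step); the mean constraint, a quadratic curve evaluated at equally spaced $t_k$, is enforced via weighted least squares.

```python
import numpy as np

def constrained_gmm_em(X, K, n_iter=200, tol=1e-8, seed=0):
    """EM for a GMM whose K component means lie, at equally spaced
    parameter values t_k, on a quadratic curve mu(t) = c0 + c1 t + c2 t^2
    (c_j in R^d).  Simplification vs. the text: one shared isotropic
    covariance sigma^2 I instead of a linear-in-t variance."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, K)             # equally spaced curve parameters
    V = np.vander(t, 3, increasing=True)     # K x 3 design matrix [1, t, t^2]
    # initialize the curve coefficients through K randomly chosen data points
    C = np.linalg.lstsq(V, X[rng.choice(n, K, replace=False)], rcond=None)[0]
    sigma2, pi, ll_old = X.var(), np.full(K, 1.0 / K), -np.inf
    for _ in range(n_iter):
        mu = V @ C                           # K x d means, all on the curve
        # E step: responsibilities gamma[i, k] via a stable log-sum-exp
        sq = ((X[:, None, :] - mu[None]) ** 2).sum(-1)          # n x K
        logp = np.log(pi) - 0.5 * (d * np.log(2 * np.pi * sigma2) + sq / sigma2)
        m = logp.max(1, keepdims=True)
        e = np.exp(logp - m)
        gamma = e / e.sum(1, keepdims=True)
        ll = (m.ravel() + np.log(e.sum(1))).sum()
        # M step: mixing weights, then weighted least squares for the
        # curve coefficients (the constrained update for the means)
        Nk = gamma.sum(0)
        pi = Nk / n
        C = np.linalg.solve(V.T @ (V * Nk[:, None]), V.T @ (gamma.T @ X))
        resid = ((X[:, None, :] - (V @ C)[None]) ** 2).sum(-1)
        sigma2 = (gamma * resid).sum() / (n * d)
        if abs(ll - ll_old) < tol:
            break
        ll_old = ll
    return pi, V @ C, sigma2
```

The weighted least squares step is the normal-equations solution of the complete-data likelihood in the curve coefficients, so the fitted means cannot leave the quadratic curve at any iteration.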
We apply the proposed EM algorithm to the adjacency spectral embeddings of the KC neurons. The results are presented in Figure 9 in the manuscript, where the bold curve represents the estimated mean curve $\hat{\mu}(t)$.
Kiefer and Wolfowitz (1956) established the consistency of the semiparametric MLE $\hat{F}$. Since our estimation is conducted in a restricted parameter space, as long as the true parameter lies inside this restricted parameter space the consistency of our estimator follows immediately: suppose the true probability measure $F_0$ is supported on $S$; then as $n \to \infty$, $\hat{F} \to F_0$ and the estimated curve parameters converge to their true values; consequently, the estimated mean curve $\hat{\mu}(\cdot)$ converges to $\mu(\cdot)$.
This work was partially supported by NSF BRAIN EAGER award DBI-1451081, DARPA XDATA contract FA8750-12-2-0303, DARPA SIMPLEX contract N66001-15-C-4041, and HHMI Janelia. The authors thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, for support and hospitality during the programme Theoretical Foundations for Statistical Network Analysis (EPSRC grant no. EP/K032208/1) where a portion of the work on this paper was undertaken. The authors thank Keith Levin and Joshua Cape for helpful comments and criticism.
- Akaike (1974) Akaike, H. (1974). A new look at the statistical model identification. IEEE Transactions on Automatic Control 19(6), 716–723.
- Athreya et al. (2016) Athreya, A., C. E. Priebe, M. Tang, V. Lyzinski, D. J. Marchette, and D. L. Sussman (2016). A limit theorem for scaled eigenvectors of random dot product graphs. Sankhya A 78(1), 1–18.
- Bickel and Doksum (2007) Bickel, P. J. and K. A. Doksum (2007). Mathematical Statistics: Basic Ideas and Selected Topics (2nd ed.), Volume 1 of Holden-Day series in probability and statistics. Prentice Hall.
- Chatterjee (2015) Chatterjee, S. (2015). Matrix estimation by universal singular value thresholding. The Annals of Statistics 43(1), 177–214.
- Chen et al. (2016) Chen, L., J. T. Vogelstein, V. Lyzinski, and C. E. Priebe (2016). A joint graph inference case study: the C. elegans chemical and electrical connectomes. Worm 5(2), e1142041.
- Danon et al. (2005) Danon, L., A. Díaz-Guilera, J. Duch, and A. Arenas (2005). Comparing community structure identification. Journal of Statistical Mechanics: Theory and Experiment 2005(09), P09008.
- Diaconis et al. (2008) Diaconis, P., S. Goel, and S. Holmes (2008). Horseshoes in multidimensional scaling and local kernel methods. The Annals of Applied Statistics 2(3), 777–807.
- Eichler et al. (2017) Eichler, K., F. Li, A. L. Kumar, Y. Park, I. Andrade, C. Schneider-Mizell, T. Saumweber, A. Huser, D. Bonnery, B. Gerber, R. D. Fetter, J. W. Truman, C. E. Priebe, L. F. Abbott, A. Thum, M. Zlatic, and A. Cardona (2017). The complete wiring diagram of a high-order learning and memory center, the insect mushroom body. Nature, accepted for publication.
- Ekman (1954) Ekman, G. (1954). Dimensions of Color Vision. Journal of Psychology 38, 467–474.
- Fishkind et al. (2013) Fishkind, D. E., D. L. Sussman, M. Tang, J. T. Vogelstein, and C. E. Priebe (2013). Consistent adjacency-spectral partitioning for the stochastic block model when the model parameters are unknown. SIAM Journal on Matrix Analysis and Applications 34, 23–39.
- Fraley and Raftery (2002) Fraley, C. and A. E. Raftery (2002). Model-based clustering, discriminant analysis and density estimation. Journal of the American Statistical Association 97, 611–631.
- Geman and Geman (2016) Geman, D. and S. Geman (2016). Opinion: Science in the age of selfies. Proceedings of the National Academy of Sciences 113(34), 9384–9387.
- Glasser et al. (2016) Glasser, M. F., T. S. Coalson, E. C. Robinson, C. D. Hacker, J. Harwell, E. Yacoub, K. Ugurbil, J. Andersson, C. F. Beckmann, M. Jenkinson, S. M. Smith, and D. C. Van Essen (2016). A multi-modal parcellation of human cerebral cortex. Nature 536(7615), 171–178.
- Hagmann (2005) Hagmann, P. (2005). From diffusion MRI to brain connectomics. Ph. D. thesis, STI, Lausanne.
- Hoff et al. (2002) Hoff, P. D., A. E. Raftery, and M. S. Handcock (2002). Latent space approaches to social network analysis. Journal of the American Statistical Association 97(460), 1090–1098.
- Holland et al. (1983) Holland, P. W., K. B. Laskey, and S. Leinhardt (1983). Stochastic blockmodels: First steps. Social networks 5(2), 109–137.
- Hubert and Arabie (1985) Hubert, L. and P. Arabie (1985). Comparing partitions. Journal of Classification 2(1), 193–218.
- Jaccard (1912) Jaccard, P. (1912). The distribution of the flora in the alpine zone. The New Phytologist 11(2), 37–50.
- Jackson (2004) Jackson, J. E. (2004). A User’s Guide to Principal Components. John Wiley & Sons, Inc.
- Jain et al. (2000) Jain, A. K., R. P. W. Duin, and J. Mao (2000). Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence 22(1), 4–37.
- Karrer and Newman (2011) Karrer, B. and M. E. J. Newman (2011). Stochastic blockmodels and community structure in networks. Physical Review E 83(1), 016107.
- Kiefer and Wolfowitz (1956) Kiefer, J. and J. Wolfowitz (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many incidental parameters. The Annals of Mathematical Statistics 27, 886 – 906.
- Ko et al. (2011) Ko, H., S. B. Hofer, B. Pichler, K. A. Buchanan, P. J. Sjostrom, and T. D. Mrsic-Flogel (2011). Functional specificity of local synaptic connections in neocortical networks. Nature 473(7345), 87–91.
- Laird (1978) Laird, N. (1978). Nonparametric maximum likelihood estimation of a mixing distribution. Journal of the American Statistical Association 73(364), 805 – 811.
- Lee et al. (2016) Lee, W.-C. A., V. Bonin, M. Reed, B. J. Graham, G. Hood, K. Glattfelder, and R. C. Reid (2016). Anatomy and function of an excitatory network in the visual cortex. Nature 532(7599), 370–374.
- Lesperance and Kalbfleisch (1992) Lesperance, M. L. and J. D. Kalbfleisch (1992). An algorithm for computing the nonparametric mle of a mixing distribution. Journal of the American Statistical Association 87(417), 120 – 126.
- Lindsay (1983) Lindsay, B. G. (1983). The geometry of mixture likelihoods: A general theory. The Annals of Statistics 11(1), 86–94.
- Lindsay and Lesperance (1995) Lindsay, B. G. and M. L. Lesperance (1995). A review of semiparametric mixture models. Journal of Statistical Planning and Inference 47, 29–39.
- Lyzinski et al. (2014) Lyzinski, V., D. L. Sussman, M. Tang, A. Athreya, and C. E. Priebe (2014). Perfect clustering for stochastic blockmodel graphs via adjacency spectral embedding. Electronic Journal of Statistics 8, 2905–2922.
- Lyzinski et al. (2017) Lyzinski, V., M. Tang, A. Athreya, Y. Park, and C. E. Priebe (2017). Community detection and classification in hierarchical stochastic blockmodels. IEEE Transactions on Network Science and Engineering 4(1), 13–26.
- Malliaros and Vazirgiannis (2013) Malliaros, F. D. and M. Vazirgiannis (2013). Clustering and community detection in directed networks: A survey. Physics Reports 533(4), 95 – 142.
- Meilă (2007) Meilă, M. (2007). Comparing clusterings – an information based distance. Journal of Multivariate Analysis 98(5), 873–895.
- Ohyama et al. (2015) Ohyama, T., C. M. Schneider-Mizell, R. D. Fetter, J. V. Aleman, R. Franconville, M. Rivera-Alba, B. D. Mensh, K. M. Branson, J. H. Simpson, J. W. Truman, et al. (2015). A multilevel multimodal circuit enhances action selection in Drosophila. Nature 520(7549), 633–639.
- Rissanen (1978) Rissanen, J. (1978). Modeling by shortest data description. Automatica 14(5), 465 – 471.
- Schneider-Mizell et al. (2016) Schneider-Mizell, C. M., S. Gerhard, M. Longair, T. Kazimiers, F. Li, M. F. Zwart, A. Champion, F. M. Midgley, R. D. Fetter, S. Saalfeld, et al. (2016). Quantitative neuroanatomy for connectomics in Drosophila. eLife 5, e12059.
- Schwarz (1978) Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics 6(2), 461–464.
- Sporns (2012) Sporns, O. (2012). Discovering the Human Connectome (1st ed.). The MIT Press.
- Sporns et al. (2005) Sporns, O., G. Tononi, and R. Kötter (2005). The human connectome: A structural description of the human brain. PLoS Computational Biology 1(4).
- Susko et al. (1998) Susko, E., J. D. Kalbfleisch, and J. Chen (1998). Constrained nonparametric maximum-likelihood estimation for mixture models. Canadian Journal of Statistics 26(4), 601–617.
- Sussman et al. (2012) Sussman, D. L., M. Tang, D. E. Fishkind, and C. E. Priebe (2012). A consistent adjacency spectral embedding for stochastic blockmodel graphs. Journal of the American Statistical Association 107(499), 1119–1128.
- Sussman et al. (2014) Sussman, D. L., M. Tang, and C. E. Priebe (2014). Consistent latent position estimation and vertex classification for random dot product graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence 36, 48–57.
- Tang and Priebe (2016) Tang, M. and C. E. Priebe (2016). Limit theorems for eigenvectors of the normalized Laplacian for random graphs. arXiv:1607.0123.
- Tang et al. (2013) Tang, M., D. L. Sussman, and C. E. Priebe (2013). Universally consistent vertex classification for latent position graphs. The Annals of Statistics 41, 1406 – 1430.
- Varshney et al. (2011) Varshney, L. R., B. L. Chen, E. Paniagua, D. H. Hall, and D. B. Chklovskii (2011). Structural properties of the Caenorhabditis elegans neuronal network. PLoS Computational Biology 7(2), 1–21.
- Vogelstein et al. (2014) Vogelstein, J. T., Y. Park, T. Ohyama, R. A. Kerr, J. W. Truman, C. E. Priebe, and M. Zlatic (2014). Discovery of brainwide neural-behavioral maps via multiscale unsupervised structure learning. Science 344(6182), 386–392.
- Wang (2010) Wang, Y. (2010). Maximum likelihood computation for fitting semiparametric mixture models. Statistics and Computing 20, 75–86.
- Wang and Wong (1987) Wang, Y. J. and G. Y. Wong (1987). Stochastic blockmodels for directed graphs. Journal of the American Statistical Association 82, 8–19.
- White et al. (1986) White, J. G., E. Southgate, J. N. Thomson, and S. Brenner (1986). The Structure of the Nervous System of the Nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London Series B 314, 1–340.
- Zhu and Ghodsi (2006) Zhu, M. and A. Ghodsi (2006). Automatic dimensionality selection from the scree plot via the use of profile likelihood. Computational Statistics and Data Analysis 51(2), 918–930.
- Zilles and Amunts (2010) Zilles, K. and K. Amunts (2010). Centenary of Brodmann’s map – conception and fate. Nature Reviews Neuroscience 11(2), 139–145.