I Introduction
Many developments in so-called network science [1] have been successfully applied to empirical studies of natural, sociological and technological systems [2]. Nevertheless, network analyses are frequently based on sampled data, which imposes an inherent problem of data incompleteness (missing or erroneous nodes/edges). The sampling problem, although fundamental, remains insufficiently understood in the case of network data (some notable efforts are [3, 4, 5, 6]). In this paper we report a comprehensive simulation framework for the study of functional networks, an important class of network samples, usually obtained from brain imaging methods [7, 8, 9]. We have used a spatial neural network (neurons and synapses distributed in a 3D space) to investigate the implications of the functional sampling process, taking as inspiration experimental studies frequently performed in neuroinformatics that heavily depend on multichannel recordings (in our case, the entire model is based on the electroencephalogram – EEG). This type of functional network is experimentally constructed by means of pairwise cross-correlations (or other time series measurements) computed between all EEG signals [7, 8, 9]. The fundamental problem addressed here is that of relating the microscopic organization patterns (spatial network) to the mesoscopic dynamical patterns (functional network) using well-known network/graph measurements.
Some critical analyses of functional brain sampling have already been performed. It has been shown, for instance, that functional networks obtained from single-neuron spike trains recorded in the monkey visual system tend to overestimate the small-world effect [10] – a small-world network is locally clustered while distances between nodes are short on average [11]. In another study, the number of recorded signals was evaluated in the light of network stability [12]. More specifically, the authors used experimental human EEG data and analyzed the dependence of network measurements on the sample size (number of electrodes). Results showed that, among other findings, larger functional samples exhibit increased efficiency and assortativity. In addition, functional networks obtained from human long-term intracranial EEG signals revealed a recurring functional network core that persisted independently of the cognitive process [13]. It has also been shown that EEG-like functional samples tend to incorporate random graph properties when the underlying neural networks resemble a small-world [14]. Although these conclusions are relevant, those simulations are based on techniques that overlook neural dynamics. We instead propose a more realistic simulation approach based on a spiking neuron model. Moreover, we also point out another theoretical study, reported in [15], which was based on the integrate-and-fire model and mean-field theory. The authors came to an interesting conclusion: it is possible to generate scale-free functional samples from scale-free neural networks – in a few words, a scale-free network has a power-law degree distribution [16].
We report in the following sections results based on a wide range of network measurements, which expand the conclusions beyond the analysis of degree distributions of scale-free networks – without even assuming that biological neural networks are purely scale-free.
The simulation technique reported in this paper was defined in order to mimic experimental situations, and mainly encompasses (i) a non-uniform network model that incorporates neuroanatomical connectivity properties (spatial network, Fig. 1a), (ii) the construction of mesoscale cortical electrical signals (pseudo-EEG) based on the integrate-and-fire model, and (iii) the estimation of functional networks based on those signals. Steps (ii) and (iii) correspond to the experimental procedure depicted in Fig. 1b. A large set of network (graph) measurements is computed for each spatial and functional network in order to perform detailed comparisons between them. Examples are measurements based on shortest paths (betweenness, average distance, efficiency, etc.) and those relying on local connectivity (clustering coefficient, assortativity, etc.). We also computed concentric (or hierarchical) generalizations of local measurements [17]. Results for a range of sample sizes and densities indicate that some connectivity properties of spatial networks are well reflected in functional samples (e.g., closeness vitality). Other network features heavily depend on the number of nodes and/or the edge density (e.g., node and edge betweenness). These findings pave the way for similar theoretical developments using spiking neuron models and spatial networks.
This paper is organized as follows. Next (Section II) we describe and justify the simulation methods adopted, and also define the measurements employed for network comparison. A detailed analysis of estimated functional samples is carried out in Section III. Comments on further directions for this research as well as other concluding remarks are given in Section IV.
II Material and Methods
The following subsections contain details on the organization of the spatial network (Section IIA) and the simulation procedure adopted to obtain functional networks (Section IIB). The latter encompasses details about the spiking neuron model (integrate-and-fire), the simulated signals inspired by EEG experiments, and the method used to build functional networks from those signals. All the measurements employed to compare networks are defined accordingly (Section IIC). Finally, we provide details about the specific parameters and organization of the simulation experiment (Section IID).
IIA Generating Spatial Networks
The establishment of synapses in the spatial network is governed by a probabilistic rule that yields higher probabilities of connection between spatially close neurons than between distant ones. In other words, global connections (shortcuts) are created less frequently than local connections, therefore incorporating the concept of wiring economy – long-range synapses impose higher costs in terms of volume and transmission [18]. The probability of creating a synapse from neuron $i$ to neuron $j$ is given by the following equation:

$$P(i \to j) = \beta\, e^{-\lambda D(i,j)}, \qquad (1)$$

where $\beta$ and $\lambda$ are parameters that shape the interplay between local and global connections ($0 < \beta \leq 1$, $\lambda > 0$). The Euclidean distance between neurons $i$ and $j$ is represented by $D(i,j)$. We consider for simplicity that neurons are randomly distributed inside a volume (a unit radius semi-sphere) that represents the finite size effects of the human skull (see an example in Fig. 1a). Equation (1) is derived from developments previously made on spatial networks [19, 20].
It is worth mentioning that this model generates directed networks, i.e., each synapse connects a presynaptic neuron $i$ to a postsynaptic neuron $j$, which is in fact a neuroanatomical property. Consequently, although we have $P(i \to j) = P(j \to i)$ in (1), the creation of synapse $(i,j)$ in this model does not depend on the existence of synapse $(j,i)$ (and vice-versa).
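The generative rule above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the exponential form of (1), the parameter names `beta`/`lam`, and the use of a unit cube instead of the semi-sphere volume are all simplifying assumptions.

```python
import math
import random

def spatial_network(n, beta, lam, seed=0):
    """Sketch of the spatial model: neurons at random 3D positions; a directed
    synapse i -> j is created with probability beta * exp(-lam * distance)."""
    rng = random.Random(seed)
    # A unit cube stands in for the semi-sphere volume used in the paper.
    pos = [(rng.random(), rng.random(), rng.random()) for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(n):
            if i != j:
                d = math.dist(pos[i], pos[j])
                # i -> j and j -> i are drawn independently (directed network).
                if rng.random() < beta * math.exp(-lam * d):
                    edges.add((i, j))
    return pos, edges
```

Because the two directions of each pair are drawn independently, the resulting graph is directed even though the connection probability itself is symmetric.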
IIB Generating Functional Networks
IIB1 Integrate-and-Fire Neuron Model
The spatial network model of the previous section is static in the sense that it does not incorporate any neuronal dynamics, which is in fact necessary to conceptually connect the structural level of neurons/synapses to the mesoscopic functional level. More specifically, in order to construct EEG-like signals (more details in Section IIB2) we have to incorporate the time-varying membrane potential of each neuron. Spiking neuron models emerge as a natural choice, since spiking dynamics is an essential feature of biological neurons that directly shapes postsynaptic membrane potentials. Many types of models could be employed, ranging from simple neurons to complex and more realistic ones [21]. We chose the leaky integrate-and-fire (LIF) model which, despite showing simple behavior when considered in isolation, enables complex dynamics inside non-uniform networks and offers enough anatomically and physiologically inspired features for our simulations. Moreover, we focus here on the system dynamics rather than on the specific shape and function of individual neurons.
This neuron model is represented by an RC (resistor-capacitor) parallel circuit that shapes the neuron dynamics, which comprise the time variation of the membrane potential and spike (action potential) trains [21, 22]. A differential equation defines the variation of the membrane potential $V(t)$ of one neuron:

$$\tau_m \frac{dV(t)}{dt} = -(V(t) - V_{rest}) + R\,I(t). \qquad (2)$$

The membrane potential $V(t)$ is the potential difference across the capacitor (expressed in volts – V), which represents the difference between the neuron's internal potential and the potential of the external medium (hence negative values [21]). The other parameters are: the resistance $R$ of the circuit in ohms ($\Omega$), the input current $I(t)$ in amperes (A), the neuron resting potential $V_{rest}$ (represented by a battery in the model circuit), and the membrane time constant $\tau_m = RC$ in seconds ($C$ is the capacitance in farads – F).
The membrane potential is entirely driven by (2) until a threshold $V_{th}$ is reached, which causes the model to generate a pulse (neuron spike). The two following equations specify the instant $t_f$ of a spike and the reset procedure of the membrane potential to a predefined value ($V_{reset}$) after a spike, respectively:

$$V(t_f) = V_{th}, \qquad (3)$$

$$\lim_{t \to t_f^{+}} V(t) = V_{reset}. \qquad (4)$$
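Equations (2)–(4) can be integrated with a simple forward-Euler scheme. The sketch below is illustrative only; the default numeric values are placeholders, not the parameters used in the paper's simulations.

```python
def lif_trace(currents, v_rest=-70.0, v_reset=-80.0, v_th=-54.0,
              r=20.0, tau=15.0, dt=1.0):
    """Euler integration of tau * dV/dt = -(V - v_rest) + R * I(t).
    When V reaches v_th a spike is recorded and V is reset to v_reset."""
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(currents):
        v += (dt / tau) * (-(v - v_rest) + r * i_t)  # Eq. (2)
        if v >= v_th:                                # Eq. (3)
            spikes.append(t)
            v = v_reset                              # Eq. (4)
        trace.append(v)
    return trace, spikes
```

With zero input the potential stays at rest; with a sufficiently strong constant current the neuron fires periodically, which is the qualitative behavior exploited in the network simulations.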
The model also specifies a time-varying input current that depends on the spikes generated by presynaptic neurons (now taking into account the spatial network structure of Section IIA):

$$I_i(t) = \sum_j w_{ij} \sum_f \alpha(t - t_j^{(f)}), \qquad (5)$$

where $j$ iterates over all presynaptic neurons and $f$ iterates over all spikes generated by neuron $j$ (occurring at times $t_j^{(f)}$). The strength or efficiency of the synapse is denoted by $w_{ij}$ ($w_{ij} > 0$ for excitatory and $w_{ij} < 0$ for inhibitory synapses [21]). In (5) currents are shaped by an $\alpha$ function as follows [23]:

$$\alpha(s) = \frac{s}{\tau_\alpha}\, e^{1 - s/\tau_\alpha},$$

with maximum value $\alpha(\tau_\alpha) = 1$ (the growth time is therefore specified by $\tau_\alpha$). The variable $s$ is substituted in (5) by the elapsed time since a spike occurred in a presynaptic neuron (each spike is considered separately).
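The kernel above can be written directly as a function of the elapsed time; this is a sketch of the standard alpha function, with `tau_a` standing in for the growth time $\tau_\alpha$.

```python
import math

def alpha_kernel(s, tau_a=5.0):
    """Alpha-shaped current kernel: (s / tau_a) * exp(1 - s / tau_a).
    Zero for s < 0, maximum value 1 at s = tau_a."""
    if s < 0:
        return 0.0
    return (s / tau_a) * math.exp(1.0 - s / tau_a)
```

Summing `alpha_kernel(t - t_spike)` over all presynaptic spikes, weighted by the synaptic strengths, yields the input current of (5).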
IIB2 Pseudo-EEG Signals
In order to simulate signals inspired by conventional electroencephalography, we consider that a set of $n$ recording points (functional sampling points or just "sensors") is uniformly distributed over the volume that contains the spatial network of $N$ nodes. By uniform we mean that all sensors are equidistant from each other. Then the mesoscale electric potential $S_k(t)$ recorded by sensor $k$ at time $t$ is an attenuated sum of the individual membrane potentials of the neurons:

$$S_k(t) = \sum_{i=1}^{N} \frac{V_i(t)}{D(k,i)^2}, \qquad (6)$$

where the membrane potential of neuron $i$ at time $t$ is denoted by $V_i(t)$. Recall that sensor $k$ is located at the surface of the semi-sphere, whereas neuron $i$ is placed inside the semi-sphere; therefore $D(k,i)$ denotes the Euclidean distance between them. For the purpose of simulating (approximately) realistic distances, we consider in (6) that the semi-sphere has a radius of 200 mm (units are in this case given in mm since membrane potentials are of the order of mV). In our simulations the medium between neurons and electrodes has constant conductivity. Fig. 2 shows three examples of mesoscale simulated potentials.
We use an inverse square rule to attenuate membrane potentials. Therefore, only neurons close to the sensor significantly contribute to the formation of the captured signal. The reasoning behind this behavior is that neuron potentials can be approximated by dipoles (two poles with opposite charges) in the case of scalp encephalographic measurements, which allows the use of an inverse quadratic decay [24]. Moreover, in order to complement our reasoning behind (6), we note that the signal captured by real EEG recordings is mostly influenced by microscopic postsynaptic currents [25, 26] (hence postsynaptic potentials).
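The attenuated sum of (6) is straightforward to compute; the sketch below assumes membrane potentials are given as per-neuron time series.

```python
def sensor_signal(sensor_pos, neuron_pos, traces):
    """Eq. (6) sketch: inverse-square attenuated sum of membrane potentials.
    traces[i][t] holds the membrane potential of neuron i at time step t."""
    steps = len(traces[0])
    signal = []
    for t in range(steps):
        total = 0.0
        for p, v in zip(neuron_pos, traces):
            d2 = sum((a - b) ** 2 for a, b in zip(sensor_pos, p))
            total += v[t] / d2  # attenuation by squared distance
        signal.append(total)
    return signal
```

As expected from the inverse-square rule, halving the distance of a neuron to the sensor quadruples its contribution to the recorded signal.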
IIB3 Building Functional Edges
One of the tools frequently applied to the analysis of neural multichannel experiments is the cross-correlation between all recorded time series. Take any pair of z-score normalized pseudo-EEG signals $x$ and $y$ (obtained from two surface sensors); the cross-correlation between them is computed as follows [27]:

$$C_{xy}(\tau) = \frac{1}{T - \tau} \sum_{t=1}^{T - \tau} x(t)\, y(t + \tau). \qquad (7)$$

The number of recorded points in both signals is denoted by $T$, whereas the time lag of the correlation is $\tau$. This formula gives results between $-1$ and $1$, but we follow the usual approach of taking the maximum absolute value over a given time lag interval.
A new network is then constructed as follows. Let $C$ be the symmetric correlation matrix between all recorded signals. This square matrix naturally represents an undirected weighted graph with all possible edges, where nodes represent sensors. The functional network is constructed by thresholding $C$, i.e., an adjacency matrix $A$ is created with $A_{ab} = A_{ba} = 1$ if and only if $C_{ab}$ is above a given threshold ($A_{ab} = 0$ otherwise). As in graph theory, the adjacency matrix completely describes an unweighted and undirected functional network (graph). Consequently, only the strongest correlations lead to the creation of edges in the functional sampled network. Thresholds are usually chosen arbitrarily, but we specified one that allows a relevant property to match between the underlying spatial network and the functional network. More specifically, we always choose a threshold that yields a functional network with the same edge density as the spatial network. Density is the proportion of edges in a network (the number of actual edges divided by the total possible number of edges).
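The two steps – maximum absolute cross-correlation over lags, then a density-matching threshold – can be sketched as below. This is an illustration, not the paper's code; choosing the threshold by keeping the top-$m$ correlations is one simple way to hit a target density exactly.

```python
import numpy as np

def max_abs_xcorr(x, y, max_lag):
    """Maximum absolute cross-correlation of two z-scored signals
    over integer lags in [-max_lag, max_lag] (cf. Eq. (7))."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        best = max(best, abs(float(np.mean(a * b))))
    return best

def functional_edges(signals, max_lag, density):
    """Threshold the correlation matrix so that the resulting undirected
    graph has (approximately) the requested edge density."""
    n = len(signals)
    scored = [(max_abs_xcorr(signals[i], signals[j], max_lag), i, j)
              for i in range(n) for j in range(i + 1, n)]
    scored.sort(reverse=True)  # strongest correlations first
    m = round(density * n * (n - 1) / 2)  # target number of edges
    return [(i, j) for _, i, j in scored[:m]]
```

Only the strongest pairwise correlations survive the cut, exactly as in the thresholding procedure described above.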
IIC Network Measurements
We are now able to compare functional networks with their respective spatial networks. To perform this comparison we resort to a rich set of network measurements, each defined for the purpose of characterizing specific network features. Since we compare undirected (functional) with directed (spatial) networks, we consider that each undirected functional edge is in fact a pair of two symmetric directed edges. Note that this is just a conceptual adaptation in which no network information is lost. The measurement definitions that follow are thus based on directed edges encoded, whenever necessary, in an adjacency matrix $A$ (where $A_{ij} = 1$ if there is an edge from node $i$ to node $j$).
IIC1 Maximum degree
The degree can be thought of as the simplest network measurement: it is just the number of edges attached to a given node $i$. For directed networks there is a distinction between in- and out-degrees (in- and out-going edges, respectively): $k_i^{in} = \sum_j A_{ji}$ and $k_i^{out} = \sum_j A_{ij}$. We are interested here in analyzing measures of the whole network instead of single nodes. For that purpose the average network degree is usually taken. Since the average degree will always be the same in our comparisons (see Section IID, where we explain that comparisons are performed between networks of the same size and density), we have chosen to use the maximum in-degree (and out-degree) value of a given network.
IIC2 Clustering coefficient
This coefficient was proposed for the study of small-world networks with undirected edges [11], and it quantifies the amount of local clustering of a network. Here we devised a way to compute this coefficient in directed networks. Let $n_i$ be the number of neighbors of node $i$ (regardless of whether they are connected by in- or out-going edges) and $e_i$ be the number of directed edges that connect those neighbors among themselves (i.e., edges connecting to $i$ are not taken into account). The clustering coefficient of node $i$ is equal to:

$$cc_i = \frac{e_i}{n_i (n_i - 1)}. \qquad (8)$$

This measurement quantifies the level of interaction among the neighbors of $i$: if the neighbors are completely interconnected, then $cc_i = 1$; at the opposite extreme, if those neighbors have no links between them, then $cc_i = 0$. We consider the average clustering coefficient of a network as a measure of the intensity of its local node grouping.
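The directed variant of (8) can be sketched directly from its definition, with the network given as a dense adjacency matrix:

```python
def directed_clustering(adj, i):
    """Eq. (8): directed edges among the neighbors of i (edges to or from i
    are ignored) divided by the maximum possible number, n_i * (n_i - 1)."""
    n = len(adj)
    nbrs = [u for u in range(n) if u != i and (adj[i][u] or adj[u][i])]
    if len(nbrs) < 2:
        return 0.0
    e = sum(1 for a in nbrs for b in nbrs if a != b and adj[a][b])
    return e / (len(nbrs) * (len(nbrs) - 1))
```

Note that the denominator $n_i(n_i - 1)$ counts both directions of every neighbor pair, so a fully bidirectionally connected neighborhood yields exactly 1.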
IIC3 Assortativity
One way to quantify degree correlations is by calculating the assortativity index [28]. This measurement is useful to check if, for instance, highly connected nodes tend to create links among each other. It can be calculated using the Pearson correlation coefficient on the degrees at both ends of every edge. More specifically, for every directed edge $(i,j)$ we check if the in-degree of $i$ is linearly correlated with the in-degree of $j$. We call this measurement the "in-in" assortativity coefficient. We also consider the other three possible combinations of in- and out-degrees at both ends of every edge: out-out, in-out, and out-in, which consequently leads to three more assortativity indices. The interpretation of this coefficient is naturally the same as for the Pearson correlation coefficient, where absolute values close to 1 mean strong linear correlation, whereas values close to 0 represent weak or no correlation. Positive or negative correlations are indicated by the sign of the coefficient.
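The in-in variant can be sketched as a plain Pearson correlation over the edge list; the other three variants only change which degree sequence is read at each endpoint.

```python
def in_in_assortativity(edges, indeg):
    """Pearson correlation between the in-degrees at the two ends
    (source i, target j) of every directed edge."""
    xs = [indeg[i] for i, _ in edges]
    ys = [indeg[j] for _, j in edges]
    m = len(edges)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

A positive value indicates that high in-degree nodes tend to connect to other high in-degree nodes (assortative mixing); a negative value indicates the opposite (disassortative mixing).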
IIC4 Shortest path length
The distance (or length of the shortest path) $d(i,j)$ between nodes $i$ and $j$ is frequently used to evaluate the overall node separability. We take the distances between all possible pairs of nodes to compute the average shortest path length of a network:

$$\ell = \frac{1}{N(N-1)} \sum_{i \neq j} d(i,j). \qquad (9)$$

If $i$ and $j$ are placed in different connected components we take $d(i,j) = N$ (larger than any possible path length), since conceptually the distance is infinite. Note that for directed graphs $d(i,j)$ is not necessarily equal to $d(j,i)$; thus, strictly speaking, it is not a distance measure. Nevertheless, we use the term distance for simplicity.
IIC5 Global efficiency
To avoid artifacts in the case of unconnected nodes we also considered the network efficiency [29], which is the average of the inverse of the distance:

$$E = \frac{1}{N(N-1)} \sum_{i \neq j} \frac{1}{d(i,j)}. \qquad (10)$$

The inverse of an infinite distance is therefore taken as 0. When distances tend to be short, efficiency approaches 1 (a complete graph has maximum efficiency, for instance), whereas longer distances yield smaller efficiencies (the extreme case of zero efficiency is a totally unconnected graph, i.e., one with no edges).
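Since all edges are unweighted, the distances in (10) can be obtained by breadth-first search from every node; unreachable pairs simply contribute nothing to the sum.

```python
from collections import deque

def global_efficiency(adj):
    """Eq. (10): average of 1/d(i, j) over all ordered pairs; unreachable
    pairs contribute 0 (the inverse of an infinite distance)."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:  # BFS following edge directions
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))
```

The same BFS distances can be reused for the average shortest path length of (9), replacing unreachable pairs by the finite penalty described there.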
IIC6 Betweenness centrality
Another measurement based on shortest paths is betweenness [30]. It estimates the probability of finding a node (or edge) inside a shortest path. More specifically, the betweenness centrality of a node $v$ is given by:

$$B(v) = \sum_{i \neq v \neq j} \frac{\sigma_{ij}(v)}{\sigma_{ij}}, \qquad (11)$$

where $\sigma_{ij}$ is the number of different paths of shortest length that connect nodes $i$ and $j$, and $\sigma_{ij}(v)$ is the number of those paths that include node $v$. We take the average of all $B(v)$ as a measurement of betweenness for the entire network. Finally, if we consider that $v$ is an edge instead of a node, the interpretation of (11) is naturally adapted, and we thus also have a measurement of edge betweenness centrality.
IIC7 Closeness vitality
This measurement was proposed to quantify the effects of excluding a node from a network [31]. To define it, let $S$ be the sum of all shortest path lengths ($S = \sum_{i \neq j} d(i,j)$). Then consider that we remove from the network a node $v$ along with its corresponding edges and recompute the sum of path lengths (let us denote this new summation by $S_v$). The closeness vitality of $v$ is:

$$CV(v) = S_v - S, \qquad (12)$$

which is the total increase in shortest path lengths when $v$ is discarded. As an overall closeness vitality measurement (i.e., for an entire network) we take the average of all individual $CV(v)$.
IIC8 Concentric number of nodes
The class of concentric (or hierarchical) measurements is based on the notion of higher order neighborhoods of a given node $i$, not just its immediate neighbors [32]. Let us consider that level $d = 0$ of the concentric neighborhoods is composed of just node $i$. The next level ($d = 1$) is formed by those nodes directly connected to $i$, independently of edge direction (i.e., in-going and out-going edges are treated in the same way). Next, level $d = 2$ contains nodes directly connected to nodes of level 1 that are not already included in the previous levels. Generally, level $d$ is composed of nodes not yet associated with any level that are directly connected to any node of level $d - 1$. With this definition, levels must be computed for increasing $d$, not in an arbitrary order. Breadth-first search is sufficient to find those levels given that (for our purposes) edge directions are ignored. Finally, the first concentric measurement applied in our work is the concentric number of nodes $n_d(i)$, which is just the number of nodes associated with level $d$ given that the "center" node is $i$. We compute this measurement for levels 2, 3, and 4. For every concentric measurement (including the following ones) we take the average over all nodes as a global network index of hierarchical connectivity.
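The level assignment described above is exactly a breadth-first search that ignores edge directions, as sketched below:

```python
from collections import deque

def concentric_levels(adj, center):
    """Assign each reachable node its concentric level around `center`,
    ignoring edge directions (breadth-first search)."""
    n = len(adj)
    level = {center: 0}
    q = deque([center])
    while q:
        u = q.popleft()
        for v in range(n):
            if (adj[u][v] or adj[v][u]) and v not in level:
                level[v] = level[u] + 1
                q.append(v)
    return level

def concentric_node_count(adj, center, d):
    """Concentric number of nodes of `center` at level d."""
    return sum(1 for lv in concentric_levels(adj, center).values() if lv == d)
```

The same level map also supports the remaining concentric measurements (concentric degrees, neighbor degrees, and clustering), since those only need to know which level each node belongs to.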
IIC9 Concentric degree
Connections between different concentric levels can be used to generalize in- and out-degrees. In short, the in-degree $k_d^{in}(i)$ of node $i$ at level $d$ is the number of directed edges going from level $d + 1$ to level $d$. Notice that the in-degree at level $d = 0$ is equivalent to the usual in-degree $k_i^{in}$. The concentric out-degree is defined analogously: it is the number of edges going from level $d$ to level $d + 1$ (again, $k_0^{out}(i) = k_i^{out}$). In our analyses we consider concentric in- and out-degrees at levels 2 to 4.
IIC10 Concentric neighbor degree
Another concentric measure is the neighbor average degree, which quantifies the overall connectivity of the nodes at level $d$. The concentric neighbor in-degree of node $i$ at level $d$ is defined as follows, where $\Gamma_d(i)$ is the set of nodes (with $n_d(i)$ elements) at level $d$ of node $i$:

$$k_d^{nb,in}(i) = \frac{1}{n_d(i)} \sum_{j \in \Gamma_d(i)} k_j^{in}. \qquad (13)$$

The concentric neighbor average out-degree is defined analogously. We compute these measurements for levels 1 to 4.
IIC11 Concentric clustering coefficient
An interesting concentric generalization can be made in the case of the clustering coefficient. Let $e_d(i)$ and $n_d(i)$ be the number of directed edges and nodes, respectively, at level $d$ of node $i$. For computing the number of edges, only edges connecting nodes inside level $d$ are taken into consideration. The concentric clustering coefficient of node $i$ at level $d$ is equal to:

$$cc_d(i) = \frac{e_d(i)}{n_d(i)(n_d(i) - 1)}. \qquad (14)$$

The usual clustering coefficient is therefore equal to $cc_1(i)$. Levels 2 to 4 were considered in our simulations.
IID Simulation Setup
Simulations were implemented in Python using the following packages: NetworkX [33], Brian [34], and PyNN [35]. Spatial networks of $N$ nodes were created with a fixed value of $\beta$. The other connection probability parameter ($\lambda$) was set to three different values, each generating another set of 20 spatial networks. Therefore we have three sets of spatial networks with distinct (average) edge densities induced by those different values of $\lambda$: lower (6.5%), intermediate (8.6%), and higher (10.8%). In this manner we are able to assess the influence of network density on functional sampling.
Integrate-and-fire neurons have fixed resting ($V_{rest}$), reset ($V_{reset}$) and threshold ($V_{th}$) potentials (all in mV), along with a fixed membrane capacitance (in nF), whereas the growth time of the $\alpha$ function is 5 ms. The membrane time constant and the refractory period were both set to 15 ms. Some of these values were inspired by actual neurophysiological observations [36], while for the other variables we searched the parameter space for specific values that prevent sub- or super-activity (absence of spikes or overexcitability, respectively).
Parameter variability was reserved for the synaptic weights: excitatory weights follow a Gaussian distribution, and inhibitory weights (which comprise 20% of all synapses) follow a Gaussian distribution with their own mean and standard deviation. The simulation uses time steps of 1 ms. The system input is a set of Poisson spike generators individually connected to a fraction of the neurons (2% of $N$). Each generator has an expected rate of 20 spikes per second. Pseudo-EEGs were calculated after the initial 100 ms of simulation time (i.e., we removed the transient period of membrane potential dynamics). Cross-correlations were computed for time lags between −50 and 50 ms.
Seven different numbers of sensors were considered ($n$ = 40, 50, …, 100) for multichannel surface recording. Given one specific $n$, one functional network is obtained for each spatial network. To properly compare two networks (in our case, functional and spatial) by means of network measurements, it is necessary that both have the same number of nodes and edges. Nevertheless, since EEG has a low spatial resolution, the size of a functional network is much smaller than the size of the underlying spatial network. We are now able to restate the main purpose of this work in the form of a question: can simulated functional networks resemble the connectivity structure of a spatial neural network of the same size and density? In other words, we want to verify whether functional networks are rescaled versions of the spatial networks that generated the observed dynamics. In our framework this analysis is possible since we know how to create spatial networks of virtually any size and density (Section IIA). Consequently, we generated new sets of 20 spatial networks of sizes 40, 50, …, 100 and the same densities as before (lower, intermediate, and higher) to properly perform comparisons with the respective functional networks. Simulation results are shown in the next section.
III Results and Discussion
Results are mainly discussed in terms of network density (lower, intermediate, and higher) and sample size (40 to 100). We also take into consideration the type of network measurement under analysis, which can be classified as local, concentric or global according to the type of information considered in the calculations. Local measurements employ information from the immediate vicinity of nodes (e.g., degree, clustering coefficient and assortativity). Those classified as concentric consider the generalization of node neighborhoods to farther distances. Global measurements need to take into account the entire network topology in order to compute shortest paths (e.g., efficiency, betweenness and vitality). We first introduce results associated with the measurements that best approximate functional and spatial networks and gradually show which measurements are not able to produce good estimations.

IIIA Best Approximations
Three measurements tend to be very similar when computed for functional and spatial networks. The leading one is the concentric neighbor in-degree (Fig. 3, second row). Approximations are very good for intermediate densities, with slight deviations when the largest network sizes are employed for the other densities. Notice that the best approximations are indicated by filled dots in Fig. 3 (details in the figure legend). Results were strikingly similar for the out-degree counterpart (concentric neighbor out-degree at the same level, results not shown).
Closeness vitality also shows very good approximations (Fig. 3, first row), mainly in the case of higher densities. As densities decrease, approximations get worse, although the largest network sizes are still able to accurately estimate the vitality of spatial networks from functional samples. Fig. 3 also includes results for the concentric clustering coefficient (third row). Again, estimations are better when higher densities are adopted. Notice that no local measurement is among the best approximations (see below), only global and concentric ones. Nevertheless, results are sensitive to network density.
IIIB More Specific Approximations
Other measurements are even more sensitive to sample size and network density, although they still show relevant approximations under specific conditions. This is the case, for instance, of edge/node betweenness centralities (Fig. 4). Although both measurements are intimately related, they show opposite behaviors: node betweenness is best approximated for higher densities and larger network sizes, whereas edge betweenness shows good estimations when lower/intermediate densities and smaller network sizes are adopted.
A distinctive behavior is shown by the concentric in-degree (Fig. 4). Average values for functional networks grow steadily, whereas for spatial networks they tend to decay after a peak. Nevertheless, good estimations appear at specific points (sizes 70 and 40 for intermediate and higher densities, respectively). Very similar results were observed for the out-degree counterpart (results not shown). Qualitatively similar results were also observed for the concentric number of nodes, where specific points of approximation were observed for lower and intermediate densities (sizes 90 and 60, respectively – results not shown).
Other measurements diverge strongly as network sizes increase, with values for functional networks growing steadily whereas those for spatial networks tend to decrease. These measurements are the concentric in/out-degrees, concentric neighbor in/out-degrees, and concentric number of nodes, all at the same higher level. Nevertheless, it is still possible to get good approximations with size 40 and lower densities (results not shown). This level is therefore a mesoscopic turning point between functional and spatial networks: the former tend to have significantly larger populations (of nodes and edges) at this level. In other words, concentric analyses are more limited (in the sense of increasing the level) in spatial networks than in functional ones.
Another type of result also concerns concentric measurements. In this case, average values also tend to diverge for increasing sizes and densities. Nevertheless, for both network types there is a tendency of increasing values (Fig. 5; see also the very low standard deviations). In some cases, values for spatial networks are consistently higher than those for functional samples (concentric in-degree – first row of Fig. 5 – very similar to the concentric out-degree at the same level – not shown). Nevertheless, good approximations can be seen when the lowest network size is applied (although the averages are still statistically different). In other cases functional values are consistently higher than spatial ones, as for the concentric neighbor in- and out-degrees at levels 1 and 2 and the maximum in- and out-degrees (bottom row of Fig. 5; only one of those measurements is shown). The best approximations are again obtained for the smallest network size.
Global and concentric measurements are thus again able to significantly estimate values observed in spatial networks, although under more restricted conditions than those highlighted in Section IIIA. Moreover, notice that variations between in- and out-degrees show extremely similar results for all measurement types. One possible cause is that directed edges have symmetric probabilities of creation in the spatial network model; therefore opposing edge directions tend to be created in a (globally) balanced way. Also notice that variations between in- and out-degrees must indeed yield the same results for functional networks, since they have undirected edges.
IIIC Severely Limited Approximations
The remaining measurements tend to be very different between spatial and functional networks. For instance, the clustering coefficient of functional samples is much higher than that of spatial networks (Fig. 6, first row). Similar results were observed for the concentric clustering coefficient at higher levels (not shown). The relative difference in global efficiency between network types tends to be preserved as sizes and densities increase, although approximations are of low quality in general (Fig. 6, second row). Lengths of shortest paths are much smaller in spatial networks, with even worse approximations for lower densities (not shown). The reason for this behavior is that isolated nodes are not uncommon in functional networks (this is also one of the possible causes of the higher efficiencies appearing in spatial networks). The concentric number of nodes (not shown) grows steadily like the measurements in Fig. 5, although values for the two network types are very different even in the best cases (small network sizes). Finally, all assortativity types are poorly reproduced in both network types, with highly varying averages (i.e., no tendency observed) and very large standard deviations (results not shown). Values for functional networks tend to be slightly higher than those for spatial networks. It is reasonable to say that, although estimations are not sufficiently precise, degree correlations (assortative mixing) in both network types tend to be very low or even absent.
IV Concluding Remarks
Our modeling approach shows that it is possible to accurately estimate network measurements of spatial networks by observing only functional samples inspired by EEG experiments, although specific conditions of sample size and density must be met. Local network measurements tend to be poorly estimated, whereas good approximations were observed for several global and concentric measurements. We thoroughly modeled and planned the simulations in order to perform a wide evaluation of functional samples under several different circumstances.
Beyond those contributions, another important goal is to apply established computer science tools (in this case, spiking neural networks) to contemporary network problems. Although computer science has played a crucial role in the development of relevant network-based methodologies for diverse tasks (e.g., image segmentation [37]), these methodologies are only sparsely integrated with the modern approach emerging from the highly multidisciplinary complex network field [2].
Further developments of this work shall incorporate more realistic simulations (e.g., synaptic plasticity) and other types of analyses considering, for instance, nonlinear correlations between time series and network resilience under perturbations. All in all, besides specific numerical results, we report in this paper a general and robust framework for simulating and analyzing the confidence of functional samples that can be further adapted to other neuroimaging techniques (e.g., magnetoencephalography).
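As described in the Introduction, functional samples are constructed from pairwise cross-correlations between EEG signals. A minimal sketch of that construction is given below; the `threshold` value and the toy signals are illustrative assumptions, not the parameters of our simulations, and a nonlinear association measure could replace `np.corrcoef` in the future developments mentioned above:

```python
import numpy as np

def functional_network(signals, threshold=0.5):
    """Build a functional adjacency matrix from multichannel signals.

    signals: array of shape (n_channels, n_samples), e.g. simulated EEG.
    Two channels are linked when the absolute Pearson correlation of
    their time series exceeds `threshold` (a free parameter here).
    """
    corr = np.corrcoef(signals)      # pairwise Pearson correlations
    np.fill_diagonal(corr, 0.0)      # ignore self-correlations
    return (np.abs(corr) > threshold).astype(int)

# Toy signals: two noisy copies of a 10 Hz oscillation plus one
# independent noise channel (hypothetical, for illustration only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
base = np.sin(2 * np.pi * 10 * t)
signals = np.vstack([
    base + 0.1 * rng.standard_normal(t.size),
    base + 0.1 * rng.standard_normal(t.size),
    rng.standard_normal(t.size),
])
adj = functional_network(signals, threshold=0.5)
```

In this toy case the two correlated channels become linked while the independent channel remains isolated, reproducing in miniature the isolated nodes observed in our functional samples.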
References
 [1] U. Brandes, G. Robins, A. McCranie, and S. Wasserman, “What is network science?” Network Science, vol. 1, no. 1, pp. 1–15, 2013.
 [2] M. E. J. Newman, Networks: An Introduction. New York: Oxford University Press, 2010.
 [3] D. Achlioptas, A. Clauset, D. Kempe, and C. Moore, “On the bias of traceroute sampling: Or, power-law degree distributions in regular graphs,” Journal of the ACM, vol. 56, no. 4, pp. 21:1–21:28, 2009.
 [4] L. Dall’Asta, I. Alvarez-Hamelin, A. Barrat, A. Vázquez, and A. Vespignani, “Exploring networks with traceroute-like probes: Theory and simulations,” Theoretical Computer Science, vol. 355, no. 1, pp. 6–24, 2006.
 [5] M. Latapy and C. Magnien, “Complex network measurements: Estimating the relevance of observed properties,” in Proceedings of the 27th Conference on Computer Communications (INFOCOM), Phoenix, AZ, 2008, pp. 1660–1668.
 [6] M. P. H. Stumpf, C. Wiuf, and R. M. May, “Subnets of scale-free networks are not scale-free: Sampling properties of networks,” Proceedings of the National Academy of Sciences, vol. 102, no. 12, pp. 4221–4224, 2005.
 [7] C. J. Stam, “Characterization of anatomical and functional connectivity in the brain: A complex networks perspective,” International Journal of Psychophysiology, vol. 77, no. 3, pp. 186–194, 2010.
 [8] M. Rubinov and O. Sporns, “Weight-conserving characterization of complex functional brain networks,” Neuroimage, vol. 56, no. 4, pp. 2068–2079, 2011.
 [9] S. Palva and J. M. Palva, “Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs,” Trends in Cognitive Sciences, vol. 16, no. 4, pp. 219–230, 2012.
 [10] F. Gerhard, G. Pipa, B. Lima, S. Neuenschwander, and W. Gerstner, “Extraction of network topology from multi-electrode recordings: Is there a small-world effect?” Frontiers in Computational Neuroscience, vol. 5, p. 4, 2011.
 [11] D. J. Watts and S. H. Strogatz, “Collective dynamics of ‘small-world’ networks,” Nature, vol. 393, no. 6684, pp. 440–442, 1998.
 [12] A. Joudaki, N. Salehi, M. Jalili, and M. G. Knyazeva, “EEG-based functional brain networks: Does the network size matter?” PLoS ONE, vol. 7, no. 4, p. e35673, 2012.
 [13] M. A. Kramer, U. T. Eden, K. Q. Lepage, E. D. Kolaczyk, M. T. Bianchi, and S. S. Cash, “Emergence of persistent networks in long-term intracranial EEG recordings,” Journal of Neuroscience, vol. 31, no. 44, pp. 15757–15767, 2011.
 [14] L. Antiqueira, F. A. Rodrigues, B. C. M. van Wijk, L. F. Costa, and A. Daffertshofer, “Estimating complex cortical networks via surface recordings – a critical note,” NeuroImage, vol. 53, no. 2, pp. 439–449, 2010.
 [15] M. S. Shkarayev, G. Kovacic, A. V. Rangan, and D. Cai, “Architectural and functional connectivity in scale-free integrate-and-fire networks,” Europhysics Letters, vol. 88, no. 5, p. 50001, 2009.
 [16] A.-L. Barabási and R. Albert, “Emergence of scaling in random networks,” Science, vol. 286, no. 5439, pp. 509–512, 1999.
 [17] L. F. Costa, F. A. Rodrigues, G. Travieso, and P. R. Villas Boas, “Characterization of complex networks: A survey of measurements,” Advances in Physics, vol. 56, no. 1, pp. 167–242, 2007.
 [18] E. Bullmore and O. Sporns, “The economy of brain network organization,” Nature Reviews Neuroscience, vol. 13, no. 5, pp. 336–349, 2012.
 [19] B. M. Waxman, “Routing of multipoint connections,” IEEE Journal on Selected Areas in Communications, vol. 6, no. 9, pp. 1617–1622, 1988.
 [20] M. Kaiser and C. C. Hilgetag, “Spatial growth of real-world networks,” Physical Review E, vol. 69, p. 036103, 2004.
 [21] T. P. Trappenberg, Fundamentals of Computational Neuroscience. Oxford: Oxford University Press, 2002.
 [22] G. Deco, V. K. Jirsa, P. A. Robinson, M. Breakspear, and K. Friston, “The dynamic brain: From spiking neurons to neural masses and cortical fields,” PLoS Computational Biology, vol. 4, no. 8, p. e1000092, 2008.
 [23] A. Roth and M. C. W. van Rossum, Computational Modeling Methods for Neuroscientists. Cambridge, MA: MIT Press, 2009, ch. Modeling Synapses, pp. 139–160.
 [24] P. L. Nunez and R. Srinivasan, Electric Fields of the Brain: The Neurophysics of EEG. New York, NY: Oxford University Press, 2006.
 [25] P. Olejniczak, “Neurophysiologic basis of EEG,” Journal of Clinical Neurophysiology, vol. 23, no. 3, pp. 186–189, 2006.
 [26] G. Buzsáki, C. A. Anastassiou, and C. Koch, “The origin of extracellular fields and currents – EEG, ECoG, LFP and spikes,” Nature Reviews Neuroscience, vol. 13, no. 6, pp. 407–420, 2012.
 [27] E. Pereda, R. Q. Quiroga, and J. Bhattacharya, “Nonlinear multivariate analysis of neurophysiological signals,” Progress in Neurobiology, vol. 77, no. 1–2, pp. 1–37, 2005.
 [28] M. E. J. Newman, “Assortative mixing in networks,” Physical Review Letters, vol. 89, no. 20, p. 208701, 2002.
 [29] V. Latora and M. Marchiori, “Efficient behavior of smallworld networks,” Physical Review Letters, vol. 87, no. 19, p. 198701, 2001.
 [30] L. C. Freeman, “A set of measures of centrality based on betweenness,” Sociometry, vol. 40, no. 1, pp. 35–41, 1977.
 [31] D. Koschützki, K. A. Lehmann, L. Peeters, S. Richter, D. Tenfelde-Podehl, and O. Zlotowski, “Centrality indices,” in Network Analysis, ser. Lecture Notes in Computer Science, U. Brandes and T. Erlebach, Eds. Springer Berlin Heidelberg, 2005, vol. 3418, pp. 16–61.
 [32] L. da F. Costa and L. E. C. Rocha, “A generalized approach to complex networks,” European Physical Journal B, vol. 50, no. 1–2, pp. 237–242, 2006.
 [33] A. A. Hagberg, D. A. Schult, and P. J. Swart, “Exploring network structure, dynamics, and function using NetworkX,” in Proceedings of the 7th Python in Science Conference, G. Varoquaux, T. Vaught, and J. Millman, Eds., Pasadena, CA, 2008, pp. 11–15.
 [34] D. F. M. Goodman and R. Brette, “Brian: a simulator for spiking neural networks in Python,” Frontiers in Neuroinformatics, vol. 2, no. 5, 2008.
 [35] A. P. Davison, D. Brüderle, J. M. Eppler, J. Kremkow, E. Muller, D. Pecevski, L. Perrinet, and P. Yger, “PyNN: a common interface for neuronal network simulators,” Frontiers in Neuroinformatics, vol. 2, p. 11, 2009.
 [36] F. H. Martini, J. L. Nath, and E. F. Bartholomew, Fundamentals of Anatomy & Physiology. San Francisco, CA: Benjamin Cummings, 2012.
 [37] P. F. Felzenszwalb and D. P. Huttenlocher, “Efficient graph-based image segmentation,” International Journal of Computer Vision, vol. 59, no. 2, pp. 167–181, 2004.