I Introduction
Time-varying complex networks (TVCNs) provide a comprehensive mathematical framework for modeling complex biological, social, and technological systems such as dynamic brain activity [1, 2, 3], gene expression and interactions [4, 5], bacteria dynamics [6, 7, 8], or swarm robotics [9, 10]. Many such time-varying complex (biological) networks exhibit complex spatiotemporal interactions. For instance, the short- and long-range interactions among neurons contribute to the emergence of long-range memory and fractional dynamics at the level of macroscopic brain regions. Moreover, the nonstationarity that arises in most biophysical processes requires modeling techniques supporting interaction among variables in space and time. A computationally efficient strategy for constructing compact yet accurate mathematical models of TVCNs relies on describing the self-activity of nodes in TVCNs through fractional-order operators [11, 12, 13, 14, 15]. Modeling the interactions across nodes requires the assumption of complete knowledge of the associated complex network (CN). However, due to experimental limitations (e.g., resource constraints and limited probing capabilities; as Wigner once argued, 'It is the skill and ingenuity of the experimenter which show him phenomena which depend on a relatively narrow set of relatively easily realizable and reproducible conditions' [16]), only a part of the complete CN is available most of the time. The influence of so-called latent nodes on the observed nodes can be captured by noise, but this may not be a good approach, since the activities of these latent nodes have specific patterns. For example, a sensor capturing the activity of a brain region (known to exhibit fractional dynamics) may be relaxed in the future, thereby making that region latent. In the current work, we are concerned with fractional dynamical models partly because of their celebrated success in modeling several physiological signals, such as the electroencephalogram (EEG), electrocardiogram (ECG), electromyogram (EMG), and blood-oxygen-level-dependent (BOLD) signals [17, 18]. In experimental setups, for example the brain, the recorded data are influenced by subcortical regions which are not probed. Instead of labeling these regions as unknown inputs to the observed system, we can recognize that they also obey fractional dynamical patterns. In other situations, it is often required to relax some sensors due to power limitations or sensor failure. In all such conditions, we have only partial information regarding the complete TVCN, and we wish to better predict the observed data in the presence of latent fractional dynamics as well as unknown drivers.
There is a rich literature on the study of networks with latent nodes. To list a few, the importance of including latent nodes has been explored in the context of linear time-invariant (LTI) systems in [19, 20, 21], for Bayesian networks in [22], for graphical models with Gaussian-distributed nodes in [23], and for complex systems in [24]. The Markovian assumption helps in the case of LTI systems and, in some sense, helps to uncover the latent nodes. However, LTI systems are not sufficient to accurately model physiological signals, such as EEG, ECG, and BOLD (to mention just a few), due to their inability to capture the long-range memory property of biological signals. To the best of the authors' knowledge, the latent-node framework is still nonexistent in the context of discrete-time fractional dynamical systems. The closest works concerning fractional-order systems are [1] and [2], where complete knowledge of the CN is assumed. The work in [2] generalized the work in [1] and included the contribution of unknown drivers in the system, but again under the assumption of complete knowledge of the nodes in the TVCN. The unknown unknowns were designated as external activities which do not obey the fractional dynamics of the CN. Our contributions in this work are as follows: (i) we present a TVCN having fractional dynamics with latent nodes; (ii) we jointly estimate the latent fractional node activities and the unknown drivers, which do not obey the fractional dynamics, from the observed data; (iii) we iteratively estimate the complete model (latent and observed) parameters in the maximum-likelihood sense.

The rest of the paper is organized as follows. Section II introduces the system model with latent nodes considered in this paper and then formally presents the main problem statement. In Section III, we provide solutions to the considered problems as the main results, and in Section IV, we evaluate the proposed methods on simulated and real-world datasets. Finally, we conclude the work and present future directions in Section V. The proofs are presented in the Appendix.
II Problem Formulation
We begin by introducing in Section II-A the TVCN system model obeying fractional-order dynamical evolution with latent nodes and unknown excitations. Next, in Section II-B, we describe the main problem addressed in this paper.
II-A System Model
We consider a time-varying complex network (TVCN) described by a linear discrete-time fractional-order model with latent nodes, which can be mathematically written as

(1)
$\Delta^{\alpha} x[k+1] = A_{11}\,x[k] + A_{12}\,z[k] + B_{1}\,u[k] + w[k],$
$\Delta^{\beta} z[k+1] = A_{21}\,x[k] + A_{22}\,z[k] + B_{2}\,u[k] + v[k],$

where $x[k] \in \mathbb{R}^{n}$ are the observed state variables, $z[k] \in \mathbb{R}^{m}$ are the latent states, and $u[k] \in \mathbb{R}^{p}$ are the unknown excitations. The system matrices $A_{11}, A_{12}, A_{21}, A_{22}, B_{1}, B_{2}$ are of appropriate dimensions. The noise variables $w[k]$ and $v[k]$ are assumed to be uncorrelated across observed and latent nodes, with $w[k] \sim \mathcal{N}(0, \Sigma_{w})$ and $v[k] \sim \mathcal{N}(0, \Sigma_{v})$. The fractional-order derivative in equation (1) obeys the discrete form for every node, either observed or latent, as follows [25]:
(2)
$\Delta^{\alpha_i} x_i[k] = \sum_{j=0}^{k} \psi(\alpha_i, j)\, x_i[k-j],$

where $\psi(\alpha_i, j) = \frac{\Gamma(j - \alpha_i)}{\Gamma(-\alpha_i)\,\Gamma(j+1)}$ and $\Gamma(\cdot)$ denotes the gamma function. The fractional-order coefficients corresponding to the $i$th node of the observed and latent variables are denoted by $\alpha_i$ and $\beta_i$, respectively.
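As an implementation aside, the Grünwald-Letnikov coefficients $\psi(\alpha, j)$ of the discrete fractional difference in [25] satisfy the stable recursion $\psi(\alpha, 0) = 1$, $\psi(\alpha, j) = \psi(\alpha, j-1)\,(j - 1 - \alpha)/j$, which follows from the gamma-function identity $\Gamma(z+1) = z\,\Gamma(z)$. The following is a minimal sketch of computing them and evaluating the fractional difference (the function names are illustrative, not part of the model):

```python
def gl_coeffs(alpha, k):
    """Grunwald-Letnikov coefficients psi(alpha, j) = Gamma(j - alpha) /
    (Gamma(-alpha) * Gamma(j + 1)) for j = 0..k, computed via the stable
    recursion psi(0) = 1, psi(j) = psi(j-1) * (j - 1 - alpha) / j."""
    psi = [1.0]
    for j in range(1, k + 1):
        psi.append(psi[-1] * (j - 1 - alpha) / j)
    return psi

def frac_diff(alpha, x):
    """Discrete fractional difference Delta^alpha x[k] = sum_j psi(alpha, j) x[k-j],
    evaluated for every k over a finite sequence x."""
    psi = gl_coeffs(alpha, len(x) - 1)
    return [sum(psi[j] * x[k - j] for j in range(k + 1)) for k in range(len(x))]
```

Note that for $\alpha = 1$ the coefficients reduce to $(1, -1, 0, \ldots)$, so the operator collapses to the ordinary first difference; for non-integer $\alpha$ every past sample contributes, which is exactly the long-range memory property exploited in this paper.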
II-B System Identification with Latent Nodes
Physiological signals, e.g., EEG and ECG, display spatiotemporal behavior in which the temporal component shows long-range memory dependence, as realized in [1, 2]. As a consequence, they are properly modeled by the systems described in Section II-A. When trying to obtain the system representation, i.e., to identify the system's parameters, the model estimation is contingent on complete knowledge of the assumed complex-network dynamics; specifically, the nodes' activities in terms of time-series. Such an assumption may not hold in the many cases where we are provided only with partial data. In such cases, to better predict the complex system's dynamics, it is beneficial to incorporate latent nodes. In this regard, the problem considered in this work can be stated as follows.
Problem Statement: Given the partial (observed) data in terms of time-series across a time horizon, and knowledge of the fractional orders of the latent nodes, estimate the model parameters, the latent states, and the unknown inputs.
This problem builds upon the models in [1, 2], but with the striking difference of the availability of only partial data and the presence of latent nodes in the model. In contrast to [2], this work also relaxes the assumption of knowledge of the input matrices; they will be computed as part of the system's parameters. Notice that we are implicitly assuming that the fractional-order coefficients are constant over time, since they have been shown empirically to be slowly time-varying. We will see in Section IV that by considering latent nodes we can improve the prediction accuracy of the observed data. In the next section, we detail the required assumptions and the solution to this estimation problem.
III Latent Model Estimation
For notational convenience, we denote the observed data up to time $k$ as $x[1{:}k]$. The model estimation procedure begins by first estimating the latent node activities under the assumption that some approximation of the system's parameters is known. A fractional Kalman filtering approach similar to [26], under a Bayesian approximation, is used for this purpose in Section III-A. Subsequently, we use the new data (the estimated latent states together with the available data) to perform the system identification. In Section III-B, we present an iterative algorithm to jointly estimate the system's parameters and the unknown unknowns.

III-A Fractional Kalman Filtering
The fractional-order Kalman filtering aims at estimating the latent states at each time step $k$ from the available data. Using standard Kalman filtering notation (for linear systems), we define the following estimates:
(3)
$\hat{z}[k|k] \triangleq \mathbb{E}\big[z[k] \mid x[1], \ldots, x[k+1]\big],$

(4)
$\hat{z}[k|k-1] \triangleq \mathbb{E}\big[z[k] \mid x[1], \ldots, x[k]\big],$

(5)
$P[k|k] \triangleq \mathbb{E}\big[(z[k] - \hat{z}[k|k])(z[k] - \hat{z}[k|k])^{\top} \mid x[1], \ldots, x[k+1]\big].$
In contrast to Kalman filtering for a classical linear system, the estimates of $z[k]$ are conditioned on the observed data from $x[1]$ up to $x[k+1]$. The reasoning behind this can be quickly seen in the definition of the system model in equation (1). In a classical linear system, the observations and latent activities are indexed at the same time, while in the considered system model, equation (1) relates the latent node activity $z[k]$ to the observations up to $x[k+1]$. The Kalman filtering solutions can be mathematically intractable due to the complexities introduced by the long-range dependence of the fractional operator. We resort to the Bayesian network assumption (as depicted in Figure 1) for the latent state estimates, which is described in the following lemma.
Lemma 1.
where the conditional covariances are defined as $P[k|k]$ and $P[k|k-1]$.
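To see how the long-range memory enters the prediction step, consider a deliberately simplified scalar sketch: a single latent node with self-coupling $a$ and fractional order $\beta$, no observed coupling, inputs, or noise. This is only an illustration of the fractional update implied by (1) and (2), not the paper's multivariate filter; all names are ours.

```python
def predict_latent(z_hist, a, beta):
    """One-step prediction for a scalar latent state obeying
    Delta^beta z[k+1] = a * z[k] (noise-free sketch).
    Expanding the fractional difference gives
    z[k+1] = a * z[k] - sum_{j=1}^{k+1} psi(beta, j) * z[k+1-j],
    so the entire history z_hist = [z[0], ..., z[k]] is needed."""
    k = len(z_hist) - 1
    # Grunwald-Letnikov coefficients psi(beta, j) for j = 0..k+1, via recursion.
    psi = [1.0]
    for j in range(1, k + 2):
        psi.append(psi[-1] * (j - 1 - beta) / j)
    memory = sum(psi[j] * z_hist[k + 1 - j] for j in range(1, k + 2))
    return a * z_hist[-1] - memory
```

For $\beta = 1$ the memory term collapses to $-z[k]$ and the update reduces to the familiar integer-order recursion $z[k+1] = (a+1)\,z[k]$; for non-integer $\beta$ the prediction depends on the full past, which is why the Bayesian network approximation of Figure 1 is needed for tractability.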
Next, we present an algorithm to determine the system's parameters by attaining the maximum likelihood estimate.
III-B Maximum Likelihood Estimation
The maximum likelihood estimation (MLE) of the system's parameters has to be performed in the presence of latent variables. We propose to use an Expectation-Maximization (EM)-like algorithm. The conditional distributions of the latent fractional nodes are as described in Section III-A. Moreover, we have unknown unknowns in our system, and this work, in contrast to [2], estimates the unknown inputs as well as the input matrices. We define the EM algorithm to jointly estimate the latent fractional node activities and the unknown unknowns. The EM update of the system's parameters at each iteration is performed via the following result.
Theorem 1.
An update of the system parameters used in (1), given the observed data and the initial conditions, at each iteration index is given by (4), (5), and
(6)  
(7) 
where and .
Using these results, we can now state the iterative algorithm for estimating the system's parameters. Intuitively, the algorithm starts with an initial guess of the system parameters and then, at each step, obtains the latent node activities using the approach described in Section III-A. Further, upon estimating the error incurred through the unknown unknowns, an update of the system's parameters is obtained, and the process is repeated until convergence. These steps are formally detailed in Algorithm 1.
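The alternation just described can be summarized by the following generic skeleton. This is only a sketch: `latent_em`, `e_step`, and `m_step` are illustrative names, the parameters are collapsed into a single scalar `theta` for brevity, and the actual Algorithm 1 additionally estimates the unknown unknowns inside each iteration.

```python
def latent_em(x, theta0, e_step, m_step, max_iter=50, tol=1e-6):
    """Generic EM-like loop mirroring the structure of Algorithm 1:
    alternate latent-state estimation (E-step, here played by the
    fractional Kalman filter of Section III-A) with a parameter
    update (M-step) until the parameters stop changing."""
    theta = theta0
    for _ in range(max_iter):
        z_hat = e_step(x, theta)       # estimate latent activities given theta
        theta_new = m_step(x, z_hat)   # re-estimate parameters given (x, z_hat)
        if abs(theta_new - theta) < tol:
            theta = theta_new
            break
        theta = theta_new
    return theta
```

In the full algorithm, `theta` stands for the collection of coupling matrices, input matrices, and noise covariances, and the convergence check compares successive parameter iterates (or the likelihood) rather than a single scalar.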
IV Experiments
We evaluate the performance of the proposed approach on a variety of datasets. First, we demonstrate it on an artificially generated fractional-order time-series (see Section IV-A). Next, we use real-world physiological signals available in the form of multi-channel EEG time-series (see Section IV-B). The channel electrode distribution is shown in Figure 2. The subjects were asked to perform various motor (actual and imagery) tasks, for example, left/right hand or feet, and both hands or feet. The data was collected by a BCI system at a fixed sampling rate [27, 28].
IV-A Simulated Data
We consider a pedagogical fractional-order system with three nodes and without unknown inputs, with the following parameters:
To study the latent node behavior, we remove one node and observe the remaining two, from which we recover the time-series generated by the accessible nodes (i.e., the nodes that are not latent) using our proposed method (i.e., with latent variables) versus the method previously used in the literature [2] (i.e., without latent variables). The five-step prediction error results are summarized in Table I, where the relative error values are computed as
(8)
$\text{error}_i = \frac{\sum_{k} \left|\hat{x}_i[k] - x_i[k]\right|}{\sum_{k} \left|x_i[k]\right|} \times 100\%,$
Observed | without latent | with latent
   2     |     13.40%     |    7.94%
   1     |     22.12%     |   21.12%
   1     |     11.44%     |    8.37%
where $\hat{x}_i[k]$ is the predicted value of the $i$th node at time $k$. The error percentage is consistently high for one of the nodes, and the reason for this lies in the actual behavior of that node's activity, as seen in Figure 3. Specifically, its activity, unlike that of the other two nodes, stays very close to zero and varies frequently, which makes accurate predictions using the proposed model difficult. Next, we turn to the application on a real-world EEG dataset.
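A per-node relative prediction error of this form can be computed as follows (a sketch only; the function name and the exact normalization are our assumptions):

```python
def relative_error(x_pred, x_true):
    """Relative prediction error for one node over a horizon, in percent:
    sum_k |x_pred[k] - x_true[k]| / sum_k |x_true[k]| * 100."""
    num = sum(abs(p - t) for p, t in zip(x_pred, x_true))
    den = sum(abs(t) for t in x_true)
    return 100.0 * num / den
```

Note that a trajectory hovering near zero shrinks the denominator and inflates this metric, which is consistent with the consistently higher error reported for the near-zero node in Figure 3.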


IV-B Real-world Data
The estimation of the model's parameters, such as the coupling matrices, the input matrices, and the latent states, is performed for the EEG data using Algorithm 1. We notice that the choice of initial conditions can play a great role in the speed of convergence as well as in the accuracy of the results. If some prior knowledge about the coupling matrices is available (for example, through experiments), it can be used to achieve better results. In this work, the matrices are initialized with entries selected uniformly at random. In the current experiment, we have used the data recorded while the subject executed 'both feet movement'. When performing model estimation, the use of relevant EEG sensors is required to obtain accurate predictions. Therefore, we used neurophysiological-evidence-based sensors that capture the behavior associated with movement execution (i.e., the motor cortex), labeled as shown in Figure 2. Specifically, different subregions of the motor region are activated when the feet move, so we have used this information to carefully reduce the number of sensors/nodes in our study.
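The uniform random initialization of the coupling and input matrices mentioned above can be sketched as follows (the interval endpoints, the seeding, and the function name are illustrative assumptions, since the exact range is not restated here):

```python
import random

def init_matrix(rows, cols, lo=0.0, hi=1.0, seed=0):
    """Initialize a coupling/input matrix with i.i.d. uniform entries,
    as used to seed Algorithm 1 before the first EM iteration.
    A fixed seed makes runs reproducible for comparison."""
    rng = random.Random(seed)
    return [[rng.uniform(lo, hi) for _ in range(cols)] for _ in range(rows)]
```

In practice one would draw several random restarts and keep the run with the highest likelihood, since EM-like procedures converge only to a local optimum.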
The proposed latent model is tested in a comprehensive manner by performing the following steps: (i) first, we fix the set of twelve nodes for which predictions are made; (ii) second, we consecutively reveal new nodes, increasing the observed-node dimension by one and decreasing the latent-node dimension by one at each step. The reported error values are computed from equation (8) and averaged across the twelve fixed observed nodes. In Table I(a), we provide evidence that the latent model, with minimal information concerning the fractional orders of the latent nodes, performs better than using no model for the unobserved nodes. We also observed the necessity of relevant latent nodes by considering a set of sensors placed on the region of the brain least related to the undertaken task of 'both feet movement'. The same experiment is repeated to predict the activities of the fixed nodes while varying the total observed/latent nodes, but this time revealing nodes from this less-related set of sensors. The prediction error values are reported in Table I(b). We notice that the error values are higher upon revealing nodes from this set. This raises an important and intuitive point: revealing/hiding time-series that have little relation to the data under consideration is very likely to increase inaccuracies in the model.
The experiments, on both simulated and real EEG data, provide evidence that the inclusion of the latent model improves the accuracy of the retrieved model and of its predictions.
V Conclusion
We have introduced and studied the framework of latent nodes in TVCNs with a fractional dynamical model and additional unknown drivers. An iterative solution is proposed which jointly estimates the system's parameters, the latent node states as they evolve over time, and the unknown unknowns (i.e., the unknown input matrices and inputs). We have shown that the minimal assumption of knowledge of the fractional coefficients of the latent nodes allows us to better explain the observed data. We have applied the proposed concepts to both simulated and experimental data to demonstrate the gains in prediction accuracy of the observed data.
Future work will focus on the intriguing role of the fractional-order coefficients of the latent nodes. It is observed that, instead of the entire latent node data, the associated fractional-order coefficients are enough for removing the effects of hidden nodes. Moreover, the fractional coefficients can be used to decide the required dimension, or degrees of freedom, of the hidden part so that the observed and hidden parts together form a complete system.
Appendix A Proof of Lemma 1
Proof.
With the assumption that the initial states are normally distributed and the system parameters are known, the Kalman filtering is an application of Bayes' formula:

(9)
Upon comparing the terms within equation (9) and using (1), we can write
(10)  
where the last step uses the matrix inversion lemma [29]. Next, we can write
(11) 
where the first step follows from equation (1), and the second is obtained using the Bayesian network assumption of Figure 1. To obtain the recursion for the covariance matrix, we begin by expressing it using (11) and (1) as
(12)  
Appendix B Proof of Theorem 1
Proof.
Let the system parameters and the initial conditions be known, and collect the latent variables into a single set. The Q-function for the EM-like algorithm (described in Algorithm 1), with these latent variables, can be written as
(16) 
where the first step is written using the Bayesian assumption of Figure 1, and the second follows from the uncorrelated noise assumption on the observed and latent noise terms in (1). For notational convenience, we have dropped the constants at each step. The terms inside the summation can be written as
(17)  
(18) 
where we have dropped the constants for notational convenience, and the equalities follow from equation (1). We now proceed to obtain the Q-function, which yields the parameter updates stated in Theorem 1.