We have recently witnessed a renewed interest in invasive and non-invasive brain interfaces. Elon Musk has launched the Neuralink initiative, which aims to develop devices and mechanisms to interact with the brain in a symbiotic fashion, thus merging artificial intelligence with the human brain. The potential is enormous, since it would ideally represent a leap in our understanding of the brain and an unprecedented enhancement of its functionality. Meanwhile, Facebook has announced the 'Typing-by-Brain' project, which gathered a team of researchers whose target is to enable typing at 100 words per minute, far beyond the current state of the art of brain-based typing. Towards this goal, Facebook has invested in developing new non-invasive optical imaging technology that is five times faster and more portable than what is available on the market, and would possess increased spatial and temporal resolution. Nonetheless, these are just some of the initiatives by big Silicon Valley players that want to commercialize brain technologies.
Part of the motivation for the 'hype' around neurotechnologies – both invasive and non-invasive brain interfaces – is due to their promising application domains: (i) replace, i.e., the interaction of the brain with a wheelchair or a prosthetic device to replace a permanent loss of functionality; (ii) restore, i.e., to bring back to normal use some reversible loss of functionality, such as walking after a severe car accident or limb movement after a stroke; (iii) enhance, i.e., to enable one to outperform in a specific function or task, for instance an alert system supporting driving for long periods of time while attention remains high; and (iv) supplement, i.e., equipping one with extra functionality, such as a third arm to be used during surgery. Notwithstanding, these are just some of the (known) potential uses of neurotechnology.
Despite the developments and promise of future applications of brain interfaces (some of which we cannot currently conceive), we believe that current approaches to both invasive and non-invasive brain interfaces can greatly benefit from cyber-physical systems (CPS) oriented approaches and tools to increase their efficacy and resilience. Hereafter, we propose to focus on non-invasive technology relying on the electroencephalogram (EEG) and revisit it through a CPS lens. Moreover, we believe that the proposed methodology is easily applicable to other technologies, e.g., those measuring electromagnetic fields, such as magnetoencephalography (MEG) [5], and the hemodynamic responses associated with neural activity, e.g., functional magnetic resonance imaging (fMRI) [6, 7] and functional near-infrared spectroscopy (fNIRS) [8]. Nonetheless, these technologies present several drawbacks compared to EEG, e.g., cost, scalability, and user comfort, which lead us to focus on EEG-based technologies. A similar argument can be made in the context of non-invasive versus invasive technologies, since the latter require patient surgery.
The traditional approach to EEG neuroadaptive technology proceeds through the following steps [9, 10]: (a) signal acquisition (in our case, measurement of the EEG time-series); (b) signal processing (e.g., filtering with respect to known error sources); (c) feature extraction (i.e., an artificial construction of the signal that aims to capture quantities of interest); (d) feature translation (i.e., classification of the signal according to some plausible neurophysiological hypothesis); and (e) decision making (e.g., providing instructions for a computer to move a cursor, or for a wheelchair to move forward); see Figure 1 for an overview diagram.
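The five-step pipeline above can be sketched as a chain of small functions. This is a minimal illustration under assumed choices (a Butterworth band-pass, per-channel log-variance features, a fixed linear classifier), not the processing used by any particular BCI system:

```python
import numpy as np
from scipy import signal

def acquire(raw):
    # (a) signal acquisition: EEG time-series, shape (channels, samples)
    return np.asarray(raw, dtype=float)

def preprocess(x, fs=100.0, band=(0.5, 30.0)):
    # (b) signal processing: band-pass filter (illustrative cutoff choice)
    b, a = signal.butter(4, band, btype="bandpass", fs=fs)
    return signal.filtfilt(b, a, x, axis=-1)

def extract_features(x):
    # (c) feature extraction: per-channel log-variance (a common toy feature)
    return np.log(np.var(x, axis=-1))

def translate(features, w, b):
    # (d) feature translation: a fixed linear classifier
    return 1 if features @ w + b > 0 else 0

def decide(label):
    # (e) decision making: map the predicted class to a command
    return {0: "cursor-left", 1: "cursor-right"}[label]

rng = np.random.default_rng(0)
x = acquire(rng.standard_normal((4, 500)))     # 4 channels, 5 s at 100 Hz
feats = extract_features(preprocess(x))
cmd = decide(translate(feats, w=np.ones(4), b=0.0))
print(cmd)
```

In a real system, steps (b)-(d) would be trained and tuned per subject; the point here is only the staged structure of the pipeline.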
In this paper, we propose to merge steps (b) and (c), motivated by the fact that these steps often account for spatial or temporal properties separately, and are only artificially combined at a later phase of the pipeline, i.e., at step (d) of feature translation. Specifically, we construct a time-varying complex network where the activity of the nodes (i.e., the EEG signals) has long-range memory and the edges account for inter-node spatial dependence. We argue that the traditional approach discards several spatio-temporal properties that could be weighted during the signal processing and feature extraction phases. In other words, current EEG brain interfaces require an understanding of which regions of the brain are responsible for a given action (for instance, regions for motor or visual actions), as well as of artificial frequency bands believed to be more significant for a specific action (also known as evidence-based bands). Besides, one needs to understand and anticipate the most likely causes of noise/artifacts in the collected EEG data and filter out entire frequency bands, which possibly compromises phenomena of interest by making them unavailable for post-processing. Instead, we propose a modeling capability for long-range memory time-series that, at the same time, accounts for unknown stimuli, e.g., artifacts or inter-region communication.
I-A Related Work and Novel Contributions
We put forward that the ability to properly model the EEG time-series with complex network models that account for realistic setups can enable brain interface methods to be transformed from detectors into decoders. In other words, we do not want to solely look for the existence of a peak of activity in a given band believed to be associated with a specific action; rather, we want to decompose the ensemble of signals into different features, i.e., parameters of our complex network model, that are interpretable. This allows us to understand how different regions communicate during a specific action/task by representing them as nodes of the complex network and estimating their dependence via a coupling matrix. The node activities are assumed to evolve at different time-scales, in a general setup that allows for external stimuli, which could be either unwanted noise or an external process driving the system. In engineering terms, this will enable us to depart from a skill-dependent situation towards general context analysis, which will enhance the resilience of practical nonsurgical brain interfaces. Besides, it will equip bioengineers, neuroscientists, and physicians with an exploratory tool to pursue new technologies for neuro-related diagnostics and treatments, as well as neuro-enhancement.
The proposed approach departs from those available in the literature; see [9, 10, 4] and references therein. In fact, to the best of the authors' knowledge, in the context of noninvasive EEG-based technologies, only one existing work explores fractional-order models, and only for single-channel analysis, which contrasts with the spatiotemporal model leveraged in this paper, which copes with multi-channel analysis. Furthermore, the methodology presented in this paper also accommodates unknown stimuli, for which efficient algorithms are proposed and leveraged hereafter to simultaneously retrieve the model that best conforms with the unknown stimuli and to separate those stimuli from the time-series associated with brain activity. Our methods are as computationally efficient and stable as the least-squares and spectral analysis methods used in spatial and temporal analysis, respectively; thus, they are suitable for online implementation in nonsurgical brain interfaces.
The main contributions of the present paper are to leverage some recently proposed methods to develop new modeling capabilities for EEG-based neuro-wearables that enhance both signal quality and decision-making. Furthermore, the parametric description of these models provides new features that are biologically motivated, easier to interpret in the context of brain function, and free of signal artifacts. This makes the brain-related activity interpretable, which leads to resilient and functional nonsurgical brain interfaces.
I-B Paper Organization
The remainder of the paper is organized as follows. Section II introduces the complex network model considered in this paper and the main problem studied in this manuscript, together with the employed method for feature selection and the classification techniques. In Section III, we present an elaborated study on different datasets taken from the BCI competition [13].
II Re-thinking EEG-based Non-invasive Brain Interfaces
Brain interfaces aim to address the following problem.
Is it possible to classify a specific cognitive state, e.g., a motor task or its imagination, by using measurements collected with a specific sensing technology that harvests information about brain activity?
In the current manuscript, we revisit this problem in the context of brain-computer interfaces (BCI) based on EEG noninvasive technology. Towards this goal, we review the currently adopted procedures for solving this problem (see Figure 1 for an overview) and propose a systems' perspective that enables us to enhance BCI reliability and resilience. In Section II-A we provide a brief overview of EEG-based technology and its connection with brain areas' function as established in past studies. Next, in Section II-B, we introduce the spatiotemporal fractional model under unknown stimuli, which is the core of the approach proposed in this manuscript to retrieve new features for classification and, subsequently, to enhance brain interface capabilities. In Section II-C, we describe how to determine the system model's parameters, and in Section II-D how to interpret them for the classification task.
II-A EEG-based Technology for Brain Interfaces: a brief overview
EEG enables the electrophysiological monitoring of space-averaged synaptic source activity from millions of neurons at the neocortex level. EEG signals have poor spatial resolution but high temporal resolution, since the electrical activity generated at the level of neuronal ensembles arrives at the recording sites within milliseconds. The electrodes (i.e., sensors) are placed over an area of the brain of interest, the most common being the visual, motor, sensory, and pre-frontal cortices. Usually, they follow standard montages; the International 10-20 system is depicted in Figure 2.
Most of the activity captured by the EEG electrodes is due to interactions between inhibitory interneurons and excitatory pyramidal cells, which produce rhythmic fluctuations commonly referred to as oscillations. The mechanisms that generate those oscillations are not yet completely understood, but it has been established that some 'natural oscillations' provide evidence of activity being 'processed' in certain regions of the brain at certain 'frequencies'. Therefore, the oscillatory behavior of the human brain is often partitioned into frequency bands (whose power decays with frequency following a power law): (i) the δ-band (0.5-3Hz); (ii) the θ-band (3.5-7Hz); (iii) the α-band (8-13Hz); (iv) the β-band (14-30Hz); and (v) the γ-band (30-70Hz). Furthermore, there is evidence that activity in certain bands is associated with sensory registration, perception, movement, and cognitive processes related to attention, learning, and memory [14, 15, 16]. Notwithstanding, such associations are often made using correlation and/or coherence techniques that only capture relationships between specific channels; such methods are not able to assess the causality between signals that enables forecasting the signal evolution, which a model-based representation captures, as we propose to do hereafter.
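As a concrete illustration of this band partition, the per-band power of a channel can be estimated from its power spectral density. The band edges below follow the text; the sampling rate and test signal are illustrative:

```python
import numpy as np
from scipy.signal import welch

# Canonical EEG bands (Hz), as partitioned in the text
BANDS = {"delta": (0.5, 3), "theta": (3.5, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (30, 70)}

def band_powers(x, fs):
    """Sum the Welch power spectral density of one channel over each band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
    return {name: pxx[(f >= lo) & (f <= hi)].sum() for name, (lo, hi) in BANDS.items()}

fs = 200.0                            # illustrative sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)        # a pure 10 Hz rhythm falls in the alpha band
p = band_powers(x, fs)
print(max(p, key=p.get))
```

Band-power features of exactly this kind are what the correlation/coherence approaches mentioned above are typically built on.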
Changes in the signals across different bands are also used to interpret event-related potentials (ERPs) in the EEG signals, i.e., variations due to specific events; see the literature for a detailed analysis. In the context of the sensory-motor data used in the current manuscript to validate the proposed methodology, sensorimotor rhythms (SMRs) are often considered. These represent oscillations recorded over the posterior frontal and anterior parietal areas of the brain, i.e., over the sensorimotor cortices (see Figure 2). SMRs occur mainly in the μ-band (for sensors located on top of the motor cortices), and in the beta and lower gamma bands for sensors over the sensorimotor cortices. Consequently, these have been used as a default feature for the classification of motor-related execution, and even of the mere imagination of performing a motor task. In contrast, in our approach the spatiotemporal structure is simultaneously captured through direct state-space modeling, which enables a systems-level understanding of the dynamics of the underlying process. In addition, it provides a new set of attributes that can be used to improve feature translation, i.e., classification.
II-B Spatiotemporal Fractional Model with Unknown Stimuli
A multitude of complex systems exhibit long-range (non-local) properties, interactions, and/or dependencies (e.g., power-law decays in memory). Examples of such systems include Hamiltonian systems, where memory is the result of stickiness of trajectories in time to the islands of regular motion. Alternatively, it has been rigorously confirmed that viscoelastic properties are typical for a wide variety of biological entities such as stem cells, liver, pancreas, heart valves, brain, and muscles [19, 20, 18, 21, 22, 23, 24, 25, 26], suggesting that the memories of these systems obey power-law distributions. Such dynamical systems can be characterized by the well-established mathematical theory of fractional calculus, and the corresponding systems can be described by fractional differential equations [28, 29, 30, 31, 32]. However, it is only recently that fractional-order systems (FOS) have started to find a strong position in a wide spectrum of applications across different domains, due to the availability of computing and data acquisition methods to evaluate their efficacy in capturing the evolution of the underlying system states.
Subsequently, we consider a linear discrete-time fractional-order dynamical model under unknown stimuli (i.e., inputs), described as follows:

Δ^α x[k+1] = A x[k] + B u[k],    y[k] = C x[k],    (1)

where x[k] ∈ ℝ^n is the state, u[k] ∈ ℝ^p is the unknown input, and y[k] ∈ ℝ^n is the output vector. The differencing operator Δ is used as the discrete version of the derivative; for example, the first-order difference is Δ x[k+1] = x[k+1] − x[k]. As is evident, a difference of order 1 has only one-step memory, and hence classic linear time-invariant models are not able to capture the long-range memory property of several physiological signals, as discussed before. On the other hand, the expansion of the fractional-order derivative in the discretized setting for any i-th state can be written as

Δ^{α_i} x_i[k] = Σ_{j=0}^{k} ψ(α_i, j) x_i[k − j],    (2)

where α_i is the fractional order corresponding to the i-th state and ψ(α_i, j) = Γ(j − α_i) / (Γ(−α_i) Γ(j + 1)), with Γ(·) denoting the gamma function. We can observe from (2) that the fractional-order derivative provides long-range memory by including all past terms. A quick comparison between the prediction accuracy of the fractional-order derivative model and a linear time-invariant model is shown in Figure 3: the fractional-order model can cope with sudden changes in the signals, while the linear model cannot.
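The coefficients ψ(α, j) of the discretized fractional derivative can be computed without evaluating large Gamma functions via the recursion ψ(α, 0) = 1, ψ(α, j) = ψ(α, j−1)(j − 1 − α)/j, which follows directly from the Gamma-function ratio. A small sketch (the truncation and test signals are illustrative):

```python
import numpy as np

def gl_weights(alpha, J):
    """psi(alpha, j) = Gamma(j - alpha) / (Gamma(-alpha) Gamma(j + 1)), computed by the
    recursion psi(alpha, j) = psi(alpha, j-1) * (j - 1 - alpha) / j to avoid Gamma overflow."""
    w = np.empty(J + 1)
    w[0] = 1.0
    for j in range(1, J + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def frac_diff(x, alpha):
    """Fractional difference (Delta^alpha x)[k] = sum_{j=0}^{k} psi(alpha, j) x[k - j]."""
    w = gl_weights(alpha, len(x) - 1)
    return np.array([w[: k + 1] @ x[k::-1] for k in range(len(x))])

print(np.round(gl_weights(0.5, 5), 4))   # weights decay slowly -> long-range memory
print(frac_diff(np.ones(4), 1.0))        # alpha = 1 recovers the one-step difference
```

For 0 < α < 1 the weights decay polynomially rather than vanishing after one step, which is precisely the long-range memory discussed above; for α = 1 only the first two weights are nonzero, recovering the classical difference.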
We can also describe the system by its tuple of matrices (A, B, C) of appropriate dimensions. The coupling matrix A represents the spatial coupling between the states across time, while the input coupling matrix B determines how the inputs affect the states. In what follows, we assume that the input dimension is always strictly smaller than that of the state vector, i.e., p < n.
Having defined the system model, the next important step is system identification, i.e., the estimation of the model parameters from the given data. This becomes nontrivial in the presence of unknown inputs, since one has to differentiate which part of the system's evolution is due to its intrinsic dynamics and which is due to the unknown inputs. Subsequently, the analysis part that we need to address is that of system identification from data, as described next.
II-C Data-driven System Identification
The problem consists of estimating the system matrix A, the fractional-order coefficients α_i, and the inputs u[k] from the given limited observations y[k], which, due to the dedicated nature of the sensing mechanism, coincide with the states x[k], under the assumption that the input matrix B is known. The realization of B can be application dependent and is computed separately using experimental data, as we explore later in the case study; see Section III. For simplicity of notation, let us denote z_i[k] = Δ^{α_i} x_i[k+1], with the orders α_i chosen appropriately. The pre-factors ψ(α_i, j) in the summation in (2) decay as j grows and, therefore, for computational ease, we limit the summation in (2) to the J most recent values, where J is sufficiently large. Therefore, z_i[k] can be written as

z_i[k] = Σ_{j=0}^{J} ψ(α_i, j) x_i[k + 1 − j],

with the assumption that x[k] = 0 for k < 0. Using the above introduced notation and the model definition in (1), the given observations can be written as

z[k] = A x[k] + B u[k] + e[k],    (3)

where e[k] is assumed to be Gaussian noise, independent across space and time; for simplicity, we assume e[k] ∼ N(0, σ² I). Also, let us denote by X and U the vertically concatenated states and inputs during an arbitrary window of time; for the sake of brevity, we drop the time-horizon subscript from these matrices when it is clear from the context.
Since the problem of jointly estimating the different parameters is highly nonlinear, we proceed as follows: (i) we estimate the fractional orders α_i using a wavelet-based technique; and (ii) with the α_i known, z[k] in equation (3) is computed from the data, under the additional assumption that the input matrix B is known. Therefore, the problem now reduces to estimating A and the inputs u[k]. Towards this goal, we exploit an algorithm similar to expectation-maximization (EM). Briefly, the EM algorithm is used for maximum likelihood estimation (MLE) of parameters in the presence of hidden variables. Intuitively, in our case, in Algorithm 1, we estimate the model parameters in the presence of the hidden variables, i.e., the unknown unknowns u[k]. The 'E-step' is performed to average out the effects of the unknown unknowns and obtain an estimate of the inputs, where, due to the diversity of possible solutions, we control the sparsity of the inputs using a regularization parameter. Subsequently, the 'M-step' accomplishes the MLE to obtain an estimate of the model parameters.
It has been shown theoretically that the algorithm is convergent in the likelihood sense. It should also be noted that the EM algorithm can converge to saddle points. Algorithm 1, being iterative, depends crucially on the initial condition for its convergence. We will see in Section III that the convergence is very fast, making the algorithm suitable for online estimation of the parameters.
II-D Feature Translation (Classification)
Although the unprocessed EEG signals coming from the sensors carry vital information, they may not be directly useful for making predictions. However, by representing the signals in terms of the parametric model and the unknown stimuli, as we did in the last section, we can gain better insights. The parameters of the model, being representative of the original signal itself, can be used to make a concise differentiation.
The matrix A represents how strong a particular signal is and how much it affects, or is affected by, the other signals considered together. While performing or imagining particular motor tasks, certain regions of the brain get more activated than others; simultaneously, the inter-region activity also changes. Therefore, the columns of A, which collect the coefficients of the strength with which a signal affects the other signals, can be used as features for the classification of motor tasks. In this work, we consider machine learning classification techniques such as logistic regression and support vector machines (SVM). The choice of kernel varies from a simple linear kernel to the radial basis function (RBF). The values of the classifier parameters, and possibly of the kernel parameters, such as the regularization parameter that appears in the optimization cost of the classifiers and the RBF kernel width, are determined using cross-validation, with both swept on a logarithmic scale.
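The cross-validation loop for one of these classifiers can be sketched with a numpy-only L2-regularized logistic regression; the grid, fold count, and optimizer settings below are illustrative choices, not the values used in the paper.

```python
import numpy as np

def fit_logreg(X, y, C, iters=500, lr=0.1):
    """L2-regularized logistic regression via gradient descent (labels y in {0, 1});
    larger C means weaker regularization, as in the usual convention."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y) / len(y) + w / (C * len(y)))
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

def cross_validate(X, y, Cs, k=5):
    """Pick C from a logarithmic grid by k-fold cross-validation."""
    folds = np.array_split(np.arange(len(y)), k)
    scores = []
    for C in Cs:
        accs = []
        for f in folds:
            tr = np.setdiff1d(np.arange(len(y)), f)
            w, b = fit_logreg(X[tr], y[tr], C)
            accs.append(accuracy(w, b, X[f], y[f]))
        scores.append(np.mean(accs))
    return Cs[int(np.argmax(scores))]

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 6))
y = (X[:, 0] - X[:, 1] > 0).astype(float)      # linearly separable toy labels
best_C = cross_validate(X, y, Cs=np.logspace(-2, 2, 5))
w, b = fit_logreg(X, y, best_C)
print(best_C, accuracy(w, b, X, y))
```

For the SVM with RBF kernel, the same loop is run over a two-dimensional grid (regularization and kernel width), which is why both parameters are swept on a logarithmic scale.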
III Case Study
We now illustrate the usefulness of the fractional-order dynamical model with unknown inputs in the context of classification for BCI. We consider two datasets from the BCI competition [13]. The datasets were selected based on having a larger number of EEG channels and a larger number of trials for training. The available data is split into training and testing portions.
We first consider for validation the dataset labeled 'dataset IVa' from BCI Competition III. The recording was made using BrainAmp amplifiers and an electrode cap from which 118 channels were used. The signals were band-pass filtered and digitized; for the purpose of this study, we used a downsampled version of the data. The dataset for subject ID 'al' is considered. The subject was provided a visual cue and immediately afterwards asked to imagine one of two motor tasks: (R) right hand, and (F) right foot.
III-A1 Sensor Selection and Modeling
To avoid the curse of dimensionality, instead of considering all 118 available sensors, which would imply using all the entries of a 118 × 118 dynamics matrix for classification, only a subset of sensors is considered. Specifically, only the sensors indicated in Figure 4 are selected, on the basis that only hand and foot movements need to be predicted, so that only a small dynamics matrix and a few fractional-order coefficients are required to model the fractional-order system. Besides, these sensors are close to the region of the brain known to be associated with motor actions.
III-A2 System Identification and Validation
The model parameters and the unknown inputs are estimated using Algorithm 1. As mentioned before, the performance of the algorithm, being iterative, depends on the choice of the initial conditions. For the current case, we have observed that the algorithm converges very fast, and even a single iteration is enough. The convergence of the mean squared error in the M-step of Algorithm 1 for one sample from dataset-I is shown in Figure 5, which indicates that the choice of initial conditions is fairly good. The one-step and five-step predictions of the estimated model are shown in Figure 6. The predicted values for one step very closely follow the actual values, while there are some differences between the actual and predicted values for the five-step prediction.
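Multi-step prediction with an identified model of the form (1) (here without inputs) amounts to solving (2) for x[k+1] and feeding predictions back in. A sketch, where the weight recursion follows the standard discretization and the toy model below stands in for the identified EEG model:

```python
import numpy as np

def gl_weights(alpha, J):
    """Weights psi(alpha, j) of the discretized fractional derivative, via recursion."""
    w = np.empty(J + 1)
    w[0] = 1.0
    for j in range(1, J + 1):
        w[j] = w[j - 1] * (j - 1 - alpha) / j
    return w

def predict(x_hist, A, alphas, steps):
    """Iterate Delta^alpha x[k+1] = A x[k] forward 'steps' steps, feeding predictions
    back into the history (multi-step prediction). x_hist: list of past state vectors,
    most recent last."""
    X = [np.asarray(v, dtype=float) for v in x_hist]
    n = len(alphas)
    W = [gl_weights(a, len(X) + steps) for a in alphas]
    for _ in range(steps):
        z = A @ X[-1]
        x_next = np.empty(n)
        for i in range(n):
            # x_i[k+1] = z_i - sum_{j>=1} psi(alpha_i, j) x_i[k+1-j]
            hist = np.array([X[-j][i] for j in range(1, len(X) + 1)])
            x_next[i] = z[i] - W[i][1 : len(X) + 1] @ hist
        X.append(x_next)
    return np.array(X[len(x_hist):])

# Sanity check: with A = 0 and alpha = 1, the state is simply held constant
preds = predict([np.array([1.0, 2.0])], np.zeros((2, 2)), alphas=[1.0, 1.0], steps=3)
print(preds)
```

One-step prediction uses only measured history, while five-step prediction compounds model error at each iteration, which is why the five-step curves in Figure 6 deviate more from the data.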
III-A3 Discussion of the results
The most popular features used in motor-imagery-based BCI classification rely on exploiting spectral information. The features used are the band powers, which quantify the energy of the EEG signals in certain spectral bands [38, 39, 40]. The motor cortex of the brain is known to affect the energy in the bands discussed in Section II. However, unwanted signal energy is captured in these bands as well while performing the experiments, due for example to neck movement and other muscle activity. Filtering these so-called 'unwanted' components out of the original signal is a challenging task for spectral techniques, as the components often share the same bands.
We used a different approach to deal with these unknown unknowns in Section II. The magnitude spectrum of the original EEG signal, and of the signal upon removing the estimated unknown inputs, is shown in Figure 7. It should be observed that both signals have significant energy in the same bands. The distribution of the estimated unknown inputs is shown in Figure 8: the inputs are not zero mean, but their PDF is centered around a mean value of approximately 58.
The model parameters are jointly estimated with the unknown inputs using Algorithm 1; therefore, the effect of the inputs is inherently taken care of in the parameters. The structure of the A matrix for the two different labels is shown in Figure 9. It is apparent from Figure 9 that the columns corresponding to two of the selected sensors exhibit different activity across the labels and hence are fair candidates for the features to be used in classification; the columns of these sensors determine the total number of features.
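Feature construction from the estimated coupling matrix is then as simple as stacking the relevant columns; the sensor indices below are illustrative (the paper's specific sensor labels are not reproduced here).

```python
import numpy as np

def a_matrix_features(A, sensor_idx):
    """Stack the columns of the estimated coupling matrix A that correspond to the
    selected, discriminative sensors into a single feature vector."""
    return np.asarray(A)[:, sensor_idx].ravel(order="F")

A = np.arange(16).reshape(4, 4)          # toy 4-sensor coupling matrix
feats = a_matrix_features(A, [1, 3])     # pick the columns of two chosen sensors
print(feats)                             # columns stacked end to end, length 4 * 2
```

With n sensors retained and c discriminative columns, the feature vector has n * c entries, which is what fixes the feature counts reported for both datasets.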
Next, EEG data from BCI Competition III labeled as 'dataset IVb' is considered. The data acquisition technique and sampling frequencies are the same as for the dataset in the previous subsection, and the dataset contains a limited number of labeled trials. The subject, upon being provided visual cues, was asked to imagine two motor tasks, namely (L) left hand and (F) right foot.
III-B1 Sensor Selection and Modeling
Due to the small number of training examples, we again resorted to selecting a subset of sensors for the model estimation, as we did for dataset-I in the previous section. Since the motor tasks were left hand and right foot, we selected the sensors in the right half of the brain, close to the region known to be associated with hand and foot movements, as shown in Figure 11. We will see in the final part of this section that selecting sensors based on such reasoning helps not only in reducing the number of features, but also in obtaining better and more meaningful results.
III-B2 System Identification and Validation
After estimating the model and the unknown inputs using the subset of sensors, we observe a performance on dataset-II similar to that on dataset-I. The convergence of the mean squared error in the M-step of Algorithm 1 for one sample from dataset-II is shown in Figure 10. The one-step and five-step predictions are shown in Figure 12; the model predictions closely follow the original signal.
III-B3 Discussion of the results
The spectrum of the original EEG signal at a selected channel, and of its version with the unknown inputs removed, is shown in Figure 13. The spectrum shows peaks in the bands discussed before, and we make a similar observation as for dataset-I: both signals share the same bands, which makes it difficult to remove the effects of the unwanted inputs by spectral filtering. The unknown inputs resemble white noise, and their PDF is close to a Gaussian distribution with mean centered at around 48.
The A matrix estimated via Algorithm 1 is shown in Figure 15 for the two different labels. Among all the sensors, two exhibit strikingly different activity in the estimated matrix, and the columns corresponding to these two sensors seem a good choice for classification features; these columns determine the total number of features for this dataset. Next, we discuss the classification accuracy for both datasets.
III-C Classification Performance
Finally, the performance of the classifiers using the features explained above for both datasets is shown in Figure 16. The classifiers are arranged in order of complexity from left to right, with logistic regression (LR) and the linear kernel being the simplest and SVM with the RBF kernel being the most complex. The performance plot parallels the classic machine learning overfitting curve for both datasets: the accuracy on the training data increases with the classification model's complexity, while it decreases on the testing data. This is intuitive, because a more complex classification model fits the training data better, but its performance on the test data suffers from overfitting. Given that we have very few training examples to build the classifier, such a trend is expected. The performance of the classifiers on both datasets is fairly high, which reflects the strength of the estimated features, although the accuracies depend considerably on the cross-validation settings and on factors such as the choice of classifier, which could be further tuned to obtain higher numbers.
For both datasets we have seen that the proposed methodology efficiently extracts features that serve as good candidates for differentiating imagined motor movements. By implicitly removing the effects of the unwanted stimuli, the coefficients of the coupling matrix are shown to be sufficient for discriminating between the various EEG signals indicative of the motor movements. The high testing accuracies indicate the good quality of the extracted features.
We have revisited EEG-based noninvasive brain interface feature extraction and translation through a cyber-physical systems' lens. Specifically, we leveraged spatiotemporal fractional-order models that cope with unknown inputs. The fractional-order models capture the dynamic coupling changes that rule the EEG data collected from the different EEG sensors, while the fractional-order exponents capture the long-term memory of the process. Subsequently, the unknown stimuli are determined as the external inputs that least conform with the fractional-order model. By doing so, we have filtered out from the EEG signals the unknown inputs, which might originate in the deeper brain structures. The presence of unknown stimuli is possibly the result of the structural connectivity of the brain that crisscrosses different regions, or of artifacts originating in the muscles (e.g., eye blinking or head movement). As a consequence, the filtered signal does not need to annihilate an entire band in the frequency domain, thus keeping information about frequency regions of the signal that would otherwise be lost.
We have shown how the different features obtained from the proposed model can be used towards rethinking EEG-based noninvasive interfaces. In particular, two datasets used in BCI competitions were used to validate the performance of the methodology introduced in this manuscript, which is compatible with some of the state-of-the-art performances while requiring a relatively small number of training points. We believe that the proposed methodology can be used in the context of different neurophysiological processes and corresponding sensing technologies. Future research will focus on leveraging additional information from the retrieved unknown inputs to anticipate specific artifacts and enable the deployment of neuro-wearables in real-life scenarios. Furthermore, the presented methodology can be used as an exploratory tool by neuroscientists and physicians, by testing input and output responses and tracking their impact on the unknown inputs retrieved by the proposed algorithm; in other words, one will be able to systematically identify the origin and dynamics of stimuli across space and time. Finally, it would be interesting to explore the proposed approach in a closed-loop context, where the present models would benefit from control-like strategies to enhance the brain towards certain tasks or to attenuate side effects of certain neurodegenerative diseases or disorders.
The authors are thankful to the reviewers for their valuable feedback. G.G. and P.B. gratefully acknowledge the support of the U.S. Army Defense Advanced Research Projects Agency (DARPA) under grant no. W911NF-17-1-0076, the DARPA Young Faculty Award under grant no. N66001-17-1-4044, and the National Science Foundation under CAREER Award CPS-1453860.
-  E. Strickland, “5 Neuroscience Experts Weigh in on Elon Musk’s Mysterious ‘Neural Lace’ Company,” IEEE Spectrum, Apr 2017.
-  ——, “Facebook Announces ‘Typing-by-Brain’ Project,” IEEE Spectrum, Apr 2017. [Online]. Available: https://spectrum.ieee.org/the-human-os/biomedical/bionics/facebook-announces-typing-by-brain-project
-  ——, “Silicon Valley’s Latest Craze: Brain Tech,” IEEE Spectrum, Jun 2017. [Online]. Available: https://spectrum.ieee.org/biomedical/devices/silicon-valleys-latest-craze-brain-tech
-  J. Wolpaw and E. W. Wolpaw, Brain-computer interfaces: principles and practice. OUP USA, 2012.
-  J. Mellinger, G. Schalk, C. Braun, H. Preissl, W. Rosenstiel, N. Birbaumer, and A. Kübler, “An MEG-based brain–computer interface (BCI),” Neuroimage, vol. 36, no. 3, pp. 581–593, 2007.
-  R. Sitaram, A. Caria, R. Veit, T. Gaber, G. Rota, A. Kuebler, and N. Birbaumer, “fMRI brain-computer interface: a tool for neuroscientific research and treatment,” Computational Intelligence and Neuroscience, 2007.
-  N. Weiskopf, K. Mathiak, S. W. Bock, F. Scharnowski, R. Veit, W. Grodd, R. Goebel, and N. Birbaumer, “Principles of a brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI),” IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 966–970, 2004.
-  S. M. Coyle, T. E. Ward, and C. M. Markham, “Brain–computer interface using a simplified functional near-infrared spectroscopy system,” Journal of Neural Engineering, vol. 4, no. 3, p. 219, 2007.
-  J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–computer interfaces for communication and control,” Clinical Neurophysiology, vol. 113, no. 6, pp. 767–791, 2002.
-  F. Lotte, “A tutorial on EEG signal-processing techniques for mental-state recognition in brain–computer interfaces,” in Guide to Brain-Computer Music Interfacing. Springer, 2014, pp. 133–161.
-  N. Brodu, F. Lotte, and A. Lécuyer, “Exploring two novel features for EEG-based brain–computer interfaces: Multifractal cumulants and predictive complexity,” Neurocomputing, vol. 79, pp. 87–94, 2012.
-  G. Gupta, S. Pequito, and P. Bogdan, “Dealing with unknown unknowns: Identification and selection of minimal sensing for fractional dynamics with unknown inputs,” to appear in American Control Conference, 2018.
-  B. Blankertz, K. R. Muller, D. J. Krusienski, G. Schalk, J. R. Wolpaw, A. Schlogl, G. Pfurtscheller, J. R. Millan, M. Schroder, and N. Birbaumer, “The BCI Competition III: validating alternative approaches to actual BCI problems,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 2, pp. 153–159, June 2006.
-  E. Başar, C. Başar-Eroglu, S. Karakaş, and M. Schürmann, “Gamma, alpha, delta, and theta oscillations govern cognitive processes,” International Journal of Psychophysiology, vol. 39, no. 2, pp. 241–248, 2001.
-  A. K. Engel, P. Fries, and W. Singer, “Dynamic predictions: oscillations and synchrony in top-down processing,” Nature Reviews Neuroscience, vol. 2, no. 10, p. 704, 2001.
-  G. Buzsaki, Rhythms of the Brain. Oxford University Press, 2006.
-  G. Pfurtscheller and C. Neuper, “Motor imagery and direct brain-computer communication,” Proceedings of the IEEE, vol. 89, no. 7, pp. 1123–1134, 2001.
-  T. McMillen, T. Williams, and P. Holmes, “Nonlinear muscles, passive viscoelasticity and body taper conspire to create neuromechanical phase lags in anguilliform swimmers,” PLoS Computational Biology, vol. 4, no. 8, p. e1000157, 2008.
-  Y. Kobayashi, H. Watanabe, T. Hoshi, K. Kawamura, and M. G. Fujie, “Viscoelastic and nonlinear liver modeling for needle insertion simulation,” in Soft Tissue Biomechanical Modeling for Computer Assisted Surgery. Springer, 2012, pp. 41–67.
-  C. Wex, M. Fröhlich, K. Brandstädter, C. Bruns, and A. Stoll, “Experimental analysis of the mechanical behavior of the viscoelastic porcine pancreas and preliminary case study on the human pancreas,” Journal of the Mechanical Behavior of Biomedical Materials, vol. 41, pp. 199–207, 2015.
-  K. Wang, R. McCarter, J. Wright, J. Beverly, and R. Ramirez-Mitchell, “Viscoelasticity of the sarcomere matrix of skeletal muscles. The titin-myosin composite filament is a dual-stage molecular spring,” Biophysical Journal, vol. 64, no. 4, pp. 1161–1177, 1993.
-  T. M. Best, J. McElhaney, W. E. Garrett, and B. S. Myers, “Characterization of the passive responses of live skeletal muscle using the quasi-linear theory of viscoelasticity,” Journal of Biomechanics, vol. 27, no. 4, pp. 413–419, 1994.
-  T. C. Doehring, A. D. Freed, E. O. Carew, and I. Vesely, “Fractional order viscoelasticity of the aortic valve cusp: an alternative to quasilinear viscoelasticity,” Journal of Biomechanical Engineering, vol. 127, no. 4, pp. 700–708, 2005.
-  E. Macé, I. Cohen, G. Montaldo, R. Miles, M. Fink, and M. Tanter, “In vivo mapping of brain elasticity in small animals using shear wave imaging,” IEEE Transactions on Medical Imaging, vol. 30, no. 3, pp. 550–558, 2011.
-  S. Nicolle, L. Noguer, and J.-F. Palierne, “Shear mechanical properties of the spleen: experiment and analytical modelling,” Journal of the Mechanical Behavior of Biomedical Materials, vol. 9, pp. 130–136, 2012.
-  N. Grahovac and M. Žigić, “Modelling of the hamstring muscle group by use of fractional derivatives,” Computers & Mathematics with Applications, vol. 59, no. 5, pp. 1695–1700, 2010.
-  V. E. Tarasov, Fractional dynamics: applications of fractional calculus to dynamics of particles, fields and media. Springer Science & Business Media, 2011.
-  P. Bogdan, B. M. Deasy, B. Gharaibeh, T. Roehrs, and R. Marculescu, “Heterogeneous structure of stem cells dynamics: statistical models and quantitative predictions,” Scientific Reports, vol. 4, 2014.
-  M. Ghorbani and P. Bogdan, “A cyber-physical system approach to artificial pancreas design,” in Proceedings of the Ninth IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis. IEEE Press, 2013, p. 17.
-  Y. Xue, S. Rodriguez, and P. Bogdan, “A spatio-temporal fractal model for a CPS approach to brain-machine-body interfaces,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2016. IEEE, 2016, pp. 642–647.
-  Y. Xue, S. Pequito, J. R. Coelho, P. Bogdan, and G. J. Pappas, “Minimum number of sensors to ensure observability of physiological systems: A case study,” in Communication, Control, and Computing (Allerton), 2016 54th Annual Allerton Conference on. IEEE, 2016, pp. 1181–1188.
-  Y. Xue and P. Bogdan, “Constructing compact causal mathematical models for complex dynamics,” in Proceedings of the 8th International Conference on Cyber-Physical Systems. ACM, 2017, pp. 97–107.
-  A. Dzielinski and D. Sierociuk, “Adaptive feedback control of fractional order discrete state-space systems,” in CIMCA-IAWTIC, 2005.
-  P. Flandrin, “Wavelet analysis and synthesis of fractional Brownian motion,” IEEE Transactions on Information Theory, vol. 38, no. 2, pp. 910–917, March 1992.
-  G. McLachlan and T. Krishnan, The EM Algorithm and Extensions. John Wiley & Sons, New York, 1996.
-  K. P. Murphy, Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
-  G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, “Boosting bit rates in non-invasive EEG single-trial classifications by feature combination and multi-class paradigms,” IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 993–1002, Jun 2004.
-  A. Bashashati, M. Fatourechi, R. K. Ward, and G. E. Birch, “A survey of signal processing algorithms in brain–computer interfaces based on electrical brain signals,” Journal of Neural Engineering, vol. 4, no. 2, p. R32, 2007.
-  N. Brodu, F. Lotte, and A. Lécuyer, “Comparative study of band-power extraction techniques for motor imagery classification,” in 2011 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB), April 2011, pp. 1–6.
-  P. Herman, G. Prasad, T. M. McGinnity, and D. Coyle, “Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 16, no. 4, pp. 317–326, Aug 2008.